Archive for June, 2008


June 29, 2008

These are a few of my most important concerns regarding Linux in general; they are not distribution-specific.


Because of the rapid development of many programs, it seems to me that documentation is falling far behind the curve. All too often the information is so out of date that it’s almost useless, and conflicts abound between versions. Most documentation still assumes the time-honored command-line editing and information programs, while high-level GUI programs are ignored as if these newer programs don’t exist. That’s fine for experienced users, but confusing for newbies. If we are to capture the market and lead the Linux revolution, documentation needs to be up to date and easily accessible from the GUI. More interactive tutorials that pop up after an installation are needed.


The current Windows-based model for hardware drivers is a mess. There are too many incompatibilities, especially when manufacturers change chipsets within the same model. Why should there be separate drivers at all? Some years ago there was talk of a universal driver layer in the operating system that a new peripheral or card could communicate with to configure the system automatically. No kernel driver would be required. The driver would live in firmware on the card, not accessed by the OS; the chipset would identify itself to the kernel, tell it what function it provides, and that would be all that is required. After all, the firmware in most cards can already identify itself to the OS. It would take only a bit more hardware programming to make this possible, and companies wouldn’t have to release drivers or hardware information to third parties.


Hardware detection in Linux has gotten a lot better in the last few years, but there are still problems with chipsets that aren’t directly on the PCI bus or ISA plug-and-play, especially with newer motherboards that have softmodem chipsets. As an example, on my new Averatec 7155 laptop, a state-of-the-art computer when I bought it, I had to jump through many hoops online to find information about the High Definition Audio (HDA) chipset so I could find the right modem module. I finally got it working by compiling slmodem with ALSA support – yes, the driver and chipset are part of the sound system! Hardware detection simply can’t see it, although on installation (PCLinuxOS) the installer found and set up the sound chipset just fine.

However, this problem goes back a few years. The classic IBM ThinkPad 600 had a DSP modem and a Crystal 4236 sound chipset that wasn’t ISA PnP. Earlier hardware detection, for example in Red Hat 6, couldn’t see the modem but could detect the sound chip and set it up. Later versions couldn’t see the sound chipset either; this must have been when kernel 2.4 was released. I don’t know why, but workarounds were found to make it function. IBM released the Mwave modem driver and daemon source code, and after a successful compile and installation of the module and daemon, the modem would work – but hardware detection still couldn’t see it. I used this old laptop as a test machine until I recently decided to give it to a friend. I installed Puppy Linux on it, and it recognized the sound chip! SUSE 10.2 also found the sound chip, though there are still problems getting it to work, and it still doesn’t see the modem chipset. Why should this condition still exist?

Why are some distributions excellent at hardware detection and others not? Because Linux is open source and all code is available, it would seem that the distribution builders could find and use the best code from other distributions and include it. When better code is written, that should be incorporated. It’s Open Source – steal it!


Whether from a standard CD or DVD install set or a live CD, most Linux distros must have all possible hardware drivers available, from the Xorg video drivers to kernel modules. Most distros install all of these, taking up unnecessary drive space – usually not a big problem given the size of current hard drives. The biggest part of a kernel download is modules, most of which aren’t needed on any specific machine; there are hundreds of modules on any Linux box that will never be used! They must be included because of the diversity of available hardware; the kernel itself is perhaps 1.4MB, depending on what support is built in. Some extra modules are needed to support PnP devices or software services that might be added at any time, but module support for built-in devices is usually fixed for a given machine. The hard drive controllers, video, sound, USB, FireWire, pointing device, etc., are usually built into the motherboard and rarely change. A more intelligent installation procedure is needed that installs only the required drivers. In the case of kernel modules, instead of installing all of them, why can’t the installer install just the ones detected at install time and let the user add others from the install media as requirements change?
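As a sketch of that install-time idea: an installer could parse something like `lspci -k` output (which lists the kernel driver and modules for each PCI device) and install only the modules the detected hardware actually uses. The sample output below is hypothetical, and a real installer would also need to probe USB, ACPI, and other buses.

```python
# A minimal sketch of install-time kernel module selection.
# SAMPLE_LSPCI_K is a made-up example of `lspci -k` style output.

SAMPLE_LSPCI_K = """\
00:02.0 VGA compatible controller: Intel Corporation 82945G
	Kernel driver in use: i915
	Kernel modules: i915
00:1b.0 Audio device: Intel Corporation High Definition Audio
	Kernel driver in use: snd_hda_intel
	Kernel modules: snd_hda_intel
00:1f.2 IDE interface: Intel Corporation 82801G
	Kernel modules: ata_piix
"""

def needed_modules(lspci_output: str) -> set[str]:
    """Collect the kernel modules that the detected hardware actually uses."""
    modules = set()
    for line in lspci_output.splitlines():
        line = line.strip()
        for prefix in ("Kernel driver in use:", "Kernel modules:"):
            if line.startswith(prefix):
                modules.update(line[len(prefix):].split(","))
    return {m.strip() for m in modules}

# Only these three modules would be copied from the install media,
# instead of the hundreds shipped in a full /lib/modules tree.
print(sorted(needed_modules(SAMPLE_LSPCI_K)))
```

The rest of the module tree could stay on the install media, to be pulled in later when the user plugs in new hardware.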


There is still a need for a Linux distribution for dummies that won’t let them change or bork anything at the system level. Freespire and Linspire haven’t lived up to their hype. Too many computer users don’t even know what an operating system is or how computers work; they just want to get work done. Newbies and the computer-disabled are confused by options they don’t need to know about. Even installing Windows is too much of a hassle for most people – after all, most people get Windows preinstalled. Because Linux is so scalable and configurable, let’s limit their choices and give them something they can’t break. Such a distribution might be offered preinstalled on a new computer, or would install from a live CD or DVD and automatically set up separate swap, / and /home partitions; recognize Windows, if installed, and ask whether the user wishes to delete it or resize the Windows partition; and after a successful install, pop up a tutorial telling the user what application to use to get and install more packages. Root wouldn’t be accessible, except in limited cases using su or sudo. The package repositories would be fixed and wouldn’t allow installation from any other distribution’s repositories. No compiler, development files, or alternative package manager would be included, although they might be available in extra repositories that wouldn’t normally show up in the installer. No Internet server software would be available – it should be exclusively a workstation distribution. A firewall should be set up automatically during install with no user option to turn it off. This distribution might be sold with phone or web tech support to help with installation problems, with the option to buy extended tech support.

The development team should choose the best packages for user-friendliness. Alternative packages that duplicate functions, as in most major Linux distributions, simply wouldn’t be available, limiting unnecessary confusion – how many movie and MP3 players do you really need? There should be an option for automatic updates if the user chooses. The system should include OpenOffice, for example, and all the other basic applications for home or SOHO use. However, the menu selections would not reference applications but rather the function a user wants to perform: write a letter, get/send email, browse the web, etc. Whatever desktop is included would be fully configurable, as in other Linux distributions, but don’t give the user another desktop choice. Do give them tutorials on how to configure the desktop and all applications. These restrictions create fewer problems for the development and packaging teams and make the distribution a more viable option.

I propose we call it KISS (Keep it simple, stupid) Linux.


I Hate Mice

June 21, 2008

I hate mice. A mouse is a bad excuse for a pointing device and an even worse drawing and manipulation device. Mice take up valuable desk space – they must be freely moved around on a surface to function, and that surface must be just the right texture, hence the advent of mouse pads. The ball collects dirt and dust, which gums up the works. Optical mice solve the latter problem but are otherwise no improvement. Because you must use your whole arm to move it, rather than your hand or fingers, a mouse is more energy-intensive and a cause of user fatigue. Face it, mice are difficult to control. Why they remain the most popular GUI control device is beyond me. I surmise it must be a conspiracy. There can’t be that much stupidity.

The trackball was a great improvement. The device was stationary and so didn’t need much deskspace. If the ball was large and the buttons positioned in ergonomic relationship, it gave the user much finer control. Then some manufacturers made the ball much smaller and therefore harder to control. However, many newer trackballs are designed for thumb control. The thumb doesn’t have much dexterity – bad idea. Ergonomic? Not. You see, the best control of a trackball is by the middle finger, and if the ball is of sufficient size, say the size of a ping-pong ball, the index and fourth finger can offer even more control. The best trackball I ever used was built into a Chicony keyboard – a large ball on the right side and the three buttons on the left below the keys so that the hands never had to leave the keyboard and weren’t far from the typing position. The user didn’t have to attempt to manipulate the ball and push buttons with one hand – what a brilliant idea! Unfortunately, most trackball keyboards used very small balls with the buttons surrounding the ball, another very bad design, especially for drag-and-drop, because the user must keep a button pushed in while moving the ball with another finger. User to Earth: “Is there any intelligent life here?”

The touchpad was another sort-of-good idea and has been popular on laptop computers, but it also suffered from inadequate implementation. The idea of double-tapping the surface to execute commands wasn’t well thought out. The surface also suffers from dirt and chemical contamination, and moist hands can cause strange behavior. There are still a couple of standard pushbuttons for other functions. The pads aren’t sufficiently large to allow fine finger movement for drawing.

The IBM Trackpoint device for laptops wasn’t a bad idea because the user’s hands didn’t need to leave the keyboard, but fine control isn’t possible, so it’s a lousy device for drawing and manipulating objects.

The drawing tablet is a fine instrument for drawing and manipulating objects, and it can even serve as an alternate pointing device, but it’s another sizable device alongside the keyboard. Using a pen, as if on paper, is a natural motion. Some of the better tablets also have a puck with additional controls to replace the pen, but that’s just a supermouse. The drawback is that good drawing tablets are expensive.

Still, there’s no ideal pointing/drawing/manipulating device for GUIs. I’ve often thought that a device similar to a game joystick might be a workable replacement for a mouse; there is already software that lets the user drive a GUI with a joystick. Joysticks have three-dimensional movement and a thumb button on top, and could be designed with additional buttons on the stick under the fingertips for more functions. The problem is that one hand must be away from the keyboard.

The ideal computer control would be voice command, which already exists, however imperfectly, but is improving slowly. Then, we could even eliminate the keyboard. In the meantime, a well designed trackball gets my vote as the best available device. It’s a shame that they are disappearing and there’s no better device in the offing. Optical control anyone?

Remapping Hard Drives

June 19, 2008

The problem with current hard drives is the legacy device mapping created for DOS. The Master Boot Record (MBR) on the first sector of the drive, which holds the operating system boot information, usually in the form of a boot loader, is only 512 bytes – an incredibly small space. Because there isn’t enough room to do everything required to boot a modern OS, modern boot loaders must store only a stub in the MBR and chain to the remaining parts of the loader, usually located within the partition of the operating system (OS) being booted. Both Windows and Linux suffer this problem, but Windows is locked into this legacy; Linux isn’t.
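For concreteness, the squeeze is even tighter than 512 bytes suggests: the classic MBR layout reserves 446 bytes for boot code, 64 bytes for the four-entry partition table, and 2 bytes for the boot signature. A minimal sketch of that layout:

```python
# The legacy MBR layout, expressed as offsets within the single
# 512-byte sector that a first-stage boot loader must fit into.

SECTOR_SIZE     = 512
BOOT_CODE_SIZE  = 446          # all the room a first-stage loader gets
PART_ENTRY_SIZE = 16           # one primary-partition entry
PART_ENTRIES    = 4            # hence the old four-primary-partition limit
SIGNATURE       = b"\x55\xaa"  # boot signature at offset 510

def build_empty_mbr() -> bytes:
    """Assemble a blank but validly signed MBR sector."""
    boot_code = bytes(BOOT_CODE_SIZE)                     # zeroed stub area
    partition_table = bytes(PART_ENTRY_SIZE * PART_ENTRIES)
    sector = boot_code + partition_table + SIGNATURE
    assert len(sector) == SECTOR_SIZE
    return sector

mbr = build_empty_mbr()
print(len(mbr), mbr[510:512].hex())   # 512 55aa
```

Those 446 bytes are why GRUB and friends must chain-load their later stages from elsewhere on the disk.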

Why shouldn’t an alternative drive mapping be created with an MBR of at least 1MB that could hold a modern boot loader? The firmware of the hard drive would have to be updated – not a small concern. For the computer to recognize this new drive configuration, a new BIOS is necessary. A Linux BIOS (LinuxBIOS) that can replace the manufacturer’s BIOS, eliminating the restrictions of legacy DOS support, has been in development for some years. I don’t think it will work with Windows, so its use is limited. The upshot of using this BIOS is that boot speed improves by orders of magnitude – seconds rather than minutes. What would be required to remap drives? I suspect a BIOS update across all platforms.

Partition Locking:

There is a need to password-lock hard drive partitions against being changed, overwritten, or deleted from outside the OS, while still allowing an installed OS to access the file system in that partition without problems. If this were implemented, another OS could not overwrite the MBR or an installed OS without the password. The lock should be installed on the drive in a protected, invisible partition, so that even if the drive were removed and installed in another computer, the protection would remain. Perhaps it could live in a rewritable ROM chip built into the drive electronics; then it could be accessed from the BIOS or an OS.
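A rough sketch of how such a lock might check a passphrase before permitting destructive changes. Everything here is an assumption of mine, not an existing drive feature: the idea is that the hidden area stores only a salt and a slow salted hash, never the passphrase itself, so pulling the drive into another machine reveals nothing useful.

```python
# Hypothetical partition-lock check: the drive's hidden area would hold
# (salt, stored_hash); a tool wanting to rewrite the MBR or a protected
# partition must present a passphrase that hashes to the stored value.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # slow the hash down to resist brute-force guessing

def make_lock(passphrase, salt=None):
    """Create the (salt, hash) pair to store in the drive's hidden area."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, ITERATIONS)
    return salt, digest

def may_modify(passphrase, salt, stored):
    """True only if the presented passphrase matches the stored lock."""
    candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = make_lock("correct horse")
print(may_modify("correct horse", salt, stored))   # True  -> write allowed
print(may_modify("guess", salt, stored))           # False -> write refused
```

The installed OS, having been unlocked once at install time, could hold a token allowing normal file system access, while partitioning tools and foreign installers would hit the check.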

What do you think?

A 3D GUI Operating System

June 17, 2008


This is a thought experiment. I’m sitting before a new, state-of-the-art computer: a large, wide LCD screen and a cool-looking keyboard with some strange controls around the edges – optical sensors for your fingers that replace a mouse, trackball, touchpad, joystick, etc. You need many more controls than a mouse or joystick can provide, and they must be ergonomically placed in relation to the keys so that hand movement is minimized.

The screen background is a virtual 3D display that can be panned by moving the pointer to an edge of the screen, so if you have many documents open you can easily move to them – no separate, numbered desktops. Of course, you can move your document to any position on the screen/desktop, enlarge or shrink it, rotate it, tilt it in any direction.

The screen is black, but the computer is up and running, there’s just nothing selected to display. I’m asking you to throw away all you know about the desktop metaphor, generic windows, background image and icons. You can have them if you want, but they aren’t necessary. I put my finger on one of the optical sensors, a pointer appears on the screen, I make another move and a cursor appears at the pointer, now I can type in a command, make another move on the sensor and the results will appear in any typeface, size and color I desire. I want to open a book I’ve been working on; I move my finger to another sensor, a dot of light appears in the dark background and quickly enlarges into a book. It looks like a real book, the pages will turn forward or back. It opens to where I left off writing. I can smoothly enlarge this book until a pixel of one character fills the entire screen or shrink it into the background until it disappears.

You will see no menus until you choose the appropriate sensor, then one pops up according to context. Tool bars work similarly.

There are no named applications as such. The OS only asks what you want to do: write a letter or a book, draw, manage a database, send email, browse the web, etc. Everything is functionally oriented.

I’m putting this idea for a new OS out on the net as a request for comments and to see what interest there is in making it a reality. I think what I have envisioned is possible now. What are the difficulties to be overcome? Is anyone out there interested in attempting it?

I am not a programmer, but an experienced and savvy computer user and creative thinker who has run Linux for 15 years and very much wants an OS like this to fulfill my needs and my thinking and working styles.

The installed-base legacies of business, programming and engineering severely hamper any really user-friendly OS. These are outmoded ideas, not relevant to many potential users in other fields, notably the arts and humanities and many other fields of research. The interfaces are too square, functions are too separate and kludgy, and the hierarchies of files and directories limit flexibility and usability. Current OS interfaces are not like the real world and life in general. The metaphor of a VR world that mirrors the real world is a good start, but the electronic medium can be a world unto itself, with peculiar and unique properties that might make the user interface easier and more creative. Therefore, the interface must begin as a graphical drawing/processing object-manipulation engine.

I would like to see an interface that is 3-D object-oriented. Documents/objects should look like real-world objects; for example, a book should appear as a book with turnable pages. To access an object, the user could type in its title or search for objects of a preferred type. There need not even be icons. The operating system would know where everything is and what it is. All information about an object would be included in object headers. No more opening an application for a specific kind of data: when a work-object file is selected, the OS would run the appropriate modules and display it the way the user wants. In a compound object, such as a book with pictures, tables, charts, etc., the sub-objects would retain their identity but be linked to the master object. If saved for archiving, all the associated objects could be included in one compound file, like a Unix tar file. If an object is linked to another, the OS would not allow deletion, or would at least warn the user of the link. When an object is selected, the appropriate toolbox/palette opens and can be positioned anywhere on the screen and expanded or contracted in size. The GUI object can also be brought near and thus expanded, or pushed away into the background until it disappears.
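The compound-object and link-protection ideas above could be modeled as a toy sketch like the following; every class and method name here is hypothetical, invented just to illustrate the behavior:

```python
# Toy model: sub-objects keep their identity but are linked to a master
# object, and the store refuses to delete anything still linked to.
from dataclasses import dataclass, field

@dataclass
class WorkObject:
    title: str
    kind: str                                   # "book", "picture", "table", ...
    links: list = field(default_factory=list)   # sub-objects of a compound object

class ObjectStore:
    """Stand-in for the OS-wide object database the post describes."""

    def __init__(self):
        self.objects = []

    def add(self, obj):
        self.objects.append(obj)
        return obj

    def delete(self, obj):
        """Refuse deletion while any other object still links to obj."""
        if any(obj in other.links for other in self.objects):
            return False          # a real system would warn the user here
        self.objects.remove(obj)
        return True

store = ObjectStore()
book = store.add(WorkObject("My Book", "book"))
photo = store.add(WorkObject("Cover Photo", "picture"))
book.links.append(photo)          # the photo becomes part of the compound book

print(store.delete(photo))        # False: the book still links to it
book.links.remove(photo)
print(store.delete(photo))        # True: no links remain, deletion allowed
```

Archiving the compound object would then just mean serializing the master object plus everything reachable through its links into one file.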

You might want a background image, but it could be an interactive desktop similar to an HTML image map, so that by clicking certain areas specific actions could be invoked.


On bootup, the kernel first establishes a 3-D VR GUI space; graphics recognition is therefore built in at the system level. Writing this interface for the X Window System probably won’t work, but some legacy code might be used.

The OS is not exclusively graphically-based; a command line may appear anywhere the cursor is. Text is both character and graphic object, i.e., all text displayed has graphical properties.

The OS is an application/database/very high-level programming language in one, using plug-in data/function/process modules that are 100% compatible. These modules are not separate applications.

The OS GUI draws objects according to the user’s preferences. There need be no permanent look and feel, as in current applications. The corporate/office model for software branding should go away.

Everything is an object, each occupying its own memory space.

Objects can have any size, shape, colors and position in VR space.

Objects are divisible, groupable (compound objects), linkable, can communicate, have inheritance, user permissions and passwords.

The OS databases all objects and their properties/characteristics.

Multitasking and multithreading are inherent.

No software memory or storage limits – limited only by the hardware, file system and OS.

The file system is an object-based tangled hierarchy, spanning all drives. No directories or folders need exist except for user convenience.

The file system/drives might be partitioned into five sections: system/functions, data-object templates, workspace (users), archive (compressed), and caching (swap). These need not be seen by the user.

No drive letters or numbers need be displayed. Only the removable drives and external storage (backup) are user accessible and might have icons.

All system utilities automatically run in background: anti-virus, anti-spam-adware-malware, file checking, compression, drive check, etc.

Digital sound recognition is built in.

Handwriting recognition built in.

Voice recognition built in.

Plug and play. Most other hardware recognition, especially scanning.

A new driver layer that eliminates the need for separate hardware drivers. Drivers should be built into hardware, solving the problem of manufacturers having to release proprietary hardware code to third parties.


Fully user configurable. Can be set up as any metaphor: desktop, artist’s studio, laboratory, library, etc. Modules to create graphical user metaphors and widgets would be a major component. Just drag and drop a widget anywhere, link them, create a functional display for what you want to do.

No windows (unless the user wants them) or program icons. Get rid of the rectangular box metaphor. Work-object icons might be appropriate, depending on the user’s needs, but optional.

A 3-D pointing device with many more buttons or optical sensors would be needed to control the interface.

A graphics tablet could be another standard input device.

At first, a standard keyboard would be one input device, then others might be designed to work better with this new metaphor.

High resolution stereo monitor glasses could become a monitor replacement.

Toolboxes/palettes to select tools to operate on a work-object may be selected on screen.

Tools: pen, brush, open/close hand (mover/grabber), outliner-selector (used with grabber), scissors or knife, eraser. What else?

Displayed objects can be any recognizable object, such as a piece of paper or book.

Displayed objects are sized by a zoom function using perspective of the VR space, so that they can fill the screen or vanish in the distance.

There should be only one desktop, but of potentially infinite size, depending on memory, processor, monitor, etc., that can be panned by moving the pointer to an edge of the screen if many documents are being displayed.
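The edge-panning rule for that single infinite desktop might look like the sketch below, where pointer position near a screen edge maps to a pan velocity for the viewport; the margin and speed constants are arbitrary assumptions.

```python
# Edge-panning sketch: when the pointer enters a margin near a screen
# edge, the viewport drifts that way, faster as the pointer nears the edge.

EDGE_MARGIN = 32   # pixels from each edge that trigger panning
MAX_SPEED   = 20   # viewport pixels per frame at the very edge

def pan_velocity(x, y, width, height):
    """Return (dx, dy) to add to the viewport origin this frame."""
    def axis(pos, size):
        if pos < EDGE_MARGIN:                    # near the left/top edge
            return -MAX_SPEED * (EDGE_MARGIN - pos) / EDGE_MARGIN
        if pos > size - EDGE_MARGIN:             # near the right/bottom edge
            return MAX_SPEED * (pos - (size - EDGE_MARGIN)) / EDGE_MARGIN
        return 0.0                               # middle of the screen: no pan
    return axis(x, width), axis(y, height)

print(pan_velocity(5, 300, 1024, 768))    # pointer near left edge: pans left
print(pan_velocity(512, 384, 1024, 768))  # pointer centered: no panning
```

Each frame, the window system would add the returned velocity to the viewport origin, letting the user glide across however many documents are laid out on the one big desktop.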


Networking/communications, word processor, table (spreadsheet, etc.), MIDI, digital audio, video (MPEG-1–4), extended graphics editor, extended drawing editor (vector, raster, CAD), charting/graphing, equation processor, data acquisition, special databases for specific fields, statistics, dictionary (spell checker, thesaurus, definitions), OCR, etc.


The display and work model metaphor goes beyond document-centric to work-object-centric. And it is process-oriented.

No more major applications programs would have to be written, only small function modules, which should make writing and debugging easier and eliminate application bloat.

The program/module installer would be built into the OS. All installs would follow the same procedure.

At first, we might want to create a single-user system, but it must eventually be capable of multi-user/network server functions.

OS should recognize most file data formats, especially from DOS/Windows, Macintosh and UNIX.

A PDA version that interfaces with the desktop might be a good idea.

OS should run on faster Pentiums and x86 clones, Power PCs, MIPS, Ultra Sparc, DEC Alpha, etc.

Could Linux be adapted? Could code from X Windows be used or should the graphical interface be written from scratch?

I’d appreciate input. Is anyone willing to tackle it?