WHY LINUX IS A SUPERIOR OPERATING SYSTEM

January 25, 2009

The issues I wish to discuss in this article are mostly technical, but the technical infrastructure is what operating systems are all about. Let’s take a look under the hood.

First, a bit of history. Linux is a UNIX clone or variant, first written from scratch by Linus Torvalds in 1991, and is free for anyone to use. UNIX was written at Bell Labs roughly 40 years ago as a proprietary operating system owned by AT&T; the rights have since passed to other companies. Originally it ran only on minicomputers and mainframes, and only with the advent of powerful 32-bit microcomputers could UNIX variants be brought to the desktop. No original UNIX code is in Linux, so there is no copyright violation. All UNIX variants are very similar in their underlying structure. Most of the commands and the file system structure are the same, but these things can be implemented in many different ways.

Other open source UNIX variants are FreeBSD, NetBSD, PC-BSD, OpenBSD and OpenSolaris, but Linux has become the most popular. Mac OS X is partially based on BSD; only its GUI and some applications are proprietary.

The programs and applications run on Linux and other free UNIX variants are not themselves Linux; only the kernel, the core of the system, is actually Linux. The programs are mostly open source clones of original UNIX programs, or completely new ones from the Free Software Foundation's GNU Project, founded by Richard Stallman in 1985, and are written by thousands of programmers simply for the love of it. Open source means that you, the end user, are entitled by the license to the original source code a program is written in, and you may modify it. Most of these programs are free. Many of them have been ported to Mac OS X and Microsoft Windows.

There are many, perhaps 300, distributions of Linux. A distribution is the Linux kernel with hundreds or even thousands of GNU programs included. The look and feel of a distribution is decided by one person or a group of people. Because there are many programs that do the same thing, their inclusion in a distribution is a matter of personal preference, an attempt to make a distinctive whole. In this way Linux distributions can be created for anyone's needs. They range from very small distributions, such as Puppy Linux, which works well on older, limited hardware, all the way up to enterprise server systems from Red Hat. This tends to be confusing for those only familiar with Windows or Macintosh, which offer one way. Linux is having it your way. The advent of Linux has caused more development and advancement in ease of use in UNIX variant operating systems than anything else.

The following are the major technical issues that make Linux a superior operating system, compared with Microsoft Windows.

Multi-user design:

All UNIX-type operating systems are multi-user systems wherein each user has a private, password-protected account that cannot compromise other users' accounts or the underlying system. A user can corrupt or destroy only his own account. Only a knowledgeable system administrator should have access to the whole system. Users can choose to share some files with other users.
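
As a minimal illustration (the file and user names here are made up), every file carries an owner, a group and permission bits that only the owner or the administrator can change:

    $ ls -l notes.txt
    -rw-r--r-- 1 alice users 1024 Jan 25 09:00 notes.txt
    $ chmod o-r notes.txt    # take read access away from other users
    $ chmod g+w notes.txt    # let the group write, to share the file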

Scalability:

UNIX operating systems are extremely scalable, from embedded devices that do one thing, e.g., a cell phone or MP3 player, to multi-user systems in corporations, universities and government, up to enterprise servers that simultaneously serve thousands of people over the Internet.

DOS was not scalable but extremely limited; it couldn't multitask, that is, run more than one program at the same time. Windows has suffered from this legacy and was forced to create many add-on kludges to make multitasking work. With Windows NT and its heirs (XP, Vista) some of these problems have been reduced.

File systems:

UNIX file systems predate DOS and have long been superior to Microsoft's FAT16, the later FAT32, and NTFS. In particular, symbolic and hard links and permissions are native to the file system. Early UNIX file systems allowed long file names, well beyond the eight-character limit of DOS; now file names can be up to 255 characters. In Linux, even the original ext2 (second extended file system) did not have to be defragmented, as even the updated NTFS in Windows Vista still does. Defragmenting a file system is simply not an issue in UNIX variants. Journaling, the ability to record operations on files so that there is a log of operations in progress, allows for more stability and recovery should there be a power failure or other hard disk problem, and was an early introduction in UNIX file systems. Microsoft added journaling only with NTFS. Windows cannot natively recognize any UNIX file system, while Linux and most other UNIX variants can read and write the vast majority of file systems.
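
For example, both kinds of link are made with the standard ln command (the file names are illustrative):

    $ ln -s thesis.txt shortcut.txt      # symbolic link: a pointer to the name
    $ ln thesis.txt second-name.txt      # hard link: another name for the same data
    $ ls -li thesis.txt second-name.txt  # -i shows both share the same inode number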

There are many file systems to choose from: ext2; ext3 (ext2 with journaling added); the just-released ext4; JFS (IBM's Journaling File System, later released as open source); XFS from Silicon Graphics; ReiserFS; ZFS from Sun Microsystems (although there is an incompatible licensing problem that will probably be worked out); and the soon-to-be-released Btrfs and Tux3. The journaling file systems among them are faster, more reliable, more secure and can handle larger partitions and files than Microsoft's NTFS. Each has its strengths for particular uses.

All of these file systems will format a large drive in seconds. Just try that with NTFS – go take a long nap.
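
A rough sketch of what that looks like, assuming a hypothetical second-disk partition /dev/sdb1:

    # (as root)
    mkfs.ext3 /dev/sdb1    # creates a fresh journaling file system in seconds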

Linux and UNIX variants can also read, write and modify hard disk partitions of many operating systems.

The layout of the UNIX file system structure that users see is far more logical and rigorously maintained than in Windows. Programs, configuration files, data files, etc. are assigned to particular directories by a convention now codified as the Filesystem Hierarchy Standard (FHS). The file systems are unified into one tree structure; users do not see separate hard drives or partitions.
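
Partitions are simply grafted into that one tree with the mount command; a sketch with hypothetical device names:

    # (as root)
    mkdir -p /mnt/data /mnt/win
    mount /dev/sdb1 /mnt/data         # a second disk appears as just another directory
    mount -t ntfs /dev/sda1 /mnt/win  # even a Windows partition joins the same tree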

Recognizing File Types:

Linux and UNIX variants use MIME types to identify file types for programs and users, but the operating system itself does not rely on file extensions. A file's data type is read from the file header. Extensions are not needed, except as a convenience for the user. Windows, as a legacy holdover from DOS, requires extensions to recognize executable programs and data types.
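
The standard file utility shows this in action; it identifies a file from its header bytes, not its name (the file name here is made up):

    $ file photo             # no extension needed
    photo: JPEG image data, JFIF standard 1.01
    $ mv photo photo.txt     # a misleading extension changes nothing
    $ file photo.txt
    photo.txt: JPEG image data, JFIF standard 1.01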

Command Line:

Yes, the dreaded command line. It's pretty well hidden in Windows now, but it's still there, although you can't do much with it. In UNIX variants it's an incredibly powerful way to operate your computer, with potentially thousands of commands. The drawback of the command line is that the user needs to memorize commands; it's not intuitive like a GUI, but the "man pages" (manuals) offer help via the command "man programname". The UNIX command interpreters also support fairly complex scripting and the running of script files, which make the old DOS .bat files look like primitive children's toys. Most of the system configuration is done from script files at bootup. Almost anything can be programmed with scripts.
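
As a small example of the kind of thing a script can do, here is a sketch that lists the ten largest files under a home directory:

    #!/bin/sh
    # report the ten largest files under the user's home directory
    find "$HOME" -type f -exec du -k {} + 2>/dev/null |
        sort -rn | head -10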

Some command line knowledge can be very handy if you need to find out why a program won't run or malfunctions, or if you lose your GUI desktop.

Networking:

Networking was first created on UNIX and built in at the kernel level, as was the Internet. The syntax of Internet commands, email addresses and web URLs is UNIX syntax. Networking on Windows was an afterthought, an add-on. About half of the Internet now runs on UNIX variants, with Linux predominating.

Installing Software:

In UNIX variants the installer is built into the operating system, enforcing one way of installing programs, very unlike Windows or Macintosh, where each piece of software comes with its own installer. This system reduces the chance of conflicts or missing library dependencies, and when you install software you don't need to reboot your computer.

Most Linux distributions now maintain their own Internet repositories of software that is known to work correctly together.
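
As a sketch, on a Debian-style distribution installing a program and everything it depends on looks like this (the package name is just an example):

    # (as root, on a Debian- or Ubuntu-style system)
    apt-get update           # refresh the repository index
    apt-get install gimp     # fetches GIMP plus any libraries it depends on
    # an RPM-based distribution would use something like: yum install gimp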

Security:

UNIX variants are more secure than Windows because of permissions and separate password-protected user accounts. This hierarchy of privileges and permissions makes it much more difficult for an attacker to gain access to the system. Certainly, a careless user might download malware and have his account corrupted, but a virus or other malware can't spread throughout the entire system. Most viruses, worms, trojans and malware are written for Windows and simply can't run on Linux. There are a few that can infect Linux, but they are rarely seen. Most Linux users never run an anti-virus program and don't get infected. If Linux were to gain a greater market share, more miscreant programmers might have an incentive to write infective programs for it.

Stability:

Linux and UNIX variants give each running program its own protected memory space, unlike the DOS-based versions of Windows, where programs shared memory, so a program crash cannot crash the system or interfere with any other running program. Even the kernel runs in its own protected memory space. It is almost impossible to crash a Linux machine unless you really know what you are doing or there is a hardware failure. Many Linux machines have been running for years without rebooting.

Configuration Files vs. the Registry:

Windows uses a single database known as the "Registry" to store all system and program configuration data. Much of it is cryptic, full of 32-bit hexadecimal numbers and other technical text not understandable to most users. If the Registry becomes corrupted, which can easily happen, Windows may not work at all. Windows does back up an older version that can be used to return to a known good configuration, but often that backup has been rewritten with the corrupted data before the user knows it. Then the only solution is to reinstall Windows.

In Linux, configuration files are all separate, human-readable text files, often with explanatory comments suggesting how to configure them. The system configuration files, stored primarily in the /etc directory, can be read but not written by ordinary users. Each user's account stores program settings in separate configuration files pertaining exclusively to that user's preferences. It would be difficult for all of those files to be corrupted or destroyed.
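
For a taste of how readable these files are, here is a short fragment in the style of /etc/fstab, the table of file systems mounted at boot (the entries are illustrative):

    # /etc/fstab: static file system information
    # <device>   <mount point>  <type>  <options>  <dump>  <pass>
    /dev/sda1    /              ext3    defaults   0       1
    /dev/sda2    swap           swap    defaults   0       0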

Memory:

Windows Vista requires at least 2GB of memory to run well. Most Linux distributions require only a quarter as much (512MB), unless you are doing high-level graphics, audio or video work, and some of the smaller distributions can run in 128MB! All of these are at least as powerful as Windows, if not much more so. Linux has much more efficient memory management.

Error Logging:

All UNIX variants record system processes, errors and warnings to log files that can be very helpful in debugging problems.
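
A few typical commands for reading them (exact log paths vary by distribution):

    $ dmesg | tail                        # recent kernel messages
    $ tail -f /var/log/messages           # watch the system log live
    $ grep -i error /var/log/Xorg.0.log   # hunt for X server problems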

Drivers:

In Linux all hardware drivers are included as kernel modules. There is no need to search for and install a driver for a new hardware peripheral. There are some peripherals that only have proprietary Windows drivers and will not work on Linux, but those are becoming fewer as more manufacturers release their hardware specs to the open source community.
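
Modules can be listed, loaded and unloaded by hand when needed (the module name below, the HDA sound driver, is just an example):

    $ lsmod                    # list the modules currently loaded
    # (as root)
    modprobe snd-hda-intel     # load a module by name
    modprobe -r snd-hda-intel  # and unload it again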

Live CD:

Many Linux distributions are now available on CD or DVD in a runnable live format that can boot your computer and run the operating system as if it were installed, but without altering anything on your computer. This way you can test the OS before you decide to install.

In the early days of Linux and most UNIX variants there was no automatic detection and configuration of hardware and software. Users had to know a lot of command line terminology and spend hours configuring their systems. I know this well, because I have been running Linux for over 15 years. Those days are past. The ease of installing many of the major Linux distributions now surpasses Windows, and in the majority of cases everything works.

There are many graphical desktops to choose from in Linux, some simple and basic, others very powerful, with their own suites of integrated programs. Each has its own look and feel, and most of them are far more configurable than Windows or the Mac. This can be a cause of confusion to Windows and Mac users, who are used to a limited palette.

There are now alternative GUI applications that substitute for almost all Windows and Mac applications and are as good as or better than those commercially produced programs.

Linux is more flexible than Windows; that means it has more power, but greater complexity. I can't believe how much Windows Vista has been dumbed down from previous versions. That may make it easier for the average user, but for the computer savvy, its lack of configurability and informative error messages makes it a blank wall if you want to fix anything.

I am a supporter of the "just works" philosophy for those computer users who can't or won't bother learning about computers in general and just want to get work done. This could be achieved in Linux. Linspire and Freespire were attempts to do this, but they haven't been successful because they're still too complex. Such a system would come preinstalled and configured on a computer, and much of the access to the working innards of the system would be turned off or not provided, so that the user simply wouldn't be allowed to make changes that might bork the system. For instance, no program development applications would be installed, and the only extra software would be available through a well-controlled repository. Technical assistance would be available by telephone or online, where a technician might control the user's computer through a secure connection and make repairs for a fee. Still, the user must learn how to use the software; making more interactive tutorials available should be the solution.

SOME LINUX CONCERNS

June 29, 2008

These are a few of my most important concerns regarding Linux in general and are not distribution specific.

DOCUMENTATION

Because of the rapid development of many programs, it seems to me that documentation is falling way behind the curve. All too often the information is so out of date that it's almost useless, and conflicts abound between versions. Most documentation is still based on the time-honored command line editing and information programs, while the high-level GUI programs are ignored as if they didn't exist. That's fine for experienced users, but confusing for newbies. If we are to capture the market and lead the Linux revolution, documentation needs to be up to date and easily accessible from the GUI. More interactive tutorials that pop up after an installation are needed.

A NEW DRIVER MODEL

The current Windows-based model for hardware drivers is a mess. There are too many incompatibilities, especially when manufacturers change chipsets even within the same model. Why should there be separate drivers at all? Some years ago there was talk of a universal driver layer for the operating system, which a new peripheral or card could communicate with to configure the system automatically. No kernel driver would be required. The driver would live in firmware on the card, not accessed by the OS; the chipset would identify itself to the kernel, tell it what function it provides, and that would be all that is required. After all, the firmware in most cards can already identify the card to the OS. It would take only a bit more hardware programming to make this possible, and companies wouldn't have to release drivers or hardware information to third parties.

HARDWARE DETECTION

Hardware detection in Linux has gotten a lot better in the last few years, but there are still problems with chipsets that aren't directly on the PCI bus or ISA plug and play, especially with newer motherboards that have softmodem chipsets. As an example, on my new Averatec 7155 laptop, a state-of-the-art computer at the time I bought it, I had to jump through many hoops online to find information about the High Definition Audio (HDA) chipset so I could locate the right modem module. I finally got it working by compiling slmodem with ALSA support – yes, the driver and chipset are part of the sound system! Hardware detection simply can't see it, although on installation (PCLinuxOS) the sound chipset itself was found and set up just fine.

This problem goes back a few years, however. The classic IBM ThinkPad 600 had a DSP modem and a Crystal 4236 sound chipset that wasn't ISA PnP. Earlier hardware detection, for example Red Hat 6, couldn't see the modem but could detect the sound chip and set it up. Later versions couldn't see the sound chipset either; this must have been when kernel 2.4 was released. I don't know why, but workarounds were found to make it function. IBM released the Mwave modem driver and daemon source code, and after a successful compile and installation of the module and daemon the modem would work, though hardware detection still couldn't see it. I used this old laptop as a test machine until recently, when I decided to give it to a friend. I installed Puppy Linux on it and it recognized the sound chip! SUSE 10.2 also found the sound chip, though there are still problems getting it to work, and it still doesn't see the modem chipset. Why should this condition still exist?

Why are some distributions excellent at hardware detection and others not? Because Linux is open source and all code is available, it would seem that the distribution builders could find and use the best code from other distributions and include it. When better code is written, that should be incorporated. It’s Open Source – steal it!

INSTALLING UNNEEDED DRIVERS

Whether from a standard CD or DVD install set or a live CD, most Linux distros must make all possible hardware drivers available, from the Xorg video drivers to kernel modules. Most distros install all of them, taking up unnecessary drive space, though that's usually not a big problem given the size of current hard drives. The biggest part of a kernel download is modules, most of which aren't needed for any specific machine; there are hundreds of modules on any Linux machine that will never be used! They must be included because of the diversity of available hardware. The kernel itself is perhaps 1.4MB, depending on what support is built in. Some extra modules are needed to support plug-and-play devices or software services that might be added at any time, but module support for built-in devices is usually fixed for a given machine. The hard drive(s), video and sound hardware, USB, FireWire, pointing device, etc. are usually built into the motherboard and rarely change. What's needed is a more intelligent installation procedure that installs only the required drivers. In the case of kernel modules, instead of installing all of them, why can't the installer put in just the ones detected at install time and let the user add others from the install media as requirements change?
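
Anyone can see the imbalance on an installed system by comparing what is shipped with what is actually loaded:

    $ du -sh /lib/modules/$(uname -r)   # disk space taken by every shipped module
    $ lsmod | wc -l                     # how few of them are actually in use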

LINUX FOR DUMMIES

There is still a need for a Linux distribution for dummies that won't allow them to change or bork anything at the system level. Freespire and Linspire haven't lived up to their hype. Too many computer users don't even know what an operating system is or how computers work; they just want to get work done. Newbies and the computer-disabled are confused by options they may not need to know about. Even installing Windows is too much of a hassle for most people; after all, most people get Windows already installed. Because Linux is so scalable and configurable, let's limit their choices and give them something they can't break. Such a distribution might be offered pre-installed on a new computer, or would install from a live CD or DVD and automatically set up separate swap, / and /home partitions, recognize Windows if installed, and ask whether the user wishes to delete it or resize the Windows partition. After a successful install, a tutorial would pop up telling the user what application to use to get and install more packages. Root wouldn't be accessible, except in limited cases using su or sudo. The package repositories would be fixed and wouldn't allow installation from any other distribution's repositories. No compiler, development files or alternative package manager would be included, although they might be available in extra repositories that wouldn't normally show up in the installer. No Internet server software would be available; it should be exclusively a workstation distribution. A firewall should be set up automatically during install, with no user option to turn it off. This distribution might be sold with phone or web tech support to help with installation problems, with the option to buy extended tech support.

The development team should choose the best packages for user-friendliness. Alternative packages that duplicate functions, as found in most major Linux distributions, simply wouldn't be available, limiting unnecessary confusion. How many movie and MP3 players do you really need? There should be an option for automatic updates if the user chooses. The system should include OpenOffice, for example, and all the other basic applications for home or SOHO use. However, the menu selections would not reference applications but the function a user wants to perform, i.e., write a letter, get and send email, browse the web, etc. Whatever desktop is included would be fully configurable, as in other Linux distributions, but the user wouldn't be given another desktop choice. Do give them tutorials on how to configure the desktop and all the applications. These restrictions would create fewer problems for the development and packaging teams and make the distribution a more viable option.

I propose we call it KISS (Keep it simple, stupid) Linux.

I Hate Mice

June 21, 2008

I hate mice. A mouse is a bad excuse for a pointing device and an even worse drawing and manipulation device. Mice take up valuable desk space – they must be freely moved around on a surface to function, and that surface must be just the right texture, hence the advent of mouse pads. The ball collects dirt and dust, which gums up the works. Optical mice solve the latter problem, but otherwise are no improvement. Because you must use your whole arm to move it, rather than your hand or fingers, a mouse is more energy intensive and a cause of user fatigue. Face it, mice are difficult to control. Why they remain the most popular GUI control device is beyond me. I surmise it must be a conspiracy. There can't be that much stupidity.

The trackball was a great improvement. The device was stationary and so didn’t need much deskspace. If the ball was large and the buttons positioned in ergonomic relationship, it gave the user much finer control. Then some manufacturers made the ball much smaller and therefore harder to control. However, many newer trackballs are designed for thumb control. The thumb doesn’t have much dexterity – bad idea. Ergonomic? Not. You see, the best control of a trackball is by the middle finger, and if the ball is of sufficient size, say the size of a ping-pong ball, the index and fourth finger can offer even more control. The best trackball I ever used was built into a Chicony keyboard – a large ball on the right side and the three buttons on the left below the keys so that the hands never had to leave the keyboard and weren’t far from the typing position. The user didn’t have to attempt to manipulate the ball and push buttons with one hand – what a brilliant idea! Unfortunately, most trackball keyboards used very small balls with the buttons surrounding the ball, another very bad design, especially for drag-and-drop, because the user must keep a button pushed in while moving the ball with another finger. User to Earth: “Is there any intelligent life here?”

The touchpad was another sort of good idea, and has been popular on laptop computers, but also suffered from inadequate implementation. The idea of double tapping the surface to execute commands wasn’t well thought out. The surface also suffers from dirt and chemical contamination. Moist hands can cause strange behavior. There’s still a couple of standard pushbuttons for other functions. The pads aren’t sufficiently large to allow fine finger movement for drawing.

The IBM TrackPoint device for laptops wasn't a bad idea, because the user's hands didn't need to leave the keyboard, but fine control isn't possible, so it's a lousy device for drawing and manipulating objects.

The drawing tablet is a fine instrument for drawing and manipulating objects, and can even be used as an alternate pointing device, but it's another sizable device alongside the keyboard. Using a pen, as if on paper, is a natural motion. Some of the better tablets also have a puck with additional controls to replace the pen, but that's just a supermouse. The drawback is that good drawing tablets are expensive.

Still, there's no ideal pointing/drawing/manipulating device for GUIs. I've often thought that a device similar to a game joystick might be a workable replacement for a mouse; there is already software allowing a joystick to be used as a GUI control device. Joysticks have three-dimensional movement and a thumb button on top, and one could be designed with additional buttons on the stick under the fingertips for more functions. The problem is that one hand must be away from the keyboard.

The ideal computer control would be voice command, which already exists, however imperfectly, but is improving slowly. Then, we could even eliminate the keyboard. In the meantime, a well designed trackball gets my vote as the best available device. It’s a shame that they are disappearing and there’s no better device in the offing. Optical control anyone?

Remapping Hard Drives

June 19, 2008

The problem with current hard drives is the legacy device mapping that was created for DOS. The Master Boot Record (MBR) on the first sector of the drive, which holds the operating system boot information, usually in the form of a boot loader, is only 512 bytes, an incredibly small space. Because there isn't enough room to do everything required to boot a modern OS, modern boot loaders can store only a stub in the MBR, which chains to the rest of the loader, usually located within the partition of the operating system (OS) being booted. Both Windows and Linux suffer this problem, but Windows is locked into this legacy; Linux isn't.
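
You can inspect that tiny space yourself; a cautious sketch using dd to read (only read) the first sector, assuming /dev/sda is the first disk:

    # (as root)
    dd if=/dev/sda of=mbr.bin bs=512 count=1
    # bytes 0-445 hold the boot code, 446-509 the partition table,
    # and 510-511 the boot signature, which xxd shows as 55 aa
    xxd mbr.bin | tail -4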

Why shouldn't an alternative drive mapping be created with an MBR of at least 1MB that could hold a modern boot loader? The firmware of the hard drive would have to be updated, not a small concern, and for the computer to recognize the new drive configuration a new BIOS would be necessary. A Linux-based BIOS that can replace the manufacturer's BIOS and eliminates the restrictions of legacy DOS support has been in development for some years. I don't think it will work with Windows, so its use is limited. The upshot of using this BIOS is that boot speed improves by orders of magnitude – seconds rather than minutes. What would be the requirements to remap drives? I suspect it would require a BIOS update across all platforms.

Partition Locking:

There is a need to be able to password-lock hard drive partitions against being changed, overwritten or deleted from outside the OS, while still allowing an installed OS to access the file system in that partition without problems. If this were implemented, another OS could not overwrite the MBR or an installed OS without a password. The lock should be installed on the drive in a protected, invisible partition, so that even if the drive were removed and installed in another computer the protection wouldn't change. Perhaps it could live in a rewritable ROM chip built into the drive electronics, where it could be accessed from the BIOS or an OS.

What do you think?

A 3D GUI Operating System

June 17, 2008

INTRODUCTION

This is a thought experiment. I'm sitting before a new, state-of-the-art computer: a large, wide LCD screen and a cool-looking keyboard with some strange controls around the edges – optical sensors for your fingers that replace the mouse, trackball, touchpad, joystick, etc. You need many more controls than a mouse or joystick can provide, and they must be ergonomically placed in relation to the keys so that hand movement is minimized.

The screen background is a virtual 3D display that can be panned by moving the pointer to an edge of the screen, so if you have many documents open you can easily move among them – no separate, numbered desktops. Of course, you can move your document to any position on the screen/desktop, enlarge or shrink it, rotate it, or tilt it in any direction.

The screen is black, but the computer is up and running; there's just nothing selected to display. I'm asking you to throw away all you know about the desktop metaphor, generic windows, background images and icons. You can have them if you want, but they aren't necessary. I put my finger on one of the optical sensors and a pointer appears on the screen; I make another move and a cursor appears at the pointer. Now I can type in a command, make another move on the sensor, and the results will appear in any typeface, size and color I desire. I want to open a book I've been working on; I move my finger to another sensor, a dot of light appears in the dark background and quickly enlarges into a book. It looks like a real book, and the pages turn forward or back. It opens to where I left off writing. I can smoothly enlarge this book until a pixel of one character fills the entire screen, or shrink it into the background until it disappears.

You will see no menus until you choose the appropriate sensor, then one pops up according to context. Tool bars work similarly.

There are no named applications as such. The OS only asks what you want to do: write a letter or a book, draw, database information, send email, browse the web, etc. Everything is functionally oriented.

I'm putting this idea for a new OS out on the net as a request for comments and to see what interest there is in making it a reality. I think what I have envisioned is possible now. What are the difficulties to be overcome? Is anyone out there interested in attempting it?

I am not a programmer, but an experienced and savvy computer user and creative thinker who has run Linux for 15 years and very much wants an OS like this, one that fulfills my needs and my thinking and working styles.

The installed-base legacies of business, programming and engineering severely hamper any really user-friendly OS. These are outmoded ideas, not relevant to many potential users in other fields, notably the arts and humanities and many fields of research. The interfaces are too square, functions are too separate and kludgy, and the hierarchies of files and directories limit flexibility and usability. Current OS interfaces are not like the real world and life in general. The metaphor of a VR world that can mirror the real world is a good start, but the electronic medium can be a world unto itself, with its own peculiar and unique properties that might make the user interface easier and more creative. Therefore, the interface must begin as a graphical drawing/processing object-manipulation engine.

I would like to see an interface that is 3-D object oriented. Documents/objects should look like real world objects. For example, a book should appear as a book with turnable pages. To access an object the user could type in its title or could search for objects of preferred type. There need not even be icons. The operating system would know where everything is and what it is. All information about an object should be included in object headers. No more opening an application for specific kinds of data. When a work-object file is selected the OS would run the appropriate modules and display it the way the user wants. In a compound object, such as a book with pictures, tables, charts, etc., the objects would retain their identity but be linked to the master object. If saved for archiving, all the associated objects could be included in one compound file, like a UNIX TAR file. If an object is linked to another, the OS would not allow deletion or at least warn the user of the link. When an object is selected, the appropriate toolbox/palette opens and can be positioned anywhere on the screen and expanded or contracted in size. The GUI object can also be brought near and thus expanded or pushed away into the background until it disappears.

You might want a background image, but it could be an interactive desktop similar to an HTML image map, so that by clicking certain areas specific actions could be invoked.

PROPERTIES OF THE OS

On bootup, the kernel first establishes a 3-D VR GUI space; graphic recognition is thus built in at the system level. Writing this interface for X Windows probably won't work, but some legacy code might be used.

The OS is not exclusively graphically-based; a command line may appear anywhere the cursor is. Text is both character and graphic object, i.e., all text displayed has graphical properties.

The OS is an application/database/very high-level programming language in one, using plug-in data/function/process modules that are 100% compatible. These modules are not separate applications.

The OS GUI draws objects according to the user’s preferences. There need be no permanent look and feel, as in current applications. The corporate/office model for software branding should go away.

Everything is an object, each occupying its own memory space.

Objects can have any size, shape, colors and position in VR space.

Objects are divisible, groupable (compound objects), linkable, can communicate, have inheritance, user permissions and passwords.

The OS databases all objects and their properties/characteristics.

Multitasking and multithreading are inherent.

No software memory or storage limit. Only limited by the hardware, file system and OS.

The file system is an object-based tangled hierarchy, spanning all drives. No directories or folders need exist except for user convenience.

The file system/drives might be partitioned into five sections: system/functions, data object templates, workspace (users), archive (compressed), and caching (swap). These need not be seen by the user.

No drive letters or numbers need be displayed. Only the removable drives and external storage (backup) are user accessible and might have icons.

All system utilities automatically run in background: anti-virus, anti-spam-adware-malware, file checking, compression, drive check, etc.

Digital sound recognition is built in.

Handwriting recognition built in.

Voice recognition built in.

Plug and play, and most other hardware recognition, especially scanning.

A new driver layer that eliminates the need for separate hardware drivers. Drivers should be built into hardware, solving the problem of manufacturers having to release proprietary hardware code to third parties.

GUI INTERFACE

Fully user configurable. Can be set up as any metaphor: desktop, artist’s studio, laboratory, library, etc. Modules to create graphical user metaphors and widgets would be a major component. Just drag and drop a widget anywhere, link them, create a functional display for what you want to do.

No windows (unless the user wants them) or program icons. Get rid of the rectangular box metaphor. Work-object icons might be appropriate, depending on the user’s needs, but optional.

A 3-D pointing device with many more buttons or optical sensors would be needed to control the interface.

A graphics tablet could be another standard input device.

At first, a standard keyboard would be one input device, then others might be designed to work better with this new metaphor.

High resolution stereo monitor glasses could become a monitor replacement.

Toolboxes/palettes to select tools to operate on a work-object may be selected on screen.

Tools: pen, brush, open/close hand (mover/grabber), outliner-selector (used with grabber), scissors or knife, eraser. What else?

Displayed objects can be any recognizable object, such as a piece of paper or book.

Displayed objects are sized by a zoom function using perspective of the VR space, so that they can fill the screen or vanish in the distance.

There should be only one desktop, but of potentially infinite size, depending on memory, processor, monitor, etc., that can be panned by moving the pointer to an edge of the screen if many documents are being displayed.

MODULES

networking/communications, word processor, table (spreadsheet, etc.), MIDI, digital audio, video (MPEG-1 through MPEG-4), extended graphics editor, extended drawing editor (vector, raster, CAD), charting/graphing, equation processor, data acquisition, special databases for specific fields, statistics, dictionary (spell checker, thesaurus, definitions), OCR, etc.

NOTES

The display and work model metaphor goes beyond document-centric to work-object-centric. And it is process-oriented.

No more major application programs would have to be written, only small function modules, which should make writing and debugging easier and eliminate application bloat.

The program/module installer would be built into the OS. All installs would follow the same procedure.

At first, we might want to create a single-user system, but it must eventually be capable of multi-user/network server functions.

OS should recognize most file data formats, especially from DOS/Windows, Macintosh and UNIX.

A PDA version that interfaces with the desktop might be a good idea.

The OS should run on faster Pentiums and x86 clones, PowerPC, MIPS, UltraSPARC, DEC Alpha, etc.

Could Linux be adapted? Could code from X Windows be used or should the graphical interface be written from scratch?

I’d appreciate input. Is anyone willing to tackle it?