~ Ken Thompson on Unix
~ Bill Gates on Eunuchs and Unix
Historically, Unix was a piece of software designed to mimic the behavior of a genuine operating system. It was originally released by Bell Laboratories. Unix took on different appearances over the years, eventually settling in the form of open-source projects.
In the 1960s, the Massachusetts Institute of Technology, AT&T, and General Electric worked on an experimental operating system called "Multics" (or "Multiplexed Information and Computing Service"), which was designed to run on the GE-645 mainframe computer. Although the project seemed to promise sizable profits at the time, AT&T nevertheless bailed out due to several delays in product delivery. As a result, two malevolent Bell Labs scientists assigned to the project, Ken Thompson and Dennis Ritchie, were left with much time and company resources to design their world-domination scheme à la James Bond villains, as well as to simply fool around unproductively - with the latter being the main emphasis.
Having successfully developed "Space Travel", a software program designed to interfere with productivity (and thus the economy in general), on the GE-645, Thompson felt that such a payload would probably have a greater impact on social stability if it were able to execute on a much smaller device - the DEC PDP-7, in particular. In order to sustain the basic input/output functions of the DEC machine, Thompson and Ritchie hastily swept together a code base that was not necessarily sound as a production software platform but was nevertheless adequate for the runtime requirements of Space Travel. This code base, "Unix" - or, as it was originally named, "Unics" (from the word eunuchs) - would eventually be discovered by the masterminds behind AT&T and reimplemented for destruction on an even wider scale.
The henchmen on the AT&T executive board had long figured that a trivial payload program was unlikely to send a significant message to the world governments they were up against (a fact later proven by Microsoft's Solitaire). Rather than further investing in an unworkable idea, they decided it would be better to simply allocate the money they had extorted from their phone customers to Unix itself. With the Computer Science Research Group being the first victim of the diabolical plan, the terrorists at AT&T were able to sell Unix as a legitimate operating system without raising too much suspicion in the public or alarming government intelligence. In fact, Unix was even licensed to numerous US federal agencies as well as educational institutes under such a guise, despite concerns of a systemic technological meltdown from critics. To make matters worse, the University of California, Berkeley even initiated a project for a Unix clone known as the "Berkeley Software Distribution", or "BSD".
Fortunately for most users, the problems of Unix were simply too apparent to go unnoticed (or be left untreated) in a production environment. Due to the unpredictability of the software, many Unix-based installations were often found to be unable to correctly interoperate with their non-Unix counterparts or even among themselves. Seasoned technicians were, in most cases, able to find ways to work around problems present in the Unix systems - or, at least, around those rear ends in a top hat who insisted on using them - and by the time the AT&T headquarters were eventually overrun by a crack team of Navy SEAL commandos, an MI6 agent with a Scottish accent and United Nations Emergency Force land division personnel (or "meat shields" as they were affectionately called) on January 1, 1984, the world's economy was still largely in a healthy state.
A Unix or Unix-like operating system is usually designed to masquerade as a proper operating system in order to lure any unsuspecting victim into adopting Unix into their computer networks. To achieve this, a loose set of applications and features is usually packaged with the Unix distribution in question, and it is often capable of maintaining basic network functions to a certain degree. However, once these applications and features are fully deployed in the victim's network environment, they will proceed to silently sabotage normal traffic, causing data to mysteriously vanish in transit or forcing technicians to expend countless man-hours in order to rectify the issue.
The Unix file storage semantics follow the same pattern that is shared amongst (historically) popular operating systems such as the Microsoft Windows series (NTFS and V/FAT12/16/32), the Apple Macintosh line of products (HFS/Plus), the Amiga system series (Amiga FFS), the VMS operating system series (Files-11) and Commodore/CBM DOS (1541/1581). This means that all files inside a disk storage are divided and sub-divided using containers known as "directories" (or "folders" in Windows). In other words, the Unix file storage mechanism (or "filesystem") is none other than a typical hierarchical filing logic.
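This hierarchical filing logic is visible in any Unix path, where each "/" descends into another container. A minimal sketch using Python's standard pathlib module (the path names here are made up purely for illustration):

```python
from pathlib import PurePosixPath

# a typical hierarchical Unix path: each "/" descends into another container
p = PurePosixPath("/home/bob/Bobs_Documents/Junk/feline.doc")

print(p.parts)    # ('/', 'home', 'bob', 'Bobs_Documents', 'Junk', 'feline.doc')
print(p.parent)   # /home/bob/Bobs_Documents/Junk
print(p.name)     # feline.doc
```

Every component except the last is a directory containing the next, which is all a hierarchical filesystem amounts to.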
In order to slowly cripple a victim's business operations, however, the Unix filesystem is, in addition, designed to leave mission-critical data in a vulnerable state and to bury important information in a convoluted directory structure. Even Unix-like filesystems such as the Linux ext series still display many of these traits despite numerous attempts at addressing the issues. In fact, none of the problems associated with the Unix filesystem have been resolved at all.
Inside the Unix filesystem, a symbolic link is a reference to a file or directory located on the storage media that behaves as though it were the very object it represents. On the surface, this may seem to be a perfect solution in places where storage space is too limited for multiple copies of a file to exist. However, when used on directories, there is a possibility that a link-agnostic program (e.g. most Unix applications) may get caught in an infinite loop and thus be unable to complete a task or be safely terminated. Consider this scenario:
A directory structure containing a circular symbolic link:

.
.
.
|-Bobs_Documents/
| |-Junk/
| | |-L (symbolic link -> "/Bobs_Documents")
The diagram shows a simple directory structure with "Bobs_Documents" located in "/" (or the "root directory"), with "Junk" placed inside "Bobs_Documents" and a symbolic link in "Junk" pointing back to "Bobs_Documents" itself. From the perspective of a typical application, there is simply no way to tell the difference between a symbolic link and a real directory. Hence, what once appeared to be a healthy file arrangement is now turned into a surrealistic construct à la M. C. Escher's "Ascending and Descending", leading unsuspecting applications to run around in an endless circle between "Bobs_Documents" and "Junk" until they exhaust all available resources and crash:
The above structure, from the perspective of an application:

.
.
.
|-Bobs_Documents/
| |-Junk/
| | |-Bobs_Documents
| | | |-Junk
| | | | |-Bobs_Documents
| | | | | |-Junk
| | | | | | |-Bobs_Documents
| | | | | | | .
| | | | | | | .
| | | | | | | .
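For the skeptical reader, the trap is easy to reproduce. The sketch below builds the "Bobs_Documents"/"Junk" structure in a temporary directory and then walks it the way a link-agnostic application would, recursing into anything that looks like a directory. The naive_walk helper is invented for this illustration; its depth limit stands in for the resource exhaustion and crash an unbounded program would suffer:

```python
import os
import tempfile

# build the structure from the diagram in a throwaway directory
root = tempfile.mkdtemp()
docs = os.path.join(root, "Bobs_Documents")
junk = os.path.join(docs, "Junk")
os.makedirs(junk)
os.symlink(docs, os.path.join(junk, "L"))  # the circular link

def naive_walk(path, depth=0, limit=4):
    """Recurse into anything that looks like a directory, links included.

    The depth limit is a stand-in for the real program's eventual crash.
    """
    if depth > limit:
        return ["... (gave up: infinite loop)"]
    entries = []
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        entries.append("  " * depth + name)
        if os.path.isdir(full):  # os.path.isdir happily follows symlinks
            entries.extend(naive_walk(full, depth + 1, limit))
    return entries

tree = naive_walk(docs)
print("\n".join(tree))
```

The walker bounces between "Junk" and "L" forever; a link-aware tool would instead compare resolved paths (e.g. via os.path.realpath) or use os.walk, which does not follow directory symlinks by default.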
To compound the issues associated with the symbolic linkage mechanism, the Unix filesystem also comes with a rather arcane feature known as "hard links". To understand the concept of hard links, simply picture the files and directories inside a Unix filesystem as a collection of non-deterministic quantum particles (i.e. electrons, photons etc.). In quantum physics, the position of a quantum particle is represented by a probability function known as the "wave-function", which basically means that, given a confinement (such as a box), a quantum particle may appear anywhere inside it depending on what pattern it chooses to follow (i.e. the wave-function). Likewise, a file or directory inside a Unix filesystem is never fixed to a single point of the file organization structure but, rather, simply takes on a non-deterministic presence inside it. Hence, when one creates a file inside a Unix filesystem, one is in effect performing the following two separate actions at the same time:
- creating a blob of data on the disk, and
- linking to it from a designated point of the file organization structure.
This designated point, or "hard link", is essentially the name given to the blob of data (e.g. "feline.doc"). One may also create an extra hard link (e.g. "feline2.doc") to the same blob of data, and it will simply appear as a separate file on the disk. The rationale behind hard links is that one may then create hundreds upon hundreds of such things pointing to one single file and then forget where they are or how many of them are present. This is exactly like Schrödinger's cat: unless one pries the filesystem open and plows through every byte of it, there is no way to know whether a file truly exists, how many applications are sharing it, or if any two files are just hard links pointing to the same data.
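The two actions above can be demonstrated with Python's standard os module: os.link creates the extra hard link, and the inode number and link count reported by os.stat betray that "feline.doc" and "feline2.doc" (the example names from above) are one and the same blob of data:

```python
import os
import tempfile

d = tempfile.mkdtemp()
feline = os.path.join(d, "feline.doc")

# action 1: create the blob of data on disk (with one initial hard link)
with open(feline, "w") as f:
    f.write("meow")

# action 2 (repeated): link to the same blob from another designated point
feline2 = os.path.join(d, "feline2.doc")
os.link(feline, feline2)

s1, s2 = os.stat(feline), os.stat(feline2)
print(s1.st_ino == s2.st_ino)  # same inode: one blob, two names
print(s1.st_nlink)             # link count is now 2

# a write through one name is, of course, visible through the other
with open(feline2, "a") as f:
    f.write(" hiss")
with open(feline) as f:
    print(f.read())
```

Short of scanning every directory for entries with a matching inode number, nothing about "feline.doc" itself reveals where its other names live, which is precisely the Schrödinger problem described above.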
Hard-linking to a directory is also possible in Unix. To grasp the implications of such an ability, simply take the example of the circular symbolic link above, replace "L" with a hard link to "Bobs_Documents" and attempt to delete "Junk".
A directory structure containing a circular hard link:

.
.
.
|-Bobs_Documents/
| |-Junk/
| | |-L
| | | |-Junk
| | | | .
| | | | .
| | | | .
To the operating system, there is essentially no difference between "Bobs_Documents" and "L", except that "L" appears to be something inside "Junk". When attempting to delete "Junk", the filesystem will go inside "L", find everything and delete it. Again, this is simply another "Ascending and Descending"-like trap, but this time it is the operating system that takes the bait, bringing itself down along with everything running on it.
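Readers tempted to try this at home will be disappointed: on Linux and most contemporary Unix systems, the link(2) system call simply refuses to hard-link a directory (reporting EPERM), precisely to rule out the scenario above. A small sketch, assuming a Linux-like system:

```python
import os
import tempfile

root = tempfile.mkdtemp()
docs = os.path.join(root, "Bobs_Documents")
junk = os.path.join(docs, "Junk")
os.makedirs(junk)

# attempt the circular hard link from the diagram above
try:
    os.link(docs, os.path.join(junk, "L"))
    outcome = "linked"   # early Unix let the superuser get away with this
except OSError:
    outcome = "refused"  # modern kernels reject hard links to directories
print(outcome)
```

Historically, early Unix did allow the superuser to hard-link directories, with exactly the cycle-shaped consequences this section describes, which is why the restriction exists today.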