Today, people using UNIX and UNIX-like systems typically think of them as free and open source systems, since they run BSD, Linux, SerenityOS, or OpenIndiana. While macOS/Darwin is a UNIX system and Darwin is mostly open source, the majority of people using it either do not know that or at least do not think of it that way. I find this association of UNIX with free and open source software fascinating because it is rather at odds with the majority of the history of such systems. Let’s dig into that history a bit.
EARLY DEVELOPMENTS AT MIT
In the 1930s, two men were responsible for starting research into computation at the Massachusetts Institute of Technology: Vannevar Bush and Claude Shannon. Between them, they approached the topic from both ends of computation, being interested both in the organization of information and in numerical computation. Vannevar Bush, together with Harold Locke Hazen, built the first practical general-purpose differential analyzer, a mechanical analogue computer comprising six integrators that was capable of solving differential equations. Claude Shannon was employed in Bush’s lab at MIT running this machine.
In his famous essay, “As We May Think,” Bush envisioned an electro-mechanical machine for the storing, sorting, linking, and searching of information. The essay was monumental in popularizing the idea of machine-assisted storage, linking, and retrieval of knowledge.
Bush’s name is a little better known than Claude Shannon’s (probably because Vannevar Bush dreamed up the memex, and also likely because he founded Raytheon). This is tragic, because Claude Shannon is responsible for perhaps the largest breakthrough of the last century. In 1937, his master’s thesis proved that Boolean algebra could be used to design arrangements of electro-mechanical relays like those in telephone exchanges, and further that such arrangements of relays could be used to solve any Boolean algebraic problem. As if this wasn’t enough, the guy then designed a 4-bit full adder. This thesis became the basis for digital circuit design.
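To make that concrete, here is a small worked sketch of the idea in the thesis (my illustration in C, not Shannon’s relay design): a full adder is nothing but a handful of Boolean expressions, and chaining four of them adds two 4-bit numbers the way a cascade of relay stages would.

/* A 1-bit full adder expressed purely as Boolean algebra:
 * sum = a XOR b XOR cin, carry-out = ab OR cin(a XOR b). */
#include <stdio.h>

static void full_adder(int a, int b, int cin, int *sum, int *cout)
{
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | (cin & (a ^ b));
}

int main(void)
{
    /* Chain four 1-bit adders to add two 4-bit numbers, one bit at a
     * time, just as a chain of relay stages would. */
    int x = 11, y = 6, carry = 0, result = 0;
    for (int i = 0; i < 4; i++) {
        int s;
        full_adder((x >> i) & 1, (y >> i) & 1, carry, &s, &carry);
        result |= s << i;
    }
    result |= carry << 4;                    /* final carry becomes bit 4 */
    printf("%d + %d = %d\n", x, y, result);  /* prints: 11 + 6 = 17 */
    return 0;
}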
Bush left MIT for the Carnegie Institution of Washington in 1938, and later worked within the government during WWII among other endeavors. Shannon left MIT after getting his PhD in 1940, and he also ended up doing WWII research work within the halls of government and elsewhere.
With people like Bush and Shannon having established the relationship between MIT and the US government/military, the US Navy approached MIT in 1944 to build a flight simulator for bomber crews (the US Air Force would later take over the funding). This resulted in Project Whirlwind and thereby in MIT’s first digital computer. In the course of this endeavor, the focus shifted entirely from the flight simulator to the computer itself. Slightly off topic, but this machine resulted in the invention of magnetic core memory by Jay W. Forrester, which was vital for the development of later computers. Whirlwind was announced publicly as operational in 1951, and the machine was shut down in 1959. Projects carried out using Whirlwind created a community of computer researchers and computer enthusiasts within MIT. This was unusual at the time, as most people viewed computers merely as a tool; any notion of computer science as its own field of study would have been considered a bit absurd before this.
IBM, naturally, contributed funds to build the MIT Computation Center in 1955, and this was completed by 1957. The center had an IBM 704 owned and maintained by IBM, but use of this computer was not billed, which was rather unusual for Big Blue. Machine time was split three ways: one share for MIT, one for other colleges and universities in New England, and one for IBM, with any unused time reverting to MIT. In the years that followed, more computers would be added to the center, but the problem of allocating scarce computer time remained. This led to the creation of the Compatible Time-Sharing System (CTSS), which allowed many users to make use of a single machine at the same time.
Outside of MIT, the world was shaken. The Soviet Union launched Sputnik in 1957. That this was accomplished by the Soviets and not by the Americans struck many as a sign that the United States was falling behind both in science and in engineering. As a direct result of this, Dwight Eisenhower appointed then President of MIT James Killian as Special Assistant to the President for Science and Technology. Again, the relationship established by Vannevar and Claude came into play. To advance the USA’s efforts in science and engineering, Eisenhower also created the Advanced Research Projects Agency. The first ARPA director was a VP of General Electric. The director of ARPA in 1962 was one Jack Ruina, who created a computing-focused arm of ARPA called IPTO (Information Processing Techniques Office) and who later became a longtime professor at MIT. J. C. R. Licklider was the first director of the IPTO, and he had a mission to create an interactive computer time-sharing system.
In 1961, MIT acquired its first machines from DEC, starting with a PDP-1. On these DEC machines, early hacker culture developed: a culture of playful creativity and code sharing, with cheap time sharing as the medium in which that culture lived. Just a few years later, the labs would get a PDP-6 and then PDP-10s, on which MIT would build ITS (the Incompatible Timesharing System). The first LISP compiler written in LISP was written in ‘62 at MIT as well.
Again the world was shaken. This time, it was 1962 and the Cuban Missile Crisis. The Pentagon was having trouble with their computer systems. Specifically, the systems were crashing under extreme demand (according to Fernando Corbató). A meeting was held in Hot Springs, Virginia to discuss solutions. Present at this meeting were members of the military, MIT, and IPTO. This was Licklider’s chance to pitch his system, and his pitch was successful.
In 1963, Project MAC (Project on Mathematics and Computation) was started with 2 million dollars in funding from ARPA. Fernando Corbató joined Project MAC from the MIT Computation Center and brought CTSS with him, arranging for the project to use an IBM 7094 to run it. With CTSS as a foundation, Project MAC began working on MULTICS in 1965.
The broad goal of Project MAC is the experimental investigation of new ways in which on-line use of computers can aid people in their individual intellectual work, whether research, engineering design, management, or education. One envisions an intimate collaboration between man and computer system in the form of a real-time dialogue where both parties contribute their best capabilities. Thus, an essential part of the research effort is the evolutionary development of a large, multiple-access computer system that is easily and independently accessible to a large number of people, and truly responsive to their individual needs.
-- Project MAC Progress Report I
MULTICS
As of 1964, there were multiple players in this project: MIT, GE, and Bell Labs, with ARPA funding. Bell was the first to drop out, in ‘69, but its researchers’ exposure to MULTICS would soon lead to the PDP-7 and PDP-11 work at Bell Labs that became UNIX. With Bell out, MIT took over the leadership of the project. MULTICS (Multiplexed Information and Computing Service) became available internally at MIT to Information Processing Center customers around that same time. The operating system was designed to deliver computing as a utility, similar in nature to electricity, telephone, or water service. So, you’d have many terminals connected to the computer, and in the minds of General Electric, every minute of every terminal’s access to the computer would be billable… not unlike access to Amazon’s or Microsoft’s clouds.
In 1970, GE exited the computer business entirely. They sold that arm of their company to Honeywell, and that sale included Multics.
MULTICS was a mixed success. It was designed to support hundreds of users on a machine only slightly more powerful than an Intel 386-based PC, although it had much more I/O capacity. This is not quite as crazy as it sounds, since in those days people knew how to write small, efficient programs, a skill that has subsequently been completely lost. There were many reasons that MULTICS did not take over the world, not the least of which is that it was written in the PL/I programming language, and the PL/I compiler was years late and barely worked at all when it finally arrived. In addition, MULTICS was enormously ambitious for its time, much like Charles Babbage's analytical engine in the nineteenth century.
— Andrew Tanenbaum
In the end, only about 85 sites had MULTICS installations, and the system was a commercial failure. Honeywell sold its computing arm to Groupe Bull in ‘87. The need for MULTICS waned through the ‘70s and ‘80s as UNIX took over the operating system market for large, powerful computers.
UNICS/UNIX
Given that MULTICS was a Bell Labs endeavor at one point, many AT&T employees were exposed to the system. One of them was Ken Thompson. MULTICS had been derided as overly complex, overly large, and just generally bad and bloated. Ken wanted to write his own leaner system (partly because he had also written a game, Space Travel, that needed one), and while he still had access to MULTICS he designed the file system and paging system for his new OS. Using a PDP-7, he continued his work. By August of ‘69, Ken Thompson, Dennis Ritchie, and Doug McIlroy had a self-hosting operating system that featured processes, device files, a hierarchical file system, an assembler, an editor, and an interactive command-line shell. By 1970, Brian Kernighan had given the system the name UNICS (Uniplexed Information and Computing Service) as a pun on MULTICS; the spelling was later changed to UNIX. Eventually, others within Ma Bell caught on to what was developing, and departments within Bell started requesting UNIX for the systems they used. This caused a bit of heartache for the group of developers, since everything was written in assembly language. UNIX had compilers for B (BCPL but smaller, made by Ken Thompson) and for McIlroy’s TMG (the compiler-writing language used to bootstrap B), but the system itself was written in neither, just assembly.
Due to the memory constraints, limited CPU power, and other limitations of early computers, nearly all system software was written in assembly language. By the ‘70s, computers were more powerful, but those earlier limitations had created a bit of a cultural predilection for writing systems software in assembly. With the PDP-11 and UNIX, Dennis Ritchie began the process of improving B, and as B evolved, Ken Thompson began writing the kernel in it. The requirements of the kernel drove the requirements of the language. The new language was called C, and UNIX version 2 included a C compiler.
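As a rough illustration of why a typed language mattered (my sketch, not actual UNIX source): B was typeless and word-oriented, while the PDP-11 was byte-addressed and the kernel needed structured records. C’s char type, pointers, and structs made code like the following natural to write; the struct and its field names here are purely illustrative.

#include <stdio.h>
#include <stddef.h>

/* hypothetical kernel-style record; field names are illustrative only */
struct inode_like {
    unsigned short mode;   /* file type and permission bits */
    unsigned short nlink;  /* number of directory links     */
    long           size;   /* file size in bytes            */
};

/* byte-by-byte copy: trivial with C's char pointers,
 * awkward in a typeless, word-oriented language like B */
static void copy_bytes(char *dst, const char *src, size_t n)
{
    while (n-- > 0)
        *dst++ = *src++;
}

int main(void)
{
    struct inode_like a = { 0100644, 1, 4096 }, b;
    copy_bytes((char *)&b, (const char *)&a, sizeof a);
    printf("mode=%o nlink=%u size=%ld\n",
           (unsigned)b.mode, (unsigned)b.nlink, b.size);
    return 0;
}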
In 1973, UNIX version 4 was released; v4 was written almost entirely in C, with only the truly machine-dependent parts (roughly a quarter of the code) remaining in assembly. Version 5 was licensed to educational institutions later in ‘73. In 1975, UNIX made its commercial debut with UNIX version 6. As the system’s source code and documentation were included with licenses, the system was widely studied, and this made UNIX a common teaching system. In 1977, AT&T began purchasing various computers for the explicit purpose of porting UNIX to them, and the fact that almost the entire system was written in C made this porting effort far easier than it would otherwise have been. UNIX was also starting to be sold by resellers. Despite being licensed, UNIX versions 1 through 10 are referred to as Research UNIX. Version 7 was the last widely released/licensed version of Research UNIX, and it is upon this version (and its VAX port, UNIX/32V) that the Berkeley Software Distribution (BSD) releases were largely based. Internally, UNIX had other flavors, among which was PWB (the Programmer’s Workbench). Version 7 and PWB were eventually combined to create UNIX System III in 1981.
THE UNIX WARS
By 1983, versions of UNIX with varying features and compatibility were being sold by Onyx Systems, Microsoft, Sun Microsystems, SCO, Interactive Systems Corporation, and others. The intense rivalry between all these different versions created a demand for standardization from within the UNIX market. To answer this demand, AT&T created UNIX System V, a blend of PWB, System III, and improvements ported over from BSD, and tried to push System V as a standard. In 1984, this pressure created the X/Open Consortium, which aimed at creating compatible open systems. In 1985, AT&T released the System V Interface Definition with the same aim. In 1988, IEEE released the POSIX specification. AT&T then collaborated with SCO to merge System V and Xenix into System V/386, and thereafter with Sun Microsystems to merge System V, BSD, SunOS, and Xenix into UNIX System V Release 4. By 1993, things had settled down a bit, with most commercial systems based on System V Release 4 plus some BSD enhancements.
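What these standards bought in practice was source portability: a program written to the POSIX interfaces could be recompiled on a System V derivative, a BSD, or (later) Linux without change. Here is a minimal sketch of such a program; the file path is just an example.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[512];
    ssize_t n;

    /* open(2), read(2), write(2), and close(2) are specified by POSIX,
     * so this compiles and behaves the same across compliant systems */
    int fd = open("/etc/hosts", O_RDONLY);   /* example path only */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    close(fd);
    return 0;
}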
The guys at Berkeley weren’t idle during any of this time. They had made BSD both POSIX compliant and largely free of original AT&T code (efforts starting in ‘88 as far as I can tell). When that code started being sold commercially, the legal status of BSD came into question and AT&T’s UNIX System Laboratories sued. Lynne Jolitz and William Jolitz left around that time to start 386BSD, which is the ancestor of FreeBSD, NetBSD, and (through NetBSD) OpenBSD.
GNU’S NOT UNIX
On September 27th of 1983 at 13:35:59 EST, Richard Stallman posted the following to usenet groups net.unix-wizards and net.usoft with the subject: “new Unix implementation”
Free Unix!
Starting this Thanksgiving I am going to write a complete Unix-compatible software system called GNU (for Gnu's Not Unix), and give it away free to everyone who can use it. Contributions of time, money, programs and equipment are greatly needed.
To begin with, GNU will be a kernel plus all the utilities needed to write and run C programs: editor, shell, C compiler, linker, assembler, and a few other things. After this we will add a text formatter, a YACC, an Empire game, a spreadsheet, and hundreds of other things. We hope to supply, eventually, everything useful that normally comes with a Unix system, and anything else useful, including on-line and hardcopy documentation.
GNU will be able to run Unix programs, but will not be identical to Unix. We will make all improvements that are convenient, based on our experience with other operating systems. In particular, we plan to have longer filenames, file version numbers, a crashproof file system, filename completion perhaps, terminal-independent display support, and eventually a Lisp-based window system through which several Lisp programs and ordinary Unix programs can share a screen. Both C and Lisp will be available as system programming languages. We will have network software based on MIT's chaosnet protocol, far superior to UUCP. We may also have something compatible with UUCP.
Who Am I?
I am Richard Stallman, inventor of the original much-imitated EMACS
editor, now at the Artificial Intelligence Lab at MIT. I have worked
extensively on compilers, editors, debuggers, command interpreters, the
Incompatible Timesharing System and the Lisp Machine operating system.
I pioneered terminal-independent display support in ITS. In addition I
have implemented one crashproof file system and two window systems for
Lisp machines.

Why I Must Write GNU
I consider that the golden rule requires that if I like a program I must share it with other people who like it. I cannot in good conscience sign a nondisclosure agreement or a software license agreement.
So that I can continue to use computers without violating my principles, I have decided to put together a sufficient body of free software so that I will be able to get along without any software that is not free.
How You Can Contribute
I am asking computer manufacturers for donations of machines and money. I'm asking individuals for donations of programs and work.
One computer manufacturer has already offered to provide a machine. But we could use more. One consequence you can expect if you donate machines is that GNU will run on them at an early date. The machine had better be able to operate in a residential area, and not require sophisticated cooling or power.
Individual programmers can contribute by writing a compatible duplicate of some Unix utility and giving it to me. For most projects, such part-time distributed work would be very hard to coordinate; the independently-written parts would not work together. But for the particular task of replacing Unix, this problem is absent. Most interface specifications are fixed by Unix compatibility. If each contribution works with the rest of Unix, it will probably work with the rest of GNU.
If I get donations of money, I may be able to hire a few people full or part time. The salary won't be high, but I'm looking for people for whom knowing they are helping humanity is as important as money. I view this as a way of enabling dedicated people to devote their full energies to working on GNU by sparing them the need to make a living in another way.
For more information, contact me.
Arpanet mail:
R...@MIT-MC.ARPA
Usenet:
...!mit-eddie!RMS@OZ
...!mit-vax!RMS@OZ
US Snail:
Richard Stallman
166 Prospect St
Cambridge, MA 02139
By June of 1987, the project had an assembler, a C compiler, an editor, and many of the basic utilities, and these tools could be used for further development of the system itself. However, GNU was failing to attract enough development to make a complete system practical for daily use. The userland application count grew, and the utilities and applications got better, but there was still no kernel: GNU Hurd was stalled. With the UNIX wars raging and the subsequent announcement of a free version of BSD, GNU languished.
MINIX
As noted, because UNIX source code and documentation were issued with each license, UNIX was often used as the basis for the educational study of operating systems. When AT&T’s licensing later tightened and the source could no longer freely be taught from, Andrew Tanenbaum at the Vrije Universiteit in Amsterdam, Netherlands, created his own teaching system, MINIX, in 1987. MINIX version 1 was system-call compatible with UNIX v7, but it was intended to run on the IBM PC and compatibles (16-bit x86). It accompanied his course’s textbook, Operating Systems: Design and Implementation, and the source and binaries were on floppy disks that came with the book.
LINUX
Linus Torvalds was born in Helsinki, Finland on the 28th of December in 1969. Linus apparently loved computers from an early age. He had a Commodore VIC-20 when he was 11, and it’s on that VIC-20 that he learned to program, using Commodore BASIC. He also had a Sinclair QL. He attended the University of Helsinki and earned his master’s degree in computer science there (‘88 - ‘96), with his university life interrupted by Finland’s mandatory military service (Linus earned the rank of second lieutenant). Along the way, he bought a copy of Andrew Tanenbaum’s book. In 1991, Linus bought a 386 machine, and with Tanenbaum’s book and OS in hand he began using MINIX and developing his own operating system kernel: Linux.
He posted this to usenet in comp.os.minix on August 26th of 1991 at 02:14:
Hello everybody out there using minix -
I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).
I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)
Linus (torva...@kruuna.helsinki.fi)
PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT protable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.
This was followed up on October 4th of 1991 with this post:
Free minix-like kernel sources for 386-AT
Do you pine for the nice days of minix-1.1, when men were men and wrote their own device drivers? Are you without a nice project and just dying to cut your teeth on a OS you can try to modify for your needs? Are you finding it frustrating when everything works on minix? No more all-nighters to get a nifty program working? Then this post might be just for you :-)
As I mentioned a month(?) ago, I'm working on a free version of a minix-lookalike for AT-386 computers. It has finally reached the stage where it's even usable (though may not be depending on what you want), and I am willing to put out the sources for wider distribution. It is just version 0.02 (+1 (very small) patch already), but I've successfully run bash/gcc/gnu-make/gnu-sed/compress etc under it.
Sources for this pet project of mine can be found at nic.funet.fi (128.214.6.100) in the directory /pub/OS/Linux. The directory also contains some README-file and a couple of binaries to work under linux (bash, update and gcc, what more can you ask for :-). Full kernel source is provided, as no minix code has been used. Library sources are only partially free, so that cannot be distributed currently. The system is able to compile "as-is" and has been known to work. Heh. Sources to the binaries (bash and gcc) can be found at the same place in /pub/gnu.
ALERT! WARNING! NOTE! These sources still need minix-386 to be compiled (and gcc-1.40, possibly 1.37.1, haven't tested), and you need minix to set it up if you want to run it, so it is not yet a standalone system for those of you without minix. I'm working on it. You also need to be something of a hacker to set it up (?), so for those hoping for an alternative to minix-386, please ignore me. It is currently meant for hackers interested in operating systems and 386's with access to minix.
The system needs an AT-compatible harddisk (IDE is fine) and EGA/VGA. If you are still interested, please ftp the README/RELNOTES, and/or mail me for additional info.
I can (well, almost) hear you asking yourselves "why?". Hurd will be out in a year (or two, or next month, who knows), and I've already got minix. This is a program for hackers by a hacker. I've enjouyed doing it, and somebody might enjoy looking at it and even modifying it for their own needs. It is still small enough to understand, use and modify, and I'm looking forward to any comments you might have.
I'm also interested in hearing from anybody who has written any of the utilities/library functions for minix. If your efforts are freely distributable (under copyright or even public domain), I'd like to hear from you, so I can add them to the system. I'm using Earl Chews estdio right now (thanks for a nice and working system Earl), and similar works will be very wellcome. Your (C)'s will of course be left intact. Drop me a line if you are willing to let me use your code.
Linus
PS. to PHIL NELSON! I'm unable to get through to you, and keep getting "forward error - strawberry unknown domain" or something.
Version 0.11 was released in December of 1991 and was already self-hosting.
In January of 1992, there was a bit of a kerfuffle with Professor Tanenbaum, who posted a message to comp.os.minix titled “LINUX is obsolete”:
I was in the U.S. for a couple of weeks, so I haven't commented much on LINUX (not that I would have said much had I been around), but for what it is worth, I have a couple of comments now.
As most of you know, for me MINIX is a hobby, something that I do in the evening when I get bored writing books and there are no major wars, revolutions, or senate hearings being televised live on CNN. My real job is a professor and researcher in the area of operating systems.
As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so. Two aspects stand out:
1. MICROKERNEL VS MONOLITHIC SYSTEM
Most older operating systems are monolithic, that is, the whole operating system is a single a.out file that runs in 'kernel mode.' This binary contains the process management, memory management, file system and the rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, MULTICS, and many more.

The alternative is a microkernel-based system, in which most of the OS runs as separate processes, mostly outside the kernel. They communicate by message passing. The kernel's job is to handle the message passing, interrupt handling, low-level process management, and possibly the I/O. Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the not-yet-released Windows/NT.
While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won. The only real argument for monolithic systems was performance, and there is now enough evidence showing that microkernel systems can be just as fast as monolithic systems (e.g., Rick Rashid has published papers comparing Mach 3.0 to monolithic systems) that it is now all over but the shoutin`.
MINIX is a microkernel-based system. The file system and memory management are separate processes, running outside the kernel. The I/O drivers are also separate processes (in the kernel, but only because the brain-dead nature of the Intel CPUs makes that difficult to do otherwise). LINUX is a monolithic style system. This is a giant step back into the 1970s. That is like taking an existing, working C program and rewriting it in BASIC. To me, writing a monolithic system in 1991 is a truly poor idea.
2. PORTABILITY
Once upon a time there was the 4004 CPU. When it grew up it became an 8008. Then it underwent plastic surgery and became the 8080. It begat the 8086, which begat the 8088, which begat the 80286, which begat the 80386, which begat the 80486, and so on unto the N-th generation. In the meantime, RISC chips happened, and some of them are running at over 100 MIPS. Speeds of 200 MIPS and more are likely in the coming years. These things are not going to suddenly vanish. What is going to happen is that they will gradually take over from the 80x86 line. They will run old MS-DOS programs by interpreting the 80386 in software. (I even wrote my own IBM PC simulator in C, which you can get by FTP from ftp.cs.vu.nl = 192.31.231.42 in dir minix/simulator.) I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long.

MINIX was designed to be reasonably portable, and has been ported from the Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016. LINUX is tied fairly closely to the 80x86. Not the way to go.
Don`t get me wrong, I am not unhappy with LINUX. It will get all the people who want to turn MINIX in BSD UNIX off my back. But in all honesty, I would suggest that people who want a **MODERN** "free" OS look around for a microkernel-based, portable OS, like maybe GNU or something like that.
Andy Tanenbaum (a...@cs.vu.nl)
P.S. Just as a random aside, Amoeba has a UNIX emulator (running in user space), but it is far from complete. If there are any people who would like to work on that, please let me know. To run Amoeba you need a few 386s, one of which needs 16M, and all of which need the WD Ethernet card.
When Linus chimed in, it was in the true Linus fashion we’ve all come to know and love and hate:
Well, with a subject like this, I'm afraid I'll have to reply. Apologies to minix-users who have heard enough about linux anyway. I'd like to be able to just "ignore the bait", but ... Time for some serious flamefesting!
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>I was in the U.S. for a couple of weeks, so I haven't commented much on
>LINUX (not that I would have said much had I been around), but for what
>it is worth, I have a couple of comments now.
>
>As most of you know, for me MINIX is a hobby, something that I do in the
>evening when I get bored writing books and there are no major wars,
>revolutions, or senate hearings being televised live on CNN. My real
>job is a professor and researcher in the area of operating systems.

You use this as an excuse for the limitations of minix? Sorry, but you loose: I've got more excuses than you have, and linux still beats the pants of minix in almost all areas. Not to mention the fact that most of the good code for PC minix seems to have been written by Bruce Evans.
Re 1: you doing minix as a hobby - look at who makes money off minix, and who gives linux out for free. Then talk about hobbies. Make minix freely available, and one of my biggest gripes with it will disappear. Linux has very much been a hobby (but a serious one: the best type) for me: I get no money for it, and it's not even part of any of my studies in the university. I've done it all on my own time, and on my own machine.
Re 2: your job is being a professor and researcher: That's one hell of a good excuse for some of the brain-damages of minix. I can only hope (and assume) that Amoeba doesn't suck like minix does.
>1. MICROKERNEL VS MONOLITHIC SYSTEM
True, linux is monolithic, and I agree that microkernels are nicer. With a less argumentative subject, I'd probably have agreed with most of what you said. From a theoretical (and aesthetical) standpoint linux looses. If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't. Linux wins heavily on points of being available now.
> MINIX is a microkernel-based system. [deleted, but not so that you
> miss the point ] LINUX is a monolithic style system.

If this was the only criterion for the "goodness" of a kernel, you'd be right. What you don't mention is that minix doesn't do the micro-kernel thing very well, and has problems with real multitasking (in the kernel). If I had made an OS that had problems with a multithreading filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my damndest to make others forget about the fiasco.
[ yes, I know there are multithreading hacks for minix, but they are
hacks, and bruce evans tells me there are lots of race conditions ]

>2. PORTABILITY
"Portability is for people who cannot write new programs" -me, right now (with tongue in cheek)
The fact is that linux is more portable than minix. What? I hear you say. It's true - but not in the sense that ast means: I made linux as conformant to standards as I knew how (without having any POSIX standard in front of me). Porting things to linux is generally /much/ easier than porting them to minix.
I agree that portability is a good thing: but only where it actually has some meaning. There is no idea in trying to make an operating system overly portable: adhering to a portable API is good enough. The very /idea/ of an operating system is to use the hardware features, and hide them behind a layer of high-level calls. That is exactly what linux does: it just uses a bigger subset of the 386 features than other kernels seem to do. Of course this makes the kernel proper unportable, but it also makes for a /much/ simpler design. An acceptable trade-off, and one that made linux possible in the first place.
I also agree that linux takes the non-portability to an extreme: I got my 386 last January, and linux was partly a project to teach me about it. Many things should have been done more portably if it would have been a real project. I'm not making overly many excuses about it though: it was a design decision, and last april when I started the thing, I didn't think anybody would actually want to use it. I'm happy to report I was wrong, and as my source is freely available, anybody is free to try to port it, even though it won't be easy.
Linus
PS. I apologise for sometimes sounding too harsh: minix is nice enough if you have nothing else. Amoeba might be nice if you have 5-10 spare 386's lying around, but I certainly don't. I don't usually get into flames, but I'm touchy when it comes to linux :)
The entire thread is available for your reading pleasure, with all of its rejoinders both vituperative and otherwise.
Early in 1992, Linux hit version 0.12 and adopted the GPL. Version 0.95 was released on the 8th of March in 1992, and this version enabled the use of the X Window System. March of 1994 brought the world version 1.0.0; this was when Linux was considered ready for production environments. Version 2 was a big deal as well: SMP came to Linux with version 2, as did the make config series of kernel configuration commands.
In August of 1992, the Softlanding Linux System (SLS) was released. This was the first Linux distribution that resembled what we currently think of as a Linux operating system. It was buggy and poorly supported, though, and was superseded in short order: the first release of Slackware Linux came in July of 1993. Ian Murdock was also frustrated with SLS, and he released the first version of Debian GNU/Linux in September of 1993. The first public release of Red Hat Linux followed on October 31st of 1994.
Linux exploded in popularity throughout the ‘90s, and it began to take market share from the BSDs and other UNICES in the server space. It also started taking share from UNIX and other systems in supercomputing, and in the educational space (servers, desktops, mainframes), which meant it was taking minds with it as well.
In June of 1998, the Avalon cluster was completed. This was a supercomputer running Linux, and it was the first supercomputer intended to run Linux from the start. Avalon was built at Los Alamos National Laboratory for about $152k and comprised 68 DEC Alpha (533 MHz EV56) processors of pure awesomeness. Within two years, the number of supercomputers running Linux would reach 50 of the top 500. By 2006, Linux would claim over 80% of the supercomputer market.
Also in 1998, but in October, the first Halloween document was leaked to Eric S. Raymond, who immediately published an annotated version on his website. This document was an internal Microsoft memo intended to help set Microsoft’s strategy for competing with open source software (and especially Linux). The document noted that Linux and open source developers had produced high-quality software, and that Linux and open source software had gained a notable foothold in the market. It stated that Microsoft would need to “de-commoditize protocols” in order to compete effectively against Linux and other open source software. The Halloween Documents served as exceptionally good marketing for Linux. Microsoft was the x86 software market at the time, and if they feared Linux, then Linux must have been good, right? There had to be something to fear.
Linux went on to completely dominate the server market over the course of the 2000s and 2010s, and in the supercomputer market it reached 100% dominance. Because it was free of cost, highly configurable, and modular, Linux also spread to embedded computers in the early days of the 21st century. With the introduction of Android and ChromeOS, Linux became one of the most common end-user operating systems on Earth.
FINAL THOUGHTS
Like gossamer dotted with dew, we can see threads crossing from MIT, to corporate-governmental-university work at ARPA, to a few software engineers at AT&T, then to a disgruntled engineer back at MIT who really wanted a libre system, to a professor in the Netherlands who wanted something hands-on for his students, and finally to a young man in Finland who just wanted to build a system as a hobby. A web of ideas hopping from place to place, from person to person. The impression, if you look at this causal chain too long, is that Linux is somehow the end of the earliest work; somehow the outcome of Vannevar’s dream of the memex. In the sense that Linux runs the internet, which is Vannevar’s vision of a repository of linked data made manifest, I suppose that is exactly what happened.