A page about computers and computer science

A short history of computers and their science

The construction of the first mechanical computing engine is generally attributed to the French philosopher (and mathematician) Blaise Pascal (1623–1662). This device was extremely primitive, though. Something much more like a computer is the computing engine designed (around 1837) by Charles Babbage (1791–1871). But the first real computer was the ENIAC (1946 — actually the Mark 1 came earlier, but it wasn't really a computer). The transistor was invented in 1948 and was first used in computers in 1955; integrated circuits appeared in 1959 and were first used in computers in 1964. As for the microprocessor, it came to be around 1971. RISC processors started coming out in the late 1980s.

The Mark 1 was built in 1944, and the ENIAC in 1946. The first mass-produced commercial computer was the UNIVAC in 1951. The IBM 701 was mass-produced in 1953. The first minicomputer, the DEC PDP-1, appeared in 1961. The first commercial microprocessor was the Intel 4004, a 4-bit chip initially conceived as a calculator chip, released in 1971, and whose successors, by way of the 8080 and the 8086, are among the most popular microchips now in use — 64-bit superscalar million-transistor chips directly descended from, and almost upwards compatible with, a 4-bit programmable calculator chip — makes you think, doesn't it? The most successful minicomputer of all time, the VAX, was released in 1978. [If somebody could tell me the dates for the PDP-6, PDP-7, PDP-10 and PDP-11, I would be very grateful.] The first PC, the Mark 8, was introduced in 1974, and the IBM PC in 1981.

Some would attribute the invention of software to the Arab mathematician Al-Khârezmi (whose name is the origin of the word “algorithm”). Or perhaps to Augusta Ada Byron, Lady Lovelace (1815–1852), who “programmed” the Babbage engine. More reasonably, however, the invention of the program is due to Alan Mathison Turing (1912–1954), who imagined a machine capable of operating on an infinite string of symbols according to certain well-defined rules, and who introduced the notion of calculability. The program of the Turing machine, like that of the ENIAC, however, is coded in the hardware (programming the ENIAC involved plugging a lot of wires into the appropriate places). It is John von Neumann who first had (in 1946) the truly revolutionary idea of placing the program in memory together with the data on which it acts; he can therefore be recognized as the inventor of software (even though that idea is implicit in the notion of a universal Turing machine). Legend has it that von Neumann got tired of numerically solving partial differential equations in his head, and invented the computer to do it for him.
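The stored-program idea is easy to make concrete in modern terms. Here is a toy sketch in Scheme (purely my own illustration: the rule format and the names are invented and have nothing historical about them), in which the machine's “program” is an ordinary list of rules, i.e. data of exactly the same kind as the tape it operates on:

;; A toy Turing-style machine.  The "program" is just a list of rules:
;; ((state symbol) . (new-state new-symbol move)).
;; This particular rule table flips 0s and 1s while moving right, and halts
;; (no rule applies) when it runs off the written part of the tape.
(define rules
  '(((flip 0) . (flip 1 right))
    ((flip 1) . (flip 0 right))))

;; The tape is kept as two lists: the cells to the left of the head (in
;; reverse order) and the current cell onward; blank cells are written `_'.
;; (This toy table never moves left, so the left edge is not a problem.)
(define (run state left right)
  (let* ((sym  (if (null? right) '_ (car right)))
         (rule (assoc (list state sym) rules)))
    (if (not rule)
        (append (reverse left) right)   ; no rule applies: halt, return the tape
        (let* ((action    (cdr rule))
               (new-state (car action))
               (new-sym   (cadr action))
               (move      (caddr action))
               (right*    (cons new-sym (if (null? right) '() (cdr right)))))
          (case move
            ((right) (run new-state (cons (car right*) left) (cdr right*)))
            ((left)  (run new-state (cdr left) (cons (car left) right*)))
            (else    (run new-state left right*)))))))

;; Example: (run 'flip '() '(1 0 1 1)) returns (0 1 0 0)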

The oldest programming language ever (apart from binary :-) is the language Fortran, dating from 1954. It is a pretty poor language in many respects (its syntax and semantics are… well, they just aren't). Algol, the ancestor of the imperative languages such as Pascal or C, was invented in 1958. At around the same time, John McCarthy introduced LISP (based on a notation commonly in use at the time for writing functions), a much more elegant language, for which dedicated hardware was even built. What is now considered the most vanilla programming language, C, was invented (initially for the DEC PDP-11) by Dennis Ritchie in 1972; it is based on the language B by Ken Thompson (1970, for the PDP-7), itself inspired by Martin Richards' language BCPL (Basic Combined Programming Language, 1967), which is in turn derived from CPL (1965). The language C was standardized by the American National Standards Institute (ANSI) in 1989, then later by ISO.

Another important part of the history of computer science is that of operating systems. The most important operating system is Unix. The story began in the mid-1960s, when the Massachusetts Institute of Technology, AT&T Bell Labs and General Electric started the Multics (Multiplexed Information and Computing Service) project. The goal of Multics was to provide an operating system (for the GE 645 mainframe computer) offering continuous, 24-hour operation and a reliable security protection mechanism. Many concepts now in common use in OS design (devices as files, dynamic linking, protection rings, call gates) were pioneered by the group. Though the Multics project did in the end reach its goal, and Honeywell (which had taken over General Electric's computer business) released the OS, it is especially important for the consequences it had on computing. One such consequence is the birth of Unix. In 1969, the AT&T labs pulled out of the Multics project. Ken Thompson, who had been part of the project, developed, on an unused PDP-7 at Bell Labs, his own operating system, for which Peter Neumann suggested the name “Unix”. He was soon joined by Dennis Ritchie, and around 1973 Unix was rewritten in Ritchie's language C. Soon afterward, Unix began to spread outside of AT&T, and in 1977 commercial support for Unix Version 6 appeared.

In 1978, two graduate students at the University of California, Berkeley, Bill Joy and Chuck Haley, started distributing their own version of Unix, BSD (Berkeley Software Distribution). The BSD releases of Unix introduced several enhancements over AT&T's version, notably the long filenames, job control and LAN support of BSD4.2 (in 1983). AT&T management retaliated by creating a commercial version, System V, in 1983, and declaring BSD non-standard and incompatible. This was the beginning of the Great Unix Schism, the last “common” version of Unix being Version 7 (1979). Apart from Sun Microsystems, most commercial Unix vendors chose to follow AT&T's System V. In 1988, however, AT&T and Sun signed an agreement to merge the two branches of Unix into what became System V Release 4, incorporating the features of both System V Release 3 and BSD4.3. Other companies (notably Hewlett-Packard and IBM) felt threatened, founded the OSF (Open Software Foundation), and went on to develop their own versions of Unix (OSF/1, AIX…). The Berkeley versions of Unix, for their part, continued with BSD4.3 Tahoe in 1988, BSD4.3 Reno in 1990, and the final BSD4.4 in 1993 (whose “Lite” variant is free of AT&T code; it is from this release that FreeBSD and OpenBSD are derived). In 1993, AT&T sold the Unix trademark to Novell, which in turn passed it on to the X/Open Consortium (which later merged with the OSF to form The Open Group), so that, as far as I can make out, it must now be owned by The Open Group. These organizations have tried to standardize Unix; one important attempt at this is the set of POSIX specifications (P1003.x), which are then made into standards by the IEEE and ISO.

In 1983, Richard Stallman (“RMS”) of MIT founded the GNU project. Its goal is to provide a free operating system (that is, an operating system, with all its tools and utilities, available with source code, which users have the freedom to share, copy, distribute and modify, even for profit). As its name indicates, Gnu's Not Unix; however, it tries to be as compatible with Unix as possible, and POSIX-compliant. By the early 1990s, all the major components of the system were written, except the kernel. Then Linux came along unexpectedly and filled the gap. Linux is a free kernel which was started in 1991 by a Finnish student named Linus Torvalds. The GNU operating system running with the Linux kernel is called a GNU/Linux system, and sometimes (rather unfairly it must be said) simply a Linux system. Linux, however, is not the kernel which RMS had in mind when he initiated the GNU project: the real GNU operating system uses the so-called Hurd as a kernel (not really a kernel, in fact, but a Hird of Unix-Replacing Daemons (a “Hird” being a Hurd of Interfaces Representing Depth), running on top of the Mach microkernel developed at Carnegie Mellon University), and the Hurd is still under development.

Hackers and Hackers' slang

First of all, a point must be settled. It has become quite common to use the term “hacker” to designate someone who tries to break through the security defenses of computers — one who is skilled at using computers and misuses that skill. This is not a correct use of the term. If you need a term for such a person, call them a “cracker”, or simply a “vandal”.

So, what is a hacker? My favorite definition is: a hacker is someone for whom the computer is a goal in itself, rather than a means. Thus, someone who uses his computer exclusively for writing text, managing bank accounts, playing games, composing music or creating pictures is not a hacker. Even a programmer who just programs because he's paid to do it — or because he has a specific task to do — is not a hacker. On the other hand, someone who plays with his operating system just to explore it, and because it's fun, is definitely something of a hacker. However, the only operating system that can really qualify you (now that the Elder Days are gone, that is) is Unix. Even very skillful programming with DirectX doesn't make you a hacker. Nor does making a fortune selling software — but writing a book that becomes a Bible is definitely a way.

The very greatest hackers, those who are said to have “ascended to demigod-hood” (a term coined by the game Nethack), or to have “reached enlightenment” (an allusion to Zen philosophy), are Donald Knuth (the inventor of TeX and the author of the illustrious Art of Computer Programming), Ken Thompson (the inventor of Unix), Dennis Ritchie (the inventor of C — all three are ACM Turing Award recipients), Richard Stallman (the founder of the GNU project and author of Emacs), Bill Joy (the originator of BSD), Larry Wall (the inventor of Perl), Linus Torvalds (the author of Linux), Brian Kernighan, Gerald Sussman, Guy Steele, Richard Stevens, and possibly a few others (to whom I apologize for forgetting them). The main centers of hackerdom are MIT, UCB and CMU (all right: the Massachusetts Institute of Technology, the University of California at Berkeley and Carnegie Mellon University), and hackers are scarcer in Europe. It was once thought that hackers appeared by spontaneous generation around DEC PDP machines — but that is now known not to be true.

Hackers use a particular dialect of English when they are together, or sometimes when on the 'net. It consists largely of computer metaphors for Real World objects, and Real World metaphors for computer-specific terms. Thus, when a hacker asks “Hello, World! Hungry-P anyone? I suggest we SIGSTOP and route ourselves to the cafeteria to ftp something to /dev/mouth.” and another answers “Nil, time++. I've locked a mutex and I don't want my stack to crash due to bit rot. I'll be ready in half a millifortnight.”, what was meant was “Hi! Is anyone hungry? I suggest we pause for a minute and go to the cafeteria to get ourselves something to eat.” and “No, later. I'm doing something critical right now and I don't want to lose my train of thought if I get interrupted for too long. I'll be ready in ten minutes.”

Fortunately, there is a way through which the layman, or the aspiring hacker, might learn to understand this sophisticated hackerspeak. For the benevolent Gods Over Djinn have deigned to reveal the Hidden Lore to the face of the world. The form of this revelation is the New Hacker's Dictionary, also known as “The Jargon File”. It will teach you everything you need to understand hackers (speaking like a hacker, on the other hand, is not something one learns in a file but by imitation). Here, then, is the Jargon File (version 4.0.0, compressed using gzip), and here is the Jargon File's page where the latest (but not necessarily best) version can be found.

A poem about X11R6.4

I don't know where else to put this one, so I just copy it here:


6.3 was a release of X.
Of it the hackers sadly perplex:
the last whose code was fair and free
between the Hardware and the C.


Its Xlib was long, its toolkit was keen,
its shining pixmaps afar were seen;
the countless methods of a widget's field
would many brilliant callbacks yield.


But long ago it rode away,
and where it dwelleth none can say;
for into license fell its tar
in .4 where the shadows are.

(with apologies to J. R. R. Tolkien)

(And, yes, I'm the author of this.)

(In case you don't get it, it refers to the original distribution terms (aka license) for X11R6.4, under which it was not Free Software. Since then, the distribution terms have changed, but the poem has remained.)

My visions for the future

Some people see the future as computers which one can talk to, and which will understand the human voice. I think this is fallacious; it is part of the general waste of computer resources on the user interface. Let me explain. There is a tendency nowadays to invent user interfaces which supposedly make it simpler to use a computer, but which also limit the possibilities of what one can do with it. The move from a text-based interface to a graphical interface takes us back before the invention of the alphabet, to the time of ideograms. The move to a speech-based interface takes us back even before the invention of writing.


LUKE: […] Is the dark side stronger?
YODA: No… no… no. Quicker, easier, more seductive.

There is nothing wrong with a nice, intuitive user interface, provided it does not limit one's possibilities. Take the example of Gimp: it is a very powerful image manipulation program, and as such it would be difficult to imagine without a graphical interface. But Gimp also includes a Scheme interpreter that makes it possible to combine several Gimp functions into one, write one's own functions, and on the whole configure the program in an extremely versatile manner.
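To give an idea of the kind of thing this makes possible, here is a rough Script-Fu sketch (Script-Fu being Gimp's Scheme dialect). The procedures called are standard entries from Gimp's procedure database, but the wrapper name my-thumbnail and the 128×128 size are of course just for illustration; it chains three menu operations (load, scale, save) into a single function that can then be applied to a whole batch of images:

;; Load an image, scale it down, flatten it and save the result, all in one
;; user-defined procedure.  The first argument `1' is the non-interactive
;; run mode.
(define (my-thumbnail infile outfile)
  (let ((image (car (gimp-file-load 1 infile infile))))
    (gimp-image-scale image 128 128)
    (let ((drawable (car (gimp-image-flatten image))))
      (gimp-file-save 1 image drawable outfile outfile))))

;; Example: (my-thumbnail "/tmp/photo.png" "/tmp/photo-thumb.png")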

As a matter of fact, any interactive program whatsoever should come with an internal programming language. Indeed, if a program allows the user to perform some function (by choosing an entry from a menu, say), the user might some day wish to perform that function a thousand times in a row, or to perform it on exactly those files which match a certain criterion, or some such thing. Only with a true programming language can one be assured that any possible desire of the user can be coped with. (Even Micro$oft has understood that to some extent, since M$Word allows the possibility of defining “macros” in some kind of programming language.)
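For instance (a minimal sketch in Guile Scheme, where the procedure spell-check-file is hypothetical and merely stands for some menu command that the program would expose to its extension language), applying one command to exactly those files whose name matches a criterion takes only a few lines, something no fixed set of menus can offer:

;; `spell-check-file' is assumed to be a command exported by the program to
;; its embedded Scheme; the rest is plain Guile.
(use-modules (srfi srfi-1) (srfi srfi-13))   ; filter, string-suffix?

(define (spell-check-all files)
  (for-each spell-check-file
            (filter (lambda (f) (string-suffix? ".txt" f))
                    files)))

;; Example: (spell-check-all '("notes.txt" "todo.txt" "picture.png"))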

Now there is more. The user might wish to combine the possibilities of two different programs in a very sophisticated way. If the programs are non-interactive, we can use a command line (shell) to this effect — this is the true power of Unix over graphical-interface based systems like Windoze. If the programs are interactive, however, this is more difficult. Even if both programs come with their own programming language, the two languages might have some trouble communicating. So we need to develop the communication possibilities between programs, for example by using one common language for all of them (say Guile-Scheme) and making it possible to call the functions of one or the other program from this common language.
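To make the idea concrete, here is a purely hypothetical Guile sketch: none of the procedures named below exists; they merely stand for commands that an image editor and a mail client might export to one shared interpreter. The point is that the glue then amounts to a few lines of user code:

;; Hypothetical glue code: `editor-load', `editor-scale' and `editor-save'
;; stand for commands exported by an image editor, `mailer-send' for one
;; exported by a mail client, all callable from the same Guile interpreter.
(define (mail-scaled-picture recipient infile)
  (let ((image (editor-load infile)))
    (editor-scale image 800 600)
    (editor-save image "/tmp/scaled.png")
    (mailer-send recipient
                 "Here is the picture"
                 '("/tmp/scaled.png"))))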


David Madore
Last modified: $Date: 2002/06/17 22:41:26 $