The history of computers until 1949 is mostly about mathematical advances and better ideas.
Thank goodness we do not use Roman numerals for arithmetic. Roman numerals were replaced by the ten Arabic numerals and the decimal system starting in the 9th Century. Only remnants of Roman numerals remain, on old clock faces.
Computers today work with a binary system, consisting of the numerals zero and one. This works just like decimal numbers, if you are missing nine fingers. Binary numbers were known to the Indians and Chinese thousands of years ago. An 11th Century commentary on the I Ching, or Book of Changes, made much of the yin and yang nature of the world. In Europe, Gottfried Leibniz wrote On the Art of Combination in 1666, but binary was ignored, even in his own early mechanical calculator. Computers work well with binary, because electrical switches are best designed to be either on or off, a binary choice.
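As a rough illustration (my own sketch, not part of the original), the place-value idea is exactly the same in binary as in decimal; only the base changes:

    # Place value in any base: each digit is worth its face value times a power of the base.
    def from_digits(digits, base):
        """Interpret a list of digits (most significant first) in the given base."""
        value = 0
        for d in digits:
            value = value * base + d
        return value

    print(from_digits([1, 1, 0, 1], base=2))   # 1101 in binary is 8 + 4 + 0 + 1 = 13
    print(from_digits([1, 1, 0, 1], base=10))  # 1101 in decimal is one thousand one hundred and one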
In 1847, George Boole developed Boolean algebra. This proved fundamental to the AND, OR, XOR and NOT logic gates in computers.
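As a quick illustration (an addition of mine, not from Boole), the same four operations are available directly as bitwise operators in Python:

    # The four basic logic gates, applied to single bits.
    a, b = 1, 0
    print(a & b)  # AND: 1 only if both inputs are 1 -> 0
    print(a | b)  # OR:  1 if either input is 1      -> 1
    print(a ^ b)  # XOR: 1 if the inputs differ      -> 1
    print(a ^ 1)  # NOT: invert a single bit         -> 0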
Claude Shannon's 1937 MIT master's thesis, A Symbolic Analysis of Relay and Switching Circuits, was the first time Boolean algebra and binary numbers were considered for use in machinery.
Sequential logic was another advance: the outputs depend not only on the current inputs but also on the history of the inputs, which is what allows a computer to remember. A clock signal keeps this circuitry in step, much as a metronome keeps time for someone learning music.
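A minimal sketch of the idea (my own illustration): a simulated D flip-flop only changes its stored bit when the clock ticks, so its output depends on what happened before, not just on the present input.

    # A simulated D flip-flop: the stored bit changes only on a clock tick,
    # so the circuit remembers its input history.
    class DFlipFlop:
        def __init__(self):
            self.state = 0      # the remembered bit

        def tick(self, d):
            """On a clock tick, capture the input d as the new state."""
            self.state = d

    ff = DFlipFlop()
    ff.tick(1)
    print(ff.state)  # 1 -- the bit stays remembered until the next tick
    ff.tick(0)
    print(ff.state)  # 0 -- updated on the following tick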
There were also thought experiments by Alan Turing and John von Neumann. World War II sped up the development of electromechanical calculators, and led directly to the use of valves (vacuum tubes) as computer switches.
I mentioned 4 bit, 8 bit and 16 bit computers. Bit is computer shorthand for binary digit. A 4 bit computer has components that can handle four binary digits at once. In other words, numbers between 0 and 1111 in binary. In our terms, that is sixteen different numbers, from 0 to 15.
An 8 bit computer handles eight binary digits, 0 to 1111 1111, or 0 to 255 (256 different numbers). Eight bits are called a byte. That is plenty to run a washing machine or a microwave. The early personal desktop computers like the Commodore or TRS-80 were 8 bit. A 16 bit computer handles 0 to 1111 1111 1111 1111, or 0 to 65535 (65536 different numbers, since 0 counts as a number). This is starting to get to be a serious number. The first personal IBM computers were 16 bit.
A 32 bit computer handles numbers up to 4,294,967,295 (4,294,967,296 different numbers). Now we are in the billions, but not large enough to handle the trillion dollar Australian economy. All the computers we use on our desks these days are at least 32 bit.
Recent computer chips handle 64 bits, which gives 18,446,744,073,709,551,616 different numbers. That is enough to handle even the US government deficit. Each time the number of bits a computer can handle doubles, the range of numbers it can easily manipulate is not merely doubled, it is squared. This is one of the secrets of how the power of computers keeps increasing.
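A small sketch of the arithmetic behind those figures: each word size holds 2 to the power of the number of bits different values, from 0 up to one less than that.

    # Range of unsigned numbers at each word size: 2**bits values, from 0 to 2**bits - 1.
    for bits in (4, 8, 16, 32, 64):
        print(f"{bits:2d} bits: {2 ** bits:,} different numbers (0 to {2 ** bits - 1:,})")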
In the beginning was the command line, as Neal Stephenson's fascinating and discursive essay explains. You memorised how to use a computer operating system. You typed the required commands. In full, without any errors. That all changed because of one person.
Douglas Engelbart invented the computer mouse in 1963. He and researchers at the Augmentation Research Center at the Stanford Research Institute devised the On-Line System, a computer collaboration system. On 9 December 1968, Engelbart demonstrated the mouse, on-screen windows, hypertext, full-screen word processing, and shared screen collaboration with audio and video to around 1000 computer professionals. The demonstration was enormously influential, but it took almost two decades before a graphical user interface was generally available.
Xerox used a graphical user interface in the experimental Alto computer, designed at their Palo Alto Research Center (PARC) starting in 1972. In 1978, Xerox donated 50 Altos to Stanford, MIT and Carnegie Mellon universities. They became the standard by which the research community judged all other computers. The $32,000 Xerox Star 8010 from PARC was the first commercial system to use the windows interface, in April 1981.
In January 1983, Apple Computer officially unveiled the $10,000 Lisa computer, with a graphical interface. Too slow, too expensive. In January 1984, Apple released the Macintosh, the first commercially successful computer with a graphical interface.
In March 2001, Apple released the first version of OS X, an operating system based on NeXTStep, itself based on the BSD variant of Unix.
In August 1981, Steve Jobs of Apple visited Microsoft to give Bill Gates a sneak preview of the revolutionary Macintosh computer. Apple asked Microsoft to write programs for the Macintosh before it was launched in January 1984. Microsoft became the first major company to develop programs for the Mac, shipping Microsoft BASIC and Microsoft Multiplan simultaneously with the introduction of the Macintosh. Microsoft also announced Word, Chart, and File for the Macintosh. Microsoft Excel for the Macintosh followed in 1985.
Microsoft first announced Windows in November 1983. In what became a typical Microsoft manner, Windows did not actually become available until November 1985. In 1983, Microsoft introduced Word for MS-DOS, and also produced the first Microsoft mouse. IBM did not use the mouse until 1987. Excel would not appear in Windows until October 1987. Microsoft Word for Windows appeared in 1989.
Success came to Windows with the release of Windows 3 in May 1990. This was the first year a computer software company had sales exceeding a billion dollars. Windows 95 appeared in … guess when? Microsoft released Windows XP in October 2001. They announced Vista in 2005, and released it in early 2007. It was not as well received as they hoped. Microsoft released Windows 7 in October 2009. Windows 7 recently overtook Vista installations; however, at the moment Windows XP is still on more than half of all computers.
Moore's Law means microprocessors keep getting faster, cheaper, and more capable. Being able to make integrated circuits on larger diameter silicon substrates means you can fit more chips into each step of the manufacturing process. The cost increase from handling larger silicon slices is far less than the value of the extra chips you obtain.
However, clock speed increases hit a thermal limit around 2005. Chips got too hot to make them faster. For decades, programmers have known a slow program will speed up on the next processor clock increase. For the first time, this free ride for programs has disappeared.
Decreasing the size of transistors cuts costs, and means more of them fit in any given area. Tens of thousands now fit where there was once one. Modern computer processors contain up to two billion transistors. You can use these extra parts to make smarter chips, able to handle more than one instruction each clock cycle.
You can make computer chips that have multiple processors, all working at once. However, programs that are not written to make use of multiple processors typically do not gain any advantage. So far, we are not really very good at writing programs designed to run on multiple processors.
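To make that concrete, here is a rough sketch (my own example, not from the original) of the extra work the programmer has to do: the task below is split into independent chunks so that separate processor cores can work on them at the same time, using Python's standard concurrent.futures module.

    # Spreading independent CPU-bound work across processor cores.
    from concurrent.futures import ProcessPoolExecutor

    def count_primes(limit):
        """Deliberately slow CPU work: count primes below limit by trial division."""
        count = 0
        for n in range(2, limit):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == "__main__":
        chunks = [30_000] * 4       # four independent pieces of work
        # Each chunk runs in its own process, so separate cores can work at once.
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(count_primes, chunks)))

Written as a plain loop, the same work would run on a single core, which is exactly why programs not written this way see no benefit from the extra processors.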
Having more transistors available means you can bring more functions into the processor, instead of having additional chips do some of them. Since the additional chips are often made by rival companies (such as nVidia or AMD for graphics), eliminating their chips means Intel can charge more for its own. There is a big incentive to keep shrinking chips, and adding more functions.
The ultimate result is the System on a Chip, where almost the entire computer is in a single chip. Most phones are built this way.
Quantum computers were suggested by Richard Feynman, who also suggested nanotechnology in his 1959 talk There's Plenty of Room at the Bottom. See this explanation of quantum computers.
Next: computers.
Never go to a doctor whose office plants have died.