Hardware History Overview
Modern computing can probably be traced back to
the 'Harvard Mk I' and Colossus (both of 1943). Colossus was an
electronic computer built in Britain at the end of 1943 and designed
to crack the German coding system known as the Lorenz cipher. The 'Harvard
Mk I' was a more general purpose electro-mechanical programmable
computer built at Harvard University with backing from IBM. These
computers were among the first of the 'first generation' computers.
First generation computers were normally based
around wired circuits containing vacuum valves and used punched
cards as the main (non-volatile) storage medium. Another general
purpose computer of this era was 'ENIAC' (Electronic Numerical Integrator
and Computer), which was completed in 1946. It was typical of first
generation computers: it weighed 30 tonnes, contained 18,000 electronic
valves and consumed around 25 kW of electrical power. It was, however,
capable of an amazing 100,000 calculations a second.
The next major step in the history of computing
was the invention of the transistor in 1947. This replaced the inefficient
valves with a much smaller and more reliable component. Transistorised
computers are normally referred to as 'Second Generation' and dominated
the late 1950s and early 1960s. Despite using transistors and printed
circuits, these computers were still bulky and strictly the domain
of universities and governments.
The explosion in the use of computers began with
'Third Generation' computers. These relied on Jack St. Clair Kilby's
invention - the integrated circuit or microchip; the first integrated
circuit was produced in September 1958, but computers using them
didn't begin to appear until 1963. While large 'mainframes' such
as the IBM 360 increased storage and processing capabilities
further, the integrated circuit allowed the development of minicomputers
that began to bring computing into many smaller businesses. Large
scale integration of circuits led to the development of very small
processing units; an early example is the processor used
for analysing flight data in the US Navy's F-14A 'Tomcat' fighter
jet. This processor was developed by Steve Geller, Ray Holt and
a team from AiResearch and American Microsystems.
On November 15th, 1971, Intel
released the world's first commercial microprocessor, the 4004.
Fourth generation computers developed, using a microprocessor to
concentrate much of the computer's processing capability on a single (small)
chip. Coupled with one of Intel's other inventions - the RAM chip (kilobits
of memory on a single chip) - the microprocessor allowed fourth
generation computers to be even smaller and faster than ever before.
The 4004 was only capable of 60,000 instructions per second, but
later processors (such as the 8086, on which all of Intel's processors
for the IBM PC and compatibles are based) brought ever-increasing
speed and power to computers.
Supercomputers of the era were immensely powerful, like the Cray-1,
which could perform 150 million floating-point operations per
second. The microprocessor allowed the development of microcomputers,
personal computers that were small and cheap enough to be available
to ordinary people. The first such personal computer was the MITS
Altair 8800, released at the end of 1974, but it was followed by
computers such as the Apple I & II, the Commodore PET and eventually
the original IBM PC in 1981.
Although processing power and storage capacities
have increased beyond all recognition since the 1970s, the underlying
technology of LSI (large-scale integration) or VLSI (very large-scale
integration) microchips has remained basically the same, so
it is widely held that most of today's computers still belong
to the fourth generation.