Interesting Facts

Thanks to Andrew Loewe for collecting some of this information.

Why does a Computer need to be so big?

Computers for Research

Powerful computers have over the last few years given researchers a new way to do science. You can now calculate things rather than doing the equivalent experiment (although scientific theories still have to be validated to ensure the calculations will be realistic). Such calculations can be quicker than an actual experiment if the computer is powerful enough. They can also be safer - some experiments are hazardous. For instance, testing of weapons (especially nuclear ones) is prohibited, so some large government establishments, particularly in the USA, use computers instead. The same people might also be looking into nuclear power generation.

We don't do arms testing, but we do use big computers for physics, chemistry, engineering and environmental studies.

The Knowledge Centre for Materials Chemistry is doing research to develop novel materials for use in many advanced products. Ask Rick more about what they do and take a look at their Web site: http://www.materialschemistry.org/kcmc/index.html.

If you can do this kind of research faster, by simulating more materials, you can make a big scientific discovery or get your product to market more quickly. The use of computers can thus give a competitive edge. Big computers aren't just used for scientific research; they are also used in banks and businesses, where they can simulate business processes, test strategies and help with making decisions.

More Processors make a faster Computer

A modern PC has one, two or four processors (cores) of around 3GHz clock speed (three thousand million ticks per second). Roughly one floating point or integer arithmetic operation can be done per tick, so around 3Gflop/s per core (some special processors do more).

If you add in more processors you get more operations per second, so the fastest computers are equivalent to many PCs connected together. Look at the TOP500 Web site to see which are the fastest in the world today: http://www.top500.org.
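
As a rough back-of-the-envelope illustration, the short Python sketch below multiplies a per-core speed by a core count. The 3Gflop/s per core figure is the approximation given above, and treating a supercomputer's cores as if they were PC cores is a deliberate simplification:

    # Rough estimate of aggregate peak speed: cores x flop/s per core.
    # The figures are illustrative approximations, not vendor specifications.
    flops_per_core = 3e9       # ~3Gflop/s, one operation per 3GHz tick
    pc_cores = 4               # a typical quad-core PC
    jaguar_cores = 224_526     # core count from the TOP500 table below

    pc_peak = pc_cores * flops_per_core
    jaguar_peak = jaguar_cores * flops_per_core

    print(f"PC peak:     {pc_peak / 1e9:.0f} Gflop/s")
    print(f"Jaguar peak: {jaguar_peak / 1e12:.0f} Tflop/s")
    print(f"Jaguar is roughly equivalent to {jaguar_peak / pc_peak:,.0f} PCs")

On these assumptions a machine like Jaguar (see the table below) is equivalent to over fifty thousand quad-core PCs.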

The biggest computer we had at Daresbury was called HPCx. The biggest one used for academic research in the UK is called HECToR. Can you find out about these?

The world's largest computers in May 2010

Information from the TOP500 list, Dec'2009, see http://www.top500.org.

Name Location Compute Elements Link
Jaguar Oak Ridge National Lab 224,526 cores http://computing.ornl.gov/news/11122009_breakthrough.shtml
Road Runner Los Alamos 13,000 cells http://www.lanl.gov/discover/roadrunner_fastest_computer
Kraken XT5 National Institute for Computational Sciences/University of Tennessee 99,072 cores http://www.nics.tennessee.edu/computing-resources/kraken
JUGENE Forschungszentrum Juelich (FZJ) 294,912 cores http://www.fz-juelich.de/jsc/bg-ws10/
Tianhe-1 National SuperComputer Center in Tianjin/ NUDT 71,680 cores http://www.pcworld.com/businesscenter/article/182225/two_rival_supercomputers_duke_it_out_for_top_spot.html
Pleiades NASA Ames Research Center 56,320 cores http://www.nas.nasa.gov/News/Releases/2009/11-18-09.html
BlueGene/L Livermore 212,992 cores http://www.top500.org/system/7747
Blue Gene/P Argonne National Laboratory 163,840 cores http://www.top500.org/system/performance/9158
Ranger Texas Advanced Computing Center/Univ. of Texas 62,976 cores http://www.tacc.utexas.edu/ta/ta_display.php?ta_id=100379
Red Sky Sandia National Laboratories / National Renewable Energy Laboratory 41,616 cores http://www.top500.org/system/performance/10188
HECToR University of Edinburgh 44,544 +12,288 +112 http://www.hector.ac.uk
HPCx Daresbury Lab 2,560 cores http://hpcx.ac.uk

Some supercomputers in the UK in 1997

This note appeared in HPCProfile Jan'1997 [2].

Browsing the TOP500 list at the University of Mannheim gives useful information about supercomputers installed all over the world. We extracted the current UK situation below. You can compare with other countries by browsing http://parallel.rz.uni-mannheim.de/top500.html. In releasing the 8th edition of the TOP500 list the authors commented on the growing number of industrial systems, which they imply may indicate a transfer of parallel technology out of the academic world.

location system processors LINPACK Gflop/s World rank
ECMWF Reading Fujitsu VPP700 46 94.3 10
EPCC Cray T3D 512 50.8 32
UK Met Office Cray T3E 128 50.43 35
DRA Farnborough Cray T3D 256 25.3 62
EPCC Cray T3E 64 25.2 *
AWE Aldermaston IBM SP2 75 14.38 75
University of Cambridge Hitachi SR2201 96 21.3 **
ECMWF Reading Cray Y-MP C916 16 13.7 128
UK Govt Communications Headquarters, Benhall Cray Y-MP C916 16 13.7 136
UK Met Office Cray Y-MP C916 16 13.7 147
ECMWF Reading Cray T3D 128 12.8 164
Ensign IBM SP2 48 9.53 199
Fujitsu Uxbridge Fujitsu VX/4 4 8.6 214
Western Geophysical IBM SP2 36 8.2 225
Western Geophysical IBM SP2 40 8.05 229

* The EPCC T3E system was acquired by PPARC for installation in early December 1996. Actual number of processors to be installed was unknown at the time of writing, but we assumed 64.

** TOP500 had the Cambridge system listed in place 119 because it only counted 64 processors; taking into account that it actually has 96 processors, it would have been in position 80.

Top supercomputers in the UK in 2012

The following list is from the June 2012 Top500.

location system processors LINPACK Tflop/s World rank
Daresbury Laboratory Blue Joule IBM BG/Q 114,688 1,208 13
University of Edinburgh Dirac IBM BG/Q 98,304 1,035 20
UoE HPCx Ltd. HECToR Cray XE6 90,112 660 32
ECMWF IBM Power 775 24,576 549 34
ECMWF IBM Power 775 24,576 549 35
Met. Office IBM Power 775 18,432 412 43
Met. Office IBM Power 775 15,360 343 51
University of Cambridge Darwin Dell 9,728 183 93
Daresbury Laboratory Blue Wonder IBM iDataPlex 8,192 159 114
Durham University IBM iDataPlex 6,720 130 134
Met. Office IBM Power 775 5,120 125 143
AWE Blackthorn Bull B500 12,936 125 144
ECMWF IBM Power 575 8,320 116 153
ECMWF IBM Power 575 8,320 116 154
UK Govt. HP Cluster Platform 3000 19,536 115 155
RAL Emerald HP Cluster Platform SL390 G7 GPU cluster 6,960 114 159
University of Southampton IBM iDataPlex 11,088 94.9 203
A financial institution IBM BladeCenter 15,744 88.7 237
University of Leeds N8 SGI Rackable cluster 5,088 81.2 291
A financial institution IBM iDataPlex 14,400 81.1 292
Classified site IBM BladeCenter 13,356 75.3 349
A bank IBM xSeries cluster 12,312 69.4 395
A bank IBM xSeries cluster 12,312 69.4 396
An IT Service Provider HP Cluster Platform 3000 7,968 68.6 404
An IT Service Provider HP Cluster Platform 4000 14,556 65.8 439

Blue Joule and Blue Wonder are part of the Daresbury Future Software Centre.

The Dirac BlueGene system in Edinburgh and the iDataPlex in Durham are part of the STFC-funded DiRAC consortium.

Emerald and the Southampton system are part of the e-Infrastructure South Tier-2 consortium.

Leeds N8 provides the service for the Northern-8 Tier-2 consortium.

Need to move data around

To make the processors work together to do a big calculation, e.g. part of a research problem, they need to communicate and share out data. This requires a network consisting of cables and switches. There are several types of network, which differ in bandwidth and latency.

What is bandwidth?

Bandwidth refers to how much data you can send through a network or modem connection. It is usually measured in bits per second, or "bps". You can think of bandwidth as a highway with cars travelling on it. The highway is the network connection and the cars are the data. The wider the highway, the more cars can travel on it at one time, so more cars can get to their destinations faster. The same principle applies to computer data: the more bandwidth, the more information can be transferred within a given amount of time.

What is latency?

This is the amount of time it takes a packet of data to move across a network connection. When a packet is being sent, there is "latent" time, when the computer that sent the packet waits for confirmation that the packet has been received. Latency and bandwidth are the two factors that determine your network connection speed.
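
A commonly used model combines the two factors: the time to deliver a message is the fixed latency plus the message size divided by the bandwidth. Below is a minimal Python sketch of this model; the 50 microsecond latency and 1Gbit/s bandwidth are illustrative assumptions, not measurements of any particular network:

    # Simple model of message transfer time over a network:
    #   time = latency + message_size / bandwidth
    def transfer_time(size_bytes, latency_s=50e-6, bandwidth_bps=1e9):
        """Seconds to deliver one message, assuming the simple model above."""
        return latency_s + (size_bytes * 8) / bandwidth_bps

    for size in (100, 10_000, 1_000_000):
        print(f"{size:>9} bytes: {transfer_time(size) * 1e6:8.1f} microseconds")

Small messages are dominated by the fixed latency, while large ones are dominated by the bandwidth term.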

Amdahl's Law

Any large mathematical calculation can usually be split into parts. Some parts are independent (can be done in parallel); some have to be done in order (serial). If the time taken for the parallel work on one processor is Tp and the time for the serial work is Ts, then Amdahl's Law predicts the ideal speedup which can be achieved with n processors.

Time on one processor = T1 = Ts + Tp

So time on n processors = Tn = Ts + Tp/n

So speedup = T1/Tn = (Ts + Tp) / (Ts + Tp/n)

When n is very large, the maximum speedup is (Ts + Tp)/Ts and the serial part becomes relatively very important.

To find out more, take a look at http://en.wikipedia.org/wiki/Amdahl's_law.

Unfortunately, the additional movement of data over the network takes extra time and makes the actual speedup less than ideal. Why does latency become important when you have a large number of processors?
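
The short Python sketch below computes the ideal speedup from the formula above, together with a variant in which every extra processor adds a fixed communication cost Tc (an assumed stand-in for network latency, not a measured figure). With 1% serial work the ideal speedup approaches 100, but the version with overhead peaks and then falls away as n grows, which is one answer to the question above:

    # Ideal Amdahl speedup, and a variant with a simple communication
    # overhead: each extra processor adds a fixed cost tc.
    def amdahl_speedup(n, ts, tp):
        """Ideal speedup on n processors: (Ts + Tp) / (Ts + Tp/n)."""
        return (ts + tp) / (ts + tp / n)

    def speedup_with_comms(n, ts, tp, tc=0.001):
        """As above, but with tc of communication time per extra processor."""
        return (ts + tp) / (ts + tp / n + tc * (n - 1))

    ts, tp = 1.0, 99.0   # 1% serial work, 99% parallel work
    for n in (1, 10, 100, 1000, 10000):
        print(f"n={n:>5}: ideal {amdahl_speedup(n, ts, tp):6.1f}, "
              f"with comms {speedup_with_comms(n, ts, tp):6.1f}")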

Moore's Law

In 1965 Gordon Moore, co-founder of Intel, observed that the number of transistors that can be integrated in a computer chip doubles roughly every two years. This has been called Moore's Law. The power of the chip is also roughly proportional to the number of transistors. For more information see http://en.wikipedia.org/wiki/Moore's_law. The term ``Moore's law'' was coined around 1970 by the Caltech professor, VLSI pioneer, and entrepreneur Carver Mead. Predictions of similar increases in computer power had existed years prior: Alan Turing in a 1950 paper predicted that by the turn of the millennium computers would have a billion words of memory, and Moore may have heard Douglas Engelbart, a co-inventor of today's mechanical computer mouse, discuss the projected downscaling of integrated circuit size in a 1960 lecture. A New York Times article published August 31, 2009, credits Engelbart as having made the prediction in 1959. Moore's original statement, that transistor counts had doubled every year, can be found in his publication "Cramming more components onto integrated circuits", Electronics Magazine, 19 April 1965:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.

Making things smaller

It is however hard to make very big computer chips, so to get more transistors into the same space each has to be made smaller and multiple units provided. A corollary of Moore's Law is that, for the same size of chip, the area taken by each transistor must halve every two years; equivalently, its linear dimensions must shrink by a factor of about 1.4.
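
As a sketch of this corollary, the Python fragment below projects transistor counts and feature sizes forward under a strict doubling rule. The starting figures are roughly those of the Intel 4004, and the rule itself is an idealisation:

    # If the transistor count doubles every two years in the same area,
    # the area per transistor halves, so the linear feature size shrinks
    # by a factor of sqrt(2) per step.
    import math

    count, feature_nm, year = 2_300, 10_000, 1971   # roughly the Intel 4004
    while year <= 1991:
        print(f"{year}: ~{count:>9,} transistors, ~{feature_nm:6,.0f} nm features")
        year += 2
        count *= 2
        feature_nm /= math.sqrt(2)

The real figures in the transistor table later in this section differ in detail, but the trend is similar.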

Integrated circuits were made possible by experimental discoveries which showed that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. See http://www.scientificamerican.com/article.cfm?id=microprocessor-computer-chip.

Power consumption and heat

Central processing unit power dissipation, or CPU power dissipation, is the process in which central processing units (CPUs) consume electrical energy and dissipate it as heat, both through the action of the switching devices contained in the CPU, such as transistors or vacuum tubes, and through the energy lost due to the impedance of the electronic circuits. Designing CPUs that perform their tasks efficiently without overheating is a major consideration for nearly all CPU manufacturers to date.

Power is proportional to V**2 * F (voltage squared times clock frequency F).

F is in turn proportional to V, so P is proportional to F**3.

This means that a fast single processor consumes a lot of power and therefore gets very hot (because its circuits are not perfect conductors).
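
A minimal sketch of the cube law, with made-up round numbers: one core at double the clock rate uses eight times the power, while four cores at half the clock rate deliver the same total operation rate for half the power. This is one of the main reasons modern machines use many slower cores rather than one very fast one.

    # If P is proportional to F**3, compare a fast single core against
    # several slower cores giving the same total operation rate.
    def relative_power(freq_ghz, n_cores=1, base_ghz=3.0):
        """Power relative to one core at base_ghz, assuming P ~ F**3."""
        return n_cores * (freq_ghz / base_ghz) ** 3

    print(relative_power(6.0))             # 8.0 -> eight times the power
    print(relative_power(1.5, n_cores=4))  # 0.5 -> half the power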

What are the limiting factors?

Manufacturing tolerances - can lead to low yield or malfunctions, and affect cost
Physics - getting transistors and conductors down to the size of atoms is difficult
Electronics - high-frequency components close together cause interference
Heat and stress - build-up of localised hot spots and difficulty of heat dissipation

Interesting Facts and Figures

Some events in Computing History

Some other events since 1980. (source Google Groups Usenet Timeline http://www.good-stuff.co.uk/useful/google_usenet_timeline.php).

May 1981 - first mention of Microsoft
May 1981 - first mention of MS-DOS
Aug'1981 - first review of an IBM PC
Apr'1982 - first mention of Sun Microsystems
Jun'1982 - first mention of a compact disc
Aug'1982 - first mention of the Commodore 64
Aug'1982 - first mention of Apple's Lisa and Macintosh products
Dec'1982 - announcement of first cell phone deployment in Chicago
Feb'1983 - first mention of a Fax machine
Sep'1983 - Richard Stallman's announcement of GNU
Nov'1983 - first mention of Microsoft Windows
Aug'1984 - first mention of the Commodore Amiga
Jul'1986 - first mention of Cisco
Mar'1988 - first mention of the term ``search engine''
Nov'1988 - first warning about the Morris Internet Worm
Feb'1989 - first mention of Internet Relay Chat (IRC)
Aug'1991 - Tim Berners-Lee's announcement of the World Wide Web project
Sep'1991 - announcement of Internet Gopher
Oct'1991 - Linus Torvalds' Linux announcement
Mar'1993 - Marc Andreessen's Mosaic announcement
Jun'1994 - Announcement of WebCrawler launch
Oct'1994 - Marc Andreessen's Netscape announcement
Dec'1994 - early mentions of Yahoo! and Lycos
Sep'1995 - eBay founder Pierre Omidyar advertises new auctioning service
Dec'1995 - announcement of AltaVista launch
Mar'1998 - first mention of Google
May'1998 - first mention of Mac OSX

Evolution of the Internet worldwide

date -- number of computers connected
1968 start of ARPANet
1969         4       
1982       200       
1991   1000000       
1996  13000000       
2001 494320000       
2002 568820000       
2003 657077000       
2004 743257000       
2005 814312000       
Can you complete this table?

There is an interesting Web site with more information here: http://ithare.com/it-hares-brief-history-of-the-internet.

Web browsers

You are probably using a browser to read this right now. A Web browser, often just called a "browser," is the program people use to access the World Wide Web. It interprets HTML code including text, images, hypertext links, Javascript, and Java applets. After rendering the HTML code, the browser displays a nicely formatted page. Some common browsers are Microsoft Internet Explorer, Firefox, Netscape Communicator, and Apple Safari.

Since 2005 there has been an increase in the percentage of people using Firefox and a decrease in the percentage using Internet Explorer.

Data Storage on Disc

1962 IBM disk = 1/2 MByte
    5 1/4" discs - initial capacity 100KB, later raised to 1.2MB
    sundry floppy discs, 720KB and 1.44MB
    ZIP disc, 100MB
    CD, 650MBytes
1996 DVD = 17GBytes

Data Storage on Tape

1/2" tapes, 1600-6250bpi -- nearly 22.5MBytes on a 2400 foot tape (x8 tracks?)
DEC TK50 cartridge, 10GB?
8mm Exabyte 8200 tapes 2.3GB
how much data does a household video tape hold?
1991 QIC (Quarter Inch Cartridge) DC6150 cartdridge tapes, 150MB

What do modern tapes hold? The largest is the T10000B, made by Sun StorageTek; it holds 1,000GB of data at a 120MB/s data rate.

Largest computer ever - 1950-1963

SAGE - Semi-Automated Ground Environment - US Air Force, 1950-63, in operation until 1983.

Each SAGE processor weighed 250 tons, had 60,000 vacuum tubes and occupied 50x150 feet. Each installation had two CPUs, each performing 75 thousand instructions per second, one running and one in standby mode, together taking 3MW of power.

As part of the US defense programme in the 1960s there were 24 inter-linked installations in concrete bunkers across the USA and Canada. The whole thing cost in the region of $8-12 billion. SAGE's communications network was also a forerunner of ARPANET, the US Defense network.

What was SAGE?

SAGE was the brainchild of Jay Forrester and George Valley, two professors at MIT's Lincoln Lab. SAGE was designed to coordinate radar stations and direct airplanes to intercept incoming planes. SAGE consisted of 23 "direction centers," each with a SAGE computer that could track as many as 400 airplanes.

The SAGE project resulted in the construction of 23 concrete-hardened bunkers across the United States (and one in Canada) linked into a continental air-defense system. SAGE was designed to detect atomic bomb-carrying Soviet bombers and guide American missiles to intercept and destroy them; it was linked to nuclear-tipped Bomarc and Nike missiles. Each of the 23 SAGE "Direction Centers" housed an AN/FSQ-7 computer, the designation given to it by the U.S. military. The SAGE computer system used 3MW of power, had approximately 60,000 vacuum tubes and took over 100 people to operate.

Transistors and Microprocessors (Intel)

1971 Intel 4004 had 2300 transistors, 4-bit word, 0.06 Mips *
1974 Intel 8080, 6000 transistors, 8-bit word, 0.64 Mips
1978 Intel 8086, 29000 transistors, 16-bit word, 0.66 Mips
1982 Intel 80286, 134000 transistors, 16-bit, 2.66 Mips
1985 Intel 80386, 275000 transistors, 32-bit, 4 Mips
1989 Intel 80486, 1.2M transistors, 32-bit, 70 Mips
1993 Intel 80586, 3.3M transistors, 126-203 Mips
1999 Intel Pentium III, 9.5M transistors, 32-bit
2004 Intel Itanium, over 15 million transistors, 64-bit, 1200 Mips
2007 Intel Xeon E5472, 820 million transistors, 64-bit,
2009 Intel Xeon E5540, 731 million transistors, 64-bit,

See http://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors.

* Mips = million instructions per second
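
As a check of Moore's Law against this table, the following Python sketch estimates the doubling time implied by the 4004 and Xeon E5472 entries, assuming (as a simplification) steady exponential growth between them:

    # Doubling time implied by two entries from the table above.
    import math

    year0, count0 = 1971, 2_300          # Intel 4004
    year1, count1 = 2007, 820_000_000    # Intel Xeon E5472

    doublings = math.log2(count1 / count0)
    print(f"{doublings:.1f} doublings in {year1 - year0} years "
          f"-> one every {(year1 - year0) / doublings:.1f} years")

This comes out at almost exactly one doubling every two years.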

Personal Computer World Magazine

There is an on-line archive of Personal Computer World here: https://worldradiohistory.com/Personal_Computer_World.htm.

Rob Allan 2024-08-16