Computer Systems at Daresbury over the Years

Table 2 lists some of the computer systems which have been used for research work at Daresbury Laboratory.

Table 2: Significant Computer Systems at Daresbury Laboratory

| Item | Make | Dates installed at DL | Processors and speed | Memory | Total MFlop/s * |
| --- | --- | --- | --- | --- | --- |
| 0 | Ferranti Atlas (RAL) | 1964- | | 256 kB | 1 MIP/s |
| 1 | IBM 1800 | 1966- | | | |
| 2 | IBM 360/50 | 1966- | | | |
| 3 | IBM 360/65 | 1968- | c.1 MHz | | |
| 4 | IBM 370/165 | 1973- | 12.5 MHz | 3 MB | |
| 5 | PDP-11/05 | 1974-81 | 100 kHz | 16 kB | |
| | PDP-11/15 | | | | |
| | Perkin-Elmer 7/32 | | | | |
| 6 | Cray-1S | 1978-83 | 2x 120 MHz | 4 MB | 480 (115 kW per unit) |
| 7 | GEC 4000 cluster | c.1975- | | | |
| 8 | NAS 7000 | 1981-88 | 100 MHz | 8 MB, later 16 MB | |
| 9a | FPS 164 | 1984-89 | 11 MHz | 4 MB | 11 |
| 9b | FPS 264 | 1984-89 | 11 MHz | 4 MB | 22 |
| | Concurrent 3230 (formerly Perkin-Elmer) | 1985- | 2x 4 MHz LSI CPU | | |
| 10a | Meiko M10 | 1986-89 | | | |
| 10b | Meiko M60 | 1990-93 | 14x T800 transputers and 10x i860 co-processors | | 560 |
| 11 | Convex C220 | 1988-94 | 2x 25 MHz custom CMOS CPU | 256 MB | 72 |
| 12 | Intel iPSC/2 | 1988-94 | 32x 4 MHz 80386/7, Weitek 1167 and AMD VX co-processors | 160 MB | 212 |
| 13 | Stardent 1520 | 1989-94 | 2x 32 MHz MIPS R3000 | 32 MB | 16 |
| | SGI Power 4D/420 | c.1989 | 4x 32 MHz MIPS R3000 | | 33 |
| 14 | Intel i860 | 1990-93 | 64x 40 MHz 80860 | 512 MB | 2.56 Gflop/s (Top500 no. 210 in June 1993) |
| | Alliant FX/2808 | c.1990 | 8x 80860 | | 320 |
| 16 | Beowulf cluster | 1994-98 | 32x 450 MHz Pentium III | 8 GB | |
| 17 | Loki cluster | 1999-2003 | 64x 667 MHz DEC Alpha EV6/7 | 32 GB | |
| 15 | IBM SP-2/3 | 2000-02 | 32x 375 MHz Power3 | | |
| 18 | Scali cluster | 2003-07 | 64x 1 GHz AMD K7 | 64 GB | |
| 19a | IBM Regatta HPCx phase I | 2002-2010 | 1280x 1.3 GHz Power4 | 1.6 TB | 6.66 Tflop/s, 0.015 Gflop/s/W (Top500 no. 9 in Nov 2002) |
| 19b | IBM Regatta HPCx phase II | | 1600x 1.7 GHz Power4+ | | |
| 19c | IBM Regatta HPCx phase III | | | | |
| 19d | IBM Regatta HPCx phase IV | | | | |
| 20 | NW-GRID | 2005-12 | 384x 2.6 GHz Opteron | | |
| 21 | BlueGene/L | -2011 | | | |
| 22 | Hapu | | | | |
| 23 | Woodcrest | | 32x Woodcrest | | |
| 24 | BlueGene/P | -2012 | | | |
| 25 | CSEHT | | 32x Harpertown | | |
| 25a | nVidia | | Nehalem + Tesla GPU | | |
| 26 | Fujitsu | | | | |
| 27 | SID IBM iDataPlex | 2011-14 | 480x 2.67 GHz Westmere | 960 GB | |
| 28 | Blue Wonder IBM iDataPlex | 2012-16 | 8,192x Sandybridge cores | | 170 Tflop/s, 1 Gflop/s/W (Top500 no. 114 in June 2012) |
| 29 | Blue Joule BlueGene/Q | 2012-16 | 114,688x 1.6 GHz BlueGene/Q cores | | 1.46 Pflop/s, 2.55 Gflop/s/W (Top500 no. 13 in June 2012) |
| 30 | Napier NextScale cluster | 2014-17 | | | |
| 31 | Iden iDataPlex Xeon Phi cluster | 2014-17 | | | |
| 32 | Neale novel cooling system | 2015-17 | 96x Supermicro nodes in oil bath | | |
| 33 | Ace development system | 2015-17 | 12x 64-bit ARM nodes | | |
| 34 | DeLorean Maxeler data flow engine | 2014-2022 | 5x Xeon hosts with FPGA accelerators | | |
| 35 | Panther IBM Power-8 | 2016-2022 | 36 nodes with Power-8 CPU plus 4x nVidia K80 GPUs | | |
| 36 | Paragon IBM Power-8 | 2017-2022 | 34 nodes with Power-8 NVL CPU plus 4x nVidia P100 GPUs | | |
| 37 | JADE | 2017-2022 | 22x nVidia DGX-1 nodes | | |
| 38 | Scafell Pike Bull Atos Sequana X1000 | 2017-present | 55,680 Xeon Phi Knights Landing cores, 25,728 Xeon Gold Skylake cores | 192 GB per node (32 cores on SKL) | Around 4 Pflop/s (Top500 no. 35 at installation) |
| 39 | JADE-2 | 2020-present | 63x nVidia DGX-1 MaxQ nodes with 8x V100 GPUs | 32 GB per GPU | |
| 40 | Quasar Atos Quantum Learning Machine | 2021-present | Xeon Platinum 8160M, 8 sockets, 24 cores per socket, 384 virtual cores | | |

* Rpeak MFlop/s quoted for 64-bit arithmetic where possible.
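
The Rpeak figures follow directly from core count, clock speed and floating-point operations per cycle. As a minimal sketch (assuming, for illustration, the 8 double-precision flops per cycle of the BlueGene/Q A2 core), the following Python snippet reproduces the Blue Joule entry and the power draw implied by its quoted efficiency:

    def rpeak_gflops(cores, clock_ghz, flops_per_cycle):
        # Theoretical peak (Rpeak) in Gflop/s: cores x clock x flops per cycle.
        return cores * clock_ghz * flops_per_cycle

    # Blue Joule (item 29): 114,688 cores at 1.6 GHz; the BlueGene/Q A2 core
    # issues a 4-wide double-precision FMA, i.e. 8 flops per cycle (assumed).
    rpeak = rpeak_gflops(114_688, 1.6, 8)
    print(f"Rpeak = {rpeak / 1e6:.2f} Pflop/s")    # ~1.47 Pflop/s, as in the table
    # Power implied by the quoted 2.55 Gflop/s/W efficiency figure:
    print(f"Power ~ {rpeak / 2.55 / 1e3:.0f} kW")  # ~576 kW

The same arithmetic, with the appropriate flops-per-cycle figure for each architecture, recovers most of the Rpeak values quoted in the table.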

For a history of parallel computing with a general timeline see [48].

Rob Allan 2024-08-16