The fourth generation of self-made supercomputer at the ITP represents the first real design update to the original zBox1, which was built in 2002. The number of cores and the amount of memory were increased substantially over zBox3, and the aging SCI network was replaced by QDR Infiniband. From the 576 cores (Intel Core2) and 1.3 TB of RAM of zBox3, we now have 3072 cores (Intel Sandy Bridge E5) and 12 TB of RAM. While zBox3 was an upgrade of the main boards, CPUs and memory without any design changes, zBox4 involved completely redesigning the platters, which now each hold 4 nodes. The special rack that houses these platters was also improved, with the addition of a nozzle to improve airflow and a cleaner cable-routing scheme.
Planning for zBox4 began in the Spring of 2011, but these plans were partially scrapped in the late Fall of 2011 to await the arrival of the Intel Sandy Bridge E5 chip, with 8 cores and 4 memory channels per chip, and of main boards to support it. Serious planning resumed in the Spring of 2012, and construction began at the end of September 2012. The dismantling of zBox3 and the construction of zBox4 were performed by volunteer students, postdocs and friends of the Institute, with even a few professors lending a helping hand.
Specifications
Hardware
- CPUs: 384 Intel Xeon E5-2660 (8 cores @ 2.2 GHz, 95 W), 3072 cores in total
- Main Boards: 192 Supermicro X9DRT-IBQF (2 CPUs per node) with on-board QDR Infiniband
- RAM: Hynix DDR3-1600, 4 GB/core, 64 GB/node, 12.3 TB in total
- SSD: 192 OCZ Vertex 4 high-performance 128 GB drives, 24.6 TB in total
- HPC Network: QLogic/Intel QDR Infiniband in 2:1 fat tree (9 leaf and 3 core switches)
- Gbit Ethernet and separate dedicated 100 Mbit management networks
- Power usage (full load): 44 kW
- Dimensions (L x W x H): 1.5m x 1.5m x 1.7m
- Number of Cables: 112 power, 300 Infiniband, 388 Ethernet
- Cost: under 750'000 CHF (797,000 USD)
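As a quick sanity check, the headline totals can be reproduced from the per-part numbers above. In the sketch below, the peak-FLOPS line assumes 8 double-precision FLOPs per cycle per core with AVX, and the cable count assumes 36-port QDR switches (24 ports down, 12 up); neither assumption appears in the spec list itself.

```python
# Sanity-check the zBox4 totals from the per-part numbers above.
nodes = 192                       # Supermicro X9DRT-IBQF boards
cores = nodes * 2 * 8             # 2 CPUs/node x 8 cores (E5-2660)
print(cores)                      # 3072

ram_tb = cores * 4 / 1000         # 4 GB per core
print(ram_tb)                     # 12.288 -> the 12.3 TB quoted

ssd_tb = nodes * 128 / 1000       # one 128 GB Vertex 4 per node
print(ssd_tb)                     # 24.576 -> the 24.6 TB quoted

# Theoretical peak: assumes 8 double-precision FLOPs/cycle/core with
# AVX (an assumption, not stated in the list above).
peak_tflops = cores * 2.2e9 * 8 / 1e12
print(peak_tflops)                # ~54 TFLOP/s

# The 300 Infiniband cables fit the 2:1 fat tree: one link per node
# plus leaf-to-core uplinks, assuming 36-port switches (24 down, 12 up).
print(nodes + 9 * 12)             # 192 + 108 = 300
```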
System Configuration
- OS: Scientific Linux version 6.3
- Queue System: Slurm (see the example job submission after this list)
- Swap: 8 GB on node-local SSD drive
- Temp Files: 110 GB on node-local SSD drive
- Booting: from node-local SSD, or over Ethernet
- Storage System (existing):
  - Capacity: 684 TB formatted RAID-6
  - Lustre file system with 50 OSTs
  - 342 x 1.5 TB HDD and 171 x 2.0 TB HDD
  - Physical dimensions: 48 standard rack units
  - 10 Gbit Ethernet and 40 Gbit (QDR) Infiniband
  - 3 controllers using Intel E5645 2.4 GHz CPUs
- Tape Robot: 800 TB capacity, 437 tape slots
  - 4 x LTO-5 drives and 2 x LTO-3 drives
  - 54 TB high speed tape cache (108 TB raw storage)
  - 40 Gbit (QDR) Infiniband connected
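Jobs are submitted through Slurm. As an illustration only, the sketch below submits a batch script to a machine configured like zBox4; the partition name and the simulation binary are hypothetical, and the resource numbers simply mirror the per-node specs above (16 cores and 64 GB per node).

```python
# A minimal sketch of submitting an MPI job through Slurm on zBox4-like
# hardware. The partition name and binary are hypothetical.
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=nbody-test
#SBATCH --partition=zbox4      # hypothetical partition name
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16   # one MPI rank per core (2 x 8-core E5-2660)
#SBATCH --mem=64000            # in MB, i.e. the full 64 GB of a node
#SBATCH --time=01:00:00

srun ./my_simulation           # hypothetical MPI binary
"""

# sbatch reads the batch script from stdin when no file is given.
result = subprocess.run(["sbatch"], input=job_script,
                        capture_output=True, text=True)
print(result.stdout)
```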
The zBox4 is the latest upgrade to the famous zBox supercomputer. Located at the Institute for Theoretical Physics at the University of Zurich, Switzerland, the zBox features a custom rack design that houses 3,072 2.2 GHz Intel Xeon cores and over 12 TB of RAM. Each node is connected to a high-speed Infiniband network that enables researchers to perform cutting-edge parallel N-body simulations of galaxy, star, and planet formation. The GHalo simulation featured in this video has over 3 billion particles and models the formation of a dark matter halo similar to the one containing our own Milky Way galaxy.
Find the full details at http://www.zBox4.com
A full length video of the simulation can be found here http://youtu.be/Ty7jN-hzb-E
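To give a flavour of what an N-body code computes at its core, here is a toy direct-summation gravity step in Python/NumPy. It is purely illustrative: production runs like GHalo use massively parallel tree codes rather than this O(N^2) loop, and all names and parameters below are made up.

```python
# Toy O(N^2) direct-summation gravity stepper, for illustration only.
import numpy as np

def accelerations(pos, mass, eps=0.01):
    """Pairwise gravitational accelerations with Plummer softening eps,
    in units where G = 1."""
    dr = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]  # r_j - r_i, (N, N, 3)
    r2 = (dr ** 2).sum(axis=-1) + eps ** 2              # softened squared distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                       # remove self-interaction
    w = mass[np.newaxis, :] * inv_r3                    # m_j / |r_ij|^3, (N, N)
    return (dr * w[:, :, np.newaxis]).sum(axis=1)       # sum over j -> (N, 3)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    return pos, vel

# Example: advance 1024 equal-mass particles by one small step.
rng = np.random.default_rng(0)
pos = rng.standard_normal((1024, 3))
vel = np.zeros((1024, 3))
mass = np.full(1024, 1.0 / 1024)
pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)
```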