Piz Daint & Piz Dora

Please note that this page describes the Piz Daint and Piz Dora system that was operational from 2012 until November 2016. For an overview of the current Piz Daint system, please refer to the supercomputer's current page.

Piz Daint

Named after Piz Daint, a prominent peak in Grisons that overlooks the Fuorn pass, this supercomputer is a Cray XC30 system and the flagship system of the national HPC service.

Piz Daint has a peak computing power of 7.8 petaflops, i.e. 7.8 quadrillion mathematical operations per second. In one day, Piz Daint can compute more than a modern laptop could in 900 years.
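As a rough plausibility check of that comparison (a sketch; the implied laptop speed is not stated in the original text and is derived here purely from the two figures given):

```python
# Back out the laptop performance implied by the "900 years" comparison.
# Only the 7.8 petaflops figure and the 900-year claim come from the text;
# the derived laptop speed is an inference, not an official number.
daint_flops = 7.8e15                    # Piz Daint peak: 7.8 petaflops
seconds_per_day = 86_400
seconds_in_900_years = 900 * 365.25 * seconds_per_day

ops_in_one_daint_day = daint_flops * seconds_per_day
implied_laptop_flops = ops_in_one_daint_day / seconds_in_900_years

print(f"Implied laptop speed: {implied_laptop_flops / 1e9:.1f} gigaflops")
```

The result lands in the low tens of gigaflops, which is indeed the ballpark of a contemporary laptop CPU, so the comparison is internally consistent.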

This supercomputer is a 28-cabinet Cray XC30 system with a total of 5'272 compute nodes. Each compute node is equipped with an 8-core 64-bit Intel Sandy Bridge CPU (Intel® Xeon® E5-2670), an NVIDIA® Tesla® K20X GPU with 6 GB of GDDR5 memory, and 32 GB of host memory. The nodes are connected by Cray's proprietary "Aries" interconnect in a dragonfly network topology.


Model: Cray XC30
Compute Nodes (one Intel® Xeon® E5-2670 and one NVIDIA® Tesla® K20X): 5'272
Theoretical Peak Floating-point Performance per node: 166.4 gigaflops (Intel® Xeon® E5-2670) + 1'311.0 gigaflops (NVIDIA® Tesla® K20X)
Theoretical Peak Performance: 7.787 petaflops
Memory Capacity per node: 32 GB DDR3-1600 + 6 GB non-ECC GDDR5
Memory Bandwidth per node: 51.2 GB/s (DDR3) + 250.0 GB/s (non-ECC GDDR5)
Total System Memory: 169 TB DDR3 + 32 TB non-ECC GDDR5
Interconnect Configuration: Aries routing and communications ASIC with Dragonfly network topology
Peak Network Bisection Bandwidth: 33 TB/s
System Storage Capacity: 2.5 PB
Parallel File System Peak Performance: 117 GiB/s
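The headline figures follow from the per-node values; a quick consistency check (a sketch using only numbers from the table above):

```python
# Recompute the system totals from the per-node figures in the table.
nodes = 5272
cpu_gflops = 166.4        # Intel Xeon E5-2670 peak per node
gpu_gflops = 1311.0       # NVIDIA Tesla K20X peak per node

peak_pflops = nodes * (cpu_gflops + gpu_gflops) / 1e6
ddr3_tb = nodes * 32 / 1000   # 32 GB DDR3 host memory per node (decimal TB)
gddr5_tb = nodes * 6 / 1000   # 6 GB GDDR5 device memory per node

# ~7.79 PFlops, matching the quoted 7.787 petaflops up to rounding
# of the per-node figures; memory totals round to 169 TB and 32 TB.
print(f"Peak: {peak_pflops:.3f} PFlops")
print(f"DDR3: {ddr3_tb:.0f} TB, GDDR5: {gddr5_tb:.0f} TB")
```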

Piz Dora

The Piz Daint extension, Piz Dora, is a Cray XC40 with 1'256 compute nodes, each with two 18-core Intel Broadwell CPUs (Intel® Xeon® E5-2695 v4). Piz Dora has a total of 45'216 cores (36 cores per node), or 90'432 virtual cores (72 virtual cores per node) when hyperthreading (HT) is enabled. Of these nodes, 1'192 feature 64 GB of RAM each, while the remaining 64 compute nodes (fat nodes) have 128 GB of RAM each and are accessible through the SLURM partition bigmem.
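The Piz Dora core counts follow directly from the node configuration; a quick check using only the figures above:

```python
# Verify the Piz Dora core-count arithmetic from the per-node configuration.
nodes = 1256
cores_per_cpu = 18        # Intel Xeon E5-2695 v4 (Broadwell)
cpus_per_node = 2

cores_per_node = cores_per_cpu * cpus_per_node      # 36 cores per node
total_cores = nodes * cores_per_node                # 45'216 physical cores
total_virtual_cores = total_cores * 2               # 90'432 with hyperthreading

# The standard and fat nodes together account for every compute node.
standard_nodes, fat_nodes = 1192, 64
assert standard_nodes + fat_nodes == nodes

print(total_cores, total_virtual_cores)
```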

Upgrade and extension

Arrival and installation