Grand Tavé

The computing system Grand Tavé is accessible via ssh from the front end.

The system is a Cray XC40 Iron Compute. The Intel Knights Landing (KNL) nodes on the system can be allocated within the SLURM batch queuing system (see RUNNING JOBS).
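As a minimal sketch, a batch script requesting one KNL node might look like the following; the constraint name and the executable are assumptions and should be checked against the system's RUNNING JOBS documentation:

```shell
#!/bin/bash -l
#SBATCH --job-name=knl_test
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64   # one MPI rank per core on a 64-core KNL node
#SBATCH --time=00:10:00
#SBATCH --constraint=knl       # assumed node constraint; verify with sinfo

# srun launches the application on the allocated compute node
srun ./my_app
```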


Model:                          Cray XC40 Iron Compute KNL
Compute nodes:                  64 cores, Intel(R) Xeon Phi(TM) CPU 7230 @ 1.30GHz
Login nodes:                    8 cores, Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
Theoretical peak performance:   436.63 TFlops
Memory per compute node:        96 GB, plus 16 GB HBM
Memory per login node:          256 GB
Interconnect configuration:     Aries routing and communications ASIC with Dragonfly network topology
Scratch capacity:               /scratch/snx2000, 904 TB

Programming Environment

The software environment on Grand Tavé is controlled using the modules framework, which provides an easy and flexible mechanism to access compilers, tools and applications.

Each programming environment loads the local MPI library. Available programming environments are GNU, Intel and PGI. You can get information on a specific module using the following commands (in the example below, replace <module> with the module name):

$ module avail <module>
$ module show <module>
$ module help <module>
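For example, to switch to the GNU programming environment and inspect it (the `PrgEnv-*` module names below follow the usual Cray naming convention and are an assumption; confirm the exact names with `module avail` on the system):

```shell
$ module swap PrgEnv-cray PrgEnv-gnu   # select the GNU toolchain
$ module avail PrgEnv-gnu              # list matching module versions
$ module show PrgEnv-gnu               # display what the module sets
```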

Please follow this link for more details on compiling and optimizing your code.


The system's high-speed interconnect is based on the Aries routing and communications ASIC with a Dragonfly network topology.

File Systems

The $SCRATCH space /scratch/snx2000/$USER is connected via an InfiniBand interconnect. The shared storage under /project and /store is available through the high-speed interconnect from the login nodes only.
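As a small illustration of the per-user layout described above, the personal scratch directory can be derived from the username (the path layout is taken from the paragraph above; in practice, simply use the `$SCRATCH` environment variable set on the system):

```shell
# Derive the per-user scratch path under /scratch/snx2000
USER_NAME=$(whoami)
SCRATCH_DIR="/scratch/snx2000/${USER_NAME}"
echo "${SCRATCH_DIR}"
```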

Please carefully read the general information on filesystems at CSCS.

For further information, please contact help(at)