The computing cluster Castor is accessible via ssh from the frontend.

The operating system on the login nodes is Red Hat Enterprise Linux Server 6.6. Direct access to the compute nodes is not allowed; to run jobs you need to use the SLURM batch queueing system (see RUNNING JOBS).
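As a sketch of the SLURM workflow, a minimal batch script might look like the one below. The job name, task count, time limit and executable are placeholders, not values taken from this page; see RUNNING JOBS for the settings that apply on Castor.

```shell
#!/bin/bash -l
# Minimal SLURM batch script sketch for Castor.
# NOTE: job name, resources and executable are placeholder assumptions.
#SBATCH --job-name=myjob
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12
#SBATCH --time=00:30:00

# Launch the (placeholder) MPI executable on the allocated node
srun ./myprog
```

Submit the script with `sbatch myjob.sbatch` and monitor it with `squeue -u $USER`.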


Model: IBM iDataPlex
Compute Nodes: 32 x (2 x Intel® Xeon® CPU X5650 @ 2.60GHz, 12 cores, 24 to 96 GB RAM, 2 x NVIDIA Tesla M2090 GPUs)
Login Nodes: 2 x (2 x Intel® Xeon® CPU E5620 @ 2.40GHz, 8 cores, 48 GB RAM)
Theoretical Peak Performance: 46.6 TFLOPS
Memory Capacity per Node: 24, 48 or 96 GB (DDR3-1300)
Memory Bandwidth per Node: 41.6 GB/s
Total System Memory: 1.5 TB DDR3
Interconnect Configuration: 1 high-speed interconnect based on InfiniBand FDR, used for both MPI traffic and high-speed storage traffic
Scratch Capacity (/scratch/castor): 640 TB

Programming Environment

The software environment on Castor is controlled using the modules framework, which provides an easy and flexible mechanism to access compilers, tools and applications.

Available programming environments are GNU, Intel and PGI. You can get information on a specific module using the following commands (in the example below, replace <module> with the module name):

$ module avail <module>
$ module show <module>
$ module help <module>
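For example, a typical session might load a programming environment before compiling. The module names below (`PrgEnv-gnu`, `cudatoolkit`) are illustrative assumptions; run `module avail` to see what is actually installed on Castor.

```shell
$ module avail               # list all installed modules
$ module load PrgEnv-gnu     # select the GNU programming environment (name assumed)
$ module load cudatoolkit    # e.g. for the Tesla M2090 GPUs (name assumed)
$ module list                # show the modules currently loaded
```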

Please follow this link for more details on compiling and optimizing your code.


The system has 1 high-speed interconnect based on QDR: it is dedicated to MPI traffic and to high-speed storage traffic (GPFS, file transfer, etc.).

File Systems

The $SCRATCH space /scratch/castor/$USER is connected via the InfiniBand interconnect. The shared storage under /project and /store is available through the high-speed interconnect from both the login and the compute nodes.
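In practice this means a job should read and write its large files under the scratch space. A sketch of staging data before a run (the project path and the directory name `run01` are placeholders, not paths from this page):

```shell
# Create a working directory on the fast scratch filesystem
mkdir -p /scratch/castor/$USER/run01

# Stage input data from the shared /project area (paths are placeholders)
cp /project/myproject/input.dat /scratch/castor/$USER/run01/

cd /scratch/castor/$USER/run01
```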

Please carefully read the general information on filesystems at CSCS.

For further information, please contact help(at)