The computing cluster Castor is accessible via ssh as castor.cscs.ch from the frontend ela.cscs.ch. The operating system on the login nodes is Red Hat Enterprise Linux Server 6.6. Direct access to the compute nodes is not allowed: to run jobs, you need to use the SLURM batch queueing system (see RUNNING JOBS), as sketched below.
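As a minimal sketch of the workflow, you first reach Castor through the frontend (the username below is a placeholder):

```bash
# Log in via the frontend, then hop to the Castor login nodes
ssh username@ela.cscs.ch
ssh username@castor.cscs.ch
```

From a login node, jobs are submitted through SLURM. The script below is a hedged example, not a prescribed template: the job name, time limit, and executable are placeholders, and default partition settings are assumed (see RUNNING JOBS for the supported options):

```bash
#!/bin/bash -l
# job.sh -- minimal SLURM batch script sketch for Castor
#SBATCH --job-name=myjob         # placeholder job name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12     # each compute node has 12 cores
#SBATCH --time=00:10:00          # placeholder time limit
srun ./my_app                    # my_app is a hypothetical MPI executable
```

Submit the script with `sbatch job.sh` and monitor it with `squeue -u username`.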
Specifications
| Component | Specification |
|---|---|
| Model | IBM iDataPlex |
| 32 Compute Nodes | 2 × Intel® Xeon® X5650 @ 2.66 GHz (12 cores, 24 to 96 GB RAM) + 2 NVIDIA Tesla M2090 GPUs |
| 2 Login Nodes | 2 × Intel® Xeon® E5620 @ 2.40 GHz (8 cores, 48 GB RAM) |
| Theoretical Peak Performance | 46.6 TFLOPS |
| Memory Capacity per Node | 24, 48, or 96 GB (DDR3-1333) |
| Memory Bandwidth per Node | 41.6 GB/s |
| Total System Memory | 1.5 TB DDR3 |
| Interconnect | One high-speed InfiniBand FDR fabric, used for both MPI traffic and high-speed storage traffic |
| Scratch Capacity | 640 TB (/scratch/castor) |
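Since each compute node hosts two Tesla M2090 GPUs, a GPU job would request them through SLURM's generic-resource mechanism. The sketch below assumes the cards are exposed under the gres name `gpu`, which is not confirmed by this page; the executable is hypothetical:

```bash
#!/bin/bash -l
# Sketch of a GPU job on Castor, assuming the two Tesla M2090 cards
# per node are exposed as SLURM generic resources named "gpu".
#SBATCH --job-name=gpujob        # placeholder job name
#SBATCH --nodes=1
#SBATCH --gres=gpu:2             # request both GPUs of a node (assumption)
#SBATCH --time=00:10:00          # placeholder time limit
srun ./my_gpu_app                # hypothetical CUDA executable
```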