In order to tackle today’s scientific challenges, researchers from around the world collaborate and pool their knowledge and resources. CSCS participates in a number of European and worldwide collaborations with the aim of furthering the knowledge and competence of its staff, exchanging knowledge and experience with peers, and ultimately providing excellent support and service to its users. To this end, CSCS engages in projects on a variety of topics, such as software development, the development of a European compute-resource infrastructure, and studies on energy efficiency in computing centres.
ADAC - Accelerated Data Analytics and Computing
The Accelerated Data Analytics and Computing (ADAC) Institute is a collaboration between ETH Zurich, Oak Ridge National Laboratory (ORNL), and the Tokyo Institute of Technology (TITech). All three institutions deploy supercomputers with hybrid accelerated compute nodes – Titan at ORNL, Tsubame at TITech, and Piz Daint at ETH Zurich / CSCS. The Institute focuses on three topics: (1) sharing best practices in operating accelerated supercomputers, including the development of system software and diagnostic tools; (2) joint development of testbeds for newly emerging technologies; and (3) collaborative development of domain-specific libraries for applications.
For more information: ADAC website
CHIPP/WLCG - Worldwide LHC Computing Grid
CSCS operates a Tier-2 center for the Worldwide LHC Computing Grid (WLCG) on behalf of the Swiss Institute of Particle Physics (CHIPP). It is the largest Tier-2 center in Switzerland; Tier-3 centers are operated by PSI, the University of Bern, and the University of Geneva. As a member state of CERN, Switzerland contributes to the distributed data storage and analysis infrastructure for the users of the ATLAS, CMS and LHCb experiments at the Large Hadron Collider – the largest scientific instrument on the planet.
Contact person: Miguel Gila, email@example.com
COVID-19 HPC Consortium
The COVID-19 High Performance Computing (HPC) Consortium brings together the Federal government, industry, and academic leaders to provide access to the world’s most powerful high-performance computing resources in support of COVID-19 research.
For more information please visit the COVID-19 HPC Consortium website.
CTA - Cherenkov Telescope Array
The Cherenkov Telescope Array (CTA) is the next-generation ground-based observatory for gamma-ray astronomy at very high energies. With more than 100 telescopes located in the northern and southern hemispheres, CTA will be the world's largest and most sensitive high-energy gamma-ray observatory. Around 1.7 petabytes of data per year from the two observatory sites will reach each of four off-site Data Centres (DCs) in Europe – in Italy, Germany, Spain, and Switzerland – which host the processing pipelines. These DCs will archive the raw data, perform the standard data analysis, and carry out scientific analysis of the event data aimed at producing the “high-level” data products ultimately used by astronomers.
Contact person: Pablo Fernandez, firstname.lastname@example.org
DICE - Data Infrastructure Capacities for EOSC
DICE (Data Infrastructure Capacities for EOSC) brings together a network of computing and data centres, research infrastructures, and data repositories in order to enable a European storage and data management infrastructure for EOSC, providing generic services and building blocks to store, find, access, and process data in a consistent and persistent way. All services provided via DICE will be offered through the EOSC Portal and will be interoperable with the EOSC Core via a lean interoperability layer, allowing efficient resource provisioning from the very beginning of the project. The data services offered via DICE through EOSC are designed to be agnostic to the scientific domain, so that they are multidisciplinary and fulfil the needs of different communities.
Contact person: Stefano Claudio Gorini, email@example.com
For more information please visit the DICE website.
ESiWACE2 - Centre of Excellence in Simulation of Weather and Climate in Europe
The path towards exascale computing holds enormous challenges for the weather and climate modelling community regarding portability, scalability, and data management – challenges that can hardly be faced by individual institutes. ESiWACE2 will therefore link, organise, and enhance Europe’s excellence in weather and climate modelling to (1) enable leading European weather and climate models to leverage the performance of pre-exascale systems, with regard to both compute and data capacity, as soon as possible, and (2) prepare the weather and climate community to make use of exascale systems when they become available. ESiWACE2 will develop HPC benchmarks, increase the flexibility to use heterogeneous hardware, engage in co-design, and provide targeted education and training for one of the most challenging application domains, shaping the future of HPC in Europe.
Contact person: William Sawyer, firstname.lastname@example.org
For more information please visit the ESiWACE website.
EuroCC - National Competence Centres in the framework of EuroHPC
The EuroCC activity will bring together the necessary expertise to set up a network of National Competence Centres in HPC across Europe, in 31 participating member and associated states, to provide a broad service portfolio tailored to the respective national needs of industry, academia, and public administrations. The aim is to strongly support and increase national strengths in High Performance Computing (HPC) competences, as well as High Performance Data Analytics (HPDA) and Artificial Intelligence (AI) capabilities, and to close existing gaps in the usability of these technologies in the different states, thus providing a European baseline of excellence.
Contact person: Maxime Martinasso, email@example.com
Within the EuroCC project, ETH Zurich / CSCS is tasked with establishing a National Competence Centre (NCC) in the area of high-performance computing (HPC) in Switzerland, its respective country. The Swiss NCC coordinates activities in all HPC-related fields at the national level and serves as a contact point for customers from industry, science, (future) HPC experts, and the general public alike.
For more information please visit NCC Switzerland page or send an email to firstname.lastname@example.org.
hpc-ch - The Swiss Service Provider Community
CSCS is one of the promoters of hpc-ch, the Swiss Service Provider Community. The goal of hpc-ch is to support and foster knowledge exchange between providers of HPC systems at Swiss universities. Eleven organizations are already members of hpc-ch, and three have joined as guests.
Contact person: Michele De Lorenzi, email@example.com
For more information please visit the hpc-ch website.
Human Brain Project
The Human Brain Project is part of the FET Flagship Programme, an initiative launched by the European Commission as part of its Future and Emerging Technologies (FET) initiative. The goal is to encourage visionary, "mission-oriented" research with the potential to deliver breakthroughs in information technology, with major benefits for European society and industry. The HBP in particular will make fundamental contributions to neuroscience, medicine, and future computing technology, rising to the challenge of understanding the human brain.
Contact person: Stefano Gorini, firstname.lastname@example.org
For more information please visit the Human Brain Project website.
Interactive Computing E-Infrastructure for the Human Brain Project (ICEI)
Five leading European supercomputing centres are committed to developing, within their respective national programs and service portfolios, a set of services that will be federated across a consortium called Fenix, which aims to provide scalable compute and data services in a federated manner.
The neuroscience community is of particular interest in this context and the HBP represents a prioritized driver for the Fenix infrastructure design and implementation. The Interactive Computing E-Infrastructure for the HBP (ICEI) project will realize key elements of this Fenix infrastructure that are targeted to meet the needs of the neuroscience community.
Contact person: Stefano Gorini, email@example.com
For more information please visit the Fenix website.
LUMI - Large Unified Modern Infrastructure
The European High-Performance Computing Joint Undertaking (EuroHPC JU) will pool European resources to develop top-of-the-range exascale supercomputers for processing big data, based on competitive European technology. One of the pan-European pre-exascale supercomputers, LUMI, will be located in CSC’s data center in Kajaani, Finland, and will be hosted by the LUMI consortium. The LUMI (Large Unified Modern Infrastructure) consortium countries are Finland, Belgium, the Czech Republic, Denmark, Estonia, Norway, Poland, Sweden, and Switzerland. LUMI will be one of the world’s best-known scientific instruments for its lifespan of 2021–2026.
Contact information: Katarzyna Pawlikowska, firstname.lastname@example.org
For more information please visit the LUMI website.
Materials Cloud
The multidisciplinary team of people from EPFL and CSCS (ETH Zurich) won a swissuniversities P-5 grant to further develop the Materials Cloud web platform for computational Open Science. The project will allow users to contribute hundreds of different data entries autonomously in the platform's different sections, without having to interact with one of the platform maintainers.
Contact person: Joost VandeVondele, email@example.com
For more information please visit the Materials Cloud website.
MaX - Materials at eXascale
Materials at eXascale (MaX) is one of the eight “European Centres of Excellence for HPC applications” supported by the EU under its H2020 e-INFRA-2015 call. MaX was created to support developers and end users of advanced applications for materials simulations, design and discovery, and works at the frontiers of the current and future High Performance Computing (HPC) technologies. ETH Zurich represented by CSCS is the legal partner of the MaX consortium.
Contact person: Anton Kozhevnikov, firstname.lastname@example.org
For more information please visit the MaX website.
NERSC - National Energy Research Scientific Computing Center
In January 2008 CSCS institutionalized its staff exchange program with NERSC, the US supercomputing centre at Lawrence Berkeley National Laboratory. The aim of this exchange program was to build on the similarities of the two centres to share and further the scientific and technical know-how of both institutions. Berkeley Lab Associate Director for Computing Sciences Horst Simon is a member of the CSCS advisory board. Both centres share a common technological focus, having selected Cray XT supercomputers as their primary systems after thorough reviews of various options. The two sites regularly interact and exchange experience on systems, applications, and facilities.
For more information please visit the NERSC website.
OpenACC
OpenACC is a user-driven, directive-based, performance-portable parallel programming model designed for scientists and engineers who want to port their codes to a wide variety of heterogeneous HPC hardware platforms and architectures with significantly less programming effort than a low-level model requires. CSCS is an academic member of the OpenACC consortium and is helping to steer the long-term sustainability of OpenACC on next-generation systems.
Contact person: Thomas Schulthess, email@example.com
For more information please visit the OpenACC website.
PASC - Swiss Platform for Advanced Scientific Computing
The overarching goal of the Swiss Platform for Advanced Scientific Computing (PASC) is to position Swiss computational science in the emerging exascale era and to provide the Swiss scientific community with the tools to make the best use of new generations of supercomputing machines for solving key problems for science and society. It addresses important research issues in high-performance computing and computational science across different domain sciences, through interdisciplinary collaborations between domain scientists, computational scientists, software developers, computing centres, and hardware developers.
PASC is a joint effort of all Swiss universities, coordinated by CSCS and the Università della Svizzera italiana, and will create a long-term, research-driven cooperation network in computational science between Swiss universities.
Contact person: Joost VandeVondele, firstname.lastname@example.org
For more information please visit the PASC website.
PLAN-E - Platform of National eScience/Data Research Centers in Europe
PLAN-E is a new Platform of National eScience/Data Research Centers in Europe. The Platform unites the efforts of eScience and data research groups across Europe in order to strengthen the European position in the eScience and data research domain.
Contact person: Michele De Lorenzi, email@example.com
For more information please visit the PLAN-E website.
PRACE - Partnership for Advanced Computing in Europe
PRACE, the Partnership for Advanced Computing in Europe, aims to create a pan-European high performance computing (HPC) service and enable high impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society. PRACE seeks to realize this mission by offering world class computing and data management resources and services through a peer review process. PRACE also seeks to strengthen the European users of HPC in industry through various initiatives. PRACE has a strong interest in improving energy efficiency of computing systems and reducing their environmental impact.
ETH Zurich, represented by CSCS, with its supercomputer “Piz Daint” is a Hosting Member of PRACE international and participates in all PRACE projects.
Contact person: Maria Grazia Giuffreda, firstname.lastname@example.org
For more information please visit the PRACE website.
RACKlette - HPC
The ”RACKlette – HPC” team of motivated students from ETH Zurich in Switzerland will compete in the Student Cluster Competition taking place at the upcoming ISC 2019 in Frankfurt. The team brings together a rich and diverse mix of interests in parallel programming, system administration and systems programming, algorithms and complexity analysis, as well as processor and memory architecture. RACKlette is supervised by Prof. Torsten Hoefler (Scalable Parallel Computing Laboratory, ETH Zurich), supported by CSCS, and sponsored by several companies.
Contact person: Hussein Harake, email@example.com
For more information: RACKlette website
SKAO - Square Kilometre Array Observatory
The Square Kilometre Array Observatory (SKAO) is a next-generation radio astronomy facility that will have unprecedented sensitivity and survey speed, allowing for new insights into a wide range of astrophysical phenomena including planet formation, galaxy evolution, and science of the early universe. The SKA Observatory, once fully built, will generate up to 600 PB of calibrated science data products each year, an unprecedented data rate for observational astronomy. The Swiss SKA community (SKACH, which CSCS is part of) and the international SKAO science community are working collaboratively to create a shared and distributed data, computing, and networking capability.
Contact person: Pablo Fernandez, firstname.lastname@example.org
Exa2Green
Exa2Green was a three-year research project co-funded under the EU 7th Framework Programme "FET Proactive Initiative: Minimising Energy Consumption of Computing to the Limit". FET (Future and Emerging Technologies) aimed to go beyond the conventional boundaries of ICT and to venture into uncharted areas, often inspired by, and in close collaboration with, other scientific disciplines.
Hybrid Multicore Consortium
The consortium aimed at exploring hybrid multicore architectures, which hold significant but as-yet-unrealized promise for delivering high-end production computing capabilities to the most demanding science applications.
HP2C - High Performance and High-Productivity Computing
The HP2C platform aimed at developing applications that run at scale and make efficient use of the next generation of supercomputers. The platform consisted of domain science projects that were led by research groups at Swiss universities and institutes of the ETH Domain, supported by a core group of scientific computing experts in the Lugano area. HP2C was jointly operated by CSCS and the Institute for Computational Sciences of the University of Lugano (USI). Project teams engaged in high-risk, high-impact application development for HPC systems at scale.
MAESTRO - Middleware for memory and data-awareness in workflows
The MAESTRO project, supported by a three-year grant from the European Commission's H2020 Future Enabling Technologies for HPC (FETHPC) programme, was created to address the ubiquitous problem of data movement in data-intensive applications and workflows. The MAESTRO consortium consisted of seven expert partners, each bringing specialist knowledge and expertise to the technical challenge.
NextMuSE
The objective of NextMuSE was to initiate a paradigm shift in the technology of Computational Fluid Dynamics (CFD) and Computational Multi-Mechanics (CMM) simulation software, which is used to model physical processes in research, development, and design across a range of industries. NextMuSE relies on a mesh-free method, Smoothed Particle Hydrodynamics (SPH), which is fundamentally different from conventional finite-element or finite-volume techniques. SPH offers the possibility of a novel, immersive, adaptive framework for user interaction, and has the potential for integrated multi-mechanics modelling in applications where traditional methods fail.
NVIDIA Co-Design Lab for Hybrid Multicore Computing at ETH Zurich
One of the lab’s key aims is to encourage tighter collaboration among computing system architects, integrators, application developers and researchers, providing an open channel for all involved to exchange ideas and experiences. This, in turn, will be used to speed up the design of new applications and technologies that will drive the next wave of computational scientific research and discovery.
SELVEDAS - Services for Large Volume Experiment-Data Analysis utilizing Supercomputing and Cloud technologies
The evolution and scalability of e-infrastructure services for the PSI-operated large-scale research facilities – the Swiss Light Source (SLS), the Swiss Free Electron Laser (SwissFEL), and the Swiss neutron source (SINQ) – are essential for researchers from Swiss universities and a growing number of industrial partners. Ongoing and future progress in accelerator and detector technologies leads to substantial growth of the data generated during experiments. PSI, in close collaboration with CSCS, aimed at developing scalable and extensible services for data management, data processing, and data analysis for Swiss academic users by leveraging high-performance computing (HPC), storage, networking, and cloud technologies. The implementation of PSI target use cases and data-driven workflows demonstrated not only GPU-enabled application acceleration but also a Supercomputing-on-Demand service for utilizing existing and planned novel HPC resources at CSCS.