November 21, 2017 - by Simone Ulmer
Professor Grab, with the “Phoenix” cluster, CSCS has operated what’s known as a Tier-2 system for analysing data from the LHC experiments for more than ten years. In a pilot project currently underway, the supercomputer “Piz Daint”, a Cray XC40/XC50, is now analysing the data instead of “Phoenix”. What prompted you to make this move?
Christoph Grab: Our job is to provide Swiss particle physicists with the tools, such as computing power, storage and services, to evaluate LHC data. Until now, we’ve bought the hardware that runs at CSCS ourselves, with the centre’s support. In around eight years, the upgraded High Luminosity LHC (HiLumi-LHC) will go online, giving us 10 to 20 times more particle collisions in the LHC. This means much more data, in the multi-exabyte range, and we estimate that the overall global computing resources for the LHC will have to increase at least 50-fold. And all this with the same annual budget. Investing more money in the existing cluster and expanding it won’t scale sufficiently. Together with CSCS, we therefore came up with the idea of using highly scalable resources such as “Piz Daint” for all our workflows. Physicists in the U.S. are already using HPC resources for certain parts of the workflow.
The LHC tier network has been based on clusters that are connected via a grid. You say that others in the U.S. are already partially using supercomputers like “Piz Daint”. Is it difficult to incorporate these HPC machines into the grid architecture?
It isn’t an insurmountable problem. Someone sitting in America running an analysis sends the job to the computing network, which itself looks for available resources, and the job is then usually sent there. The actual kind of computer is secondary, provided it is outwardly transparent; in other words, it offers the same compatible software interfaces and services. Imagine a power plug that you stick into a socket to draw resources: provided the plug fits and the voltage is the same, there aren’t any major problems.
How heavily are you using “Piz Daint” for your analysis during the current pilot phase?
On average, 25 percent of our analyses have run on it since the beginning of the year. We have two different kinds of needs when using the supercomputer: on the one hand, pure computing for the simulations and, on the other, crunching data. The latter involves moving data back and forth; this data transfer places different demands on the system than pure computing and is necessary to analyse the data measured in the experiments. We are now in the fortunate situation at CSCS of having an uncomplicated technical solution on “Piz Daint” that rules out an additional bottleneck. As mentioned above, others have deployed HPC systems before, but only for the simulation part.
How satisfied are you with the first test results on “Piz Daint”?
Data transfer and access for the analyses work extremely well, and we’ve managed to iron out all the major problems. In other words, we’ve honed the software in collaboration with the CSCS specialists over the course of the year. This already makes the Cray system more or less as efficient as the “Phoenix” cluster, both in terms of cost and computing efficiency. Understandably, it isn’t yet quite as stable to operate as “Phoenix”, which has been running for years. In addition, in close collaboration with CSCS and the University of Bern, LHC researchers successfully scaled ATLAS simulations on “Piz Daint” up to 25,000 cores. The main objectives were to test whether the infrastructure scales with this specific workload and whether “Piz Daint” could sustain this sort of experiment on a large scale. This is the largest run conducted at CSCS by an LHC experiment so far and one of the first large-scale tests on Cray HPC platforms using standard Worldwide LHC Computing Grid workflows.
“Piz Daint” is used by many research groups, who sometimes have to wait in a queue for their computing jobs. For you, however, it’s important to be able to access the computer resources around the clock.
That’s a model we still have to agree on with CSCS. If we stop using “Phoenix” completely and switch to “Piz Daint”, I imagine we’ll agree on a guaranteed resource quota for the corresponding service.
What do you hope will result from the switch overall?
First and foremost, scalability for the next five to ten years and a simplified system, both cost-wise and operationally. Currently, we have to maintain two completely separate systems with CPUs and disk space, parallel to the other infrastructures at CSCS. If we switch, only a single shared hardware system needs to be operated, which should cover all our needs. Being part of a larger entity then has not only operational advantages but also financial ones when purchasing components.
Due to its architecture, “Piz Daint” is particularly suitable for structuring large quantities of data. Is this also the case for the data from the LHC experiments?
Probably not in the immediate future. But the advantages for us also lie in the combination of CPUs and GPUs. That’s something we can exploit extremely effectively in physics in the long term. We already apply certain machine-learning techniques in our analyses; it’s just that in recent years we’ve been calling them multivariate analyses. GPUs are particularly well suited to some of these tasks, and we naturally want to benefit from that. We’ve already run initial analyses of special physics problems successfully on the GPUs and are currently expanding these applications to study the potential of these architectures.
Can you give an example?
Neural networks for analysing particular event topologies. These networks usually have to be trained extensively on simulated data sets to give reliable results when later applied to real data, and that works very efficiently on “Piz Daint’s” GPUs. That’s just one potential application. When simulating materials, for instance, you also have recurring operations that have been shown to run very efficiently on GPUs.
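As an illustration only (this is not the experiments’ actual code), the pattern Grab describes, training a classifier on simulated events and then applying it to data, can be sketched in a few lines of Python. The event features, network size and training settings below are all invented for the example; on a GPU system such as “Piz Daint”, it is precisely the matrix products in the training loop that would be accelerated.

```python
# Toy sketch of a multivariate / neural-network event classifier:
# separate "signal" from "background" using simulated training events.
import numpy as np

rng = np.random.default_rng(0)

# Simulated training sample: 2 features per event (stand-ins for
# kinematic variables); signal centred at +1, background at -1.
n = 1000
X = np.vstack([rng.normal(+1.0, 1.0, (n, 2)),
               rng.normal(-1.0, 1.0, (n, 2))])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = background

# One hidden layer with tanh activations, sigmoid output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)               # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # P(signal | event)
    return h, p.ravel()

# Full-batch gradient descent on the binary cross-entropy loss.
lr = 0.5
for _ in range(2000):
    h, p = forward(X)
    d_out = (p - y)[:, None] / len(y)      # gradient w.r.t. pre-sigmoid
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h**2)      # backprop through tanh
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p = forward(X)
accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In a real analysis the training sample would come from full detector simulation and the network would be far larger, but the computational core, repeated dense matrix products, is the same, which is why this workload maps so well onto GPUs.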
Several special pilot projects are now scheduled to be conducted on “Piz Daint”. Can you say a bit more about them?
They involve scaling LHC software and workflows, without all the red tape, transparently via the network on the supercomputer, and trying out new ideas. We’ve got ideal conditions for this, as there is a very reliable network between CERN and CSCS without any artificial barriers. The idea arose in a discussion with CSCS Director Thomas Schulthess during a visit to CSCS by CERN Director-General Fabiola Gianotti last summer, and it will now be implemented directly. That kind of thing is only possible in Switzerland. One of these pilot projects is the one I mentioned earlier regarding GPUs, with a view to exploiting the computer architecture more effectively. Our institute and the CMS experiment are primarily involved in this.
Are there any other plans?
In the near future, we’re due to test the concept of “data lakes”. This involves holding large quantities of raw data in a cloud-like space. Researchers from CERN and the Swiss ATLAS experiment are interested in these tests, as CSCS with “Piz Daint” also provides ideal conditions for them. The goal is to do the major data processing close to the lakes and run smaller jobs directly via the network. Currently, the grid consists of over 180 systems of different sizes worldwide. That will no longer be the case in five to ten years, as it simply isn’t efficient. One day, we might have ten large installations worldwide, each with a likely power consumption of 2 to 5 megawatts. The idea is to consolidate the small systems and make the technologies available for widespread use. HPC is one of them.
So ultimately, with regard to the data lakes on “Piz Daint”, it’s about incorporating further developments in HPC into CERN research?
Yes. The Swiss Institute of Particle Physics (CHIPP) and CSCS are paving the way for the upcoming computational challenges. The idea for this was born at the directors’ meeting in August. As chairman of CHIPP, I have now taken on a kind of mediator role to realise such projects. My main duty, however, is to make computing services available in Switzerland for the three different experiments at CERN and to make sure our particle physicists are able to analyse their data and do physics.
Are you planning to perform the calculations solely on “Piz Daint” in the future?
That’s the goal, provided it is more economical. Within the community, we’re clarifying the individual needs, testing them on “Piz Daint” and comparing efficiency and costs. We should have the comparisons by the end of November. On this basis, the PIs at the individual Swiss institutions will then decide how to proceed.
What role does CHIPP play in an international context?
Needless to say, we’re a small beacon in the grand scheme of things; but our cluster has always worked well, and with the current pilot projects we’re assuming a particular pioneering role and gaining more visibility. We were the first to run data analysis on a supercomputer to a reasonable extent, not least because we’re able to access the data in storage from every single node, externally. This means that every node we use can communicate directly with the data in storage. Due to security regulations, that currently isn’t possible anywhere else. What it will look like in five years is uncertain, but I’m confident that both we and CSCS will learn a great deal from these pilot projects and develop a lot of expertise.
Will anything change fundamentally in the global particle physics data network, and will other Tier systems such as the Tier-0 system at CERN also switch to classic high-performance computers?
I think our model will change, as our parameters have to evolve too. Yesterday, we only spoke about grids; today, the cloud is on everyone’s lips. But these are actually very similar concepts. The networks will grow and become even faster and more reliable. What the architecture and our computing models will look like in ten years, and whether they will be HPC systems in the current sense, is anybody’s guess, of course.
Where is there room for improvement?
We need more manpower. The people we currently employ at CSCS for operations and new projects relating to our LHC needs add up to between three and four full-time equivalents. That’s not very much, but we need experts if new technologies are to be promoted. In particular, there is still a lot of room for improvement on the software side, both at system level and in the experiments’ software. However, it is extremely difficult to find truly suitable people when we can only hire them temporarily for two or three years.