June 18, 2014 - by Simone Ulmer

Supercomputer power has skyrocketed in the last twenty years. Whereas a “supercomputer” could manage a few billion computer operations per second two decades ago, present-day supercomputers in the petaflop performance category carry out several quadrillion calculations in that time – several million times a billion operations per second. This enables researchers to calculate increasingly complex models and to simulate real conditions with growing accuracy, such as the processes that take place during a chemical reaction or how the weather will develop over the next few days. Driven by the fascinating insights that computer simulations can provide, researchers already have the exaflop performance class on the horizon: according to optimistic specialists, it should become a reality by the end of this decade.

Improvements beyond hardware

In Switzerland, intensive research is being conducted into how the performance of supercomputers can be increased in an energy-efficient way. One of the driving forces behind this is physicist Thomas Schulthess, Director of the Swiss National Supercomputing Centre (CSCS) in Lugano, which is affiliated to ETH Zurich. He is convinced that an increase in performance and efficiency cannot be achieved solely via the hardware; better computer algorithms and software are also needed. Consequently, he initiated the High Performance and High Productivity Platform (HP2C) over four years ago, where developers of application software for scientific simulations collaborated with mathematicians, computer scientists, CSCS and hardware manufacturers to make the simulation systems more efficient from the outset.

One of the first milestones in this cooperative project is the supercomputer Piz Daint, which has officially been available to researchers since April. This petaflop computer is currently the world’s most energy-efficient in its performance class – not least because its hybrid system combining graphics processors (GPUs) and conventional processors (CPUs) boasts a sophisticated communication network, and because scientists in HP2C have adapted their application software to make optimal use of the computer architecture. In the follow-up project, the Platform for Advanced Scientific Computing (PASC), Schulthess aims to prepare computer-based research in Switzerland for the exascale performance class. The project is coordinated by the Università della Svizzera italiana in conjunction with CSCS, EPFL and other Swiss universities.

Interdisciplinary approach

The platform’s focus areas are the climate and earth sciences, materials research, life sciences and physics. For instance, the groundwork is to be laid for handling the large amounts of data that accumulate above all in the climate and earth sciences and in physics. The same goes for the simulations conducted, says Schulthess. It is a major challenge to keep both the experimentally recorded data and the data produced through simulations accessible, along with the scientific information derived from them: thousands to millions of simulations produce a model based on the recorded data – such as a model of the earth’s mantle, of a molecule or of a new material. To get somewhere close to reality, the researchers adjust the parameters for every simulation, which can trigger a veritable flood of data. One of the goals in PASC is thus to control the simulations and the adjustment of the parameters in a sensible way. “Compared to HP2C, PASC is geared more heavily towards producing application-oriented tools than towards working on monolithic codes”, says Schulthess. It all boils down to being able to use the respective codes more flexibly and, at the same time, more efficiently.
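To give a flavour of what such a controlled parameter sweep involves, here is a minimal Python sketch; the simulation, its parameters and the summaries it stores are purely hypothetical stand-ins, not PASC code:

    import itertools
    import statistics

    def run_simulation(viscosity, temperature):
        # Hypothetical stand-in for an expensive simulation run.
        return [viscosity * temperature * i for i in range(1, 101)]

    viscosities = [0.1, 0.2, 0.5]        # hypothetical parameter values
    temperatures = [1000, 1500, 2000]

    summaries = []
    for v, t in itertools.product(viscosities, temperatures):
        samples = run_simulation(v, t)
        # Keep only compact summary statistics instead of the full output of each run,
        # one way of keeping the resulting flood of data manageable.
        summaries.append({"viscosity": v, "temperature": t,
                          "mean": statistics.mean(samples),
                          "stdev": statistics.stdev(samples)})

    for row in summaries:
        print(row)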

A more flexible choice of architecture

PASC is also backed up by a computer science project supervised by Torsten Hoefler. The 33-year-old assistant professor runs the Scalable Parallel Computing Lab at ETH Zurich’s Institute of Computer Systems, and the lab’s name is also the focus of his research: his team increases the efficiency of computers via the software that issues the computational instructions. The central method for this is parallelisation: as many operations as possible should take place at the same time by sharing the work among as many processor cores as possible. “We want to find out how we can bring applications in high-performance computing up to highly parallel systems with several million cores”, says Hoefler. He is talking about performance-centric programming, in which all levels – from the parallel programming language and the compiler to the development tools – have to be taken into consideration, always with the aim of using the supercomputer as energy-efficiently as possible.
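A minimal Python sketch of this idea – not Hoefler’s actual tooling, just an illustration of sharing independent pieces of work across the available cores of a single machine:

    from concurrent.futures import ProcessPoolExecutor
    import math

    def heavy_task(n):
        # Hypothetical stand-in for an expensive, independent piece of work.
        return sum(math.sqrt(i) for i in range(n))

    if __name__ == "__main__":
        workloads = [2_000_000 + i for i in range(8)]   # eight independent jobs
        # Each job runs in its own process, so the work is spread over the cores.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(heavy_task, workloads))
        print(results)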

In PASC, Hoefler and his team concentrate on the compiler, which translates programs written in a human-readable programming language into efficient machine code. Starting from a popular compiler, Hoefler wants to develop a heterogeneous one that translates and optimises applications for different computer architectures. If he succeeds, the operators of computer centres and their users will be more flexible in their choice of computer architecture. Until now, this choice has been limited because programs are usually geared towards a particular architecture. In other words: code that was developed for conventional CPUs runs less efficiently on a machine with GPUs – if at all. Consequently, users keep having to adjust their programs whenever a new computer architecture comes into operation.
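The portability problem can be sketched in a few lines of Python: the same array computation is written once and then runs on the CPU with NumPy or, where a suitable GPU and the optional CuPy library (which mirrors much of NumPy’s interface) are available, on the GPU. Making real simulation codes portable in this way is far harder, which is precisely what a heterogeneous compiler is meant to address:

    import numpy as np

    try:
        import cupy as cp   # optional; only present on machines with a suitable GPU
        xp = cp
    except ImportError:
        xp = np

    def kernel(n):
        # The same data-parallel computation, expressed against whichever module was chosen.
        a = xp.arange(n, dtype=xp.float64)
        return float(xp.sum(xp.sqrt(a) * 2.0))

    print(kernel(1_000_000))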

Hoefler now wants to change all this. In close collaboration with three other PASC projects from the fields of climate, earth and materials sciences, Hoefler is looking for particular sequences in the program codes that are suited to different types of computer hardware. On the one hand, he is searching for sequences that are just the ticket for so-called “low latency” processors, which are designed to solve a single complex calculation as quickly as possible. On the other hand, he is also on the lookout for sections that allow many parallel calculations and are thus ideal for processors that are individually slower but deliver high throughput. “It’s like the difference between a Porsche and a bus”, says Hoefler. “I get to Munich faster with the Porsche than by bus, but I can only take a maximum of two people as opposed to sixty.”
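The two patterns can be made concrete with a toy example: the first function below carries a dependency from one step to the next and therefore benefits from a single fast Porsche-like core, while the second consists of many independent element-wise operations that a throughput-oriented, bus-like processor can work through side by side (purely illustrative):

    import numpy as np

    # "Porsche" pattern: each step depends on the previous one, so the chain cannot be
    # split up; what matters is how quickly a single core finishes each step (low latency).
    def sequential_chain(x, steps):
        for _ in range(steps):
            x = 0.5 * (x + 2.0 / x)   # Newton iteration converging to sqrt(2)
        return x

    # "Bus" pattern: many independent element-wise operations that can be handed out
    # to numerous slower processing units at once (high throughput).
    def data_parallel(n):
        a = np.arange(1, n + 1, dtype=np.float64)
        return float(np.sqrt(a).sum())

    print(sequential_chain(1.0, 20))
    print(data_parallel(1_000_000))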

For an application’s code lines to use the computer architecture efficiently, a programmer still has to decide – and program by hand – which code sections are allowed to take the Porsche and which should take the bus, a painstaking task. Hoefler would now like to automate this process – the “holy grail of automatic compilation”, in which many have already failed, he stresses. Yet he is confident that he is up to the task, for the conditions that PASC and the innovative supercomputer Piz Daint provide are ideal for tackling this central problem. “With Piz Daint, Switzerland is technologically on a par with the USA and years ahead of many other computer centres. It’s right up there in pole position in the Grand Prix of supercomputing.”

Maximum information content

Another discipline to benefit from this development is the life sciences, which were already represented in HP2C with several projects and now rank among the key users of PASC. And here, too, the aim is to process the spiralling amounts of data in a sensible way. Statistics makes a valuable contribution in this respect, for instance in genetic studies where tens of thousands of characteristics are recorded per test subject. Mathematician Peter Bühlmann, a professor at ETH Zurich’s Seminar for Statistics, collaborates with biologists to develop statistical methods that enable them to assess the relevance of every single attribute. For instance, the programs determine whether an unusual peak, an outlier, is relevant or has merely cropped up by chance. “With our models, we provide something like an error bar, which can be used to determine what’s in the relevant range”, explains Bühlmann.
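The principle can be illustrated with a small sketch – not Bühlmann’s actual method, just synthetic data in which a handful of characteristics carry a real signal and the rest are noise, and only those whose error bar clearly excludes zero are flagged as relevant:

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_features = 100, 1000

    # Synthetic data: most characteristics are pure noise, the first ten carry a real signal.
    data = rng.normal(size=(n_subjects, n_features))
    data[:, :10] += 0.5

    means = data.mean(axis=0)
    stderr = data.std(axis=0, ddof=1) / np.sqrt(n_subjects)

    # Wide error bars (roughly adjusted for testing many characteristics at once):
    # a characteristic is flagged as relevant only if its whole interval lies away from zero.
    lower, upper = means - 4.0 * stderr, means + 4.0 * stderr
    relevant = np.where((lower > 0) | (upper < 0))[0]
    print("characteristics flagged as relevant:", relevant)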

Besides software, he and his team primarily provide models that specify the framework for simulations. On this basis, the biologists can reduce their data by eliminating what is unusable. Far more important, according to Bühlmann, is that the scientists obtain more accurate information through this information extraction: “The maximum information content can be fished out of the sea of data.” For this purpose, researchers in the Bioconductor project are developing software based on the popular statistics software “R”. The software is open-source – not least to guarantee reproducibility. It is not the kind of software where you feed your data in at one end and it spits the result out at the other, stresses Bühlmann. Just as the mathematicians have to have some understanding of the biologists’ data and their way of working, the biologists also have to learn to work with the programs.

Statistical selection of the best

Bühlmann states that he does not simply receive “a bucket full of data” from a researcher with instructions to look at what is inside. Instead, it is a question of concrete issues and hypotheses. For instance, Bühlmann collaborates with Ruedi Aebersold, a professor at the Institute of Molecular Systems Biology at ETH Zurich and the University of Zurich. Based on short peptide segments, the two scientists have developed a mathematical model that can show which proteins occur, and how frequently, in healthy and diseased tissue, and how relevant these proteins are for a particular disease. The model forms the basis for a future non-invasive diagnostic procedure for prostate cancer.

Bühlmann has also successfully completed a project with Wilhelm Gruissem, a professor of plant biotechnology at ETH Zurich, who was searching for a gene that makes the Arabidopsis plant grow and flower faster. The researchers measured the expression of 20,000 genes per plant in 43 wild varieties from different regions. They then processed the data using the statistical method they had developed and produced a top-twenty ranking of the most promising candidate genes for “growth accelerators”. Field trials were subsequently conducted with these genes, some of which actually hit the bull’s eye, reveals Bühlmann. It seems, then, that statisticians and computer scientists do not just increase the efficiency of calculations and supercomputers; they also speed up research in the natural sciences.
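As a rough illustration of this kind of ranking – not the method actually used in the study – the following sketch correlates synthetic expression values of 20,000 genes with a growth measurement across 43 varieties and picks out the twenty strongest candidates:

    import numpy as np

    rng = np.random.default_rng(1)
    n_varieties, n_genes = 43, 20_000

    expression = rng.normal(size=(n_varieties, n_genes))   # synthetic expression levels
    growth = rng.normal(size=n_varieties)                  # synthetic growth measurement
    growth += expression[:, :5].sum(axis=1)                # here, five genes truly matter

    # Correlate every gene's expression with growth in one matrix-vector product.
    x = (expression - expression.mean(axis=0)) / expression.std(axis=0)
    y = (growth - growth.mean()) / growth.std()
    corr = x.T @ y / n_varieties

    top20 = np.argsort(-np.abs(corr))[:20]
    print("top candidate genes:", top20)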

Source

Globe, the magazine of ETH Zurich and ETH Alumni, June 2014, Focus “Data science”