January 14, 2013 - by Simone Ulmer

The Computational Science and Engineering Laboratory has a gallery (www.cse-lab.ethz.ch) of fascinating visualisations. Most of them have been made by you. Do you consider visualisation to be fun?

Rossinelli: Everything started when I was an undergraduate student at ETH Zurich. During my master's studies, I took my minor in computer graphics. After that, I went to a medical company, Varian Medical Systems, for an internship. There I had the chance to play and experiment in the context of scientific visualisation. During this internship I had a very supportive advisor, Tor Hildebrand, who allowed me to just play with whatever I had in mind. I also had the opportunity to buy graphics cards and program them, based on what I read in papers. The main problem was that a single graphics card does not have a lot of memory. When I came to ETH Zurich for my PhD, I realised that the visualisation software available in my lab did not produce high-quality visualisations, because it did not take advantage of these graphics cards; it was mainly running on a single CPU core. It became clear to me that we could achieve much better visualisation quality if we could tap the compute power of the graphics cards. The challenge compared to medical visualisation was that I had two orders of magnitude more data, which does not fit on one GPU. I therefore had to come up with different ways of organising the data.

Behind all of these beautiful videos and images are complex mathematics and physics. Could you explain what the objectives and main topics of your research are?

I am trying to combine advanced numerical techniques and supercomputing technologies. It is an interdisciplinary effort to combine these two very powerful things in order to solve problems in science in a disruptive way. In general, I try to develop software based on efficient algorithms that can be effectively mapped onto the hardware of HPC platforms, with the goal of addressing challenging scientific problems.

What kind of problems?

One typical group of problems is optimisation tasks in engineering -- for example, silo discharge. The material is granular, and when you open the valve, the material sometimes clogs and cannot go through. This is a situation you want to avoid. In addition, you want to have control over the mass flow. In the simulations, you therefore play with the width of the silo outlet. Another example is fluid flow for biomedical applications. Here, we focus on shock-induced collapse, as in lithotripsy. This is more fundamental research and not as applied as the previous example. In vortex dynamics, an apparently simple problem that has nevertheless not been solved so far for Reynolds numbers of 50,000 or above is flow past a sphere. There is currently no software that can simulate this. We would like to study this phenomenon because it gives you fundamental insight into fluid dynamics.

Are your visualisations mainly a tool for you to understand your own research or are they a tool for explaining your research to outside people?

They proved to be useful in research as well, because you can do a better analysis of the simulation results, either with a close-up view at higher resolution or with a better overview. Visualisation accelerates our own learning, but it is also true that we use it for educational purposes and outreach.

Would it be possible for you to work without visualisation?

I don’t think that it is really possible to work in computational fluid dynamics without doing visualisations. To not use the two eyes that evolution gave us would be like working in the dark or programming with one hand.

Petros Koumoutsakos, Babak Hejazialhosseini and you recently won the Milton Van Dyke Award. What does this prize mean for you and the CSE Lab?

We are certainly happy, and I think it simply confirms how strong the group is, not only in visualisation but also in general.

The previously published video shows an object like a jellyfish. This one however shows something completely different, a shock wave in the air that is directed at a helium bubble. What can we learn from such a simulation?

The shock-bubble interaction is a phenomenon present in different engineering and biomedical applications, e.g. the treatment of kidney stones or high-speed combustion. The idea is that you don't want to destroy the target, i.e. the kidney stone, directly with a shock; instead, you use the bubble as an intermediate tool. The bubble is capable of developing strong pressure gradients that then destroy the target, due to the "water hammer" effect. It's amazing that these things were already being studied in the 1980s or even earlier, and still their full mechanism has not been entirely explored. We still don't know how to place two bubbles to amplify the destruction of the target, or whether a cloud of bubbles can give you the same end effect as one big bubble. The problem is still too complex, and we need new, more powerful computational tools to achieve the resolutions necessary to understand these phenomena. If you look at the development of computing power, over a certain time span you can gain three orders of magnitude, i.e. a factor of 1,000, in acceleration from the computing hardware, and six orders of magnitude, i.e. a factor of one million, by improving the algorithms. This means that, in the future, we can only address such problems by having the right algorithms.

It seems that you try to find analogies and inspiration in nature. What can we learn from nature?

Nature is highly parallel. Every molecule interacts with other molecules, all at the same time. That tells us that all natural processes are somehow intrinsically parallel, at least at the microscopic scale. We are going in the same direction in the context of parallel algorithms. Another thing that we learn from nature is that simple rules can lead to very complex phenomena. This is encouraging, because it means that with relatively simple models we can capture complex phenomena. On the downside, however, nature tells us that we will never ever have enough computer memory to capture all the details of natural processes. That's the challenge. We are trying to tackle it by using wavelets, a method to represent and also analyse data. We use this compression technique to identify what is important in a process and what is not, in order to retain only the important information, because the memory of even the largest supercomputers is limited.

How do wavelets work?

Wavelets basically provide a way of representing information by identifying correlations in the data. They are based on the so-called refinement equation: a function that can be expressed as a linear superposition of smaller functions of the same type. By blending functions of the same shape, which are just slightly translated and dilated, you can reproduce the big one. The coefficients associated with the small functions can then very likely be replaced by just one coefficient, the one associated with the big function. This is where the compression starts.
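To illustrate the idea, here is a minimal sketch, not the CSE Lab's actual code, using the simplest wavelet (the Haar wavelet): the data is split into coarse averages and fine "detail" coefficients, and details below a threshold are discarded. The function names, signal, and threshold are chosen purely for illustration.

```python
import numpy as np

def haar_forward(signal):
    """One level of the Haar wavelet transform: coarse averages and fine details."""
    even, odd = signal[0::2], signal[1::2]
    averages = (even + odd) / np.sqrt(2.0)   # coarse approximation
    details = (even - odd) / np.sqrt(2.0)    # local fluctuations
    return averages, details

def haar_inverse(averages, details):
    """Reconstruct the signal from averages and (possibly thresholded) details."""
    signal = np.empty(2 * averages.size)
    signal[0::2] = (averages + details) / np.sqrt(2.0)
    signal[1::2] = (averages - details) / np.sqrt(2.0)
    return signal

# Smooth data: most detail coefficients are tiny and can be dropped with
# almost no loss -- this is where the compression comes from.
x = np.sin(np.linspace(0.0, np.pi, 1024))
avg, det = haar_forward(x)
threshold = 1e-3
det_compressed = np.where(np.abs(det) > threshold, det, 0.0)

kept = np.count_nonzero(det_compressed)
error = np.max(np.abs(x - haar_inverse(avg, det_compressed)))
print(f"kept {kept}/{det.size} detail coefficients, max error {error:.2e}")
```

In practice one applies such a transform recursively and in several dimensions, but the principle is the same: where the data is well correlated, the detail coefficients are negligible and only the coarse ones need to be stored.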

What is for you the main fascinating aspect in your research?

Thanks to my supervisor Petros Koumoutsakos, I have the opportunity to play with leading-edge computing technology, and I find it fascinating to couple this hardware technology with leading-edge numerical techniques. I have spent most of my research time working with extraordinary people like Babak Hejazialhosseini and Michael Bergdorf. Each of us comes from a different background, but we share the same interdisciplinary vision. That drives me, and it's why I am still here.

Do you have absolute freedom in your research?

So far, I have been lucky to be very well aligned with Petros' goals. The fact that we have the same scientific objectives gives me a lot of freedom.

You have been at ETH for a long time…

Yes, but I am looking for a job (laughs).

Would you like to stay in academia?

Maybe. Most industrial enterprises want either to push HPC and keep the numerical approaches simple, or to push numerical techniques and keep the HPC effort low. But as I described before, the key is to push both.

What are your plans for the future?

One of the performance challenges in supercomputing is that we have to deal with relatively poor memory bandwidth, due to DRAM technology. It is very unlikely that simulations on future supercomputers will be compute-bound; rather, they will be bound by memory size and bandwidth. That means that you can't exploit the full compute power of the supercomputer, because your application spends a lot of time in memory transfers. This problem has not been carefully addressed so far, and it is something that will matter more and more in the future. My personal research challenge would be to investigate how different compression techniques can help solve it. With compressed data you increase the virtual bandwidth. The compression could be computationally expensive, but because we can't fully exploit the available compute power anyway, we have a free budget of CPU operations. If we spend a fraction of this budget of free floating-point operations on compressing the data, we may get a higher effective memory bandwidth. In general, what I would like to do in the next ten years is to finally tackle challenges that have not been solved so far and to do something that significantly improves the quality of human life.
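As a rough illustration of this trade-off (with hypothetical numbers, not figures from the interview): if the data is stored compressed, the solver effectively sees the raw DRAM bandwidth multiplied by the compression ratio, as long as the spare compute budget is large enough that (de)compression does not become the new bottleneck.

```python
def effective_bandwidth(raw_bw, ratio, codec_bw):
    """Uncompressed GB/s delivered when data is stored compressed.

    raw_bw   -- DRAM bandwidth in GB/s
    ratio    -- compression ratio (uncompressed size / compressed size)
    codec_bw -- (de)compression throughput in GB/s of uncompressed data
    Assumes transfer and decompression are overlapped, so the slower of
    the two stages limits the pipeline.
    """
    transfer_bw = raw_bw * ratio          # uncompressed GB/s the memory bus can feed
    return min(transfer_bw, codec_bw)

# Hypothetical numbers: 100 GB/s DRAM, 4x compression.
print(effective_bandwidth(100.0, 4.0, 250.0))   # 250.0 -> limited by the codec
print(effective_bandwidth(100.0, 4.0, 1000.0))  # 400.0 -> 4x "virtual" bandwidth
```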
