August 15, 2018 – Interview by Simone Ulmer

I was somewhat disillusioned after your keynote.
Alice Gabriel: Why was that? Because there were questions I couldn’t answer?

No, it wasn’t that. I had never heard anyone speak so candidly about the problems of seismic simulations before. In particular, how the models still rely on a large number of theoretical assumptions that make the findings unreliable.
We need to distinguish here between wave motion through the Earth and what happens at the earthquake source. In classical seismology, the focus is on wave propagation. Earthquakes are understood in their most simplified form as point sources that generate a lot of useful data, which seismologists use to simulate global wave propagation. According to geophysicist Jeroen Tromp, this forward problem has been solved and works well nowadays. The challenge now is to work backwards and reverse the calculation at high resolution. This is referred to as an inverse problem and is used to learn more about what the Earth’s interior looks like. The method always assumes that we know what the earthquake source looks like. However, we don’t really know. That is what we want to find out in our simulations. My research focuses on what happens at the earthquake source, on the local ruptures and their dynamics.
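
To make the distinction concrete, here is a schematic, heavily simplified sketch of a linearised forward and inverse problem of the kind described above; the operator, model and noise level are invented for illustration, and real seismic inversion is vastly larger and, at the earthquake source, strongly nonlinear.

```python
import numpy as np

# Schematic linear(ised) view: forward problem d = G m; inverse problem:
# recover m from noisy observations d. Everything here is invented for
# illustration, not taken from any actual tomography or source inversion.
rng = np.random.default_rng(42)

n_data, n_model = 50, 10
G = rng.normal(size=(n_data, n_model))                  # forward operator (wave sensitivities)
m_true = rng.normal(size=n_model)                       # "true" Earth / source parameters
d_obs = G @ m_true + 0.05 * rng.normal(size=n_data)     # synthetic observations plus noise

# Damped least-squares (Tikhonov-regularised) inverse
lam = 0.1
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n_model), G.T @ d_obs)
print("model misfit:", np.linalg.norm(m_est - m_true))
```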

Where does the challenge lie?
In contrast to wave propagation, we cannot obtain such large amounts of data, because we have no direct access to the earthquake source. We cannot definitively say which physical processes are dominant or relevant in an earthquake. Moreover, the process is not linear, owing to the friction between the tectonic plates. This is why the inverse approach is anything but straightforward when applied to the earthquake source.

So the problem lies in correctly depicting the rupture and what exactly is happening to it?
Exactly. We do not know when an earthquake will strike, at what speed it will spread, whether the crust will break piece-wise or as a smooth continuous fracture, and we do not know when it will stop. Models exist for this, but they are heavily simplified. We still do not understand the physics of earthquake sources.

This is why the models rely on assumptions to a large extent.
Exactly. Our knowledge comes from laboratory experiments where we are working on completely different scales, or from drilling directly into the fault zone, such as the one off the coast of Japan where the devastating Tohoku quake occurred in 2011. Similar drilling has also been going on for some time in the San Andreas Fault in California. Through these, we hope to gain a better understanding of earthquake mechanisms.

Each fault zone is unique, whether in its geometry, its rock composition or the prevailing physical conditions. Can the knowledge gained from individual drilling missions simply be applied to simulations of other earthquake regions?
Our basic physical assumptions are kept very broad: the laws of rock fracture mechanics date back to experiments in the early 1970s. However, we now use modern variants of these laws, which take into account the specific conditions of the fracture zone, such as the fluid content or type of rock, and the high slip rates during earthquakes. For example, the friction laws in simulations of the Sumatra subduction scenario are parameterised differently from those for scenarios of the 1992 Landers crustal earthquake in California.
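
To illustrate what such a modern friction law looks like, below is a minimal sketch of a Dieterich-Ruina rate-and-state formulation, one widely used variant; all parameter values are generic placeholders, not the parameterisations used in the Sumatra or Landers scenarios.

```python
import numpy as np

# Illustrative parameters only, not calibrated to any real fault or scenario.
MU0, A, B, V0, DC = 0.6, 0.010, 0.015, 1e-6, 0.02

def rate_and_state_friction(V, theta):
    """Dieterich-Ruina rate-and-state friction coefficient.
    V: slip rate (m/s); theta: state variable (s)."""
    return MU0 + A * np.log(V / V0) + B * np.log(V0 * theta / DC)

def aging_law_rate(V, theta):
    """State evolution ('aging law'): d(theta)/dt = 1 - V * theta / Dc."""
    return 1.0 - V * theta / DC

# Steady-state friction (theta = Dc / V) at a slow and a fast slip rate:
# with A < B the fault is velocity-weakening, i.e. it can host earthquakes.
for V in (1e-6, 1.0):
    theta_ss = DC / V
    print(f"V = {V:.0e} m/s -> steady-state friction {rate_and_state_friction(V, theta_ss):.3f}")
```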

In 2017, you and your team were the first to simulate the entire fracture zone of the Sumatran-Andaman earthquake that occurred in Indonesia on 26 December 2004, which caused a devastating tsunami. Why was this possible only now?
Very few research groups attempt, as we do, to simulate such earthquakes on a scale equivalent to nature. Normally, this is done only two-dimensionally or in small, highly simplified models. The challenge lies in the range of scales that have to be taken into account. We not only have the roughly 1,500 km long fracture zone, but also the seismic waves propagating away from it. Then we have to link the whole thing to the tsunami triggered by the earthquake. This gives us a three-dimensional and very large modelling domain. At the same time, we have to resolve, within the fracture zone, how the stress at the rupture front diminishes as the earthquake spreads. This happens on very small scales of a few metres or centimetres. If we wanted to reproduce this completely realistically, we would even have to take millimetres into account, but this is impossible with the static mesh we use.
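
A rough back-of-the-envelope calculation shows why a uniform mesh cannot bridge these scales; the domain dimensions below are invented for illustration and are not the actual model setup.

```python
# Back-of-the-envelope scale separation (invented numbers, for illustration only):
# a uniform mesh resolving the rupture front at ~1 m everywhere in a domain
# spanning the ~1,500 km long fault would need an astronomical number of elements,
# which is why resolution is concentrated near the fault and coarsened elsewhere.
domain = (1_500e3, 500e3, 200e3)   # assumed domain extent in metres (x, y, z)
h_fine = 1.0                       # target resolution near the rupture front (m)

uniform_elements = (domain[0] / h_fine) * (domain[1] / h_fine) * (domain[2] / h_fine)
print(f"uniform 1 m mesh: ~{uniform_elements:.1e} elements")   # ~1.5e17, far out of reach
```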

However, you have already successfully carried out similar simulations in the past.
In 2014, we presented a simulation of the Landers earthquake in California at the Supercomputing Conference (SC14). This is a classic example of a segmented fracture system that science still has difficulty with. It is still challenging to understand how an earthquake can jump from one segment of the fault to another. We were able to show mechanically viable rupture jumping and branching in the simulation; for this, all the codes were optimised in collaboration with the group of Prof. Michael Bader, a computer scientist at the Technical University of Munich (TUM). Even with this code, however, we would not have been able to simulate the Sumatran-Andaman quake within a useful timeframe.

What did you do differently in the case of the Sumatran-Andaman earthquake?
The trick we have now applied is the local time stepping method, which we incorporated into the code. The code can thus advance each element with a time step that corresponds to the size of that element. This means that the smallest element in the mesh no longer dictates the time step of the entire simulation. This is a decisive breakthrough for simulations of subduction quakes, since the angles at which these megathrusts intersect the underwater mountains and valleys are very narrow. These structures are complex and very difficult to resolve. Our main idea was to use the simulated ground movements, at higher resolution, as the initial condition for the tsunami simulation and to link the two together.
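
As a rough illustration of the idea, here is a minimal sketch of clustered local time stepping, in which elements are grouped into clusters whose time steps are powers of two times the smallest step; this is a simplified stand-in, not the actual implementation in the production code.

```python
import numpy as np

def cluster_time_steps(dt_elements, rate=2):
    """Group per-element CFL time steps into clusters with steps dt_min * rate**k,
    so each cluster advances at its own pace instead of every element marching
    with the single smallest time step in the mesh."""
    dt_min = dt_elements.min()
    # cluster index k: largest k such that dt_min * rate**k <= dt_element
    return np.floor(np.log(dt_elements / dt_min) / np.log(rate) + 1e-9).astype(int)

# Illustrative per-element time steps (seconds), e.g. element size / wave speed
dt = np.array([1e-4, 1.2e-4, 8e-4, 3e-3, 2.5e-3, 1e-2])
clusters = cluster_time_steps(dt)
for k in np.unique(clusters):
    print(f"cluster {k}: time step {dt.min() * 2**k:.1e} s, {np.sum(clusters == k)} elements")
```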

And that’s exactly what you did. What did this simulation show you? Were there surprising insights?
We wanted to see whether the tsunami was generated by the main fracture zone alone or whether the smaller faults also played a role; these are known as splay faults and are pop-up fractures that branch off from the large fault zone at small angles. Given the size of the tsunami, the first hypothesis was that these minor faults must have played a role. Research ships were sent there to investigate. But with the Tohoku earthquake and tsunami in Japan, it became clear that a single fracture zone is capable of generating a destructive tsunami. Our model also showed that the small fault zones in the Sumatra earthquake did not play a significant role in the development of the tsunami.

When exactly does a tsunami occur?
The ocean is like a standing water column. To create a tsunami, the earthquake must excite this water column at exactly the right frequency. Earthquakes are usually too fast for this. To produce a tsunami, the earthquake must be very strong, generate a lot of ground motion and at the same time be very slow. That is why tsunami early warning systems that rely on rapid earthquake information alone often fail.
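
To put rough numbers on this mismatch of timescales, below is a small, purely illustrative calculation comparing the shallow-water tsunami speed sqrt(g*h) with a typical seismic rupture speed; the ocean depth and rupture speed are textbook-style values, not results from the simulations discussed here.

```python
import math

g = 9.81           # gravitational acceleration (m/s^2)
h = 4000.0         # assumed open-ocean depth (m), illustrative

tsunami_speed = math.sqrt(g * h)   # shallow-water phase speed, roughly 200 m/s
rupture_speed = 2500.0             # typical seismic rupture speed (m/s), illustrative

print(f"tsunami phase speed: {tsunami_speed:.0f} m/s (~{tsunami_speed * 3.6:.0f} km/h)")
print(f"rupture speed:       {rupture_speed:.0f} m/s, "
      f"roughly {rupture_speed / tsunami_speed:.0f} times faster")
# Efficient tsunami generation requires large, relatively slow seafloor uplift
# over a broad area; most ruptures are far faster than the water waves they excite.
```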

Are you saying that current early warning systems are useless?
We can use them to determine the magnitude of the earthquake very quickly, but the special “tsunami-specific” properties cannot be determined so easily in real time. That is a big problem.

Is there any point in early warning systems at all then?
After the Sumatra earthquake, the GFZ German Research Centre for Geosciences in Potsdam installed such a system in the Indian Ocean for the first time. This type of system is particularly challenging in Indonesia: extremely short warning times of between 20 and 40 minutes make the early phase of the warning process particularly important, yet this is precisely where the uncertainties about the earthquake source have the greatest influence. If early warning systems are based on earthquake information alone, false alarms are inevitable. For this reason, the German-Indonesian early warning system was also combined with tsunami measuring stations. Nonetheless, false alarms remain a problem, because they cause people to lose confidence in the system. Even in established early warning systems in the US or Japan, alarms are often triggered without a tsunami following.

Will it ever be possible to predict earthquakes? These days, one frequently reads the misleading term “earthquake prediction”.
We tend to avoid that term and use “earthquake forecasting” instead. The best thing we can do is to find out how the ground will move during the next quake, what consequences this will have and which regions may be affected, for example if a magnitude 8 earthquake strikes Sumatra again or a magnitude 9 occurs at the San Andreas Fault. Of course, we already know where the faults are and in what time frame a new earthquake can be expected. This is relevant for earthquake engineering, for example, so that houses and other buildings can be constructed with the necessary stability. Nowadays, aftershock forecasts are almost as reliable as a weather forecast. For this, we have statistical models with which we can determine the probabilities of aftershocks and their magnitudes.
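
As an illustration of what such statistical aftershock models build on, here is a minimal sketch of the Omori-Utsu decay law for aftershock rates; the parameter values are generic placeholders, and operational models such as ETAS combine this decay with a magnitude distribution and with triggering by the aftershocks themselves.

```python
# Illustrative parameters; in practice K, c and p are fitted per aftershock
# sequence and combined with a magnitude distribution (Gutenberg-Richter)
# in operational models such as ETAS.
def omori_utsu_rate(t, K=100.0, c=0.1, p=1.1):
    """Omori-Utsu law: aftershock rate n(t) = K / (t + c)**p, with t in days."""
    return K / (t + c) ** p

for t in (0.1, 1, 10, 100):   # days after the mainshock
    print(f"day {t:>5}: ~{omori_utsu_rate(t):.1f} aftershocks/day above the reference magnitude")
```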

What is the biggest geophysical problem you face in your work?
That the initial conditions we need for our simulations are still largely unknown: the pre-stress, the forces acting in the early stages of the rupture and the strength of the fracture zone. This means that we usually have to run several simulations to cover the range of possibilities. Of course, this leads to the methodological problem that we are currently unable to apply a Monte Carlo method to such large-scale earthquakes, since we would need about a million forward models. In contrast to seismic wave propagation, such an approach does not yet exist for fracture zone dynamics. We really do run individual forward simulations. At most, we can run 100 to 200 simulations with manually varied parameters and then evaluate them statistically. Uncertainty quantification and the coverage of a large parameter space with very uncertain parameters are the major unresolved problems for which we are often criticised.
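
For illustration, here is a minimal sketch of the kind of small, manually defined parameter ensemble described above; the parameter names, ranges and the placeholder run_scenario function are invented for this example and do not reflect the actual simulation setup.

```python
import itertools

# Purely illustrative ensemble over uncertain initial conditions
# (pre-stress level, fault strength, nucleation depth).
prestress_ratios  = [0.3, 0.5, 0.7]        # relative pre-stress levels
static_frictions  = [0.4, 0.55, 0.7]       # proxies for fault strength
nucleation_depths = [10e3, 20e3, 30e3]     # metres

def run_scenario(prestress, friction, depth):
    """Placeholder for one expensive forward simulation; here it just
    returns a fake scalar outcome (a stand-in for, e.g., magnitude)."""
    return 8.0 + prestress - friction + depth / 1e5

ensemble = [
    (p, f, d, run_scenario(p, f, d))
    for p, f, d in itertools.product(prestress_ratios, static_frictions, nucleation_depths)
]
outcomes = [row[-1] for row in ensemble]
print(f"{len(ensemble)} forward runs")   # 27 here; around 100 to 200 in practice
print(f"outcome range: {min(outcomes):.2f} to {max(outcomes):.2f}")
```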

What do you reply to your critics?
We see our models as an integrative approach to the analysis of various research results, from laboratory measurements to deep drilling, from geotectonic analysis to geodetic measurements on realistic scales. We do this based on physics, following the laws of rock mechanics and wave propagation. With this approach, it is possible to gain valuable insights into which of the many potentially interesting effects actually influences fracture mechanics and seismic hazards.

In contrast to tomography, physical models can test competing scientific hypotheses — for example, the much-discussed controversy over weak or strong frictional resistance at active fracture zones.

Our models show a high degree of “uniqueness”; i.e. only a few sets of initial conditions deliver physically plausible results. We identify these in several smaller-scale simulations and then compute the best model at high resolution, on the scale of entire supercomputers.

Seismology is a data-driven science: promising alternatives to simulations include better methods for measuring earthquake sources in situ and new approaches to evaluating existing measurements, for example those based on seismic arrays. We are working to take the new findings gained in this way into account in our own work.

What are the challenges in the field of high-performance computing?
These lie in the geometry and structure of the meshing, which I have already mentioned. We need to create a high-quality mesh for each scenario. This is sometimes difficult with complex geometries, for example where terrain topography, with its mountains and valleys, intersects earthquake fault zones. This is why we currently have projects in which we try to take this step away from the user. For exascale computing in particular, the models will at some point simply be too large to be handled manually.