Solving the Mystery of Neanderthal Man in the Supercomputer

Around 15,000 years after modern humans, Homo sapiens, came to Europe, Homo neanderthalensis disappeared for good. Now a supercomputer is to help assess which of the current hypotheses can explain their extinction.

April 2, 2012 - by Simone Ulmer

Neanderthals were used to a cool climate, but still died out during the peak of the last ice age. (Image: Homo Neanderthalensis - Homo Sapiens: A Portrait. Stefan Auf der Maur, Marcia S. Ponce de Leon, Christoph P.E. Zollikofer, 2008)

From around 200,000 to 20,000 years before our time, at least two hominid species, Neanderthal man and modern man, lived on Earth at the same time. Whereas modern humans in Africa and Neanderthals in Europe lived separately from one another, in the Near East Homo sapiens and Homo neanderthalensis lived alongside one another for around 100,000 years. Just why the Neanderthals suddenly died out more than 20,000 years ago remains unexplained to this day.

Surviving because others died out

For Christoph Zollikofer, Professor of Anthropology at the University of Zurich, the key point of interest in relation to the phenomenon of extinction is the disappearance of populations. “The survivors do not survive because they are the fittest, but for the reason that the others died out. That is an altered perspective that is rather unusual in the study of human evolution”, Zollikofer emphasises. In order to reassess the Neanderthal extinction, he and his team are now pursuing new methods: with the aid of the CSCS supercomputer “Monte Rosa”, they plan to simulate the extinction event in an agent-based model.

Such models are increasingly gaining in importance, for example for simulating the behaviour of crowds of people in a mass panic. In this modelling technique (see also the box), each individual (human or animal) in the system to be modelled is represented as a discrete object, which is assigned specific characteristics such as height, sex, weight or energy consumption. Via an algorithm, each individual is also given the ability to seek food, to interact with other individuals, or to move to regions where better living conditions prevail. For this complex bottom-up approach, the scientists developed new codes and software, which are now to be optimally adapted to modern hardware.
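The bottom-up approach described above can be sketched in a few lines of much-simplified code. Everything here is illustrative, not the research group's actual implementation: the world is a one-dimensional ring of cells rather than a real landscape, and the characteristics, metabolism constant and movement rule are assumptions chosen only to show the principle of discrete agents foraging, dying and migrating.

```python
import random

class World:
    """Minimal environment: a ring of cells, each holding a food value."""
    def __init__(self, n_cells, seed=0):
        rng = random.Random(seed)
        self.food = [rng.uniform(0.5, 2.0) for _ in range(n_cells)]

    def harvest(self, cell):
        taken = self.food[cell] * 0.5      # an agent takes half the cell's food
        self.food[cell] -= taken
        return taken

    def neighbours(self, cell):
        n = len(self.food)
        return [(cell - 1) % n, (cell + 1) % n]   # 1-D ring for simplicity

    def quality(self, cell):
        return self.food[cell]


class Agent:
    """One individual with a few illustrative characteristics."""
    def __init__(self, weight, cell):
        self.weight = weight               # kg
        self.energy = 1.0                  # abstract energy reserve
        self.cell = cell

    def metabolism(self):
        return 0.005 * self.weight         # energy cost per step (assumed)

    def step(self, world):
        """One simulation step: forage, then move to the best neighbour."""
        self.energy += world.harvest(self.cell) - self.metabolism()
        if self.energy < 0:
            return False                   # starved: the agent dies
        self.cell = max(world.neighbours(self.cell), key=world.quality)
        return True


world = World(n_cells=100)
agents = [Agent(weight=60.0, cell=i) for i in range(0, 100, 10)]
for _ in range(50):
    agents = [a for a in agents if a.step(world)]   # survivors carry on
survivors = len(agents)
```

Even in this toy version, the population-level outcome (how many agents survive, and where) is not programmed anywhere; it emerges from the individual rules, which is precisely the point of the bottom-up approach.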

For this, the Swiss platform for High Performance and High Productivity Computing (HP2C) provides the ideal conditions for Zollikofer and his team. HP2C is an interdisciplinary cooperation between scientists, mathematicians, IT specialists and hardware manufacturers, who wish to adapt existing algorithms for modelling and solving complex scientific questions to future computer architectures in such a way that these can be used efficiently. The main instigators of HP2C are CSCS, with its director Thomas Schulthess, and the Università della Svizzera italiana, with its president Piero Martinoli.

The picture shows the population density generated by a simple simulation. (Image: Research group Zollikofer)

Support from the physical sciences and mathematics

Zollikofer praises the ideal conditions, but makes no secret of how difficult it was to put together a research team for this challenge. It was not in anthropology but in other disciplines that, after a long search, he and his colleague Jody Weissmann found support: the mathematician Wesley Petersen, the astrophysicists George Lake and Simone Callegari, and the biophysicist Natalie Tkachenko are now developing the codes for a model that is intended to unite short-term, local patterns of individual human behaviour with long-term, globally acting patterns of changing environmental conditions. With it, the researchers would like to run through various scenarios of interaction between Neanderthals and modern humans, taking into account fauna, climate conditions, topography and vegetation, in order to investigate the possible causes of the extinction of the Neanderthals in greater detail.

The agent-based models are to be combined with classic top-down diffusion models which are used for researching population dynamics. “With them, we want to test the validity of agent-based models, amongst other things”, says Weissmann. Tkachenko is researching the coupling between the diffusion models and the agent-based models.
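The classic top-down diffusion models mentioned here typically describe a population not as individuals but as a density that spreads and grows, in the style of a reaction-diffusion equation. As a minimal sketch of that idea (an illustrative textbook scheme, not the group's code), the following evolves a Fisher-KPP-type equation, du/dt = D·u_xx + r·u·(1 − u/K), with assumed parameter values:

```python
def fisher_kpp_step(u, D, r, K, dx, dt):
    """One explicit finite-difference step of du/dt = D*u_xx + r*u*(1 - u/K),
    with reflecting (no-flux) boundaries; stable for dt <= dx**2 / (2*D)."""
    n = len(u)
    new = list(u)
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]       # reflect at the edges
        right = u[i + 1] if i < n - 1 else u[i]
        lap = (left - 2.0 * u[i] + right) / dx ** 2
        new[i] = u[i] + dt * (D * lap + r * u[i] * (1.0 - u[i] / K))
    return new


# A population seeded at one end spreads through the habitat as a wave
# and settles at the carrying capacity K everywhere it reaches.
u = [0.0] * 200
u[0] = 1.0
for _ in range(2000):
    u = fisher_kpp_step(u, D=1.0, r=0.5, K=1.0, dx=1.0, dt=0.2)
```

Comparing the smooth population front produced by such a model with the density that emerges from many individual agents is one plausible way to cross-check the two approaches, which is the kind of validation and coupling Weissmann and Tkachenko describe.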

“What we aim to model corresponds to the saying that the flapping of a butterfly’s wings in China can cause a tornado in the USA”, says Zollikofer. He says that so far, no-one has tested whether something like that is possible, whether – analogously to the flapping of a butterfly’s wings – the local behaviour of a single individual could have effects on the entire human population.

Reducing a simulation period of years to a few weeks

With their project, the team would like to find out to what degree of detail the simulation of such complex relationships is possible. But before things get that far, a large number of software- and hardware-related problems remain to be solved in order to obtain a stable simulation environment, the researchers emphasise. In order to obtain meaningful results, each simulation must be carried out several hundred times with tens of thousands of individuals, under differing starting conditions. With the current prototype of the model, this would take years. Through the interdisciplinary cooperation within the framework of HP2C, Zollikofer and his team aim to carry out the simulations in a few months or weeks. The professor is convinced that such massively parallel, multi-agent-based simulations will become increasingly important, for example for questions such as the effect of migration in the modern world, or an evacuation in the event of a reactor accident.

Agent-based modelling:
The agent-based modelling method is used in various areas in which complex systems are to be modelled: for example, to predict how the stock market will develop or how an epidemic will spread, or, looking back, to investigate the decline of civilisations. The method is based on simulating the interaction of autonomous agents with one another and with their environment. The behaviour of the agents is described by rules ranging from simple (if-then) to complicated (adaptive artificial intelligence) ones. Each agent is assigned different characteristics and modes of behaviour, through which the agents all differ from one another. Modelling each individual and its interactions shows how patterns, structures and behaviours can emerge that were not programmed beforehand; this self-organisation develops only through interaction.

In order to determine the spatial movement of a Neanderthal individual, the researchers place it, as an agent in their model, in the centre of a hexagonal cell. To each cell they assign particular values for the topography, the climate, the available food and potential enemies. The cells make up a honeycomb grid, on which the agent determines in which direction it will most probably move.

Test runs on 20 CPUs, in which the interactions of a million “simple” agents are simulated over 100,000 simulation steps, take two days. One simulation step corresponds to a period of one day to around a week. The aim, however, is to simulate 10 million more complex agents over 1 million simulation steps with 1,000 repetitions. Under the aforementioned conditions, that would currently take around 600 years.
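The movement rule sketched in the box, an agent picking the direction in which it will "most probably" move, can be illustrated as a weighted random choice over the six neighbours of a hex cell. The direction labels and the habitat scores below are invented for illustration; in the real model each score would be derived from the cell's topography, climate, food and enemy values:

```python
import random

def choose_move(scores, rng=None):
    """Pick one of the neighbouring hex cells with probability proportional
    to its habitat score (higher = better conditions). `scores` maps a
    direction label to a non-negative score; all values here are assumed."""
    rng = rng or random.Random(42)         # fixed seed for reproducibility
    directions = list(scores)
    weights = [max(scores[d], 0.0) for d in directions]  # clamp negatives
    if sum(weights) == 0:
        return None                        # nowhere attractive: stay put
    return rng.choices(directions, weights=weights, k=1)[0]


# Scores for the six neighbours of one hex cell (illustrative values)
neighbour_scores = {"NE": 0.8, "E": 0.1, "SE": 0.0,
                    "SW": 0.3, "W": 0.0, "NW": 0.5}
move = choose_move(neighbour_scores)
```

The box's 600-year figure also follows from simple scaling: 10 times more agents, 10 times more steps and 1,000 repetitions multiply the test workload by 100,000, and 100,000 × 2 days is roughly 548 years, on the order of 600 years once the more complex agents are taken into account.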
Reference and further information:
http://www.palgrave-journals.com/jos/journal/v4/n3/pdf/jos20103a.pdf