Why don't weather forecasts use machine learning?

  • Artificial intelligence has long had its place in everyday life. Although machines are still a long way from real intelligence, they are clearly superior to humans in some areas - for example in recognizing hidden patterns.
  • Scientists around the world are trying to teach machines to learn - for example at the Jülich research site.
  • A physicist, a computer scientist and a doctor from Jülich tell us how AI can better predict the weather, recognize faces and even make medical diagnoses in the future.

Cologne -

The flood hits the village without warning. On May 29, 2016, a storm cell discharges over Braunsbach within a very short time. As much rain falls in one hour as usually falls in months. A huge flood wave forces its way through the village and sweeps away everything in its path: trees, cars, house walls. It leaves behind around 50,000 tons of rubble and debris in the small town northeast of Schwäbisch Hall. Total damage: over 100 million euros.

Even if weather forecasts have become more and more precise over the years, it is still difficult for meteorologists to give individual places such as Braunsbach early warning of heavy rain or local thunderstorm cells. This is due to the relatively coarse resolution of the regional weather models of the German Weather Service (DWD). “Anything smaller than three kilometers slips through the grid. The model then says, for example, that it is raining in an area of three by three kilometers - even if in reality blue sky and rain alternate within that area. That is usually not enough to reliably predict local precipitation,” explains Dr. Martin Schultz.

The physicist from the Jülich Supercomputing Center is therefore working in the DeepRain project to improve forecasts so that the authorities have enough time to warn of local thunderstorms and heavy rain. Artificial intelligence (AI) is to make this possible by searching for patterns in weather data that herald local weather extremes.

AI does not yet come close to the human brain

AI is an approach to emulating intelligent behavior with the help of computers. To do this, the machines learn, draw conclusions and correct themselves. But they are still a long way from reaching the human brain. Our brain works more energy-efficiently than any machine so far: it can draw meaningful conclusions from just a few examples, think flexibly, find unconventional solutions and establish relationships between completely different situations.

Machines, on the other hand, have the advantage of plowing stoically through mountains of data, tracking down hidden patterns in a jumble of information and recognizing far more complex patterns than humans can.


Algorithm

A series of instructions used to solve a specific problem. The individual steps must be unambiguous and carried out one after the other. Typically, an algorithm takes an input and produces an output. Examples of algorithms are computer programs and electronic circuits, but also building instructions or cooking recipes. Certain algorithms are classed as artificial intelligence.
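To illustrate this definition, an algorithm can be written down in a few lines of Python (a made-up toy example, not taken from the article): it takes an input, works through clear individual steps, and produces an output.

```python
# A minimal illustration of an algorithm: unambiguous steps,
# carried out one after the other, turning an input into an output.
def largest_number(numbers):
    """Return the largest value in a non-empty list."""
    largest = numbers[0]           # Step 1: start with the first number
    for n in numbers[1:]:          # Step 2: look at each remaining number
        if n > largest:            # Step 3: keep it if it is bigger
            largest = n
    return largest                 # Step 4: output the result

print(largest_number([3, 17, 8]))  # prints 17
```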

Artificial intelligence

This refers to machines that reproduce intelligent behavior on the basis of algorithms. AI spans a whole spectrum: computer programs that can play chess, chatbots that talk to users of social networks, certain sub-areas of robotics, and expert systems that are meant to help make optimal decisions in a limited field. Machine learning is considered a key technology of AI.

Machine learning

Behind this term are AI algorithms that learn from data and examples and thus solve tasks. They acquire “knowledge” from examples or by independently recognizing patterns in data, and can then use this knowledge to assess unknown data of a similar kind. The more data available to the algorithm, the more precise the recognition. Considerable progress has been made with the help of artificial neural networks.
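One of the simplest forms of learning from labelled examples can be sketched in a few lines of Python (a hypothetical toy example, not one of the Jülich systems): a nearest-neighbour classifier stores its training examples and assesses a new, unknown data point by looking at the most similar known one.

```python
# Illustrative sketch: a 1-nearest-neighbour classifier "learns" purely
# from labelled examples and then assesses unknown data of a similar kind.
def nearest_neighbour(training_data, new_point):
    """training_data: list of ((x, y), label) pairs.
    Returns the label of the example closest to new_point."""
    def distance(a, b):
        # squared Euclidean distance between two 2-D points
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    closest = min(training_data, key=lambda ex: distance(ex[0], new_point))
    return closest[1]

# Made-up toy data: (temperature, humidity) -> "rain" / "dry"
examples = [((15, 90), "rain"), ((25, 30), "dry"), ((12, 85), "rain")]
print(nearest_neighbour(examples, (14, 88)))  # prints rain
```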

Artificial Neural Networks

These are mathematical models that are inspired by the way the brain works. Signals are fed into units that are networked with one another. These artificial nerve cells process the information and use simple mathematical equations to generate additional signals that they pass on to downstream “cells”. In the end, an output layer produces a result. Several layers of these nerve cells, which are linked to one another in different ways, can lie between the input and output layers. When learning, the connections between individual cells are strengthened, weakened or changed. Advances in computer technology and the availability of large amounts of data have made deep learning possible in such artificial networks.
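The description above can be sketched in code (a deliberately tiny, hypothetical network with made-up weights, not a real model): each unit sums its weighted input signals, applies a simple mathematical equation, and passes the result on to the next layer.

```python
import math

# Sketch of one artificial "nerve cell": it sums weighted input signals
# and squashes the result with the logistic function before passing it on.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # output between 0 and 1

# Signals flow from the input layer through a hidden layer of two
# cells to a single output cell; the weights here are invented.
def tiny_network(x1, x2):
    h1 = neuron([x1, x2], [0.5, -0.3], 0.1)
    h2 = neuron([x1, x2], [-0.2, 0.8], 0.0)
    return neuron([h1, h2], [1.0, 1.0], -0.5)

print(tiny_network(1.0, 0.0))
```

In a real network, learning would adjust the weights; here they are fixed, so the sketch only shows how a signal travels through the layers.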

Deep learning

The term refers to machine learning in neural networks with many layers - the “deep” networks. Here, too, algorithms analyze large data sets and can then assess unknown data of a similar type. Because of the many layers, however, the network models are far more complex. The algorithm therefore has many degrees of freedom and can independently learn to extract the best, and possibly very complex, features for solving a task. When identifying faces, it can discover finer criteria that help with recognition, such as the distance between the eyes or the size of the nose. Programmers help the software “learn” by giving feedback on whether a result is right or wrong - but they do not correct the path that leads there.

One example is face recognition: the AI sorts photos of different people by features such as eye spacing, face shape and nose size - depending on what the programmers have specified. It then creates a pattern for each face and applies this knowledge to new pictures: it compares an image with the existing photo inventory and suggests who can be seen in it. In this way, it is taught to assess unknown data sets. This is one of the simplest forms of machine learning.

Get better with deep learning

For the pattern recognition needed for the improved weather forecast that Martin Schultz is striving for, the machines have to be capable of a little more. “The weather data contain complex temporal and spatial patterns. We do not know which of these are typical of heavy rain. We therefore feed the software with as much data as possible; it searches for patterns itself and then creates forecasts.”

Schultz relies on an advanced form of machine learning, deep learning: here, too, AI systems search through large amounts of data - in the case of DeepRain, weather data from previous years - but the researchers do not specify what is characteristic of extreme weather. Instead, they train the machine to find this out for itself.

“We don't know what patterns the AI is looking for. These can be things that we hadn't even begun to think of,” says Schultz. Afterwards, however, he and his colleagues can check whether the AI's forecast was correct - that is, whether it actually rained heavily on the day in question - and report this back to the software. Through constant repetition, the AI “learns” which patterns best predict heavy rain.

Machines can learn in a similar way to humans

The way deep learning works is roughly similar to the learning processes of our brain, in which billions upon billions of nerve cells are linked to one another, passing information on and processing it. When we learn, we repeatedly activate certain connections between nerve cells and thus change the network between the cells: in children who read a lot, for example, the connections between the brain areas responsible for vision, hearing and language are strengthened. In professional badminton players, the networking of the brain regions that coordinate vision and movement changes.

Deep learning uses simple mathematical units whose activity roughly corresponds to that of nerve cells in the brain: they, too, are linked to one another via input and output connections and receive information from other units, which they process and pass on. But they function much more simply than their biological models. The mathematical units are organized in layers.

Thousands of layers

“Deep networks for deep learning sometimes have hundreds to thousands of layers in which the data is processed,” explains Dr. Jenia Jitsev, who works on the architecture of such models at the Jülich Supercomputing Center. In face recognition, it is as if the input image passes through a variety of filters that respond to increasingly complex patterns. The first layer, for example, perceives only brightness values. Deeper layers react to edges, contours and shapes, and the deepest layers, finally, to the details of individual human faces.

The network learns to identify a face by noting which combination of brightness values, edges, shapes and details characterizes that face: as with the nerve cells in the brain, certain connections between the network units are strengthened. The learning process creates connection patterns that lead to the correct result. “Deep neural networks require as many different examples as possible for training: the more, the more successful the learning,” says Jitsev.
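The strengthening and weakening of connections through feedback can be sketched with the classic perceptron learning rule (a heavily simplified, hypothetical example, not one of the models used in Jülich): after every wrong answer, the connection strengths are nudged towards the correct result, and through constant repetition the right connection pattern emerges.

```python
# Sketch of learning by adjusting connection strengths: after every
# wrong answer, the weights are nudged towards the correct result.
def train(examples, steps=20, rate=0.1):
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(steps):
        for (x1, x2), target in examples:
            output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - output        # feedback: right or wrong
            w1 += rate * error * x1        # strengthen or weaken
            w2 += rate * error * x2        # each connection
            bias += rate * error
    return w1, w2, bias

# Learn the logical AND function from four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = train(data)
print([1 if w1 * a + w2 * b + bias > 0 else 0
       for (a, b), _ in data])            # prints [0, 0, 0, 1]
```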

This is exactly where a problem lies that Schultz still has to solve in the DeepRain project: the training material is lacking. “For our calculations, we are transferring a total of 600 terabytes of data from the German Weather Service. At first, that doesn't sound like a shortage.” However, heavy rain is rare. “According to statistics from the German Weather Service, there were no more than eight such events at any one station between 1996 and 2005,” says Schultz. Data sets from which a pattern could crystallize for the AI are correspondingly rare.

In addition, the data is needed not only for training but also for the final quality check. Deep learning expert Jitsev: “Typically, only 80 percent of the data is used for the training phase. We don't touch the remaining 20 percent at first. This test data set is only brought out after the training in order to check the results of the neural network.”
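The split Jitsev describes can be sketched in a few lines of Python (an illustrative sketch with made-up stand-in data): the examples are shuffled, 80 percent go into the training set, and the remaining 20 percent are set aside untouched for the later check.

```python
import random

# Sketch of an 80/20 split: most of the data is used for training,
# the rest is held back to check the trained model afterwards.
def split_data(records, train_share=0.8, seed=42):
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)   # mix the examples first
    cut = int(len(shuffled) * train_share)
    return shuffled[:cut], shuffled[cut:]   # (training set, test set)

records = list(range(100))                  # stand-in for weather records
training, test = split_data(records)
print(len(training), len(test))             # prints 80 20
```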

This test phase is particularly important when it comes to sensitive data - data that determines people's fates: selection procedures for job applications, for example, assessments of creditworthiness or medical diagnoses. The latter is what the physician Prof. Simon Eickhoff of the Institute of Neuroscience and Medicine at Forschungszentrum Jülich is working on. He hopes that AI will one day help to find patterns in the brains of people with psychological and neurological diseases so that they can be treated in a targeted manner.

For example, computer programs search brain scans for patterns that indicate the likelihood of a relapse in a patient with depression. AI could predict how quickly a person with Parkinson's disease will become impaired, or whether a patient would respond better to drug A or drug B.

AI can assess personalities

But there is still a long way to go. Eickhoff and his team are already using pattern recognition to let AI extract certain information from brain scans: at the moment, the focus is on cognitive performance and personality traits such as openness, sociability and emotional stability. To this end, Eickhoff and his team trained machine learning programs with the brain scans of hundreds of people. Alongside the scans, certain psychological parameters of these test subjects are entered, such as their reaction time in a standardized test. Once the model has seen enough data, it can infer the reaction time of a new individual from the brain images alone. “However, our algorithms do not search for individual aspects in the image data. We cannot say: in people with a good working memory, certain areas of the brain are larger than average. Rather, the overall pattern is decisive,” says Eickhoff.


According to the brain researcher, more complex cognitive abilities, such as reaction times or working memory capacity, can be derived relatively reliably from the brain scans using AI. For personality traits, the prediction also tends to be correct, but the accuracy is not as good. This is shown by quality assurance with data the AI does not yet know: it trains with only part of a data set, and the researchers use the rest to check how well the AI predicts personality traits after the learning phase. The AI already delivers very good results when predicting age and gender: “Here our program can state with 90 percent certainty whether a brain belongs to a woman or a man. With age, we are in a range of plus or minus four years,” reports Eickhoff.

Bringing light into the darkness

It is comparatively easy to check data such as age or gender. It becomes more difficult with diagnoses and prognoses. “The acceptance of artificial intelligence in the healthcare system depends on the trust placed in it - by both patients and doctors,” believes the Jülich expert. Trust rests in part on being able to understand how a diagnosis or a result comes about. AI experts, however, like to compare a neural network in deep learning to a black box: you know the input data and receive an output.

But the processes in the information-processing layers in between are so complex that it is usually impossible to understand how the network arrives at its results. It is therefore an important task for AI experts to shed light on this darkness in the coming years. Experts are pinning their hopes on “explainable AI” - artificial intelligence that also provides the criteria by which it reached its conclusion. Not only medicine and the neurosciences would benefit from such algorithms, but also weather forecasting, speech recognition and the control of autonomous cars. “Only when we can explain why an algorithm makes a particular decision will we accept solutions proposed by machines that our own brains cannot find,” says Eickhoff.