Death in a Fog Bank

How artificial intelligence will in future steer us safely through the fog

Left: Dr. Florian Baumann (supervisor), centre: B.Sc. Michael Gann (degree candidate), right: Jacek Burger (B.Eng.) (project leader at the Lindau branch on Lake Constance)

Most people have probably heard of the classic driving school film "Death in the Fog". Frightening, moving, and based on a true event. The Münchberger Senke in Bavaria was the scene of one of the worst mass collisions in German history. Within just a few minutes, 120 vehicles ran into each other on the A9 motorway. Ten people died. The cause: thick fog.

Our lives in artificial hands

26 years after this tragic accident, we are talking about autonomous mobility and driverless cars. The basic requirement for this: driver assistance systems that we can fully rely on at all times. And there's the crux. At all times also means in all weather conditions. Current driver assistance systems, however, only work with the help of expensive radar or lidar technology, which, in the case of radar, is generally reserved for the premium vehicle segments. Without image processing, the camera-based video data from vehicles is no more capable of detecting hazards in fog than we human beings are. To change this, Michael Gann is working on a new model in the Master's thesis he is writing at EDAG in Lindau - one in which he breathes life into artificial intelligence, enabling it to reconstruct, or "defog", the pictures taken by a camera. "Many of the camera systems installed in today's cars are based on recognising traffic signs, road markings and other road users. If the weather is foggy, the pictures can be very cloudy. Conventional algorithms come up against their limits here, and present a considerable safety risk," explains Gann.

So that it will be possible in the future to assess a critical situation in difficult weather conditions, Michael Gann is training a special form of neural network, a "convolutional neural network". This model is trained by means of the "supervised learning" method - and by using a graphics card server to accelerate calculation and analysis - after which it is able to extract characteristics of an image and then restore the colours and contrasts in the picture. "One example," explains the Master's student, "is the outlines of a building or a traffic sign in a picture. In fog, these are difficult to recognise. By porting an established convolutional neural network structure from the end-to-end tool "DehazeNet" into the TensorFlow framework, it should be possible for traffic signs, pedestrians or other critical situations to be recognised in the fog. An important milestone towards autonomous driving. Only if the situation is correctly assessed by the driver assistance systems can the safety of all road users be guaranteed, and lives therefore saved!"
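How such a "defogging" network might look in code: the sketch below is purely illustrative and not the code from the thesis. It assumes a DehazeNet-style approach in which a small convolutional network estimates a per-pixel "transmission map" (how much light reaches the camera through the fog), and the atmospheric scattering model is then inverted to restore colours and contrasts; all function names, layer sizes and parameter values are assumptions, only the TensorFlow framework and the idea of trainable convolutional filters come from the article.

    # Illustrative sketch only (hypothetical names and layer sizes, not the thesis code):
    # a small DehazeNet-style CNN estimates a transmission map t(x) from a foggy RGB
    # patch; the atmospheric scattering model
    #     J(x) = (I(x) - A) / max(t(x), t_min) + A
    # is then inverted to recover the defogged picture J from the foggy picture I.
    import tensorflow as tf

    def build_transmission_net(patch_size=64):
        """CNN mapping a foggy RGB patch to a single-channel transmission map."""
        inputs = tf.keras.Input(shape=(patch_size, patch_size, 3))
        # Feature extraction with trainable convolutional filters
        x = tf.keras.layers.Conv2D(16, 5, padding="same", activation="relu")(inputs)
        # Multi-scale mapping: parallel convolutions with different kernel sizes
        branches = [tf.keras.layers.Conv2D(16, k, padding="same", activation="relu")(x)
                    for k in (3, 5, 7)]
        x = tf.keras.layers.Concatenate()(branches)
        # Local extremum: spatial max pooling keeps the strongest fog-related responses
        x = tf.keras.layers.MaxPooling2D(pool_size=2, strides=1, padding="same")(x)
        # Non-linear regression to a transmission value in (0, 1) per pixel
        t = tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
        return tf.keras.Model(inputs, t, name="transmission_net")

    def defog(foggy, transmission, airlight=0.9, t_min=0.1):
        """Invert the atmospheric scattering model to recover the fog-free image."""
        t = tf.maximum(transmission, t_min)  # avoid division by near-zero transmission
        return tf.clip_by_value((foggy - airlight) / t + airlight, 0.0, 1.0)

    # Supervised learning: foggy patches as input, ground-truth transmission maps as
    # targets - exactly the pairing that synthetic training data makes possible.
    model = build_transmission_net()
    model.compile(optimizer="adam", loss="mse")

The grey-value "transmission map" in the comparison picture further down corresponds to the intermediate output estimated by a network of this kind.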

Even artificial intelligence needs practice

But how do you train this type of model? Large quantities of image data are needed, because even artificial intelligence is only as clever as people have trained it to be. To this end, Michael Gann generated 200,000 synthetic snapshots with the IPG CarMaker simulation tool: a total of 22,222 single images, taken from three different angles and each rendered with nine different fog densities - snapshots of car journeys in the city and overland, with an authentic infrastructure. The advantage over genuine fog images is obvious: different fog densities can be generated precisely, consistently and considerably more quickly. Precise depth information is required to obscure genuine pictures with fog, and for real pictures this information is not available for this Master's thesis. Snapshots of a genuine journey in the fog would also only be of limited use here, as supervised learning requires a fog-free reference picture. "Basically, you can say that with genuine pictures, the cost involved currently bears no relation to the benefits."
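To illustrate why simulation makes this so much easier, the sketch below shows how fog of any density can be synthesised from a fog-free rendering and its depth map using the standard atmospheric scattering model. The nine attenuation coefficients, image sizes and random stand-in data are placeholders, not the values or the pipeline used in the thesis.

    # Illustrative sketch (placeholder values, not the thesis pipeline): synthesising
    # fog of different densities from a clean rendering plus per-pixel depth, using
    # I = J * t + A * (1 - t) with transmission t = exp(-beta * depth).
    import numpy as np

    def add_fog(clean_rgb, depth_m, beta, airlight=0.9):
        """Render a foggy image from a clean frame and its depth map."""
        t = np.exp(-beta * depth_m)[..., None]         # transmission map, shape (H, W, 1)
        return clean_rgb * t + airlight * (1.0 - t)

    # One clean frame rendered at nine assumed fog densities
    betas = np.linspace(0.01, 0.15, 9)                 # assumed attenuation coefficients
    clean = np.random.rand(480, 640, 3)                # stand-in for a rendered frame in [0, 1]
    depth = np.random.uniform(5.0, 200.0, (480, 640))  # stand-in for per-pixel depth in metres
    foggy_stack = np.stack([add_fog(clean, depth, b) for b in betas])
    print(foggy_stack.shape)                           # (9, 480, 640, 3)

This also makes clear why precise depth information is indispensable: without it, the transmission - and therefore the fog - cannot be computed for each pixel.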

Left: foggy snapshot; centre: "transmission map", a grey-value image representing the light attenuation with full pixel accuracy; right: defogged picture

"After I had generated the data file, I was able to start transporting the DehazeNet into TensorFlow. The work is now almost complete, and in the next step, I will be concentrating on training the model. This will involve placing 16 convolutional filters which are to be trained over the initial pictures, in order to extract fog characteristics such as contrast or colour, as these are difficult to detect in fog, and the colour is muted. To do this, the model goes through a number of different training phases with various parameters," explains Micheal Gann.

Autonomous systems of the future

Michael Gann started work on his thesis in October 2017. The 29-year-old explains why he decided to take on this complex subject: "Machine learning is a highly topical, trend-setting subject. TensorFlow went public only two years ago, and already ranks as the most frequently used framework for deep learning. This means that my work will produce new findings in the recognition of objects in camera-based data under extremely severe weather conditions. The result: today's driver assistance systems will become safer, and be able to make the driver's job even easier in the future. At the moment, driver assistance systems fail to work in fog, because the camera sees exactly the same as the human eye. To make autonomous driving possible, the pictures from vehicles' cameras must be consistently evaluated in real time. Only then will vehicles be able to take precautionary measures when it is foggy. And this will only work if use is made of artificial intelligence." In addition to putting the findings of his Master's thesis to use in driver assistance systems, Michael Gann can also imagine them being used in early detection systems in the shipping or aerospace industries: "Wherever objects are monitored with camera-based data, and the image data has until now been rendered unusable by fog."

Michael Gann began his student career with a Bachelor's degree in "Information Management Automotive" at the University of Neu-Ulm. "I noticed straight away that I had made the right decision with my choice of major subject. From the very beginning, the subject fired my enthusiasm. Following this, I registered at the University of Kempten, where I did a fascinating Master's degree course in Driver Assistance Systems. An excellent decision!"

The initial contact that led to Michael Gann writing his Master's thesis at EDAG was established at the careers fair at the University of Kempten. "This fair gave me the chance to speak to my current project leader, Jacek Burger, about the possibility of doing my thesis." Through the close contact between EDAG in Lindau and the neighbouring company ADASENS GmbH, a software developer offering camera-based software solutions for the automotive industry, the idea was born for a joint Master's thesis involving both engineering partners, supervised by Dr. Florian Baumann (ADASENS GmbH) and Jacek Burger (EDAG Engineering GmbH).