Sensor Models for Virtual Validation of Automated Driving

The virtual validation of automated driving functions requires meaningful simulation models of environment perception sensors such as radar, lidar, camera, and ultrasonic sensors.

An established standard for perception sensor models does not yet exist, and radar in particular lacks modeling approaches that consistently produce realistic results. In this field of research, we aim to develop meaningful sensor models for radar, lidar, and ultrasonic sensors. Our research is threefold: First, we work on new modeling approaches for sensor simulation, designed specifically for the virtual validation of automated driving. Second, we establish quality criteria and validation methods for virtual sensor models at the system level. Third, we investigate the use of synthetic sensor data for validating and stress-testing the sensor data processing chain, for example in object detection and target tracking applications. We deploy analytical approaches such as ray tracing, widely used in image synthesis, to mimic the propagation of radio waves for radar and of infrared light for lidar. Both for modeling and for validating our sensor models, we also incorporate algorithms originating from artificial intelligence, such as deep learning.
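To illustrate the ray-tracing principle mentioned above, the following minimal sketch casts lidar-style beams from a sensor origin against a single spherical target and records range returns. The scene, function names, and beam layout are illustrative assumptions for this sketch, not part of our actual sensor models, which handle far richer geometry and sensor effects.

```python
import math

def ray_sphere_intersection(origin, direction, center, radius):
    """Nearest positive distance t where origin + t*direction hits the sphere.

    Assumes a unit-length direction; returns None on a miss.
    """
    oc = [o - c for o, c in zip(origin, center)]
    # Quadratic |o + t*d - c|^2 = r^2 with a = 1 for unit direction.
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c_term = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c_term
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def simulate_lidar_scan(target_center, target_radius,
                        n_beams=360, max_range=100.0):
    """Cast horizontal beams from the origin; return (azimuth, range) hits."""
    hits = []
    for i in range(n_beams):
        az = 2.0 * math.pi * i / n_beams
        direction = (math.cos(az), math.sin(az), 0.0)
        t = ray_sphere_intersection((0.0, 0.0, 0.0), direction,
                                    target_center, target_radius)
        if t is not None and t <= max_range:
            hits.append((az, t))
    return hits
```

A real lidar model would additionally account for beam divergence, surface reflectivity, atmospheric attenuation, and detector noise; this sketch only shows the geometric ray-casting core shared by radar and lidar simulation.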

Corresponding research project: SET Level
Corresponding open source group: Perception sensor modeling group on GitLab