Sensors

Camera

Cameras are among the most widely used sensors for connected mobility and infrastructure perception. They deliver high-resolution color images that provide comprehensive and detailed information about the environment. Compared to other sensors, they are inexpensive, lightweight and compact. Combined with computer vision and artificial intelligence algorithms, cameras can recognize and track different road users, and they can monitor events live, which is crucial for connected mobility and infrastructure. In addition, using multiple cameras makes it possible to capture different angles and views of a scene, providing a more comprehensive understanding of the surroundings. We use various camera models from manufacturers such as IDS, Basler and FLIR, equipped with interfaces such as GigE and USB. These cameras are deployed both in vehicles and in infrastructure installations for traffic monitoring, road user detection, motion prediction, construction site detection and similar tasks. In some projects we also use thermal imaging cameras. Since RGB cameras record personal data, we apply data anonymization algorithms that remove sensitive personal information from the camera images in order to comply with data protection regulations.
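
A minimal sketch of such an anonymization step, assuming OpenCV with its bundled Haar cascade face detector (the detectors and models used in our projects may differ):

    # Illustrative anonymization sketch: blur detected faces in a camera frame.
    # Production pipelines typically use stronger detectors for faces and license plates.
    import cv2

    def anonymize_faces(frame):
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
            roi = frame[y:y + h, x:x + w]
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
        return frame

    if __name__ == "__main__":
        image = cv2.imread("camera_frame.png")  # hypothetical input image
        cv2.imwrite("camera_frame_anon.png", anonymize_faces(image))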

LiDAR

LiDAR (Light Detection and Ranging) sensors are considered a key element of environment perception for future automated driving. They are optical sensors that operate in the near-infrared range and scan the surroundings with short laser pulses. By measuring the round-trip time of a pulse to the object it hits and back, the object distance can be determined with millimeter-level accuracy. Combined with the known beam direction, this yields a very precise, high-resolution 3D model of the environment in the form of a point cloud. Based on this, both static and dynamic objects such as pedestrians and vehicles can be precisely detected and passed on to a downstream driving function.
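
To illustrate the measurement principle: the range follows from half the round-trip time multiplied by the speed of light, and together with the beam direction it gives a 3D point. A minimal sketch (the angles and times shown are illustrative values):

    # Illustrative time-of-flight calculation: round-trip time plus beam direction -> 3D point.
    import math

    C = 299_792_458.0  # speed of light in m/s

    def tof_to_point(round_trip_time_s, azimuth_rad, elevation_rad):
        r = C * round_trip_time_s / 2.0  # the pulse travels to the object and back
        x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
        y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
        z = r * math.sin(elevation_rad)
        return x, y, z

    # Example: a pulse returning after about 200 ns corresponds to roughly 30 m range.
    print(tof_to_point(200e-9, math.radians(10.0), math.radians(-2.0)))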

In general, a distinction is made between scanning and non-scanning LiDARs. The former are much more widespread; they sense the environment with a directed laser beam that is deflected by a scanning mechanism and thus builds up an image of the surroundings in a raster pattern. The latter are also known as solid-state or flash LiDARs. They are characterized by the fact that no moving components are required to scan the environment, which is a clear advantage for robustness. Instead of scanning, the entire scene is illuminated with a single diffuse laser pulse, whose reflections are projected onto an image sensor by the receiving optics, similar to a flash camera.


Radar

Radar sensors play a crucial role in connected mobility and infrastructure, as they offer a variety of advantages. Compared to other sensors, they are characterized by their robustness and their ability to provide reliable data regardless of the time of day and weather conditions. In addition, radar data is inherently anonymous: no personal data is recorded, so data protection regulations are met by design.

Radar sensors enable the identification and classification of targets at both short and long range. For our sensor data fusion and the development of radar-based algorithms, we rely on high-quality radar products from Continental such as the ARS408, ARS430 and ARS548, as well as solutions from Oculii. In addition, we use radar sensors with a raw data interface to work closer to the signal level, in particular when exploiting the micro-Doppler effect for precise object classification. Here we rely on products from Texas Instruments such as the AWR1243 and AWR2243, as well as uRad radar sensors such as the RaspberryPi uRAD and the Automotive uRAD.
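
With a raw data interface, a range-Doppler map can be computed from the sampled chirps of an FMCW radar by two FFT stages; micro-Doppler signatures of moving parts, such as a pedestrian's limbs, then appear as modulations along the Doppler axis. A minimal NumPy sketch, assuming a data cube of shape (chirps, samples per chirp) for a single receive antenna (all parameters are illustrative):

    # Illustrative range-Doppler processing for FMCW raw data (single RX antenna).
    # raw_cube: complex samples of shape (num_chirps, num_samples_per_chirp).
    import numpy as np

    def range_doppler_map(raw_cube):
        # Range FFT along the fast-time (sample) axis, windowed against spectral leakage.
        win_fast = np.hanning(raw_cube.shape[1])
        range_fft = np.fft.fft(raw_cube * win_fast, axis=1)
        # Doppler FFT along the slow-time (chirp) axis, shifted so zero velocity is centered.
        win_slow = np.hanning(raw_cube.shape[0])[:, None]
        doppler_fft = np.fft.fftshift(np.fft.fft(range_fft * win_slow, axis=0), axes=0)
        return 20 * np.log10(np.abs(doppler_fft) + 1e-12)  # magnitude in dB

    # Example with random data standing in for recorded chirps.
    rd_map = range_doppler_map(np.random.randn(128, 256) + 1j * np.random.randn(128, 256))
    print(rd_map.shape)  # (Doppler bins, range bins) = (128, 256)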

Software

In order to develop perception algorithms for connected mobility and infrastructure, a reliable, robust and adaptable software framework is required. Our research group therefore uses the Robot Operating System (ROS), the platform of choice for the research and development of multi-sensor software. The primary languages for our software development are Python and C++, applying current software development practices. Multi-sensor data fusion involves various software architectures, some of which are outlined below.
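
As a minimal sketch of what such a multi-sensor node can look like in ROS (here ROS 1 with rospy; topic names and tolerances are illustrative assumptions), camera and LiDAR messages can be approximately time-synchronized before fusion:

    #!/usr/bin/env python
    # Minimal ROS 1 (rospy) sketch: approximate time synchronization of camera and LiDAR data.
    import rospy
    import message_filters
    from sensor_msgs.msg import Image, PointCloud2

    def fused_callback(image_msg, cloud_msg):
        # Entry point for a downstream fusion algorithm (detection, tracking, ...).
        rospy.loginfo("Image %s / cloud %s received within the sync tolerance",
                      image_msg.header.stamp, cloud_msg.header.stamp)

    if __name__ == "__main__":
        rospy.init_node("multi_sensor_fusion_example")
        image_sub = message_filters.Subscriber("/camera/image_raw", Image)
        cloud_sub = message_filters.Subscriber("/lidar/points", PointCloud2)
        sync = message_filters.ApproximateTimeSynchronizer(
            [image_sub, cloud_sub], queue_size=10, slop=0.05)
        sync.registerCallback(fused_callback)
        rospy.spin()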


Calibration

In a multi-sensor system, each sensor measures the external environment in its own coordinate system. However, in order to process and fuse the data from these sensors, it must be transformed from the individual sensor coordinate frames into a common reference frame. This reference frame can be one of the sensor frames, a ground reference frame, a vehicle reference frame or a world reference frame. The process of determining the correct mathematical transformation is called extrinsic calibration.
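
For illustration, an extrinsic calibration yields a rigid transformation, a rotation R and a translation t, that maps points from a sensor frame into the reference frame. A minimal NumPy sketch (the values shown are placeholders, not real calibration results):

    # Illustrative extrinsic transformation: sensor frame -> reference frame.
    # R (3x3 rotation) and t (translation) would come from an extrinsic calibration.
    import numpy as np

    def to_reference_frame(points_sensor, R, t):
        # points_sensor: (N, 3) array of points in the sensor's coordinate frame.
        return points_sensor @ R.T + t

    R = np.eye(3)                  # placeholder rotation from calibration
    t = np.array([1.5, 0.0, 2.0])  # placeholder translation in metres
    points = np.array([[10.0, 0.5, -1.0]])
    print(to_reference_frame(points, R, t))  # the point expressed in the reference frame
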
Similarly, an intrinsic calibration is also required for each sensor. The aim of intrinsic calibration is to parameterize a projection model that maps object points from three-dimensional space onto the corresponding positions on the image sensor. The intrinsic parameters therefore contain, among other things, crucial information about the sensor's optics, such as its effective focal length and the degree of lens distortion. In general, a family of targets with a fixed arrangement of so-called "features" is used for intrinsic calibration. These are detected in the image by the calibration algorithm and associated with the known positions of the features in space.
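
A minimal sketch of such a target-based intrinsic calibration with OpenCV, assuming a set of chessboard images (board geometry, square size and file paths are illustrative assumptions):

    # Illustrative intrinsic camera calibration from chessboard images with OpenCV.
    import glob
    import cv2
    import numpy as np

    board = (9, 6)   # inner corners per row and column (assumed board geometry)
    square = 0.025   # square size in metres (assumed)
    # 3D feature positions of the board in its own plane (z = 0).
    obj = np.zeros((board[0] * board[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

    obj_points, img_points, shape = [], [], None
    for path in glob.glob("calib_images/*.png"):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_points.append(obj)
            img_points.append(corners)
            shape = gray.shape[::-1]

    # K contains the focal length and principal point, dist the lens distortion coefficients.
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, shape, None, None)
    print("reprojection error:", ret)
    print("camera matrix:\n", K)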

New algorithms for both the intrinsic and extrinsic calibration of sensors are being developed as part of the research. 

Contact

Group Leader Optical Sensors
Marcel Kettelgerdes, M. Sc.
Phone: +49 841 9348-6451
Room: S421
E-Mail:

Open positions

If you are interested in open positions for student work within the research group, please send an email with your CV to assistenz-iimo-elger@thi.de.