Finding the goldilocks of cameras for self-driving cars

Publicly released:
International
Caption: This illustration shows a scene in which the left half comes from the color camera and the right half from the event camera. Credit: Robotics and Perception Group, University of Zurich, Switzerland

International researchers have published two papers describing what they believe could overcome a major hurdle in camera technology for self-driving vehicles. In the first, the team says its new sensor chip combines fast but imprecise perception with slower but more precise processing. These two modes working together are key, the team says: the fast perception handles situations such as entering a dark tunnel, a flash from a camera, or a person stepping out in front of the car, while the higher-detail processing can tell the difference between, for example, a leaf and a pet. In the second paper, the authors show that the two modes work best together when the bandwidth each needs to transfer its data is balanced, which they say could lead to faster and more efficient image-processing cameras in autonomous cars.

Media release

From: Springer Nature

Vision chips and hybrid cameras set their sights on self-driving cars

Two approaches to improve image processing, which could be used in self-driving cars, are presented in independent papers published in Nature this week.

Image sensors are crucial for a range of applications, including autonomous machines, and need to combine good overall vision quality (for accurate interpretation of the scene) with fast movement detection (to enable quick reactions). However, combining the desired functions may present challenges in terms of efficiency or trade-offs between image quality and latency. The two studies demonstrate hybrid approaches to meet both needs while overcoming previous limitations.

Luping Shi and colleagues have developed a sensing chip inspired by the way the human visual system works, combining fast but imprecise sensations with slower but more precise perceptions. The vision chip, called Tianmouc, has a hybrid pixel array that combines low accuracy but fast event-based detection (to enable quick responses to changes without needing too much detail) with slow processing to produce an accurate visualization of the scene. The authors demonstrate the ability of the chip to process images quickly and robustly in an automotive driving perception system. It was tested in a number of scenarios, including driving through a dark tunnel, responding to a flash disturbance from a camera and detecting a person walking in front of the car.
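As a rough illustration of that two-pathway idea (this is not the Tianmouc design itself; the rates, threshold and function names below are hypothetical), a fast but coarse change-detection read-out can be sketched as thresholded frame differencing running alongside a slower, full-precision read-out of the same pixel array:

import numpy as np

# Conceptual sketch only: two read-out "pathways" over the same pixel array.
# The fast pathway reports coarse, thresholded brightness changes at a high
# rate; the slow pathway returns the full-precision frame at a low rate.
FAST_RATE_HZ = 1000     # assumed rate of the change-detection pathway
SLOW_RATE_HZ = 30       # assumed rate of the full-precision pathway
CHANGE_THRESHOLD = 0.1  # assumed fraction of full scale that counts as "change"

def fast_pathway(prev_frame, curr_frame):
    """Return a coarse ternary change map: -1 darker, 0 unchanged, +1 brighter."""
    diff = curr_frame.astype(np.float32) - prev_frame.astype(np.float32)
    change = np.zeros_like(diff, dtype=np.int8)
    change[diff > CHANGE_THRESHOLD * 255] = 1
    change[diff < -CHANGE_THRESHOLD * 255] = -1
    return change

def slow_pathway(curr_frame):
    """Return the full-precision frame for detailed scene interpretation."""
    return curr_frame.copy()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    # Simulate a bright object entering the lower-right corner of the scene.
    curr = prev.copy()
    curr[40:, 40:] = np.clip(curr[40:, 40:].astype(int) + 80, 0, 255).astype(np.uint8)

    changes = fast_pathway(prev, curr)   # cheap, reacts immediately to motion
    detailed = slow_pathway(curr)        # expensive, arrives far less often
    print("pixels flagged as changed:", int(np.count_nonzero(changes)))
    print("full-precision frame shape:", detailed.shape)

The point of the toy is only the division of labour: the cheap change map can trigger a quick reaction long before the next detailed frame has been read out and interpreted.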

In a separate paper, Daniel Gehrig and Davide Scaramuzza address challenges with the cameras used for vision sensing. Full-colour cameras have good resolution but need large amounts of data processing (bandwidth) to detect rapid changes; reducing the bandwidth comes at a cost of increased latency and thus, potentially reduced safety. Alternatively, event cameras are good at detecting fast movement but with reduced accuracy. The authors show that a hybrid system can achieve robust low-latency object detection for self-driving cars. By combining the two cameras, the frame rate of the colour camera can be reduced, thereby reducing the bandwidth and increasing the efficiency while maintaining accuracy, and the event camera compensates for the resulting higher latency in the colour camera, ensuring that fast-moving objects, such as pedestrians and cars, can still be detected.
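A minimal sketch of how such a fusion could work in principle (this is not the authors' detector; the event format, function names and numbers below are illustrative assumptions): a slow colour-frame detector produces a bounding box, and the events that arrive before the next frame nudge that box toward the moving object, so fast-moving things are not lost during the latency gap between frames.

import numpy as np

# Toy fusion of a low-frame-rate "colour" detector with a high-rate event
# stream. Events are assumed to be (x, y, timestamp) tuples; both functions
# are hypothetical stand-ins, not the published method.

def detect_on_colour_frame(frame):
    """Stand-in for a full detector: a box around the brightest region."""
    y, x = np.unravel_index(np.argmax(frame), frame.shape)
    half = 8
    return (max(x - half, 0), max(y - half, 0), x + half, y + half)

def update_box_with_events(box, events):
    """Shift a box toward the centroid of the events that fall inside it."""
    x0, y0, x1, y1 = box
    inside = [(x, y) for x, y, _ in events if x0 <= x <= x1 and y0 <= y <= y1]
    if not inside:
        return box
    cx = sum(x for x, _ in inside) / len(inside)
    cy = sum(y for _, y in inside) / len(inside)
    dx = int(round(cx - (x0 + x1) / 2))
    dy = int(round(cy - (y0 + y1) / 2))
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

if __name__ == "__main__":
    frame = np.zeros((64, 64), dtype=np.uint8)
    frame[30:34, 30:34] = 255                  # bright "car" in the colour frame
    box = detect_on_colour_frame(frame)        # slow but accurate detection
    # Events arriving between colour frames as the object moves right and down.
    events = [(33 + i, 32 + i, t) for i, t in enumerate(range(5))]
    box = update_box_with_events(box, events)  # fast, low-latency update
    print("updated box:", box)

In the real system both stages would be learned models; the sketch only shows the division of roles described above, with the event stream compensating for the latency of the less frequent colour frames.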

Both approaches could lead to faster, more efficient and robust image processing in self-driving cars and other applications.

Multimedia

Car detection 2
Video 1
Journal/conference: Nature
Research: Paper
Organisation/s: Tsinghua University, Beijing, China and University of Zurich, Zurich, Switzerland
Funder: The Tianmouc study was supported by the STI 2030—Major Projects 2021ZD0200300 and the National Natural Science Foundation of China (no. 62088102). The hybrid-camera study was supported by Huawei Zurich, the Swiss National Science Foundation through the National Centre of Competence in Research (NCCR) Robotics (grant no. 51NF40_185543) and the European Research Council (ERC) under grant agreement no. 864042 (AGILEFLIGHT).