Media release
From:
Vision chips and hybrid cameras set their sights on self-driving cars
Two approaches to improve image processing, which could be used in self-driving cars, are presented in independent papers published in Nature this week.
Image sensors are crucial for a range of applications, including autonomous machines, and need to combine good overall vision quality (for accurate interpretation of the scene) with fast movement detection (to enable quick reactions). However, combining these functions typically forces trade-offs between efficiency, image quality and latency. The two studies demonstrate hybrid approaches that meet both needs while overcoming previous limitations.
Luping Shi and colleagues have developed a sensing chip inspired by the human visual system, which combines fast but imprecise sensation with slower but more precise perception. The vision chip, called Tianmouc, has a hybrid pixel array that combines fast, low-precision event-based detection (enabling quick responses to changes without needing fine detail) with slower processing that produces an accurate visualization of the scene. The authors demonstrate the chip's ability to process images quickly and robustly in an automotive driving perception system. It was tested in a number of scenarios, including driving through a dark tunnel, responding to a flash disturbance from a camera and detecting a person walking in front of the car.
In a separate paper, Daniel Gehrig and Davide Scaramuzza address challenges with the cameras used for vision sensing. Full-colour cameras have good resolution but must transfer large amounts of data (bandwidth) to detect rapid changes; reducing the bandwidth increases latency and thus potentially reduces safety. Event cameras, by contrast, are good at detecting fast movement but with reduced accuracy. The authors show that a hybrid system combining the two can achieve robust low-latency object detection for self-driving cars. Pairing an event camera with a colour camera allows the colour camera's frame rate to be reduced, cutting bandwidth and increasing efficiency while maintaining accuracy; the event camera compensates for the resulting higher latency, ensuring that fast-moving objects such as pedestrians and cars can still be detected.
Both approaches could lead to faster, more efficient and robust image processing in self-driving cars and other applications.