Event Cameras and Vehicle Sensing November 2019
Traditional cameras use a frame-based method, capturing a brightness value at every pixel nearly simultaneously. By contrast, event-based cameras, also known as dynamic vision sensors, report measurements only when individual pixels change. Each pixel emits a time-stamped signal when its brightness increases or decreases beyond a set threshold (commonly, a threshold on the change in logarithmic brightness). Because the output of event cameras differs so much from that of traditional cameras, new processing techniques and algorithms are necessary to extract useful information from it.
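The contrast-threshold behavior described above can be illustrated with a minimal sketch that converts ordinary frames into a stream of events. The function name, event format, and threshold value here are illustrative assumptions, not a description of any particular sensor's interface.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-6):
    """Emit (t, x, y, polarity) events whenever the log brightness at a
    pixel changes by more than `threshold` since that pixel last fired.

    `frames` is an iterable of 2-D float arrays (grayscale intensities);
    `timestamps` supplies one time value per frame. This mimics an event
    camera's per-pixel contrast threshold using frame-based input.
    Illustrative sketch only; real sensors fire asynchronously per pixel.
    """
    frames = iter(frames)
    timestamps = iter(timestamps)
    ref = np.log(next(frames) + eps)  # per-pixel reference log intensity
    next(timestamps)
    events = []
    for frame, t in zip(frames, timestamps):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        for polarity, mask in ((+1, diff >= threshold), (-1, diff <= -threshold)):
            ys, xs = np.nonzero(mask)
            events.extend((t, int(x), int(y), polarity) for x, y in zip(xs, ys))
            ref[mask] = log_i[mask]  # reset the reference at pixels that fired
    return events
```

Note that a static scene produces no events at all: only the pixel that changes appreciably generates output, which is the source of the bandwidth and latency advantages discussed below.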
Researchers are exploring new applications of event cameras, including robotics, industrial operations, and autonomous vehicles. Researchers from the University of Pennsylvania recently published a report exploring the use of unsupervised learning to process video from an event-camera dataset captured from moving vehicles. The team trained two networks: one estimates optical flow; the other estimates egomotion (the motion of the camera through its environment) and depth. Another group of researchers, from the Polytechnic University of Madrid and the University of Zurich, used deep neural networks to estimate a vehicle's steering angle from event-camera images. The team's system produced more robust steering predictions in conditions where traditional cameras fail, such as challenging illumination and fast motion.
In the past few years, researchers have been able to advance computer-vision technologies significantly using machine-learning techniques, but such systems still have limitations. Researchers and some companies believe that event cameras could offer additional benefits to computer-vision systems because event cameras hypothetically perform better in challenging lighting situations and with fast-moving objects than traditional cameras do.
Even though event cameras have been commercially available since 2008, they have remained a niche sensor product. Current event cameras have significantly lower resolution and higher prices than frame-based cameras. A few companies, including Samsung and Prophesee, have continued to refine event-based image sensors and plan to commercialize the technology in the near future. Improvements include decreased pixel size, increased resolution, and better noise reduction. Continued improvement in event-camera design will make it easier for researchers and companies to experiment with event cameras for existing and new applications.
Autonomous vehicles use a variety of sensing technologies, including cameras, lidar, radar, and ultrasonic sensors. Event cameras are only beginning to reach meaningful commercial availability, but they could provide an additional sensing modality for advanced driver-assistance systems and autonomous vehicles in the future. Although event cameras sense light at the same wavelengths that traditional cameras do, their lower sensing latency and higher dynamic range could provide useful capabilities for intelligent-vehicle systems. In addition to hardware improvements, researchers will need to develop new image-processing techniques and machine-learning algorithms to make the best use of event cameras. The development of such algorithms is still in its infancy, and initial breakthroughs might first appear in robotics, industrial-equipment, and consumer-electronics research rather than in autonomous-vehicle research. An early opportunity for event cameras is the synthesis of high-dynamic-range frame-based video for use by existing computer-vision algorithms.
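The simplest way to feed event data to an existing frame-based pipeline is to integrate events over a time window into an image of net brightness change. The sketch below shows that idea in its most basic form; the function name and the (t, x, y, polarity) event format are assumptions for illustration, and published video-reconstruction methods are considerably more sophisticated.

```python
import numpy as np

def accumulate_events(events, shape, t_start, t_end):
    """Integrate events into a frame of net signed event counts.

    `events` is a sequence of (t, x, y, polarity) tuples with polarity
    +1 or -1; the result is a 2-D array where each pixel holds the sum
    of polarities of events that fell within [t_start, t_end). Such a
    frame approximates the change in log brightness over the window and
    can be handed to conventional frame-based vision algorithms.
    Minimal sketch, not a published reconstruction method.
    """
    frame = np.zeros(shape, dtype=np.float32)
    for t, x, y, polarity in events:
        if t_start <= t < t_end:
            frame[y, x] += polarity
    return frame
```

Because events are generated whenever the log-brightness threshold is crossed, regardless of absolute intensity, frames reconstructed this way inherit the sensor's wide dynamic range, which is what makes this synthesis route attractive for challenging lighting.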