Cameras are a fundamental component of modern robotic systems, and as robots become more widespread, robust visual sensing is increasingly important. In particular, there is strong motivation to improve the robustness of visual navigation algorithms. In high-speed driving applications, for example, images captured by cameras can become blurry. Outdoors, difficult lighting conditions can wash out important information when images are over- or underexposed. This problem is especially apparent for vehicles such as cars and trains entering and exiting tunnels, and for drones transitioning between indoor and outdoor flight. Built-in auto-exposure algorithms that adjust for changes in illumination are typically designed to produce images that look best to human viewers, which does not necessarily mean they work well for computer vision applications.
Justin investigated methods for improving the quality of images used in visual navigation through online adjustment of camera parameters, examining how these adjustments can improve the performance of a modern visual navigation pipeline. An example of learned gain and exposure control outperforming built-in auto-exposure is shown in the graphic below: feature tracking is maintained even while entering and exiting tunnels.
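To make the idea of vision-driven parameter adjustment concrete, here is a minimal, hypothetical sketch (not Justin's actual method) of an exposure controller that optimizes a computer-vision objective instead of human viewing quality: it hill-climbs the exposure setting to maximize the total image gradient magnitude, a common proxy for how much feature-trackable detail survives saturation. The toy camera model, the gradient score, and all function names are illustrative assumptions.

```python
import numpy as np

def gradient_score(img):
    """Sum of image gradient magnitudes -- a proxy for how much
    feature-trackable detail the current exposure preserves."""
    gx = np.diff(img.astype(float), axis=1)
    gy = np.diff(img.astype(float), axis=0)
    return np.abs(gx).sum() + np.abs(gy).sum()

def simulate_capture(scene, exposure):
    """Toy camera model (assumption): scale scene radiance by the
    exposure setting and clip to the 8-bit sensor range. Saturated
    regions lose their gradients, just like a washed-out tunnel exit."""
    return np.clip(scene * exposure, 0.0, 255.0)

def adjust_exposure(scene, exposure, step=1.1):
    """One hill-climbing update: try a slightly shorter and a slightly
    longer exposure, keep whichever yields the highest gradient score."""
    candidates = [exposure / step, exposure, exposure * step]
    scores = [gradient_score(simulate_capture(scene, e)) for e in candidates]
    return candidates[int(np.argmax(scores))]

# Synthetic over-bright scene whose radiance exceeds the sensor range,
# so the default exposure of 1.0 saturates a large fraction of pixels.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 600.0, size=(64, 64))

exposure = 1.0
for _ in range(30):
    exposure = adjust_exposure(scene, exposure)
print(exposure)  # converges below 1.0, trading brightness for detail
```

In a real pipeline the same loop would run online on live frames, and the score could be replaced by any metric tied to the downstream task (e.g., the number of features the tracker retains), which is the key difference from a built-in auto-exposure routine that targets a pleasing average brightness.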