Cameras are a fundamental component of modern robotics platforms and one of the driving factors behind the falling cost and growing popularity of robotics. As robotics platforms are deployed in an ever wider variety of environments, robust sensing in those environments becomes increasingly important. A major challenge in using cameras is improving the robustness of visual navigation algorithms. In high-speed applications, captured images can become blurry. In outdoor applications, wide swings in lighting conditions can wash out important information as images quickly become over- or underexposed; this is especially apparent for vehicles such as cars and trains entering and exiting tunnels, and for drones transitioning from indoor to outdoor flight. Built-in auto-exposure algorithms that compensate for changes in illumination are typically designed to make captured scenes look best to human viewers, which does not guarantee that they work well for computer vision applications.
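To make the human-viewing-versus-computer-vision distinction concrete, one common idea is to score an exposure setting by how much scene information it preserves rather than by how pleasing it looks. The sketch below uses the Shannon entropy of the intensity histogram as such a score: over- or underexposed images pile their pixels into a few bins and score low. This is an illustrative metric under a crude gain-and-clip exposure model, not the method described in this work; all function names here are hypothetical.

```python
import numpy as np

def entropy_score(img):
    """Shannon entropy (bits) of the 8-bit intensity histogram.
    A simple proxy for preserved information: saturated or crushed
    images concentrate in few bins and score low.
    (Illustrative metric, not the author's method.)"""
    q = np.clip(img * 255.0, 0, 255).astype(np.uint8)
    hist = np.bincount(q.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]  # ignore empty bins (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum())

def simulate_exposure(scene, gain):
    """Crude exposure model: scale scene radiance by a gain and clip
    to [0, 1], mimicking over-/underexposure saturation."""
    return np.clip(scene * gain, 0.0, 1.0)

def best_gain(scene, gains):
    """Pick the candidate gain whose rendered image scores highest."""
    return max(gains, key=lambda g: entropy_score(simulate_exposure(scene, g)))

# Synthetic high-dynamic-range scene: a radiance ramp from 0 to 4,
# wider than the sensor's [0, 1] output range.
scene = np.linspace(0.0, 4.0, 256)[None, :].repeat(64, axis=0)

g = best_gain(scene, [0.01, 0.25, 10.0])
# gain 0.25 maps the full scene range onto [0, 1] and wins; the other
# two gains lose information to quantization or saturation
```

A real auto-exposure loop would evaluate such a score on live frames and step the camera's exposure and gain registers toward the maximum.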
Justin is investigating methods for improving the quality of the images used by visual navigation algorithms through online adjustment of camera parameters. He is studying the effects of these adjustments and how they can be used to improve the performance of modern visual navigation methods. He is also exploring how modern machine learning can be used to predictively queue changes to camera parameters so that these algorithms perform well in high-speed environments.