Cameras are a fundamental component of modern robotics platforms, and they are one of the driving factors behind the falling cost and growing popularity of robotics. As robotics platforms are deployed in an ever wider variety of environments, the need for robust sensing in those environments becomes increasingly important. A major challenge in the use of cameras is improving the robustness of visual navigation algorithms. In high-speed applications, for example, captured images can become blurry. In outdoor applications, wide variations in lighting can wash out important information as images quickly become over- or underexposed. This is especially apparent for vehicles such as cars and trains entering and exiting tunnels, and for drones transitioning between indoor and outdoor flight. Built-in auto-exposure algorithms that adjust for changes in illumination are typically designed to produce images that look best to human viewers, which does not necessarily mean they work well for computer vision applications.
Justin investigated methods for improving the quality of the images used by visual navigation algorithms through online adjustment of camera parameters, and examined how these adjustments can improve the performance of modern visual navigation pipelines.
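To make the idea of online parameter adjustment concrete, here is a minimal, hypothetical sketch (not Justin's actual method) of one common strategy: scoring candidate exposures by how much image gradient information they preserve, since feature-based navigation front ends rely heavily on image texture. The capture_fn camera interface, the candidate exposure values, and the gradient metric are all illustrative assumptions.

```python
import numpy as np
import cv2  # OpenCV, assumed available for gradient computation


def gradient_score(gray_image):
    """Total gradient magnitude: a rough proxy for how much texture a
    feature-based visual navigation front end can extract from the frame."""
    gx = cv2.Sobel(gray_image, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_image, cv2.CV_32F, 0, 1, ksize=3)
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2)))


def pick_exposure(capture_fn, candidate_exposures_ms):
    """Capture one frame per candidate exposure and keep the best-scoring one.

    capture_fn(exposure_ms) -> grayscale uint8 image is a hypothetical camera
    interface; real drivers expose this through their own APIs.
    """
    scores = {e: gradient_score(capture_fn(e)) for e in candidate_exposures_ms}
    return max(scores, key=scores.get)
```

An exposure chosen this way favours usable image content for the navigation algorithm rather than visual appeal to a human viewer, which is precisely the distinction drawn above.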
Soroush helped to devise several algorithms for certifiably optimal estimation. Now a B.A.Sc. student in Engineering Science at U of T.
Jason worked on visual docking algorithms for our autonomous tail-sitter aerial vehicle.
Hudson worked on rover autonomy software as part of the U of T Robotics for Space Exploration competition team.
Yuchen examined reinforcement learning in cooperation with Prof. Florian Shkurti at UTM. Now an M.A.Sc. student in Prof. Tim Barfoot’s group at UTIAS.
The deep learning revolution has led to significant advances in the state of the art in computer vision and natural language processing. For mobile robotics to benefit from the fruits of this research, roboticists must ensure that these predictive algorithms are not only accurate in dynamic environments, in inclement weather, and under adverse lighting conditions, but that they also provide a consistent measure of uncertainty. In many cases, what is sufficient in a computer vision context is significantly deficient for use in mobile robotics, and vice versa.
For example, an object classification algorithm with an accuracy of 95% may be sufficient to reach the state of the art on some computer vision datasets, but may be completely unusable for safety-critical mobile autonomy applications. Conversely, an algorithm with an accuracy of 30% may be deemed unsatisfactory for many computer vision tasks, yet may be more than adequate for mobile vehicles if it operates at high frequency and produces consistent uncertainty estimates that can be used to discard poor classifications.
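As an illustration of that last point, the sketch below (a hypothetical example, not a method from the lab's work) discards classifications by thresholding the predictive entropy computed from several stochastic softmax samples per input, e.g. from test-time dropout; the threshold value and function names are assumptions made for illustration.

```python
import numpy as np


def predictive_entropy(mc_probs):
    """Entropy of the mean softmax over Monte Carlo samples.

    mc_probs: array of shape (num_samples, num_classes), one row per
    stochastic forward pass (e.g. with dropout left active at test time).
    """
    mean_p = mc_probs.mean(axis=0)
    entropy = float(-np.sum(mean_p * np.log(mean_p + 1e-12)))
    return entropy, mean_p


def filter_classifications(batch_mc_probs, entropy_threshold=0.5):
    """Keep only predictions whose predictive entropy is below the threshold."""
    accepted = []
    for mc_probs in batch_mc_probs:
        entropy, mean_p = predictive_entropy(mc_probs)
        if entropy < entropy_threshold:
            accepted.append((int(np.argmax(mean_p)), entropy))
    return accepted
```

A downstream planner can then act only on the accepted, low-uncertainty classifications, which is what makes a lower-accuracy but well-calibrated classifier usable on a mobile vehicle.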
Valentin’s research focused on bridging the gap between classical probabilistic state estimation and modern machine learning. He worked on several projects including:
HydraNet: A Network Structure for Learning Rotations with Uncertainty

HydraNet aids classical egomotion pipelines by extracting latent representations of rotation with aleatoric and epistemic uncertainty.
DPC-Net: Deep Pose Correction for Visual Localization

DPC-Net learns bias corrections to existing egomotion pipelines.
Sun-BCNN: Sun sensing through Bayesian CNNs

Sun-BCNN regresses the 3D direction of the sun to improve stereo VO.
Predictive Robust Estimation

PROBE maps visual landmarks into a prediction space in which their reliability can be predicted and used to weight them during egomotion estimation.
As robotics enters the “robust perception age”, a major focus of modern robotics research will be the design of perception systems capable of operating over extended periods of time in a broad range of environments. Visual perception in particular holds great promise in this area due to the wealth of information available from standard colour cameras. Indeed, we humans rely heavily on vision for navigating our daily lives. But how can we use vision to build persistent maps and localize against them when the appearance of the world is always changing?
Lee’s research focused on developing ways for robots to reason about more than just the geometry of their environment by incorporating information about illumination and appearance into the mapping and localization problem. In particular, he applied machine learning algorithms to create robust data-driven models of visual appearance, and used these models to enable long-term visual navigation.
Projects he worked on included:
Modelling Appearance Change for Long-term Visual Localization

CAT-Net learns to transform images to correspond to a previously-seen reference appearance.
Visual Sun Sensing

Sun-BCNN regresses the 3D direction of the sun to improve stereo VO.
Monocular Visual Teach & Repeat

MonoVT&R is capable of retracing human-taught routes with centimetre accuracy using only a monocular camera.
Shui Song visited us from NTU, Singapore, to work on semantic segmentation and mapping for manipulation tasks.
Juraj worked on mm-wave radar as part of ongoing research for his PhD in the LAMoR group at the University of Zagreb.
Brian developed extensions to our deep appearance modelling framework. Now a Machine Learning Engineer at Google, Mountain View.