Archives

Justin Tomasi

Cameras are a fundamental component of modern robotics platforms, and they are one of the driving factors behind the falling cost and growing popularity of robotics. As robotics platforms are deployed in a growing variety of environments, robust sensing in these environments becomes increasingly important. A major challenge presented by the use of cameras is improving the robustness of visual navigation algorithms. For example, in high-speed applications, captured images can become blurry. In outdoor applications, wide ranges in lighting conditions can wash out important information as images quickly become over- or underexposed; this is especially apparent for vehicles such as cars and trains entering and exiting tunnels, and for drones transitioning from indoor to outdoor flight. Built-in auto-exposure algorithms adjust for changes in illumination, but they are typically designed so that the captured scenes look best to human viewers, which does not necessarily mean they work well for computer vision applications.


Justin investigated methods for improving the quality of the images used by visual navigation algorithms through online adjustment of camera parameters, and examined how these adjustments can improve the performance of modern visual navigation pipelines.
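As a rough, hypothetical sketch of the general idea (not Justin's specific method), an online controller might tune exposure time to maximize an image-gradient score, since strong gradients are what feature-based navigation relies on. The `camera` object below is an assumed stand-in for a real camera driver.

```python
import numpy as np

def gradient_score(img: np.ndarray) -> float:
    """Sum of image gradient magnitudes: a proxy for how much
    trackable edge information the image retains."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.sqrt(gx**2 + gy**2).sum())

def adjust_exposure(camera, exposure_ms: float, step: float = 1.25,
                    iters: int = 10) -> float:
    """Greedy hill climb: nudge exposure up/down and keep whichever
    direction increases the gradient score. `camera` is a
    hypothetical interface with set_exposure(ms) and
    grab() -> grayscale ndarray."""
    camera.set_exposure(exposure_ms)
    best = gradient_score(camera.grab())
    for _ in range(iters):
        for candidate in (exposure_ms * step, exposure_ms / step):
            camera.set_exposure(candidate)
            score = gradient_score(camera.grab())
            if score > best:
                best, exposure_ms = score, candidate
                break
    return exposure_ms
```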

Yuchen Wu

Yuchen studied reinforcement learning in collaboration with Prof. Florian Shkurti at UTM. He is now an M.A.Sc. student in Prof. Tim Barfoot’s group at UTIAS.

Valentin Peretroukhin

The deep learning revolution has led to significant advances in the state of the art in computer vision and natural language processing. For mobile robotics to benefit from the fruits of this research, roboticists must ensure that these predictive algorithms are not only accurate in dynamic environments, in inclement weather, and under adverse lighting conditions, but that they also provide some consistent measure of uncertainty. In many cases, what is sufficient in a computer vision context is significantly deficient for use in mobile robotics, and vice versa.


For example, an object classification algorithm with an accuracy of 95% may be sufficient to reach the state of the art on some computer vision datasets, but may be completely unusable for safety-critical mobile autonomy applications. Conversely, an algorithm with an accuracy of 30% may be deemed unsatisfactory for many computer vision tasks, but may be more than enough for a mobile vehicle if it operates at high frequency and produces consistent uncertainty estimates that can be used to eliminate poor classifications.
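To make the second scenario concrete, here is a minimal illustrative sketch (an assumption-laden toy, not any particular published method) of gating classifications by predictive entropy so that a downstream planner can simply discard uncertain ones:

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Entropy of a class-probability vector (in nats); higher
    means the model is less sure which class is correct."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def gated_prediction(probs: np.ndarray, max_entropy: float = 0.5):
    """Return the predicted class, or None if the prediction is too
    uncertain to act on; None is 'no information', which is safer
    than trusting a poor classification."""
    if predictive_entropy(probs) > max_entropy:
        return None
    return int(np.argmax(probs))

print(gated_prediction(np.array([0.96, 0.02, 0.02])))  # -> 0
print(gated_prediction(np.array([0.40, 0.35, 0.25])))  # -> None
```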


Valentin’s research focused on bridging the gap between classical probabilistic state estimation and modern machine learning. He worked on several projects, including:

HydraNet: A Network Structure for Learning Rotations with Uncertainty

HydraNet aids classical egomotion pipelines by extracting latent representations of rotation with aleatoric and epistemic uncertainty.

Deep Probabilistic Regression of Elements of SO(3) using Quaternion Averaging and Uncertainty Injection
Valentin Peretroukhin, Brandon Wagstaff, and Jonathan Kelly
CVPR Workshop on Uncertainty and Robustness in Deep Visual Learning 2019. Long Beach, USA.
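Roughly, HydraNet's multiple output heads each predict a unit quaternion, and the predictions are fused by quaternion averaging, with their spread informing the uncertainty estimate. Below is a minimal NumPy sketch of the standard eigenvector averaging method (Markley et al.) that such a fusion step can build on; it is illustrative, not HydraNet's actual code.

```python
import numpy as np

def average_quaternions(quats: np.ndarray) -> np.ndarray:
    """Average unit quaternions (rows of an (N, 4) array) via the
    eigenvector method: the mean is the eigenvector of
    sum_i q_i q_i^T with the largest eigenvalue. This is immune to
    the q / -q sign ambiguity, since q q^T == (-q)(-q)^T."""
    M = quats.T @ quats                   # (4, 4) accumulator
    _, eigvecs = np.linalg.eigh(M)        # eigenvalues ascending
    q_mean = eigvecs[:, -1]               # top eigenvector
    return q_mean / np.linalg.norm(q_mean)

# Noisy copies of the identity rotation [1, 0, 0, 0]:
rng = np.random.default_rng(0)
qs = np.tile([1.0, 0.0, 0.0, 0.0], (100, 1))
qs += 0.05 * rng.standard_normal(qs.shape)
qs /= np.linalg.norm(qs, axis=1, keepdims=True)
print(average_quaternions(qs))  # close to [1, 0, 0, 0] (up to sign)
```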

DPC-Net: Deep Pose Correction for Visual Localization

DPC-Net learns bias corrections to existing egomotion pipelines.

DPC-Net: Deep Pose Correction for Visual Localization
Valentin Peretroukhin and Jonathan Kelly
IEEE Robotics and Automation Letters (RA-L) and ICRA 2018.
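To illustrate the correction idea: a learned correction can be expressed as a 6-vector in the Lie algebra se(3) and composed with the estimator's pose. The sketch below implements the standard SE(3) exponential map; the left-multiplication convention is an assumption for illustration, not necessarily DPC-Net's.

```python
import numpy as np

def se3_exp(xi: np.ndarray) -> np.ndarray:
    """Exponential map from xi = [rho, phi] in se(3) (translation,
    then rotation parameters) to a 4x4 homogeneous transform."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    K = np.array([[0.0, -phi[2], phi[1]],
                  [phi[2], 0.0, -phi[0]],
                  [-phi[1], phi[0], 0.0]])   # skew(phi)
    if theta < 1e-9:                          # small-angle limit
        R, V = np.eye(3) + K, np.eye(3)
    else:
        a = np.sin(theta) / theta
        b = (1.0 - np.cos(theta)) / theta**2
        c = (theta - np.sin(theta)) / theta**3
        R = np.eye(3) + a * K + b * (K @ K)
        V = np.eye(3) + b * K + c * (K @ K)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ rho
    return T

def apply_correction(T_est: np.ndarray, xi_pred: np.ndarray) -> np.ndarray:
    """Compose the network's predicted correction with the visual
    estimator's pose: T_corrected = exp(xi_pred) @ T_est."""
    return se3_exp(xi_pred) @ T_est
```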

Sun-BCNN: Sun sensing through Bayesian CNNs


Sun-BCNN regresses the 3D direction of the sun to improve stereo VO.

Inferring sun direction to improve visual odometry: A deep learning approach
Valentin Peretroukhin, Lee Clement, and Jonathan Kelly
IJRR 2018.
Reducing Drift in Visual Odometry by Inferring Sun Direction using a Bayesian Convolutional Neural Network
Valentin Peretroukhin, Lee Clement, and Jonathan Kelly
ICRA 2017. Singapore.
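The "Bayesian" in Sun-BCNN refers to Monte Carlo dropout: keeping dropout active at test time and sampling the network repeatedly, so that the scatter of the samples measures uncertainty. A minimal PyTorch sketch of that mechanism follows; `model` is an assumed stand-in mapping a (1, 3, H, W) image to a 3-vector, not the actual Sun-BCNN architecture.

```python
import torch

@torch.no_grad()
def mc_dropout_sun_direction(model: torch.nn.Module,
                             image: torch.Tensor,
                             n_samples: int = 25):
    """Sample the network with dropout enabled; return the mean
    unit sun-direction estimate and a 3x3 sample covariance that a
    VO back end can use to weight (or reject) the measurement."""
    model.train()  # train mode keeps dropout layers stochastic
    samples = []
    for _ in range(n_samples):
        d = model(image).squeeze(0)
        samples.append(d / d.norm())
    S = torch.stack(samples)        # (n_samples, 3)
    mean = S.mean(dim=0)
    mean = mean / mean.norm()       # unit direction estimate
    cov = torch.cov(S.T)            # spread = epistemic uncertainty
    return mean, cov
```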

Predictive Robust Estimation


PROBE maps visual landmarks into a prediction space.

PROBE-GK: Predictive Robust Estimation using Generalized Kernels
Valentin Peretroukhin, William Vega-Brown, Nicholas Roy, and Jonathan Kelly
ICRA 2016.
PROBE: Predictive Robust Estimation for visual-inertial navigation
Valentin Peretroukhin, Lee Clement, Matthew Giamou, and Jonathan Kelly
IROS 2015.
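The practical payoff of the prediction space is that each landmark's measurement reliability can be predicted and used to weight its residual in the pose objective. A simplified, hypothetical sketch (PROBE learns the variance model from data; here the variances are taken as given):

```python
import numpy as np

def weighted_reprojection_cost(residuals: np.ndarray,
                               predicted_vars: np.ndarray) -> float:
    """Weight each landmark's reprojection residual by a predicted
    measurement variance, so landmarks predicted to be unreliable
    contribute little to the pose optimization.

    residuals:      (N, 2) pixel reprojection errors
    predicted_vars: (N,) per-landmark variances from a learned model
    """
    w = 1.0 / predicted_vars                        # information weights
    return float((w[:, None] * residuals**2).sum())

# The second landmark has a large residual but is predicted to be
# unreliable, so it barely affects the cost.
r = np.array([[0.5, -0.2], [4.0, 3.5]])
v = np.array([1.0, 100.0])
print(weighted_reprojection_cost(r, v))
```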

Lee Clement

As robotics enters the “robust perception age”, a major focus of modern robotics research will be the design of perception systems capable of operating over extended periods of time in a broad range of environments. Visual perception in particular holds great promise in this area due to the wealth of information available from standard colour cameras. Indeed, we humans rely heavily on vision for navigating our daily lives. But how can we use vision to build persistent maps and localize against them when the appearance of the world is always changing?


Lee’s research focused on developing ways for robots to reason about more than just the geometry of their environment by incorporating information about illumination and appearance into the mapping and localization problem. In particular, he applied machine learning algorithms to create robust, data-driven models of visual appearance, and used these models to enable long-term visual navigation.


Projects he worked on included:

Modelling Appearance Change for Long-term Visual Localization


CAT-Net learns to transform images to match a previously seen reference appearance.

Learning Matchable Image Transformations for Long-term Metric Visual Localization
Lee Clement, Mona Gridseth, Justin Tomasi and Jonathan Kelly
IEEE RA-L and ICRA 2020. Paris, France.
Matchable Image Transformations for Long-term Metric Visual Localization
Lee Clement, Mona Gridseth, Justin Tomasi and Jonathan Kelly
CVPR Image Matching Workshop 2019. Long Beach, USA.
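As a toy illustration of the training setup (the real CAT-Net is a much deeper encoder-decoder with its own loss; everything below is a simplified stand-in), an image-to-image network can be supervised with pairs of images of the same scene captured under different appearance conditions, such as repeated traversals of one route:

```python
import torch
import torch.nn as nn

# Deliberately tiny encoder-decoder stand-in for the full model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(src: torch.Tensor, ref: torch.Tensor) -> float:
    """One supervised step: push an image captured under some
    appearance condition (src) toward the same scene rendered in
    the canonical reference condition (ref)."""
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(src), ref)
    loss.backward()
    opt.step()
    return loss.item()

# Smoke test with random stand-in images.
print(train_step(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)))
```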

Visual Sun Sensing


Sun-BCNN regresses the 3D direction of the sun to improve stereo VO.

Inferring sun direction to improve visual odometry: A deep learning approach
Valentin Peretroukhin, Lee Clement, and Jonathan Kelly
IJRR 2018.
Reducing Drift in Visual Odometry by Inferring Sun Direction using a Bayesian Convolutional Neural Network
Valentin Peretroukhin, Lee Clement, and Jonathan Kelly
ICRA 2017. Singapore.
Improving the Accuracy of Stereo Visual Odometry Using Visual Illumination Estimation
Lee Clement, Valentin Peretroukhin, and Jonathan Kelly
ISER 2016. Tokyo, Japan.

Monocular Visual Teach & Repeat


MonoVT&R is capable of retracing human-taught routes with centimetre accuracy using only a monocular camera.

Robust Monocular Visual Teach and Repeat Aided by Local Ground Planarity and Colour-Constant Imagery
Lee Clement, Jonathan Kelly, and Timothy D. Barfoot
JFR 2017.
Monocular Visual Teach and Repeat Aided by Local Ground Planarity
Lee Clement, Jonathan Kelly, and Timothy D. Barfoot
FSR 2015. Toronto, Canada.
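The core scale-recovery idea behind exploiting local ground planarity can be sketched briefly: monocular VO recovers translation only up to scale, but if the camera's height above a locally planar ground patch is known, metric scale follows. The snippet below is an illustrative reconstruction of that idea, not the paper's implementation.

```python
import numpy as np

def metric_scale_from_ground(points_cam: np.ndarray,
                             camera_height_m: float) -> float:
    """Recover metric scale for an up-to-scale monocular
    reconstruction from a known camera height above the ground.

    points_cam:      (N, 3) triangulated ground points in the
                     camera frame, correct only up to scale
    camera_height_m: measured camera height above the ground
    """
    # Fit a plane through the points by SVD (total least squares).
    centroid = points_cam.mean(axis=0)
    _, _, Vt = np.linalg.svd(points_cam - centroid)
    n = Vt[-1]                      # unit plane normal
    d = abs(n @ centroid)           # camera-to-plane distance (unscaled)
    return camera_height_m / d      # multiply VO translations by this
```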

Brian Aly

Brian developed extensions to our deep appearance modelling framework. He is now a Machine Learning Engineer at Google in Mountain View.