Machine learning, and specifically deep learning, has generated incredible advancements in robotics over the last 5-10 years. However, deep learning techniques are extremely data-hungry, often requiring millions of training examples to achieve reasonable results. In contrast, humans typically require very little training or practice to acquire many skills that would be highly nontrivial for a robot.
This conundrum naturally gives rise to a number of questions. How can we combine the representational power of the non-linear models underlying deep learning with the strong inherent manipulation capabilities of humans? Can humans “teach” a robot to complete complex tasks at which traditional, purely model-based approaches to robotics tend to fail? Can this be done while minimizing the amount of data required? And, finally, can we do this while still ensuring safety by monitoring our model’s uncertainty about its own capabilities?
Trevor’s current research attempts to chip away at some of these difficult questions by combining Imitation Learning techniques with existing planners and controllers, by using Bayesian principles, and by attempting to bridge the gap between Imitation Learning and Reinforcement Learning. He has also worked previously on techniques for improving the versatility and usability of mobile manipulators, such as self-calibration.
Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction

Our contact-based self-calibration procedure exclusively uses its immediate environment and on-board sensors.
Reinforcement learning offers a promising framework for developing algorithms that can reproduce hard-to-model behaviours in robotics. Recently, there have been many success stories in which reinforcement learning has solved problems previously considered prohibitively difficult for traditional AI techniques. Unfortunately, it is still not clear how to transfer these methods to robotic systems, where problems involve high-dimensional, continuous state and action spaces that are often only partially observable and far from noise-free.
Oliver is interested in investigating how robotic platforms can successfully reason and act in response to noisy sensor readings by learning useful representations of perceptual data. Specifically, he is interested in developing methods which learn to integrate multiple perception modalities, including underused modalities such as contact or force sensing, within reinforcement learning frameworks.
Modern, reliable autonomous navigation requires the fusion of data from multiple sensors to ensure that a vehicle’s positioning error is always bounded. The choice of on-board sensors has yet to be fully determined—the door is open for cutting edge fusion techniques to dramatically improve navigation accuracy. At present, vehicle sensor packages typically include cameras, LIDARs, and GNSS-INS systems. However, these sensor packages are often not sufficiently accurate or robust to inclement weather, such as snow or rain. The development of robust solutions is imperative for autonomous vehicles operating in Canada, where navigation in harsh weather conditions is a necessity.
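To give a flavour of the kind of fusion involved, a toy illustration (not part of any specific system described here) is the static special case of a Kalman filter update: two noisy position estimates of the same quantity are combined by inverse-variance weighting, and the fused estimate is more certain than either sensor alone. All numbers below are purely illustrative.

```python
# Minimal 1-D sensor-fusion sketch: combine two Gaussian position
# estimates by inverse-variance weighting. This is the static special
# case of a Kalman filter measurement update; real navigation stacks
# fuse full vehicle states over time.

def fuse(x1, var1, x2, var2):
    """Fuse two Gaussian estimates (mean, variance) of the same state."""
    w1 = var2 / (var1 + var2)            # weight on sensor 1
    w2 = var1 / (var1 + var2)            # weight on sensor 2
    x = w1 * x1 + w2 * x2                # fused mean, closer to the more precise sensor
    var = (var1 * var2) / (var1 + var2)  # fused variance is smaller than either input
    return x, var

# Hypothetical example: a coarse GNSS fix fused with precise lidar odometry.
x, var = fuse(10.0, 4.0, 10.8, 1.0)
print(x, var)  # the fused estimate lies near the lidar value, with variance < 1.0
```

Note how the fused variance is always below the smaller input variance, which is the sense in which fusing additional sensors keeps positioning error bounded.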
Emmett has a passion for investigating the application of novel sensors in the field of mobile robotics. Currently, he is examining the use of ground penetrating radar (GPR) to estimate the pose of a vehicle in inclement weather. Emmett’s goal is to improve upon the positional accuracy of state-of-the-art GPR localization algorithms, while performing a rigorous evaluation of GPR’s suitability as an all-weather solution for vehicle localization.
Localization with Ground Penetrating Radar

The ground penetrating radar antenna panels are mounted in the cavity. Calibration is soon to follow!
This project aims to leverage ground penetrating radar’s robustness to inclement weather for vehicle localization.
Long-range robotic mobility in extra-terrestrial environments, such as on the surface of Mars, allows humanity to learn about the solar system and our origins. These environments, however, are largely unknown and riddled with hazards that inhibit safe mobility, such as sandy terrain, steep slopes, and permanently shadowed regions.
Olivier draws inspiration from current and previous rover missions to the Moon and Mars to increase the resiliency of long-range navigation planning algorithms against environmental uncertainty. More specifically, his research focuses on adaptive online path planning and fault-tolerant safe traverse scheduling for solar-powered mobility. Target applications include long-range driving on Martian terrain and the exploration of permanently shadowed regions (PSRs) at the lunar south pole.
Energy-Aware Planning for Planetary Navigation

Orbital imagery and elevation model of the Canadian Space Agency’s Analogue Terrain.
Adam is working on interactive perception algorithms for mobile manipulators in collaboration with the Dynamic Systems Laboratory at UTIAS.
I envision a future where robots will have the ability to interact intelligently and safely with their environment using perception, prior knowledge, common sense, and reasoning. Wise use of these robots, combined with forward-looking policies, will reduce socio-economic inequalities and our environmental footprint.
My current research direction lies at the intersection of perception and planning, with an emphasis on efficient, model-based algorithms that are fast enough to be deployed on real robots in human-centric environments.
So far, I have had the chance to perform research in many incredible laboratories, including the Control and Robotics (CoRo) Laboratory at ETS with Prof. Vincent Duchaine, the Embodied Dexterity Group (EDG) at UC Berkeley with Prof. Hannah Stuart, the Orthopedics and Imaging Laboratory (LIO) at ETS with Prof. Rachid Aissaoui, and the STARS Laboratory at the University of Toronto with Prof. Jonathan Kelly.
I earned my bachelor's degree from the École de Technologie Supérieure (ETS) in the Department of Systems Engineering. At ETS, I was enrolled in Automated Manufacturing Engineering, which focuses on industrial robotics, mechatronics, and control. I then joined the University of Toronto Institute for Aerospace Studies (UTIAS), where I am currently pursuing a PhD degree in robotics with the STARS Laboratory, directed by Prof. Jonathan Kelly.
Jinbang is working on hybrid neurosymbolic task and motion planning algorithms in collaboration with the RVL Lab, U of T CS.