Archives

Dr. Filip Marić

Motion planning is one of the key challenges in robotics today. When determining how a robot should perform a task, environmental and safety factors must be considered: obstacles in the environment should be avoided, for example, while also accounting for sensor measurement uncertainty and possible energy restrictions. Current approaches to this problem range from classical optimization to deep learning. Filip developed motion planning algorithms with a focus on manipulators. He worked as a member of STARS and with the LAMoR group at the University of Zagreb.

Manipulability Optimization for Manipulator Motion


High and low manipulability variants of trajectories for performing the same task.

A Riemannian metric for geometry-aware singularity avoidance by articulated robots
Filip Marić, Luka Petrović, Marko Guberina, Jonathan Kelly, Ivan Petrović
Robotics and Autonomous Systems (2021)
Fast Manipulability Maximization Using Continuous-Time Trajectory Optimization
Filip Marić, Oliver Limoyo, Luka Petrović, Trevor Ablett, Ivan Petrović, Jonathan Kelly
IROS (2019)
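The manipulability measure optimized in work like the above is classically Yoshikawa's index, w = sqrt(det(J Jᵀ)), which vanishes at kinematic singularities. As an illustrative sketch only (not code from the papers), here is the index for a planar two-link arm, whose Jacobian has a standard closed form:

```python
import math

def manipulability_2r(q1, q2, l1=1.0, l2=1.0):
    """Yoshikawa manipulability w = sqrt(det(J J^T)) for a planar 2R arm."""
    # Position Jacobian of the end effector (2x2 for a planar 2R arm):
    # J = [[-l1 s1 - l2 s12, -l2 s12],
    #      [ l1 c1 + l2 c12,  l2 c12]]
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    j11, j12 = -l1 * s1 - l2 * s12, -l2 * s12
    j21, j22 = l1 * c1 + l2 * c12, l2 * c12
    # For a square Jacobian, w = |det(J)|; here det(J) = l1 * l2 * sin(q2).
    det = j11 * j22 - j12 * j21
    return abs(det)

# The index vanishes at the outstretched singularity (q2 = 0) and peaks
# near q2 = pi/2, matching the high/low manipulability trajectories above.
```

High-manipulability trajectories keep this quantity large along the path, so the arm retains the ability to move its end effector in all directions.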

Global Polynomial Optimization for Robot Kinematics


Convex relaxations for polynomial formulations of inverse kinematics.

Convex Iteration for Distance-Geometric Inverse Kinematics
Matthew Giamou*, Filip Marić*, David M. Rosen, Valentin Peretroukhin, Nicholas Roy, Ivan Petrović, Jonathan Kelly
ICRA (2022)
Inverse Kinematics for Serial Kinematic Chains via Sum of Squares Optimization
Filip Marić, Matthew Giamou, Soroush Khoubyarian, Ivan Petrović, Jonathan Kelly
ICRA (2020)

Distance-Geometric Inverse Kinematics

Encoding the configuration of a manipulator as a partially complete graph.

Riemannian Optimization for Distance-Geometric Inverse Kinematics
Filip Marić, Matthew Giamou, Adam W. Hall, Soroush Khoubyarian, Ivan Petrović, Jonathan Kelly
IEEE TRO (2022)
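In the distance-geometric view, a configuration is encoded by pairwise distances between points placed along the kinematic chain: distances between consecutive points are fixed by the structure, while the remaining entries vary with the joint angles, so IK becomes a distance-matrix completion problem. A toy sketch of this encoding for a planar chain (illustrative only, not the solver from the paper):

```python
import math
from itertools import combinations

def chain_points(link_lengths, joint_angles):
    """Positions of the joints of a planar serial chain (base at origin)."""
    pts, x, y, th = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for l, q in zip(link_lengths, joint_angles):
        th += q
        x, y = x + l * math.cos(th), y + l * math.sin(th)
        pts.append((x, y))
    return pts

def distance_matrix(pts):
    """Full matrix of pairwise Euclidean distances between the points."""
    n = len(pts)
    d = [[0.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        d[i][j] = d[j][i] = math.hypot(pts[i][0] - pts[j][0],
                                       pts[i][1] - pts[j][1])
    return d

# Entries d[i][i+1] equal the link lengths in every configuration; the
# off-chain entries encode the pose. Solving IK then means completing a
# partially specified distance matrix subject to these fixed entries.
```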

Dr. Matthew Giamou

What performance guarantees exist for algorithms running on complex robot systems that operate in dynamic environments shared with humans and other autonomous agents? This critical question motivates Matt’s past and ongoing work on safe robotic estimation and planning. Matt completed his Master’s degree in Aeronautical Engineering at MIT, where he studied resource-efficient simultaneous localization and mapping (SLAM) with the Aerospace Controls Laboratory. His work focused on optimal communication and computation for multi-robot systems using SLAM in challenging mission scenarios like wilderness search and rescue.

 

During his PhD, Matt applied global polynomial optimization techniques to various estimation and planning problems involving 3D position and orientation. Matt is also interested in deriving bounds on measurement noise that ensure observability and fast, globally optimal solutions to key robotic estimation problems. These optimization methods, when combined with state-of-the-art learning-based solutions, will form a high-performance and provably safe architecture for mobile autonomous systems. Matt was a Vector Institute Post-Graduate Affiliate and the recipient of a 2019 Royal Bank of Canada Fellowship, among several other major awards. Matt worked on several projects including:

 

Global Polynomial Optimization for Robot Kinematics



Convex relaxations for polynomial formulations of inverse kinematics.

Convex Iteration for Distance-Geometric Inverse Kinematics
Matthew Giamou*, Filip Marić*, David M. Rosen, Valentin Peretroukhin, Nicholas Roy, Ivan Petrović, Jonathan Kelly
ICRA (2022)
Inverse Kinematics for Serial Kinematic Chains via Sum of Squares Optimization
Filip Marić*, Matthew Giamou*, Soroush Khoubyarian, Ivan Petrović, Jonathan Kelly
ICRA (2020)

Certifiably Globally Optimal Estimation via Convex Relaxations



Dual SDP relaxation for extrinsic calibration.

Sparse Bounded Degree Sum of Squares Optimization for Certifiably Globally Optimal Rotation Averaging
Matthew Giamou, Filip Marić, Valentin Peretroukhin, Jonathan Kelly
arXiv preprint
Certifiably Globally Optimal Extrinsic Calibration from Per-Sensor Egomotion
Matthew Giamou, Ziye Ma, Valentin Peretroukhin, Jonathan Kelly
IEEE RA-L (2019)

Sensor Calibration for Robotic Systems



Self-calibration between sensors.

Certifiably Optimal Monocular Hand-Eye Calibration
Emmett Wise*, Matthew Giamou*, Soroush Khoubyarian, Abhinav Grover, Jonathan Kelly
MFI (2020)
Entropy-Based Calibration of 2D Lidars to Egomotion Sensors
Jacob Lambert, Lee Clement, Matthew Giamou, Jonathan Kelly
MFI (2016)

Resource-Efficient Communication for Multi-Robot SLAM



Measurement exchange graph for multi-robot SLAM.

Near-Optimal Budgeted Data Exchange for Distributed Loop Closure Detection
Yulun Tian, Kasra Khosoussi, Matthew Giamou, Jonathan How, Jonathan Kelly
RSS (2018)

* Denotes equal contribution.

Dr. Brandon Wagstaff

Brandon’s research focused on using low-cost sensors (cameras and IMUs) for navigation and localization. In particular, he investigated how deep neural networks could be combined with classical estimators to yield better overall performance under nominal and degraded conditions. Ultimately, his work was intended to produce algorithms that are able to operate within challenging environments, where classical algorithms are prone to failure.

 

For example, classical algorithms commonly rely on parameter tuning and calibration that are highly sensitive to an agent’s motion and to the environment in which the agent operates. One of Brandon’s goals was to obviate the need for calibration or parameter tuning by replacing the sensitive components of the system with more robust learning-based models. Systems modified in this way are better able to operate within continuously changing environments and over longer periods of time. He worked on several projects, including:

Foot-Mounted Inertial Navigation for Indoor Localization

Foot-mounted inertial sensing can be used for first-responder localization.

Improving Foot-Mounted Inertial Navigation Through Real-Time Motion Classification
Brandon Wagstaff, Valentin Peretroukhin, and Jonathan Kelly, IPIN (2017)
LSTM-Based Zero-Velocity Detection for Robust Inertial Navigation
Brandon Wagstaff and Jonathan Kelly, IPIN (2018)
Robust Data-Driven Zero-Velocity Detection for Foot-Mounted Inertial Navigation
Brandon Wagstaff and Jonathan Kelly, IEEE Sensors J. (2020)
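Zero-velocity detection decides when the foot is stationary so that zero-velocity pseudo-measurements can correct inertial drift. As a sketch of the classical hand-tuned baseline that the learned (motion-classified and LSTM-based) detectors above improve upon, here is a simple windowed threshold rule; the thresholds and window size are illustrative values, not tuned parameters from the papers:

```python
import math

GRAVITY = 9.81  # m/s^2

def zero_velocity_mask(accel, gyro, accel_tol=0.3, gyro_tol=0.2, window=5):
    """Flag samples where the foot appears stationary (stance phase).

    accel: list of (ax, ay, az) specific-force samples in m/s^2
    gyro:  list of (gx, gy, gz) angular-rate samples in rad/s
    A sample is flagged when, over the surrounding window, the specific-
    force magnitude stays near gravity and the angular rate stays small.
    """
    n = len(accel)
    mask = []
    for k in range(n):
        lo, hi = max(0, k - window), min(n, k + window + 1)
        still = all(
            abs(math.sqrt(ax * ax + ay * ay + az * az) - GRAVITY) < accel_tol
            and math.sqrt(gx * gx + gy * gy + gz * gz) < gyro_tol
            for (ax, ay, az), (gx, gy, gz) in zip(accel[lo:hi], gyro[lo:hi])
        )
        mask.append(still)
    return mask
```

During flagged intervals, a navigation filter applies zero-velocity updates to reset accumulated velocity error; fixed thresholds like these are exactly the motion- and environment-sensitive components that the learned detectors replace.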

Deep Measurement Models for Visual-Inertial Navigation

The type of coupling between depth and egomotion networks has a significant effect on navigation performance.

Self-Supervised Deep Pose Corrections for Robust Visual Odometry
Brandon Wagstaff, Valentin Peretroukhin, and Jonathan Kelly, ICRA (2020)
Self-Supervised Scale Recovery for Monocular Depth and Egomotion Estimation
Brandon Wagstaff and Jonathan Kelly, IROS (2021)
On the Coupling of Depth and Egomotion Networks for Self-Supervised Structure from Motion
Brandon Wagstaff, Valentin Peretroukhin, and Jonathan Kelly, IEEE RA-L (2022)
A Self-Supervised, Differentiable Kalman Filter for Uncertainty-Aware Visual-Inertial Odometry
Brandon Wagstaff, Emmett Wise, and Jonathan Kelly, IEEE AIM (2022)

Andrej Janda

Maintaining an accurate representation of the environment is necessary for many tasks in robotics, such as navigation, obstacle avoidance, and scene understanding. The two most common scene representations, particularly for scene understanding tasks, are images and point clouds. Images contain dense, feature-rich information but lack knowledge about distances and object sizes, and objects in images are prone to occlusion. Modelling the 3D world directly with point clouds circumvents many of the limitations inherent to images. However, point clouds present their own challenges: in contrast to images, they are significantly harder to annotate, and this difficulty has resulted in considerable effort and long labelling times for existing 3D datasets.

A successful approach to reducing reliance on annotations is self-supervised learning, which leverages unsupervised training on a large unlabelled dataset to initialize the parameters of a given model; the model is subsequently trained with supervised annotations on a downstream task. Previous work has focused on self-supervised pre-training with point cloud data exclusively, which neglects the information-rich images that are often available as part of 3D datasets.

 

Andrej investigated a pre-training method that leverages images as an additional modality, by learning self-supervised image features that can be used to pre-train a 3D model. An advantage of incorporating visual data into the pre-training pipeline is that only a single point cloud scan and the corresponding images are required during pre-training. Despite using single scans, Andrej’s method performs competitively with approaches that use overlapping point cloud scans. Notably, his method yields more consistent performance gains than other, related algorithms.

Erin Richardson

Erin worked on lunar rover navigation planning software in cooperation with MDA. Now a Ph.D. student at CU Boulder in bioastronautics, working with Prof. Allison Anderson.

Abhinav Grover

The ability to perceive object slip through tactile feedback allows humans to accomplish complex manipulation tasks. Tactile signals provide vital information about slip faster than any exteroceptive perception method such as vision. Slip can be both disastrous (e.g., when transporting a fragile object) and advantageous (e.g., when moving an object without lifting it) depending on the context. For robots, however, detecting slip from tactile data remains challenging. This is due, in part, to the limited range of tactile sensors available and to the nature of tactile signal transduction.

 

Abhinav explored a learning-based method to detect slip using barometric tactile sensors. These sensors have many desirable properties; they are durable, highly reliable, and built from inexpensive components. He collected a novel tactile dataset and trained a temporal convolutional neural network to detect slip events. When tested on two robot manipulation tasks involving a variety of common objects, the detector demonstrated generalization to previously unseen objects. This is the first time that barometric tactile sensing technology, combined with data-driven learning, has been applied to slip detection.
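The building block of a temporal convolutional network is a causal (optionally dilated) 1D convolution, so each output depends only on current and past samples of the tactile signal. A self-contained toy sketch of that operation on a pressure trace (illustrative only; this is not Abhinav's trained detector, and the kernel below is a hand-picked first-difference filter):

```python
def causal_conv1d(signal, kernel, dilation=1):
    """Causal dilated 1D convolution: the output at time t depends only
    on samples at t, t - d, t - 2d, ... (the core op of a TCN layer)."""
    k = len(kernel)
    out = []
    for t in range(len(signal)):
        acc = 0.0
        for i in range(k):
            j = t - i * dilation
            if j >= 0:  # ignore taps before the start of the signal
                acc += kernel[i] * signal[j]
        out.append(acc)
    return out

# A first-difference kernel responds to sudden pressure changes, the kind
# of transient a slip event produces in a barometric tactile signal.
pressure = [1.0, 1.0, 1.0, 0.2, 0.2, 1.0]   # toy normalized readings
response = causal_conv1d(pressure, [1.0, -1.0])
```

A trained TCN stacks many such layers with learned kernels and increasing dilation, giving a long receptive field over the pressure history while remaining causal, which is what allows slip to be flagged with low latency.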

 

Petra Alexson

Petra helped to build our GNN-based inverse kinematics solver. Now a B.A.Sc. student in Engineering Science at U of T.

Kelly Zhu

Kelly was involved in the design and testing of a new uncertainty-aware stochastic path planning framework. Now a B.A.Sc. student in Engineering Science at U of T.