Self-Supervised Geometry-Guided Initialization for Robust Monocular Visual Odometry

1 Frontier Research Center,
Toyota Motor Corporation
2 Toyota Research Institute

The code will be made available soon.
(Posted in December 2024)

We propose SG-Init: a simple yet effective geometric guide for learned SLAM optimization.

Abstract


Monocular visual odometry is a key technology in a wide variety of autonomous systems. Unlike traditional feature-based methods, which suffer from failures due to poor lighting, insufficient texture, large motions, and similar conditions, recent learning-based SLAM methods exploit iterative dense bundle adjustment to address such failure cases and achieve robust, accurate localization in a wide variety of real environments, without depending on domain-specific training data. However, despite this potential, learning-based SLAM still struggles with scenarios involving large motion and object dynamics. In this paper, we diagnose key weaknesses in a popular learning-based SLAM model (DROID-SLAM) by analyzing major failure cases on outdoor benchmarks and exposing various shortcomings of its optimization process. We then propose the use of self-supervised priors from a frozen, large-scale pre-trained monocular depth estimation model to initialize the dense bundle adjustment process, leading to robust visual odometry without the need to fine-tune the SLAM backbone. Despite its simplicity, our proposed method demonstrates significant improvements on KITTI odometry, as well as on the challenging DDAD benchmark.
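To make the idea concrete, the minimal PyTorch-style sketch below illustrates how a frozen, pre-trained monocular depth model could supply the initial inverse depth for an otherwise unmodified dense bundle adjustment backend. The names depth_model, init_inverse_depth, and run_dense_bundle_adjustment are hypothetical placeholders rather than the released SG-Init or DROID-SLAM API; this is an illustrative sketch under those assumptions, not the authors' implementation.

import torch

@torch.no_grad()
def init_inverse_depth(frames: torch.Tensor, depth_model: torch.nn.Module,
                       eps: float = 1e-6) -> torch.Tensor:
    """Predict per-frame depth with a frozen monodepth network and convert it
    to the inverse-depth parameterization used by dense bundle adjustment."""
    depth_model.eval()                 # the depth prior stays frozen (no fine-tuning)
    depth = depth_model(frames)        # assumed output shape: (N, 1, H, W)
    return 1.0 / depth.clamp(min=eps)  # numerically safe inverse depth

def run_vo_with_geometry_guided_init(frames, poses_init, depth_model,
                                     run_dense_bundle_adjustment):
    """Hypothetical driver: geometry-guided initialization, followed by the
    standard iterative dense bundle adjustment of an unmodified SLAM backbone."""
    inv_depth_init = init_inverse_depth(frames, depth_model)
    # Only the starting point of the optimization changes; the backbone does not.
    return run_dense_bundle_adjustment(frames, poses_init, inv_depth_init)

In this sketch, the depth prior replaces the constant initialization typically used before the first bundle adjustment iteration; everything downstream is left untouched.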

Method


A simple yet principled way to robustify visual odometry.


Vulnerability Analysis


Our proposal is founded on a comprehensive analysis of the de facto standard learning-based SLAM approach.


Our Related Projects


We introduce SESC, a novel method for extrinsic calibration that builds upon the principles of self-supervised monocular depth and ego-motion learning. Without manual camera calibration or 3D sensors like LiDAR, our proposed curriculum scheduling facilitates joint depth, ego-motion, and extrinsics learning.
We introduce ZeroDepth, a novel variational monocular depth estimation framework capable of transferring metrically accurate predictions across datasets with different camera geometries.

Citation


@article{frc-tri-sginit,
  title   = {Self-Supervised Geometry-Guided Initialization for Robust Monocular Visual Odometry},
  author  = {Takayuki Kanai and Igor Vasiljevic and Vitor Guizilini and Kazuhiro Shintani},
  journal = {arXiv},
  year    = {2024},
}

Notification


This project page was developed and published solely as a visualization companion to the publication titled "Self-Supervised Geometry-Guided Initialization for Robust Monocular Visual Odometry". We do not guarantee future maintenance or monitoring of this page.

Contents may be updated or deleted without notice, following updates to the original manuscript or changes in policy.

This webpage template was adapted from DiffusionNOCS; we thank Takuya Ikeda for additional support and for making the source available.