This letter proposes a method of global localization on a map with semantic object landmarks. One of the most promising approaches to localization on object maps is semantic graph matching using landmark descriptors calculated from the distribution of surrounding objects. However, such descriptors are vulnerable to misclassification and partial observations. Moreover, many existing methods rely on inlier extraction using RANSAC, which is stochastic and sensitive to high outlier rates. To address the former issue, we augment the correspondence matching using Vision Language Models (VLMs): landmark discriminability is improved by VLM embeddings, which are independent of surrounding objects. To address the latter, inliers are estimated deterministically using a graph-theoretic approach. We also incorporate pose calculation via weighted least squares, which considers correspondence similarity and observation completeness, to improve robustness. We confirmed improvements in matching and pose estimation accuracy through experiments on the ScanNet and TUM datasets.
We propose a hybrid object descriptor that combines the advantages of a VLM (CLIP) and general object detectors via semantic graphs. The CLIP descriptor provides excellent discriminability, while the semantic graph offers robust estimation and spatial information. By combining the two, we achieve a more accurate and robust object descriptor.
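As a rough sketch, the combined similarity between an observed landmark and a map landmark can be expressed as a blend of CLIP-embedding similarity and semantic-graph descriptor similarity. The function names and the mixing weight `alpha` below are illustrative assumptions, not the paper's exact formulation:

```python
# Hybrid landmark similarity: a minimal sketch assuming each landmark
# carries (1) a CLIP embedding and (2) a semantic-graph descriptor
# (e.g., a histogram over neighboring object classes). The mixing
# weight alpha is a hypothetical parameter, not the paper's value.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two descriptor vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def hybrid_similarity(clip_q, clip_m, graph_q, graph_m, alpha=0.5):
    """Blend CLIP similarity (discriminability, independent of neighbors)
    with semantic-graph similarity (spatial context)."""
    return alpha * cosine_sim(clip_q, clip_m) + (1 - alpha) * cosine_sim(graph_q, graph_m)
```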
To extract likely object correspondences, we propose a new strategy that adaptively selects the number of candidates based on the similarity distribution among the landmarks. Unlike conventional one-to-one or top-k matching, the proposed method can handle an arbitrary number of candidates per landmark. This comes at the cost of a higher outlier ratio, which we handle with the robust graph-theoretic inlier extraction, as sketched below.
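One plausible way to realize such adaptive selection, assuming a per-landmark vector of similarities to all map landmarks, is to keep every candidate whose similarity clears a threshold derived from the distribution itself. The mean-plus-k-standard-deviations rule below is a placeholder, not the paper's exact criterion:

```python
import numpy as np

def adaptive_candidates(sims: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Select map-landmark indices whose similarity to a query landmark
    exceeds an adaptive, distribution-derived threshold (illustrative rule)."""
    threshold = sims.mean() + k * sims.std()
    candidates = np.flatnonzero(sims > threshold)
    # Fall back to the single best match if nothing clears the threshold.
    return candidates if candidates.size > 0 else np.array([int(sims.argmax())])
```

A lower k admits more candidates per landmark, trading a higher outlier ratio for better recall; the clique-based inlier extraction below absorbs that trade-off.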
We represent the pairwise consistency among the correspondence candidates as a consistency graph, where each node represents one correspondence and an edge between two nodes represents the consistency between them. Sets of mutually consistent correspondences are extracted as maximal cliques in the graph. This algorithm is deterministic and robust to outliers.
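A minimal sketch of this step, assuming correspondences are given as paired 3D points and consistency is tested via rigid-motion distance preservation (the threshold eps and the use of networkx are assumptions for illustration):

```python
# Deterministic inlier extraction via maximal cliques on a consistency
# graph. Correspondences are (observed_point, map_point) pairs; two
# correspondences are consistent if a single rigid motion could explain
# both, i.e., their pairwise distances agree up to eps.
import itertools
import networkx as nx
import numpy as np

def extract_inliers(obs_pts: np.ndarray, map_pts: np.ndarray, eps: float = 0.2):
    """Return the largest maximal clique of mutually consistent
    correspondence indices (no randomness, unlike RANSAC)."""
    n = len(obs_pts)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i, j in itertools.combinations(range(n), 2):
        d_obs = np.linalg.norm(obs_pts[i] - obs_pts[j])
        d_map = np.linalg.norm(map_pts[i] - map_pts[j])
        if abs(d_obs - d_map) < eps:  # distances preserved under rigid motion
            g.add_edge(i, j)
    # Enumerate maximal cliques and keep the largest as the inlier set.
    return max(nx.find_cliques(g), key=len)
```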
The final sensor pose is calculated via weighted least squares. The weights are computed from the similarity of the correspondences and the completeness of the observations to mitigate the problems of inaccurate matching and incomplete observations.
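The closed-form solution is a weighted Kabsch/Umeyama alignment. The sketch below assumes the per-correspondence weights are already given (e.g., as a product of similarity and completeness scores, which is our guess at a reasonable construction, not the paper's exact weighting):

```python
# Weighted least-squares rigid alignment: minimize
#   sum_i w_i * || R @ obs_i + t - map_i ||^2
# over rotation R and translation t (weighted Kabsch, no scale).
import numpy as np

def weighted_pose(obs_pts: np.ndarray, map_pts: np.ndarray, weights: np.ndarray):
    w = weights / weights.sum()
    mu_obs = (w[:, None] * obs_pts).sum(axis=0)   # weighted centroids
    mu_map = (w[:, None] * map_pts).sum(axis=0)
    # Weighted cross-covariance of the centered point sets.
    H = (obs_pts - mu_obs).T @ (w[:, None] * (map_pts - mu_map))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_map - R @ mu_obs
    return R, t
```

Down-weighting correspondences with low similarity or heavily occluded observations keeps a few bad matches from dominating the pose estimate.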
Visualization of the estimation results on TUM fr3/long_office_household. Left: the object map and the semantic graph. Right: top-down view of the sample trajectory, the estimation results, and one case each in which only our method succeeds or fails compared to SH [6]. In the successful case, the graph is much sparser than the map, which degrades matching based on SHs; nevertheless, pose estimation succeeded thanks to the CLIP-based descriptors. In the failure case, the connectivity is denser, yet correspondence matching failed. Inspecting the detections, many observations are small and blurred, which likely degraded CLIP's inference accuracy and caused the failure.
More results, including ablation studies, are presented in the paper.
@article{Matsuzaki2024RAL,
author = {Matsuzaki, Shigemichi and Tanaka, Kazuhito and Shintani, Kazuhiro},
doi = {10.1109/LRA.2024.3474482},
issn = {2377-3766},
journal = {IEEE Robotics and Automation Letters},
month = {nov},
number = {11},
pages = {10399--10406},
title = {{CLIP-Clique: Graph-Based Correspondence Matching Augmented by Vision Language Models for Object-Based Global Localization}},
url = {https://ieeexplore.ieee.org/document/10705086/},
volume = {9},
year = {2024}
}
This project page was developed and published solely as part of the publication titled ``CLIP-Clique: Graph-Based Correspondence Matching Augmented by Vision Language Models for Object-Based Global Localization'' for visualization purposes. We do not guarantee future maintenance or monitoring of this page.
Contents may be updated or deleted without notice, depending on updates to the original manuscript or policy changes.
This webpage template was adapted from DiffusionNOCS; we thank Takuya Ikeda for additional support and for making the source available.