KineDepth: Utilizing Robot Kinematics for Online Metric Depth Estimation

arXiv preprint arXiv:2409.19490, 2024

Soofiyan Atar, Yuheng Zhi, Florian Richter, Michael Yip

Abstract: Depth perception is essential for a robot’s spatial and geometric understanding of its environment, and many tasks have traditionally relied on hardware-based depth sensors such as RGB-D or stereo cameras. However, these sensors face practical limitations, including issues with transparent and reflective objects, high costs, calibration complexity, spatial and energy constraints, and increased failure rates in compound systems. While monocular depth estimation methods offer a cost-effective and simpler alternative, their adoption in robotics has been limited because they output relative rather than metric depth, which robotic applications require. In this paper, we propose a method that utilizes a single calibrated camera, enabling the robot to act as a “measuring stick” to convert relative depth estimates into metric depth in real time as tasks are performed. Our approach employs an LSTM-based metric depth regressor, trained online and refined through probabilistic filtering, to accurately restore metric depth across the monocular depth map, particularly in areas proximal to the robot’s motion. Experiments with real robots demonstrate that our method significantly outperforms current state-of-the-art monocular metric depth estimation techniques, achieving a 22.1% reduction in depth error and a 52% increase in success rate for a downstream task.
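To make the “measuring stick” idea concrete, here is a minimal sketch of the underlying geometry, not the paper’s actual implementation (which trains an LSTM regressor online and refines it with probabilistic filtering). It assumes a calibrated camera, robot keypoint positions from forward kinematics, and a relative depth map from an off-the-shelf monocular model; the function names and the simple per-frame least-squares scale/shift fit are illustrative assumptions, not code from the paper.

```python
import numpy as np

def project_keypoints(K, T_cam_base, p_base):
    """Project base-frame robot keypoints into the calibrated camera.

    K          : (3, 3) camera intrinsics
    T_cam_base : (4, 4) camera-from-base extrinsics
    p_base     : (N, 3) keypoint positions from forward kinematics
    Returns pixel coordinates (N, 2) and metric depths (N,).
    """
    p_h = np.hstack([p_base, np.ones((len(p_base), 1))])
    p_cam = (T_cam_base @ p_h.T).T[:, :3]          # points in camera frame
    uv = (K @ p_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                    # perspective division
    return np.round(uv).astype(int), p_cam[:, 2]   # pixel coords, metric z

def fit_scale_shift(rel_depth, robot_px, robot_depth_m):
    """Fit a per-frame scale/shift mapping relative depth -> metric depth,
    anchored at robot keypoints whose metric depth is known from kinematics.

    rel_depth     : (H, W) relative depth map from a monocular model
    robot_px      : (N, 2) keypoint pixel coordinates (u, v)
    robot_depth_m : (N,)   metric depths of those keypoints
    """
    d_rel = rel_depth[robot_px[:, 1], robot_px[:, 0]]           # sample map at keypoints
    A = np.stack([d_rel, np.ones_like(d_rel)], axis=1)          # columns [d_rel, 1]
    (s, t), *_ = np.linalg.lstsq(A, robot_depth_m, rcond=None)  # least-squares fit
    return s, t  # metric_depth ~= s * rel_depth + t
```

A per-frame affine fit like this only holds near the robot and is noisy frame to frame; per the abstract, the paper instead regresses the metric mapping with an online-trained LSTM and probabilistic filtering so it stays accurate over time, particularly in regions proximal to the robot’s motion.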

Atar, S., Zhi, Y., Richter, F., and Yip, M. (2024). KineDepth: Utilizing Robot Kinematics for Online Metric Depth Estimation. arXiv preprint arXiv:2409.19490, pp. 1–8.

arXiv: https://arxiv.org/pdf/2409.19490