Markerless camera-to-robot pose estimation via self-supervised sim-to-real transfer

Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023

Jingpei Lu, Florian Richter, Michael C. Yip

Abstract: Solving the camera-to-robot pose is a fundamental requirement for vision-based robot control, and is a process that takes considerable effort and care to make accurate. Traditional approaches require modification of the robot via markers, and subsequent deep learning approaches enabled markerless feature extraction. Because acquiring 3D annotations for real-world data is labor-intensive, mainstream deep learning methods use only synthetic data and rely on Domain Randomization to bridge the sim-to-real gap. In this work, we go beyond the limitation of 3D annotations for real-world data. We propose an end-to-end pose estimation framework that is capable of online camera-to-robot calibration, together with a self-supervised training method that scales training to unlabeled real-world data. Our framework combines deep learning and geometric vision to solve the robot pose, and the pipeline is fully differentiable. To train the Camera-to-Robot Pose Estimation Network (CtRNet), we leverage foreground segmentation and differentiable rendering for image-level self-supervision. The pose prediction is rendered to an image, and the image loss against the input image is back-propagated to train the neural network. Our experimental results on two public real-world datasets confirm the effectiveness of our approach over existing works. We also integrate our framework into a visual servoing system to demonstrate the promise of real-time, precise robot pose estimation for automation tasks.
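As a rough sketch of the image-level self-supervision described in the abstract (the names pose_net, renderer, segmenter, and self_supervised_step are hypothetical stand-ins, not the paper's actual API; a PyTorch setup is assumed), one training step might look like:

    import torch
    import torch.nn.functional as F

    def self_supervised_step(pose_net, renderer, segmenter, optimizer,
                             image, joint_angles):
        """One self-supervised step: render the predicted pose and
        back-propagate an image-level loss against the segmented foreground."""
        # Predict the camera-to-robot pose from a single RGB image.
        pose = pose_net(image)
        # Differentiably render the robot silhouette at the predicted pose,
        # using the known joint angles for the kinematic chain.
        rendered_mask = renderer(pose, joint_angles)
        # Foreground segmentation of the input image serves as the
        # supervision target; no gradient flows into the segmenter here.
        with torch.no_grad():
            target_mask = segmenter(image)
        # Image-level loss between rendered and segmented silhouettes.
        loss = F.mse_loss(rendered_mask, target_mask)
        optimizer.zero_grad()
        loss.backward()  # gradients flow back through the differentiable renderer
        optimizer.step()
        return loss.item()

Because the renderer is differentiable, the image loss directly supervises the pose network, so no 3D labels on the real images are needed.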

Lu, J., Richter, F., and Yip, M. C. (2023). Markerless camera-to-robot pose estimation via self-supervised sim-to-real transfer. Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 21296-21306.

Pub Link: http://openaccess.thecvf.com/content/CVPR2023/html/Lu_Markerless_Camera-to-Robot_Pose_Estimation_via_Self-Supervised_Sim-to-Real_Transfer_CVPR_2023_paper.html
arXiv: