Image-based pose estimation and shape reconstruction for robot manipulators and soft, continuum robots via differentiable rendering

Proc. IEEE International Conference on Robotics and Automation (ICRA), 2023

Jingpei Lu, Fei Liu, Cedric Girerd, Michael C Yip

Abstract: State estimation from measured data is crucial for robotic applications, as autonomous systems rely on sensors to capture motion and localize in the 3D world. Among sensors designed for measuring a robot’s pose, or, for soft robots, their shape, vision sensors are favorable because they are information-rich, easy to set up, and cost-effective. With recent advancements in computer vision, deep learning-based methods no longer require markers for identifying feature points on the robot. However, learning-based methods are data-hungry and hence not suitable for soft robots and robot prototypes, as building such benchmarking datasets is usually infeasible. In this work, we achieve image-based robot pose estimation and shape reconstruction from camera images. Our method requires no precise robot meshes, but rather utilizes a differentiable renderer and primitive shapes. Hence, it can be applied to robots for which CAD models might not be available or are crude. Our parameter estimation pipeline is fully differentiable. The robot shape and pose are estimated iteratively by back-propagating the image loss to update the parameters. We demonstrate that our method of using geometrical shape primitives can achieve high accuracy in shape reconstruction for a soft continuum robot and pose estimation for a robot manipulator.
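The abstract describes an iterative loop: render geometric primitives through a differentiable renderer, compare the rendering to the camera image, and back-propagate the image loss to update shape and pose parameters. The sketch below is a minimal, self-contained illustration of that idea in plain PyTorch, not the authors' implementation: the camera intrinsics, image size, toy Gaussian-splat silhouette renderer, and sphere primitives are all assumptions chosen purely for illustration.

```python
# Minimal sketch (assumed values throughout, not the paper's pipeline): render sphere
# primitives as a soft silhouette, compare against a target image, and back-propagate
# the image loss to refine the primitive parameters.
import torch

H, W = 64, 64                      # image resolution (assumed)
fx = fy = 60.0                     # focal lengths in pixels (assumed)
cx, cy = W / 2.0, H / 2.0          # principal point

# Pixel-coordinate grid used to rasterize soft disks.
ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")

def render_spheres(centers, radii):
    """Toy differentiable 'renderer': project sphere centers with a pinhole camera,
    splat each as a soft disk, and composite by per-pixel maximum."""
    u = fx * centers[:, 0] / centers[:, 2] + cx           # projected pixel x
    v = fy * centers[:, 1] / centers[:, 2] + cy           # projected pixel y
    r_px = fx * radii / centers[:, 2]                     # projected radius in pixels
    d2 = (xs[None] - u[:, None, None]) ** 2 + (ys[None] - v[:, None, None]) ** 2
    soft = torch.sigmoid((r_px[:, None, None] ** 2 - d2) / 20.0)  # soft occupancy
    return soft.max(dim=0).values                         # soft silhouette image

# Ground-truth primitives, used here only to synthesize a target image.
gt_centers = torch.tensor([[0.0, 0.0, 4.0], [0.4, 0.2, 4.0], [0.8, 0.4, 4.0]])
gt_radii = torch.tensor([0.25, 0.22, 0.20])
target = render_spheres(gt_centers, gt_radii).detach()

# Parameters to estimate, starting from a perturbed initial guess.
centers = (gt_centers + 0.3 * torch.randn_like(gt_centers)).requires_grad_(True)
radii = torch.full((3,), 0.15, requires_grad=True)

optimizer = torch.optim.Adam([centers, radii], lr=0.02)
for step in range(300):
    optimizer.zero_grad()
    loss = torch.mean((render_spheres(centers, radii) - target) ** 2)  # image loss
    loss.backward()                    # gradients flow through the renderer
    optimizer.step()
    if step % 100 == 0:
        print(f"step {step:3d}  image loss {loss.item():.6f}")
```

In the paper the optimized variables are the robot's kinematic and shape parameters and the renderer handles full primitive geometry; this toy version only shows how an image loss can drive gradient updates of primitive parameters.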

Lu et al. (2023) Image-based pose estimation and shape reconstruction for robot manipulators and soft, continuum robots via differentiable rendering, Proc. IEEE International Conference on Robotics and Automation (ICRA), pp. 560-567.

Pub Link: http://ieeexplore.ieee.org/abstract/document/10161066/
arXiv: