Proc. IEEE International Conference on Robotics and Automation (ICRA), 2022
Shan Lin, Albert J Miao, Jingpei Lu, Shunkai Yu, Zih-Yun Chiu, Florian Richter, Michael C Yip
Abstract: Accurate and robust tracking and reconstruction of the surgical scene is a critical enabling technology toward autonomous robotic surgery. Existing algorithms for 3D perception in surgery rely mainly on geometric information, while we propose to also leverage semantic information inferred from the endoscopic video using image segmentation algorithms. In this paper, we present a novel, comprehensive surgical perception framework, Semantic-SuPer, that integrates geometric and semantic information to facilitate data association, 3D reconstruction, and tracking of endoscopic scenes, benefiting downstream tasks like surgical navigation. The proposed framework is demonstrated on challenging endoscopic data with deforming tissue, showing its advantages over our baseline and several other state-of-the-art approaches. Our code and dataset are available at http://github.com/ucsdarclab/Python-SuPer.
Lin et al. (2022) Semantic-SuPer: A Semantic-aware Surgical Perception Framework for Endoscopic Tissue Identification, Reconstruction, and Tracking, http://arxiv.org/pdf/2210.16674, pp. 4739-4746.
Pub Link: http://ieeexplore.ieee.org/abstract/document/10160746/
arXiv: http://arxiv.org/pdf/2210.16674