SemHint-MD: Learning from Noisy Semantic Labels for Self-Supervised Monocular Depth Estimation

arXiv preprint arXiv:2303.18219, 2023

Shan Lin, Yuheng Zhi, Michael C. Yip

Abstract: Without ground truth supervision, self-supervised depth estimation can become trapped in a local minimum due to the gradient-locality issue of the photometric loss. In this paper, we present a framework that enhances depth estimation by leveraging semantic segmentation to guide the network out of the local minimum. Prior works have proposed sharing encoders between these two tasks or explicitly aligning them based on priors such as the consistency between edges in the depth and segmentation maps. However, these methods usually require ground truth or high-quality pseudo labels, which may not be readily available in real-world applications. In contrast, we investigate self-supervised depth estimation along with a segmentation branch that is supervised with noisy labels provided by models pre-trained on limited data. We extend parameter sharing from the encoder to the decoder and study how the number of shared decoder parameters influences model performance. We also propose using cross-task information to refine the current depth and segmentation predictions, generating pseudo depth and semantic labels for training. The advantages of the proposed method are demonstrated through extensive experiments on the KITTI benchmark and a downstream task of endoscopic tissue deformation tracking.
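For context, the self-supervision signal in this line of work is typically the photometric reprojection error between the target frame and a view synthesized from a neighboring frame using the predicted depth and camera pose; the gradient-locality issue arises because this loss only propagates gradients from a small neighborhood of each pixel. Below is a minimal PyTorch-style sketch of the standard weighted SSIM + L1 formulation commonly used in this literature (e.g., Monodepth2-style conventions). It is an illustrative assumption, not code from this paper; the weight alpha = 0.85 is the conventional choice.

```python
import torch
import torch.nn.functional as F

def ssim_dissimilarity(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    """Per-pixel SSIM dissimilarity (1 - SSIM) / 2 over 3x3 average-pooled windows."""
    mu_x = F.avg_pool2d(x, 3, 1, padding=1)
    mu_y = F.avg_pool2d(y, 3, 1, padding=1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, padding=1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, padding=1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, padding=1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_loss(target, reprojected, alpha=0.85):
    """Weighted SSIM + L1 between the target frame and the view synthesized
    from a source frame using the predicted depth and relative pose.
    Both inputs are (B, 3, H, W) image tensors in [0, 1]."""
    l1 = torch.abs(target - reprojected).mean(1, keepdim=True)
    structural = ssim_dissimilarity(target, reprojected).mean(1, keepdim=True)
    return alpha * structural + (1 - alpha) * l1
```

Because this loss compares only local image patches, a poor initial depth estimate can yield small photometric error while being geometrically wrong, which is the local-minimum behavior the semantic hints in this paper are meant to help escape.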

Lin, S., Zhi, Y., and Yip, M. C. (2023). SemHint-MD: Learning from Noisy Semantic Labels for Self-Supervised Monocular Depth Estimation. arXiv preprint arXiv:2303.18219, pp. 1-11.

arXiv: http://arxiv.org/pdf/2303.18219