Unsupervised domain adaptation for depth completion from sparse LiDAR scans depth map

Bibliographic Details
Main Author: Geng, Yue
Other Authors: Wang Dan Wei
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/156769
Description
Summary: Depth completion aims to predict the distance between the objects in an image and the camera that captured it from a sparse LiDAR depth input, expressing the result as a dense depth map. A denser depth input leads to better predictions, but the corresponding LiDAR equipment is more expensive, and a model trained on dense depth input performs poorly on sparse depth input. Meanwhile, dense ground-truth annotations for training depth completion models are difficult to obtain. In this dissertation, an unsupervised domain adaptation method is proposed to improve model performance on unannotated sparse depth input. The approach aligns the second-order statistics of the features generated by the convolutional neural network, which is shared by the dense and sparse depth inputs. Experiments on the KITTI depth completion benchmark show that the method improves depth completion performance on sparse depth input.
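
The summary does not detail how the second-order statistics are aligned between the dense and sparse domains. One common formulation of this kind of alignment is a CORAL-style loss that penalizes the difference between the feature covariance matrices of the two domains; the sketch below illustrates that idea under this assumption (the function name coral_loss, the PyTorch framing, and the treatment of each spatial location as a feature sample are illustrative choices, not taken from the thesis).

```python
import torch


def coral_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Penalize the distance between the second-order statistics (covariances)
    of encoder features from the dense (source) and sparse (target) depth inputs.

    Both inputs are assumed to be (N, C, H, W) feature maps from the shared encoder.
    """
    def flatten_spatial(x: torch.Tensor) -> torch.Tensor:
        # Treat every spatial location as one C-dimensional feature sample.
        n, c, h, w = x.shape
        return x.permute(0, 2, 3, 1).reshape(n * h * w, c)

    def covariance(x: torch.Tensor) -> torch.Tensor:
        # Unbiased covariance of the centered feature samples.
        x = x - x.mean(dim=0, keepdim=True)
        return x.t() @ x / (x.size(0) - 1)

    src = flatten_spatial(source_feats)
    tgt = flatten_spatial(target_feats)
    c = src.size(1)

    # Squared Frobenius distance between the two covariance matrices,
    # normalized by the feature dimension as in Deep CORAL.
    return ((covariance(src) - covariance(tgt)) ** 2).sum() / (4 * c * c)


# Example usage: combine a supervised depth loss on the annotated dense-input
# domain with the alignment term computed on features from unannotated sparse
# input. `encoder`, `decoder`, `dense_input`, `sparse_input`, `gt_depth`, and
# the weight 0.1 are hypothetical placeholders, not the thesis's configuration.
# dense_feats = encoder(dense_input)
# sparse_feats = encoder(sparse_input)
# loss = l1_loss(decoder(dense_feats), gt_depth) + 0.1 * coral_loss(dense_feats, sparse_feats)
```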