Delving into multi-illumination monocular depth estimation: A new dataset and method
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/8658
Institution: Singapore Management University
Summary: Monocular depth prediction has received significant attention in recent years. However, the impact of illumination variations, which can shift scenes to unseen domains, has often been overlooked. To address this, we introduce the first indoor-scene dataset of RGB-D images captured under multiple illumination conditions, enabling a comprehensive study of indoor depth prediction. We also propose a novel method, MI-Transformer, which leverages global illumination understanding through large receptive fields to capture depth-attention contexts. This allows our network to overcome local-window limitations and effectively mitigate the influence of changing illumination. To evaluate performance and robustness, we conduct extensive qualitative and quantitative analyses on both the proposed dataset and existing benchmarks, comparing our method with state-of-the-art approaches. The experimental results demonstrate the superiority of our method across various metrics, making it the first solution to achieve robust monocular depth estimation under diverse illumination conditions. The code, pre-trained models, and dataset are openly available at https://github.com/ViktorLiang/midepth.
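The summary contrasts local-window attention with the large receptive fields that MI-Transformer uses to capture global context under changing illumination. As an illustration only (this is not the paper's actual architecture, and all names, shapes, and window sizes below are hypothetical), the NumPy sketch compares plain global self-attention, where every spatial token attends to the whole feature map, with attention masked to non-overlapping 4x4 local windows:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, mask=None):
    # scaled dot-product attention over N tokens; mask=False entries are blocked
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    return softmax(scores, axis=-1) @ v

# toy 8x8 feature map flattened to 64 tokens of dimension 16
rng = np.random.default_rng(0)
H = W = 8
d = 16
tokens = rng.standard_normal((H * W, d))

# global attention: every token sees all 64 tokens (full receptive field)
global_out = attention(tokens, tokens, tokens)

# local-window attention: each token sees only its own 4x4 window
win = 4
coords = np.array([(i // W, i % W) for i in range(H * W)])
same_window = (
    (coords[:, None, 0] // win == coords[None, :, 0] // win)
    & (coords[:, None, 1] // win == coords[None, :, 1] // win)
)
local_out = attention(tokens, tokens, tokens, mask=same_window)
```

With the same inputs, the two variants generally produce different outputs, since the window mask cuts off long-range interactions; the idea in the summary is that a full receptive field lets depth cues propagate across the whole scene regardless of local lighting.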