Multi-modal semantic segmentation in poor lighting conditions

Bibliographic Details
Main Author: Li, Zifeng
Other Authors: Wang, Dan Wei
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/169137
Institution: Nanyang Technological University
Description
Summary: Semantic segmentation is a complicated dense prediction task that consumes significant computational resources, and the use of multi-modal RGB-T data makes this computational burden even more severe. This dissertation presents a novel, lightweight network for RGB-T semantic segmentation with a parameter-free feature fusion module that enables efficient fusion between modalities. The proposed method integrates both modalities by leveraging multi-scale features from the RGB and T domains at different feature extraction stages. Specifically, we employ a dual-encoder architecture to extract RGB-T features and fuse them with a parameter-free cross-modal attention mechanism, taking advantage of the complementary information provided by the two modalities to improve segmentation accuracy. In addition, we investigate the impact of different pretraining strategies on the performance of the model. We evaluate our approach on several benchmark datasets, including the MFNet and PST900 datasets. Experimental results show that our approach outperforms real-time state-of-the-art methods in the literature while achieving performance comparable to state-of-the-art methods that require up to 100 times the computational complexity. Our findings demonstrate the effectiveness of lightweight RGB-T models for semantic segmentation and highlight the potential of this approach for various real-world applications.
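
The summary names a dual-encoder architecture with a parameter-free cross-modal attention fusion but does not spell out the fusion operator. The following is a minimal PyTorch sketch of one plausible parameter-free fusion, in which each modality is re-weighted by a spatial attention map derived from the other; the function name parameter_free_cross_modal_fusion and the softmax-over-spatial-mean attention are assumptions for illustration, not the thesis's exact mechanism.

    import torch
    import torch.nn.functional as F

    def parameter_free_cross_modal_fusion(rgb_feat: torch.Tensor,
                                          thermal_feat: torch.Tensor) -> torch.Tensor:
        # Hypothetical sketch: both inputs are (B, C, H, W) feature maps taken
        # from the same stage of the RGB and thermal encoders.
        b, _, h, w = rgb_feat.shape
        # Parameter-free spatial attention per modality:
        # channel-mean, then softmax over the H*W spatial positions.
        rgb_att = F.softmax(rgb_feat.mean(dim=1).view(b, -1), dim=-1).view(b, 1, h, w)
        th_att = F.softmax(thermal_feat.mean(dim=1).view(b, -1), dim=-1).view(b, 1, h, w)
        # Cross-modal re-weighting: each modality is modulated by the other's
        # attention map; rescale by H*W so feature magnitudes stay comparable.
        fused = rgb_feat * th_att * (h * w) + thermal_feat * rgb_att * (h * w)
        return fused

    # Usage with assumed stage-3 feature shapes:
    rgb = torch.randn(2, 256, 60, 80)
    thermal = torch.randn(2, 256, 60, 80)
    out = parameter_free_cross_modal_fusion(rgb, thermal)  # (2, 256, 60, 80)

Because the fusion uses no learned weights, it adds no parameters to the dual-encoder backbone, which is consistent with the lightweight, real-time focus the summary describes.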