Robust visual recognition in poor visibility conditions: a prior knowledge-guided adversarial learning approach

Deep learning has achieved remarkable success in numerous computer vision tasks. However, recent research reveals that deep neural networks are vulnerable to natural perturbations from poor visibility conditions, limiting their practical applications. While several studies have focused on enhancing model robustness in poor visibility conditions through techniques such as image restoration, data augmentation, and unsupervised domain adaptation, these efforts are predominantly confined to specific scenarios and fail to address multiple poor visibility scenarios encountered in real-world settings. Furthermore, the valuable prior knowledge inherent in poor visibility images is seldom utilized to aid in resolving high-level computer vision tasks. In light of these challenges, we propose a novel deep learning paradigm designed to bolster the robustness of object recognition across diverse poor visibility scenes. By observing the prior information in diverse poor visibility scenes, we integrate a feature matching module based on this prior knowledge into our proposed learning paradigm, aiming to facilitate deep models in learning more robust generic features at shallow levels. Moreover, to further enhance the robustness of deep features, we employ an adversarial learning strategy based on mutual information. This strategy combines the feature matching module to extract task-specific representations from low visibility scenes in a more robust manner, thereby enhancing the robustness of object recognition. We evaluate our approach on self-constructed datasets containing diverse poor visibility scenes, including visual blur, fog, rain, snow, and low illuminance. Extensive experiments demonstrate that our proposed method yields significant improvements over existing solutions across various poor visibility conditions.

Bibliographic Details
Main Authors: Yang, Jiangang, Yang, Jianfei, Luo, Luqing, Wang, Yun, Wang, Shizheng, Liu, Jian
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Electrical and electronic engineering; Robust Visual Recognition; Poor Visibility Conditions
Online Access: https://hdl.handle.net/10356/171563
Institution: Nanyang Technological University
id sg-ntu-dr.10356-171563
record_format dspace
spelling sg-ntu-dr.10356-171563 2023-11-03T15:40:23Z
title Robust visual recognition in poor visibility conditions: a prior knowledge-guided adversarial learning approach
author Yang, Jiangang; Yang, Jianfei; Luo, Luqing; Wang, Yun; Wang, Shizheng; Liu, Jian
author2 School of Electrical and Electronic Engineering
topic Engineering::Electrical and electronic engineering; Robust Visual Recognition; Poor Visibility Conditions
description Deep learning has achieved remarkable success in numerous computer vision tasks. However, recent research reveals that deep neural networks are vulnerable to natural perturbations from poor visibility conditions, limiting their practical applications. While several studies have focused on enhancing model robustness in poor visibility conditions through techniques such as image restoration, data augmentation, and unsupervised domain adaptation, these efforts are predominantly confined to specific scenarios and fail to address multiple poor visibility scenarios encountered in real-world settings. Furthermore, the valuable prior knowledge inherent in poor visibility images is seldom utilized to aid in resolving high-level computer vision tasks. In light of these challenges, we propose a novel deep learning paradigm designed to bolster the robustness of object recognition across diverse poor visibility scenes. By observing the prior information in diverse poor visibility scenes, we integrate a feature matching module based on this prior knowledge into our proposed learning paradigm, aiming to facilitate deep models in learning more robust generic features at shallow levels. Moreover, to further enhance the robustness of deep features, we employ an adversarial learning strategy based on mutual information. This strategy combines the feature matching module to extract task-specific representations from low visibility scenes in a more robust manner, thereby enhancing the robustness of object recognition. We evaluate our approach on self-constructed datasets containing diverse poor visibility scenes, including visual blur, fog, rain, snow, and low illuminance. Extensive experiments demonstrate that our proposed method yields significant improvements over existing solutions across various poor visibility conditions.
note Published version
funding This research was funded by the SunwayAI computing platform (SXHZ202103) and the National Key Research and Development Program (2021YFB2501403).
date_accessioned 2023-10-31T01:35:55Z
date_available 2023-10-31T01:35:55Z
date_issued 2023
type Journal Article
citation Yang, J., Yang, J., Luo, L., Wang, Y., Wang, S. & Liu, J. (2023). Robust visual recognition in poor visibility conditions: a prior knowledge-guided adversarial learning approach. Electronics, 12(17), 3711. https://dx.doi.org/10.3390/electronics12173711
issn 2079-9292
uri https://hdl.handle.net/10356/171563
doi 10.3390/electronics12173711
scopus 2-s2.0-85170540918
volume 12
issue 17
pages 3711
language en
journal Electronics
rights © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
format application/pdf
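The abstract above describes an adversarial learning strategy based on mutual information that discourages features from encoding the visibility degradation itself. The following is a minimal numpy sketch of that general idea only, not the authors' implementation: a logistic domain discriminator stands in for a mutual-information critic, and a gradient-reversal-style step nudges features toward domain confusion. All data, dimensions, and step sizes here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "shallow features" for clean vs. poor-visibility images
# (in the paper these would come from a CNN backbone).
clean = rng.normal(0.5, 1.0, size=(64, 8))
degraded = rng.normal(-0.5, 1.0, size=(64, 8))
X = np.vstack([clean, degraded])
d = np.concatenate([np.zeros(64), np.ones(64)])  # domain labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y, eps=1e-9):
    # Binary cross-entropy of the domain discriminator; a well-trained
    # discriminator's low loss indicates the features carry much
    # domain (degradation) information.
    return -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

# Step 1: train a logistic domain discriminator on the features.
w, b = np.zeros(8), 0.0
losses = []
for _ in range(200):
    p = sigmoid(X @ w + b)
    losses.append(bce(p, d))
    w -= 0.5 * (X.T @ (p - d)) / len(d)
    b -= 0.5 * np.mean(p - d)

# Step 2 (adversarial): move the features ALONG the discriminator's
# loss gradient (a gradient-reversal step), increasing its loss and
# thus reducing domain-identifiable information in the features.
p = sigmoid(X @ w + b)
grad_X = np.outer(p - d, w) / len(d)
X_adv = X + 0.1 * grad_X  # discriminator loss on X_adv exceeds that on X
```

In a real training pipeline the reversed gradient would flow into the feature extractor's weights rather than being applied to the features directly; the direct update is used here only to keep the sketch self-contained.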
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Electrical and electronic engineering
Robust Visual Recognition
Poor Visibility Conditions