Towards robust inference against distribution shifts in computer vision

After a decade of prosperity, the development of machine learning based on deep neural networks (DNNs) appears to have reached a turning point. Across a variety of tasks and fields, it has become clear that simply feeding models massive volumes of data and increasing their capacity is no longer a panacea for every problem. Ubiquitous biases in model architectures, long-tailed data distributions, and optimization strategies prevent DNNs from learning the underlying causal mechanisms, leading to catastrophic performance drops under distribution shifts such as rare spatial layouts, misalignment between source and target domains, and adversarial perturbations. To tackle these challenges and improve the robustness and generalization of DNNs, several lines of research, including dynamic networks with attention architectures, long-tailed recognition, and adversarial robustness, have attracted significant attention in recent years. In this thesis, we systematically study threats to model robustness under distribution shifts from three aspects: 1) network architectures, 2) long-tailed distributions, and 3) adversarial perturbations; the latter two can be interpreted as explicit and implicit distribution shifts over patterns, respectively. To address these threats, we propose several algorithms that substantially improve the robustness of deep neural networks across a wide range of computer vision tasks, including image classification, object detection, instance segmentation, scene graph generation, and visual question answering.

Bibliographic Details
Main Author: Tang, Kaihua
Other Authors: Zhang, Hanwang
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2021
Subjects: Engineering::Computer science and engineering
Online Access: https://hdl.handle.net/10356/154119
Institution: Nanyang Technological University
id sg-ntu-dr.10356-154119
record_format dspace
spelling sg-ntu-dr.10356-154119 2022-01-05T09:23:40Z
school School of Computer Science and Engineering
supervisor Zhang, Hanwang (hanwangzhang@ntu.edu.sg)
topic Engineering::Computer science and engineering
degree Doctor of Philosophy
date_accessioned 2021-12-17T04:03:30Z
date_issued 2021
citation Tang, K. (2021). Towards robust inference against distribution shifts in computer vision. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/154119
doi 10.32657/10356/154119
license This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
format application/pdf
publisher Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering
author2 Zhang, Hanwang
format Thesis-Doctor of Philosophy
author Tang, Kaihua
title Towards robust inference against distribution shifts in computer vision
publisher Nanyang Technological University
publishDate 2021
url https://hdl.handle.net/10356/154119