Towards robust inference against distribution shifts in computer vision
Format: Thesis - Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/154119
Institution: Nanyang Technological University
Summary: After a decade of prosperity, the development of machine learning based on deep neural networks (DNNs) seems to have reached a turning point. A variety of tasks and fields have shown that recklessly feeding massive volumes of data into ever-larger models is no longer a panacea for all problems. Ubiquitous biases in model structures, long-tailed distributions, and optimization strategies prevent DNNs from learning the underlying causal mechanisms, resulting in catastrophic performance drops under distribution shifts such as rare spatial layouts, misalignment between source and target domains, or adversarial perturbations.

To tackle these challenges and improve the robustness and generalization ability of DNNs, several lines of research, including dynamic networks with attention architectures, long-tailed recognition, and adversarial robustness, have attracted significant attention in recent years. In this thesis, we systematically study threats to model robustness under distribution shifts from three aspects: 1) network architectures, 2) long-tailed distributions, and 3) adversarial perturbations. The latter two can also be interpreted as explicit and implicit distribution shifts over patterns, respectively. To address these threats, we propose several algorithms that successfully increase the robustness of deep neural networks across a wide range of computer vision tasks, including image classification, object detection, instance segmentation, scene graph generation, and visual question answering.