Causal view of generalization
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2023
Online access: https://hdl.handle.net/10356/172269
Summary: Causal reasoning, an essential cognitive ability of human intelligence, allows us to generalize from past learning to solve present problems. Unfortunately, although machine learning has prospered over the past decade by training powerful deep neural networks (DNNs) on massive data, it still lacks the generalization ability of humans. Inspired by the important role of causality in human generalization, we take a causal view of machine generalization. We reveal that spurious correlations in the training data act as a confounder that prevents generalization and can only be eliminated by causal intervention. In this thesis, we study three categories of causal intervention and contribute practical implementations that improve generalization: 1) backdoor adjustment, 2) invariant learning, and 3) learning disentangled representations. The proposed implementations are extensively evaluated on standard benchmarks and demonstrate state-of-the-art generalization performance in Few-Shot Learning, Unsupervised Domain Adaptation, Semi-Supervised Learning, Zero-Shot Learning, Open-Set Recognition, and Unsupervised Representation Learning.
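For context on the first category of intervention named in the summary: backdoor adjustment is standardly written (following Pearl's causal calculus) by stratifying over a confounder Z that satisfies the backdoor criterion; the thesis's specific estimators are not reproduced here.

```latex
P\big(Y \mid do(X = x)\big) \;=\; \sum_{z} P\big(Y \mid X = x,\, Z = z\big)\, P(Z = z)
```

Intuitively, conditioning on each stratum z of the confounder and re-weighting by its marginal probability removes the spurious correlation that Z induces between X and Y.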