Robust Deep Learning in the Open World with Lifelong Learning and Representation Learning
Lee, Kibok
2020
Abstract
Deep neural networks have shown superior performance on many learning problems by learning hierarchical latent representations from large amounts of labeled data. However, the success of deep learning methods rests on the closed-world assumption: no instances of new classes appear at test time. In contrast, the real world is open and dynamic, so the closed-world assumption may not hold in many real applications. In other words, deep learning-based agents are not guaranteed to work in the open world, where instances of unknown and unseen classes are pervasive. In this dissertation, we explore lifelong learning and representation learning to generalize deep learning methods to the open world. Lifelong learning involves identifying novel classes and incrementally learning them without training from scratch, while representation learning involves robustness to data distribution shifts. Specifically, we propose 1) hierarchical novelty detection for detecting and identifying novel classes, 2) continual learning with unlabeled data to overcome catastrophic forgetting when learning novel classes, 3) network randomization for learning representations robust to visual domain shifts, and 4) domain-agnostic contrastive representation learning, which is robust to data distribution shifts.

The first part of this dissertation studies a cycle of lifelong learning. We divide the cycle into two steps and show how to achieve each: first, we propose a new novelty detection and classification framework termed hierarchical novelty detection for detecting and identifying novel classes. Then, we show that unlabeled data, which are easy to obtain in the open world, help avoid forgetting previously learned classes while learning novel ones. To leverage such unlabeled data effectively, we propose a new knowledge distillation method and a confidence-based sampling method.
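As a rough illustration of the distillation idea in the first part — matching a student's softened predictions to a frozen teacher's on unlabeled inputs, so no ground-truth labels are required — consider this minimal sketch. The function names and temperature value are illustrative, not the dissertation's exact formulation:

```python
import math

def softmax(logits, t=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(z / t) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, t=2.0):
    # Cross-entropy between the softened teacher distribution p and the
    # student distribution q; no ground-truth label is needed, so this
    # can be evaluated on unlabeled data.
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [2.0, 0.5, -1.0]
# A student that matches the teacher incurs a lower loss than one that
# disagrees with it (here, the teacher's logits reversed).
matched = distillation_loss(teacher, teacher)
mismatched = distillation_loss([-1.0, 0.5, 2.0], teacher)
```

By Gibbs' inequality the cross-entropy is minimized exactly when the student distribution equals the teacher's, which is what makes this a useful regularizer against forgetting.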
The second part of this dissertation studies robust representation learning: first, we present a network randomization method for learning representations invariant across visual changes, which is particularly effective in deep reinforcement learning. Then, we propose a domain-agnostic robust representation learning method that introduces vicinal risk minimization into contrastive representation learning, which consistently improves representation quality and transferability across data distribution shifts.

Subjects
Machine Learning; Deep Learning; Novelty Detection; Continual Learning; Domain Generalization; Representation Learning
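The vicinal risk minimization idea in the second part of the abstract is, in spirit, mixup-style interpolation: training on convex combinations of nearby samples rather than on raw samples alone. A minimal, generic sketch follows; the function name and the Beta(alpha, alpha) mixing prior are the standard mixup convention, not necessarily the dissertation's exact formulation:

```python
import random

def mixup(x1, x2, alpha=1.0):
    # Vicinal sample: a random convex combination of two inputs, with the
    # mixing coefficient drawn from Beta(alpha, alpha).
    lam = random.betavariate(alpha, alpha)
    mixed = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    return mixed, lam

random.seed(0)
x1 = [0.0, 0.0, 0.0]
x2 = [1.0, 1.0, 1.0]
mixed, lam = mixup(x1, x2)
# In a contrastive objective, the mixed sample's similarity targets would be
# interpolated with the same coefficient: lam toward x1, (1 - lam) toward x2.
```

Each coordinate of the mixed sample lies between the corresponding coordinates of the two originals, which is what populates the "vicinity" of the training distribution.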
Types
Thesis
Showing items related by title, author, creator and subject.
- Michael, Donald N. (1973)
- Myers, Christopher G. (2015)
- Ellis, Nick C.; O'Donnell, M. (De Gruyter Mouton, 2012)