Robust Algorithms for Low-Rank and Sparse Matrix Models
dc.contributor.author | Moore, Brian | |
dc.date.accessioned | 2018-06-07T17:44:46Z | |
dc.date.available | NO_RESTRICTION | |
dc.date.available | 2018-06-07T17:44:46Z | |
dc.date.issued | 2018 | |
dc.date.submitted | | |
dc.identifier.uri | https://hdl.handle.net/2027.42/143925 | |
dc.description.abstract | Data in statistical signal processing problems is often inherently matrix-valued, and a natural first step in working with such data is to impose a model with structure that captures the distinctive features of the underlying data. Under the right model, one can design algorithms that can reliably tease weak signals out of highly corrupted data. In this thesis, we study two important classes of matrix structure: low-rankness and sparsity. In particular, we focus on robust principal component analysis (PCA) models that decompose data into the sum of low-rank and sparse (in an appropriate sense) components. Robust PCA models are popular both because they fit practical data well and because efficient algorithms exist for solving them. This thesis focuses on developing new robust PCA algorithms that advance the state of the art in several key respects. First, we develop a theoretical understanding of the effect of outliers on PCA and the extent to which one can reliably reject outliers from corrupted data using thresholding schemes. We apply these insights and other recent results from low-rank matrix estimation to design robust PCA algorithms with improved low-rank models that are well-suited for processing highly corrupted data. On the sparse modeling front, we use sparse signal models like spatial continuity and dictionary learning to develop new methods with important adaptive representational capabilities. We also propose efficient algorithms for implementing our methods, including an extension of our dictionary learning algorithms to the online or sequential data setting. The underlying theme of our work is to combine ideas from low-rank and sparse modeling in novel ways to design robust algorithms that produce accurate reconstructions from highly undersampled or corrupted data. We consider a variety of application domains for our methods, including foreground-background separation, photometric stereo, and inverse problems such as video inpainting and dynamic magnetic resonance imaging. | |
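The low-rank-plus-sparse decomposition described in the abstract can be sketched with a generic principal component pursuit solver. This is an illustrative implementation of the standard augmented Lagrangian (ADMM-style) approach to robust PCA, not the algorithms developed in the thesis itself; the function names, default parameters (`lam = 1/sqrt(max(m, n))`, the choice of `mu`), and iteration count are conventional choices from the robust PCA literature, assumed here for demonstration.

```python
import numpy as np

def shrink(X, tau):
    # Soft-thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def robust_pca(M, lam=None, mu=None, n_iter=500):
    """Decompose M ~ L + S with L low-rank and S sparse, via an
    augmented Lagrangian method for principal component pursuit.
    Default lam and mu follow common choices in the literature."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 0.25 * m * n / np.abs(M).sum()
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # dual variable for the constraint M = L + S
    for _ in range(n_iter):
        # Low-rank update: shrink singular values.
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        # Sparse update: shrink entries.
        S = shrink(M - L + Y / mu, lam / mu)
        # Dual ascent on the constraint residual.
        Y = Y + mu * (M - L - S)
    return L, S
```

On synthetic data (a random low-rank matrix plus a few large sparse corruptions), this recovers both components to small relative error, which is the behavior robust PCA models are prized for in applications like foreground-background separation.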
dc.language.iso | en_US | |
dc.subject | machine learning | |
dc.subject | signal processing | |
dc.subject | optimization | |
dc.subject | statistics | |
dc.subject | robust algorithms | |
dc.subject | dictionary learning | |
dc.title | Robust Algorithms for Low-Rank and Sparse Matrix Models | |
dc.type | Thesis | en_US |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Electrical Engineering: Systems | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.contributor.committeemember | Nadakuditi, Raj Rao | |
dc.contributor.committeemember | Zhou, Shuheng | |
dc.contributor.committeemember | Fessler, Jeffrey A | |
dc.contributor.committeemember | Hero III, Alfred O | |
dc.subject.hlbsecondlevel | Computer Science | |
dc.subject.hlbsecondlevel | Electrical Engineering | |
dc.subject.hlbtoplevel | Engineering | |
dc.description.bitstreamurl | https://deepblue.lib.umich.edu/bitstream/2027.42/143925/1/brimoor_1.pdf | |
dc.identifier.orcid | 0000-0001-7914-1794 | |
dc.identifier.name-orcid | Moore, Brian; 0000-0001-7914-1794 | en_US |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |