Efficient Sparse Representation Learning with Applications in Personalized and Explainable AI
dc.contributor.author | Liang, Geyu | |
dc.date.accessioned | 2025-05-12T17:36:15Z | |
dc.date.available | 2025-05-12T17:36:15Z | |
dc.date.issued | 2025 | |
dc.date.submitted | 2025 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/197140 | |
dc.description.abstract | Sparse representation learning plays a crucial role in modern machine learning and signal processing, offering a powerful framework for extracting structured, interpretable representations from high-dimensional data. This thesis explores key theoretical and practical advances in sparse representation learning, focusing on efficient dictionary learning algorithms, their personalized counterparts for heterogeneous data, and the role of sparse codes in enabling model interpretability. A central challenge in sparse representation learning is the trade-off among efficiency, scalability, and theoretical guarantees. This thesis develops provable and computationally efficient methods for learning structured dictionaries, addressing fundamental issues in scalability and performance. By leveraging novel optimization techniques, it introduces algorithms that not only recover underlying sparse structures with theoretical guarantees but also scale effectively to large datasets and streaming settings. Beyond efficiency, this thesis extends sparse representation learning to personalized settings, where data exhibit both shared and unique structures. A new framework is introduced to disentangle these components, enabling more adaptive and robust representations across diverse datasets. This approach has broad implications, from improving generalization in imbalanced learning scenarios to enhancing multi-source data analysis. Finally, this work explores the intersection of sparse representations and explainable AI, addressing the long-standing challenge of balancing interpretability and predictive performance. By refining concept representations in structured ways, this thesis demonstrates how sparse codes can be leveraged to enhance both accuracy and transparency in machine learning models. The proposed methods achieve state-of-the-art performance on interpretable learning tasks while maintaining computational efficiency.
Together, these contributions advance the theoretical foundations and practical applications of sparse representation learning. By bridging efficiency, personalization, and interpretability, this thesis provides new insights and methodologies that extend the impact of sparse learning across a wide range of domains, from signal processing to explainable machine learning. | |
dc.language.iso | en_US | |
dc.subject | Machine Learning, Representation Learning, Dictionary Learning | |
dc.title | Efficient Sparse Representation Learning with Applications in Personalized and Explainable AI | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | |
dc.description.thesisdegreediscipline | Industrial & Operations Engineering | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.contributor.committeemember | Fattahi, Salar | |
dc.contributor.committeemember | Qu, Qing | |
dc.contributor.committeemember | Al Kontar, Raed | |
dc.contributor.committeemember | Jiang, Ruiwei | |
dc.contributor.committeemember | Nagarajan, Viswanath | |
dc.subject.hlbsecondlevel | Industrial and Operations Engineering | |
dc.subject.hlbtoplevel | Engineering | |
dc.contributor.affiliationumcampus | Ann Arbor | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/197140/1/lianggy_1.pdf | |
dc.identifier.doi | https://dx.doi.org/10.7302/25566 | |
dc.identifier.orcid | 0000-0002-5763-5985 | |
dc.identifier.name-orcid | Liang, Geyu; 0000-0002-5763-5985 | en_US |
dc.working.doi | 10.7302/25566 | en |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |