
Efficient Sparse Representation Learning with Applications in Personalized and Explainable AI

dc.contributor.author: Liang, Geyu
dc.date.accessioned: 2025-05-12T17:36:15Z
dc.date.available: 2025-05-12T17:36:15Z
dc.date.issued: 2025
dc.date.submitted: 2025
dc.identifier.uri: https://hdl.handle.net/2027.42/197140
dc.description.abstract: Sparse representation learning plays a crucial role in modern machine learning and signal processing, offering a powerful framework for extracting structured and interpretable representations from high-dimensional data. This thesis explores key theoretical and practical advances in sparse representation learning, focusing on efficient dictionary learning algorithms, their personalized counterparts for heterogeneous data, and the role of sparse codes in supporting model interpretability.

A central challenge in sparse representation learning is the trade-off among efficiency, scalability, and theoretical guarantees. This thesis develops provable and computationally efficient methods for learning structured dictionaries, addressing fundamental issues in scalability and performance. By leveraging novel optimization techniques, it introduces algorithms that not only recover the underlying sparse structure with theoretical guarantees but also scale effectively to large datasets and streaming settings.

Beyond efficiency, this thesis extends sparse representation learning to personalized settings, where data exhibits both shared and unique structures. A new framework is introduced to disentangle these components, enabling more adaptive and robust representations across diverse datasets. This approach has broad implications, from improving generalization in imbalanced learning scenarios to enhancing multi-source data analysis.

Finally, this work explores the intersection of sparse representations and explainable AI, addressing the long-standing challenge of balancing interpretability and predictive performance. By refining concept representations in structured ways, this thesis demonstrates how sparse codes can be leveraged to enhance both accuracy and transparency in machine learning models. The proposed methods achieve state-of-the-art performance on interpretable learning tasks while maintaining computational efficiency.

Together, these contributions advance the theoretical foundations and practical applications of sparse representation learning. By bridging efficiency, personalization, and interpretability, this thesis provides new insights and methodologies that extend the impact of sparse learning across a wide range of domains, from signal processing to explainable machine learning. (An illustrative sketch of the generic sparse dictionary-learning model referenced here appears after this record.)
dc.language.iso: en_US
dc.subject: Machine Learning, Representation Learning, Dictionary Learning
dc.title: Efficient Sparse Representation Learning with Applications in Personalized and Explainable AI
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Industrial & Operations Engineering
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Fattahi, Salar
dc.contributor.committeemember: Qu, Qing
dc.contributor.committeemember: Al Kontar, Raed
dc.contributor.committeemember: Jiang, Ruiwei
dc.contributor.committeemember: Nagarajan, Viswanath
dc.subject.hlbsecondlevel: Industrial and Operations Engineering
dc.subject.hlbtoplevel: Engineering
dc.contributor.affiliationumcampus: Ann Arbor
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/197140/1/lianggy_1.pdf
dc.identifier.doi: https://dx.doi.org/10.7302/25566
dc.identifier.orcid: 0000-0002-5763-5985
dc.identifier.name-orcid: Liang, Geyu; 0000-0002-5763-5985
dc.working.doi: 10.7302/25566
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)
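
For readers less familiar with the terminology in the abstract, the sketch below illustrates the generic sparse dictionary-learning model it refers to: each data point y is approximated as D x with a sparse code x, and the dictionary D and the codes X are estimated by alternating minimization. This is only an illustrative baseline (ISTA sparse coding plus a least-squares dictionary update), not the algorithms developed in the thesis; all variable names and parameter values here are assumptions made for the example.

# Minimal illustrative sketch of the generic dictionary-learning model (Y ~= D X,
# X sparse). Not the thesis's algorithm: a plain alternating scheme with ISTA
# sparse coding and a least-squares dictionary update, shown only to fix notation.
import numpy as np

rng = np.random.default_rng(0)
n, m, k, N = 20, 40, 3, 200          # signal dim, dictionary atoms, sparsity, samples

# Synthetic data: each column of Y is a k-sparse combination of dictionary atoms.
D_true = rng.standard_normal((n, m))
D_true /= np.linalg.norm(D_true, axis=0)
X_true = np.zeros((m, N))
for j in range(N):
    idx = rng.choice(m, k, replace=False)
    X_true[idx, j] = rng.standard_normal(k)
Y = D_true @ X_true

def ista(Y, D, lam=0.1, iters=100):
    """L1-regularized sparse coding of Y with respect to a fixed dictionary D."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(iters):
        G = D.T @ (D @ X - Y)                # gradient of 0.5 * ||Y - D X||_F^2
        X = X - G / L
        X = np.sign(X) * np.maximum(np.abs(X) - lam / L, 0.0)   # soft threshold
    return X

D = rng.standard_normal((n, m))              # random initial dictionary
D /= np.linalg.norm(D, axis=0)
for _ in range(20):                          # alternate sparse coding / dictionary update
    X = ista(Y, D)
    D = Y @ np.linalg.pinv(X)                # least-squares fit of D given the codes
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)   # renormalize atoms

print("relative reconstruction error:", np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))

The personalized setting discussed in the abstract would additionally split the representation into shared and source-specific parts across heterogeneous datasets; that extension is not shown in this sketch.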

