Distributed Learning, Prediction and Detection in Probabilistic Graphs.

dc.contributor.author: Meng, Zhaoshi (en_US)
dc.date.accessioned: 2015-01-30T20:14:08Z
dc.date.available: 2015-01-30T20:14:08Z
dc.date.issued: 2014 (en_US)
dc.date.submitted: 2014 (en_US)
dc.identifier.uri: https://hdl.handle.net/2027.42/110499
dc.description.abstract: Exploiting structure in the data distribution is critical to high-dimensional statistical estimation. Probabilistic graphical models provide an efficient framework for representing complex joint distributions of random variables through their conditional dependency graph, and they can be adapted to many high-dimensional machine learning applications. This dissertation develops probabilistic graphical modeling techniques for three statistical estimation problems arising in real-world applications: distributed and parallel learning in networks, missing-value prediction in recommender systems, and emerging topic detection in text corpora. The common theme behind all proposed methods is the combination of a parsimonious representation of uncertainty in the data, optimization surrogates that lead to computationally efficient algorithms, and an analysis of the fundamental limits of estimation performance in high dimensions. More specifically, the dissertation makes the following theoretical contributions: (1) We propose a distributed and parallel framework for learning the parameters of Gaussian graphical models that is free of iterative global message passing. The proposed distributed estimator is shown to be asymptotically consistent, to improve with increasing local neighborhood size, and to attain a high-dimensional error rate comparable to that of the centralized maximum likelihood estimator. (2) We present a family of latent variable Gaussian graphical models whose marginal precision matrix has a “low-rank plus sparse” structure. Under mild conditions, we derive high-dimensional parameter error bounds for learning this family of models via regularized maximum likelihood estimation. (3) We consider a hypothesis testing framework for detecting emerging topics in topic models and propose a novel surrogate test statistic for the standard likelihood ratio. By leveraging the theory of empirical processes, we prove asymptotic consistency of the proposed test and provide guarantees on its detection performance. (en_US)
dc.language.iso: en_US (en_US)
dc.subject: probabilistic graphical models (en_US)
dc.subject: machine learning (en_US)
dc.subject: high-dimensional statistics (en_US)
dc.subject: statistical estimation (en_US)
dc.subject: distributed learning and estimation (en_US)
dc.title: Distributed Learning, Prediction and Detection in Probabilistic Graphs. (en_US)
dc.type: Thesis (en_US)
dc.description.thesisdegreename: PhD (en_US)
dc.description.thesisdegreediscipline: Electrical Engineering: Systems (en_US)
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies (en_US)
dc.contributor.committeemember: Hero III, Alfred O. (en_US)
dc.contributor.committeemember: Nguyen, Long (en_US)
dc.contributor.committeemember: Mei, Qiaozhu (en_US)
dc.contributor.committeemember: Balzano, Laura Kathryn (en_US)
dc.subject.hlbsecondlevel: Computer Science (en_US)
dc.subject.hlbtoplevel: Engineering (en_US)
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/110499/1/mengzs_1.pdf
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)
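
For context on contribution (2) in the abstract, the “low-rank plus sparse” structure of the marginal precision matrix reflects the standard Schur-complement identity for multivariate Gaussians. The sketch below is illustrative only and is not drawn from the dissertation; the partition notation (Theta for the joint precision matrix, O for observed variables, H for latent variables) is an assumption.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Joint precision matrix of observed variables X_O and latent variables X_H,
% partitioned into observed and latent blocks (illustrative notation).
\[
  \Theta =
  \begin{pmatrix}
    \Theta_{OO} & \Theta_{OH} \\
    \Theta_{HO} & \Theta_{HH}
  \end{pmatrix}
\]
% Marginalizing out X_H gives the precision of X_O as the Schur complement:
% a sparse term (the conditional graph among the observed variables) minus a
% positive semidefinite correction of rank at most |H|.
\[
  \widetilde{\Theta}_{O}
  = \Theta_{OO} - \Theta_{OH}\,\Theta_{HH}^{-1}\,\Theta_{HO}
  = S - L, \qquad S \text{ sparse},\; \operatorname{rank}(L) \le |H|.
\]
\end{document}

With L constrained to be positive semidefinite, this S - L decomposition is the structure the abstract refers to as “low-rank plus sparse.”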

