
Graphical Multiagent Models.

dc.contributor.author: Duong, Quang A. (en_US)
dc.date.accessioned: 2013-02-04T18:04:32Z
dc.date.available: NO_RESTRICTION (en_US)
dc.date.available: 2013-02-04T18:04:32Z
dc.date.issued: 2012 (en_US)
dc.date.submitted: 2012 (en_US)
dc.identifier.uri: https://hdl.handle.net/2027.42/95999
dc.description.abstract (en_US): I introduce Graphical Multiagent Models (GMMs): probabilistic graphical models that capture agent interactions in factored representations for efficient inference about agent behaviors. The graphical-model formalism exploits the locality of agent interactions to support compact expression, and it provides a repertoire of inference algorithms for efficient reasoning about joint behavior. I demonstrate that the GMM representation is flexible enough to accommodate diverse sources of knowledge about agent behavior, and to combine these for improved predictions. History-dependent GMMs (hGMMs) extend the static form of GMMs to support representation of and reasoning about multiagent behavior over time, offering the particular advantage of capturing joint dependencies induced by abstraction of history information. Computational experiments demonstrate the benefits of explicitly modeling joint behavior and of expressing action dependencies when only limited history information is available, compared to alternative models.

Many multiagent modeling tasks that employ probabilistic models entail constructing dependence-graph structures from observational data. In the context of graphical games, where agents' payoffs depend only on the actions of agents in their local neighborhoods, I formally describe the problem of learning payoff dependence structures from limited payoff observations, and I investigate several learning algorithms based on minimizing empirical loss. I show that similar algorithms can also be applied to learning both dependence structures and parameters for hGMMs from time-series data. I employ data from human-subject experiments to construct hGMMs and to evaluate the learned models' prediction performance in a consensus dynamics scenario. Analysis of the learned graphical structures reveals patterns of action dependence not directly reflected in the observed interactions.

The problem of modeling information diffusion on partially observed networks provides another demonstration of hGMM flexibility, as unobserved edges may induce correlations among node states. Because graphical multiagent models can compensate for correlations induced by missing edges, I find that learning graphical multiagent models for a given network structure from diffusion traces can outperform directly recovering the missing edges, across various network settings.
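To make the factored representation concrete: the abstract's "factored representations" can be read as a joint distribution over an action profile a = (a_1, ..., a_n) proportional to a product of local potentials, one per agent neighborhood, Pr(a) ∝ ∏_i π_i(a_{N_i}). The sketch below is a minimal illustration under that assumption only; the three-agent chain, the agreement-rewarding potential, and all names are hypothetical, and brute-force enumeration stands in for the efficient inference algorithms the abstract refers to.

# Minimal sketch of a GMM-style factored joint distribution over agent actions.
# Assumes Pr(a) is proportional to the product of per-agent neighborhood
# potentials pi_i(a_{N_i}); the 3-agent chain and the potential are hypothetical.
from itertools import product

ACTIONS = (0, 1)                                   # binary action space
NEIGHBORS = {0: (0, 1), 1: (0, 1, 2), 2: (1, 2)}   # chain network: 0 - 1 - 2

def potential(agent, local_actions):
    """Hypothetical potential: rewards agreement within a neighborhood."""
    agreements = sum(a == b for a, b in zip(local_actions, local_actions[1:]))
    return 2.0 ** agreements                       # more agreement, more weight

def joint_distribution():
    """Enumerate all action profiles and normalize the product of potentials."""
    weights = {}
    for profile in product(ACTIONS, repeat=len(NEIGHBORS)):
        w = 1.0
        for agent, nbrs in NEIGHBORS.items():
            local = tuple(profile[j] for j in nbrs)
            w *= potential(agent, local)
        weights[profile] = w
    z = sum(weights.values())                      # normalization constant
    return {p: w / z for p, w in weights.items()}

if __name__ == "__main__":
    for profile, prob in sorted(joint_distribution().items(),
                                key=lambda kv: -kv[1]):
        print(profile, round(prob, 4))

Enumerating all |A|^n profiles is exponential in the number of agents; the point of the graphical structure is that locality of interaction lets standard graphical-model inference avoid this enumeration, which is what makes the compact GMM representation useful in practice.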
dc.language.iso: en_US (en_US)
dc.subject: Multiagent Reasoning (en_US)
dc.subject: Graphical Models (en_US)
dc.subject: Dynamic Behavior (en_US)
dc.subject: Structure Learning (en_US)
dc.title: Graphical Multiagent Models. (en_US)
dc.type: Thesis (en_US)
dc.description.thesisdegreename: PhD (en_US)
dc.description.thesisdegreediscipline: Computer Science and Engineering (en_US)
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies (en_US)
dc.contributor.committeemember: Wellman, Michael P. (en_US)
dc.contributor.committeemember: Nguyen, Long (en_US)
dc.contributor.committeemember: Baveja, Satinder Singh (en_US)
dc.contributor.committeemember: Durfee, Edmund H. (en_US)
dc.subject.hlbsecondlevel: Computer Science (en_US)
dc.subject.hlbtoplevel: Engineering (en_US)
dc.subject.hlbtoplevel: Science (en_US)
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/95999/1/qduong_1.pdf
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)

