Learning in dynamic noncooperative multiagent systems.

dc.contributor.author: Hu, Junling
dc.contributor.advisor: Wellman, Michael P.
dc.date.accessioned: 2016-08-30T17:54:49Z
dc.date.available: 2016-08-30T17:54:49Z
dc.date.issued: 1999
dc.identifier.uri: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:9938451
dc.identifier.uri: https://hdl.handle.net/2027.42/131906
dc.description.abstract: Dynamic noncooperative multiagent systems are systems in which self-interested agents interact and their interactions change over time. We investigate the problem of learning and decision making in such systems, modeling them in the framework of general-sum stochastic games with incomplete information. We design a multiagent Q-learning method and prove its convergence in the framework of stochastic games. The standard Q-learning method, a reinforcement learning method, was originally designed for single-agent systems; its convergence was proved for Markov decision processes, which are single-agent problems. Our extension broadens the framework of reinforcement learning and helps to establish the theoretical foundation for applying it to multiagent systems. We prove that our learning algorithm converges to a Nash equilibrium under certain restrictions on the game structure during learning. In our simulations of a grid-world game, these restrictions are relaxed and our learning method still converges. In addition to model-free reinforcement learning, we also study model-based learning, where agents form models of others and update those models through observations of the environment. We find that agents' mutual learning can lead to a conjectural equilibrium, in which the agents' models of the others are fulfilled and each agent behaves optimally given its expectations. Such an equilibrium state may be suboptimal: the agents may be worse off than if they had not attempted to learn models of others at all. This poses a pitfall for multiagent learning. We also analyze the problem of recursive modeling in a dynamic game framework, which differs from previous work that studied recursive modeling in static or repeated games. We implement various levels of recursive modeling in a simulated double-auction market. Our experiments show that an agent's performance can be quite sensitive to its assumptions about the policies of other agents, and that when there is substantial uncertainty about the level of sophistication of other agents, reducing the level of recursion may be the best policy.
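The abstract's central technical idea is a multiagent extension of Q-learning in which each agent bootstraps from an equilibrium value of the next state's stage game rather than from its own max, which is what ties convergence to a Nash equilibrium of the stochastic game. The sketch below is only an illustration of that idea for two agents; the pure-strategy/maximin equilibrium selection, the toy problem sizes, and all function and variable names are assumptions made for this example, not the dissertation's exact algorithm.

# Minimal two-agent sketch of an equilibrium-based Q-learning update
# (illustrative assumption; the full method also handles mixed-strategy
# equilibria and exploration, which are omitted here).
import numpy as np

n_states, n_actions = 5, 2          # toy sizes, assumed for illustration
gamma, alpha = 0.95, 0.1            # discount factor and learning rate

# Each agent keeps a Q-table over joint actions: Q[i][s, a1, a2]
Q = [np.zeros((n_states, n_actions, n_actions)) for _ in range(2)]

def stage_game_values(Q1_s, Q2_s):
    """Return both agents' payoffs at a pure-strategy Nash equilibrium of the
    stage game (Q1_s, Q2_s); fall back to maximin values if none exists."""
    for a1 in range(n_actions):
        for a2 in range(n_actions):
            best1 = Q1_s[:, a2].max()        # agent 1's best reply to a2
            best2 = Q2_s[a1, :].max()        # agent 2's best reply to a1
            if Q1_s[a1, a2] >= best1 and Q2_s[a1, a2] >= best2:
                return Q1_s[a1, a2], Q2_s[a1, a2]
    # Fallback: maximin values (a simplification of mixed-equilibrium selection)
    return Q1_s.min(axis=1).max(), Q2_s.min(axis=0).max()

def update(s, a1, a2, r1, r2, s_next):
    """One joint-experience backup: each agent's target uses the equilibrium
    value of the next state's stage game instead of its own max."""
    v1, v2 = stage_game_values(Q[0][s_next], Q[1][s_next])
    Q[0][s, a1, a2] += alpha * (r1 + gamma * v1 - Q[0][s, a1, a2])
    Q[1][s, a1, a2] += alpha * (r2 + gamma * v2 - Q[1][s, a1, a2])

Calling update(s, a1, a2, r1, r2, s_next) after every joint step plays the role of the single-agent Q-learning backup; the only structural change is that the bootstrap target is a stage-game equilibrium value computed from both agents' Q-tables.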
dc.format.extent: 141 p.
dc.language: English
dc.language.iso: EN
dc.subject: Decision Making
dc.subject: Decision-making
dc.subject: Dynamic
dc.subject: Multiagent
dc.subject: Noncooperative Multiagent Systems
dc.subject: Q-learning
dc.title: Learning in dynamic noncooperative multiagent systems.
dc.type: Thesis
dc.description.thesisdegreename: PhD (en_US)
dc.description.thesisdegreediscipline: Applied Sciences
dc.description.thesisdegreediscipline: Computer science
dc.description.thesisdegreediscipline: Economics
dc.description.thesisdegreediscipline: Operations research
dc.description.thesisdegreediscipline: Social Sciences
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/131906/2/9938451.pdf
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)

