
Stable Profiles in Simulation-Based Games via Reinforcement Learning and Statistics

dc.contributor.author: Wright, Mason
dc.date.accessioned: 2019-07-08T19:45:58Z
dc.date.available: NO_RESTRICTION
dc.date.available: 2019-07-08T19:45:58Z
dc.date.issued: 2019
dc.date.submitted: 2019
dc.identifier.uri: https://hdl.handle.net/2027.42/149991
dc.description.abstract: In environments governed by the behavior of strategically interacting agents, game theory provides a way to predict outcomes in counterfactual scenarios, such as new market mechanisms or cybersecurity systems. Simulation-based games allow analysts to reason about settings that are too complex to model analytically with sufficient fidelity. But prior techniques for studying agent behavior in simulation-based games lack theoretical guarantees about the strategic stability of those behaviors. In this dissertation, I propose a way to measure the likelihood that an agent could find a beneficial strategy deviation from a proposed behavior, using a limited number of samples from a distribution over strategies, and I provide a theoretically proven bound on this likelihood. The method employs a provably conservative confidence interval estimator, along with a multiple test correction, to provide its guarantee. I show that the method can reliably find provably stable strategy profiles in an auction game and in a cybersecurity game from the prior literature. I also present a method for evaluating the stability of strategy profiles learned over a restricted set of strategies, where a strategy profile is an assignment of a strategy to each agent in a game. This method uses reinforcement learning to challenge the learned behavior as a test of its soundness. The study finds that a widely used trading agent model, the zero-intelligence trader, can be reasonably strategically stable in continuous double auction games, but only if the strategies' parameters are calibrated for the particular game instance. In addition, I present new applications of empirical game-theoretic analysis (EGTA) to a cybersecurity setting involving defense against attacker intrusion into a computer system. This work uses iterated deep reinforcement learning to generate attacker and defender strategies that are more strategically stable than those found in prior work. It also offers empirical insights into how iterated deep reinforcement learning approaches strategic equilibrium over dozens of rounds. (An illustrative sketch of a confidence-bound stability test appears after this record.)
dc.language.iso: en_US
dc.subject: simulation-based games
dc.subject: reinforcement learning
dc.subject: game theory
dc.title: Stable Profiles in Simulation-Based Games via Reinforcement Learning and Statistics
dc.type: Thesis
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Computer Science & Engineering
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Wellman, Michael P
dc.contributor.committeemember: Teneketzis, Demosthenis
dc.contributor.committeemember: Schoenebeck, Grant
dc.contributor.committeemember: Wiens, Jenna
dc.subject.hlbsecondlevel: Computer Science
dc.subject.hlbtoplevel: Engineering
dc.description.bitstreamurl: https://deepblue.lib.umich.edu/bitstream/2027.42/149991/1/masondw_1.pdf
dc.identifier.orcid: 0000-0003-2723-3309
dc.identifier.name-orcid: Wright, Mason; 0000-0003-2723-3309
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)
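
Illustrative sketch (not part of the record): the abstract describes a stability test built from a provably conservative confidence interval estimator combined with a multiple test correction. The Python sketch below shows that general pattern only, not the dissertation's actual estimator. It assumes a hypothetical setup in which the caller supplies one gain sampler per candidate deviating strategy and payoffs bounded within a known range, and it uses a Hoeffding-style upper confidence bound with a Bonferroni correction across the candidate deviations.

import math
from typing import Callable, Sequence


def hoeffding_upper_bound(samples: Sequence[float], payoff_range: float, alpha: float) -> float:
    """Conservative (Hoeffding-style) upper confidence bound on the mean of bounded samples."""
    n = len(samples)
    mean = sum(samples) / n
    # With probability at least 1 - alpha, the true mean lies below this value.
    return mean + payoff_range * math.sqrt(math.log(1.0 / alpha) / (2.0 * n))


def profile_appears_stable(
    deviation_gain_samplers: Sequence[Callable[[int], Sequence[float]]],
    payoff_range: float,
    samples_per_deviation: int = 200,
    alpha: float = 0.05,
) -> bool:
    """Check that no tested deviation shows a provable positive expected gain.

    Each sampler simulates the gain an agent obtains by switching to one candidate
    deviating strategy while the rest of the profile is held fixed. A Bonferroni
    correction splits the error budget across all candidate deviations, so the
    guarantee holds simultaneously for every test.
    """
    corrected_alpha = alpha / len(deviation_gain_samplers)  # multiple test correction
    for sample_gains in deviation_gain_samplers:
        gains = sample_gains(samples_per_deviation)
        if hoeffding_upper_bound(gains, payoff_range, corrected_alpha) > 0.0:
            # Cannot rule out a beneficial deviation at this confidence level.
            return False
    return True

Under these assumptions, a passing check means that with probability at least 1 - alpha every tested deviation has non-positive expected gain. The dissertation develops its own conservative estimator and samples deviations from a distribution over strategies, which this sketch does not reproduce.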

