
Deterministic and stochastic systems optimization.

dc.contributor.authorLambert, Theodore Joseph, III
dc.contributor.advisorSmith, Robert L.
dc.contributor.advisorEpelman, Marina A.
dc.date.accessioned2016-08-30T15:17:41Z
dc.date.available2016-08-30T15:17:41Z
dc.date.issued2003
dc.identifier.urihttp://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:3079481
dc.identifier.urihttps://hdl.handle.net/2027.42/123426
dc.description.abstractOptimization of real-world problems can pose a formidable task, as several common challenges arise when approaching them. Often the objective function has no closed-form expression, but instead comes, in effect, from a black box, e.g., a simulation model or even program code. Moreover, even with a closed-form expression for the objective available, the problem may remain too complex for direct analysis. In such problems, even finding some form of locally optimal solution may be very difficult. We address this issue in the context of both complex deterministic and stochastic optimization problems; in the latter case we focus on problems that can be formulated as Markov decision problems. In practice, heuristics are used to solve complex deterministic combinatorial optimization problems. We explore a new heuristic paradigm based on the game-theoretic concept of fictitious play. By animating the system and playing a repeated <italic>fictitious game</italic>, we attempt to find a local optimum. We show convergence of the fictitious play algorithm and address implementation issues such as sampling versus exact evaluation. The value of this new paradigm is demonstrated through its effective use on real applications. Markov Decision Processes have long been known to be a powerful modelling mechanism for sequential decision making under uncertainty. A significant deterrent to their use is the computation time and computer storage required by the large state space of most models. One method for mitigating this problem is state aggregation. Aggregation has been covered extensively in deterministic dynamic programming, with a complete survey and description in Rogers et al. [25]. One of the most general settings for aggregation is found in Bean, Birge, and Smith [28]. Instead of using an iterative aggregation/disaggregation scheme, they develop an algorithm that simultaneously aggregates and disaggregates to arrive at a feasible solution with a calculable error bound. We provide a generalized technique for MDPs that can be seen as borrowing from their general construct. Our method is a computationally tractable technique that simultaneously calculates an aggregate solution and disaggregates the result to an approximate solution for the original problem.
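The fictitious-play heuristic the abstract describes treats each decision variable as a player in a repeated identical-interest game: every player best-responds to the empirical distribution of the other players' past plays, and the common payoff is the objective itself. A minimal two-player sketch under those assumptions (the objective `f`, the grid domains, and all names below are illustrative, not taken from the dissertation):

```python
from collections import Counter

# Illustrative two-variable objective the "players" jointly maximize;
# its maximizer over the grid used below is (x, y) = (2, 3).
def f(x, y):
    return -(x - 2) ** 2 - (y - 3) ** 2

def fictitious_play(domains, objective, rounds=200):
    """Two-player fictitious play on an identical-interest game.

    Each player repeatedly best-responds to the empirical (historical)
    mixture of the other player's past actions.
    """
    # Seed each player's play history with the first action in its domain.
    history = [Counter({next(iter(d)): 1}) for d in domains]
    for _ in range(rounds):
        for i, domain in enumerate(domains):
            others = history[1 - i]
            total = sum(others.values())

            def expected_payoff(v, i=i, others=others, total=total):
                # Average payoff of action v against the opponent's history.
                return sum(
                    count * (objective(v, w) if i == 0 else objective(w, v))
                    for w, count in others.items()
                ) / total

            best = max(domain, key=expected_payoff)
            history[i][best] += 1
    # Report each player's most frequently played action.
    return tuple(h.most_common(1)[0][0] for h in history)

x_star, y_star = fictitious_play([range(6), range(6)], f)
print(x_star, y_star)  # 2 3
```

On this separable toy objective every best response is the coordinate-wise maximizer, so the empirical play concentrates on (2, 3); on the harder combinatorial objectives the abstract has in mind, the same loop yields only a candidate local optimum, and the sampled (rather than exact) evaluation of `expected_payoff` is one of the implementation issues the dissertation addresses.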
dc.format.extent91 p.
dc.languageEnglish
dc.language.isoEN
dc.subjectDeterministic
dc.subjectMarkov Decision Processes
dc.subjectState Aggregation
dc.subjectStochastic Systems Optimization
dc.titleDeterministic and stochastic systems optimization.
dc.typeThesis
dc.description.thesisdegreenamePhD
dc.description.thesisdegreedisciplineApplied Sciences
dc.description.thesisdegreedisciplineIndustrial engineering
dc.description.thesisdegreedisciplineOperations research
dc.description.thesisdegreegrantorUniversity of Michigan, Horace H. Rackham School of Graduate Studies
dc.description.bitstreamurlhttp://deepblue.lib.umich.edu/bitstream/2027.42/123426/2/3079481.pdf
dc.owningcollnameDissertations and Theses (Ph.D. and Master's)

