Stochastic infinite horizon optimization with average cost criterion.

dc.contributor.author	Wachs, Allise Okin
dc.contributor.advisor	Smith, Robert L.
dc.contributor.advisor	Schochetman, Irwin E.
dc.date.accessioned	2016-08-30T17:44:22Z
dc.date.available	2016-08-30T17:44:22Z
dc.date.issued	1998
dc.identifier.uri	http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:9840665
dc.identifier.uri	https://hdl.handle.net/2027.42/131346
dc.description.abstract	We consider the stochastic infinite horizon optimization problem that seeks to minimize average cost. Observe that any solution which makes poor decisions for a finite amount of time and then chooses wisely can still be average optimal. It is our intent to eliminate these types of average optimal solutions from consideration, since they are difficult to justify over finite horizons. Instead, we seek a method for finding average optimal solutions with both effective short-run and long-run behavior. An efficient solution has been defined in a deterministic framework to be one which is optimal to every state it passes through. Thus, its short-term behavior is optimal in the sense that it reaches all of its states with minimal cost. It has been shown that for deterministic problems satisfying a bounded reachability condition, efficient solutions are average optimal. We transform the stochastic problem into a deterministic one in order to utilize many of these deterministic results. However, the transformed stochastic problem does not satisfy the bounded reachability property, so this research introduces the notion of near reachability. Furthermore, we extend the idea of an efficient solution to an $\epsilon$-efficient solution. We also develop a property which combines a near reachability condition with a regulated cost differences requirement. The $\epsilon$-efficient solutions are shown to be average optimal for problems satisfying this property, which we refer to as Property BNR. We prove that nonhomogeneous Markov Decision Processes satisfy Property BNR; hence, an interesting class of problems can be solved for the minimal average cost objective. Furthermore, an algorithm designed to solve the infinite horizon undiscounted deterministic problem is modified for the stochastic problem. This algorithm finds the first optimal decision of the infinite horizon problem in a finite amount of time. The procedure may be repeated iteratively to build an optimal sequence of decisions. The dissertation concludes with an application to a machine down-time minimization problem.
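The abstract's first-decision idea — solve ever-longer finite-horizon truncations until the optimal initial decision settles down — can be illustrated with a small sketch. Everything here (the function names, the toy two-state example, and especially the agreement-based stopping rule) is illustrative only and is not the dissertation's actual algorithm, which uses a rigorous stopping criterion; the sketch also treats only a deterministic truncation, whereas the dissertation handles the stochastic problem via a deterministic transformation.

```python
# Illustrative sketch: solve finite-horizon truncations of growing length T
# by backward induction, and stop once the optimal *first* decision has been
# the same over several consecutive horizons.  The agreement rule is a
# heuristic stand-in for a true solution-horizon stopping criterion.

def solve_truncation(T, n_states, n_actions, step):
    """Return (optimal T-stage cost from state 0, best first action).

    step(t, s, a) -> (stage_cost, next_state); the stage index t allows
    nonhomogeneous (time-varying) costs and dynamics.
    """
    V = [0.0] * n_states                      # terminal values: all zero
    best_first = 0
    for t in range(T - 1, -1, -1):            # backward induction
        new_V, arg_best = [], []
        for s in range(n_states):
            q = [(c + V[s2], a)
                 for a in range(n_actions)
                 for c, s2 in [step(t, s, a)]]
            v, a = min(q)                     # ties broken by action index
            new_V.append(v)
            arg_best.append(a)
        V = new_V
        best_first = arg_best[0]              # stage-0 action once t == 0
    return V[0], best_first

def rolling_first_decision(step, n_states, n_actions, agree=5, max_T=200):
    """Grow the horizon until the first decision agrees over `agree`
    consecutive horizons; return (decision, horizon where it stabilized)."""
    history = []
    for T in range(1, max_T + 1):
        _, a0 = solve_truncation(T, n_states, n_actions, step)
        history.append(a0)
        if len(history) >= agree and len(set(history[-agree:])) == 1:
            return a0, T
    return history[-1], max_T

# Toy deterministic example: state 0 costs 2 per stage, state 1 costs 1;
# action 1 pays a one-time switching cost of 1.5 to move from state 0 to 1.
def step(t, s, a):
    stage = 2.0 if s == 0 else 1.0
    if s == 0 and a == 1:
        return stage + 1.5, 1                 # switch to the cheap state
    return stage, s                           # action 0: stay put

a0, T = rolling_first_decision(step, n_states=2, n_actions=2)
```

In this toy instance, short horizons favor staying in the expensive state (the switching cost is not yet amortized), but once the horizon is long enough the switch becomes, and remains, the optimal first decision — the kind of finite-time identification of the first infinite-horizon decision the abstract describes.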
dc.format.extent	113 p.
dc.language	English
dc.language.iso	EN
dc.subject	Average Cost
dc.subject	Criterion
dc.subject	Infinite Horizon Optimization
dc.subject	Near Reachability
dc.subject	Optimization
dc.subject	Stochastic
dc.title	Stochastic infinite horizon optimization with average cost criterion.
dc.type	Thesis
dc.description.thesisdegreename	PhD	en_US
dc.description.thesisdegreediscipline	Applied Sciences
dc.description.thesisdegreediscipline	Computer science
dc.description.thesisdegreediscipline	Industrial engineering
dc.description.thesisdegreediscipline	Operations research
dc.description.thesisdegreegrantor	University of Michigan, Horace H. Rackham School of Graduate Studies
dc.description.bitstreamurl	http://deepblue.lib.umich.edu/bitstream/2027.42/131346/2/9840665.pdf
dc.owningcollname	Dissertations and Theses (Ph.D. and Master's)