INFINITE HORIZON OPTIMIZATION

Technical Report 87-3

Irwin E. Schochetman
Department of Mathematical Sciences, Oakland University, Rochester, Michigan 48063

Robert L. Smith*
Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109

August 1986

* Research partially supported by NSF Grant No. ECS-8409682.

ABSTRACT

We consider the infinite horizon problem of minimizing discounted costs over a set of feasible strategies whose component decisions are drawn from arbitrary metric spaces. We provide conditions for the existence of finite and infinite horizon optimal solutions, develop the notion of a generalized algorithm and its convergence, give necessary and sufficient conditions for the existence of solution (i.e., planning) horizons for an algorithm, establish a stopping criterion for solution horizon determination, and conclude with an application to doubly infinite, non-linear programs.

KEYWORDS: Infinite horizon optimization, generalized algorithm, solution horizon, stopping criterion, infinite program.

CONTENTS
1. Introduction
2. The Space of Feasible Strategies
3. The Cost Structure
4. Existence of Optimal Solutions
5. Generalized Algorithms and Their Convergence
6. Solution Horizons and Their Existence
7. A Stopping Criterion
8. Application to Infinite Programs
References

1. Introduction

Consider an infinite sequential decision problem where each decision belongs to some general decision set. Any such sequence of decisions, called a strategy, is assumed to extend over the infinite horizon. In general, not all strategies will be feasible. Suppose that for any feasible strategy there is an associated cumulative cost and cumulative revenue which are functions of time; the cumulative net cost of a feasible strategy is then the difference of these two functions. Since these functions are unbounded in general, in order to compare them, we will continuously discount them to time zero relative to some suitable interest rate. The basic problem we wish to solve is the following infinite horizon optimization problem: find a strategy that minimizes continuously discounted cumulative net cost over the set of feasible infinite horizon strategies.

Associated with our problem is a continuum of finite horizon optimization problems. Specifically, for each future time, minimize the corresponding finite horizon continuously discounted cumulative net cost over the set of initial segments of the feasible strategies up to that future time. Since we can in general solve only finite horizon problems, our plan is to approximate solutions to the infinite horizon problem by a sequence of finite horizon solutions. In particular, since the first decision is the one which has to be made initially, we would like to find a near-optimal first decision to a sufficiently large finite horizon problem, and then repeat this process, each time being assured that the previously chosen decisions remain near-optimal. This is the familiar forecast horizon approach to infinite horizon optimization.

In [3], Bhaskaran and Sethi argue that a more appropriate term for the finite horizon used is solution horizon, since no a priori bound on the length of the finite horizon required is known in general. We agree with this point and have chosen to use the term solution horizon in place of forecast horizon.

The above approach to our problem has also been followed by several other authors. Originally, Bean and Smith [1] studied this problem in the context of finite decision spaces, obtaining a somewhat restrictive solution horizon theorem. Grinold [4] earlier considered similar issues in the context of infinite convex programs. In [2], Bes and Sethi investigated a stochastic version of the problem which was characterized by discrete time as well as finite decision spaces. In [7], Hopp, Bean and Smith studied a non-homogeneous Markov decision process version of the problem, also with discrete time and finite decision spaces. Our main goal here is to obtain conditions which guarantee the existence of solution horizons (relative to a finite horizon solution algorithm) for the infinite horizon optimization problem in the context of very general (possibly non-finite and non-discrete) decision spaces within a continuous time framework.

Throughout this paper we will make the following assumptions. In addition to requiring that all quantities be deterministic, we will assume that:

A. At any decision epoch, the possible decisions constitute a compact subset of a metric space.

B. All strategies extend over infinite time.

C. A given strategy's infeasibility can be determined by observing only finitely many initial decisions.

D. The cumulative cost and revenue functions are non-decreasing and eventually bounded by an exponential function.

E. Cost and revenue functions are continuously discounted.

F. Two feasible strategies which are the same through a given time have the same cost and revenue functions through that time.

G. Assignments of cost and revenue functions to feasible strategies satisfy a certain continuity condition.

In section 2, we define the space of feasible solutions and verify that it is a compact metric space. In section 3, we introduce the cost structure and establish some important and useful properties. Under our assumptions, the finite and infinite horizon discounted costs are shown to be continuous functions in section 4, thus guaranteeing the existence of finite and infinite horizon optimal solutions. In section 5, we introduce the notions of algorithm and algorithm convergence, as well as the set of algorithmically optimal solutions. An example shows that this set can be significantly smaller than the set of all optimal solutions. This is important because it is the smaller set which plays the central role in solving the infinite horizon problem by finite horizon approximations. In section 6, we define the notion of solution horizon and study the existence of such horizons. Specifically, we give necessary and sufficient conditions for solution horizons to exist for an algorithm in terms of the associated set of algorithmically optimal solutions. In section 7, we give our generalization of the stopping criterion established by Bes and Sethi in [2]. Finally, in section 8, we provide an application to solving doubly-infinite, non-linear programs.

2. The Space of Feasible Strategies

Suppose $(Z_j,\rho_j)$ denotes a metric space and $Y_j \subseteq Z_j$ the set of all possible decisions available after the previous $j-1$ decisions have been made, $j = 1,2,\dots$. Each element $y_j$ of $Y_j$ will be called a $j$th policy.

Assumption A. $Y_j$ is a compact space for all $j$.

We may assume each $\rho_j$ is bounded by 1. There is no loss of generality in making this assumption, since every metric is equivalent to one which is bounded by 1. Moreover, if $Y_j$ is finite, then we may also assume (without loss of generality) that the minimum distance between distinct points of $Y_j$ is 1. Some important special cases for $Y_j$ are: (i) a finite set; (ii) a closed, bounded interval of the real line; (iii) more generally, a compact, convex subset of $R^n$.

Let $Y$ denote the product space $\prod_j Y_j$. The points $y = (y_j)$ of $Y$ will be called strategies. Since not all strategies are feasible in general, we will denote the subset of $Y$ consisting of feasible strategies by $X$, which we assume to be non-empty. The strategy space $Y$ is naturally equipped with the product topology, which makes it compact (Tychonoff Theorem [5, p. 146]). The space $Y$ also inherits a metric from the $Y_j$ as follows:
$$\rho(x,y) = \sum_{j=1}^{\infty} \rho_j(x_j,y_j)/2^j, \qquad x,y \in Y.$$
It is easy to verify that $\rho$ is indeed a metric on $Y$ which is also bounded by 1.
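To make the construction concrete, the following minimal sketch evaluates a truncated version of the product metric $\rho$ for strategies whose policies are real vectors. The per-stage metric (Euclidean distance capped at 1), the truncation level, and the sample policies are illustrative assumptions, not part of the model.

```python
import math

def rho_j(a, b):
    """Per-stage metric: Euclidean distance truncated at 1 (an illustrative choice)."""
    return min(math.dist(a, b), 1.0)

def rho(x, y, n_terms=50):
    """Truncated product metric rho(x, y) = sum_j rho_j(x_j, y_j) / 2**j.

    x and y are sequences of per-stage policies; the tail beyond n_terms
    contributes at most 2**(-n_terms), so a modest truncation suffices.
    """
    return sum(rho_j(xj, yj) / 2**j
               for j, (xj, yj) in enumerate(zip(x[:n_terms], y[:n_terms]), start=1))

# Two strategies agreeing in their first three policies are at distance at most 2**-3.
x = [(0.0,), (1.0,), (0.5,), (0.9,)]
y = [(0.0,), (1.0,), (0.5,), (0.1,)]
print(rho(x, y))  # 0.05, which is <= 1/8 since the strategies first differ at stage 4
```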

Lemma 2.1. The product and metric topologies on $Y$ are the same. Hence, $Y$ is a compact metric space.

Proof. Let $I$ denote the identity mapping from $Y$ (with the product topology) to $Y$ (with the metric topology). Then $I$ is a one-to-one mapping of a compact space onto a Hausdorff space. Thus, to show that the two topologies are the same, it suffices to show that $I$ is continuous [6, p. 207], which is easily done.

Remark. It follows from the previous lemma that we may abandon the metric $\rho$ and work exclusively with the product topology on $Y$. In fact, our problem can be generalized to the setting where the $Z_j$ are assumed to be Hausdorff spaces, i.e., not necessarily metric spaces. However, we will keep the metric $\rho$ for ease of exposition.

The next two results relate closeness of strategies in the space $Y$ to closeness of policies in the spaces $Y_j$. Their proofs are straightforward. For $y \in Y$, let ${}_ky = (y_1,\dots,y_k)$.

Lemma 2.2. Let $0 < \delta < 1$. If $\rho(x,y) < \delta$ for $x, y$ in $Y$, then $\rho_j(x_j,y_j) < \delta 2^j$ for every $j$; in particular, $\rho_j(x_j,y_j) < 1$ for $1 \le j \le -\log_2 \delta$. If, moreover, $Y_j$ is finite for $1 \le j \le k$ and $\delta \le 2^{-k}$, then ${}_kx = {}_ky$.

Lemma 2.3. Let $\delta > 0$ and suppose $k$ is a positive integer. If $\rho(x,y) < \delta/2^k$, then $\rho_j(x_j,y_j) < \delta$, $j = 1,\dots,k$. If $Y_j$ is finite, $1 \le j \le k$, and $\delta \le 1$, then ${}_kx = {}_ky$.

For each positive integer $k$, define

$${}_kX = \{{}_kx : x \in X\}.$$
For ${}_kx \in {}_kX$, let $T_k({}_kx)$ denote the time of implementation of policy $x_k$ (i.e., the $k$th decision epoch), given that policies $x_1,\dots,x_{k-1}$ were implemented before $x_k$. Then $T_k$ is a mapping of ${}_kX$ into the non-negative reals, with $T_1({}_1x) = 0$ for all $x$ in $X$ and
$$T_k({}_kx) \le T_{k+1}({}_{k+1}x), \qquad x \in X,\ k = 1,2,\dots.$$

Assumption B. $T_k({}_kx) \to \infty$ as $k \to \infty$, for all $x \in X$.

Of special interest is the case where $T_k({}_kx) = k$, $x \in X$, $k = 1,2,\dots$, which corresponds to a discrete time problem. For each $T > 0$, Assumption B allows us to define
$$n_T(x) = \max\{n : T_n({}_nx) \le T\}, \qquad x \in X,$$
which is the index of the last policy in strategy $x$ which is implemented no later than time $T$.

Recall that the feasible strategies form a non-empty subset of $Y$. For the main purposes of this paper, it will be important that $X$ be compact. We next turn to the question of what assumptions on $X$ will make it compact.

Proposition 2.4. The following are equivalent: (i) $X$ is compact. (ii) $X$ is closed in $Y$. (iii) $X$ is complete relative to the metric $\rho$ on $Y$.

Proof. Omitted.
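Since later sections use the index $n_T(x)$ repeatedly, the following minimal sketch computes it from a strategy's decision epochs. The finite list of epochs stands in for the infinite sequence $T_1({}_1x), T_2({}_2x),\dots$ and is purely illustrative.

```python
def n_T(epochs, T):
    """Index of the last policy implemented no later than time T.

    `epochs` is the nondecreasing sequence of decision epochs T_1, T_2, ... of a fixed
    strategy; by Assumption B the epochs tend to infinity, so the scan terminates.
    """
    n = 0
    for k, t_k in enumerate(epochs, start=1):
        if t_k > T:
            break
        n = k
    return n

# Discrete-time case T_k = k: the last policy implemented by time T = 3.5 is the third.
print(n_T([1, 2, 3, 4, 5], 3.5))  # 3
```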

Assumption C requires that the feasibility of a given strategy can be determined by observing only finitely many initial policies. The exact meaning of this is the following:

Assumption C. If $y \in Y$, $y \notin X$, then there exist $\delta > 0$ and a positive integer $k$ having the property that if $z \in Y$ is such that $\rho_j(z_j,y_j) < \delta$ for $1 \le j \le k$, then $z \notin X$.

Assumption C requires that if $y \in Y$ is infeasible, then all strategies $z$ near $y$ are also infeasible, i.e., that the complement of $X$ in $Y$ is open. Thus, it is necessary and sufficient for $X$ to be closed, and hence compact, in $Y$. Consequently, $X$ is a compact metric space.

3. The Cost Structure

We next turn to the problem of assigning values to the feasible strategies. For each $x$ in $X$, let $K(x,t)$ and $R(x,t)$ respectively denote the cumulative cost function and cumulative revenue function associated with strategy $x$, where $t \ge 0$.

Assumption D. For each $x$ in $X$, $K(x,\cdot)$ and $R(x,\cdot)$ are non-negative and non-decreasing, and there exist $B > 0$, $\gamma \ge 0$ and $T_0 > 0$ such that
$$\max(K(x,t),R(x,t)) \le Be^{\gamma t} \quad \text{for all } t \ge T_0,\ x \in X.$$
Note that, without loss of generality, we may assume $T_0 = 0$. (If not, replace $B$ by $Be^{\gamma T_0}$.)

We define the corresponding cumulative net cost function for $x$ by
$$C(x,t) = K(x,t) - R(x,t), \qquad t \ge 0.$$

Then, for each $x$ in $X$, $C(x,\cdot)$ is a function of locally bounded variation.

Assumption E. All costs incurred at time $t$ are continuously discounted by the factor $e^{-rt}$, where the rate of interest $r$ is greater than 0.

The $T$-horizon discounted net cost for $x$ in $X$ is defined by the Stieltjes integral
$$\widetilde C(x,r,T) = \int_0^T e^{-rt}\,dC(x,t), \qquad r > 0,\ T \ge 0.$$
The $T$-horizon discounted cumulative cost and revenue functions, $\widetilde K(x,r,T)$ and $\widetilde R(x,r,T)$ respectively, are defined analogously. Moreover,
$$\widetilde C(x,r,T) = \widetilde K(x,r,T) - \widetilde R(x,r,T), \qquad x \in X,\ r > 0,\ T \ge 0.$$
Our $T$-horizon problem is then: $\min_{x \in X} \widetilde C(x,r,T)$. Since the minimum need not exist in general, define
$$V^*(r,T) = \inf_{x \in X} \widetilde C(x,r,T), \qquad T \ge 0.$$
Since the $T$-horizon discounted cost associated with a strategy $x$ truncates the cash flow at time $T$, it will be useful to identify strategies that agree in policies through time $T$. To this end, for each $T > 0$, define the equivalence relation $\sim_T$ on $X$ as follows. If $x,y \in X$, then $x \sim_T y$ if: (i) $n_T(x) = n_T(y)$; (ii) $x_i = y_i$, $i = 1,\dots,n_T(x)$. In this event, since ${}_nx = {}_ny$ for $n = n_T(x)$, we have that $T_k({}_kx) = T_k({}_ky)$, $1 \le k \le n_T(x)$, i.e., $x$ and $y$ have the same decision epochs through time $T$.
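When every cost and revenue is incurred as a lump sum at a decision epoch, $C(x,\cdot)$ is a step function and the Stieltjes integral defining $\widetilde C(x,r,T)$ reduces to a discounted sum of jumps. A minimal sketch under that assumption; the epoch times and cash amounts below are illustrative.

```python
import math

def discounted_net_cost(jumps, r, T):
    """T-horizon discounted net cost when C(x, .) is a step function.

    `jumps` lists (t, net) pairs, where `net` is the lump-sum net cost (cost minus
    revenue) incurred at time t; the integral over [0, T] is then the discounted sum
    of the jumps occurring no later than T.
    """
    return sum(math.exp(-r * t) * net for t, net in jumps if t <= T)

# A strategy incurring net costs 10, -4, 6 at times 0, 1, 2, discounted at r = 0.1.
jumps = [(0.0, 10.0), (1.0, -4.0), (2.0, 6.0)]
print(discounted_net_cost(jumps, r=0.1, T=1.5))   # 10 - 4*exp(-0.1)
print(discounted_net_cost(jumps, r=0.1, T=10.0))  # all three jumps included
```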

Each $\sim_T$-equivalence class consists of those elements of $X$ which are identical through time $T$, and the space $X_T = X/\!\sim_T$ of equivalence classes may be identified with the set of all distinct feasible strategies through time $T$.

Assumption F. If $x \sim_T y$, then $K(x,t) = K(y,t)$ and $R(x,t) = R(y,t)$ for $0 \le t \le T$.

From Assumption F, $C(x,t) = C(y,t)$, $0 \le t \le T$, and consequently $\widetilde C(x,r,T) = \widetilde C(y,r,T)$, i.e., the $T$-horizon objective function is constant on the $\sim_T$-equivalence classes in $X$. Thus, we obtain a function $\chi \mapsto \widetilde C(\chi,r,T)$ on $X_T$, where
$$\widetilde C(\chi,r,T) = \widetilde C(x,r,T), \qquad x \in \chi,$$
and our $T$-horizon problem may be written: $\inf_{\chi \in X_T} \widetilde C(\chi,r,T)$.

On a more concrete level, let ${}_TX = \{{}_Tx : x \in X\}$, where
$${}_Tx = (x_1,x_2,\dots,x_{n_T(x)}).$$
Observe that ${}_Tx$ is the $T$-horizon initial segment of $x$. We have seen above that $\widetilde C(x,r,T)$ depends only on ${}_Tx$, so that we may define
$$\widetilde C({}_Tx,r,T) = \widetilde C(x,r,T), \qquad x \in X.$$

Consequently, our $T$-horizon problem may be written: $\inf_{{}_Tx \in {}_TX} \widetilde C({}_Tx,r,T)$, where ${}_TX$ consists of $T$-horizon feasible strategies of finite length which are extendable to elements of $X$. In particular, if we have a discrete time problem, so that $T_n({}_nx) = n$, $x \in X$, $n = 1,2,\dots$, then ${}_TX = {}_NX$ for $N \le T < N+1$, $N = 1,2,3,\dots$.

The infinite horizon discounted net cost for $x$ in $X$ is defined by the Laplace–Stieltjes transform
$$\widetilde C(x,r) = \int_0^{\infty} e^{-rt}\,dC(x,t),$$
where
$$\widetilde C(x,r) = \lim_{T \to \infty} \widetilde C(x,r,T),$$
provided this limit exists. The infinite horizon discounted cumulative cost and revenue functions, $\widetilde K(x,r)$ and $\widetilde R(x,r)$ respectively, are defined similarly. The next result answers the question of existence.

Proposition 3.1. If $r > \gamma$, then $\widetilde C(x,r)$, $\widetilde K(x,r)$ and $\widetilde R(x,r)$ exist for all $x$ in $X$. Moreover, $\widetilde C(x,r) = \widetilde K(x,r) - \widetilde R(x,r)$.

Proof. See Bean and Smith [1].

For $r > \gamma$, the infinite horizon problem is then: $\inf_{x \in X} \widetilde C(x,r)$. As above, define
$$V^*(r) = \inf_{x \in X} \widetilde C(x,r).$$

The following result will be extremely useful in sections 4 and 7. Define
$$\alpha(r,t) = \frac{rB}{r-\gamma}\,e^{(\gamma-r)t}, \qquad r > \gamma,\ t \ge 0.$$
Then $\alpha(r,t) > 0$ and, for each $r$, $\alpha(r,t) \to 0$ as $t \to \infty$.

Theorem 3.2. For $x$ in $X$, $r > \gamma$ and $T \ge 0$, we have:
$$0 \le \widetilde K(x,r) - \widetilde K(x,r,T) \le \alpha(r,T), \qquad 0 \le \widetilde R(x,r) - \widetilde R(x,r,T) \le \alpha(r,T)$$
and
$$|\widetilde C(x,r) - \widetilde C(x,r,T)| \le \alpha(r,T).$$
In particular, it follows that
$$\lim_{T \to \infty} |\widetilde C(x,r) - \widetilde C(x,r,T)| = 0$$
uniformly with respect to $x$ in $X$.

Proof. For each $x$, $K(x,\cdot)$ is a non-negative, non-decreasing function. By Theorem 4b of Widder [9], we have
$$\int_T^S e^{-rt}\,dK(x,t) = K(x,S)e^{-rS} - K(x,T)e^{-rT} + r\int_T^S K(x,t)e^{-rt}\,dt,$$
that is,
$$\widetilde K(x,r,S) - \widetilde K(x,r,T) = K(x,S)e^{-rS} - K(x,T)e^{-rT} + r\int_T^S K(x,t)e^{-rt}\,dt,$$
where
$$r\int_T^S K(x,t)e^{-rt}\,dt \le r\int_T^S Be^{(\gamma-r)t}\,dt \le \frac{rB}{r-\gamma}\left[e^{(\gamma-r)T} - e^{(\gamma-r)S}\right] = \alpha(r,T) - \alpha(r,S).$$

Thus
$$\widetilde K(x,r,S) - \widetilde K(x,r,T) \le Be^{(\gamma-r)S} - K(x,T)e^{-rT} + \alpha(r,T) - \alpha(r,S).$$
Taking the limit as $S \to \infty$ on both sides yields
$$\widetilde K(x,r) - \widetilde K(x,r,T) \le -K(x,T)e^{-rT} + \alpha(r,T) \le \alpha(r,T).$$
The claim for $\widetilde R$ is proved similarly. The claim for $\widetilde C$ then follows from the fact that
$$-\bigl(\widetilde R(x,r) - \widetilde R(x,r,T)\bigr) \le \widetilde C(x,r) - \widetilde C(x,r,T) \le \widetilde K(x,r) - \widetilde K(x,r,T).$$

Proposition 3.3. For $0 \le T \le S$ and $r > \gamma$, we have $V^*(r,S) \le V^*(r,T) + \alpha(r,T)$.

Proof. In general,
$$\widetilde C(x,r,S) = \int_0^T e^{-rt}\,dC(x,t) + \int_T^S e^{-rt}\,dC(x,t) = \widetilde C(x,r,T) + \int_T^S e^{-rt}\,dK(x,t) - \int_T^S e^{-rt}\,dR(x,t)$$
$$\le \widetilde C(x,r,T) + \int_T^S e^{-rt}\,dK(x,t) \le \widetilde C(x,r,T) + \int_T^{\infty} e^{-rt}\,dK(x,t) = \widetilde C(x,r,T) + \widetilde K(x,r) - \widetilde K(x,r,T),$$
that is,
$$\widetilde C(x,r,S) \le \widetilde C(x,r,T) + \alpha(r,T), \qquad x \in X,$$
by Theorem 3.2. Taking the infimum over $x$ on the left yields $V^*(r,S) \le \widetilde C(x,r,T) + \alpha(r,T)$, $x \in X$. The proof is then completed by taking the infimum over $x$ on the right.
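Because $\alpha(r,T)$ is available in closed form, one can compute how large a horizon is needed before the $T$-horizon and infinite horizon discounted costs agree to within a prescribed tolerance, uniformly over $X$. A small sketch; the parameter values are illustrative.

```python
import math

def alpha(r, t, B, gamma):
    """Tail bound alpha(r, t) = r*B/(r - gamma) * exp((gamma - r)*t), valid for r > gamma."""
    return r * B / (r - gamma) * math.exp((gamma - r) * t)

def horizon_for_tolerance(eps, r, B, gamma):
    """Smallest T >= 0 with alpha(r, T) <= eps, solved from the closed form."""
    return max(0.0, math.log(r * B / ((r - gamma) * eps)) / (r - gamma))

B, gamma, r = 1.0, 0.02, 0.10
T = horizon_for_tolerance(1e-3, r, B, gamma)
print(T, alpha(r, T, B, gamma))  # alpha(r, T) equals the 1e-3 tolerance up to rounding
```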

4. Existence of Optimal Solutions

In order to ensure the existence of optimal solutions, it is important that the discounted cost and revenue functions be continuous in $x$. Thus, we are faced with the question of what reasonable assumptions to make about the original cost and revenue functions so that this is the case. A detailed examination of Helly's Selection Theorem and Helly's Convergence Theorem [8] strongly suggests the following:

Assumption G. If $x^n \to x$ in $X$ as $n \to \infty$, then there exists a subsequence $\{x^k\}$ of $\{x^n\}$, independent of $t$, for which $K(x^k,t) \to K(x,t)$, $0 \le t < \infty$, as $k \to \infty$.

This is true, for example, if $K(x,t)$ is a continuous function of $x$ over $X$ for all $0 \le t < \infty$. We make the same assumption on $R$; hence, it is true for $C$ as well. We will see that this mild assumption on $K$ and $R$ is sufficient to guarantee the continuity over $X$ of the discounted cost and revenue functions. In order to establish these results we require the next two lemmas.

Lemma 4.1. Let $T \ge 0$. Then the family of functions $\{K(x,\cdot) : x \in X\}$ is of uniformly bounded variation on the interval $[0,T]$. Similarly for $R$ and $C$.

Proof. Since $K(x,\cdot)$ is non-decreasing, its variation on $[0,T]$ is equal to $K(x,T) - K(x,0) \le K(x,T)$, which is bounded by $Be^{\gamma T}$, $x \in X$.

Lemma 4.2. Fix $T \ge 0$. Suppose the real sequence $\{r_k\}$ converges to $r$ monotonically from the left or right. Then the sequence of functions $\{e^{-r_k t}\}$ of $t$ converges uniformly to the function $e^{-rt}$ on $[0,T]$.

Proof. This follows from Dini's Theorem [6].

For convenience, let $\Gamma$ denote the open interval $(\gamma,\infty)$. The following are our main convergence results.

Theorem 4.3. Fix $T \ge 0$. Then the mapping $(x,r) \mapsto \widetilde C(x,r,T)$ of $X \times \Gamma$ into the real numbers is continuous. Similarly for $\widetilde C$ replaced by $\widetilde K$ or $\widetilde R$.

Proof. It suffices to prove the theorem for $\widetilde K$. Suppose $(x^n,r_n) \to (x,r)$ in $X \times \Gamma$, as $n \to \infty$. If the theorem is false, then there exist $\epsilon > 0$ and a subsequence $\{(x^m,r_m)\}$ of $\{(x^n,r_n)\}$ such that
$$|\widetilde K(x^m,r_m,T) - \widetilde K(x,r,T)| \ge \epsilon, \qquad m = 1,2,\dots.$$
Since $r_m \to r$, there exists a subsequence which converges to $r$ from either the left or the right. In turn, there exists a subsequence of this one which converges monotonically to $r$ from either the left or the right. Let $\{r_k\}$ be such a subsequence, so that $(x^k,r_k) \to (x,r)$ and
$$|\widetilde K(x^k,r_k,T) - \widetilde K(x,r,T)| \ge \epsilon, \qquad k = 1,2,\dots.$$
By Assumption G, there exists a subsequence $\{x^j\}$ of $\{x^k\}$ for which $K(x^j,t) \to K(x,t)$, $0 \le t < \infty$, as $j \to \infty$. Thus, the subsequence $\{K(x^j,\cdot)\}$ has uniformly bounded variation on $[0,T]$ (Lemma 4.1) and converges pointwise to $K(x,\cdot)$ on $[0,T]$. Hence, by Helly's Convergence Theorem, it follows that
$$\int_0^T e^{-rt}\,dK(x^j,t) \to \int_0^T e^{-rt}\,dK(x,t),$$
i.e., $\widetilde K(x^j,r,T) \to \widetilde K(x,r,T)$, as $j \to \infty$. Since $r_j \to r$ monotonically from the left or right, it follows that the sequence of functions $\{e^{-r_j t}\}$ of $t$ converges uniformly to the function $e^{-rt}$ on $[0,T]$ (Lemma 4.2). Consequently,

$$\max_{0 \le t \le T} |e^{-r_j t} - e^{-rt}| \to 0, \quad \text{as } j \to \infty.$$
We then have:
$$|\widetilde K(x^j,r_j,T) - \widetilde K(x,r,T)| = \left|\int_0^T e^{-r_j t}\,dK(x^j,t) - \int_0^T e^{-rt}\,dK(x,t)\right|$$
$$\le \left|\int_0^T (e^{-r_j t} - e^{-rt})\,dK(x^j,t)\right| + \left|\int_0^T e^{-rt}\,dK(x^j,t) - \int_0^T e^{-rt}\,dK(x,t)\right|$$
$$\le \max_{0 \le t \le T} |e^{-r_j t} - e^{-rt}| \cdot \mathrm{Var}(K(x^j,\cdot),[0,T]) + \left|\int_0^T e^{-rt}\,dK(x^j,t) - \int_0^T e^{-rt}\,dK(x,t)\right|.$$
The difference of the integrals on the right goes to 0 as $j \to \infty$ (Helly's Theorem), and the other term on the right also goes to 0 as $j \to \infty$, since the variations indexed by $j$ are uniformly bounded. Hence $\widetilde K(x^j,r_j,T) \to \widetilde K(x,r,T)$, which is a contradiction. Thus, the mapping $(x,r) \mapsto \widetilde K(x,r,T)$ is continuous. Similarly, the mapping $(x,r) \mapsto \widetilde R(x,r,T)$ is continuous, as is $(x,r) \mapsto \widetilde C(x,r,T)$, since it is the difference of continuous mappings.

Theorem 4.4. The mapping $(x,r) \mapsto \widetilde C(x,r)$ of $X \times \Gamma$ into the reals is continuous.

Proof. Let $(x^n,r_n) \to (x,r)$ in $X \times \Gamma$. Then $r > \gamma$ implies there exists $r_0$ such that $\gamma < r_0 < r$. Without loss of generality, we may assume $r_0 \le r_n$ for all $n$, since $r_n \to r$. Let $\epsilon > 0$. By Theorem 3.2, there exists $T_\epsilon > 0$ such that
$$\widetilde K(y,r') - \widetilde K(y,r',T_\epsilon) < \epsilon/3, \qquad y \in X,\ r' \ge r_0.$$
Then
$$|\widetilde K(x^n,r_n) - \widetilde K(x,r)| \le |\widetilde K(x^n,r_n,T_\epsilon) - \widetilde K(x,r,T_\epsilon)| + |\widetilde K(x^n,r_n) - \widetilde K(x^n,r_n,T_\epsilon)| + |\widetilde K(x,r) - \widetilde K(x,r,T_\epsilon)|.$$

But
$$|\widetilde K(x^n,r_n) - \widetilde K(x^n,r_n,T_\epsilon)| = \widetilde K(x^n,r_n) - \widetilde K(x^n,r_n,T_\epsilon) < \epsilon/3, \quad \text{all } n,$$
and
$$|\widetilde K(x,r) - \widetilde K(x,r,T_\epsilon)| = \widetilde K(x,r) - \widetilde K(x,r,T_\epsilon) < \epsilon/3.$$
Moreover, by Theorem 4.3, there exists $n_\epsilon$ sufficiently large such that
$$|\widetilde K(x^n,r_n,T_\epsilon) - \widetilde K(x,r,T_\epsilon)| < \epsilon/3, \qquad n \ge n_\epsilon.$$
Hence $|\widetilde K(x^n,r_n) - \widetilde K(x,r)| < \epsilon$, $n \ge n_\epsilon$, so that $\widetilde K(x^n,r_n) \to \widetilde K(x,r)$ as $n \to \infty$. Hence $(x,r) \mapsto \widetilde K(x,r)$ is continuous, as is $(x,r) \mapsto \widetilde R(x,r)$ by a similar argument. The result then follows since $(x,r) \mapsto \widetilde C(x,r)$ is the difference of continuous mappings.

Theorem 4.5. Suppose $x^n \to x$ in $X$, $r_n \to r$ in $\Gamma$ and $T_n \to \infty$ in $[0,\infty)$. Then $\widetilde C(x^n,r_n,T_n) \to \widetilde C(x,r)$, as $n \to \infty$. Similarly for $\widetilde K$ and $\widetilde R$.

Proof. Suppose once again that $\gamma < r_0 \le r_n$ for all $n$. Let $\epsilon > 0$. Once again, choose $T_\epsilon > 0$ sufficiently large such that
$$\widetilde K(y,r') - \widetilde K(y,r',T) < \epsilon/3, \qquad y \in X,\ r' \ge r_0,\ T \ge T_\epsilon.$$
Choose $m_\epsilon$ sufficiently large so that $T_n \ge T_\epsilon$ for $n \ge m_\epsilon$. Then, for such $n$, we have:
$$|\widetilde K(x^n,r_n,T_n) - \widetilde K(x,r)| \le |\widetilde K(x^n,r_n,T_\epsilon) - \widetilde K(x,r,T_\epsilon)| + |\widetilde K(x^n,r_n,T_n) - \widetilde K(x^n,r_n,T_\epsilon)| + |\widetilde K(x,r) - \widetilde K(x,r,T_\epsilon)|.$$

But
$$|\widetilde K(x^n,r_n,T_n) - \widetilde K(x^n,r_n,T_\epsilon)| \le \widetilde K(x^n,r_n) - \widetilde K(x^n,r_n,T_\epsilon) < \epsilon/3, \qquad n \ge m_\epsilon,$$
and
$$|\widetilde K(x,r) - \widetilde K(x,r,T_\epsilon)| = \widetilde K(x,r) - \widetilde K(x,r,T_\epsilon) < \epsilon/3.$$
Therefore,
$$|\widetilde K(x^n,r_n,T_n) - \widetilde K(x,r)| \le |\widetilde K(x^n,r_n,T_\epsilon) - \widetilde K(x,r,T_\epsilon)| + 2\epsilon/3, \qquad n \ge m_\epsilon.$$
By Theorem 4.3, there exists $k_\epsilon \ge m_\epsilon$ such that
$$|\widetilde K(x^n,r_n,T_\epsilon) - \widetilde K(x,r,T_\epsilon)| < \epsilon/3, \qquad n \ge k_\epsilon.$$
Hence $|\widetilde K(x^n,r_n,T_n) - \widetilde K(x,r)| < \epsilon$, $n \ge k_\epsilon$, and the proof is complete.

Remark. This theorem says that the mapping $(x,r,T) \mapsto \widetilde K(x,r,T)$ is continuous at any point of the form $(x,r,\infty)$ in $X \times \Gamma \times [0,\infty]$. This raises the question as to whether this mapping is also continuous at any point $(x,r,T)$, where $0 \le T < \infty$. We believe that this is false in general. However, we conjecture that it is true if and only if $K(x,\cdot)$ is left-continuous at $T$.

Let us summarize what we have accomplished thus far.

Theorem 4.6 (Solution Existence). Under Assumptions A through G, for $r > \gamma$ and $T \ge 0$, there exist optimal solutions for the $T$-horizon and infinite horizon problems.

If $X^*(r,T)$ and $X^*(r)$ respectively denote the sets of such optimal solutions, then these sets are non-empty and closed in $X$. Hence, they are also compact.

Proof. A continuous function on a compact set attains its minimum, and the set of points where this occurs is necessarily closed.

Corollary 4.7. We have:
$$V^*(r) = \min_{x \in X} \widetilde C(x,r) \quad \text{and} \quad V^*(r,T) = \min_{x \in X} \widetilde C(x,r,T), \qquad T \ge 0.$$

The next lemma is a key result for the analysis to follow.

Lemma 4.8. Fix $r > \gamma$. Let $\{T_n\}$ be a sequence of positive times such that $T_n \to \infty$ as $n \to \infty$. For each $n = 1,2,\dots$, let $x^n$ be an arbitrary element of $X^*(r,T_n)$. Then there exist a subsequence $\{T_k\}$ of $\{T_n\}$ and an element $x^*$ in $X^*(r)$ such that $x^k \to x^*$ as $k \to \infty$.

Proof. This is proved as in Bean and Smith [1].

Thus, an arbitrary sequence of finite horizon optimal solutions has a subsequence which converges to an infinite horizon optimal solution.

Theorem 4.9 (Value Convergence). Fix $r > \gamma$. Then
$$V^*(r) = \lim_{T \to \infty} V^*(r,T).$$

Proof. Suppose not. Then there exist a sequence $\{T_n\}$ and $\epsilon > 0$ such that $T_n \to \infty$ and $|V^*(r,T_n) - V^*(r)| \ge \epsilon$, $n = 1,2,\dots$. For each $n$, let $x^n$ be an element of $X^*(r,T_n)$, and let $x^*$ be an element of $X^*(r)$. Then
$$V^*(r,T_n) = \widetilde C(x^n,r,T_n), \quad n = 1,2,\dots, \quad \text{and} \quad V^*(r) = \widetilde C(x^*,r),$$

so that
$$|\widetilde C(x^n,r,T_n) - \widetilde C(x^*,r)| \ge \epsilon, \qquad n = 1,2,\dots.$$
By Lemma 4.8, there exist a subsequence $\{T_k\}$ of $\{T_n\}$ and $x'$ in $X^*(r)$ such that $x^k \to x'$ and
$$|\widetilde C(x^k,r,T_k) - \widetilde C(x^*,r)| \ge \epsilon, \qquad k = 1,2,\dots.$$
By Theorem 4.5,
$$\widetilde C(x^k,r,T_k) \to \widetilde C(x',r) = \widetilde C(x^*,r),$$
which is a contradiction.

Remark. Bes and Sethi [2] establish an analogous result in the stochastic setting for the case of discrete time and finite policy spaces.

5. Algorithms and Their Convergence

For the remainder of this paper, we fix $r > \gamma$. Consequently, it will be convenient to suppress the reference to $r$ in our notation. For $T \ge 0$, recall that $X^*(r,T) = X^*(T)$ is a non-empty set. Suppose we have a (finite horizon) algorithm $A$ which solves the $T$-horizon problem by providing a non-empty, closed subset $A(T)$ of $X^*(T)$ for all $T \ge 0$. More formally, $A$ is the generalized sequence $\{A(T) : T \ge 0\}$. If each $A(T)$ is a singleton, then we call $A$ a simple algorithm. In general, when $A(T)$ is a subset of $X^*(T)$, we call $A$ a set algorithm. If $A(T)$ is all of $X^*(T)$, we call $A$ the maximal algorithm. For example, if $X^*(T)$ denotes the set of optimal solutions of a linear program, then $A(T)$ could be a single extreme point optimal solution, the set of all extreme point optimal solutions, or the set of all optimal solutions. Note that if each finite horizon problem has a

unique optimal solution, then the only finite horizon set algorithm is the maximal one.

If $A_1, A_2$ are two algorithms, it will be convenient to write $A_1 \subseteq A_2$ whenever $A_1(T) \subseteq A_2(T)$ for all $T \ge 0$. Clearly, if $A_M$ denotes the maximal algorithm, then $A \subseteq A_M$ for all algorithms $A$. For any $A$, define $X^*_A$ to be the set of all accumulation points of the algorithm $A$, that is, the set of all $x$ in $X$ for which there exist a sequence $T_n \to \infty$ and $x^*(T_n) \in A(T_n)$ for all $n$, such that $x^*(T_n) \to x$ as $n \to \infty$. Clearly, if $A_1 \subseteq A_2$, then $X^*_{A_1} \subseteq X^*_{A_2}$. In particular, if we write $X^*(\infty)$ for $X^*_{A_M}$, then $X^*_A \subseteq X^*(\infty)$ for all algorithms $A$.

Theorem 5.1. For each finite horizon algorithm $A$, the set $X^*_A$ is a non-empty, closed, and hence compact, subset of $X^*$.

Proof. For each $T \ge 0$, let $x^*(T) \in A(T)$. Then $\{x^*(T) : T \ge 0\}$ is a generalized sequence in $X$. Since $X$ is compact, there exists a convergent subsequence which converges to some $x$ in $X$. Therefore, by definition, $x \in X^*_A$, so that $X^*_A$ is non-empty.

Let $x$ be an element of $X^*_A$. Then there exist $T_n \to \infty$ and $x^*(T_n) \in A(T_n)$, all $n$, such that $x^*(T_n) \to x$. If $x \notin X^*$, then there exists a neighborhood $W$ of $x$ disjoint from $X^*$ such that the sequence $x^*(T_n)$ is eventually in $W$. By Lemma 4.8, there exists a subsequence $x^*(T_k)$ which converges to some $x^*$ in $X^*$. But then $x^* = x$, since $X$ is Hausdorff. Contradiction. Consequently, $X^*_A \subseteq X^*$.

To show $X^*_A$ is closed, let $x$ in $X$ be a limit point of $X^*_A$. Let $\{x^n\}$ be a sequence in $X^*_A$ which converges to $x$. By definition of $X^*_A$, for each $n$, there exist a sequence $T_{n,k} \to \infty$, as $k \to \infty$, and a corresponding sequence $x^{n,k} \in A(T_{n,k})$ such that $x^{n,k} \to x^n$, as

$k \to \infty$. By a familiar diagonalization process, we may extract a subsequence which converges to $x$. Thus, $x$ is an element of $X^*_A$, which is therefore closed and hence compact.

Proposition 5.2. For any algorithm $A$,
$$X^*_A = \bigcup\,\{X^*_{A'} : A' \text{ a simple algorithm},\ A' \subseteq A\}.$$
In particular,
$$X^*(\infty) = \bigcup\,\{X^*_{A'} : A' \text{ a simple algorithm}\}.$$

Proof. If $x^* \in X^*_A$, then there exist $T_n \to \infty$ and $x^*(T_n) \in A(T_n)$ such that $x^*(T_n) \to x^*$. For each $T \ne T_n$, let $x^*(T) \in A(T)$ be arbitrary. Then the resulting algorithm $A' = \{\{x^*(T)\} : T \ge 0\}$ is simple and $x^* \in X^*_{A'}$. The reverse inclusion is clear.

Since $X^*(\infty)$ is the maximal subset of $X^*$ which can be obtained as limits of sequences of algorithmic solutions, we call it the set of algorithmically optimal solutions for the infinite horizon problem. This set was originally introduced by Hopp, Bean and Smith [7] in the particular context of discrete time and finite policy spaces. There, its elements were called periodic forecast horizon optimal solutions.

It is clear from the above discussion that the only elements of $X^*$ we can approximate by finite horizon optimal solutions are those in $X^*(\infty)$. Thus, it is important to know whether $X^*(\infty)$ is strictly contained in $X^*$ in general, that is, whether there exist solutions in $X^*$ that are not in $X^*(\infty)$. As the next example will show, the answer is emphatically yes, and the difference can be substantial.

Example 5.3. For each $j = 1,2,\dots$, let $Z_j$ be the real numbers with the Euclidean topology and $Y_j = \{0,1,\dots,9\}$ with the discrete topology. Assume $X = Y = \prod_j Y_j$, so that $X$ with the product topology is automatically compact. For each $x = (x_j)$ in $X$, define $p(x)$ to be the infinite decimal $.x_1x_2\dots$. Then $p$ is a mapping of $X$ onto the

compact interval $[0,1]$, which implies that $X$ is uncountable. Moreover, the mapping $p$ is continuous and $p^{-1}(0) = \{0\}$, where $0$ is defined to be the element of $X$ given by $0_j = 0$, all $j$.

Let $\gamma > 0$ be arbitrary and $r > \gamma$. For each $x$ in $X$, define $K(x,\cdot) = F(p(x),\cdot)$ and $R(x,\cdot) = 0$, where, for $0 \le s \le 1$, $F(s,\cdot)$ is defined by
$$F(s,t) = \begin{cases} 0, & 0 \le t < 1, \\ (1-rs)t + rs + s - 1, & 1 \le t. \end{cases}$$
Define $C(x,\cdot) = K(x,\cdot) - R(x,\cdot)$ as usual. Since $K(x,\cdot)$ is eventually linear, it is eventually bounded by any exponential, i.e., $\gamma$ can be chosen arbitrarily. Furthermore, the mapping $x \mapsto K(x,\cdot)$ is pointwise continuous since $p$ is continuous. For each $x$ in $X$, we may verify that
$$\widetilde C(x,T) \ge \widetilde C(0,T), \qquad T \ge 1,$$
with equality only for $x = 0$, and
$$\widetilde C(x) = \frac{1}{re^r}.$$
Thus, the finite horizon problem has the unique solution $0$ for $T \ge 1$, i.e., for any algorithm $A$, we have $A(T) = \{0\}$, $T \ge 1$. This implies that $X^*(\infty) = \{0\}$ also. On the other hand, since $\widetilde C(x)$ is constant in $x$, it follows that $X^* = X$, which is uncountable.

The same cost structure can be associated with strategies from continuous policy spaces to yield the same conclusion.

Example 5.4. For each $j = 1,2,\dots$, let $Y_j = [0,1]$ this time and $X = Y = \prod_j Y_j$ as before. Define $p: X \to [0,1]$ by
$$p(x) = \sum_{i=1}^{\infty} 2^{-i}x_i$$

for $x = (x_i)$, so that $p$ is continuous, onto, and $p^{-1}(0) = \{0\}$. If we set $K(x,\cdot)$ and $R(x,\cdot)$, for $x$ in $X$, as before, then we obtain the same conclusions.

Remark. The reader should note that Examples 5.3 and 5.4 do not satisfy Assumption F; the other assumptions are satisfied. However, this can easily be remedied as follows. In each example, let $X$ denote the diagonal in $Y$, i.e.,
$$X = \{x \in Y : x_1 = x_2 = \dots\}.$$
In each case, define $p: X \to [0,1]$ by $p(x) = x_1$, $x \in X$. Then $p$ is continuous and $p^{-1}(0) = \{0\}$ as before. (In Example 5.4, $p$ is also onto $[0,1]$.) In each example, for each $T > 0$, the $\sim_T$-equivalence classes are trivial, i.e., $x \sim_T y$ if and only if $x = y$. Hence, Assumption F is satisfied automatically. Moreover, in each case, $X^*(\infty) = \{0\}$ and $X^* = X$. However, in Example 5.3, $X$ has ten elements, while in Example 5.4, $X$ is uncountable.

We next discuss convergence of algorithms.

Theorem 5.5. Let $A$ be a simple algorithm with $A(T) = \{x^*_A(T)\}$, $T \ge 0$. Then the generalized sequence of points $\{x^*_A(T)\}$ in $X$ converges to a point in $X^*_A$ (in the product or metric topology) if and only if $X^*_A$ is a singleton.

Proof. Suppose $x^*_A(T) \to x^*$ as $T \to \infty$, so that $x^* \in X^*_A$. If $x \in X^*_A$ also, then there exist $T_n \to \infty$ and $x^*(T_n) \in A(T_n)$, all $n$, such that $x^*(T_n) \to x$ as $n \to \infty$. Since $X$ is Hausdorff and $x^*(T_n) \to x^*$ also, it follows that $x = x^*$, i.e., $X^*_A$ is a singleton.

Conversely, suppose $X^*_A = \{x^*\}$. If $x^*_A(T)$ does not converge to $x^*$, then there exists a sequence $T_n \to \infty$ such that $x^*_A(T_n)$ is bounded away from $x^*$. By Lemma 4.8, passing to a subsequence if necessary, there exists $x$ in $X^*$ for which $x^*_A(T_n) \to x$, i.e., $x \in X^*_A$. Thus, $x = x^*$ and we have a contradiction.

Recall that in trying to solve the infinite horizon problem by finite horizon approximations via some algorithm $A$, the only solutions we can hope to approximate are those in $X^*_A$. These in turn are limits of sequences of elements from the $A(T)$, $T \ge 0$. It is impractical to expect a planner to be able to select a particular sequence of times $T_n$ and to then be able to select $x^*_A(T_n)$ in the $A(T_n)$ which converge to some element of $X^*_A$. Realistically, we need to know that for any choice of horizons $T$ and any solutions $x^*_A(T)$, these converge to the same $x^*$ in $X^*_A$. In view of this requirement and the previous results, we define an algorithm $A$ to be convergent if the set $X^*_A$ is a singleton. In particular, if the maximal algorithm converges, i.e., $X^*(\infty)$ is a singleton $\{x^*\}$, then every algorithm converges to $x^*$. In particular, this is true for simple algorithms. This last comment is the conclusion of the Planning Horizon Theorem of Bean and Smith in [1], where it is required that $X^*$ be a singleton.

6. Solution Horizons and Their Existence

For the purposes of this section, it will be convenient to adopt the following notation. If $y \in Y$, $P \subseteq Y$, $\delta > 0$ and $k$ is a positive integer, define:
$$W_k(y,\delta) = \{z \in Y : \rho_j(z_j,y_j) < \delta,\ 1 \le j \le k\}$$

and
$$W_k(P,\delta) = \bigcup_{y \in P} W_k(y,\delta).$$
In particular, if $x \in X$ and $P \subseteq X$, define:
$$U_k(x,\delta) = W_k(x,\delta) \cap X \quad \text{and} \quad U_k(P,\delta) = W_k(P,\delta) \cap X.$$
It is not difficult to verify that the sets $\{W_k(y,\delta) : y \in Y,\ \delta > 0,\ k \text{ a positive integer}\}$ form a base for the product topology on $Y$, while the sets $\{U_k(x,\delta) : x \in X,\ \delta > 0,\ k \text{ a positive integer}\}$ form a base for the relative product topology on $X$. The following properties are also easily verified:
(i) $y \in W_k(x,\delta)$ if and only if $x \in W_k(y,\delta)$.
(ii) $x \in W_k(y,\delta)$ and $y \in W_k(z,\sigma)$ implies $x \in W_k(z,\delta+\sigma)$ (triangle inequality).
(iii) $\delta \le \sigma$ implies $W_k(y,\delta) \subseteq W_k(y,\sigma)$.
(iv) ${}_kx = {}_ky$ implies $W_k(y,\delta) = W_k(x,\delta)$.

Suppose a decision maker wishes to determine the first $k$ decisions $x^*_1,\dots,x^*_k$ of some infinite horizon optimal strategy $x^*$. (Typically, $k = 1$.) If he uses algorithm $A$ to solve the finite horizon problem for some $T > 0$, he gets the set $A(T)$. Ideally, for sufficiently large $T$ and arbitrary $x^*(T)$ in $A(T)$, he would like $x^*(T)_j$ to be equal to $x^*_j$ for all $j \le k$. However, since we are dealing with continuous policy spaces, we only require that the $x^*(T)_j$ be close to the $x^*_j$, $1 \le j \le k$. Accordingly, we make the following definition. Let $A$ be an arbitrary algorithm, $k$ a positive integer and $\delta > 0$. We say that there exists a $\delta$-solution horizon of order $k$ for $A$ if there exist $x^{k,\delta}$ in $X^*$

and $T_{k,\delta} > 0$ such that
$$A(S) \subseteq U_k(x^{k,\delta},\delta), \qquad S \ge T_{k,\delta},$$
i.e., for each $S \ge T_{k,\delta}$ and for each $x^*(S)$ in $A(S)$, we have
$$\rho_j(x^*(S)_j, x^{k,\delta}_j) < \delta, \qquad 1 \le j \le k.$$
We say there exist solution horizons of order $k$ for $A$ if, for each $\delta > 0$, there exists a $\delta$-solution horizon of order $k$ for $A$. If this is the case, then solution horizons exist for all algorithms $A'$ such that $A' \subseteq A$. The next theorem says that $x^{k,\delta}$ can be chosen to be independent of $\delta$.

Theorem 6.1. There exist solution horizons of order $k$ for $A$ if and only if there exists $x^*$ in $X^*_A$ such that for each $\delta > 0$ there exists $T_{k,\delta} > 0$ satisfying $A(S) \subseteq U_k(x^*,\delta)$ for all $S \ge T_{k,\delta}$.

Proof. The sufficiency is clear. For the necessity, assume that for each $\delta > 0$ there exist $x^{k,\delta}$ in $X^*$ and $T_{k,\delta} > 0$ such that $A(S) \subseteq U_k(x^{k,\delta},\delta)$ for all $S \ge T_{k,\delta}$. The generalized sequence $\{x^{k,\delta} : \delta > 0\}$ has a convergent subsequence $x^{k,\delta_n} \to y^k$ as $n \to \infty$, where $y^k \in X^*$ and $\delta_n \to 0$. Let $\epsilon > 0$. Choose $n_\epsilon$ sufficiently large such that $\delta_n < \epsilon/2$ and
$$\rho(x^{k,\delta_n}, y^k) < \epsilon/2^{k+1}, \qquad n \ge n_\epsilon.$$
Then $x^{k,\delta_n} \in U_k(y^k,\epsilon/2)$, for $n \ge n_\epsilon$ (Lemma 2.3). For $\delta = \delta_{n_\epsilon}$, let $T_{k,\epsilon} = T_{k,\delta}$ and $x^{k,\epsilon} = x^{k,\delta}$. Then $x^{k,\epsilon} \in U_k(y^k,\epsilon/2)$ and, for $S \ge T_{k,\epsilon}$, we have
$$A(S) \subseteq U_k(x^{k,\epsilon},\delta) \subseteq U_k(x^{k,\epsilon},\epsilon/2).$$
Therefore, by the triangle inequality, it follows that
$$A(S) \subseteq U_k(y^k,\epsilon), \qquad S \ge T_{k,\epsilon}.$$
It remains to show that $y^k$ may be chosen in $X^*_A$. Let $x^*(S)$ be an arbitrary element of $A(S)$, for each $S > 0$. Then $\{x^*(S)\}$ has a convergent subsequence

$x^*(S_n) \to x^*$, where $x^* \in X^*_A$ and $S_n \to \infty$ as $n \to \infty$. Let $\delta > 0$ be arbitrary. By the previous argument, there exists $T_{k,\delta/2}$ such that $S \ge T_{k,\delta/2}$ implies $A(S) \subseteq U_k(y^k,\delta/2)$. Let $n_{k,\delta}$ be sufficiently large such that $n \ge n_{k,\delta}$ implies
$$\rho(x^*(S_n), x^*) < \delta/2^{k+1}, \quad \text{i.e.,} \quad x^*(S_n) \in U_k(x^*,\delta/2),$$
and $S_n \ge T_{k,\delta/2}$, i.e., $x^*(S_n) \in U_k(y^k,\delta/2)$. Therefore, by the triangle inequality,
$$\rho_j(y^k_j, x^*_j) < \delta, \qquad 1 \le j \le k.$$
Since $\delta$ is arbitrary, it follows that $y^k_j = x^*_j$, $1 \le j \le k$, i.e., $U_k(y^k,\delta) = U_k(x^*,\delta)$. This completes the proof.

Corollary 6.2. Suppose each $Y_j$ is discrete, $1 \le j \le k$. Then there exist solution horizons of order $k$ for $A$ if and only if there exist $x^*$ in $X^*_A$ and $T_k > 0$ such that
$${}_kA(S) = \{{}_kx^*\}, \qquad S \ge T_k.$$

Proof. For the necessity, let $x^*$ in $X^*_A$ be as in the theorem. Choose $\delta = 1$ and let $T_k = T_{k,1}$. Then $A(S) \subseteq U_k(x^*,1)$, $S \ge T_k$. Consequently, by hypothesis, ${}_kx^*(S) = {}_kx^*$ for each $x^*(S)$ in $A(S)$.

Lemma 6.3. Let $A$ be an arbitrary algorithm, $k$ a positive integer and $\delta > 0$. Then there exists $T_{k,\delta} > 0$ sufficiently large such that
$$A(T) \subseteq U_k(X^*_A,\delta), \qquad T \ge T_{k,\delta}.$$

Proof. If not, there exists a sequence $T_n \to \infty$ such that $A(T_n) \not\subseteq U_k(X^*_A,\delta)$, all $n$. Then, for each $n$, there exists $x^*(T_n) \in A(T_n)$ such that $x^*(T_n) \notin U_k(X^*_A,\delta)$, i.e., $\{x^*(T_n)\}$ is a sequence in the compact complement $X \setminus U_k(X^*_A,\delta)$. Passing to a subsequence if

necessary, we may assume there exists $x$ in $X \setminus U_k(X^*_A,\delta)$ such that $x^*(T_n) \to x$. Thus, $x \notin U_k(X^*_A,\delta)$. But $x \in X^*_A$ by definition, and $X^*_A \subseteq U_k(X^*_A,\delta)$. This is a contradiction.

Theorem 6.4. Let $A$ be an arbitrary algorithm and $k$ a positive integer. Then solution horizons of order $k$ exist for $A$ if and only if ${}_k(X^*_A)$ is a singleton.

Proof. Suppose ${}_k(X^*_A)$ is a singleton $\{{}_kx^*\}$, for $x^* \in X^*_A$. Let $\delta > 0$. Then $U_k(X^*_A,\delta) = U_k(x^*,\delta)$. By Lemma 6.3, there exists $T_{k,\delta}$ sufficiently large such that
$$A(T) \subseteq U_k(x^*,\delta), \qquad T \ge T_{k,\delta},$$
so that solution horizons of order $k$ exist for $A$.

Conversely, suppose this is the case and ${}_k(X^*_A)$ is not a singleton. Then there exist $x^*$ and $y^*$ in $X^*_A$, $\delta > 0$ and $1 \le j \le k$ such that $\rho_j(x^*_j, y^*_j) \ge \delta$. Let $T_n, S_m \to \infty$, $x^*(T_n) \in A(T_n)$, all $n$, and $y^*(S_m) \in A(S_m)$, all $m$, be such that $x^*(T_n) \to x^*$ as $n \to \infty$ and $y^*(S_m) \to y^*$ as $m \to \infty$. By hypothesis, there exist $z^*$ in $X^*_A$ and $T_{k,\delta} > 0$ such that $A(T) \subseteq U_k(z^*,\delta/4)$, for $T \ge T_{k,\delta}$. Suppose $n_{k,\delta}$ is sufficiently large so that $m,n \ge n_{k,\delta}$ imply that $T_n, S_m \ge T_{k,\delta}$, i.e., $A(T_n), A(S_m) \subseteq U_k(z^*,\delta/4)$, and
$$\rho(x^*(T_n),x^*) < \delta/2^{k+2}, \qquad \rho(y^*(S_m),y^*) < \delta/2^{k+2},$$
i.e., $x^*(T_n) \in U_k(x^*,\delta/4)$ and $y^*(S_m) \in U_k(y^*,\delta/4)$ (Lemma 2.3). Therefore, by the triangle inequality, we have that $x^* \in U_k(y^*,\delta)$, i.e., $\rho_j(x^*_j, y^*_j) < \delta$, $1 \le j \le k$, which is a contradiction. Hence ${}_k(X^*_A)$ is a singleton.
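Operationally, checking a candidate horizon for order $k$ amounts to verifying that every optimum returned at a longer horizon lies in $U_k(x^*,\delta)$, i.e., agrees with a reference strategy in each of its first $k$ policies to within $\delta$. A minimal sketch of that membership test; the scalar per-stage metric and the sample solutions are illustrative.

```python
def in_U_k(candidate, reference, k, delta, stage_metric):
    """True if the first k policies of `candidate` are each within `delta` of
    `reference` in the per-stage metric, i.e. candidate lies in U_k(reference, delta)."""
    return all(stage_metric(candidate[j], reference[j]) < delta for j in range(k))

def order_k_agreement(solutions, reference, k, delta, stage_metric):
    """Check A(S) subset of U_k(reference, delta) for a finite collection of
    horizon-S optima `solutions`."""
    return all(in_U_k(x, reference, k, delta, stage_metric) for x in solutions)

# Two horizon-S optima agreeing with the reference strategy in their first two policies.
metric = lambda a, b: min(abs(a - b), 1.0)
ref = [0.3, 0.7, 0.1, 0.9]
sols = [[0.3, 0.7, 0.8, 0.2], [0.3, 0.7, 0.1, 0.1]]
print(order_k_agreement(sols, ref, k=2, delta=1e-6, stage_metric=metric))  # True
```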

Corollary 6.5. There exist solution horizons of order $k$ for all algorithms $A' \subseteq A$ if and only if ${}_k(X^*_A)$ is a singleton. In particular, there exist solution horizons of order $k$ for all algorithms if and only if ${}_k(X^*(\infty))$ is a singleton.

Theorem 6.6. Let $A$ be an arbitrary algorithm. Then there exist solution horizons of all orders $k$ for $A$ if and only if $X^*_A$ is a singleton, i.e., $A$ is convergent.

Proof. Observe that $X^*_A$ is a singleton if and only if ${}_k(X^*_A)$ is a singleton for each positive integer $k$.

Corollary 6.7. There exist solution horizons of all orders $k$ for all algorithms $A' \subseteq A$ if and only if $X^*_A$ is a singleton. In particular, there exist solution horizons of all orders $k$ for all algorithms if and only if $X^*(\infty)$ is a singleton.

7. A Stopping Criterion

Let $A$ be any algorithm. Fix $k$ a positive integer and $\delta > 0$. Our objective in this section is to find $T_{k,\delta} > 0$ which is a $\delta$-solution horizon of order $k$ for $A$. (Recall that such a horizon exists for all $\delta > 0$ if and only if ${}_k(X^*_A)$ is a singleton.) More specifically, we wish to determine an algorithm which finds a $\delta$-solution horizon of order $k$ for $A$ in terms of finite horizon information. Such an algorithm was obtained by Bes and Sethi in [2] for the case of discrete time and discrete policy spaces. In this section, we generalize their results by allowing continuous time and continuous policy spaces.

Before proceeding, we require some additional notation. If $P \subseteq X$, then $X \setminus P$ will denote the complement of $P$ in $X$. In particular, if $P = U_k(x,\delta)$, for $x$ in $X$, we write
$$X_k(x,\delta) = X \setminus U_k(x,\delta).$$
More generally,
$$X_k(P,\delta) = X \setminus U_k(P,\delta)$$

$$= \bigcap_{x \in P} X_k(x,\delta).$$
If $T > 0$, then let
$$V(P,T) = \begin{cases} \inf_{x \in P} \widetilde C(x,T), & P \ne \emptyset, \\ \infty, & P = \emptyset. \end{cases}$$
Clearly, $V(P,T) \ge V^*(T) = V(X,T)$. In particular, if $P = X_k(A(T),\delta)$, then we write
$$V_A(k,\delta,T) = V(X_k(A(T),\delta),T).$$
Recall that
$$\alpha(t) = \frac{rB}{r-\gamma}\,e^{(\gamma-r)t}, \qquad r > \gamma,\ t \ge 0,$$
and that we are suppressing the notational dependence on $r$. The next two results constitute our extension of Theorem 5.2.1 of Bes and Sethi [2] to the continuous setting.

Theorem 7.1. Let $A$, $k$, $\delta$ be as above. If $T > 0$ satisfies
$$V_A(k,\delta,T) - V^*(T) > 2\alpha(T-1),$$
then for all $S \ge T$, we have $X^*(S) \subseteq U_k(A(T),\delta)$.

Proof. If not, there exist $S \ge T$ and $x^*(S)$ in $X^*(S)$ such that $x^*(S) \notin U_k(A(T),\delta)$, i.e., $x^*(S) \in X_k(A(T),\delta)$. Consequently,
$$\widetilde C(x^*(S),T) \ge V_A(k,\delta,T),$$
which implies
$$\widetilde C(x^*(S),T) - V^*(T) \ge V_A(k,\delta,T) - V^*(T) > 2\alpha(T-1) \ge 2\alpha(T),$$
by hypothesis, since $\alpha$ is non-increasing in $t$.

On the other hand,
$$V^*(S) = \widetilde C(x^*(S),S) \ge \widetilde C(x^*(S),T) - \alpha(T) \quad \text{(Thm. 3.2)}$$
and
$$V^*(S) \le V^*(T) + \alpha(T) \quad \text{(Prop. 3.3)}.$$
Subtracting, we see that
$$0 \ge \widetilde C(x^*(S),T) - V^*(T) - 2\alpha(T),$$
which is a contradiction.

The next theorem is our stopping criterion.

Theorem 7.2. Let $A$, $k$, $\delta$ be as above. Suppose $T > 0$ satisfies:
(i) ${}_kA(T)$ is a singleton.
(ii) $V_A(k,\delta/2,T) - V^*(T) > 2\alpha(T-1)$.
Then $T$ is a $\delta$-solution horizon of order $k$ for $A$.

Proof. By the previous theorem,
$$X^*(S) \subseteq U_k(A(T),\delta/2), \qquad S \ge T.$$
By (i), ${}_kA(T) = \{{}_kx^*(T)\}$, for any $x^*(T)$ in $A(T)$. Thus,
$$U_k(A(T),\delta/2) = U_k(x^*(T),\delta/2)$$
and
$$A(S) \subseteq X^*(S) \subseteq U_k(x^*(T),\delta/2), \qquad S \ge T.$$
Now let $x^*$ be any element of $X^*_A$, $S_n \to \infty$, $S_n \ge T$, and $x^*(S_n) \in A(S_n)$, all $n$, such that $x^*(S_n) \to x^*$. Let $\epsilon > 0$. Then there exists $n_\epsilon$ sufficiently large such that

$$\rho(x^*(S_n), x^*) < \epsilon/2^k, \quad \text{i.e.,} \quad x^*(S_n) \in U_k(x^*,\epsilon), \qquad n \ge n_\epsilon.$$
But also,
$$x^*(S_n) \in U_k(x^*(T),\delta/2), \qquad n = 1,2,\dots.$$
By the triangle inequality, $x^*(T) \in U_k(x^*,\delta/2+\epsilon)$, where $\epsilon$ is arbitrary. Hence,
$$\rho_j(x^*(T)_j, x^*_j) \le \delta/2, \qquad 1 \le j \le k.$$
Applying the triangle inequality again, we obtain
$$A(S) \subseteq X^*(S) \subseteq U_k(x^*,\delta), \qquad S \ge T,$$
which completes the proof.

Remark. The set ${}_kA(T)$ will be a singleton under each of the following conditions: (i) $A$ is a simple algorithm. (ii) $A$ yields a unique solution for the $T$-horizon problem. (iii) The $T$-horizon problem has a unique solution.

Proposition 7.3. If $Y_j$ is discrete, $1 \le j \le k$, and ${}_k(X^*_A)$ is a singleton, then ${}_kA(T)$ is a singleton for sufficiently large $T$.

Proof. Set $\delta = 1$ and apply Lemma 6.3.

The next theorem gives a condition which is sufficient to guarantee that (ii) of Theorem 7.2 will be satisfied eventually. It is our extension of Theorem 5.3.2 of Bes and Sethi [2] to the continuous setting. Let $b$ be any function satisfying $b(t) > 0$, $t \ge 1$, and $b(t) \to 0$ as $t \to \infty$. For example, $b(t) = 2\alpha(t-1)$ has these properties.

Theorem 7.4. Let $A$, $k$, $\delta$ be as above. If ${}_kX^*$ is a singleton, then there exists a positive integer $N$ satisfying
$$V_A(k,\delta/2,N) - V^*(N) \ge b(N).$$

Proof. By hypothesis, ${}_k(X^*_A)$ is a singleton. Thus, by Theorem 6.1 and Lemma 6.3, there exist $x^*$ in $X^*_A$ and $T_{k,\delta}$ such that ${}_kX^* = \{{}_kx^*\}$ and
$$A(T) \subseteq U_k(x^*,\delta/4), \qquad T \ge T_{k,\delta}.$$
Hence,
$$x^* \in \bigcap\,\{U_k(x^*(T),\delta/4) : x^*(T) \in A(T)\}, \quad \text{i.e.,} \quad x^* \in U_k(A(T),\delta/4), \qquad T \ge T_{k,\delta}.$$
Now suppose that
$$V_A(k,\delta/2,T) - V^*(T) < b(T)$$
for all $T \ge T_{k,\delta}$. In particular, this is true for all integers $N \ge T_{k,\delta}$. Hence, for each such $N$, there exists $x_N$ in $X_k(A(N),\delta/2)$ such that
$$0 \le \widetilde C(x_N,N) - V_A(k,\delta/2,N) < b(N) - [V_A(k,\delta/2,N) - V^*(N)],$$
so that
$$0 \le \widetilde C(x_N,N) - V^*(N) < b(N).$$
Since $X$ is compact, there exists a convergent subsequence $x_{N_n} = x^n \to x$, where $x \in X$ and $N_n \to \infty$ as $n \to \infty$. Thus, there exists $n_{k,\delta}$ sufficiently large such that $n \ge n_{k,\delta}$ implies $N_n \ge T_{k,\delta}$, so that
$$0 \le \widetilde C(x^n,N_n) - V^*(N_n) < b(N_n),$$
by the above. But $b(N_n) \to 0$,

$$\widetilde C(x^n,N_n) \to \widetilde C(x) \quad \text{(Thm. 4.5)} \qquad \text{and} \qquad V^*(N_n) \to V^* \quad \text{(Thm. 4.9)},$$
as $n \to \infty$. Consequently, taking limits yields $\widetilde C(x) = V^*$, which implies $x \in X^*$, i.e., ${}_kx = {}_kx^*$.

Now let $0 < \epsilon < \delta/4$. Since $x^n \to x$, there exists $n_{k,\epsilon} \ge n_{k,\delta}$ such that $n \ge n_{k,\epsilon}$ implies $\rho(x^n,x) < \epsilon/2^k$, i.e., $x^n \in U_k(x,\epsilon)$, where
$$U_k(x,\epsilon) \subseteq U_k(x,\delta/4) = U_k(x^*,\delta/4).$$
Since $x^* \in U_k(A(N_n),\delta/4)$, by the triangle inequality, $x^n \in U_k(A(N_n),\delta/2)$, for all $n \ge n_{k,\epsilon}$. But $x^n \in X_k(A(N_n),\delta/2)$, all $n$. Contradiction.

Remark. By repeating the previous argument for $M \ge N$, we see that there exists an infinite sequence of positive integers which satisfy the condition of Theorem 7.4.

We will say that the algorithm $A$ is eventually $k$-simple if there exists $T_k > 0$ sufficiently large such that ${}_kA(T)$ is a singleton for $T \ge T_k$. The following is our extension of Theorem 5.4.1 of Bes and Sethi [2] to the continuous setting.

Theorem 7.5. Let $A$, $k$ be as above. Suppose: (i) $A$ is eventually $k$-simple. (ii) ${}_kX^*$ is a singleton. Then, for each $\delta > 0$, there exists $T > 0$ such that $T$ is a $\delta$-solution horizon of order $k$ for $A$.

Proof. Apply Theorems 7.2 and 7.4 with $b(t) = 2\alpha(t-1)$.

Remark. If $Y_j$ is discrete, $1 \le j \le k$, and ${}_k(X^*)$ is a singleton, then ${}_kX^*(T)$ is eventually a singleton (Prop. 7.3). Hence, in this case, every algorithm is eventually $k$-simple.

Theorem 7.2 yields the following algorithm for determining a $\delta$-solution horizon of order $k$ for an eventually $k$-simple algorithm $A$. (Compare with [2].)

Algorithm
1. Set $T = 1$.
2. Solve the $T$-horizon problem to get $V^*(T)$ and $A(T)$.
3. Determine the set $X_k(A(T),\delta/2)$ and evaluate $V_A(k,\delta/2,T)$.
4. If the cardinality of ${}_kA(T)$ is 1 and $V_A(k,\delta/2,T) - V^*(T) > 2\alpha(T-1)$, go to 6.
5. Otherwise, replace $T$ by $T+1$ and go to 2.
6. Stop. $T$ is a $\delta$-solution horizon of order $k$ for $A$.

Note that this algorithm will eventually stop if we know that ${}_kX^*$ is a singleton.
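A minimal sketch of the stopping rule above, written as a loop over integer horizons. The finite horizon solver, the restricted minimization returning $V_A(k,\delta/2,T)$, and the extraction of the first $k$ policies are abstract callables that the modeller must supply; they, and the default parameters, are assumptions made for illustration rather than part of the paper's development.

```python
import math

def alpha(r, t, B, gamma):
    """alpha(r, t) = r*B/(r - gamma) * exp((gamma - r)*t), the tail bound of Theorem 3.2."""
    return r * B / (r - gamma) * math.exp((gamma - r) * t)

def solution_horizon(solve_T, restricted_value, leading_policies, k, delta,
                     r, B, gamma, max_T=1000):
    """Search for a delta-solution horizon of order k (sketch of the section 7 algorithm).

    solve_T(T)                     -> (V_star, A_T): optimal value and a finite set of
                                      T-horizon optima.
    restricted_value(T, A_T, k, d) -> V_A(k, d, T): optimal value over X_k(A(T), d), the
                                      feasible strategies lying outside U_k(A(T), d).
    leading_policies(x)            -> hashable tuple of the first k policies of x.
    """
    for T in range(1, max_T + 1):
        V_star, A_T = solve_T(T)
        V_A = restricted_value(T, A_T, k, delta / 2)
        k_singleton = len({leading_policies(x) for x in A_T}) == 1
        if k_singleton and V_A - V_star > 2 * alpha(r, T - 1, B, gamma):
            return T  # T is a delta-solution horizon of order k for A (Theorem 7.2)
    return None  # no horizon detected within max_T
```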

8. Application to Infinite Programs

In this section we show that our general infinite horizon optimization model includes a very general class of infinite programs (continuous and integer) as a special case. For each $j = 1,2,\dots$, let $Z_j$ be $n_j$-dimensional Euclidean space $R^{n_j}$, $Y_j$ a compact subset of $R^{n_j}$, and $c_j$ a real-valued function defined on $Y_j$. Let $\rho_j$ be the usual Euclidean metric on $Z_j$, bounded by 1. As before, let $Y = \prod_j Y_j$ and, for $x, y$ in $Y$, define $\rho(x,y)$ as in section 2. In addition, for each $i = 1,2,\dots$, let $b_i$ be a real number and $a_{ij}$ a real-valued function defined on $Y_j$. Consider the discounted, doubly-infinite, non-linear program given by:
$$\min \sum_{j=1}^{\infty} c_j(y_j)\alpha^j$$
subject to
$$\sum_{j=1}^{\infty} a_{ij}(y_j) \ge b_i, \qquad i = 1,2,\dots,$$
$$y_j \in Y_j, \qquad j = 1,2,\dots,$$
where $\alpha < 1$ is the discount factor of the form $e^{-r}$ corresponding to the interest rate $r > 0$. If the $y_j$ are to have integer values, then the $Y_j$ are finite discrete spaces.

Our first objective is to find reasonably general conditions on the cost and constraint functions which will guarantee convergence of all the infinite series.

Assumption H. For each $i$, suppose there exist a positive sequence $\{\sigma_{ij} : 1 \le j < \infty\}$ and an index $J_i$ sufficiently large such that
$$\sum_{j \ge J_i} \sigma_{ij} < \infty \qquad \text{and} \qquad |a_{ij}(y_j)| \le \sigma_{ij}$$
for $y_j \in Y_j$ and $j \ge J_i$.

For example, this is the case if the $a_{ij}$ are eventually 0 for each $i$, i.e., each constraint contains finitely many terms. This assumption implies that, for each $i$, we have a real-valued function $g_i$ defined on $Y$ by

$$g_i(y) = \sum_{j=1}^{\infty} a_{ij}(y_j),$$
where
$$\sum_{j=1}^{\infty} |a_{ij}(y_j)| \le \sum_{j=1}^{J_i-1} |a_{ij}(y_j)| + \sum_{j \ge J_i} \sigma_{ij} < \infty,$$
so that the series is absolutely convergent for all $y$ in $Y$.

Assumption I. For each $i$ and $j$, the function $a_{ij}$ is continuous on $Y_j$.

For example, this is the case if $a_{ij}$ is linear (matrix multiplication) or if $Y_j$ is discrete.

Proposition 8.1. Assumptions H and I imply that the function $g_i$ is continuous on $Y$, for all $i$.

Proof. Fix $i$ and let $x, y$ be elements of $Y$. Then
$$|g_i(x) - g_i(y)| = \left|\sum_{j=1}^{\infty} a_{ij}(x_j) - \sum_{j=1}^{\infty} a_{ij}(y_j)\right| \le \sum_{j=1}^{\infty} |a_{ij}(x_j) - a_{ij}(y_j)|.$$
Let $\epsilon > 0$. By Assumption H, there exists $j_\epsilon \ge J_i$ such that
$$\sum_{j > j_\epsilon} \sigma_{ij} < \epsilon/4.$$
Then
$$|g_i(x) - g_i(y)| \le \sum_{j=1}^{j_\epsilon} |a_{ij}(x_j) - a_{ij}(y_j)| + \sum_{j > j_\epsilon} |a_{ij}(x_j) - a_{ij}(y_j)|,$$
where

$$\sum_{j > j_\epsilon} |a_{ij}(x_j) - a_{ij}(y_j)| \le \sum_{j > j_\epsilon} \bigl(|a_{ij}(x_j)| + |a_{ij}(y_j)|\bigr) \le 2\sum_{j > j_\epsilon} \sigma_{ij} < \epsilon/2.$$
Since $a_{ij}$ is continuous, there exists $\delta_j > 0$ such that $\rho_j(x_j,y_j) < \delta_j$ implies
$$|a_{ij}(x_j) - a_{ij}(y_j)| < \epsilon/(2j_\epsilon).$$
Let $\delta = \min\{\delta_j : 1 \le j \le j_\epsilon\}$. Then, by Lemma 2.3, $\rho(x,y) < \delta/2^{j_\epsilon}$ implies $\rho_j(x_j,y_j) < \delta \le \delta_j$, $1 \le j \le j_\epsilon$, so that
$$|a_{ij}(x_j) - a_{ij}(y_j)| < \epsilon/(2j_\epsilon), \qquad 1 \le j \le j_\epsilon.$$
Hence,
$$\sum_{j=1}^{j_\epsilon} |a_{ij}(x_j) - a_{ij}(y_j)| < \epsilon/2.$$
Consequently, $|g_i(x) - g_i(y)| < \epsilon$, which completes the proof.

Now observe that the set
$$X_i = \left\{y \in Y : \sum_{j=1}^{\infty} a_{ij}(y_j) \ge b_i\right\}$$
is the inverse image of the closed interval $[b_i,\infty)$ under the continuous function $g_i$. Hence, each $X_i$ is closed. Furthermore, the space $X$ of feasible solutions is equal to $\bigcap_i X_i$, which is also closed. Therefore, $X$ is a closed, compact subset of $Y$, as required. We will assume $X$ is not empty. A necessary and sufficient condition for this to be

the case is that the $X_i$ have the finite intersection property, i.e., each finite subcollection has non-empty intersection [5, p. 127].

For each $x$ in $X$, we must define $K(x,\cdot)$, $R(x,\cdot)$ and $C(x,\cdot)$, where the first two must be eventually exponentially bounded.

Assumption J. Suppose there exist $B > 0$, $\gamma > 0$ and $J_0 \ge 1$ such that
$$\sum_{j=1}^{n} |c_j(x_j)| \le Be^{\gamma n}, \qquad x \in X,\ n \ge J_0.$$

For each $x$ in $X$, define:
$$K(x,t) = \begin{cases} 0, & 0 \le t < 1, \\ c_1^+(x_1) + \dots + c_{[t]}^+(x_{[t]}), & 1 \le t, \end{cases}$$
$$R(x,t) = \begin{cases} 0, & 0 \le t < 1, \\ c_1^-(x_1) + \dots + c_{[t]}^-(x_{[t]}), & 1 \le t, \end{cases}$$
and $C(x,t) = K(x,t) - R(x,t)$, $t \ge 0$, where $[\cdot]$ is the greatest integer function and $f^+$, $f^-$ denote the positive and negative parts of the function $f$, respectively. The functions $K(x,\cdot)$ and $R(x,\cdot)$ are non-negative and non-decreasing. Furthermore, for $t \ge J_0$, we have:
$$K(x,t) \le Be^{\gamma t}, \qquad x \in X.$$
The same is true for $R(x,t)$.

Assumption K. For each $j$, the function $c_j$ is continuous on $Y_j$. Hence, $c_j^+$ and $c_j^-$ are also continuous.

For example, this is true if the $c_j$ are linear or if the $Y_j$ are discrete.
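For this application the cumulative functions and the truncated discounted objective can be written down directly. A minimal sketch with illustrative per-stage cost functions on $Y_j = [0,1]$; stage indices are 1-based in the text and 0-based in the code.

```python
import math

def K(x, t, c):
    """Cumulative cost: sum of positive parts c_j^+(x_j) for stages j <= floor(t); 0 for t < 1."""
    return sum(max(c[j](x[j]), 0.0) for j in range(int(t))) if t >= 1 else 0.0

def R(x, t, c):
    """Cumulative revenue: sum of negative parts c_j^-(x_j) for stages j <= floor(t); 0 for t < 1."""
    return sum(max(-c[j](x[j]), 0.0) for j in range(int(t))) if t >= 1 else 0.0

def discounted_objective(x, c, r, N):
    """N-horizon objective sum_{j=1}^{N} c_j(x_j) * alpha**j with alpha = exp(-r)."""
    a = math.exp(-r)
    return sum(c[j](x[j]) * a ** (j + 1) for j in range(N))  # j + 1: stages are 1-based

# Illustrative data: three stages with simple cost functions on [0, 1].
c = [lambda y: (y - 0.2) ** 2, lambda y: y - 0.5, lambda y: 2 * y]
x = [0.2, 0.0, 0.0]
print(K(x, 2.5, c), R(x, 2.5, c))              # 0.0 0.5
print(discounted_objective(x, c, r=0.1, N=3))  # -0.5 * exp(-0.2)
```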

Proposition 8.2. The mappings $x \mapsto K(x,\cdot)$ and $x \mapsto R(x,\cdot)$ are continuous from $X$ into the space of functions on $[0,\infty)$ equipped with the topology of pointwise convergence. The same is true for $x \mapsto C(x,\cdot)$.

Proof. Suppose $x^n \to x$ in $X$, so that $x^n_j \to x_j$, all $j$. Let $t \ge 0$. If $t < 1$, then clearly $K(x^n,t) \to K(x,t)$. Thus, suppose $t \ge 1$. By Assumption K,
$$c_j^+(x^n_j) \to c_j^+(x_j), \quad \text{as } n \to \infty,$$
for all $j = 1,\dots,[t]$. Hence,
$$\sum_{j=1}^{[t]} c_j^+(x^n_j) \to \sum_{j=1}^{[t]} c_j^+(x_j), \quad \text{i.e.,} \quad K(x^n,t) \to K(x,t), \quad n \to \infty.$$
The remainder of the proof is obvious.

We have shown that Assumptions A through G are satisfied by this model. Hence, all the results of sections 2 through 7 are valid. In particular, by Proposition 3.1, the quantities $\widetilde K(x)$, $\widetilde R(x)$ and $\widetilde C(x)$ exist for all $x$ in $X$, as long as $r > \gamma$. Let us see what these represent in this application. For $T \ge 0$, we have:
$$\widetilde K(x,T) = \int_0^T e^{-rt}\,dK(x,t) = \sum_{j=1}^{[T]} e^{-rj}c_j^+(x_j) = \sum_{j=1}^{[T]} c_j^+(x_j)\alpha^j, \qquad x \in X.$$
Similarly,

$$\widetilde R(x,T) = \sum_{j=1}^{[T]} c_j^-(x_j)\alpha^j,$$
so that
$$\widetilde C(x,T) = \sum_{j=1}^{[T]} c_j(x_j)\alpha^j, \qquad x \in X.$$
Observe that $\widetilde C(x,T) = \widetilde C(x,N)$ for $N \le T < N+1$, $N$ a positive integer. Similarly for $\widetilde K$ and $\widetilde R$. Moreover, note that $\widetilde C(x,N)$ is the objective function in the $N$-horizon infinite program given by:
$$\min \sum_{j=1}^{N} c_j(x_j)\alpha^j$$
subject to
$$\sum_{j=1}^{\infty} a_{ij}(x_j) \ge b_i, \qquad i = 1,2,\dots,$$
$$x_j \in Y_j, \qquad j = 1,2,\dots.$$
Recall (section 2) that ${}_kX = \{{}_kx : x \in X\}$. In this application,
$$T_k({}_kx) = k, \qquad k = 1,2,\dots,$$
i.e., the "decision epoch" of each "policy" $x_k$ is its index. Thus, for any positive integer $N$,
$$n_N(x) = N, \qquad x \in X,$$
and the equivalence relation $\sim_N$ is given by: $x \sim_N y$ if and only if $x_j = y_j$, $1 \le j \le N$.

Therefore, each $\sim_N$-equivalence class consists of those elements of $X$ whose first $N$ components are the same, i.e., $X_N = X/\!\sim_N$ may be identified with those distinct $N$-tuples in $Y_1 \times \dots \times Y_N$ which are extendable to elements of $X$. The $N$-horizon problem is then:
$$\min \sum_{j=1}^{N} c_j(x_j)\alpha^j, \qquad {}_Nx \in {}_NX.$$

Assumption L. Suppose that, for each positive integer $N$, the set ${}_NX$ is the solution set of the following system of inequalities:
$$\sum_{j=1}^{N} a_{ij}(x_j) \ge b_i, \qquad i = 1,\dots,N,$$
$$x_j \in Y_j, \qquad j = 1,\dots,N.$$
This assumption is satisfied whenever it is possible to feasibly continue any solution which is feasible for a finite horizon problem. Under this assumption, the $N$-horizon problem becomes:
$$\min \sum_{j=1}^{N} c_j(x_j)\alpha^j$$
subject to
$$\sum_{j=1}^{N} a_{ij}(x_j) \ge b_i, \qquad i = 1,\dots,N,$$
$$x_j \in Y_j, \qquad j = 1,\dots,N.$$
If we replace $N$ by $[T]$, then we get a family of such programs which are equal to the previous one for all $N \le T < N+1$. For $T = \infty$, we have:

$$\widetilde K(x) = \sum_{j=1}^{\infty} c_j^+(x_j)\alpha^j, \qquad \widetilde R(x) = \sum_{j=1}^{\infty} c_j^-(x_j)\alpha^j,$$
and
$$\widetilde C(x) = \sum_{j=1}^{\infty} c_j(x_j)\alpha^j, \qquad x \in X.$$
Under Assumptions H through K, the infinite program with which we began this section is an example of our general infinite horizon optimization problem. In particular, the function $x \mapsto \widetilde C(x)$ is continuous on the non-empty compact set $X$ of feasible solutions. By Theorem 4.9, it follows that
$$V^* = \min_{x \in X} \sum_{j=1}^{\infty} c_j(x_j)\alpha^j = \lim_{N \to \infty}\, \min_{x \in X} \sum_{j=1}^{N} c_j(x_j)\alpha^j.$$
The set $X^*$ of optimal solutions to the general problem is clearly the set of optimal solutions to the infinite program. Moreover, for each positive integer $N$, $X^*(N)$ is the set of optimal solutions to the $N$-horizon program. Thus, $X^*$ and $X^*(N)$ are non-empty and compact, for all $N$. Since $X^*(T) = X^*(N)$ for all $N \le T < N+1$, in this application it makes sense to think of an algorithm $A$ as being a mapping of the form $N \mapsto A(N)$, where $A(N) \subseteq X^*(N)$, $N = 1,2,\dots$, i.e., a sequence of non-empty, closed sets. For each $A$, $X^*_A$ is the non-empty, compact subset of $X^*$ consisting of those $x^*$ which are limits of sequences of elements from subsequences of the $A(N)$. If $A$ is the maximal algorithm, then $X^*_A = X^*(\infty)$. It seems likely that

$X^*(\infty)$ is a proper subset of $X^*$ for the case of infinite programs. Algorithm $A$ is convergent if $X^*_A$ is a singleton. If $k$ is a positive integer and $\delta > 0$, then a $\delta$-solution horizon of order $k$ for $A$ is a positive integer $N_{k,\delta}$ satisfying
$$\rho_j(x^*(M)_j, x^*_j) < \delta, \qquad 1 \le j \le k,\ x^*(M) \in A(M),$$
for all $M \ge N_{k,\delta}$, where $x^*$ is some element of $X^*_A$. In this application, not only do we want $\delta$ to be arbitrary, but we also want $k$ to be arbitrary, in order to get close agreement in many of the variables $x_j$. If the infinite program is an integer program, then the elements of $A(M)$ have to eventually agree with $x^*$ in the first $k$ variables, for $\delta \le 1$. If $A$ is convergent, then for sufficiently large $N$ we can get close agreement between the elements of $A(N)$ and some $x^*$ in $X^*_A$ for as many decision variables as we like. Finally, the algorithm of section 7 translates directly into an algorithm for finding solution horizons for our infinite program.

REFERENCES

[1] Bean, J.C. and Smith, R.L. (1984). Conditions for the Existence of Planning Horizons. Math. of Oper. Res. 9, 391-401.

[2] Bes, C. and Sethi, S. (1986). Concepts of Forecast and Decision Horizons: Applications to Dynamic Stochastic Optimization Problems. To appear in Math. of Oper. Res.

[3] Bhaskaran and Sethi (1985). Conditions for the Existence of Decision Horizons for Discounted Problems in a Stochastic Environment: A Note. Oper. Res. Letters 4, 61-64.

[4] Grinold, R. (1983). Convex Infinite Horizon Programs. Math. Progr. 25, 64-82.

[5] Gaal, S.A. (1964). Point Set Topology. Academic Press, New York.

[6] Goldberg, R.R. (1964). Methods of Real Analysis. Blaisdell Pub. Co., Waltham.

[7] Hopp, W.J., Bean, J.C. and Smith, R.L. (1985). Non-Homogeneous Markov Decision Processes. Working Paper, University of Michigan; to appear in Oper. Res.

[8] Kolmogorov, A.N. and Fomin, S.V. (1970). Introductory Real Analysis. Prentice-Hall, Englewood Cliffs.

[9] Widder, D.V. (1946). The Laplace Transform. Oxford University Press, Oxford.