A FINITE ALGORITHM FOR SOLVING INFINITE DIMENSIONAL OPTIMIZATION PROBLEMS

IRWIN E. SCHOCHETMAN AND ROBERT L. SMITH

Department of Mathematical Sciences, Oakland University, Rochester, Michigan 48309
and
Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109

Technical Report 97-12
September 29, 1997

ABSTRACT. We consider the general optimization problem (P) of selecting a continuous function x from a σ-compact Hausdorff space T to a metric space A, out of a feasible region X of such functions, so as to minimize a functional c on X. We require that X consist of a closed equicontinuous family of functions lying in the product (over T) of compact subsets Y_t of A. (An important special case is the optimal control problem of finding a continuous time control function x that minimizes its associated discounted cost c(x) over the infinite horizon.) Relative to the uniform-on-compacta topology on the function space C(T, A) of continuous functions from T to A, the feasible region X is compact. Thus optimal solutions x* to (P) exist under the assumption that c is continuous. We wish to approximate such an x* by optimal solutions to a net {P_i}_{i ∈ I} of approximating problems of the form min_{x ∈ X_i} c_i(x) for each i ∈ I, where (1) the net of sets {X_i}_I converges to X in the sense of Kuratowski and (2) the net {c_i}_I of functions converges to c uniformly on X. We show that for large i, any optimal solution x_i* to the approximating problem (P_i) arbitrarily well approximates some optimal solution x* to (P). It follows that if (P) is well-posed, i.e., lim sup_I X_i* is a singleton {x*}, then any net {x_i*}_I of (P_i)-optimal solutions converges in C(T, A) to x*. For this case, we construct a finite algorithm with the following property: given any prespecified error δ and any compact subset Q of T, our algorithm computes an i in I and an associated x_i* in X_i* which is within δ of x* on Q. We illustrate the theory and algorithm with a problem in continuous time production planning over an infinite horizon.

1991 Mathematics Subject Classification. Primary 90C20. Secondary 49A99.
Key words and phrases. Continuous time optimization, optimal control, infinite horizon optimization, production planning.
This work was supported in part by the National Science Foundation under Grants DDM-9214894 and DMI-9713723. The first author was partially supported by an Oakland University Research Fellowship.

§1 Introduction

Consider the abstract optimization problem

(P)    min_{x ∈ X} c(x)

where the feasible region X is a non-empty compact subset of a function space Y and the objective function c is a real-valued continuous function over Y. The Weierstrass Theorem assures us of the existence of an optimal solution x* ∈ X that attains the minimum value c* of c over X. However, optimization problems in this class are difficult to solve numerically in general, since there is seldom a concrete representation for solutions in Y. In this paper, we explore general methods for approximating a solution to (P) via solutions to a net (P_i), i ∈ I, of simpler approximating problems, where (P_i) is given by

(P_i)    min_{x ∈ X_i} c_i(x)

for i ∈ I, where c_i is continuous on Y, lim_I X_i = X (Kuratowski) and c_i → c uniformly on Y. These approximating problems are typically finite dimensional (or special in other ways that render them more easily solvable than the original problem (P)). For example, we may choose to approximate an optimal solution to a continuous time infinite horizon optimization problem (P) by optimal solutions to discrete time finite horizon versions (P_i), i ∈ ℕ, where the time periods decrease to zero and the horizon increases to infinity as i → ∞ (see e.g. Schochetman and Smith [1989, 1992a, 1997], Bes and Sethi [1988], Bean and Smith [1984, 1993]).

The emphasis in this paper is on the case where Y is an infinite dimensional function space. For example, problems in infinite horizon optimal control seek a control function x from a space Y of continuous functions on ℝ_+ that minimizes its associated infinite horizon discounted cost. In this case, the feasible region X is specified through a differential or integral equation that relates system state evolution to the control policy employed (see e.g. Carlson, Haurie and Leizarowitz [1987], Luenberger [1969]). In fact, the special case of a problem in infinite horizon production control is considered here in detail in section 5. More generally, best approximation problems also fall within the framework of (P), where c(x) is a measure of the error of x from a fixed target function x_0 (see e.g. Schochetman and Smith [1991, 1992b], Dontchev and Zolezzi [1993]).

Since the objective functions and feasible regions of the approximating problems (P_i)_{i∈I} are also defined over the common space Y, it is important to endow Y with a topology that is fine enough to allow for the error in solution approximation to be suitably small, but at the same time, coarse enough for desirable properties like compactness of X to hold. Toward this end, we begin by embedding Y within the set of all functions from a σ-compact Hausdorff space T to a metric space A. By requiring X to be a non-empty closed subset of equicontinuous functions from the product (over T) of compact subsets in A, the pointwise convergence and uniform convergence on compacta topologies agree on X. It follows by the Tychonoff Theorem that X remains compact in the stronger topology of uniform convergence on compacta. It is this stronger topology that appropriately measures solution error; roughly speaking, two solutions over T are "close" when their difference is uniformly small over a compact subset Q of T. For example, if T = ℕ and is discrete, then this reduces to actual agreement on finite subsets of ℕ and in particular on {1, ..., n}. Moreover, since T is σ-compact, it is the countable union of such subsets, and the near agreement can thus be required over nearly all of T.
In fact, there exist compact Q_i → T (Kuratowski) as i → ∞, where Q_i = ∪_{k=1}^i Q_k, with each Q_k compact. We require only two properties of the approximating problems (P_i) in their relation to (P): 1) their feasible regions X_i → X (Kuratowski) and 2) their objective functions c_i → c (uniformly). Requirement 1) is a significant relaxation over assumptions commonly made in the literature. For example, in Schochetman and Smith [1989], it is required that X_i = X for all i, while in Schochetman and Smith [1992a] and Semple [1996], it is assumed that X_{i+1} ⊆ X_i for all i, and X = ∩_{i=1}^∞ X_i, so that X_i ↓ X in all cases. Also, the action functions in these papers are defined within a discrete time framework where T = {1, 2, ...}.

In section 2, the class of optimization problems considered is formally defined, as well as their associated nets of approximating problems. We prove optimal value convergence, i.e., that the net of optimal values {c_i*} of the approximating problems converges to the optimal value c* of the original optimization problem.

We also establish the fundamental result that solutions to the approximating problems (P_i) become (for large i) arbitrarily close to optimal solutions to the original problem (P). Section 3 turns to establishing conditions under which optimal policy convergence takes place, i.e., conditions under which optimal solutions of approximating problems converge to an optimal solution of the original problem. Existence of a unique accumulation point of approximating optima is established as a sufficient condition for this to take place. Under this well-posedness condition, in section 4, an algorithm is provided, together with a stopping rule, that is guaranteed to finitely compute an approximating solution within any prespecified error from the optimal solution. Finally, in section 5, we apply the theory and algorithms developed in the preceding sections to a problem in continuous-time infinite horizon production control.

§2 Problem Formulation and Value Convergence

Let T be a σ-compact Hausdorff space (Dugundji [1966, p. 240]) and A a metric space with metric d. The space T is the space of the action index (e.g., time) and A is the space of all possible actions. Let A^T denote the set of all functions from T into A. An element y of A^T will be called an action strategy in A over T. Next let C(T, A) denote the set of continuous functions from T into A. Although there are several natural Hausdorff topologies for C(T, A), there is one which is of particular interest to us, namely the topology of uniform convergence on compact sets or, more briefly, the uniform-on-compacta topology (Kelley [1955]). Moreover, since T is σ-compact, it follows (Dugundji [1966, p. 272]) that this topological space C(T, A) is metrizable and hence first countable. We next observe that the uniform-on-compacta topology is a jointly continuous topology on C(T, A); this is not true in general for the topology of pointwise convergence (Kelley [1955, p. 223]). It is primarily for this reason that we adopt the topology of uniform convergence on compacta on C(T, A).

Lemma 2.1. The canonical mapping (y, t) → y(t) of C(T, A) × T into A is continuous. If T is discrete, then it is also continuous relative to the topology of pointwise convergence on C(T, A).

Proof. This follows from the definitions of the relevant topologies, together with the fact that T is locally compact. □

Let E be a non-empty, equicontinuous (Kelley [1955, p. 232]) family of functions in C(T, A), i.e., for each t_0 ∈ T and ε > 0, there exists a neighborhood U of t_0 in T such that d(y(t), y(t_0)) < ε, ∀t ∈ U, ∀y ∈ E. For such a subspace E it follows (Kelley [1955, p. 232]) that the relative topologies of uniform convergence on compacta and pointwise convergence are equal. Note that a sufficient condition for E to be equicontinuous is for E to be compact in C(T, A) (Kelley [1955, p. 233]); if T is discrete, then it is sufficient for E to be merely pointwise-compact (Lemma 2.1). Since E is equicontinuous, its pointwise-closure in A^T is also equicontinuous (Kelley [1955, p. 232]), as is its relative pointwise-closure in C(T, A). Thus, there is no loss of generality in also assuming that E is pointwise-closed in C(T, A). Consequently, E is closed in C(T, A) as well.

In general, given t ∈ T, it is unlikely that all actions in A will be feasible for t. Thus, we will let Y_t denote the space of actions in A which are feasible at action index t, ∀t ∈ T. We assume that Y_t is a non-empty, compact subset of A, ∀t ∈ T.
Let

Y = E ∩ ∏_{t∈T} Y_t = {y ∈ E : y(t) ∈ Y_t, ∀t ∈ T},

so that Y ⊆ E ⊆ C(T, A) ⊆ A^T and Y ⊆ ∏_{t∈T} Y_t ⊆ A^T. Note that ∏_{t∈T} Y_t is not contained in C(T, A) in general, unless T is discrete. (In fact, if T is discrete, we could choose E = ∏_{t∈T} Y_t = Y.) Thus, our choice of Y is necessitated by the fact that the decision index t need not be discrete. We assume Y ≠ ∅. The space Y will play the role of the space of all possible strategies (feasible or not) over T with values in the appropriate decision spaces Y_t, ∀t ∈ T. Note that Y is an equicontinuous family also, so that the restrictions to Y of the uniform-on-compacta and pointwise convergence topologies agree. We will simply refer to these restrictions as the topology of Y. Finally, note that Y, being a subspace of C(T, A), is also first countable.

The next result explains why we require E to be equicontinuous.

Lemma 2.2. The space Y is a non-empty compact Hausdorff space.

Proof. The space ∏_{t∈T} Y_t is pointwise-compact by the Tychonoff Theorem (Kelley [1955]), and contained in A^T, which is Hausdorff relative to pointwise convergence. Hence, ∏_{t∈T} Y_t is pointwise-closed in A^T (Kelley [1955]). Thus, Y is pointwise-closed in A^T and pointwise-closed in C(T, A). Consequently, Y is closed in C(T, A). Thus, the result follows from Ascoli's Theorem (Kelley [1955, p. 233]). □

Example 2.3. (Discrete-Time, Discrete-Action Space) Let T be the positive integers ℕ, Y_t = {0, 1, ..., m_t}, ∀t = 1, 2, ..., and A = {0, 1, ..., m}, where we assume 0 ≤ m_t ≤ m, ∀t = 1, 2, ..., and d(x, y) = |x − y|, ∀x, y = 0, 1, ..., m. Then E = ∏_{t=1}^∞ Y_t is pointwise-compact and hence equicontinuous (Kelley [1955, p. 233]) and pointwise-closed. In this case, Y = ∏_{t=1}^∞ Y_t.

Example 2.4. (Discrete-Time, Continuous-Action Space) Let T = ℕ and let Y_t be a compact subset of the Euclidean space ℝ^{n_t} of dimension n_t, ∀t = 1, 2, .... Define A to be the set of infinite sequences x = (x_t) of real vectors in the ℝ^{n_t} having the property that x_t = 0 for all but finitely many t = 1, 2, .... Then A is a metric space with metric given by

d(x, y) = (∑_{t=1}^∞ ||x_t − y_t||^2)^{1/2},  ∀x, y ∈ A.

Note that the canonical restriction of d to ℝ^{n_t} yields the usual Euclidean metric, ∀t = 1, 2, .... Once again, E = ∏_{t=1}^∞ Y_t is pointwise-compact, equicontinuous (Kelley [1955, p. 233]) and pointwise-closed; Y = ∏_{t=1}^∞ Y_t also.

Example 2.5. (Continuous-Time, Continuous-Action Space) Now let T = ℝ_+ (non-negative reals) and A = ℝ, so that A^T is the set ℝ^{ℝ_+} of real-valued functions on ℝ_+. Fix M > 0 and let

E = {y ∈ ℝ^{ℝ_+} : 0 ≤ y(t) − y(s) ≤ M(t − s), ∀0 ≤ s ≤ t}.

Note that the constant functions are contained in each such E. Thus, E is a non-empty equicontinuous subset of C(ℝ_+, ℝ). Moreover, it is pointwise-closed in C(ℝ_+, ℝ), and hence uniform-on-compacta closed as well. Also let Y_t denote the closed interval [0, Mt], ∀t ≥ 0. Note that the zero function is in both E and ∏_{t≥0} Y_t, the non-zero constant functions are in E, but not in ∏_{t≥0} Y_t, and there exist non-continuous

functions in ∏_{t≥0} Y_t which are not in E. The set ∏_{t≥0} Y_t is pointwise-closed in ℝ^{ℝ_+} and thus also closed. Therefore, Y = E ∩ ∏_{t≥0} Y_t is also closed in C(ℝ_+, ℝ), and hence compact by Ascoli's Theorem (Kelley [1955, p. 233]). Also, note that y(0) = 0, for each y ∈ Y; in fact, y ∈ Y if and only if y ∈ E and y(0) = 0. More generally, let

F = {y ∈ ℝ^{ℝ_+} : |y(t) − y(s)| ≤ M|t − s|, ∀s, t ≥ 0},

i.e., F is the set of Lipschitz continuous functions with uniform Lipschitz constant M. Accordingly, also let Y_t = [−Mt, Mt], ∀t ≥ 0. Then the previous claims are true for F and Y = F ∩ ∏_{t≥0} Y_t as well.

Note that the remaining case where T is continuous-time while A is a discrete-action space is not considered, since the only continuous functions from a connected space to a discrete space are the constant functions, a trivial family.

Returning to the general discussion preceding Example 2.3, recall that Y is a compact Hausdorff space. We will let K(Y) denote the set of all closed (hence compact), non-empty subsets of Y. If we assume that K(Y) is equipped with the relativized Vietoris topology (Klein and Thompson [1984, p. 8]), then K(Y) is also a compact Hausdorff space (Klein and Thompson [1984, pp. 15-17]). Moreover, the convergence in K(Y) underlying this topology is precisely Kuratowski convergence (Klein and Thompson [1984, p. 34]), which we describe next.

Let {S_i}_I be a net of subsets of Y (with I directed by ≺) and y ∈ Y. Define:
(i) y is a limit point of the net {S_i}_I if, for every neighborhood U of y in Y, there exists i_U ∈ I such that S_i ∩ U ≠ ∅, for all i ∈ I for which i_U ≺ i.
(ii) y is a cluster point of the net {S_i}_I if, for every neighborhood U of y in Y, and every i ∈ I, there exists j_i ∈ I such that i ≺ j_i and S_{j_i} ∩ U ≠ ∅.
Then let lim inf_I S_i (resp. lim sup_I S_i) denote the set of limit (resp. cluster) points of the {S_i}_I. If S ⊆ Y and S = lim inf_I S_i = lim sup_I S_i, we write lim_I S_i = S. In general, lim inf_I S_i and lim sup_I S_i are closed subsets of Y which may be empty, and which satisfy lim inf_I S_i ⊆ lim sup_I S_i. Thus, lim_I S_i = S if and only if lim sup_I S_i ⊆ S and S ⊆ lim inf_I S_i.

We assume that our feasible region X is a given closed, non-empty subset of Y, i.e., X ∈ K(Y). We also assume we are given a real-valued, continuous cost function c defined on Y. Our optimization problem (P) is then defined as follows:

(P)    min_{x ∈ X} c(x).

Since X is compact and c is continuous, it follows that the minimum is attained. We will denote the set of optimal solutions to (P) by X* and the optimal cost by c*. Of course, X* is a non-empty, closed subset of X, i.e., X* ∈ K(Y).

Our primary objective in this paper is to approximate the optimal solutions of (P) by optimal solutions of problems which approximate (P). To this end, let {X_i}_I be any net in K(Y) such that lim_I X_i = X. The directed set I (directed by ≺) is the approximation index set.

Example 2.6. (Discrete, Partially-Ordered Approximation Index) Suppose that X can be expressed as a countable intersection ∩_{n=1}^∞ K_n, where K_n ∈ K(Y), n = 1, 2, .... For example, if the notation is as in Example 2.4, and X is the region satisfying countably infinitely many nonlinear constraints, then K_n could be the set of solutions in Y which satisfy the n-th constraint, ∀n = 1, 2, .... Let I denote the collection of all finite subsets of ℕ, so that I is a countable set which is directed by ⊆. For each i ∈ I, let X_i = ∩_{n∈i} K_n.

Then {X_i}_I is a (countable) net in K(Y) where i ⊆ j implies X_j ⊆ X_i. Moreover, by Klein and Thompson [1984, p. 28], lim_I X_i = ∩_{i∈I} X_i = X.

Example 2.7. (Continuous-Time Approximation Index) Let the notation be as in Example 2.5, with (I, ≺) = (ℝ_+, ≤) also. Suppose we are given a function D: ℝ_+ → ℝ_+. Define

X = {y ∈ Y : y(t) ≥ D(t), ∀t ≥ 0},

and suppose it is non-empty. Then X is pointwise-closed in Y, i.e., X ∈ K(Y). For each t ≥ 0, define

X_t = {y ∈ Y : y(s) ≥ D(s), ∀0 ≤ s ≤ t},

so that {X_t}_{t≥0} is a net in K(Y) for which s ≤ t implies X_t ⊆ X_s, and

lim_{t→∞} X_t = ∩_{t≥0} X_t = X.

We now return to our general situation. For each i ∈ I, let c_i be a continuous, real-valued function on Y. We assume that the net {c_i}_I converges uniformly to c on Y, i.e., given δ > 0, there exists i_δ ∈ I such that |c(y) − c_i(y)| < δ, ∀y ∈ Y, for all i ∈ I such that i_δ ≺ i. For each i ∈ I, we define the i-th approximating problem (P_i) as follows:

(P_i)    min_{x ∈ X_i} c_i(x).

(In the particular case where I = ℕ, we have that the sequence of problems (P_i) converges in the sense of Fiacco [1974] to the problem (P).) The optimal solution set X_i* to (P_i) is then a non-empty, closed subset of Y. Thus, X_i* ∈ K(Y), for all i ∈ I. We will denote the optimal objective value of (P_i) by c_i*, ∀i ∈ I.

Theorem 2.8. (Value Convergence) The net {c_i*}_I of optimal values of the (P_i), i ∈ I, converges to the optimal value c* of (P), i.e., lim_I c_i* = c*.

Proof. We first show that lim inf_I c_i* ≥ c*. By definition of lim inf_I c_i*, there exists a subnet {c_j*}_J of {c_i*}_I such that lim_J c_j* = lim inf_I c_i*. But c_j* = c_j(x_j*), for some x_j* ∈ X_j*, ∀j ∈ J. Hence, {x_j*}_J is a net in compact Y. Thus, there exists a subnet {x_k*}_K of {x_j*}_J, with corresponding subnet {c_k*}_K of {c_j*}_J, and x ∈ Y such that lim_K x_k* = x and lim_K c_k* = lim inf_I c_i*. Since lim_I X_i = X, i.e., lim sup_I X_i ⊆ X, and

x ∈ lim sup_K X_k ⊆ lim sup_I X_i ⊆ X

(i.e., x is (P)-feasible), we have that

lim inf_I c_i* = lim_K c_k* = lim_K c_k(x_k*).

But lim_K c_k(x_k*) = c(x), since

|c_k(x_k*) − c(x)| ≤ |c_k(x_k*) − c(x_k*)| + |c(x_k*) − c(x)|,

c is continuous at x and the subnet {c_k}_K converges uniformly to c on Y. Finally, c(x) ≥ c*, which completes this part of the proof.

Next we show that lim sup_I c_i* ≤ c*. Let x* ∈ X*, so that c* = c(x*). Since X ⊆ lim inf_I X_i, we have that x* ∈ lim inf_I X_i. Hence, there exists a net {x_i}_I such that x_i ∈ X_i, ∀i ∈ I, for which lim_I x_i = x*. By definition of lim sup_I c_i*, there exists a subnet {c_j*}_J of {c_i*}_I such that lim_J c_j* = lim sup_I c_i*. Necessarily, lim_J x_j = x* also. Let c_j* = c_j(x_j*), where x_j* ∈ X_j* ⊆ X_j, ∀j ∈ J. Then

c* = c(x*) = lim_J c_j(x_j) ≥ lim_J c_j* = lim sup_I c_i*,

which completes the proof. □

Remark. Note that the second part of the previous proof does not require Y to be compact. Also, it is not true in general that lim_I X_i* = X* (see Schochetman and Smith [1992a], for example). However, we do have a partial result in this direction.

Theorem 2.9. Every accumulation point of optima drawn from the (P_i) is optimal for (P), i.e., lim sup_I X_i* ⊆ X*.

Proof. Let x ∈ lim sup_I X_i*. Then there exists a subnet {X_j*}_J of {X_i*}_I and a corresponding net {x_j*}_J such that x_j* ∈ X_j*, ∀j ∈ J, and lim_J x_j* = x. Therefore, x ∈ lim sup_I X_i by definition, so that x ∈ X by hypothesis, i.e., c* ≤ c(x). Since lim_J x_j* = x, we have that

c(x) = lim_J c_j(x_j*) = lim_J c_j* = c*,

by Theorem 2.8, so that x ∈ X*. □

The following easy corollary to Theorem 2.9 is a fundamental result. It states that any optimal solution to an approximating problem (P_i) arbitrarily well approximates some optimal solution to (P) for sufficiently large i.

Theorem 2.10. Let Q be a compact subset of T and δ > 0. Then there exists i_0 ∈ I with the following property: for each i ∈ I with i_0 ≺ i, and for each x_i* ∈ X_i*, there exists y* ∈ X* such that d(x_i*(t), y*(t)) < δ, ∀t ∈ Q.

Proof. Suppose not. Then, for each i ∈ I, there exists k_i ∈ I such that i ≺ k_i, and there exists y_i ∈ X_{k_i}* with the following property: for each y* ∈ X*, there exists t_0 ∈ Q (depending on y* and i) such that d(y_i(t_0), y*(t_0)) ≥ δ. Now {y_i}_I is a net in the compact space Y. Thus, there exists a subnet {y_j}_J of {y_i}_I and y ∈ Y such that lim_J y_j = y in Y. Consequently, y ∈ lim sup_I X_i* ⊆ X*. Moreover, by the topology on Y, there exists j_0 ∈ J such that d(y_j(t), y(t)) < δ, ∀t ∈ Q, for all j ≻ j_0 in J. In particular, d(y_{j_0}(t), y(t)) < δ, ∀t ∈ Q. But, for y* = y and i = j_0, there exists t_0 ∈ Q such that d(y_{j_0}(t_0), y(t_0)) ≥ δ. Contradiction. □

Example 2.11. (Discrete-Time, Continuous-Action Space with Discrete-Time Approximation Index) Let T = I = ℕ, A = ℝ, and let Y_t be a non-empty compact subset of ℝ, ∀t ∈ ℕ, with Y = E = ∏_{t∈ℕ} Y_t.

For each y ∈ Y, let K_t(y) (resp. R_t(y)) denote the cumulative cost (resp. revenue) attributed to y through time t = 1, 2, .... We assume that for each y ∈ Y, the real functions (sequences) t → K_t(y) and t → R_t(y) on ℕ are non-negative, non-decreasing and uniformly bounded by some exponential function, i.e., without loss of generality, there exist B > 0 and β > 1 such that

max(K_t(y), R_t(y)) ≤ Bβ^t,  ∀y ∈ Y, ∀t = 1, 2, ....

For each y ∈ Y, also define C_t(y) = K_t(y) − R_t(y) to be the cumulative net cost of y through time t = 1, 2, .... For each t, we assume that all costs and revenues incurred at time t are discounted by the discount factor α = (1 + ρ)^{−1}, where ρ > 0 is the interest rate. Then c_t is defined by

c_t(y) = ∑_{s=1}^t α^{s−1} [C_s(y) − C_{s−1}(y)],  ∀y ∈ Y, ∀t = 1, 2, ....

Similarly for the t-horizon discounted cost k_t(y) and revenue r_t(y), i.e.,

k_t(y) = ∑_{s=1}^t α^{s−1} [K_s(y) − K_{s−1}(y)],  ∀y ∈ Y, ∀t = 1, 2, ...,

and

r_t(y) = ∑_{s=1}^t α^{s−1} [R_s(y) − R_{s−1}(y)],  ∀y ∈ Y, ∀t = 1, 2, ...,

so that c_t(y) = k_t(y) − r_t(y) in this case. Define the infinite horizon discounted net cost to be

c(y) = ∑_{s=1}^∞ α^{s−1} [C_s(y) − C_{s−1}(y)] = lim_{t→∞} c_t(y),  ∀y ∈ Y,

provided this limit exists, and similarly for the infinite horizon discounted cost and revenue functions k(y) and r(y), i.e.,

k(y) = ∑_{s=1}^∞ α^{s−1} [K_s(y) − K_{s−1}(y)] = lim_{t→∞} k_t(y),  ∀y ∈ Y,

and

r(y) = ∑_{s=1}^∞ α^{s−1} [R_s(y) − R_{s−1}(y)] = lim_{t→∞} r_t(y),  ∀y ∈ Y,

respectively. If ρ > β − 1, so that 0 < αβ < 1, then for each y ∈ Y, the quantities c(y), k(y) and r(y) do exist and c(y) = k(y) − r(y). For each y ∈ Y, we also have that |c(y) − c_t(y)| ≤ a_t, where a_t is defined by

a_t = (4B / (α(1 − αβ))) (αβ)^{t+1},  ∀t = 1, 2, ....

(See 2.2-2.5 of Schochetman and Smith [1989] for analogous results in the case where T = ℝ_+ and β = e^γ.) Consequently, the sequence {c_t} converges uniformly to c on Y. The same is true for k and r, i.e.,

|k(y) − k_t(y)| ≤ a_t  and  |r(y) − r_t(y)| ≤ a_t,  ∀y ∈ Y, ∀t = 1, 2, ....

Finally, c_t* ≤ c_s* + a_s, for all s, t = 1, 2, ... such that s ≤ t. If, in addition, we assume that for each t = 1, 2, ..., the functions K_t, R_t, and hence C_t, are continuous real-valued functions on Y, then for each t = 1, 2, ..., the real-valued functions k_t, r_t, c_t are continuous on Y. Hence, the real-valued functions k, r, c are also continuous on Y. (See 2.6-2.7 of Schochetman and Smith [1989].)

Example 2.12. (Continuous-Time, Continuous-Action Space with a Continuous-Time Approximation Index) Let T = I = ℝ_+ and A = ℝ, as in Examples 2.5 and 2.7. As in Schochetman and Smith [1989], let K_t(y) (resp. R_t(y)) denote the cumulative cost (resp. revenue) attributed to y ∈ Y through time t ≥ 0. We assume that, for each y ∈ Y, the functions t → K_t(y) and t → R_t(y) on ℝ_+ are non-negative, non-decreasing and uniformly bounded by some exponential function, i.e., without loss of generality, there exist B, γ > 0 such that

max(K_t(y), R_t(y)) ≤ Be^{γt},  ∀y ∈ Y, ∀t ≥ 0.

Also define C_t(y) = K_t(y) − R_t(y) to be the cumulative net cost of y ∈ Y through time t ≥ 0. Then, for each y ∈ Y, the function t → C_t(y) is of locally bounded variation on ℝ_+. For each t ≥ 0, we assume that all costs and revenues incurred at time 0 ≤ s ≤ t are continuously discounted by e^{−ρs}, where ρ > 0 is the interest rate. Thus, the t-horizon discounted net cost c_t(y) for y ∈ Y is given by the Stieltjes integral (see Widder [1946])

c_t(y) = ∫_0^t e^{−ρs} dC_s(y),  ∀t ≥ 0.

The t-horizon discounted cost k_t(y) and revenue r_t(y) for y are obtained analogously, i.e.,

k_t(y) = ∫_0^t e^{−ρs} dK_s(y),  ∀t ≥ 0,

and

r_t(y) = ∫_0^t e^{−ρs} dR_s(y),  ∀t ≥ 0.

Obviously, c_t(y) = k_t(y) − r_t(y), ∀y ∈ Y and ∀t ≥ 0. Define the infinite horizon discounted net cost c(y) of y ∈ Y to be the Laplace-Stieltjes transform (see Widder [1946])

c(y) = ∫_0^∞ e^{−ρs} dC_s(y) = lim_{t→∞} c_t(y),

provided this limit exists. Similarly for the infinite horizon discounted cost and revenue functions k(y) and r(y), respectively, i.e.,

k(y) = ∫_0^∞ e^{−ρs} dK_s(y) = lim_{t→∞} k_t(y),  ∀y ∈ Y,

and

r(y) = ∫_0^∞ e^{−ρs} dR_s(y) = lim_{t→∞} r_t(y),  ∀y ∈ Y.

If 0 < γ < ρ, then for each y ∈ Y, the quantities c(y), k(y) and r(y) exist, and c(y) = k(y) − r(y). For each y ∈ Y, and for each t ≥ 0, we have |c(y) − c_t(y)| ≤ a_t, where a_t is defined by

a_t = ρBe^{(γ−ρ)t} / (ρ − γ),  ∀t ≥ 0.

(See 2.2-2.5 of Schochetman and Smith [1989].) Consequently, the net {c_t}_{t≥0} of finite horizon net cost functions converges uniformly to the infinite horizon net cost function c on Y. The same is true for k and r. If, for each t ≥ 0, the functions K_t, R_t, and hence C_t, are continuous real-valued functions on Y, then the real-valued functions k_t, r_t, c_t are also continuous on Y. Thus, the real-valued functions k, r, c are likewise continuous on Y. (See 2.6-2.7 of Schochetman and Smith [1989].) Moreover, the net {c_t*}_{t≥0} of optimal costs satisfies c_t* ≤ c_s* + a_s, ∀0 < s ≤ t, and c* = lim_{t→∞} c_t* (see 3.2 of Schochetman and Smith [1989]).

§3 Approximation Algorithms and Policy Convergence

The problems (P_i), ∀i ∈ I, are viewed as approximating problems to (P). In general, the directed set I is not countable. However, it is intuitively the case that an approximation algorithm for solving (P) should be sequential in nature. (For example, consider the situation where an infinite horizon continuous-time optimization problem is being approximated by a sequence of finite horizon subproblems whose horizons are increasing without bound.) Consequently, for the remainder of this paper, we assume that N is a countable subset of the directed set I which is order-isomorphic to the positive integers and satisfies the property that for each i ∈ I, there exists n_i ∈ N such that i ≺ n_i, i.e., N is a subnet (Kelley [1955, p. 70]) of I. Formally, we are assuming that there exists an order-preserving, one-to-one mapping φ of (ℕ, ≤) into (I, ≺) which satisfies: for each i ∈ I, there exists n_i ∈ ℕ such that if m ∈ ℕ is such that n_i ≤ m, then i ≺ φ(m). We then have N = φ(ℕ). For convenience, on N, we will use ≤ and ≺ interchangeably.

Example 3.1. Let (I, ≺) be the positive reals (ℝ_+, ≤) and N the positive integers ℕ. (More generally, N could be a strictly monotone sequence in ℝ_+ which is unbounded.) Alternately, let I denote the set of all finite subsets of ℕ (with ≺ given by ⊆), with N the subset of I given by N = {{1, 2, ..., n} : n ∈ ℕ}.

In general, for each n ∈ N, let A_n* be a closed, non-empty set of (P_n)-optimal solutions, i.e., ∅ ≠ A_n* ⊆ X_n*, so that c_n* = c_n(x), ∀x ∈ A_n*, and A_n* ∈ K(Y), ∀n ∈ N. Define

A_∞* = lim sup_{n∈N} A_n*,

which is necessarily a closed subset of X (Klein and Thompson [1984, p. 28]). It is also non-empty, i.e., A_∞* ∈ K(Y), since the A_n* are non-empty and Y is compact. Since A_n* ⊆ X_n*, for all n ∈ N, we have that

A_∞* = lim sup_{n∈N} A_n* ⊆ lim sup_{n∈N} X_n* ⊆ lim sup_{i∈I} X_i* ⊆ X*,

by Theorem 2.9, i.e., the elements of A_∞* are optimal for (P). In this context, as in Schochetman and Smith [1989], we define an approximation algorithm A* for (P) to be such a sequence {A_n*}_N, for some choice of N as above. If A* is an approximation algorithm, then the elements of A_∞* will be called A*-algorithmically

optimal solutions. The algorithm A* cannot approximate the other elements of X*. We will also say that the approximation algorithm A* converges if

lim inf_{n∈N} A_n* = lim sup_{n∈N} A_n*

in Y, i.e., if

lim_{n∈N} A_n* = A_∞*

in K(Y).

Proposition 3.2. Suppose A* is an approximation algorithm for (P) which admits a unique A*-algorithmically optimal solution, i.e., A_∞* = {x*}, for some x* ∈ X*. Then:
(i) The algorithm A* converges to {x*} in K(Y), i.e., lim_N A_n* = {x*}.
(ii) Every selection from the A_n* converges to x*, i.e., if x_n is any element of A_n*, ∀n ∈ N, then lim_N x_n = x* in Y.

Proof. Both parts follow from Corollary 2.2 of Schochetman and Smith [1991]. □

Remark. If (P) admits a unique optimal solution, i.e., if X* = {x*}, then ∅ ≠ A_∞* ⊆ {x*}, so that A_∞* = {x*}. Such optimization problems are said to be well-posed (Dontchev and Zolezzi [1993]).

Example 3.3. (Strictly Convex Programs) Consider the particular case where A is also a (real) topological vector space, X is a closed, non-empty, convex feasible region and c is a strictly convex objective function. In this case, (P) is well-known to admit a unique solution.

Recall our fundamental notion of nearness of solutions in Y as near agreement over a compact subset of T. This guides our definition of neighborhoods in Y. Toward this end, let Q be a compact subset of T, x ∈ Y and δ > 0. Define

U_Q(x, δ) = {y ∈ Y : d(x(t), y(t)) < δ, ∀t ∈ Q}

and

U_Q(G, δ) = ∪_{x∈G} U_Q(x, δ),

for G ⊆ Y. Note that U_Q(x, δ) is a basic open neighborhood of x in the relative topology of Y, so that U_Q(G, δ) is open in Y. In this context, we are able to give necessary and sufficient conditions for A_∞* to be a singleton.

Theorem 3.4. The following are equivalent for an approximation algorithm A* = {A_n*}_N:
(i) A_∞* is a singleton {x*}.
(ii) Solution indices exist for A*, i.e., there exists x* ∈ A_∞* such that, for each compact Q ⊆ T and δ > 0, there exists n_0 ∈ N satisfying A_n* ⊆ U_Q(x*, δ), for all n ∈ N such that n ≥ n_0.
(iii) Policy convergence takes place for all approximate solution subsequences generated by A*, i.e., there exists x* ∈ A_∞* such that for all subsequences {A_j*}_J of {A_n*}_N, and corresponding sequences {x_j}_J for which x_j ∈ A_j*, ∀j ∈ J, we have lim_J x_j = x* in Y.

Proof. (i) ⟹ (ii). Suppose A_∞* = {x*}. Then lim_N A_n* = {x*} in K(Y) by Proposition 3.2. Suppose (ii) is false for x*. Then there exist Q and δ as in (ii) such that for each n ∈ N, there exists j_n ∈ N

such that j_n ≥ n and A_{j_n}* ⊄ U_Q(x*, δ), i.e., the intersection of A_{j_n}* with the complement of U_Q(x*, δ) in Y is non-empty. Thus, there exists a subsequence {A_{j_n}*} of {A_n*}_N, and a corresponding sequence {y_n}_N, such that y_n ∈ A_{j_n}*, but y_n ∉ U_Q(x*, δ), ∀n ∈ N. Since U_Q(x*, δ) is open, its complement is closed and hence compact. Hence, passing to a subsequence if necessary, we may assume that lim_N y_n = y, for some y ∉ U_Q(x*, δ). In particular, y ≠ x*. But y_n ∈ A_{j_n}*, ∀n ∈ N, so that y ∈ lim sup_N A_n* = A_∞*, i.e., y = x*, by hypothesis. Contradiction.

(ii) ⟹ (iii). Suppose x* is as in (ii). Let {A_j*}_J be a subsequence of {A_n*}_N and {x_j}_J a corresponding sequence satisfying x_j ∈ A_j*, ∀j ∈ J. Let Q be a compact subset of T and δ > 0. By (ii), there exists n_0 ∈ N such that A_n* ⊆ U_Q(x*, δ), for all n ∈ N satisfying n ≥ n_0. By definition of subsequence, there exists j_0 ∈ J such that whenever j ∈ J and j ≥ j_0, the image of j in N is at least n_0. Consequently, for j ≥ j_0 we have d(x_j(t), x*(t)) < δ, ∀t ∈ Q, so that x_j ∈ U_Q(x*, δ), ∀j ≥ j_0, i.e., lim_J x_j = x*.

(iii) ⟹ (i). By hypothesis, {x*} ⊆ A_∞*. Suppose y ∈ A_∞*. Then there exists a subsequence {A_j*}_J of {A_n*}_N and a corresponding sequence {y_j}_J such that y_j ∈ A_j*, ∀j ∈ J, and lim_J y_j = y. By (iii), lim_J y_j = x*. Since Y is Hausdorff, it follows that y = x*. □

§4 Stopping Rule for a Finite Algorithm

Let A* be an approximation algorithm for (P) as in section 3. In this section, under suitable additional assumptions, we present a Stopping Rule for this algorithm, as well as sufficient conditions for this Stopping Rule to be satisfied at some m ∈ N, given compact Q ⊆ T and δ > 0. To do this, we require some additional notation and ideas. For the remainder of the paper, we adopt the following assumptions.

Assumptions.
(1) The X_n are nested downward, i.e., if n ≥ j in N, then X_n ⊆ X_j. Note that since lim_N X_n = X, we also have that X = ∩_{n∈N} X_n (Klein and Thompson [1984, p. 28]).
(2) Costs are monotonically increasing, i.e., if n ≥ j in N, then c_n(y) ≥ c_j(y), ∀y ∈ X_n, where X_n ⊆ X_j by (1).

Example 4.1. In Examples 2.11 and 2.12, simply set R = 0 in order to satisfy both of these assumptions.

Lemma 4.2. The sequence {c_n*}_N is monotonically non-decreasing and bounded above by c*, i.e., c_n* ↑ c*, as n → ∞ in N.

Proof. For n ≥ j in N, let x_n* ∈ X_n*, so that c_n* = c_n(x_n*), i.e., c_n* ≥ c_j(x_n*), by Assumption (2). Since x_n* ∈ X_j, by Assumption (1), it follows that c_j(x_n*) ≥ c_j*, which proves the first part. For the second part, recall Theorem 2.8. □

Let {a_n}_N be a sequence of real numbers satisfying a_n ≥ c* − c_n*, so that a_n ≥ 0, ∀n ∈ N. Note that, in general, the sequence {a_n}_N need not converge.
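For concreteness, the exponential tail bounds of Examples 2.11 and 2.12 furnish one admissible choice of {a_n} (as Example 4.3 below also notes). The sketch that follows is a purely illustrative numerical aid, not part of the formal development: it evaluates the continuous-time bound a_t = ρBe^{(γ−ρ)t}/(ρ − γ) of Example 2.12 and reports the first integer horizon at which the bound drops below a prescribed tolerance; the constants B, γ, ρ and the tolerance are hypothetical inputs.

    import math

    def tail_bound(t, B, gamma, rho):
        # Error bound a_t = rho * B * exp((gamma - rho) * t) / (rho - gamma)
        # from Example 2.12; valid only when 0 < gamma < rho.
        assert 0 < gamma < rho
        return rho * B * math.exp((gamma - rho) * t) / (rho - gamma)

    def horizon_for_tolerance(eps, B, gamma, rho):
        # Smallest integer horizon t with a_t <= eps (a_t decreases to 0 as t grows).
        t = 1
        while tail_bound(t, B, gamma, rho) > eps:
            t += 1
        return t

    # Hypothetical data: B = 10, gamma = 0.05, rho = 0.10, tolerance 1e-3.
    print(horizon_for_tolerance(1e-3, B=10.0, gamma=0.05, rho=0.10))

Since letting t → ∞ in c_t* ≤ c_s* + a_s gives c* ≤ c_s* + a_s, any such bound indeed satisfies a_s ≥ c* − c_s* and so may serve as the sequence {a_n} above.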

Example 4.3. If we know some b ≥ c*, then we can let a_n = b − c_n*, ∀n ∈ N. As an example, set b = c(x), for any x ∈ X. Thus, there exist many such sequences. In particular, recall Examples 2.11 and 2.12.

Our next lemma establishes upper and lower bounds on the optimal costs c_n* as we increase n.

Lemma 4.4. For j ≤ n in N, we have c_j* ≤ c_n* ≤ c_j* + a_j.

Proof. By the previous lemma,

c_j* ≤ c_n* ≤ c* = (c* − c_j*) + c_j* ≤ a_j + c_j*,

by the choice of a_j. □

We next define a measure of error in value from optimal associated with solutions close to optimal for problem (P_m). Define

M_Q(δ, m) = inf{c_m(x) : x ∈ X_m \ U_Q(A_m*, δ)},

where the slash denotes set difference (in Y). Since U_Q(A_m*, δ) is an open subset of Y, it follows that X_m \ U_Q(A_m*, δ) is a closed (hence compact) subset of Y, because it is equal to the intersection of X_m and the complement of U_Q(A_m*, δ) in Y. Thus, M_Q(δ, m) is attained, since c_m is continuous. Recall that c_m* = c_m(x), ∀x ∈ A_m*.

Stopping Rule. Fix compact Q ⊆ T and δ > 0. Let m ∈ N. Then stop at m if

(Policy Criterion)    d(x(t), y(t)) < δ/3,  ∀t ∈ Q, ∀x, y ∈ A_m*,

and

(Value Criterion)    M_Q(δ/3, m) − c_m* > 2a_m.

We will say that the algorithm A* terminates at m if the Policy and Value Criteria are satisfied for m, which we will call a solution index of tolerance δ and support Q.

Solution Index Algorithm
1. Choose compact Q ⊆ T, δ > 0, and set m = 1.
2. Solve (P_m) to get A_m* and c_m*, which is equal to c_m(x), for any x ∈ A_m*.
3. If the Stopping Rule is not satisfied, set m = m + 1 and go to step 2.
4. Otherwise, stop. In this event, m is a solution index of tolerance δ and support Q.

Lemma 4.5. If A* terminates at m, then for all n ≥ m, we have X_n* ⊆ U_Q(A_m*, δ/3).

Proof. If not, then there exists n ≥ m such that X_n* is not a subset of U_Q(A_m*, δ/3). Hence, there exists x* ∈ X_n* \ U_Q(A_m*, δ/3). Then, since the X_n are nested downward, X_n* ⊆ X_n ⊆ X_m and

x* ∈ X_m \ U_Q(A_m*, δ/3).

Thus, by definition, M_Q(δ/3, m) ≤ c_m(x*). Also,

c_m(x*) − c_m* ≥ M_Q(δ/3, m) − c_m* > 2a_m,

by the Value Criterion, i.e., c_m(x*) − c_m* > 2a_m. On the other hand,

c_n* = c_n(x*) ≥ c_m(x*) ≥ c_m(x*) − a_m,

by Assumption (2). Consequently, adding the previous two inequalities together, we obtain that c_n* − c_m* > a_m, i.e., c_n* > c_m* + a_m. By Lemma 4.4, this is a contradiction. Therefore, X_n* ⊆ U_Q(A_m*, δ/3), ∀n ≥ m. □

Theorem 4.6. If A* terminates at m, then for each x* ∈ A_∞* and each n ≥ m, A_n* ⊆ U_Q(x*, δ), i.e., for each x ∈ A_n*, we have

d(x(t), x*(t)) < δ,  ∀t ∈ Q.

Proof. Fix x* ∈ A_∞* and ε > 0. Note that U_Q(x*, ε) is an open neighborhood of x* in the topology of Y. By definition of A_∞* (Klein and Thompson [1984, p. 24]), there exists n ≥ m such that A_n* ∩ U_Q(x*, ε) ≠ ∅, i.e., there exists x_n ∈ A_n* such that

d(x_n(t), x*(t)) < ε,  ∀t ∈ Q.

Consequently, x* ∈ U_Q(x_n, ε). We have x_n ∈ A_n* ⊆ U_Q(A_m*, δ/3), by Lemma 4.5. Therefore, there exists x_m ∈ A_m* such that x_n ∈ U_Q(x_m, δ/3), i.e.,

d(x_n(t), x_m(t)) < δ/3,  ∀t ∈ Q.

Hence, by the Triangle Inequality,

d(x*(t), x_m(t)) < δ/3 + ε,  ∀t ∈ Q,

so that x* ∈ U_Q(x_m, δ/3 + ε), ∀ε > 0. We next show that there exists y ∈ A_m* such that

d(y(t), x*(t)) ≤ δ/3,  ∀t ∈ Q.

For each positive integer k, by the previous argument, there exists y_k ∈ A_m* such that x* ∈ U_Q(y_k, δ/3 + 1/k). Then {y_k} is a sequence in A_m*, which is compact. Hence, there exists an accumulation point y of this sequence in A_m*. This point y must have the property that

d(y(t), x*(t)) ≤ δ/3,  ∀t ∈ Q.

Now let n ≥ m and x ∈ A_n*. By Lemma 4.5, x ∈ U_Q(A_m*, δ/3). Hence, there exists x_m ∈ A_m* such that x ∈ U_Q(x_m, δ/3), i.e.,

d(x(t), x_m(t)) < δ/3,  ∀t ∈ Q.

By the Policy Criterion,

d(x_m(t), y(t)) < δ/3,  ∀t ∈ Q.

Thus, by the Triangle Inequality,

d(x(t), x*(t)) ≤ d(x(t), x_m(t)) + d(x_m(t), y(t)) + d(y(t), x*(t)) < δ,  ∀t ∈ Q,

which implies that x ∈ U_Q(x*, δ), so that A_n* ⊆ U_Q(x*, δ), ∀n ≥ m. This completes the proof. □

Remark. The conclusion of Theorem 4.6 is true for all of X_n*. However, we are assuming that our algorithm A* yields only that portion of X_n* given by A_n* ⊆ X_n*.

We next give sufficient conditions for the algorithm A* to terminate.

Theorem 4.7. Suppose X* = {x*} and the a_n can be chosen so that lim_N a_n = 0. Then for each compact Q ⊆ T, and for each δ > 0, the algorithm A* terminates at some m ∈ N (which depends on Q and δ).

Proof. Let Q and δ be as above.

(Policy Criterion) Since X* = {x*} and ∅ ≠ A_∞* ⊆ X*, we have that

lim sup_N A_n* = lim_N A_n* = A_∞* = {x*},

by (i) of Proposition 3.2. Thus, by Theorem 3.4, there exists n_0 sufficiently large such that for n ≥ n_0, we have A_n* ⊆ U_Q(x*, δ/6), i.e.,

d(x_n(t), x*(t)) < δ/6,  ∀t ∈ Q,

for all x_n ∈ A_n*. Therefore, for each n ≥ n_0 and each x_n, y_n ∈ A_n*, we have by the Triangle Inequality that

d(x_n(t), y_n(t)) ≤ d(x_n(t), x*(t)) + d(x*(t), y_n(t)) < δ/3,  ∀t ∈ Q.

This establishes the Policy Criterion for every n ∈ N satisfying n ≥ n_0.

(Value Criterion) Given Q and δ, we obtain n_0 as above. Hence, for each n ≥ n_0,

x* ∈ U_Q(x_n, δ/6),  ∀x_n ∈ A_n*,

because d(x_n(t), x*(t)) < δ/6, ∀t ∈ Q. Since A_n* ≠ ∅, ∀n, we have that

(*)    x* ∈ U_Q(A_n*, δ/6),  ∀n ≥ n_0.

Therefore, if, for some m ≥ n_0, we have that M_Q(δ/3, m) − c_m* > 2a_m, then the Value Criterion is satisfied at m, as is the Policy Criterion, i.e., A* terminates at m. Thus, suppose that for each n ≥ n_0, the Value Criterion does not hold, i.e., M_Q(δ/3, n) − c_n* ≤ 2a_n, for such n. Since M_Q(δ/3, n) is attained, for each n ≥ n_0, there exists x_n' ∈ X_n \ U_Q(A_n*, δ/3) such that c_n(x_n') = M_Q(δ/3, n). Hence,

0 ≤ c_n(x_n') − c_n* ≤ 2a_n,

for all n ≥ n_0. But {x_n'} is a sequence in compact Y. Thus, there exists (Kelley [1955, p. 138]) a subsequence {x_{n_k}'} of {x_n'} and x ∈ Y such that lim_k x_{n_k}' = x. Necessarily,

0 ≤ c_{n_k}(x_{n_k}') − c_{n_k}* ≤ 2a_{n_k},

for all k sufficiently large such that n_k ≥ n_0. Let k_0 be sufficiently large so that k ≥ k_0 implies n_k ≥ n_0, and hence

0 ≤ c_{n_k}(x_{n_k}') − c_{n_k}* ≤ 2a_{n_k}.

Since lim_k a_{n_k} = 0, we have that

lim_k c_{n_k}(x_{n_k}') = lim_k c_{n_k}* = c*,

by Theorem 2.8. Now lim_k x_{n_k}' = x and x_n' ∈ X_n, ∀n, so that x ∈ lim sup_N X_n = lim_N X_n, i.e., x ∈ X. But, by the Triangle Inequality,

|c_{n_k}(x_{n_k}') − c(x)| ≤ |c_{n_k}(x_{n_k}') − c(x_{n_k}')| + |c(x_{n_k}') − c(x)|,  ∀k ≥ k_0,

which converges to 0, since the function c is continuous and the c_{n_k} converge uniformly to c on Y. Therefore, we have that c(x) = c*, so that x is optimal. Since X* = {x*}, it must be that x = x*. Now let 0 < ε < δ/6. Since lim_k x_{n_k}' = x* in Y, there exists k_1 (which we may assume is at least k_0) such that for k ≥ k_1,

d(x_{n_k}'(t), x*(t)) < ε < δ/6,  ∀t ∈ Q,

i.e., x_{n_k}' ∈ U_Q(x*, ε) ⊆ U_Q(x*, δ/6). Moreover, recall that x* ∈ U_Q(A_n*, δ/6), for n ≥ n_0, by (*) above. Then, for each k ≥ k_1 (so that n_k ≥ n_0), there exists x_{n_k} ∈ A_{n_k}* for which

d(x*(t), x_{n_k}(t)) < δ/6,  ∀t ∈ Q.

Thus, by the Triangle Inequality, for k ≥ k_1,

d(x_{n_k}'(t), x_{n_k}(t)) ≤ d(x_{n_k}'(t), x*(t)) + d(x*(t), x_{n_k}(t)) < δ/3,  ∀t ∈ Q,

i.e., x_{n_k}' ∈ U_Q(A_{n_k}*, δ/3). However, k ≥ k_1 ≥ k_0 implies n_k ≥ n_0, so that M_Q(δ/3, n_k) is attained as c_{n_k}(x_{n_k}') at x_{n_k}' ∈ X_{n_k} \ U_Q(A_{n_k}*, δ/3), i.e., x_{n_k}' ∉ U_Q(A_{n_k}*, δ/3). Contradiction. Consequently, there exists n ≥ n_0 for which the Value Criterion holds. □

In the next section, we present an application where the hypotheses of Theorem 4.7 hold.

§5 An Application to Production Planning

Consider a production facility which produces one product over continuous time subject to a maximum production rate M > 0. Suppose that at any time t ≥ 0, the cumulative demand through time t is given by D(t), where the demand function D: ℝ_+ → ℝ_+ is non-decreasing (therefore Riemann integrable over any bounded interval) and satisfies D(0) = 0. Let T = ℝ_+ and A = ℝ, so that A^T is the set ℝ^{ℝ_+} of all functions from ℝ_+ to ℝ and C(T, A) is the set of all such functions which are continuous (we view ℝ^{ℝ_+} as a topological vector space of functions under pointwise operations and convergence). Define E and Y as in Example 2.5, with Y_t = [0, Mt], ∀t ≥ 0, so that Y = E ∩ ∏_{t≥0} Y_t, i.e., y ∈ Y if and only if y: ℝ_+ → ℝ, y(0) = 0 and y ∈ E, that is,

0 ≤ y(t) − y(s) ≤ M(t − s),  ∀0 ≤ s ≤ t.

Recall that Y is compact in C(ℝ_+, ℝ). For each y ∈ Y, y(t) denotes the cumulative production through time t ≥ 0. The non-empty, compact Hausdorff space Y is also an equicontinuous family of real-valued functions on ℝ_+. In practical terms, Y consists of all (non-decreasing) cumulative production functions which do not exceed the maximum production rate M, and which reflect the fact that production begins at time t = 0. Also, let K(Y) be as in section 2.

In order to ensure that it is possible to satisfy demand at all times, we must assume that the demand function D and the production rate M are such that D(t) ≤ Mt, ∀t ≥ 0. Consequently, as in Example 2.7, the feasible region X is then the convex subset of Y given by

X = {y ∈ Y : y(t) ≥ D(t), ∀t ≥ 0}.
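For intuition only, membership in X can be checked on a finite time grid. The following sketch is a hypothetical discretized feasibility test (the grid, the sampled demand and the candidate plan, assumed to be arrays of equal length, are inputs supplied by the user); it is not part of the continuous-time model itself, which constrains y at every t ≥ 0.

    def is_feasible_on_grid(times, y, demand, M, tol=1e-9):
        # Discretized check of the conditions defining X:
        #   y(0) = 0,
        #   0 <= y(t_{k+1}) - y(t_k) <= M * (t_{k+1} - t_k)   (membership in E),
        #   y(t_k) >= D(t_k)                                   (demand satisfaction).
        if times[0] != 0 or abs(y[0]) > tol:
            return False
        for k in range(len(times) - 1):
            dy = y[k + 1] - y[k]
            dt = times[k + 1] - times[k]
            if dy < -tol or dy > M * dt + tol:
                return False
        return all(y[k] >= demand[k] - tol for k in range(len(times)))

For example, the plan y(t) = Mt sampled on any grid passes this test whenever D(t) ≤ Mt, mirroring the observation in the next paragraph that y(t) = Mt lies in X.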

(If D(t_0) > Mt_0, for some t_0 > 0, then X = ∅.) Since the function y(t) = Mt is in X, it is non-empty. Moreover, X is pointwise-closed in Y, so that X is compact, i.e., X ∈ K(Y). Also as in Example 2.7, let (I, ≺) = (ℝ_+, ≤) and define

X_t = {y ∈ Y : y(s) ≥ D(s), ∀0 ≤ s ≤ t},  ∀t ≥ 0,

so that {X_t}_{t≥0} is a net in K(Y) which is nested downward and satisfies

lim_{t→∞} X_t = ∩_{t≥0} X_t = X.

In order to introduce the cost structure, for t ≥ 0, let:

h(t) = the unit holding cost at time t,
p(t) = the unit production cost at time t, and
q(t) = the unit revenue at time t.

We assume that the functions h, p, q: ℝ_+ → ℝ_+ are continuous and bounded, i.e.,

sup_{t≥0} max{h(t), p(t), q(t)} < ∞.

Thus, for any choice of γ > 0, there exists B > 0 sufficiently large such that

max{h(t), p(t), q(t)} ≤ Be^{γt},  ∀t ≥ 0.

Given this data, we will construct an infinite horizon optimization model for the production planning facility along the lines we have developed in sections 2 and 3.

Before we can specify the cost structure, we need to define the inventory and sales functions θ and σ, respectively. Let θ, σ: Y × ℝ_+ → ℝ_+ be given by

θ(y, t) = max{y(t) − D(t), 0},  ∀y ∈ Y, ∀t ≥ 0,

and

σ(y, t) = min{y(t), D(t)},  ∀y ∈ Y, ∀t ≥ 0.

Thus, if we follow production strategy y ∈ Y over all time, then θ(y, t) represents the inventory on hand at time t, and σ(y, t) represents the cumulative sales through time t. It is not difficult to verify that for any production strategy in Y, the inventory at time t is equal to total production through time t, less cumulative sales through time t, i.e.,

θ(y, t) = y(t) − σ(y, t),  ∀y ∈ Y, ∀t ≥ 0.

In particular, if y ∈ X, then σ(y, t) = D(t), ∀t ≥ 0, so that σ(y, t) is independent of y. Thus, for y ∈ X, we have

θ(y, t) = y(t) − D(t),  ∀t ≥ 0.

The following additional properties of θ and σ will be required in what follows.

(i) For all t ≥ 0 and y ∈ Y, we have 0 ≤ θ(y, t), σ(y, t) ≤ Mt.
(ii) For all t ≥ 0 and x, y ∈ Y, we have |θ(x, t) − θ(y, t)|, |σ(x, t) − σ(y, t)| ≤ |x(t) − y(t)|.
(iii) For each y ∈ Y, the function t → θ(y, t) is Riemann integrable in t.
(iv) For each y ∈ Y, the function t → σ(y, t) is non-decreasing in t.

We are now ready to introduce a cost structure as in Example 2.12. Let 0 < ρ < ∞ be a specified interest rate and choose 0 < γ < ρ. Define the holding cost, production cost and revenue functions (respectively) H, P, R: Y × ℝ_+ → ℝ as follows. For each y ∈ Y and t ≥ 0, let:

H_t(y) = ∫_0^t h(s) θ(y, s)^λ ds, where λ > 1 is fixed,

P_t(y) = ∫_0^t p(s) dy(s),

and

R_t(y) = ∫_0^t q(s) dσ(y, s).

Then H_t(y) represents the cumulative holding cost for production strategy y through time t. Similarly for the cumulative production cost P_t(y) and the cumulative revenue R_t(y). Note that for y ∈ X, R_t is independent of y, i.e., it is constant on X. The reason for taking the λ-th power of θ in the definition of H_t(y) will become clear shortly. Also define the cumulative cost function K: Y × ℝ_+ → ℝ by K = H + P, so that the cumulative net cost function C: Y × ℝ_+ → ℝ is given by C = K − R = H + P − R. Then K_t(y), R_t(y) and C_t(y) are as in Example 2.12. We leave it to the interested reader to verify that, for each y ∈ Y, the functions t → K_t(y) and t → R_t(y) are non-negative, non-decreasing and uniformly bounded by an exponential function, as required in Example 2.12. Furthermore, for each t ≥ 0, the functions y → K_t(y) and y → R_t(y), and hence y → C_t(y), are continuous in y. Thus, our model satisfies the hypotheses of Examples 2.5, 2.7 and 2.12. Consequently, we obtain the net {c_t}_{t≥0} of continuous cost functions and a continuous cost function c such that

lim_{t→∞} c_t = c

uniformly on Y, where, for t ≥ 0,

c_t(y) = ∫_0^t e^{−ρs} dC_s(y),  ∀y ∈ Y,

and

c(y) = ∫_0^∞ e^{−ρs} dC_s(y) = lim_{t→∞} c_t(y),  ∀y ∈ Y.

Similarly for h_t(y), p_t(y), r_t(y) and h(y), p(y), r(y), so that c_t = h_t + p_t − r_t, ∀t ≥ 0, and c = h + p − r, where c is a strictly convex function because of our choice of λ in the definition of H_t(y). Furthermore,

|c(y) − c_t(y)| ≤ ρBe^{(γ−ρ)t}/(ρ − γ) = a_t,  ∀y ∈ Y, ∀t ≥ 0,

so that lim_{t→∞} a_t = 0. Thus, (P) is given by min_{x∈X} c(x), and for each t ≥ 0, (P_t) is given by min_{x∈X_t} c_t(x). Consequently, X* and X_t* are non-empty, closed subsets of Y, i.e., X* ∈ K(Y), X_t* ∈ K(Y), ∀t ≥ 0, and

lim sup_{t≥0} X_t* ⊆ X*.

Note that X* is a singleton {x*}, since c is strictly convex as in Example 3.3. Moreover, the optimal solution values c* and c_t*, ∀t ≥ 0, satisfy c_t* ≤ c_s* + a_s, ∀0 < s ≤ t, and c* = lim_{t→∞} c_t*.

From the Remark following Proposition 3.2, we have that A_∞* = {x*} for all approximation algorithms A*. Then by Proposition 3.2 (ii), x_t* → x* as t → ∞, for all t-horizon optima x_t*, t = 1, 2, 3, ..., since ℕ is a linearly ordered subset of ℝ_+. In particular, by (ii) of Theorem 3.4, we conclude that for all times τ < ∞ and positive numbers δ, there exists μ ∈ ℕ satisfying |x_u*(s) − x*(s)| < δ, for all 0 ≤ s ≤ τ and for all u ≥ μ, u ∈ ℕ. That is, given τ, all sufficiently distant finite horizon optima uniformly well approximate all components of the unique infinite horizon optimum x* over [0, τ].

Turning to how large the horizon must be to approximate x* for a given component error δ and given interval [0, τ], we need to assume R = 0 to conclude that the optimal costs c_t* are monotonically increasing in t, so that Assumptions (1) and (2) in section 4 are satisfied as well (see Example 4.1). We can now invoke the stopping criterion in section 4 to finitely terminate the forward procedure of solving the problem for horizons t = 1, 2, 3, .... We are guaranteed by Theorem 4.7 that the corresponding algorithm finitely converges.
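To make the forward procedure concrete, the sketch below outlines the Solution Index Algorithm of section 4 as it would be applied here. It is schematic and hedged: solve_Pt (a finite-horizon solver returning a finite sample of A_t* together with c_t*), sup_dist_on_Q (the sup-distance over the support Q), M_Q (an evaluation of M_Q(·, t)) and err_bound (the bound a_t above) are all hypothetical routines assumed to be supplied by the user.

    def solution_index_algorithm(solve_Pt, sup_dist_on_Q, M_Q, err_bound, delta):
        # Forward procedure implementing the Stopping Rule of section 4.
        # solve_Pt(t)         -> (A_t, c_t_star): finite sample of (P_t)-optima and optimal value
        # sup_dist_on_Q(x, y) -> sup over s in Q of |x(s) - y(s)|
        # M_Q(t, A_t, d)      -> inf { c_t(x) : x in X_t, x outside U_Q(A_t, d) }
        # err_bound(t)        -> a_t with a_t >= c* - c_t* (e.g., the bound a_t above)
        t = 1
        while True:
            A_t, c_t_star = solve_Pt(t)
            policy_ok = all(sup_dist_on_Q(x, y) < delta / 3.0
                            for x in A_t for y in A_t)                       # Policy Criterion
            value_ok = M_Q(t, A_t, delta / 3.0) - c_t_star > 2.0 * err_bound(t)  # Value Criterion
            if policy_ok and value_ok:
                return t, A_t  # t is a solution index of tolerance delta and support Q
            t += 1

By Theorem 4.6, any plan returned this way agrees with the unique infinite horizon optimum x* to within δ on Q, and by Theorem 4.7 the loop terminates after finitely many horizons.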

REFERENCES

1. Bean, J. C., R. L. Smith (1984). Conditions for the Existence of Planning Horizons. Mathematics of Operations Research 9 391-401.
2. ______ (1993). Conditions for the Discovery of Solution Horizons. Mathematical Programming 59 215-229.
3. Bes, C., S. Sethi (1988). Concepts of Forecast and Decision Horizons: Applications to Dynamic Stochastic Optimization Problems. Mathematics of Operations Research 13 295-310.
4. Carlson, D. A., A. B. Haurie, A. Leizarowitz (1987). Infinite Horizon Optimal Control: Deterministic and Stochastic Systems, 2nd Edition, Springer-Verlag, Berlin.
5. Dontchev, A. L., T. Zolezzi (1993). Well-Posed Optimization Problems, Springer-Verlag, Berlin.
6. Dugundji, J. (1966). Topology, Allyn and Bacon, Boston.
7. Fiacco, A. V. (1974). Convergence Properties of Local Solutions of Sequences of Mathematical Programming Problems in General Spaces. Journal of Optimization Theory and Applications 13 1-12.
8. Kelley, J. L. (1955). General Topology, Van Nostrand, New York.
9. Klein, E., A. C. Thompson (1984). Theory of Correspondences: Including Applications to Mathematical Economics, John Wiley and Sons, New York.
10. Luenberger, D. G. (1969). Optimization by Vector Space Methods, John Wiley, New York.
11. Schochetman, I. E., R. L. Smith (1989). Infinite Horizon Optimization. Mathematics of Operations Research 14 559-574.
12. ______ (1991). Convergence of Selections with Applications in Optimization. Journal of Mathematical Analysis and Applications 155 278-292.
13. ______ (1992a). Finite Dimensional Approximation in Infinite Dimensional Mathematical Programming. Mathematical Programming 54 307-333.
14. ______ (1992b). Convergence of Best Approximations from Unbounded Sets. Journal of Mathematical Analysis and Applications 166 112-128.
15. ______ (1997). Existence and Discovery of Average-Cost Optimal Solutions in Deterministic Infinite Horizon Optimization, forthcoming in Mathematics of Operations Research.
16. Semple, J. (1996). Infinite Positive-Definite Quadratic Programming in a Hilbert Space. Journal of Optimization Theory and Applications 88 743-749.
17. Widder, D. V. (1946). The Laplace Transform, Oxford University Press, Oxford.