CONVERGENCE OF SELECTIONS WITH APPLICATIONS IN OPTIMIZATION

I. E. Schochetman
Department of Mathematical Sciences
Oakland University
Rochester, MI 48309

R. L. Smith*
Department of Industrial and Operations Engineering
University of Michigan
Ann Arbor, MI 48109

Technical Report 88-4
April 1988

ABSTRACT

We consider the problem of finding an easily implemented tie-breaking rule for a convergent set-valued algorithm, i.e., a sequence of compact, non-empty subsets of a metric space converging in the Hausdorff metric. Our tie-breaking rule is determined by nearest-point selections defined by "uniqueness" points in the space, i.e., points having a unique best approximation in the limit set of the convergent algorithm. Convergence of the algorithm is shown to be equivalent to convergence of all such nearest-point selections. Under reasonable additional hypotheses, all points in the metric space have the uniqueness property. Consequently, all points yield convergent nearest-point selections, i.e., tie-breaking rules, for a convergent algorithm. We then show how to apply these results to approximate solutions for the following types of problems: infinite systems of inequalities, semi-infinite mathematical programming, non-convex optimization, and infinite horizon optimization.

AMS Subject Classification (1980): Primary: 54A20, 54C65, 54C60; Secondary: 49D39, 49D45, 90C25, 90C30

Keywords and Phrases: Hausdorff metric, lim sup/lim inf of sets, convergent selection, nearest-point selection, uniqueness point, convergent algorithm, tie-breaking rule, system of inequalities, semi-infinite program, non-convex optimization, infinite horizon optimization.

*Partially supported by NSF Grant No. ECS-8700836.

1. Introduction

Suppose $(X, d)$ is an arbitrary compact metric space. Define an algorithm $A$ in $X$ to be a mapping $n \mapsto A_n$ of the positive integers into the set $\mathcal{K}(X)$ of closed, non-empty subsets of $X$ (compare with [13, pp. 183-184]). Thus, an algorithm $A$ is just a sequence $\{A_n\}$ in $\mathcal{K}(X)$. Suppose $\mathcal{K}(X)$ is equipped with the Hausdorff metric $D$ derived from $d$ (see section 2). We will say that the algorithm $A$ converges if the sequence $\{A_n\}$ converges in $(\mathcal{K}(X), D)$. If $A$ converges, then there exists an $A_\infty$ in $\mathcal{K}(X)$ such that $A_n \to A_\infty$ relative to $D$. We may think of $A_\infty$ as the solution set to some problem and $A_n$ as the set of approximating solutions to this problem produced by the $n$th iteration of the algorithm $A$, for $n = 1, 2, 3, \ldots$.

Given a convergent algorithm $A$, how do we approximate a point in $A_\infty$, i.e., how can we construct a sequence $\{x_n\}$ such that $x_n \in A_n$, for all $n$, and $\{x_n\}$ converges to some element of $A_\infty$? Such a construction will be called a tie-breaking rule. Of course, an arbitrary choice of $x_n$ in $A_n$ will not do, since the $x_n$ will not converge in general. (However, if they do converge, the limit will be an element of $A_\infty$.) Moreover, the theoretical existence of such a convergent sequence is insufficient for practical purposes. One usually requires a constructive procedure which can be implemented to yield such a sequence.

In this paper, we give such a procedure which depends on the familiar notion of best approximation or nearest point. Specifically, given any point $p$ in $X$, let $x_n$ be a point in $A_n$ which is nearest to $p$. Thus, the $x_n$ are the values at the $A_n$ of a nearest-point selection on $\mathcal{K}(X)$ defined by $p$. The question we ask is the following. If the algorithm $A$ is convergent, under what conditions will the sequence $\{x_n\}$ converge to an element of $A_\infty$? We show that convergence of the algorithm $A$ is equivalent to the desired convergence of the sequence $\{x_n\}$ for all points $p$ in the uniqueness set of $A_\infty$, i.e., the set of points $p$ in $X$ having a unique best approximation in $A_\infty$. In general, the uniqueness set may be difficult to find,

thus making it difficult to choose $p$; in this case our tie-breaking procedure is difficult to implement. However, in many important applications, the uniqueness set of $A_\infty$ is all of $X$. Therefore, in such cases, any point in $X$ may be used as the defining point for a nearest-point selection whose values at the $A_n$ converge to a point in $A_\infty$. (Note that this is in fact the case for convex subsets of a Hilbert space.) We apply these results to a variety of optimization problems.

In section 2, we review the required topological results concerning the Hausdorff metric, the lim inf and lim sup of sets, as well as the connections between them. In section 3, we establish the selection convergence results. Specifically, we show that an algorithm is convergent if and only if the selections canonically defined by continuous real-valued functions on $X$ converge to an element of the algorithm's limit set, provided each such function attains its minimum at a unique point of the limit set. This is equivalent to convergence of the nearest-point selections defined by the uniqueness points of the limit set. We also give a parallel set of conditions involving convergent selections which are equivalent to the non-emptiness of the lim inf of the algorithm. We complete this section by showing that if, in addition, the algorithm consists of convex subsets of a Hilbert space, then its limit set is convex and hence its uniqueness set is the whole space. In section 4, we apply our main results to approximation problems of the following types: (1) solving an infinite system of inequalities via approximate solution of finite subsystems, (2) semi-infinite mathematical programming via approximation by finite subprograms, (3) constrained optimization via grid approximation of the feasible region, and (4) infinite horizon optimization via finite horizon truncations.

2. Topological Preliminaries

For each $x$ in $X$, the mapping $y \mapsto d(x, y)$ is continuous on $X$. Thus, for each $K$ in $\mathcal{K}(X)$, the minimum of $d(x, y)$, for $y$ in $K$, is attained and we may define

$$d(x, K) = \min_{y \in K} d(x, y), \qquad x \in X,\ K \in \mathcal{K}(X).$$

Moreover, for such $K$, the mapping $x \mapsto d(x, K)$ is also continuous on $X$ [9, Thm. 4.2]. Hence, for each $C$ in $\mathcal{K}(X)$, the maximum of $d(x, K)$, for $x$ in $C$, is also attained and we may therefore define

$$h(C, K) = \max_{x \in C} d(x, K), \qquad C, K \in \mathcal{K}(X).$$

Although $h$ is not a metric on $\mathcal{K}(X)$ (it is not symmetric), we may obtain a metric $D$ on $\mathcal{K}(X)$ if we define

$$D(C, K) = \max(h(C, K),\ h(K, C)), \qquad C, K \in \mathcal{K}(X).$$

This is the well-known Hausdorff metric on $\mathcal{K}(X)$ [5, 10, 11]. With this metric, $\mathcal{K}(X)$ is compact [10, 12]. Convergence in $\mathcal{K}(X)$ will be understood to be relative to $D$.

Now let $\{K_n\}$ be an arbitrary sequence in $\mathcal{K}(X)$. As in [5, 10, 11], define $\limsup K_n$ and $\liminf K_n$ as follows:

1. $x \in \limsup K_n$ if and only if there exists a subsequence $\{K_{n_k}\}$ of $\{K_n\}$ and a corresponding sequence $\{x_{n_k}\}$ such that $x_{n_k} \in K_{n_k}$, for all $k$, and $x_{n_k} \to x$, as $k \to \infty$.

2. $x \in \liminf K_n$ if and only if, for each $n$, there exists $x_n \in K_n$ such that $x_n \to x$, as $n \to \infty$.

In general, $\liminf K_n \subseteq \limsup K_n$. Also, $\liminf K_n$ and $\limsup K_n$ are closed subsets of $X$. In fact, $\limsup K_n$ belongs to $\mathcal{K}(X)$, since it must be non-empty. However, $\liminf K_n$ may still be empty.
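For finite subsets of Euclidean space, the quantities just defined are directly computable. The following Python sketch (illustrative only; the set representation and function names are ours, not part of the development above) implements $d(x, K)$, $h(C, K)$, and $D(C, K)$:

import numpy as np

def dist_point_to_set(x, K):
    # d(x, K) = min over y in K of d(x, y)
    return min(np.linalg.norm(x - y) for y in K)

def h(C, K):
    # one-sided distance h(C, K) = max over x in C of d(x, K); not symmetric
    return max(dist_point_to_set(x, K) for x in C)

def D(C, K):
    # Hausdorff metric D(C, K) = max(h(C, K), h(K, C))
    return max(h(C, K), h(K, C))

C = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
K = [np.array([0.0, 0.1]), np.array([0.5, 0.0]), np.array([1.0, -0.1])]
print(h(C, K), h(K, C), D(C, K))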

The next result summarizes the connection between the limit sets just defined and Hausdorff convergence [10, 11, 12].

Theorem 2.1. Let $\{K_n\}$ be a sequence in $\mathcal{K}(X)$ and $K$ an element of $\mathcal{K}(X)$. Then the following are equivalent:

(i) $K_n \to K$ in $\mathcal{K}(X)$ relative to $D$.

(ii) $K = \limsup K_n = \liminf K_n$, i.e., $K \subseteq \liminf K_n$ and $\limsup K_n \subseteq K$.

Corollary 2.2. If $\limsup K_n = \{x\}$, then $\liminf K_n = \{x\}$ also. In this case, $x_n \to x$, as $n \to \infty$, for all choices of $x_n$ in $K_n$, for all $n$.

Proof. Let $x_n \in K_n$, for all $n$. If $x_n \not\to x$, then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ which is bounded away from $x$. Since $X$ is compact, passing to a further subsequence if necessary, we may assume there exists $y$ in $X$ such that $x_{n_k} \to y$, as $k \to \infty$. Consequently, $y \in \limsup K_n$, i.e., $y = x$ by hypothesis, a contradiction. ∎

3. Selection Convergence Results

We define a selection $s$ on $\mathcal{K}(X)$ to be a mapping $s: \mathcal{K}(X) \to X$ satisfying $s(K) \in K$, for all $K$ in $\mathcal{K}(X)$. Note that selections are not required to be continuous. Our objective is to equate convergence of $K_n$ to $K$ in $(\mathcal{K}(X), D)$ with convergence of $s(K_n)$ to $s(K)$ in $(X, d)$ for nearest-point selections $s_p$ corresponding to appropriately chosen points $p$ in $X$. Before we can do this, we need to establish some additional concepts.

Let $f$ be a continuous real-valued function defined on $X$. Define $m_f: \mathcal{K}(X) \to \mathbb{R}$ by

$$m_f(K) = \min_{x \in K} f(x),$$

and $M_f: \mathcal{K}(X) \to \mathcal{K}(X)$ by

$$M_f(K) = \{x \in K : f(x) = m_f(K)\}.$$

Then $m_f(K)$ is the minimum value of $f$ on $K$ and $M_f(K)$ is the compact, non-empty subset of $K$ on which this minimum is attained. (Note that $M_f(K)$ being a singleton is a generalization of $K$ being a singleton.) We then define an $f$-selection to be any selection $s_f$ such that $s_f(K) \in M_f(K)$, for $K \in \mathcal{K}(X)$.
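When $K$ is represented as a finite list of points, $m_f$ and $M_f$ reduce to a minimum and an argmin, and an $f$-selection is any fixed rule returning one point of the argmin. A minimal Python sketch (the representation and tie-breaking rule are our own choices):

import numpy as np

def m_f(f, K):
    # m_f(K): minimum value of f on the finite set K
    return min(f(x) for x in K)

def M_f(f, K):
    # M_f(K): the subset of K on which the minimum is attained
    m = m_f(f, K)
    return [x for x in K if f(x) == m]

def f_selection(f, K):
    # an f-selection: some fixed rule choosing one point of M_f(K);
    # here, residual ties are broken by taking the first point found
    return M_f(f, K)[0]

K = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
f = lambda x: float(np.linalg.norm(x))   # this is f_p with p the origin
print(f_selection(f, K))                 # the point of K nearest the origin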

Now let $p$ be any point in $X$. Define $f_p(x) = d(p, x)$, for $x \in X$. Then $f_p$ is a continuous real-valued function on $X$. Denoting $m_{f_p}$ by $m_p$ and $M_{f_p}$ by $M_p$, we see that $m_p(K)$ is the distance from $p$ to $K$ and $M_p(K)$ is the subset of $K$ where this distance is attained. As above, we will write $s_p$ for $s_{f_p}$. Observe that $s_p(K)$ is any point in $K$ which is nearest to $p$. For this reason, we call $s_p$ a nearest-point selection defined by $p$.

Now fix $K$ in $\mathcal{K}(X)$. If $p$ is such that there exists a unique $x$ in $K$ satisfying $d(p, K) = d(p, x)$, i.e., $M_p(K)$ is a singleton, then we will say that $p$ is a uniqueness point for $K$ (relative to $d$). In this case, $d(p, x) < d(p, y)$, for all $y$ in $K$ different from $x$. Let $U(K)$ denote the uniqueness set for $K$, $K \in \mathcal{K}(X)$. If $p \in U(K)$ and $s_p$ is a nearest-point selection defined by $p$, then $s_p(K)$ is uniquely determined in $K$. In general, $K \subseteq U(K) \subseteq X$, for $K \in \mathcal{K}(X)$. Note also that $U(K)$ being equal to $X$ is a generalization of $K$ being a singleton.

The next lemma shows that, in general, $m_f$ is continuous, while $M_f$ is only partially continuous.

Lemma 3.1. If $f: X \to \mathbb{R}$ is continuous and $K_n \to K_\infty$ in $\mathcal{K}(X)$, as $n \to \infty$, then $m_f(K_n) \to m_f(K_\infty)$, as $n \to \infty$, and $\limsup M_f(K_n) \subseteq M_f(K_\infty)$.

Proof. An application of the (minimum version of the) Maximum Theorem [5, p. 116] yields the convergence of $m_f(K_n)$ to $m_f(K_\infty)$ as well as the upper semi-continuity (in the sense of [5, p. 109]) of the set mapping $F(n) = M_f(K_n)$, $n = \infty, 1, 2, \ldots$, where $\{\infty, 1, 2, \ldots\}$ is viewed as a compact metric space under stereographic projection. By Theorem 6 of [5, p. 112], it follows that the mapping $F$ is also closed. Consequently, $\limsup M_f(K_n) \subseteq M_f(K_\infty)$ by Theorem 4 of [5, p. 111]. ∎
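A quick numeric illustration of Lemma 3.1 (the sets and the function below are our own choices): take $K_n = \{0,\ 1 + 1/n\} \to K_\infty = \{0, 1\}$ in $\mathbb{R}$ and $f(x) = (x - 2)^2$; then $m_f(K_n) \to m_f(K_\infty) = 1$ and $M_f(K_n) \to M_f(K_\infty) = \{1\}$ can be observed directly.

f = lambda x: (x - 2.0) ** 2

def m_f(K):
    return min(f(x) for x in K)

def M_f(K):
    m = m_f(K)
    return [x for x in K if f(x) == m]

K_inf = [0.0, 1.0]
for n in (1, 10, 100, 1000):
    K_n = [0.0, 1.0 + 1.0 / n]
    # m_f(K_n) -> m_f(K_inf) = 1 and M_f(K_n) = {1 + 1/n} -> {1}
    print(n, m_f(K_n), M_f(K_n))
print(m_f(K_inf), M_f(K_inf))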

There is an important special case in which $M_f$ and all $f$-selections are continuous.

Theorem 3.2. Let $K \in \mathcal{K}(X)$ and suppose $f: X \to \mathbb{R}$ is continuous. If $M_f(K)$ is a singleton, then:

(i) The transformation $M_f: \mathcal{K}(X) \to \mathcal{K}(X)$ is continuous at $K$.

(ii) All $f$-selections $s_f: \mathcal{K}(X) \to X$ are continuous at $K$.

Proof. Suppose $K_n \to K$ in $\mathcal{K}(X)$. By Lemma 3.1, $\limsup M_f(K_n) \subseteq M_f(K)$. Since $\limsup M_f(K_n)$ is necessarily non-empty, it must be a singleton by hypothesis. The theorem then follows from Theorem 2.1 and Corollary 2.2 applied to $\{M_f(K_n)\}$. ∎

Corollary 3.3. Let $K$ be an element of $\mathcal{K}(X)$ and $p$ a point in $U(K)$. Then:

(i) The transformation $M_p: \mathcal{K}(X) \to \mathcal{K}(X)$ is continuous at $K$.

(ii) All nearest-point selections $s_p$ defined by $p$ are continuous at $K$.

The following are our main results. The first says that, given a convergent sequence of sets, all nearest-point selections defined by uniqueness points of the limit set converge to a point in the limit set.

Theorem 3.4. Let $\{K_n\}$ be a sequence in $\mathcal{K}(X)$ and $K$ an element of $\mathcal{K}(X)$. Suppose $\limsup K_n \subseteq K$. Then the following are equivalent:

(i) $K \subseteq \liminf K_n$, i.e., $K_n \to K$ relative to $D$ in $\mathcal{K}(X)$, as $n \to \infty$.

(ii) $s_p(K_n) \to s_p(K)$ in $X$, as $n \to \infty$, for all nearest-point selections $s_p$ defined by any $p$ in $U(K)$.

(iii) $s_f(K_n) \to s_f(K)$ in $X$, as $n \to \infty$, for all $f$-selections $s_f$ defined by all continuous functions $f: X \to \mathbb{R}$ for which $M_f(K)$ is a singleton.

(iv) $s(K_n) \to s(K)$ in $X$, as $n \to \infty$, for all selections $s$ which are continuous at $K$.

Proof. (i) implies (iv): Apply Theorem 2.1. (iv) implies (iii): Apply Theorem 3.2. (iii) implies (ii): If $p \in U(K)$, then $M_f(K)$ is a singleton, where $f = f_p$ is continuous. (ii) implies (i): Let $p$ be an element of $K$, so that $p \in U(K)$. By (ii), $s_p(K_n) \to s_p(K)$, as $n \to \infty$, where $s_p(K_n) \in K_n$, for all $n$, and $s_p(K) = p$. Hence, $p \in \liminf K_n$ by definition. ∎
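The equivalences of Theorem 3.4 can be watched numerically. In the following sketch (a construction of our own), $K_n = \{-1 - 1/n,\ 1 + 1/n\} \to K = \{-1, 1\}$ in $\mathbb{R}$; every $p \neq 0$ is a uniqueness point of $K$, and the nearest-point values $s_p(K_n)$ converge to $s_p(K)$, as (i) implies (ii). For $p = 0$, which lies outside $U(K)$, a nearest-point selection need not converge.

def nearest(p, K):
    # s_p(K): a point of the finite set K nearest to p
    # (residual ties broken by first occurrence)
    return min(K, key=lambda x: abs(x - p))

K_inf = [-1.0, 1.0]               # limit set; U(K_inf) consists of all p != 0
for p in (0.3, -0.7):
    for n in (1, 10, 100, 1000):
        K_n = [-1.0 - 1.0 / n, 1.0 + 1.0 / n]   # K_n -> K_inf in the Hausdorff metric
        print(p, n, nearest(p, K_n))
    print(p, nearest(p, K_inf))   # the selected points converge to this value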

Corollary 3.5. If $U(K) = X$, then the following are equivalent:

(i) $K_n \to K$ in $\mathcal{K}(X)$, as $n \to \infty$.

(ii) $s_p(K_n) \to s_p(K)$ in $X$, as $n \to \infty$, for all nearest-point selections $s_p$ defined by any $p$ in $X$.

Analogously, we have the following result on the existence of convergent selections. Intuitively, it is a dual version of Theorem 3.4.

Theorem 3.6. Let $\{K_n\}$ be a sequence in $\mathcal{K}(X)$ and $K$ an element of $\mathcal{K}(X)$. Suppose $\limsup K_n \subseteq K$. Then the following are equivalent:

(i) $\emptyset \neq \liminf K_n$.

(ii) $s_p(K_n) \to s_p(K)$ in $X$, as $n \to \infty$, for all nearest-point selections $s_p$ defined by some $p$ in $U(K)$.

(iii) $s_f(K_n) \to s_f(K)$ in $X$, as $n \to \infty$, for all $f$-selections $s_f$ defined by some continuous function $f: X \to \mathbb{R}$ for which $M_f(K)$ is a singleton.

(iv) $s(K_n) \to s(K)$ in $X$, as $n \to \infty$, for some selection $s$ which is continuous at $K$.

Proof. (i) implies (ii): Let $x$ be an element of $\liminf K_n$. By definition, there exists $x_n$ in $K_n$, for all $n$, such that $x_n \to x$, as $n \to \infty$. By hypothesis, $x \in K$, so that $x \in U(K)$. Let $s_x$ be any nearest-point selection corresponding to $x$. Then $d(x, s_x(K_n)) \to 0$, as $n \to \infty$, since $d(x, s_x(K_n)) \leq d(x, x_n)$, for all $n$. Hence, $s_x(K_n) \to x = s_x(K)$, as $n \to \infty$. (ii) implies (iii): Let $p$ be an element of $U(K)$ and $s_p$ a corresponding nearest-point selection satisfying $s_p(K_n) \to s_p(K)$. As before, define $f(x) = d(p, x)$, $x \in X$. Then $f$ is continuous on $X$, $s_p$ is an $f$-selection, and $M_f(K)$ is a singleton since $p \in U(K)$. (iii) implies (iv): Let $f: X \to \mathbb{R}$ be a continuous function for which $M_f(K)$ is a singleton. Then each $f$-selection $s_f$ has the desired properties (Theorem 3.2(ii)). (iv) implies (i): Let $s$ be a selection which is continuous at $K$ and satisfies $s(K_n) \to s(K)$, as $n \to \infty$. Then $s(K) \in \liminf K_n$ by definition. ∎

Remarks. (1) In the statement of Theorem 3.6, it suffices to assume more generally that

$\limsup K_n \subseteq U(K)$. This is the case, for example, if $U(K) = X$. (2) Obviously, we are interested in approximating a point in $\liminf K_n$. Theorem 3.6 gives conditions under which this can be done. While this result is of theoretical interest, it does not tell us how to construct such an approximation.

The previous results show that in dealing with nearest-point selections, it is essential that the reference point $p$ be a uniqueness point of the relevant limit set. In general, such $p$ may be difficult to find. Thus, it is desirable that the uniqueness set be the entire space, so that any point can be chosen as a reference point. This will be the case, for example, if $K$ is a convex subset of a Hilbert space. Specifically, we have:

Lemma 3.7. If $X$ is a compact subset of a Hilbert space and $K$ is a convex element of $\mathcal{K}(X)$, then $U(K) = X$.

Proof. Follows from [2, p. 15] or [8, p. 23]. ∎

In order to apply this lemma, we need a useful sufficient condition for the limit of compact sets to be convex. The following lemma gives us this condition.

Lemma 3.8. Suppose $X$ is a compact subset of a linear metric space. Let $\{K_n\}$ be a sequence in $\mathcal{K}(X)$ having the property that each $K_n$ is convex, $n = 1, 2, \ldots$. Then $\liminf K_n$ is also convex. Consequently, the limits of sequences of convex elements of $\mathcal{K}(X)$ are also convex.

Proof. Let $x, y \in \liminf K_n$ and $0 < \alpha < 1$. Then there exist sequences $\{x_n\}$ and $\{y_n\}$ such that $x_n, y_n \in K_n$, for all $n$, $x_n \to x$ and $y_n \to y$, as $n \to \infty$. Also, $\alpha x_n + (1 - \alpha) y_n \in K_n$, for all $n$, by hypothesis, and $\alpha x_n + (1 - \alpha) y_n \to \alpha x + (1 - \alpha) y$, which must therefore be an element of $\liminf K_n$. Thus, $\liminf K_n$ is convex. The remaining statement follows from Theorem 2.1. ∎
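Lemmas 3.7 and 3.8 are easy to visualize numerically: nearest points in a convex set are unique, while a non-convex set can offer several. A small Python sketch with sets of our own choosing (the convex set is sampled finitely, so the projection is only approximate):

import numpy as np

def nearest_points(p, K, tol=1e-9):
    # all points of the finite set K at (numerically) minimal distance from p
    dists = [np.linalg.norm(p - x) for x in K]
    m = min(dists)
    return [x for x, dx in zip(K, dists) if dx - m < tol]

p = np.array([0.0, 0.0])
# a dense sample of a convex set (a horizontal segment): unique nearest point
segment = [np.array([t, 1.0]) for t in np.linspace(-1.0, 1.0, 2001)]
# a non-convex set consisting of two points equidistant from p
pair = [np.array([-1.0, 1.0]), np.array([1.0, 1.0])]
print(nearest_points(p, segment))      # exactly one point, (0, 1)
print(len(nearest_points(p, pair)))    # 2: p is not a uniqueness point here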

We conclude this section with several illustrative examples.

Example 3.9. Let $(X, d)$ be an arbitrary compact metric space with at least two distinct points. Suppose $x_i \to x$ and $y_i \to y$ in $X$, as $i \to \infty$, where $x \neq y$. Define $K_i = \{x_i, y_i\}$, $i = 1, 2, \ldots$, and $K = \{x, y\}$. Then it is easy to see that $K \subseteq \liminf K_i$ and $\limsup K_i \subseteq K$, so that they are equal. Hence, $K_i \to K$ in $\mathcal{K}(X)$, as $i \to \infty$. Moreover,

$$U(K) = \{z \in X : d(x, z) \neq d(y, z)\}.$$

Let $p$ be any element of $U(K)$. Without loss of generality, assume $d(p, x) < d(p, y)$, so that $s_p(K) = x$. Then $s_p(K_i) = x_i$ eventually and $s_p(K_i) \to s_p(K)$, as $i \to \infty$.

Example 3.10. Let $X$ be any compact subset of $\mathbb{R}^2$ which contains the unit disk. For each $n$, let $K_n$ be the ellipse given by $x^2 + n^2 y^2 = 1$, $n = 1, 2, \ldots$. Then $K_n$ is a compact subset of $X$, $n = 1, 2, \ldots$, i.e., $\{K_n\}$ is a sequence in $\mathcal{K}(X)$, whose limit $K$ is easily seen to be the compact, convex interval $\{(x, 0) : |x| \leq 1\}$ in $\mathbb{R}^2$ [10, p. 169]. Moreover, it is also clear that the uniqueness set of $K$ is all of $X$. Thus, for any $p$ in $X$ and any nearest-point selection $s_p$ defined by $p$, $s_p(K_n) \to s_p(K)$ in $\mathbb{R}^2$, as $n \to \infty$.
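Example 3.10 can be checked by direct computation; in the sketch below, the ellipses and the limiting segment are sampled finitely, so the Hausdorff distances are only approximate:

import numpy as np

def hausdorff(C, K):
    h = lambda A, B: max(min(np.linalg.norm(a - b) for b in B) for a in A)
    return max(h(C, K), h(K, C))

segment = [np.array([x, 0.0]) for x in np.linspace(-1.0, 1.0, 400)]
for n in (1, 2, 5, 10, 50):
    s = np.linspace(0.0, 2.0 * np.pi, 400)
    ellipse = [np.array([np.cos(t), np.sin(t) / n]) for t in s]  # x^2 + n^2 y^2 = 1
    print(n, hausdorff(ellipse, segment))   # tends to 0, roughly like 1/n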

Example 3.11. Let $X$ denote the product of countably many copies of the interval $[-1, 1]$. The compact product topology is metrizable by the metric $d$ given by

$$d(x, y) = \sum_{i=1}^{\infty} |x_i - y_i| / 2^i, \qquad x, y \in X,$$

where $x = (x_i)$, $y = (y_i)$. Let $0 < \alpha < 1$ and consider the following infinite horizon mathematical program:

$$(\mathrm{MP}) \qquad \max \sum_{i=1}^{\infty} \alpha^i x_i^2 \quad \text{subject to} \quad |x_i| \leq 1,\ i = 1, 2, \ldots.$$

Let $X^*$ denote the set of optimal solutions to (MP). It is easy to see that

$$X^* = \{x \in X : x_i = \pm 1, \text{ all } i\},$$

so that $X^*$ is uncountable. One interpretation of this is that for every discount factor $\alpha$, there exist uncountably many infinite horizon optima for this problem. Now let $N$ be a positive integer and consider the following $N$-horizon version of (MP):

$$(\mathrm{MP})_N \qquad \max \sum_{i=1}^{N} \alpha^i x_i^2 \quad \text{subject to} \quad |x_i| \leq 1,\ i = 1, 2, \ldots, N.$$

If $X^*_N$ denotes the set of optimal solutions to $(\mathrm{MP})_N$, then it is obvious that

$$X^*_N = \{x \in X : x_i = \pm 1,\ 1 \leq i \leq N\}, \qquad N = 1, 2, \ldots.$$

Thus, $X^*_1 \supseteq X^*_2 \supseteq \cdots \supseteq X^*_N \supseteq \cdots \supseteq X^*$ and

$$X^* = \bigcap_{N=1}^{\infty} X^*_N.$$

It then follows [11, p. 339] that $X^* = \liminf X^*_N = \limsup X^*_N$, so that $X^*_N \to X^*$ in $\mathcal{K}(X)$, as $N \to \infty$. In fact, one may verify that $D(X^*_N, X^*) \leq 2^{-N}$, $N = 1, 2, \ldots$. The uniqueness set of $X^*$ is given by

$$U(X^*) = \{x \in X : x_i \neq 0, \text{ all } i\}.$$

Let $p$ be an element of $U(X^*)$. Then a nearest-point selection $s_p$ defined by $p$ satisfies

$$s_p(X^*)_i = \begin{cases} 1, & \text{if } p_i > 0, \\ -1, & \text{if } p_i < 0, \end{cases}$$

and

$$s_p(X^*_N)_i = \begin{cases} 1, & \text{if } 1 \leq i \leq N \text{ and } p_i > 0, \\ -1, & \text{if } 1 \leq i \leq N \text{ and } p_i < 0, \\ p_i, & \text{if } i > N. \end{cases}$$

Note that $s_p(X^*_N)$ is the unique element of $X^*_N$ having the property that $d(p, X^*_N) = d(p, s_p(X^*_N))$, $N = 1, 2, \ldots$. Of course, $s_p(X^*_N) \to s_p(X^*)$, as $N \to \infty$, as required by Theorem 3.4. In fact, we may verify that $d(s_p(X^*_N), s_p(X^*)) \leq 2^{-N}$, $N = 1, 2, \ldots$.
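The formulas of Example 3.11 can be verified numerically on a truncation of the sequence space (a sketch; the horizon $T$ and the point $p$ below are our own choices):

import numpy as np

T = 12                                      # truncation horizon (our choice)
p = np.array([0.4, -0.2, 0.7, -0.9] * 3)    # p_i != 0 for all i, so p lies in U(X*)

def d(x, y):
    # the metric of Example 3.11, truncated to T coordinates
    return sum(abs(x[i] - y[i]) / 2.0 ** (i + 1) for i in range(T))

def s_p_X_N(N):
    # s_p(X*_N): sign(p_i) in coordinates i <= N, p_i afterwards
    x = p.copy()
    x[:N] = np.sign(p[:N])
    return x

s_p_X_star = np.sign(p)                     # s_p(X*): the sign pattern of p
for N in (1, 3, 6, 9):
    # the distance is bounded by 2^(-N), as claimed
    print(N, d(s_p_X_N(N), s_p_X_star), 2.0 ** (-N))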

4. Applications

We are now ready to apply our main results on selection convergence. The following result will be very useful in this section. As in section 1, let $A$ be an algorithm in $X$, where we assume that $X$ is a compact subset of some Hilbert space.

Theorem 4.1. Suppose each $A_n$ is convex, $n = 1, 2, \ldots$, and $A$ is convergent to $A_\infty$. Then for each $p$ in $X$, we have $s_p(A_n) \to s_p(A_\infty)$ in $X$, as $n \to \infty$, where $s_p$ is the unique nearest-point selection corresponding to $p$.

Proof. Since $\{A_n\}$ converges in $\mathcal{K}(X)$, it follows that $A_\infty$ is also convex (Lemma 3.8). Hence, for each $n = \infty, 1, 2, \ldots$, $A_n$ contains a unique nearest point to any $p$ in $X$ (Lemma 3.7), i.e., the uniqueness set of $A_n$ is $X$. Thus, for each $p$ in $X$, there exists a unique nearest-point selection defined by $p$. It follows from Corollary 3.5 that $s_p(A_n) \to s_p(A_\infty)$, as $n \to \infty$. ∎

4.1 Systems of Inequalities

Consider the following problem of finding a solution to an infinite system of inequalities: we seek $x = (x_1, \ldots, x_n)$ such that

$$g_i(x) \leq b_i, \qquad i = 1, 2, \ldots,$$

where each $g_i$ is a real-valued continuous function of $n$ real variables, and $u_j \leq x_j \leq v_j$, $j = 1, \ldots, n$ [6, 7]. Let $X = \prod_{j=1}^{n} [u_j, v_j]$, so that $X$ is a compact, convex subset of $\mathbb{R}^n$. For each $N = 1, 2, \ldots$, define

$$K_N = \{x \in X : g_i(x) \leq b_i,\ i = 1, \ldots, N\}.$$

Then $K_N$ is the set of solutions in $X$ to the first $N$ inequalities. Moreover:

(i) Each $K_N$ is a compact subset of $X$.

(ii) The $K_N$ are decreasing, i.e., $K_{N+1} \subseteq K_N$, $N = 1, 2, \ldots$.

(iii) The set $K_\infty$ of all solutions to the original problem is equal to $\bigcap_{N=1}^{\infty} K_N$.

Theorem 4.2. Suppose $K_\infty \neq \emptyset$, so that $K_N \in \mathcal{K}(X)$, for all $N$. Suppose also that $g_i$ is convex, $i = 1, 2, \ldots$. If $p$ is any point in $X$, then the sequence of points $\{x_N\}$, where $x_N$ is the solution to the first $N$ inequalities nearest to $p$, converges to the solution of the system of inequalities which is nearest to $p$.

Proof. By hypothesis, each $K_N$ is also convex, $N = 1, 2, \ldots$. Moreover, $K_N \to K_\infty$ in $\mathcal{K}(X)$ relative to the Hausdorff metric [11, p. 339]. Now apply Theorem 4.1 to complete the proof. ∎

Remark. Without loss of generality, we may assume the origin is in $X$. Thus, in particular, we may choose $p$ to be the origin in $\mathbb{R}^n$. Then the sequence of points in the $K_N$ closest to the origin converges to the solution of the original problem which is closest to the origin (i.e., the solution of minimum norm).
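A sketch of the procedure of Theorem 4.2 on a one-dimensional toy system of our own devising, with the projection computed by brute force over a fine grid:

import numpy as np

# Toy instance (our choice): X = [0, 3], convex constraints
# g_i(x) = x - 1/i <= 1, so K_N = [0, 1 + 1/N] and K_inf = [0, 1].
X = np.linspace(0.0, 3.0, 30001)             # a fine grid standing in for X
p = 2.0                                       # the reference point

for N in (1, 5, 25, 125):
    feasible = np.ones_like(X, dtype=bool)
    for i in range(1, N + 1):
        feasible &= (X - 1.0 / i) <= 1.0      # the i-th inequality
    K_N = X[feasible]
    x_N = K_N[np.argmin(np.abs(K_N - p))]     # point of K_N nearest to p
    print(N, x_N)                             # converges to 1.0, the nearest solution in K_inf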

4.2 Semi-infinite Programming

Consider the following semi-infinite, convex mathematical program:

$$(\mathrm{P}) \qquad \max\ c(x_1, \ldots, x_n) \quad \text{subject to} \quad \sum_{j=1}^{n} a_{ij}(x_j) \leq b_i, \quad i = 1, 2, \ldots, \qquad u_j \leq x_j \leq v_j, \quad j = 1, \ldots, n,$$

where $-c$ and each $a_{ij}$ are continuous and convex, $j = 1, \ldots, n$, $i = 1, 2, \ldots$ [1, p. 66]. Define $X$ as in the previous problem and let $K$ denote the feasible region of (P). As above, $K$ is a compact, convex subset of $X$. Assume $K \neq \emptyset$. For each $N = 1, 2, \ldots$, consider the finite subprogram given by:

$$(\mathrm{P}_N) \qquad \max\ c(x_1, \ldots, x_n) \quad \text{subject to} \quad \sum_{j=1}^{n} a_{ij}(x_j) \leq b_i, \quad i = 1, \ldots, N, \qquad u_j \leq x_j \leq v_j, \quad j = 1, \ldots, n.$$

If $K_N$ denotes the corresponding feasible region, then the $K_N$ are as in the previous application. In particular, $K_N \to K$ in $\mathcal{K}(X)$, as $N \to \infty$. Let

$$c^*_N = \max\{c(x) : x \in K_N\}, \qquad N = 1, 2, \ldots,$$

and

$$c^* = \max\{c(x) : x \in K\}.$$

Then, by Lemma 3.1, we have $c^*_N \to c^*$, as $N \to \infty$, i.e., value convergence holds. Also, let

$$K^*_N = \{x \in K_N : c(x) = c^*_N\}, \qquad N = 1, 2, \ldots,$$

and

$$K^* = \{x \in K : c(x) = c^*\}.$$

Then $K^*$ and each $K^*_N$ are compact, non-empty subsets of $X$, for $N = 1, 2, \ldots$, and in addition, $\limsup K^*_N \subseteq K^*$ (Lemma 3.1), i.e., solution convergence holds partially in general. If (P) has a unique solution, i.e., $K^*$ is a singleton, then $K^*_N \to K^*$, as $N \to \infty$, by Corollary 2.2.
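Value convergence $c^*_N \to c^*$ is easy to observe on a toy instance (ours), again using a fine grid in place of an exact solver:

import numpy as np

# Toy semi-infinite program (our choice): max c(x) = x over X = [0, 3]
# subject to (1 - 1/i) x <= 1, i = 1, 2, ...; here c*_N = N/(N - 1) and c* = 1.
X = np.linspace(0.0, 3.0, 30001)
for N in (2, 5, 25, 125):
    feasible = np.ones_like(X, dtype=bool)
    for i in range(1, N + 1):
        feasible &= (1.0 - 1.0 / i) * X <= 1.0
    print(N, X[feasible].max())              # c*_N decreases to c* = 1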

Theorem 4.3. Suppose $A$ is an algorithm for the $(\mathrm{P}_N)$ for which $A_N$ is a non-empty, compact, convex subset of $K^*_N$, $N = 1, 2, \ldots$. If $A$ converges, then for each point $p$ in $X$, the sequence $\{x_N\}$, where $x_N$ is the solution to $(\mathrm{P}_N)$ in $A_N$ nearest to $p$, converges to an optimal solution to (P). In particular, this is true if $p$ is the origin in $\mathbb{R}^n$.

Proof. By hypothesis, there exists $A_\infty$ in $\mathcal{K}(X)$ such that $\limsup A_N = \liminf A_N = A_\infty$. Since $A_N \subseteq K^*_N$, $N = 1, 2, \ldots$, it follows that $A_\infty \subseteq \limsup K^*_N$ [5, p. 121], i.e., $A_\infty \subseteq K^*$. Now apply Theorem 4.1 and the succeeding remark. ∎

Remark. If (P) has a unique solution, then the theorem is valid for $x_N$ any element of $A_N$, $N = 1, 2, \ldots$.

4.3 Non-convex Optimization

Consider the optimization problem

$$(\mathrm{P}) \qquad \max_{x \in K} f(x),$$

where $K$ is a non-empty, compact subset of $m$-dimensional Euclidean space $\mathbb{R}^m$ and $f$ is a continuous function of $m$ real variables [3]. Suppose we try to solve this problem by the following grid-approximation technique. For convenience, let $X$ be any compact subset of $\mathbb{R}^m$ satisfying $K \subseteq X$. Assume also that $K$ is the closure of its interior $K^\circ$. For each $n = 1, 2, \ldots$, let $Z_n = \{k/n : k \text{ an integer}\}$, $G_n = Z_n \times \cdots \times Z_n$ ($m$ times), and $K_n = K \cap G_n$. Then $K_n$ is a finite subset of $K$ which is eventually non-empty, since $K^\circ$ is non-empty. Thus, $\{K_n\}$ is a sequence in $\mathcal{K}(X)$, for sufficiently large $n$.
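Here is a Python sketch of the grid scheme on a concrete instance of our own choosing: $K$ an annulus (non-convex, and the closure of its interior) and $f(x, y) = x + y$, with maximum $f^* = \sqrt{2}$ attained at $(1/\sqrt{2},\ 1/\sqrt{2})$.

import numpy as np

f = lambda x, y: x + y

def grid_optimum(n):
    # maximize f over K_n = K intersected with the grid G_n of mesh 1/n,
    # where K = {(x, y) : 1/4 <= x^2 + y^2 <= 1} (a non-convex annulus)
    best, best_val = None, -np.inf
    for k in range(-n, n + 1):
        for l in range(-n, n + 1):
            x, y = k / n, l / n
            r2 = x * x + y * y
            if 0.25 <= r2 <= 1.0 and f(x, y) > best_val:
                best, best_val = (x, y), f(x, y)
    return best_val, best

for n in (2, 10, 50, 250):
    print(n, grid_optimum(n))               # values approach f* = sqrt(2) ~ 1.41421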

Lemma 4.4. The sequence $\{K_n\}$ converges to $K$ in $\mathcal{K}(X)$ relative to the Hausdorff metric.

Proof. Since $K_n \subseteq K$, for all $n$, it is clear that $\limsup K_n \subseteq K$. Suppose $x$ is an element of $K^\circ$ and $V$ is an arbitrary neighborhood of $x$ contained in $K^\circ$. It is easy to see that $V$ is eventually intersected by the $K_n$, i.e., $x \in \liminf K_n$ [11, p. 335]. Thus, $K^\circ \subseteq \liminf K_n$, which implies that $K \subseteq \liminf K_n$, since $K$ is the closure of $K^\circ$. The result then follows from Theorem 2.1. ∎

Let $K^*$ denote the non-empty, compact set of optimal solutions to (P) and $f^*$ the optimal objective value. Also let $K^*_n$ denote the set of optimal solutions to the finite approximation problem

$$(\mathrm{P}_n) \qquad \max_{x \in K_n} f(x),$$

and $f^*_n$ the corresponding optimal objective value, $n = 1, 2, \ldots$. (These are well defined for sufficiently large $n$.) Note that $K^*_n$ is a finite, (eventually) non-empty subset of $K_n$, $n = 1, 2, \ldots$, i.e., $\{K^*_n\}$ is a sequence in $\mathcal{K}(X)$, for large $n$.

Theorem 4.5. The sequence $\{f^*_n\}$ converges to $f^*$. Also, $\limsup K^*_n \subseteq K^*$. Moreover, if $K^*$ is a singleton $\{x^*\}$, then any selection of $x^*_n$ in $K^*_n$, $n = 1, 2, \ldots$, converges to $x^*$.

Proof. Follows from Lemma 4.4, Lemma 3.1, and Theorem 3.2. ∎

Remark. A sufficient (but not necessary) condition for $K^*$ to be a singleton is that $K$ be convex and $f$ be strictly concave.

Corollary 4.6. If $K$ is convex, then for any $p$ in $X$, any sequence of solutions to the problems $(\mathrm{P}_n)$ closest to $p$ converges to a solution of (P).

4.4 Discrete Infinite Horizon Optimization

Consider an infinite sequential decision problem where the $j$th decision is to be chosen from the finite set $\{0, 1, \ldots, M\}$ (see [4]). An infinite sequence of such decisions will be called a strategy. (It is assumed that all strategies extend over the infinite time horizon.) In particular, let $\theta = (0, 0, \ldots)$. The strategy space $Y$ is then the product of countably many copies of the given decision set; it is a compact Hausdorff space relative to the product topology. If we fix $0 < \beta < 1$, then $Y$ is also a metric space with metric given by

$$d_\beta(x, y) = \sum_{j=1}^{\infty} \beta^j |x_j - y_j|, \qquad x, y \in Y.$$

In general, not all strategies are feasible. Thus, we will assume there exists a closed, non-empty subset $X$ of $Y$ consisting of the feasible strategies. If $\beta < 1/(M+1)$, then Ryan, Bean, and Smith [14] have shown that $\theta$ is in the uniqueness set of $K$, for $K$ any element of $\mathcal{K}(X)$. Moreover, they also show that for each $K$ in $\mathcal{K}(X)$, the unique element $s_\theta(K)$ of $K$ closest to $\theta$ is the lexicographic minimum of $K$ under the canonical lexicographic ordering of the elements of $Y$.

Suppose there is a cumulative net cost function associated with each strategy. In order to compare costs over a finite or infinite horizon, we continuously discount them to time zero relative to a suitable interest rate. Let $X^*$ denote the subset of $X$ consisting of those feasible strategies having minimum discounted infinite horizon cost. Assume $X^*$ is non-empty and closed. Likewise, for $T > 0$, let $X^*(T)$ denote the subset of $X$ consisting of those feasible strategies having minimum discounted $T$-horizon cost. As above, assume each $X^*(T)$ is non-empty and closed. Then $X^*$ is an element of $\mathcal{K}(X)$ and $\{X^*(T) : T > 0\}$ is a generalized sequence in $\mathcal{K}(X)$. (Note that the results of sections 2 and 3 are valid for sets indexed by $T > 0$; we omit the details.) Let $D_\beta$ be the Hausdorff metric on $\mathcal{K}(X)$ corresponding to $d_\beta$. Application of Theorem 3.4 yields:

Theorem 4.8. Suppose $\beta < 1/(M+1)$. If $X^*(T) \to X^*$ in $\mathcal{K}(X)$, as $T \to \infty$, relative to $D_\beta$, then the generalized sequence of lexicographic minima of the $X^*(T)$ converges to the lexicographic minimum of $X^*$.
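The identification of $s_\theta(K)$ with the lexicographic minimum can be checked on truncated strategy spaces. A sketch (the horizon, the value of $\beta$, and the feasible set are our own choices, with $\beta < 1/(M+1)$ as the theorem requires):

import itertools

M, T = 2, 6                       # decisions in {0, 1, 2}; horizon T (our choices)
beta = 1.0 / (M + 2)              # beta = 1/4 < 1/(M + 1), as required

def d_beta(x, y):
    # d_beta(x, y) = sum over j >= 1 of beta^j |x_j - y_j|
    return sum(beta ** (j + 1) * abs(a - b) for j, (a, b) in enumerate(zip(x, y)))

theta = (0,) * T
# an arbitrary closed set of feasible strategies: coordinates summing to 5
K = [x for x in itertools.product(range(M + 1), repeat=T) if sum(x) == 5]

nearest_to_theta = min(K, key=lambda x: d_beta(x, theta))
lex_min = min(K)                  # tuple comparison in Python is lexicographic
print(nearest_to_theta == lex_min, lex_min)   # True (0, 0, 0, 1, 2, 2)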

Remarks. (1) In the presence of Hausdorff convergence of the finite horizon optimal solution sets, the previous theorem yields a tie-breaking algorithm for approximating an infinite horizon optimum by finite horizon optima. (2) In [15], Shapiro and Wagner considered an infinite horizon version of the knapsack problem. Ryan, Bean, and Smith [14] have shown that Hausdorff convergence holds in this case. Hence, this problem provides an example where Theorem 4.8 applies.

REFERENCES

1. E. Anderson and P. Nash, Linear Programming Over Infinite-Dimensional Spaces, Wiley, New York, 1987.
2. J.-P. Aubin, Applied Functional Analysis, Wiley, New York, 1979.
3. M. Avriel, Nonlinear Programming: Analysis and Methods, Prentice-Hall, Englewood Cliffs, 1976.
4. J. Bean and R. L. Smith, "Conditions for the Existence of Planning Horizons," Mathematics of Operations Research 9, 391-401, 1984.
5. C. Berge, Topological Spaces, Oliver and Boyd, London, 1963.
6. L. Bregman, "The Method of Successive Projection for Finding a Common Point of Convex Sets," Soviet Mathematics Doklady 6, 688-692, 1965.
7. L. Bregman, "The Relaxation Method of Finding the Common Point of Convex Sets and its Application to the Solution of Problems in Convex Programming," U.S.S.R. Computational Mathematics and Mathematical Physics 3, 200-217, 1967.
8. E. W. Cheney, Introduction to Approximation Theory, McGraw-Hill, New York, 1966.
9. J. Dugundji, Topology, Allyn and Bacon, Boston, 1966.
10. F. Hausdorff, Set Theory, 2nd ed., Chelsea, New York, 1962.
11. K. Kuratowski, Topologie I, Academic Press, New York, 1966.
12. K. Kuratowski, Topologie II, Academic Press, New York, 1968.

13. D. G. Luenberger, Linear and Nonlinear Programming, 2nd ed., Addison-Wesley, Reading, 1984.
14. S. Ryan, J. C. Bean, and R. L. Smith, "A Tie-Breaking Algorithm for Discrete Infinite Horizon Optimization," Working Paper, Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109, 1988.
15. J. F. Shapiro and H. M. Wagner, "A Finite Renewal Algorithm for the Knapsack and Turnpike Models," Operations Research 15(2), 319-341, 1967.