Solution Existence for Infinite Quadratic Programming
I. E. Schochetman, Dept. of Mathematical Sciences, Oakland University, Rochester, MI
Robert L. Smith, Dept. of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109
S. K. Tsui, Dept. of Mathematical Sciences, Oakland University, Rochester, MI
Technical Report 97-10
June 1997

Solution Existence For Infinite Quadratic Programming
I. E. SCHOCHETMAN, Department of Mathematical Sciences, Oakland University, Rochester, Michigan
R. L. SMITH, Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan
S. K. TSUI, Department of Mathematical Sciences, Oakland University, Rochester, Michigan

ABSTRACT

We consider an infinite quadratic programming problem with positive semi-definite quadratic costs, equality constraints and unbounded variables. Sufficient conditions are given for there to exist an optimal solution. Specifically, we require that (1) the cost operator be strictly positive definite when restricted to the orthogonal complement of its kernel, and (2) the constraint operator have closed range when restricted to the kernel of the cost operator. Condition (1) is shown to be equivalent to the spectrum of the restricted cost operator being bounded away from zero. Similarly, condition (2) is equivalent to the minimum modulus of the restricted constraint operator being positive. In the presence of separability, we give a sufficient condition for (2) to hold in terms of finite dimensional truncations of the restricted constraint operator. We apply our results to a broad class of infinite horizon optimization problems. In this setting, the finite dimensional truncations can be considered to be finite dimensional approximations to our problem whose limit, in a somewhat formal sense, is our infinite dimensional problem. Each of these approximations has properties (1) and (2) by virtue of its finite-dimensionality, i.e., each admits an optimal solution. However, our infinite dimensional problem may not. Thus, we give sufficient conditions for our problem to also admit an optimal solution. Finally, we illustrate this application in the case of an infinite horizon LQ regulator problem (a production planning problem).

1 INTRODUCTION

We consider the infinite quadratic programming problem (Q) given by:
$$(Q)\qquad \min\ \langle x, Qx \rangle \quad \text{subject to} \quad Ax = b,$$

$x \in H$, where $H$ and $M$ are separable, real Hilbert spaces, the constraint operator $A\colon H \to M$ is a bounded linear operator, $b \in M$, and the cost operator $Q\colon H \to H$ is a non-zero, (self-adjoint) positive semi-definite, bounded linear operator. Recall that $Q$ is positive semi-definite if $\langle x, Qx \rangle \ge 0$, $\forall x \in H$, and that $Q$ is positive definite if $\langle x, Qx \rangle > 0$, $\forall x \in H$, $x \neq 0$. In general, the quadratic function $\langle x, Qx \rangle$ evaluated on the linear manifold defined by $Ax = b$ is either unbounded below, or else $Q$ is positive semi-definite on this manifold. Moreover, in the latter case, if the manifold is also finite-dimensional, then the minimum is attained. We assume that problem (Q) is non-trivial in that there exists a feasible solution, i.e., $b$ is in the range of $A$.

Our objective in this paper is to give sufficient conditions on the problem data $Q$, $A$, $b$ to guarantee the existence of an optimal solution to (Q). (Uniqueness is not an issue here since, in general, multiple optimal solutions will exist in this context.) For convenience, we let $K = \ker(Q)$ denote the kernel of $Q$ in $H$ and $L = K^{\perp}$ its orthogonal complement in $H$, so that $H = K \oplus L$. Our sufficient conditions require that the operator restriction $Q|_L$ of $Q$ to $L$ be strictly positive definite (see section 2), i.e., coercive, and that the operator restriction $A|_K$ of $A$ to $K$ have closed range in $M$. The strictly positive definite property is equivalent to requiring that the non-zero spectrum of $Q$ be bounded away from zero, and the closed range property is equivalent to requiring that the minimum modulus (Goldberg, 1966) of the restriction $A|_K$ be positive.

As a special case, problem (Q) includes infinite horizon, discrete-time, linear-quadratic programming. This class of problems includes the important time-varying, discrete-time LQ tracker and regulator problems (Lewis, 1986). Our objective in these cases is to give sufficient conditions, in terms of the time-staged data over increasing finite horizons, for there to exist an optimal solution. Here, decisions, costs and constraints are specified (and vary) over an infinite, discrete time-line. By assumption, the operator restrictions referred to above (automatically) have, in each period, positive spectra bounded away from zero and positive minimum moduli, respectively. (Note that the positive spectrum of a finite-dimensional operator is the set of positive eigenvalues.) The same is true of the finite direct sums of these restrictions over finite horizon approximations. The problem is that these conditions may not hold for the direct sums of these operator restrictions over the infinite horizon. We give sufficient conditions for this to occur, i.e., for $Q|_L$ to have positive spectrum bounded away from zero, and for $A|_K$ to have positive minimum modulus, in terms of increasing finite-horizon approximations of these operators.

In Schochetman, Smith and Tsui (1995), the authors considered the solution existence question for a problem somewhat similar to (Q). There, the operator analogous to $Q$ above was assumed to be a direct sum $\oplus_{j=1}^{\infty} Q_j$ of positive definite matrices $Q_j$. The (possibly unbounded) operator $Q$ itself need not be positive definite. We showed that if the (positive) eigenvalues of the $Q_j$ are bounded away from 0, then the optimization problem admits a (unique) solution. Even if $Q$ is a bounded operator which is positive definite, but not strictly positive definite (i.e., coercive), the optimization problem may fail to admit an optimal solution. See Dontchev and Zolezzi (1992, Theorem 32).
For such positive definite $Q$, we have $K = \{0\}$, so that $H = L$ and $A|_K = 0$, which obviously has closed range. Thus, the current paper can be viewed as an extension of Schochetman, Smith and Tsui (1995) to the more general case where $Q$ is only positive semi-definite, i.e., where

$A|_K$ is non-trivial.

In section 2, we give sufficient conditions for problem (Q) to have an optimal solution. In particular, we show that if the restriction $A|_K$ has closed range in $M$ and the (positive definite) restriction $Q|_L$ is strictly positive definite, then there exists an optimal solution for (Q). We then give an equivalent condition for $A|_K$ to have the closed range property. If $M$ is separable, we obtain a sufficient condition for range closure in terms of finite-dimensional truncations of $A|_K$, i.e., in terms of finite horizon approximations to $A|_K$. (These responses are based on the relevant operator-theoretic results established in the Appendix.) We also give equivalent conditions for $Q|_L$ to be strictly positive definite in terms of the finite horizon approximations to $Q|_L$.

In section 3, we apply our main results to an infinite horizon, discrete-time, non-stationary, linear quadratic programming problem with positive semi-definite cost matrices and lower-staircase constraint structure. We obtain sufficient conditions for such a problem to admit an optimal solution. In section 4, we illustrate these results in a special case of the infinite horizon linear-quadratic regulator problem (a production planning problem).

2 OPTIMAL SOLUTION EXISTENCE

In this section, we establish sufficient conditions for (Q) to admit an optimal solution. Since $Q$ is positive semi-definite, it is not difficult to see that its kernel $K$ is given by
$$K = \{x \in H : \langle x, Qx \rangle = 0\}.$$
Since $H = K \oplus L$, we may let $E_K$ (resp. $E_L$) denote the orthogonal projection of $H$ onto $K$ (resp. $L$). Moreover, since $Q$ is self-adjoint, it follows that $K$ and $L$ are invariant under $Q$. Hence, $Q$ decomposes as $0 \oplus P$, where $0$ is the zero operator on $K$ and $P\colon L \to L$ is the restriction $Q|_L$. Note that $P$ is a positive definite, bounded linear operator on $L$.

Now let $F$ denote the feasible region for (Q), i.e.,
$$F = \{x \in H : Ax = b\} = \{\eta + \xi : \eta \in K,\ \xi \in L,\ A(\eta + \xi) = b\},$$
so that (Q) becomes
$$(Q)\qquad \min_{x \in F}\ \langle x, Qx \rangle.$$
To avoid trivialities, we suppose that $F \neq \emptyset$. Also let
$$F_K = \{\eta \in K : \eta + \xi \in F,\ \text{for some } \xi \in L\} = E_K(F),$$
i.e., $F_K$ is the image of $F$ under $E_K$. It is non-empty and convex in $K$, since this is the case for $F$ in $H$. It is also true that $F$ is closed in $H$; however, $F_K$ need not be closed in $K$. Analogously, let
$$F_L = \{\xi \in L : \eta + \xi \in F,\ \text{for some } \eta \in K\} = E_L(F),$$

the image of $F$ under $E_L$. As with $F_K$, the set $F_L$ is non-empty and convex, but not necessarily closed in $L$. Moreover, $F \subseteq F_K \oplus F_L$. We may now consider the following related problem (P):
$$(P)\qquad \min_{\xi \in F_L}\ \langle \xi, P\xi \rangle,$$
where, as we have seen, $P$ is positive definite on $L$ and $F_L$ is a non-empty, convex subset of $L$. Moreover, $\langle \xi, P\xi \rangle = \langle x, Qx \rangle$, for all $\xi \in L$, $\eta \in K$ and $x = \eta + \xi$. Note that solving (P) is equivalent to solving (Q) in the following sense. If $\xi^*$ is an optimal solution to (P), then there exists $\eta^* \in F_K$ such that $x^* = \eta^* + \xi^* \in F$ and $x^*$ is optimal for (Q). Conversely, if $x^*$ is optimal for (Q), then $x^* = \eta^* + \xi^*$, for $\eta^* \in F_K$ and $\xi^* \in F_L$, where $\xi^*$ is optimal for (P).

Given the formulation of problem (P), it is desirable to know when $F_L$ is closed in $L$. We have the following sufficient condition.

LEMMA 2.1. If $A|_K$ has closed range in $M$, then $F_L$ is closed in $L$.

PROOF. For the remainder of this paper, it will be convenient to denote $A|_K$ by $B$. Let $\{\xi_n\}$ be a sequence in $F_L$ and $\xi \in L$ such that $\xi_n \to \xi$, as $n \to \infty$. By definition of $F_L$, for each $n$, there exists $\eta_n \in K$ such that $\eta_n + \xi_n \in F$, i.e., $A(\eta_n + \xi_n) = b$. But $A(\eta_n + \xi_n) = B\eta_n + A\xi_n = b$. Hence, $B\eta_n = b - A\xi_n$, which converges to $b - A\xi$. Consequently, by hypothesis, $b - A\xi$ is necessarily an element of the range $B(K)$ of $B$. Thus, there exists $\eta \in K$ such that $A\eta = B\eta = b - A\xi$, i.e., $A\eta + A\xi = b$, so that $\eta + \xi \in F$ and $\xi \in F_L$.

We are now in position to prove the main result of this section. We will say that the operator $P$ is strictly positive definite if it is coercive, i.e., if there exists $\alpha_P > 0$ satisfying
$$\alpha_P \|\xi\|^2 \le \langle \xi, P\xi \rangle, \qquad \forall \xi \in L.$$
(In Lee, Chow and Barr (1972), such an operator $P$ was called positive definite; in Milne (1980) it was called positive-bounded-below.) This condition is known (Dontchev and Zolezzi (1992, p. 73)) to be necessary and sufficient for (P) to admit a (unique) optimal solution. Observe that even if $F_L$ is closed in $L$, (P) may not admit an optimal solution, despite the fact that $P$ is positive definite. See Schochetman, Smith and Tsui (1995) for an example.
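The correspondence between (Q) and (P) is easy to check numerically in a small finite-dimensional instance. The following sketch uses hypothetical data (the particular $Q$, $A$, $b$ below are illustrative only, and numpy is assumed): it builds the projections $E_K$, $E_L$, solves the equality-constrained problem through its KKT system, and verifies that the $L$-component $\xi^* = E_L x^*$ of an optimizer carries all of the cost, as in the discussion above.

```python
import numpy as np

# Minimal sketch (hypothetical data): split H into K = ker(Q) and L = K-perp,
# solve (Q) via its KKT system, and check that <E_L x*, Q E_L x*> = <x*, Q x*>,
# i.e., the L-component of an optimizer of (Q) is an optimizer of (P) over F_L.
Q = np.diag([0.0, 0.0, 2.0, 3.0])                 # positive semi-definite, K = span{e1, e2}
A = np.array([[0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0]])
b = np.array([2.0, 3.0])

# Orthonormal bases for K = ker(Q) and L = ker(Q)-perp, and the projections E_K, E_L.
_, s, Vt = np.linalg.svd(Q)
K, L = Vt[s <= 1e-12].T, Vt[s > 1e-12].T
E_K, E_L = K @ K.T, L @ L.T

# Solve min <x, Qx> s.t. Ax = b via the (possibly singular, but consistent) KKT system.
n, m = Q.shape[0], A.shape[0]
kkt = np.block([[2 * Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([np.zeros(n), b])
x_star = np.linalg.lstsq(kkt, rhs, rcond=None)[0][:n]

xi_star = E_L @ x_star                            # the L-component solves (P) over F_L
print(np.allclose(A @ x_star, b))                 # feasibility of x*
print(x_star @ Q @ x_star, xi_star @ Q @ xi_star) # the two costs agree
```

As noted above, the optimizer of (Q) need not be unique (the $K$-component is free of cost); only the $L$-component is pinned down.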

THEOREM 2.2. If $B = A|_K$ has closed range in $M$ and $P = Q|_L$ is strictly positive definite on $L$, then there exists an optimal solution to (Q).

PROOF. If $B$ has closed range, then $F_L$ is closed in $L$ by Lemma 2.1. Thus, the feasible region in problem (P) is a closed affine subset of $L$. If, in addition, $P$ is strictly positive definite, then it is well-known that problem (P) admits a (unique) optimal solution (Dontchev and Zolezzi (1992)). (The space $L$ can be equivalently re-normed by $\|\xi\|_P = \langle \xi, P\xi \rangle^{1/2}$. An optimal solution to (P) is then a best approximation to the origin in the closed, convex, non-empty subset $F_L$ of the Hilbert space $L$. Such a best approximation is well-known to exist (Aubin, 1979).) The proof is then completed by recalling the correspondence between solutions of (P) and (Q).

The previous theorem suggests the following questions. Under what conditions:
(1) is $P$ strictly positive definite on $L$?
(2) does $B$ have closed range in $M$?

To respond to these questions, we require some additional notation. Let $T\colon X \to Y$ be an arbitrary bounded linear operator from the Hilbert space $X$ to the Hilbert space $Y$. Define the operator index $\alpha_T$ as follows:
$$\alpha_T = \inf\{\|Tx\| : x \in \ker(T)^{\perp},\ \|x\| = 1\}.$$
It is shown in Theorem A.1 of the Appendix that this index is an alternate characterization, in the context of Hilbert space, of the well-known minimum modulus (Goldberg, 1966) of the operator $T$. Thus, in particular,
$$\alpha_B = \inf\{\|B\eta\| : \eta \in \ker(B)^{\perp_K},\ \|\eta\| = 1\},$$
where $\ker(B)^{\perp_K}$ denotes the orthogonal complement of $\ker(B)$ in its domain $K$.

Now assume $Y = X$. As usual, let $\sigma(T)$ denote the spectrum of $T$ (Helmberg, 1969). In particular, $\sigma_+(T)$ will denote the set of positive elements of $\sigma(T)$. Recall that if $T$ is self-adjoint and positive semi-definite, then $\sigma(T) \neq \emptyset$ (Helmberg, 1969, p. 226) and $\sigma_+(T)$ consists of the non-zero elements of $\sigma(T)$, because $\sigma(T) \subseteq [0, \infty)$. If $T$ is non-zero as well, then $\sigma_+(T) \neq \emptyset$ also. If $T$ is given by a matrix on a Euclidean space with standard basis, then $\sigma(T)$ is simply the set of eigenvalues of the matrix. Finally, if $S$ is any non-empty set of real numbers, then $\inf(S)$ will denote the infimum of the elements of $S$.

In response to question (1) above, we have the following well-known result.

LEMMA 2.3. The following are equivalent for the self-adjoint, positive definite operator $P$ on $L$.
(i) $P$ is strictly positive definite.
(ii) The quantity $\inf\{\langle \xi, P\xi \rangle / \|\xi\|^2 : \xi \in L,\ \xi \neq 0\}$ is positive.
(iii) The operator $P$ is invertible.
(iv) The quantity $\inf(\sigma(P))$ is positive.

Before proceeding, it is perhaps worthwhile to restate Lemma 2.3 in terms of the original operator $Q$. Recall that $\sigma_+(Q) \neq \emptyset$ because $Q \neq 0$.
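In finite dimensions the index $\alpha_T$ is simply the smallest non-zero singular value of $T$, which makes the definition easy to test numerically. The sketch below (hypothetical random data, numpy assumed) compares that value with a direct Monte Carlo evaluation of $\inf\{\|Tx\| : x \in \ker(T)^{\perp},\ \|x\| = 1\}$.

```python
import numpy as np

# Sketch: for a matrix T, alpha_T = inf{ ||Tx|| : x in ker(T)^perp, ||x|| = 1 } is the
# smallest non-zero singular value.  The data below are hypothetical; we compare the
# SVD value with a Monte Carlo search over unit vectors in ker(T)^perp (the row space).
rng = np.random.default_rng(0)
T = rng.standard_normal((6, 4)) @ np.diag([3.0, 1.0, 0.4, 0.0]) @ rng.standard_normal((4, 4))

U, s, Vt = np.linalg.svd(T)
tol = max(T.shape) * np.finfo(float).eps * s[0]
alpha_svd = s[s > tol].min()              # smallest non-zero singular value of T

basis = Vt[s > tol]                       # orthonormal basis of ker(T)^perp
x = rng.standard_normal((50000, basis.shape[0])) @ basis
x /= np.linalg.norm(x, axis=1, keepdims=True)
alpha_mc = np.linalg.norm(x @ T.T, axis=1).min()

print(alpha_svd, alpha_mc)                # alpha_mc >= alpha_svd and close to it
```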

LEMMA 2.3'. The following are equivalent for the self-adjoint, positive semi-definite operator $Q$ on $H$.
(i) There exists $\alpha_Q > 0$ such that $\alpha_Q \|x\|^2 \le \langle x, Qx \rangle$, $\forall x \in H$, $x \perp K$.
(ii) The quantity $\inf\{\langle x, Qx \rangle / \|x\|^2 : x \in H,\ x \neq 0,\ x \perp K\}$ is positive.
(iii) The operator $Q|_{K^{\perp}}$ is invertible.
(iv) The quantity $\inf(\sigma_+(Q))$ is positive.

In response to question (2) above, we have:

THEOREM 2.4. The operator $B = A|_K$ has closed range in $M$ if and only if $\alpha_B > 0$.

PROOF. Apply the Corollary to Theorem A.1 of the Appendix.

Since $M$ is separable, we can obtain a useful lower bound for $\alpha_B$ as follows. Let $\{e_m\}$ be a complete orthonormal system for $M$ and $\{m_i\}$ a strictly increasing sequence of positive integers. For each $i = 1, 2, \dots$, let $M_i$ denote the span of $\{e_1, \dots, e_{m_i}\}$. Then $\{M_i\}$ is a sequence of finite-dimensional subspaces of $M$ such that $M_i \subseteq M_{i+1}$, $i = 1, 2, \dots$, and $\cup_{i=1}^{\infty} M_i$ is dense in $M$. Let $J_i\colon M \to M$ be the mapping given by
$$J_i\Big(\sum_{m=1}^{\infty} a_m e_m\Big) = \sum_{m=1}^{m_i} a_m e_m, \qquad \forall i = 1, 2, \dots.$$
Note that $J_i(M) = M_i \subseteq M$ and $J_i^* = J_i$, for all $i$. Define $B_i\colon K \to M$ by $B_i = J_i B$, $\forall i = 1, 2, \dots$. Our objective is to investigate the range-closure property for the operator $B$ in terms of its finite-rank truncations $B_i$, which have (finite-dimensional) closed range.

LEMMA 2.5. Let the notation be as above. Then:
(i) $\{J_i\}$ converges strongly to the identity operator on $M$.
(ii) $\{B_i\}$ converges strongly to $B$ on $K$.
(iii) $\{B_i^*\}$ converges strongly to $B^*$ on $M$.
(iv) $\|B_i\|,\ \|B_i^*\| \le \max(\|B^*\|, \|B\|)$, $\forall i = 1, 2, \dots$.
(v) $\{B_i B_i^*\}$ converges strongly to $BB^*$ on $M$, as $i \to \infty$.

PROOF. Items (i)-(iv) are straightforward. To prove (v), apply Remark 2.5.10 of Kadison and Ringrose (1983).

Since the image of $B_i$ is finite-dimensional in $M$, it follows from Theorem 2.4 that $\alpha_{B_i} > 0$, for all $i$. However, $\inf_i \alpha_{B_i}$ need not be positive in general, as the next example shows.

EXAMPLE. Let $H = M$ be a separable Hilbert space with $m_i = i$, $\forall i = 1, 2, \dots$. Define $A\colon M \to M$ by
$$A\Big(\sum_{m=1}^{\infty} a_m e_m\Big) = \sum_{m=1}^{\infty} \frac{1}{m}\, a_m e_m.$$

Then $A$ is bounded with norm equal to 1 and $\ker(A) = \{0\}$. Moreover, $M_i$ is the span of $\{e_1, \dots, e_i\}$ and
$$B_i\Big(\sum_{m=1}^{\infty} a_m e_m\Big) = \sum_{m=1}^{i} \frac{1}{m}\, a_m e_m, \qquad \forall i = 1, 2, \dots.$$
Hence, $B_i(e_i) = \frac{1}{i} e_i$, so that $\|B_i(e_i)\| = \frac{1}{i}$, i.e., $\alpha_{B_i} \le \frac{1}{i}$, $\forall i = 1, 2, \dots$. Consequently, $\inf_i \alpha_{B_i} = 0$. In fact, $\alpha_A = 0$ by Lemma A.3, since the eigenvalues of $A$ are of the form $1/i$, $\forall i = 1, 2, \dots$.

We next show that the index of the operator $B$ is at least the infimum of the indices of its finite-dimensional truncations $B_i$.

THEOREM 2.6. Let the notation be as above. Then $\alpha_B \ge \inf_i \alpha_{B_i}$.

PROOF. Let $\epsilon > 0$. By the definition of $\beta_B = \alpha_B$ (Theorem A.1), there exists $y_\epsilon$ in $M$ such that $B^* y_\epsilon \neq 0$ and
$$\alpha_B \le \frac{\|BB^* y_\epsilon\|}{\|B^* y_\epsilon\|} < \alpha_B + \frac{\epsilon}{2}.$$
But, by part (iii) of Lemma 2.5, $\|B^* y_\epsilon\| = \lim_i \|B_i^* y_\epsilon\|$, so that $B_i^* y_\epsilon$ is eventually non-zero. Also, by part (v) of Lemma 2.5, we have that $B_i B_i^*$ converges strongly to $BB^*$ on $M$. Hence,
$$\frac{\|B_i B_i^* y_\epsilon\|}{\|B_i^* y_\epsilon\|} \to \frac{\|BB^* y_\epsilon\|}{\|B^* y_\epsilon\|}, \qquad \text{as } i \to \infty.$$
Therefore, there exists $i_\epsilon$ such that
$$\alpha_B - \frac{\epsilon}{2} < \frac{\|B_{i_\epsilon} B_{i_\epsilon}^* y_\epsilon\|}{\|B_{i_\epsilon}^* y_\epsilon\|} < \alpha_B + \epsilon,$$
so that $\alpha_{B_{i_\epsilon}} < \alpha_B + \epsilon$, where $\epsilon$ is arbitrary. Consequently, $\alpha_B \ge \inf_i \alpha_{B_i}$.

COROLLARY. Let the $B_i$ be as above. If $\inf_i \alpha_{B_i} > 0$, then $B$ has closed range in $M$.

We thus have the following numerical version of Theorem 2.2.

THEOREM 2.7. If $\inf_i \alpha_{B_i} > 0$ and $\inf(\sigma_+(Q)) > 0$, then there exists an optimal solution to (Q).

PROOF. Apply Lemma 2.3, as well as Theorems 2.2, 2.4 and 2.6.
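The operator of the Example is diagonal, so the indices $\alpha_{B_i}$ of its truncations can be read off directly. The sketch below (assumed diagonal data, numpy only) contrasts the harmonic entries $1/m$, for which $\inf_i \alpha_{B_i} = 0$, with diagonal entries bounded away from zero, for which the hypothesis of the Corollary and of Theorem 2.7 holds.

```python
import numpy as np

# Sketch of the Example above: for a diagonal operator A e_m = d_m e_m with m_i = i, the
# truncation B_i = J_i A has alpha_{B_i} = min of the non-zero |d_m| over m <= i.
# With d_m = 1/m this tends to 0, so the condition inf_i alpha_{B_i} > 0 fails;
# with entries bounded away from zero it holds.  The data below are hypothetical.
def alpha_truncation(diag_entries, i):
    d = np.abs(np.asarray(diag_entries[:i], dtype=float))
    d = d[d > 0]
    return d.min() if d.size else 0.0

N = 200
harmonic = [1.0 / m for m in range(1, N + 1)]                # the Example: eigenvalues 1/m
bounded = [0.5 if m % 2 else 1.5 for m in range(1, N + 1)]   # entries in {0.5, 1.5}

for i in (5, 20, 200):
    print(i, alpha_truncation(harmonic, i), alpha_truncation(bounded, i))
# the first index decays like 1/i; the second stays at 0.5 for every horizon i
```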

REMARK. The previous results, from Lemma 2.5 through Theorem 2.7 inclusive, are valid more generally for the operator $T\colon X \to Y$ of the Appendix.

3 AN APPLICATION TO INFINITE HORIZON OPTIMIZATION

As an application of the above, consider the following special case (S) of (Q):
$$(S)\qquad \begin{aligned} \min\ & \sum_{j=1}^{\infty} \langle x_j, Q_j x_j \rangle \\ \text{subject to}\ & A_{i,i-1} x_{i-1} + A_{ii} x_i = b_i, \quad \forall i = 1, 2, \dots, \\ & x_j \in \mathbb{R}^{n_j}, \quad j = 1, 2, \dots, \quad \text{and} \quad \sum_{j=1}^{\infty} \|x_j\|^2 < \infty, \end{aligned}$$
where $A_{1,0} = 0$, $Q_j$ is a symmetric, positive semi-definite matrix, $\forall j$, and $b_i \in \mathbb{R}^{m_i}$, $\forall i$. Of course, the $A_{i,i-1}$, $A_{ii}$ are matrices of appropriate size. Let $H$ denote the Hilbert sum of the $\mathbb{R}^{n_j}$ and $M$ the Hilbert sum of the $\mathbb{R}^{m_i}$. All Euclidean spaces are assumed to be equipped with their respective standard bases. Note that $H$ and $M$ are separable. In fact, we may let $M_i$ (as in section 2) denote the subspace $\mathbb{R}^{m_1} \oplus \dots \oplus \mathbb{R}^{m_i} \oplus 0 \oplus \dots$ of $M$, $\forall i = 1, 2, \dots$. In this case, $J_i\colon M \to M$ is given by
$$J_i(y) = (y_1, \dots, y_i, 0, \dots), \qquad \forall y = (y_i) \in M,\ \forall i = 1, 2, \dots.$$
If $\|Q_j\|_2$ denotes the spectral norm (Horn and Johnson, 1988) of $Q_j$, $\forall j$, we assume that $\sup_j \|Q_j\|_2 < \infty$. This will be the case, for example, if there exist finitely many distinct $Q_j$. We then obtain a self-adjoint, positive semi-definite operator $Q$ on $H$ given by $Qx = (Q_j x_j)$, for $x \in H$. Moreover,
$$\langle x, Qx \rangle = \sum_{j=1}^{\infty} \langle x_j, Q_j x_j \rangle, \qquad x \in H.$$
We also assume that $\sup_i\{\|A_{i,i-1}\|_2, \|A_{ii}\|_2\} < \infty$. As above, this will be the case, for example, if there exist finitely many distinct $A_{i,i-1}$, $A_{ii}$. We thus obtain a bounded linear operator $A\colon H \to M$ given by
$$(Ax)_i = A_{i,i-1} x_{i-1} + A_{ii} x_i, \qquad \forall i = 1, 2, \dots,\ \forall x \in H.$$

The matrix representation of $A$ has the following lower-staircase form:
$$A = \begin{bmatrix} A_{11} & 0 & 0 & \cdots \\ A_{21} & A_{22} & 0 & \cdots \\ 0 & A_{32} & A_{33} & \cdots \\ \vdots & & & \ddots \end{bmatrix}.$$
Finally, assume that $b = (b_i) \in M$, i.e., $\sum_{i=1}^{\infty} \|b_i\|^2 < \infty$, and the feasible region $F = \{x \in H : Ax = b\}$ is non-empty.

For each positive integer $k$, consider the following "finite dimensional" problem $(S_k)$ which approximates (S):
$$(S_k)\qquad \begin{aligned} \min\ & \sum_{j=1}^{k} \langle x_j, Q_j x_j \rangle \\ \text{subject to}\ & A_{i,i-1} x_{i-1} + A_{ii} x_i = b_i, \quad \forall i = 1, \dots, k, \\ & x_j \in \mathbb{R}^{n_j}, \quad \forall j = 1, 2, \dots, \quad \text{and} \quad \sum_{j=1}^{\infty} \|x_j\|^2 < \infty. \end{aligned}$$
Note that $(S_k)$ is essentially finite dimensional since the objective function depends only on the first $k$ variables and the feasible region consists of those square-summable extensions of the first $k$ variables which satisfy the first $k$ constraints. If we let $F_k$ denote the feasible region for $(S_k)$, then $\{F_k\}$ is a sequence of closed, non-empty subsets of $H$ satisfying $F_{k+1} \subseteq F_k$, $\forall k$, and $F = \cap_{k=1}^{\infty} F_k$, i.e., $\lim_{k \to \infty} F_k = F$, in the sense of Kuratowski (1966). It is then of interest to ask if the problems $(S_k)$ "converge" to (S) also, in the sense of Fiacco (1974). This would be the case if, in addition to the feasible region convergence, the functions
$$f_k(x) = \sum_{j=1}^{k} \langle x_j, Q_j x_j \rangle, \qquad x \in H,$$
converge uniformly to the function
$$f(x) = \sum_{j=1}^{\infty} \langle x_j, Q_j x_j \rangle, \qquad x \in H,$$
on $H$ as $k \to \infty$. Unfortunately, this is not so. However, it is true that the sequence $\{f_k\}$ converges pointwise to $f$ on $H$. Thus, the problems $(S_k)$ do converge to (S) in this weaker sense.
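The finite-horizon truncations of the lower-staircase operator $A$ are easy to assemble explicitly. The sketch below uses hypothetical per-period blocks (here the $1 \times 2$ blocks that reappear in section 4) and computes the minimum modulus of the first $k$ block rows, i.e., their smallest non-zero singular value; the quantities $\alpha_{C_k}$ introduced below are obtained in the same way after composing with the kernel projections $E_K^j$.

```python
import numpy as np

# Sketch: build the first k block rows of the staircase operator A from per-period blocks
# A_{i,i-1}, A_{ii} and compute their minimum modulus (smallest non-zero singular value).
# The 1x2 blocks below are hypothetical stand-ins matching the regulator example of
# section 4; the alpha_{C_k} of Theorem 3.2 are computed the same way after multiplying
# each column block by the kernel projection E_K^j.
A_sub = np.array([[1.0, 0.0]])      # A_{i,i-1}
A_dia = np.array([[-1.0, 1.0]])     # A_{ii}

def truncation(k):
    m, n = A_dia.shape
    C = np.zeros((k * m, k * n))
    for i in range(k):
        C[i * m:(i + 1) * m, i * n:(i + 1) * n] = A_dia
        if i > 0:
            C[i * m:(i + 1) * m, (i - 1) * n:i * n] = A_sub
    return C

def min_modulus(T, tol=1e-10):
    s = np.linalg.svd(T, compute_uv=False)
    return s[s > tol].min()

for k in (2, 5, 20, 80):
    print(k, round(min_modulus(truncation(k)), 4))   # stays bounded away from zero here
```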

Since $Q = \oplus_j Q_j$, it follows that the kernel $K$ of $Q$ is given by $K = \oplus_j K_j$, where $K_j$ is the kernel of $Q_j$ in $\mathbb{R}^{n_j}$, $\forall j$. Similarly, $L = \oplus_j L_j$, where $L_j$ is the orthogonal complement of $K_j$ in $\mathbb{R}^{n_j}$, i.e., $\mathbb{R}^{n_j} = K_j \oplus L_j$, $\forall j = 1, 2, \dots$. If $P_j = Q_j|_{L_j}$, then $Q_j = 0 \oplus P_j$, $\forall j$, and $P = Q|_L = \oplus_j P_j$.

LEMMA 3.1. The (real) spectrum of $P$ is equal to the closure of $\cup_{j=1}^{\infty} \sigma(P_j)$ in $\mathbb{R}$, i.e., $\sigma(P) = \overline{\cup_{j=1}^{\infty} \sigma(P_j)}$.

PROOF. Note that each $P_j$ is positive definite and defined on a subspace of $\mathbb{R}^{n_j}$. Hence, $\sigma(P_j)$ consists of a set of (positive) eigenvalues $\{\lambda_{j1}, \dots, \lambda_{jk_j}\}$ of $P_j$, which are also eigenvalues of $P$. Consequently, $\sigma(P_j) \subseteq \sigma(P)$, $\forall j = 1, 2, \dots$, so that $\cup_{j=1}^{\infty} \sigma(P_j) \subseteq \sigma(P)$ and $\overline{\cup_{j=1}^{\infty} \sigma(P_j)} \subseteq \sigma(P)$, since $\sigma(P)$ is closed. For the other inclusion, let $\lambda \in \mathbb{R}$ and suppose $\lambda \notin \overline{\cup_{j=1}^{\infty} \sigma(P_j)}$. Thus, there exists $r > 0$ such that $\{t \in \mathbb{R} : |t - \lambda| < r\} \cap \cup_{j=1}^{\infty} \sigma(P_j) = \emptyset$, i.e., for each $j = 1, 2, \dots$,
$$|\lambda_{ji} - \lambda| \ge r, \qquad \forall i = 1, \dots, k_j.$$
For each $j$, consider the bounded linear operator $P_j - \lambda I_j$, where $I_j$ is the identity operator on $L_j$. Since $P_j$ is self-adjoint and $\lambda$ is real, we have that each $P_j - \lambda I_j$ is self-adjoint. Moreover, since $\lambda \notin \sigma(P_j)$, it follows that $P_j - \lambda I_j$ is invertible, and its inverse is also self-adjoint. Consequently, if we denote the inverse of $P_j - \lambda I_j$ by $V_j$, $\forall j = 1, 2, \dots$, then by Helmberg (1969, p. 227),
$$\sigma(V_j) = \{\mu^{-1} : \mu \in \sigma(P_j - \lambda I_j)\} = \{(\lambda_{ji} - \lambda)^{-1} : i = 1, \dots, k_j\}$$
and
$$\|V_j\| = \max_{1 \le i \le k_j} |(\lambda_{ji} - \lambda)^{-1}| \le 1/r,$$
so that the $V_j$ are uniformly bounded. Now let $V = \oplus_{j=1}^{\infty} V_j$. Then $V$ is a bounded linear operator since
$$\|V\| = \sup_{1 \le j < \infty} \|V_j\| \le 1/r,$$
and
$$V(P - \lambda I) = \bigoplus_{j=1}^{\infty} V_j(P_j - \lambda I_j) = \bigoplus_{j=1}^{\infty} I_j = I,$$

i.e., $V$ is the inverse of $P - \lambda I$ on $L$. Therefore, $\lambda \notin \sigma(P)$, i.e., we have shown equivalently that $\sigma(P) \subseteq \overline{\cup_{j=1}^{\infty} \sigma(P_j)}$.

REMARKS. The previous lemma can be shown to be true for self-adjoint operators in general. However, it is well-known to be false for arbitrary bounded operators. Moreover, the spectrum of the finite-dimensional operator $\oplus_{i=1}^{n} P_i$ consists of its eigenvalues, which are the eigenvalues of the $P_i$, $i = 1, \dots, n$, i.e.,
$$\sigma\Big(\bigoplus_{i=1}^{n} P_i\Big) = \bigcup_{i=1}^{n} \sigma(P_i), \qquad n = 1, 2, \dots,$$
an increasing sequence of closed sets. On the other hand, $\sigma(\oplus_{i=1}^{\infty} P_i) \supseteq \cup_{i=1}^{\infty} \sigma(P_i)$, which may not be closed in general. But
$$\overline{\bigcup_{i=1}^{\infty} \sigma(P_i)} = \lim_{n \to \infty} \bigcup_{i=1}^{n} \sigma(P_i)$$
in the sense of Fiacco (1974) (see Kato, 1980, p. 339), i.e., in the sense of Kuratowski set convergence. Thus, Lemma 3.1 asserts that
$$\sigma\Big(\bigoplus_{i=1}^{\infty} P_i\Big) = \lim_{n \to \infty} \sigma\Big(\bigoplus_{i=1}^{n} P_i\Big)$$
in the sense of Kuratowski.

As a consequence of the above discussion, we have that
$$E_K = \begin{bmatrix} E_K^1 & 0 & 0 & \cdots \\ 0 & E_K^2 & 0 & \cdots \\ 0 & 0 & E_K^3 & \cdots \\ \vdots & & & \ddots \end{bmatrix},$$
where $E_K^j$ is the orthogonal projection of $\mathbb{R}^{n_j}$ onto $K_j$, $\forall j = 1, 2, \dots$. Analogously,
$$E_L = \begin{bmatrix} E_L^1 & 0 & 0 & \cdots \\ 0 & E_L^2 & 0 & \cdots \\ 0 & 0 & E_L^3 & \cdots \\ \vdots & & & \ddots \end{bmatrix},$$
where $E_L^j$ is the orthogonal projection of $\mathbb{R}^{n_j}$ onto $L_j$, $\forall j = 1, 2, \dots$. Now let $C_k\colon \mathbb{R}^{n_1} \oplus \dots \oplus \mathbb{R}^{n_k} \to \mathbb{R}^{m_1} \oplus \dots \oplus \mathbb{R}^{m_k}$ denote the matrix operator given by
$$C_k = \begin{bmatrix} A_{11} E_K^1 & 0 & 0 & \cdots & 0 & 0 \\ A_{21} E_K^1 & A_{22} E_K^2 & 0 & \cdots & 0 & 0 \\ 0 & A_{32} E_K^2 & A_{33} E_K^3 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & A_{k,k-1} E_K^{k-1} & A_{kk} E_K^k \end{bmatrix},$$
with minimum modulus given (as in section 2) by
$$\alpha_{C_k} = \inf\{\|C_k(x_1, \dots, x_k)\|_2 : (x_1, \dots, x_k) \in \ker(C_k)^{\perp},\ \|(x_1, \dots, x_k)\|_2 = 1\},$$

$\forall k = 1, 2, \dots$. The (doubly-infinite) matrix operator $B_k = J_k(A|_K)$ then satisfies
$$B_k E_K = \begin{bmatrix} C_k & 0 & \cdots \\ 0 & 0 & \cdots \\ \vdots & & \ddots \end{bmatrix}, \qquad \forall k = 1, 2, \dots.$$
The following is the main result of this section.

THEOREM 3.2. Problem (S) admits an optimal solution if $\inf_k \alpha_{C_k} > 0$ and $\inf(\cup_{j=1}^{\infty} \sigma_+(Q_j)) > 0$.

PROOF. First observe that $\ker(B_k E_K) = \ker(B_k) \oplus L$, so that $\ker(B_k E_K)^{\perp} = \ker(B_k)^{\perp_K} \oplus 0$, $\forall k = 1, 2, \dots$. From this, it follows that $\alpha_{B_k E_K} = \alpha_{B_k}$, $\forall k$. By a similar argument, we have that $\alpha_{B_k E_K} = \alpha_{C_k}$, i.e., $\alpha_{B_k} = \alpha_{C_k}$, $\forall k$. Thus, by hypothesis, $\alpha_B > 0$ (Theorem 2.6). Now $P$ is strictly positive definite if and only if $\inf(\sigma(P)) > 0$ (Lemma 2.3). But $\sigma(P) = \overline{\cup_{j=1}^{\infty} \sigma(P_j)}$ by Lemma 3.1. Thus, $P$ is strictly positive definite if and only if $\inf(\cup_{j=1}^{\infty} \sigma(P_j)) > 0$. However, $\sigma_+(Q_j) = \sigma(P_j)$, $\forall j = 1, 2, \dots$. Finally, apply Theorems 2.4 and 2.7 to complete the proof.

The previous theorem guarantees an optimal solution for (S) if the two specified quantities are positive. The first quantity is the infimum of the minimum moduli of operators derived from the time-staged constraint matrices. The second quantity is the infimum of the positive eigenvalues of the time-staged cost matrices. Although the second quantity is not difficult to compute in general, determination of the first can lead to significant complications, as the next section shows.

4 AN APPLICATION TO CONTROL THEORY

We consider a special case of the discrete-time, infinite horizon, time-varying LQ regulator problem (Lewis, 1986). Our example may also be viewed as a production planning problem with positive semi-definite quadratic inventory and production costs (Denardo, 1982). Consider the following problem (R):
$$(R)\qquad \begin{aligned} \min\ & \sum_{j=1}^{\infty} [r_j y_j^2 + s_j u_j^2] \\ \text{subject to}\ & y_i = y_{i-1} + u_i - d_i, \quad \forall i = 1, 2, \dots, \\ & y_j \in \mathbb{R},\ u_j \in \mathbb{R}, \quad \forall j = 1, 2, \dots, \end{aligned}$$
where each $r_j, s_j \ge 0$. For the $j$th period, $y_j$ is the $j$th state (e.g., ending inventory) and $u_j$ is the corresponding $j$th control (e.g., production decision). The initial state $y_0$

is assumed given. (Thus, its cost is constant and may be omitted.) The quantity $d_i$ is assumed to be a known exogenous parameter (e.g., demand) in period $i = 1, 2, \dots$. Since it is customary for control costs to be positive definite (Lewis, 1986), we further assume that $s_j > 0$, for all $j$. For our purposes, we also assume that $\inf\{r_j : r_j \neq 0\} > 0$ and $\inf_j\{s_j\} > 0$.

Problem (R) may be rewritten in the form of problem (S) as follows:
$$(R)\qquad \begin{aligned} \min\ & \sum_{j=1}^{\infty} \begin{bmatrix} y_j & u_j \end{bmatrix} \begin{bmatrix} r_j & 0 \\ 0 & s_j \end{bmatrix} \begin{bmatrix} y_j \\ u_j \end{bmatrix} \\ \text{subject to}\ & \begin{bmatrix} -1 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ u_1 \end{bmatrix} = -y_0 + d_1, \quad i = 1, \\ & \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} y_{i-1} \\ u_{i-1} \end{bmatrix} + \begin{bmatrix} -1 & 1 \end{bmatrix} \begin{bmatrix} y_i \\ u_i \end{bmatrix} = d_i, \quad \forall i = 2, 3, \dots, \\ & \begin{bmatrix} y_j \\ u_j \end{bmatrix} \in \mathbb{R}^2, \quad \forall j = 1, 2, \dots, \end{aligned}$$
where
$$Q_j = \begin{bmatrix} r_j & 0 \\ 0 & s_j \end{bmatrix}, \quad \forall j = 1, 2, \dots, \qquad A_{i,i-1} = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad \forall i = 2, 3, \dots, \qquad A_{ii} = \begin{bmatrix} -1 & 1 \end{bmatrix}, \quad \forall i = 1, 2, \dots,$$
and
$$b_i = \begin{cases} -y_0 + d_1, & i = 1, \\ d_i, & i = 2, 3, \dots. \end{cases}$$
Let the remaining notation be as above. In particular, $H$ (resp. $M$) is the Hilbert sum of countably many copies of $\mathbb{R}^2$ (resp. $\mathbb{R}$). Obviously, $b = (b_i) \in M$ if and only if $\sum_{i=1}^{\infty} |d_i|^2 < \infty$. To obtain the bounded operator $Q$, we assume the hypothesis of the next lemma.

LEMMA 4.1. If $\sup_{1 \le j < \infty}\{r_j, s_j\} < \infty$, then $Q\colon H \to H$ is bounded with $\|Q\| \le \sup_{1 \le j < \infty}\{r_j, s_j\}$.

PROOF. Left to the reader.

Note that $\sup_i\{\|A_{i,i-1}\|_2, \|A_{ii}\|_2\} < \infty$ in this case, so that the linear operator $A\colon H \to M$ is bounded, where $A$ has the following matrix form:
$$A = \begin{bmatrix} [-1\ \ 1] & 0 & 0 & \cdots \\ [1\ \ 0] & [-1\ \ 1] & 0 & \cdots \\ 0 & [1\ \ 0] & [-1\ \ 1] & \cdots \\ \vdots & & & \ddots \end{bmatrix}.$$
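Before turning to the kernel projections, it is straightforward to solve finite-horizon versions of (R) numerically. The sketch below uses an entirely hypothetical instance (horizon, costs $r_j$, $s_j$, demands $d_i$ and initial inventory $y_0$ are made up for illustration) and assembles the data in exactly the block form just described, solving the resulting equality-constrained quadratic program through its KKT system.

```python
import numpy as np

# Finite-horizon sketch of the production planning problem (R):
# min sum r_j y_j^2 + s_j u_j^2  s.t.  y_i = y_{i-1} + u_i - d_i.  All data are hypothetical.
k = 6
r = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])    # inventory costs, some periods costless
s = np.full(k, 2.0)                              # production costs, strictly positive
d = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 2.0])     # demands
y0 = 2.0

# Decision vector x = (y_1, u_1, ..., y_k, u_k); constraints A x = b with
# A_{ii} = [-1 1], A_{i,i-1} = [1 0], b_1 = -y0 + d_1, b_i = d_i for i >= 2.
Q = np.diag(np.ravel(np.column_stack([r, s])))
A = np.zeros((k, 2 * k))
b = d.copy()
b[0] -= y0
for i in range(k):
    A[i, 2 * i:2 * i + 2] = [-1.0, 1.0]
    if i > 0:
        A[i, 2 * (i - 1)] = 1.0

# Equality-constrained convex QP via its KKT system; least squares copes with any
# rank deficiency coming from the zero rows of Q.
kkt = np.block([[2 * Q, A.T], [A, np.zeros((k, k))]])
rhs = np.concatenate([np.zeros(2 * k), b])
x = np.linalg.lstsq(kkt, rhs, rcond=None)[0][:2 * k]
y, u = x[0::2], x[1::2]
print("inventory:", np.round(y, 3))
print("production:", np.round(u, 3))
print("constraints satisfied:", np.allclose(A @ x, b))
```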

Recall that $K_j$ is the kernel of $Q_j$ in $\mathbb{R}^2$ and $E_K^j$ is the orthogonal projection of $\mathbb{R}^2$ onto $K_j$, $j = 1, 2, \dots$. Consequently, for problem (R),
$$C_k = \begin{bmatrix} [-1\ \ 1]E_K^1 & 0 & 0 & \cdots & 0 & 0 \\ [1\ \ 0]E_K^1 & [-1\ \ 1]E_K^2 & 0 & \cdots & 0 & 0 \\ 0 & [1\ \ 0]E_K^2 & [-1\ \ 1]E_K^3 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & [1\ \ 0]E_K^{k-1} & [-1\ \ 1]E_K^k \end{bmatrix},$$
so that $C_kC_k^*$ is given by
$$(*)\qquad C_kC_k^* = \begin{bmatrix} 2E_K^1 & -E_K^1 & 0 & \cdots & 0 & 0 \\ -E_K^1 & 2E_K^2 + E_K^1 & -E_K^2 & \cdots & 0 & 0 \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & \cdots & -E_K^{k-2} & 2E_K^{k-1} + E_K^{k-2} & -E_K^{k-1} \\ 0 & 0 & \cdots & 0 & -E_K^{k-1} & 2E_K^k + E_K^{k-1} \end{bmatrix}$$
for $k = 1, 2, \dots$. Note that our reason for computing $C_kC_k^*$ is contained in Lemma A.2, which gives a lower bound for $\alpha_{C_k}$ in terms of the non-zero eigenvalues of $C_kC_k^*$. Thus, for each $j$, $\sigma(Q_j) = \{r_j, s_j\}$, and $\sigma_+(Q_j) = \{r_j, s_j\}$ if $r_j > 0$, while $\sigma_+(Q_j) = \{s_j\}$ if $r_j = 0$. Therefore,
$$\inf\Big(\bigcup_{j=1}^{\infty}\sigma_+(Q_j)\Big) = \inf\{r_j, s_j : r_j > 0\} = \min\big(\inf\{r_j : r_j > 0\},\ \inf_j\{s_j\}\big) > 0.$$
For each $j = 1, 2, \dots$, define $\delta_j$ as follows:
$$\delta_j = \begin{cases} 0, & \text{for } r_j > 0, \\ 1, & \text{for } r_j = 0. \end{cases}$$
Then the kernel $K_j$ of $Q_j$ in $\mathbb{R}^2$ is given by $K_j = K_j' \oplus 0$, where $K_j' = \delta_j \mathbb{R}$, $\forall j = 1, 2, \dots$. Consequently,
$$E_K^j = \begin{bmatrix} \delta_j & 0 \\ 0 & 0 \end{bmatrix},$$
i.e., there are two possible values for each $E_K^j$, for each $j$. Once again, our objective is to apply Theorem 3.2 to problem (R). Since $\inf(\cup_{j=1}^{\infty}\sigma_+(Q_j)) > 0$ by our hypotheses, it suffices to show that $\inf_k \alpha_{C_k} > 0$. Observe that
$$\alpha_{C_k} \ge \frac{1}{\|C_k\|_2}\,\inf(\sigma_+(C_kC_k^*))$$

by Lemma A.2. In this case,
$$C_k = \begin{bmatrix} [-1\ \ 1]E_K^1 & 0 & 0 & \cdots & 0 & 0 \\ [1\ \ 0]E_K^1 & [-1\ \ 1]E_K^2 & 0 & \cdots & 0 & 0 \\ 0 & [1\ \ 0]E_K^2 & [-1\ \ 1]E_K^3 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & [1\ \ 0]E_K^{k-1} & [-1\ \ 1]E_K^k \end{bmatrix}$$
and $C_kC_k^*$ is as in (*) above, for $k = 1, 2, \dots$. For each $k$, the spectral norm $\|C_k\|_2$ of $C_k$ is equal to $\sqrt{\|C_kC_k^*\|_2}$, where $\|C_kC_k^*\|_2$ is the largest eigenvalue of $C_kC_k^*$ (Horn and Johnson, 1988). Given the form (*) of $C_kC_k^*$ and the description of the $E_K^j$, $j = 1, 2, \dots$, we see from the Gershgorin Circle Theorem that every eigenvalue of $C_kC_k^*$ is at most 5. Consequently, $\|C_k\|_2 \le \sqrt{5}$, so that
$$\alpha_{C_k} \ge \frac{1}{\sqrt{5}}\,\inf(\sigma_+(C_kC_k^*)), \qquad \forall k = 1, 2, \dots.$$
Thus, to show that $\inf_k \alpha_{C_k} > 0$, it suffices to show that the sequence $\{\inf(\sigma_+(C_kC_k^*))\}_{k=1}^{\infty}$ is bounded away from 0. A careful inspection of the matrix $C_kC_k^*$ reveals that it is of the form
$$C_kC_k^* = \begin{bmatrix} D_1 & & 0 \\ & \ddots & \\ 0 & & D_{p_k} \end{bmatrix}, \qquad k = 1, 2, \dots,$$
where each $D_i$, for $1 \le i \le p_k$, is necessarily either the $4 \times 4$ matrix
$$\begin{bmatrix} 2 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
with positive eigenvalues $(5 \pm \sqrt{5})/2$, or the $4 \times 4$ matrix
$$\begin{bmatrix} 2 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
with positive eigenvalues $(3 \pm \sqrt{5})/2$, or the $2n \times 2n$ matrix
$$U_n = \begin{bmatrix} 2 & 0 & -1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ -1 & 0 & 3 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ \vdots & & & \ddots & & & \vdots \\ 0 & 0 & \cdots & -1 & 0 & 3 & 0 \\ 0 & 0 & \cdots & 0 & 0 & 0 & 0 \end{bmatrix}, \qquad n = 3, 4, \dots,$$

or the $2n \times 2n$ matrix
$$V_n = \begin{bmatrix} 2 & 0 & -1 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ -1 & 0 & 3 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ \vdots & & & \ddots & & & & & \vdots \\ 0 & 0 & 0 & \cdots & -1 & 0 & 3 & 0 & -1\ \ 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 & 0\ \ 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 & -1 & 0 & 1\ \ 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 & 0\ \ 0 \end{bmatrix}, \qquad n = 3, 4, \dots,$$
or a square zero matrix; each of these is symmetric and positive semi-definite. Since $\sigma_+(C_kC_k^*) = \cup_{i=1}^{p_k}\{\sigma_+(D_i) : D_i \neq 0\}$, it suffices to determine $\inf(\sigma_+(U_n))$ and $\inf(\sigma_+(V_n))$, $n = 3, 4, \dots$. But $\inf(\sigma_+(U_n)) \ge 1$, for all $n$, by the Gershgorin Circle Theorem. Thus, it remains to determine $\inf(\sigma_+(V_n))$, for all $n$. However, $\det(V_n - \lambda I) = (-\lambda)^n \det(W_n - \lambda I)$, where $W_n$ is the $n \times n$ symmetric matrix given by
$$W_n = \begin{bmatrix} 2 & -1 & 0 & \cdots & 0 & 0 & 0 \\ -1 & 3 & -1 & \cdots & 0 & 0 & 0 \\ \vdots & & \ddots & & & & \vdots \\ 0 & 0 & 0 & \cdots & -1 & 3 & -1 \\ 0 & 0 & 0 & \cdots & 0 & -1 & 1 \end{bmatrix}, \qquad n = 3, 4, \dots.$$
Consequently, $\sigma_+(V_n) = \sigma_+(W_n)$, for all $n$. Let $\phi_n$ denote the $n$th degree characteristic polynomial of $W_n$, i.e., $\phi_n(\lambda) = \det(W_n - \lambda I)$, $n = 3, 4, \dots$. It suffices to show that the positive roots of the $\phi_n$ are bounded away from 0. In particular, we will show that if $\phi_n(\lambda) = 0$, for any $n \ge 3$, then $\lambda > .1$. To this end, define
$$\psi_n(\lambda) = \det(Z_n - \lambda I), \qquad n = 1, 2, \dots,$$
where $Z_n$ is the $n \times n$ matrix obtained from $W_n$ by replacing its last diagonal entry 1 by 3 (the tridiagonal matrix with diagonal $2, 3, \dots, 3$ and off-diagonal entries $-1$), so that
$$\psi_n(\lambda) = (3 - \lambda)\psi_{n-1}(\lambda) - \psi_{n-2}(\lambda), \qquad \phi_n(\lambda) = (1 - \lambda)\psi_{n-1}(\lambda) - \psi_{n-2}(\lambda)$$

and
$$\phi_n(\lambda) = \psi_n(\lambda) - 2\psi_{n-1}(\lambda), \qquad n = 3, 4, \dots.$$
Next define $\rho_1(\lambda) = 3 - \lambda$, $\rho_2(\lambda) = \lambda^2 - 6\lambda + 8$ and
$$\rho_n(\lambda) = \det\begin{bmatrix} 3-\lambda & -1 & 0 & \cdots & 0 & 0 \\ -1 & 3-\lambda & -1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & 0 & \cdots & 3-\lambda & -1 \\ 0 & 0 & 0 & \cdots & -1 & 3-\lambda \end{bmatrix}, \qquad n = 3, 4, \dots,$$
so that
$$\rho_n(\lambda) = (3 - \lambda)\rho_{n-1}(\lambda) - \rho_{n-2}(\lambda), \qquad \psi_n(\lambda) = (2 - \lambda)\rho_{n-1}(\lambda) - \rho_{n-2}(\lambda)$$
and
$$\psi_n(\lambda) = \rho_n(\lambda) - \rho_{n-1}(\lambda), \qquad n = 3, 4, \dots.$$
For convenience, let $\omega = (3 - \lambda)/2$, so that $\lambda = 3 - 2\omega$. Writing $\tau_n(\omega)$ for $\rho_n(3 - 2\omega)$, for all $n \ge 1$, we obtain the recursion
$$\tau_n(\omega) = 2\omega\,\tau_{n-1}(\omega) - \tau_{n-2}(\omega), \qquad n = 3, 4, \dots,$$
with $\tau_1(\omega) = 2\omega$, $\tau_2(\omega) = 4\omega^2 - 1$, $\omega \in \mathbb{R}$. Restricting $\omega$ to the interval $[-1, 1]$ and letting $\theta = \cos^{-1}(\omega)$, $0 \le \theta \le \pi$, the solutions to this recursion are well known (Davis, 1975) to be the Chebyshev Polynomials of the Second Kind, i.e.,
$$\tau_n(\cos\theta) = \frac{\sin((n+1)\theta)}{\sin\theta}, \qquad 0 < \theta < \pi,\ n = 3, 4, \dots.$$
Consequently,
$$\psi_n(3 - 2\cos\theta) = \rho_n(3 - 2\cos\theta) - \rho_{n-1}(3 - 2\cos\theta) = \tau_n(\cos\theta) - \tau_{n-1}(\cos\theta) = \frac{\sin((n+1)\theta) - \sin(n\theta)}{\sin\theta} = \frac{2\cos\!\big(\tfrac{2n+1}{2}\theta\big)\sin\!\big(\tfrac{\theta}{2}\big)}{\sin\theta}$$
and
$$\phi_n(3 - 2\cos\theta) = \psi_n(3 - 2\cos\theta) - 2\psi_{n-1}(3 - 2\cos\theta) = \frac{2\cos\!\big(\tfrac{2n+1}{2}\theta\big)\sin\!\big(\tfrac{\theta}{2}\big) - 4\cos\!\big(\tfrac{2n-1}{2}\theta\big)\sin\!\big(\tfrac{\theta}{2}\big)}{\sin\theta} = \frac{2\sin\!\big(\tfrac{\theta}{2}\big)\big[\cos\!\big(\tfrac{2n+1}{2}\theta\big) - 2\cos\!\big(\tfrac{2n-1}{2}\theta\big)\big]}{\sin\theta} = \frac{\cos\!\big(\tfrac{2n+1}{2}\theta\big) - 2\cos\!\big(\tfrac{2n-1}{2}\theta\big)}{\cos\!\big(\tfrac{\theta}{2}\big)}, \qquad n = 3, 4, \dots.$$

Hence, for $-1 \le \omega \le 1$ and $n = 3, 4, \dots$, the zeros of $\phi_n$ correspond to those $\theta$ in the interval $[0, \pi]$ for which
$$\cos\!\big(\tfrac{2n+1}{2}\theta\big) = 2\cos\!\big(\tfrac{2n-1}{2}\theta\big).$$
Fix $n \ge 3$ and let $y_1 = \cos\!\big(\tfrac{2n+1}{2}\theta\big)$, $0 \le \theta \le \pi$, and $y_2 = 2\cos\!\big(\tfrac{2n-1}{2}\theta\big)$, $0 \le \theta \le \pi$. Then the zeros of $y_1$ are given by
$$\theta = \frac{2k+1}{2n+1}\pi, \qquad k = 0, 1, \dots, n,$$
while the zeros of $y_2$ are given by
$$\theta = \frac{2k+1}{2n-1}\pi, \qquad k = 0, 1, \dots, n-1.$$
Moreover,
$$\frac{2k+1}{2n+1}\pi < \frac{2k+1}{2n-1}\pi \le \frac{2(k+1)+1}{2n+1}\pi < \frac{2(k+1)+1}{2n-1}\pi, \qquad k = 0, 1, \dots, n-1,$$
where
$$\frac{2k+1}{2n-1}\pi < \frac{2k+3}{2n+1}\pi \iff k < n-1$$
and
$$\frac{2k+1}{2n-1}\pi = \frac{2k+3}{2n+1}\pi \iff k = n-1.$$
Therefore,
$$\frac{2k+1}{2n-1}\pi < \frac{2k+3}{2n+1}\pi, \qquad k = 0, 1, \dots, n-2,$$
while for $k = n-1$,
$$\frac{2k+1}{2n-1}\pi = \frac{2k+3}{2n+1}\pi = \pi.$$
From the previous discussion, we see that in each of the $n-1$ intervals
$$\Big[\frac{2k+1}{2n-1}\pi,\ \frac{2k+3}{2n+1}\pi\Big], \qquad k = 0, 1, \dots, n-2,$$
there exists a unique $\theta$ for which
$$\cos\!\big(\tfrac{2n+1}{2}\theta\big) = 2\cos\!\big(\tfrac{2n-1}{2}\theta\big),$$

i.e., for which $\phi_n(3 - 2\cos\theta) = 0$. For $k = n-1$, the $n$th such interval reduces to the point $\pi$. In this case, $\phi_n(3 - 2\cos\pi)$ is indeterminate of the form $\tfrac{0}{0}$. By l'Hôpital's Rule, we find that
$$\phi_n(5) = \phi_n(3 - 2\cos\pi) = (-1)^n(6n - 1) \neq 0.$$
Note also that $\phi_n(1) = \phi_n(3 - 2\cos 0) = -1$. Hence, $\phi_n(3 - 2\cos\theta)$ has $n-1$ roots in the interval $0 < \theta \le \pi$, i.e., $\phi_n(\lambda)$ has $n-1$ roots in the interval $1 < \lambda \le 5$, $n = 3, 4, \dots$. It suffices to show that $\phi_n(.1) > 0$ in order to show that the remaining root of each $\phi_n$ is strictly between $.1$ and $1$. To this end, define $v_1 = 1.9$, $v_2 = .71$ and $v_n = \phi_n(.1)$, $n = 3, 4, \dots$. Also define $u_1 = 1.9$, $u_2 = 4.51$ and
$$u_n = \det\begin{bmatrix} 1.9 & -1 & 0 & \cdots & 0 & 0 \\ -1 & 2.9 & -1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & 0 & \cdots & 2.9 & -1 \\ 0 & 0 & 0 & \cdots & -1 & 2.9 \end{bmatrix}, \qquad n = 3, 4, \dots,$$
so that
$$v_n = .9u_{n-1} - u_{n-2}, \qquad n = 3, 4, \dots.$$
Next define $q_1 = 2.9$, $q_2 = 7.41$ and
$$q_n = \det\begin{bmatrix} 2.9 & -1 & 0 & \cdots & 0 & 0 \\ -1 & 2.9 & -1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & 0 & \cdots & 2.9 & -1 \\ 0 & 0 & 0 & \cdots & -1 & 2.9 \end{bmatrix}, \qquad n = 3, 4, \dots,$$
so that
$$u_n = 1.9q_{n-1} - q_{n-2} \qquad \text{and} \qquad q_n = 2.9q_{n-1} - q_{n-2}, \qquad n = 3, 4, \dots.$$
Using the techniques of Goldberg (1986), we may solve this recursion with the given initial values to obtain
$$q_n = \frac{2.5^{n+1} - .4^{n+1}}{2.1}, \qquad n = 1, 2, \dots.$$

Consequently, for $n \ge 5$, we have
$$\begin{aligned} v_n &= .9u_{n-1} - u_{n-2} = .9(1.9q_{n-2} - q_{n-3}) - (1.9q_{n-3} - q_{n-4}) = 1.71q_{n-2} - 2.8q_{n-3} + q_{n-4} \\ &= \frac{1.71\,[2.5^{n-1} - .4^{n-1}] - 2.8\,[2.5^{n-2} - .4^{n-2}] + [2.5^{n-3} - .4^{n-3}]}{2.1} \\ &= 1.71\sum_{k=0}^{n-2} .4^k 2.5^{n-k-2} - 2.8\sum_{k=0}^{n-3} .4^k 2.5^{n-k-3} + \sum_{k=0}^{n-4} .4^k 2.5^{n-k-4} \\ &= 1.71\sum_{k=0}^{n-3} .4^k 2.5^{n-k-2} + 1.71(.4^{n-2}) - 1.12\sum_{k=0}^{n-3} .4^k 2.5^{n-k-2} + \sum_{k=0}^{n-4} .4^k 2.5^{n-k-4} \\ &= .59\sum_{k=0}^{n-3} .4^k 2.5^{n-k-2} + 1.71(.4^{n-2}) + \sum_{k=0}^{n-4} .4^k 2.5^{n-k-4} \\ &> 0. \end{aligned}$$
Since $v_1 = 1.9$, $v_2 = .71$, $v_3 = 2.159$ and $v_4 = 5.5511$, we see that $v_n = \phi_n(.1) > 0$, $n = 1, 2, \dots$. Also, since $\phi_n(1) < 0$, for all $n$, it follows that the remaining $n$th root of each $\phi_n$ is in the interval $(.1, 1)$, i.e., the roots of the $\phi_n$ are bounded away from 0. We have thus proved that $\inf_k \alpha_{C_k} > 0$. Since we are also assuming that $\inf\{r_j : r_j > 0\} > 0$ and $\inf_j\{s_j\} > 0$, it follows from Theorem 3.2 that problem (R) admits an optimal solution under our assumptions.

APPENDIX

Let $X$ and $Y$ be Hilbert spaces and $T\colon X \to Y$ a bounded linear operator with adjoint $T^*\colon Y \to X$. We consider the question of when the range $T(X)$ of $T$ is closed in $Y$. In Chapter IV of S. Goldberg (1966), the author showed that this is the case if and only if a certain operator index $\gamma_T$ is positive. In the context of normed linear spaces, this index is given by
$$\gamma_T = \inf\left\{\frac{\|Tx\|}{d(x, \ker(T))} : x \notin \ker(T)\right\},$$
where, as usual, $d(x, \ker(T)) = \inf\{\|x - y\| : y \in \ker(T)\}$. One of our objectives here is to give alternate characterizations of $\gamma_T$ in the Hilbert space context. To this end, we define the following additional non-negative indices

for $T$:
$$\alpha_T = \inf\{\|Tx\| : x \in \ker(T)^{\perp},\ \|x\| = 1\} = \inf\left\{\frac{\|Tx\|}{\|x\|} : x \in \ker(T)^{\perp},\ x \neq 0\right\},$$
$$\beta_T = \inf\left\{\frac{\|TT^*y\|}{\|T^*y\|} : y \in Y,\ T^*y \neq 0\right\} = \inf\{\|TT^*y\| : y \in Y,\ \|T^*y\| = 1\}.$$
We show that all three indices are the same.

THEOREM A.1. For the bounded operator $T$, the indices $\alpha_T$, $\beta_T$ and $\gamma_T$ are equal.

PROOF. First we show that $\alpha_T = \beta_T$. Recall (Dunford and Schwartz, 1964) that $\ker(T)^{\perp} = \overline{\mathrm{range}(T^*)}$ in $X$. Hence,
$$\alpha_T \le \inf\left\{\frac{\|Tx\|}{\|x\|} : x \in \mathrm{range}(T^*),\ x \neq 0\right\} = \beta_T.$$
Conversely, let $x \in \ker(T)^{\perp}$, $x \neq 0$. Then $x \in \overline{\mathrm{range}(T^*)}$, so that there exists a sequence $\{y_n\}$ in $Y$ such that $T^*y_n \to x$, i.e., $TT^*y_n \to Tx$, as $n \to \infty$. Consequently, $\|TT^*y_n\| \to \|Tx\|$ and $\|T^*y_n\| \to \|x\| \neq 0$, as $n \to \infty$, so that
$$\lim_{n \to \infty} \frac{\|TT^*y_n\|}{\|T^*y_n\|} = \frac{\|Tx\|}{\|x\|}.$$
Thus, a typical element of the set $\{\|Tx\|/\|x\| : x \in \ker(T)^{\perp},\ x \neq 0\}$ is the limit of a sequence from the set $\{\|TT^*y\|/\|T^*y\| : y \in Y,\ T^*y \neq 0\}$. Now suppose $\alpha_T < \beta_T$. Then, for $\epsilon = \beta_T - \alpha_T > 0$, there exists $x$ in $\ker(T)^{\perp}$, $x \neq 0$, such that
$$\left|\alpha_T - \frac{\|Tx\|}{\|x\|}\right| < \frac{\epsilon}{3}.$$
Also, there exists $y \in Y$ such that $T^*y \neq 0$ and
$$\left|\frac{\|Tx\|}{\|x\|} - \frac{\|TT^*y\|}{\|T^*y\|}\right| < \frac{\epsilon}{3},$$

so that
$$\left|\alpha_T - \frac{\|TT^*y\|}{\|T^*y\|}\right| < \frac{2\epsilon}{3} = \frac{2}{3}(\beta_T - \alpha_T),$$
i.e.,
$$\frac{\|TT^*y\|}{\|T^*y\|} < \frac{2}{3}\beta_T + \frac{1}{3}\alpha_T < \beta_T$$
by hypothesis. This is a contradiction, i.e., $\alpha_T = \beta_T$. We leave the proof of the fact that $\alpha_T = \gamma_T$ to the interested reader. It depends on the fact that $X$ is the direct sum of $\ker(T)$ and $\ker(T)^{\perp}$.

COROLLARY. The range $T(X)$ of $T$ is closed in $Y$ if and only if $\alpha_T > 0$.

PROOF. See Goldberg (1966) and Kato (1980).

The following lemma yields a lower bound for $\alpha_T$.

LEMMA A.2. Let the notation be as above. Suppose $T \neq 0$. Then
$$\alpha_T \ge \frac{1}{\|T\|}\,\inf\{\lambda : \lambda \neq 0,\ \lambda \text{ an eigenvalue of } TT^*\}.$$
PROOF. Let $y \in Y$, $T^*y \neq 0$. We may assume $y \in \ker(T^*)^{\perp}$, since replacing $y$ by its orthogonal projection onto $\ker(T^*)^{\perp}$ changes neither $T^*y$ nor $TT^*y$. Then
$$\frac{\|TT^*y\|}{\|T^*y\|} = \frac{\|TT^*y\|}{\|y\|} \cdot \frac{\|y\|}{\|T^*y\|} \ge \inf_{\substack{y \in \ker(T^*)^{\perp} \\ y \neq 0}} \frac{\|TT^*y\|}{\|y\|} \Big/ \sup_{y \neq 0} \frac{\|T^*y\|}{\|y\|} = \frac{1}{\|T^*\|}\,\inf_{\substack{y \in \ker(T^*)^{\perp} \\ y \neq 0}} \frac{\|TT^*y\|}{\|y\|} \ge \frac{1}{\|T^*\|}\,\inf\{\lambda : \lambda \neq 0,\ \lambda \text{ an eigenvalue of } TT^*\},$$
by the spectral resolution of the identity induced by $TT^*$. See Theorem 5.2.2 of Kadison and Ringrose (1983). This completes the proof since $\|T^*\| = \|T\|$.

By a similar argument, we obtain the following alternate characterization of $\alpha_T$ in an important special case.

LEMMA A.3. Let $T$ be a non-zero, bounded operator. If $T$ is self-adjoint and positive semi-definite, then $\alpha_T = \inf(\sigma_+(T))$.
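The equality of the indices in Theorem A.1 and the spectral characterization of Lemma A.3 are easy to check numerically for a symmetric positive semi-definite matrix. The sketch below (hypothetical random data, numpy assumed) estimates $\alpha_T$ and $\beta_T$ by sampling and compares them with the smallest positive eigenvalue of $T$.

```python
import numpy as np

# Sanity check of Theorem A.1 and Lemma A.3 for a hypothetical rank-deficient, symmetric
# positive semi-definite T: alpha_T, beta_T and inf(sigma_+(T)) should all coincide.
rng = np.random.default_rng(1)
G = rng.standard_normal((5, 3))
T = G @ G.T                                  # symmetric PSD, rank 3

tol = 1e-10
eigvals = np.linalg.eigvalsh(T)
alpha_spec = eigvals[eigvals > tol].min()    # inf sigma_+(T), the claim of Lemma A.3

# alpha_T: sample unit vectors in ker(T)^perp = range(T) and take the smallest ||Tx||.
U, s, _ = np.linalg.svd(T)
basis = U[:, s > tol]
xs = rng.standard_normal((20000, basis.shape[1])) @ basis.T
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
alpha_T = np.linalg.norm(xs @ T.T, axis=1).min()

# beta_T: sample y with T* y != 0 and take the smallest ||T T* y|| / ||T* y|| (T* = T).
ys = rng.standard_normal((20000, 5))
Ty = ys @ T.T
keep = np.linalg.norm(Ty, axis=1) > tol
beta_T = (np.linalg.norm(Ty[keep] @ T.T, axis=1) / np.linalg.norm(Ty[keep], axis=1)).min()

print(round(alpha_spec, 4), round(alpha_T, 4), round(beta_T, 4))   # approximately equal
```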

ACKNOWLEDGEMENTS

The second author was partially supported by the National Science Foundation under Grant DDM-9214894. The first and third authors were partially supported by Oakland University Fellowships. The authors are grateful to one of the referees for several helpful suggestions and comments.

REFERENCES

1. J.-P. Aubin (1979), Applied Functional Analysis, J. Wiley, New York.
2. P. J. Davis (1975), Interpolation and Approximation, Dover, New York.
3. E. V. Denardo (1982), Dynamic Programming: Models and Applications, Prentice-Hall, Englewood Cliffs.
4. A. L. Dontchev and T. Zolezzi (1992), Well-Posed Optimization Problems, Lecture Notes in Mathematics, No. 1543, Springer-Verlag, New York.
5. N. Dunford and J. T. Schwartz (1964), Linear Operators, Part I, J. Wiley-Interscience, New York.
6. A. V. Fiacco (1974), Convergence properties of local solutions of sequences of mathematical programming problems in general spaces, J. of Opt. Theory and Appl., 13: 1-12.
7. S. Goldberg (1966), Unbounded Linear Operators, Dover, New York.
8. S. Goldberg (1986), Introduction to Difference Equations, Dover, New York.
9. G. Helmberg (1969), Introduction to Spectral Theory in Hilbert Spaces, North-Holland, Amsterdam.
10. R. A. Horn and C. R. Johnson (1988), Matrix Analysis, Cambridge Univ. Press, Cambridge.
11. R. V. Kadison and J. R. Ringrose (1983), Fundamentals of the Theory of Operator Algebras, Vol. I: Elementary Theory, Academic Press, New York.
12. T. Kato (1980), Perturbation Theory for Linear Operators, Springer-Verlag, Berlin.
13. C. Kuratowski (1966), Topologie I, Academic Press, New York.
14. K. Y. Lee, S. Chow and R. O. Barr (1972), On the control of discrete-time distributed parameter systems, SIAM J. Control, 10: 361-376.
15. F. L. Lewis (1986), Optimal Control, J. Wiley, New York.
16. R. D. Milne (1980), Applied Functional Analysis, Pitman, Boston.
17. I. E. Schochetman, R. L. Smith and S.-K. Tsui (1995), Solution existence for time-varying infinite horizon control, J. Mathematical Analysis and Applications, 195: 135-147.