Shadow Prices in Infinite Dimensional Linear Programming

H. Edwin Romeijn
Rotterdam School of Management, Erasmus University Rotterdam, Rotterdam, The Netherlands

Robert L. Smith
Dept. of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109

Technical Report 97-08
May 1997

Shadow prices in infinite dimensional linear programming*

H. Edwin Romeijn†   Robert L. Smith‡

May 30, 1997

*This material is based on work supported by the National Science Foundation under Grant No. DDM-9214894.
†Rotterdam School of Management, Erasmus University Rotterdam, P.O. Box 1738, NL-3000 DR Rotterdam, The Netherlands; e-mail: E.Romeijn@fac.fbk.eur.nl. The work of this author was supported in part by a NATO Science Fellowship of the Netherlands Organization for Scientific Research (NWO).
‡Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109-2117; e-mail: rlsmith@umich.edu. The work of this author was supported in part by the Netherlands Organization for Scientific Research (NWO).

Abstract

We consider the class of linear programs that can be formulated with infinitely many variables and constraints but where each constraint has only finitely many variables. This class includes virtually all infinite horizon planning problems modeled as infinite stage linear programs. Examples include infinite horizon production planning under time-varying demands and equipment replacement under technological change. We provide, under a regularity condition, conditions that are both necessary and sufficient for strong duality to hold. Moreover, we show that, under these conditions, the Lagrangean function corresponding to any pair of primal and dual optimal solutions forms a linear support to the optimal value function, thus extending the shadow price interpretation of an optimal dual solution to the infinite dimensional case. We illustrate the theory through an application to production planning under time-varying demands and costs where strong duality is established.

1 Introduction

Consider the following doubly infinite linear programming problem:

(P)   min  Σ_{i=1}^∞ c_i'x_i
      subject to  A_{i,i-1}x_{i-1} + A_{ii}x_i ≥ b_i   (i = 1,2,...)   (1)
                  x_i ≥ 0   (i = 1,2,...)
                  x ∈ 𝒳

and its dual

(D)   max  Σ_{i=1}^∞ b_i'y_i
      subject to  A_{ii}'y_i + A_{i+1,i}'y_{i+1} ≤ c_i   (i = 1,2,...)   (2)
                  y_i ≥ 0   (i = 1,2,...)
                  y ∈ 𝒴

where c_i, x_i ∈ R^{n_i} and b_i, y_i ∈ R^{m_i} are all column vectors, and A_{ij} is an (m_i × n_j)-matrix, with A_{1,0} ≡ 0 and x_0 ≡ 0. The set 𝒳 (resp. 𝒴) is precisely the set of points in ∏_{i=1}^∞ R^{n_i} (resp. ∏_{i=1}^∞ R^{m_i}) for which the primal (dual) objective function is well-defined and finite. Note that 𝒳 and 𝒴 are linear subspaces of ∏_{i=1}^∞ R^{n_i} and ∏_{i=1}^∞ R^{m_i}, respectively.

Since every (countable) constraint system having the property that every constraint contains at most a finite number of variables may be transformed into an equivalent lower staircase system of the form (1) (see the appendix), (P) includes virtually all infinite horizon planning problems modeled as infinite stage linear programs. Examples of the latter are infinite horizon production planning under time-varying demand and cost data, equipment replacement under technological change, and capacity expansion under nonlinear demand for capacity.

In this paper we explore conditions under which optimal solutions x* and y* exist for (P) and (D) which satisfy strong duality. We also extend the interpretation of y* as a vector of shadow prices to this infinite dimensional domain. Although there is an extensive literature on the semi-infinite linear programming case, where either the number of variables or the number of constraints is allowed to be infinite (see for example Charnes, Cooper, and Kortanek [3], Borwein [1, 2], and Duffin, Jeroslow, and Karlovitz [5]), there has been correspondingly little published work on the doubly infinite case. Notable exceptions include Grinold [7, 8, 9], Grinold and Hopkins [10], Jones, Zydiak, and Hopp [12], and Romeijn, Smith, and Bean [13]. Grinold [7] provides conditions for existence of optimal dual solutions for a special class of doubly infinite linear programs and in [8] establishes weak duality for a stationary infinite stage linear program. The latter work is extended to convex programs in [9]. Jones, Zydiak, and Hopp [12] apply the general theory developed in Grinold and Hopkins [10] to a cost-stationary infinite horizon equipment replacement problem with time-varying demand to establish the existence of an optimal dual solution. Romeijn, Smith, and Bean [13] establish strong duality for doubly infinite linear programs with bounded variables under a transversality condition that dual component values are asymptotically zero. In this paper, we extend the results of Romeijn, Smith, and Bean [13] to the unbounded variable case and provide economic interpretations of the resulting optimal dual solutions.

Our approach, as in Grinold [7, 8, 9] and Romeijn, Smith, and Bean [13], is to derive properties for (P) and (D) indirectly through their inheritance from finite dimensional approximations (P(N)) and (D(N)) of (P) and (D). We form these by truncating (P) and (D), keeping only the first N vector variables and constraints from each. This approach avoids the potential failure of interior point or closedness properties to hold in the infinite dimensional space. Viewing the index i in (P) as corresponding to the ith period in a multiperiod planning problem, the above truncation of (P) corresponds to a finite horizon approximation to an

infinite horizon problem. Within infinite horizon optimization, this method is termed a planning or solution horizon approach to approximating the infinite horizon solution (see for example Schochetman and Smith [14, 15]).

2 Mathematical preliminaries

Following Romeijn, Smith, and Bean [13], we equip the product spaces ∏_{i=1}^∞ R^{n_i} and ∏_{i=1}^∞ R^{m_i} for the primal and dual problems with the corresponding product topologies inherited from their component Euclidean spaces. Thus a sequence {x^n} ⊂ ∏_{i=1}^∞ R^{n_i} converges precisely when its components converge in the ordinary Euclidean metric for each i. That is, x^n → x as n → ∞ if and only if x_i^n → x_i as n → ∞ for all i = 1,2,.... Similarly for {y^n} ⊂ ∏_{i=1}^∞ R^{m_i}. We note that this product topology is in fact metrizable since we have a countable product of metric spaces (see Dugundji [6], p. 191). For example, we may set
d(x, x') = sup_k min{d_k(x_k, x_k'), 1/k}
where d_k(·,·) is the ordinary Euclidean metric on R^{n_k} and x, x' ∈ ∏_{k=1}^∞ R^{n_k}, and similarly for the dual space ∏_{i=1}^∞ R^{m_i}.

Such infinite dimensional spaces share a disturbingly large number of what would be viewed as pathological properties in the finite dimensional context. For example, for any open set, all but finitely many of its projections on the coordinate spaces R^{n_i} (i = 1,2,...) must equal those spaces R^{n_i} (see e.g. Dugundji [6], p. 99, example 4). Therefore, all compact sets, as well as the nonnegative orthant, have empty interiors. In particular, the closed unit ball is not compact. We intend to avoid potential difficulties arising from these properties by taking the indirect approach of approximating (P) and (D) by finite dimensional surrogates (P(N)) and (D(N)). We then demonstrate the inheritance of conventional finite dimensional duality properties for (P) and (D) by taking the limit as N converges to infinity.

The finite dimensional subproblems (P(N)) and (D(N)) are formed by dropping all vector variables and constraints beyond the first N of (P) and (D), respectively. That is, (P(N)) is the finite dimensional linear program

(P(N))   min  Σ_{i=1}^N c_i'x_i
         subject to  A_{i,i-1}x_{i-1} + A_{ii}x_i ≥ b_i   (i = 1,...,N)   (3)
                     x_i ≥ 0   (i = 1,...,N)

and (D(N)) is

(D(N))   max  Σ_{i=1}^N b_i'y_i
         subject to  A_{ii}'y_i + A_{i+1,i}'y_{i+1} ≤ c_i   (i = 1,...,N-1)   (4)
                     A_{NN}'y_N ≤ c_N   (5)
                     y_i ≥ 0   (i = 1,...,N)
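To make the truncation scheme concrete, the following minimal sketch (not taken from the paper; the scalar block data A_{ii} = 1, A_{i,i-1} = 0.5, b_i = 1 and c_i = 0.9^{i-1} are chosen arbitrarily for illustration, and an off-the-shelf LP solver is assumed) assembles and solves (P(N)) and (D(N)) for several horizons N:

```python
import numpy as np
from scipy.optimize import linprog

def solve_truncations(N, alpha=0.9):
    """Build and solve (P(N)) and (D(N)) for a toy staircase instance."""
    c = np.array([alpha**i for i in range(N)])   # c_i = alpha**(i-1), i = 1,...,N
    b = np.ones(N)                               # b_i = 1
    A = np.eye(N) + 0.5 * np.eye(N, k=-1)        # A_ii = 1 on the diagonal, A_{i,i-1} = 0.5 below it
    # (P(N)): min c'x  s.t.  A x >= b, x >= 0   (linprog uses <=, hence the sign flips)
    primal = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None), method="highs")
    # (D(N)): max b'y  s.t.  A'y <= c, y >= 0   (constraints (4)-(5))
    dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(0, None), method="highs")
    return primal.fun, -dual.fun

for N in (5, 10, 20, 40):
    C_N, B_N = solve_truncations(N)
    print(f"N = {N:3d}   C*(N) = {C_N:.6f}   B*(N) = {B_N:.6f}")
```

Whether the common truncated values C*(N) = B*(N) and the corresponding truncated solutions say anything about (P) and (D) themselves is precisely what the results below address.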

As in Romeijn, Smith, and Bean [13], note that when (P(N)) admits an optimal solution, since (D(N)) is the ordinary linear programming dual of (P(N)), classical weak and strong duality hold for the pair ((P(N)), (D(N))). In the next section we develop conditions that, under a regularity assumption, are both necessary and sufficient for strong, and hence weak, duality to hold for (P) and (D).

We end this section with a summary of the notation we will use. It will at times be convenient to think of the feasible region X(N) of (P(N)) as embedded in ∏_{j=1}^∞ R^{n_j} instead of ∏_{j=1}^N R^{n_j}. This can be accomplished without loss of generality by arbitrarily extending the first N elements to elements of ∏_{j=1}^∞ R^{n_j}. We shall use the notation X(N) for both, where the proper interpretation should be clear from the context. Similarly for the dual feasible region Y(N) of (D(N)). In the same way, feasible elements x(N) and y(N) of X(N) and Y(N), respectively, can be viewed as finite or infinite vectors.

C(x) = Σ_{i=1}^∞ c_i'x_i
B(y) = Σ_{i=1}^∞ b_i'y_i
𝒳 = set of points for which C(x) is well-defined and finite
𝒴 = set of points for which B(y) is well-defined and finite
X = feasible region of (P)
Y = feasible region of (D)
X(N) = feasible region of (P(N))
Y(N) = feasible region of (D(N))
C* = optimal value of (P)
B* = optimal value of (D)
X* = {x ∈ X : C(x) = C*}
Y* = {y ∈ Y : B(y) = B*}
C*(N) = optimal value of (P(N))
B*(N) = optimal value of (D(N))
X*(N) = {x ∈ X(N) : C(x) = C*(N)}
Y*(N) = {y ∈ Y(N) : B(y) = B*(N)}

3 Weak and strong duality

Romeijn, Smith, and Bean [13] provide an adaptation of the example from Grinold and Hopkins [11], where a pair of linear programs of the form (P) and (D) is given for which not only strong duality but also weak duality fails to hold. We will impose the following assumption on (P) for most of the results that follow to help eliminate such pathological cases.

Assumption 3.1 For all x ∈ X, y ∈ Y,
liminf_{k→∞} y_{k+1}'A_{k+1,k}x_k ≥ 0.

Remarks:

1. All of the results to follow that invoke assumption 3.1 remain valid under the alternative assumption that the condition in assumption 3.1 holds for all y in some subset Y' ⊂ Y such that Y' ∩ Y* ≠ ∅.

2. Assumption 3.1 is satisfied in the important special case where the off-diagonal matrices {A_{k+1,k}} are (eventually) nonnegative, since x_k, y_k ≥ 0 for k = 1,2,... when x ∈ X, y ∈ Y.

In view of remark 1, and the fact that we will demonstrate later that optimal dual vectors can be interpreted as shadow prices, assumption 3.1 roughly requires that one be no worse off in the long run with respect to the infinite dimensional problem by ending the Nth period in any Nth-period feasible state than by ending it in the 0-state (see Schochetman and Smith [15] for the formal definition of states in this context). This assumption is clearly met for problems where the state variables denote some kind of inventory, as in production planning problems.

In order to establish weak duality under assumption 3.1, we begin with a lemma, proven in the appendix.

Lemma 3.2 For all x ∈ X, y ∈ Y,
C(x) ≥ B(y) + limsup_{k→∞} y_{k+1}'A_{k+1,k}x_k.

Theorem 3.3 (Weak duality) Under assumption 3.1, any feasible value C(x) of the primal problem (P) is bounded from below by any feasible value B(y) of the dual problem (D), i.e.
B(y) ≤ C(x)
for all x ∈ X and y ∈ Y.

Proof: This follows immediately from lemma 3.2 and assumption 3.1. □

Remark: Theorem 3.3 continues to hold if assumption 3.1 is relaxed to the requirement that
limsup_{k→∞} y_{k+1}'A_{k+1,k}x_k ≥ 0
for all x ∈ X, y ∈ Y.

Our goal now is to provide conditions that are both necessary and sufficient for strong duality to hold, i.e. that optimal solutions x* and y* exist to (P) and (D) respectively with C(x*) = B(y*). As we shall see, the conditions are analogous to those of finite dimensional duality (namely, primal feasibility, dual feasibility, and complementary slackness), together with an additional necessary condition we term transversality.

Definition 3.4 The pair (x, y) ∈ 𝒳 × 𝒴 is said to satisfy the transversality condition if
liminf_{k→∞} y_{k+1}'A_{k+1,k}x_k = 0.

We begin with a lemma and its corollary.

Lemma 3.5 For all x ∈ X, y ∈ Y the following three statements are equivalent:
(i) x and y satisfy the complementary slackness conditions, i.e.
(A_{i,i-1}x_{i-1} + A_{ii}x_i − b_i)'y_i = 0 for i = 1,2,...
and
(c_i − A_{ii}'y_i − A_{i+1,i}'y_{i+1})'x_i = 0 for i = 1,2,...;
(ii) C(x) = B(y) + lim_{k→∞} y_{k+1}'A_{k+1,k}x_k;
(iii) C(x) = B(y) + liminf_{k→∞} y_{k+1}'A_{k+1,k}x_k.

Proof: See appendix. □

Corollary 3.6 For all x ∈ X, y ∈ Y that satisfy the transversality condition the following two statements are equivalent:
(i) x and y satisfy the complementary slackness conditions;
(ii) C(x) = B(y).

We are now ready to state conditions for strong duality to hold.

Theorem 3.7 (Strong duality) Let assumption 3.1 be satisfied. Then for all x* ∈ 𝒳, y* ∈ 𝒴 the following two statements are equivalent:
(i) x* is primal feasible, y* is dual feasible, and they satisfy the complementary slackness and transversality conditions;
(ii) x* is an optimal solution of (P), y* is an optimal solution of (D), and C(x*) = B(y*), i.e. strong duality holds for (P) and (D).

Proof:
(i) ⇒ (ii) By corollary 3.6 we have that C(x*) = B(y*). But then weak duality implies that x* is optimal for (P) and y* is optimal for (D).
(ii) ⇒ (i) By assumption 3.1, together with lemma 3.2 and the fact that C(x*) = B(y*), we have that
0 ≤ liminf_{k→∞} y*_{k+1}'A_{k+1,k}x*_k ≤ limsup_{k→∞} y*_{k+1}'A_{k+1,k}x*_k ≤ C(x*) − B(y*) = 0
so that
liminf_{k→∞} y*_{k+1}'A_{k+1,k}x*_k = 0,
i.e. the transversality condition is satisfied. Now by corollary 3.6 we have that x* and y* satisfy the complementary slackness relations. □
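In practice the criterion of theorem 3.7 can only ever be inspected on finitely many blocks. The following hypothetical helper (a sketch under the assumption that the block data and a candidate pair are available as lists of NumPy arrays; all names are ours, not the paper's) checks primal and dual feasibility and complementary slackness on the first K blocks and returns the cross terms y_{k+1}'A_{k+1,k}x_k, whose liminf must vanish for transversality:

```python
import numpy as np

def check_theorem_3_7(A_diag, A_sub, b, c, x, y, K, tol=1e-8):
    """Check blocks 1..K of the criterion in theorem 3.7.  Lists are 0-based:
    A_diag[i] = A_{i+1,i+1}, A_sub[i] = A_{i+2,i+1}, b[i] = b_{i+1}, and so on.
    x must contain at least K blocks and y at least K+1 blocks."""
    cross = []
    for i in range(K):
        lhs = A_diag[i] @ x[i] + (A_sub[i - 1] @ x[i - 1] if i > 0 else 0.0)
        primal_slack = lhs - b[i]                                        # must be >= 0
        dual_slack = c[i] - A_diag[i].T @ y[i] - A_sub[i].T @ y[i + 1]   # must be >= 0
        feasible = (np.all(x[i] >= -tol) and np.all(y[i] >= -tol)
                    and np.all(primal_slack >= -tol) and np.all(dual_slack >= -tol))
        complementary = (abs(y[i] @ primal_slack) <= tol      # (A_{i,i-1}x_{i-1}+A_{ii}x_i-b_i)'y_i = 0
                         and abs(dual_slack @ x[i]) <= tol)   # (c_i-A_{ii}'y_i-A_{i+1,i}'y_{i+1})'x_i = 0
        if not (feasible and complementary):
            raise ValueError(f"criterion violated at block {i + 1}")
        cross.append(float(y[i + 1] @ (A_sub[i] @ x[i])))      # y_{i+1}'A_{i+1,i}x_i
    return cross
```

Such a finite check can support, but never replace, an analytic verification of the kind carried out for the production planning problem in section 5.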

Remark: The implication (i) ⇒ (ii) in theorem 3.7 continues to hold if assumption 3.1 is relaxed to the requirement that weak duality holds.

Theorem 3.7 provides a criterion for strong, and hence weak, duality to hold. Using this theorem, it can be verified that an educated guess of the primal and dual solutions (obtained using, for instance, economic insights; see section 5 for an example) is indeed a pair of optimal solutions. We now turn to establishing sufficient conditions for strong duality that are more readily verifiable from the problem data. The challenge of invoking the criterion in theorem 3.7 is to propose candidates x ∈ 𝒳, y ∈ 𝒴 for testing the conditions in (i). The following theorem establishes that strong duality holds for certain accumulation points of finite dimensional optima for (P(N)) and (D(N)), respectively. These are termed algorithmic optima by Schochetman and Smith [15].

Theorem 3.8 Suppose assumption 3.1 holds. Suppose x* ∈ 𝒳, y* ∈ 𝒴, (x*, y*) ∈ lim sup_{N→∞}(X*(N) × Y*(N)) (the set of all accumulation points of sequences drawn from {X*(N) × Y*(N)}), and transversality holds for the pair (x*, y*). Then x* ∈ X*, y* ∈ Y*, and C(x*) = B(y*), i.e. strong duality holds for (P) and (D).

Proof: It is easily shown that the following result holds. Suppose x*(N) ∈ X*(N), y*(N) ∈ Y*(N) are optimal solutions for (P(N)) and (D(N)), respectively, for N = 1,2,.... Furthermore, assume that for some subsequence {N_k} of the positive integers that is independent of i, we have
lim_{k→∞} x_i*(N_k) = x_i*   (i = 1,2,...)
and
lim_{k→∞} y_i*(N_k) = y_i*   (i = 1,2,...).
Then x* satisfies the primal linear inequality constraints, y* satisfies the dual linear inequality constraints, and x* and y* satisfy the complementary slackness conditions. The desired result now follows immediately from theorem 3.7. □

The following application of the above result provides sufficient conditions for strong duality to hold, including a guarantee of existence of primal and dual optimal solutions, that should in practice be easily checked.

Corollary 3.9 Suppose that (P(N)) is feasible for all N and that there exist vectors u_i < ∞ such that ∏_{i=1}^∞[0,u_i] ⊂ 𝒳 and X*(N) ∩ ∏_{i=1}^∞[0,u_i] ≠ ∅ for all N, and vectors v_i < ∞ such that ∏_{i=1}^∞[0,v_i] ⊂ 𝒴 and Y*(N) ∩ ∏_{i=1}^∞[0,v_i] ≠ ∅ for all N. Moreover, suppose that
lim_{i→∞} v_{i+1}'|A_{i+1,i}|u_i = 0,

where |A_{i+1,i}| is obtained by taking the absolute value of all entries in A_{i+1,i}. Then strong duality holds for (P) and (D), i.e. there exist optimal primal and dual solutions x* and y* with C(x*) = B(y*).

Proof: Without loss of generality we may assume that X*(N) × Y*(N) ⊂ ∏_{i=1}^∞[0,u_i] × ∏_{i=1}^∞[0,v_i]. Moreover, since (P(N)) is feasible for all N, X*(N) × Y*(N) ≠ ∅ for all N. Now let {(x*(N), y*(N))} be a sequence of optimal solutions to (P(N)) and (D(N)), i.e. (x*(N), y*(N)) ∈ X*(N) × Y*(N) for all N. Since this sequence is contained in the compact set ∏_{i=1}^∞[0,u_i] × ∏_{i=1}^∞[0,v_i], it has a convergent subsequence
(x*(N_k), y*(N_k)) → (x*, y*)   as k → ∞,
so (x*, y*) ∈ (𝒳 × 𝒴) ∩ lim sup_{N→∞}(X*(N) × Y*(N)). Now consider, without loss of optimality, a pair x ∈ X, y ∈ Y. Then
|y_{i+1}'A_{i+1,i}x_i| ≤ |y_{i+1}|'|A_{i+1,i}||x_i| ≤ v_{i+1}'|A_{i+1,i}|u_i → 0   as i → ∞.
Hence assumption 3.1 holds without loss of optimality, as does transversality for the pair (x*, y*). Now by theorem 3.8, strong duality holds. □

Note that while the upper bounds used in corollary 3.9 have to be satisfied by some primal and dual optimal solution, it is not necessary to explicitly take them into account in the problem formulation of the primal. Doing so would necessitate the introduction of dual variables corresponding to the primal upper bounds, with the corresponding complications in establishing transversality, and thus strong duality (see Romeijn, Smith, and Bean [13]).

Since the existence guarantee offered by corollary 3.9 for a pair x* and y* satisfying strong duality is not constructive, in practice we are led to their numerical approximation by x*(N) and y*(N), respectively. The following theorem provides a condition, which is satisfied for example when the off-diagonal matrices {A_{k+1,k}} are eventually nonnegative (see remark 2 following assumption 3.1), that assures that x*(N) and y*(N) will be close to x* and y* in value for N large. This result is perhaps surprising since, even when x*(N) converges to x* in policy, the failure of C(x) to be continuous in general prevents us from thereby concluding value convergence.

Theorem 3.10 (Value convergence) Let x* be an optimal solution of (P), let y* be an optimal solution of (D), and suppose strong duality holds, i.e. C(x*) = B(y*). Furthermore, suppose that y* ∈ ∩_{N=M}^∞ Y*(N) for some M < ∞. Then
lim_{N→∞} C*(N) = C* = B* = lim_{N→∞} B*(N),
i.e. value convergence holds in both the primal and dual problem.

Proof: Since B*(N) = C*(N) for all N by ordinary finite dimensional duality, we have
lim_{N→∞} B*(N) = lim_{N→∞} C*(N)
≤ lim_{N→∞} C(x*; N)   (since x* ∈ X(N))
= C* = B*
= lim_{N→∞} B(y*; N)
≤ lim_{N→∞} B(y*(N); N)   (since y* ∈ ∩_{N=M}^∞ Y*(N) for some M < ∞)
= lim_{N→∞} B*(N),
where C(x; N) = Σ_{i=1}^N c_i'x_i and B(y; N) = Σ_{i=1}^N b_i'y_i denote the truncated objective values, so that all of the above relations are satisfied as equalities. □

In instances where strong duality holds, one is led to ask whether the standard economic interpretation of dual variables as prices continues to hold. The answer is affirmative, as we prove in the next section.

4 Optimal dual solutions as shadow prices

Much of the power of duality in linear programming derives from the insights provided by the economic interpretation of dual solutions as bounds on optimal buying and selling prices of resources or, as in this case, requirements. The precise statement of this property in the finite dimensional context is that the Lagrangean functional is a linear support to the optimal value function of the problem. We now state and prove the corresponding relationship in the infinite dimensional case.

Define the Lagrangean function corresponding to (P) and (D) as
L(x, y) = Σ_{i=1}^∞ c_i'x_i − Σ_{i=1}^∞ y_i'(A_{i,i-1}x_{i-1} + A_{ii}x_i − b_i)
for (x, y) ∈ S, where S is the subset of pairs in ∏_{i=1}^∞ R^{n_i} × ∏_{i=1}^∞ R^{m_i} for which L is well-defined on the extended reals. Note that X × {y : y ≥ 0} ⊂ S, since x ∈ X ⊂ 𝒳 assures that C(x) is well-defined and finite, while y ≥ 0, together with feasibility of x, assures that the sequence of partial sums of the second series in the definition of L(x, y) is nondecreasing. We may also replace the right hand side vector b of (P) by the vector variable z and implicitly incorporate the dependence of the Lagrangean on z so that, in an abuse of notation, we also write
L(x, y; z) = Σ_{i=1}^∞ c_i'x_i − Σ_{i=1}^∞ y_i'(A_{i,i-1}x_{i-1} + A_{ii}x_i − z_i)

for (x, y, z) ∈ T, where T is the subset of triplets in ∏_{i=1}^∞ R^{n_i} × ∏_{i=1}^∞ R^{m_i} × ∏_{i=1}^∞ R^{m_i} for which L is well-defined. L(x, y; b) will simply be denoted by L(x, y). Now define, for all z ∈ ∏_{i=1}^∞ R^{m_i}, the optimal value function of (P) as
v(z) = inf{C(x) : x ∈ X_z}
where
X_z = {x ∈ 𝒳 : x ≥ 0, A_{i,i-1}x_{i-1} + A_{ii}x_i ≥ z_i for all i}
and inf ∅ = +∞.

Lemma 4.1 The optimal value function v is convex.

Proof: Define the function V : ∏_{i=1}^∞ R^{m_i} × 𝒳 → R ∪ {+∞} as follows:
V(z, x) = C(x) if x ∈ X_z, and V(z, x) = +∞ otherwise.
Then the value function can be written as
v(z) = inf_{x∈𝒳} V(z, x).
The result now follows immediately if V is jointly convex in x and z. First, note that the domain of V is convex. Now let x^1, x^2 ∈ 𝒳, z^1, z^2 ∈ ∏_{i=1}^∞ R^{m_i}, and 0 ≤ λ ≤ 1. Then
V(λz^1 + (1−λ)z^2, λx^1 + (1−λ)x^2) = C(λx^1 + (1−λ)x^2) if λx^1 + (1−λ)x^2 ∈ X_{λz^1+(1−λ)z^2}, and +∞ otherwise.
Now first suppose that x^1 ∈ X_{z^1} and x^2 ∈ X_{z^2}. Then, for all i,
A_{i,i-1}(λx^1_{i-1} + (1−λ)x^2_{i-1}) + A_{ii}(λx^1_i + (1−λ)x^2_i)
= λ(A_{i,i-1}x^1_{i-1} + A_{ii}x^1_i) + (1−λ)(A_{i,i-1}x^2_{i-1} + A_{ii}x^2_i)
≥ λz^1_i + (1−λ)z^2_i.
Hence λx^1 + (1−λ)x^2 ∈ X_{λz^1+(1−λ)z^2}, and
V(λz^1 + (1−λ)z^2, λx^1 + (1−λ)x^2) = C(λx^1 + (1−λ)x^2) ≤ λC(x^1) + (1−λ)C(x^2) = λV(z^1, x^1) + (1−λ)V(z^2, x^2).
If x^1 ∉ X_{z^1}, then V(z^1, x^1) = +∞, so we have
V(λz^1 + (1−λ)z^2, λx^1 + (1−λ)x^2) ≤ λV(z^1, x^1) + (1−λ)V(z^2, x^2),
and similarly for x^2 ∉ X_{z^2}. Thus V is jointly convex, and the desired result follows. □

Theorem 4.2 Let x* ∈ X, y* ∈ Y satisfy transversality and complementary slackness. Moreover, let assumption 3.1 be satisfied. Then the function L(x*, y*; z) is a linear support of the optimal value function v at z = b, i.e.
L(x*, y*; z) ≤ v(z)
for all z such that (x*, y*, z) ∈ T, and
L(x*, y*; b) = v(b).

Proof: Fix z ∈ ∏_{i=1}^∞ R^{m_i} such that (x*, y*, z) ∈ T, and fix x ∈ X_z. We will first show that L(x*, y*; z) ≤ C(x) for all x ∈ X_z:

L(x*, y*; z) = Σ_{i=1}^∞ (c_i'x*_i − y*_i'(b_i − z_i))   (by complementary slackness)
= lim_{k→∞} Σ_{i=1}^k (c_i'x*_i + y*_i'(z_i − b_i))
≤ limsup_{k→∞} Σ_{i=1}^k (c_i'x*_i + y*_i'(A_{i,i-1}x_{i-1} + A_{ii}x_i − b_i))   (since x ∈ X_z and y* ≥ 0)
= limsup_{k→∞} Σ_{i=1}^k y*_i'(A_{i,i-1}x_{i-1} + A_{ii}x_i)   (using C(x*) = B(y*), which holds by theorem 3.7)
= limsup_{k→∞} ( Σ_{i=1}^k (y*_i'A_{ii}x_i + y*_{i+1}'A_{i+1,i}x_i) − y*_{k+1}'A_{k+1,k}x_k )
≤ limsup_{k→∞} Σ_{i=1}^k (y*_i'A_{ii}x_i + y*_{i+1}'A_{i+1,i}x_i) − liminf_{k→∞} y*_{k+1}'A_{k+1,k}x_k
≤ limsup_{k→∞} Σ_{i=1}^k (A_{ii}'y*_i + A_{i+1,i}'y*_{i+1})'x_i
≤ limsup_{k→∞} Σ_{i=1}^k c_i'x_i
= C(x).

Thus
L(x*, y*; z) ≤ inf{C(x) : x ∈ X_z} = v(z).
Moreover,
L(x*, y*; b) = Σ_{i=1}^∞ c_i'x*_i = C(x*) = C* = v(b)
from the first equality above (with z = b). □

We can also extend the standard saddle point property to the infinite dimensional setting.

Theorem 4.3 (Saddle point property) Let x* ∈ X, y* ∈ Y satisfy transversality and complementary slackness. Moreover, let assumption 3.1 be satisfied. Then (x*, y*) is a saddle point of the Lagrangean L, i.e.
L(x*, y) ≤ L(x*, y*) ≤ L(x, y*)
for all (x*, y), (x, y*) ∈ S.

Proof: By theorem 3.7 strong duality holds, so x* is optimal for (P), y* is optimal for (D), and C(x*) = B(y*). First observe that, by complementary slackness,
L(x*, y*) = C(x*) = B(y*),
and in particular (x*, y*) ∈ S. Now, for (x, y*) ∈ S,
L(x, y*) = Σ_{i=1}^∞ (c_i'x_i − y*_i'A_{i,i-1}x_{i-1} − y*_i'A_{ii}x_i + y*_i'b_i)
= lim_{k→∞} ( Σ_{i=1}^k ((c_i − A_{ii}'y*_i − A_{i+1,i}'y*_{i+1})'x_i + y*_i'b_i) + y*_{k+1}'A_{k+1,k}x_k )
= Σ_{i=1}^∞ (c_i − A_{ii}'y*_i − A_{i+1,i}'y*_{i+1})'x_i + Σ_{i=1}^∞ y*_i'b_i + lim_{k→∞} y*_{k+1}'A_{k+1,k}x_k
≥ Σ_{i=1}^∞ y*_i'b_i
= B(y*)
= L(x*, y*).
Similarly, it can be proven that, for all (x*, y) ∈ S,
L(x*, y) ≤ C(x*) = L(x*, y*). □
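Before turning to the application, the linear support property of theorem 4.2 can be previewed numerically on a finite truncation. The sketch below (a hedged illustration using the same arbitrarily chosen staircase instance as before; it is not the paper's example) solves (P(N)) and (D(N)), perturbs the requirement vector b to z, and verifies that the re-solved optimal value never falls below the Lagrangean support C*(N) + y*(N)'(z − b):

```python
import numpy as np
from scipy.optimize import linprog

def toy_instance(N, alpha=0.9):
    c = np.array([alpha**i for i in range(N)])     # c_i
    b = np.ones(N)                                 # b_i
    A = np.eye(N) + 0.5 * np.eye(N, k=-1)          # staircase constraint matrix
    return c, b, A

N = 30
c, b, A = toy_instance(N)
# Solve (P(N)) and (D(N)); the dual optimal solution y* plays the role of a shadow price vector.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None), method="highs")
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(0, None), method="highs")
value_b, y_star = primal.fun, dual.x

rng = np.random.default_rng(0)
for _ in range(5):
    z = b + 0.2 * rng.standard_normal(N)           # perturbed requirements
    value_z = linprog(c, A_ub=-A, b_ub=-z, bounds=(0, None), method="highs").fun
    support = value_b + y_star @ (z - b)           # L(x*, y*; z) after complementary slackness
    print(f"v_N(z) = {value_z:.4f}   >=   support = {support:.4f}")
```

The re-solved value can never fall below the support, so y*(N) prices marginal changes in the requirements; this is the finite dimensional shadow price picture that theorems 4.2 and 4.3 extend to (P) and (D).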

5 An application to infinite horizon production planning

Consider the following linear programming infinite horizon production planning problem (see Denardo [4]):

(P)   min  Σ_{i=1}^∞ α^{i-1}(k_iP_i + h_iI_i)
      subject to  I_{i-1} + P_i − I_i ≥ d_i   (i = 1,2,...)
                  P_i ≥ 0   (i = 1,2,...)
                  I_i ≥ 0   (i = 1,2,...)
                  (P, I) ∈ 𝒳

where 𝒳 is the set of points where the objective function is well-defined. We assume that (d, 0) ∈ 𝒳, so that X ≠ ∅. Furthermore, I_i is the net inventory ending period i, with I_0 = 0, P_i is the production in period i, d_i ≥ 0 is the demand for production in period i, k_i > 0 is the production cost, and h_i > 0 is the inventory holding cost for period i = 1,2,.... The factor α is the discount factor reflecting the time value of money, where 0 < α < 1. The dual (D) becomes

(D)   max  Σ_{i=1}^∞ d_iw_i

      subject to  w_i ≤ α^{i-1}k_i   (i = 1,2,...)   (6)
                  −w_i + w_{i+1} ≤ α^{i-1}h_i   (i = 1,2,...)
                  w_i ≥ 0   (i = 1,2,...)
                  w ∈ W

where W is the set of points where the dual objective function is well-defined. As (d, 0) ∈ 𝒳, it is feasible to produce the demand in every period and to never hold any inventory, and its cost
Σ_{i=1}^∞ α^{i-1}k_id_i   (7)
exists and is finite. This, together with (6), implies that the constraint w ∈ W in (D) is redundant, i.e. the dual objective function is well-defined for all solutions satisfying the first set of inequality constraints in (D).

In this section we will analytically derive the optimal solution to (D). Next we will construct a feasible solution to (P) such that the pair of solutions satisfies the complementary slackness and transversality conditions. We then conclude from theorem 3.7 and the remark following it that strong duality holds, and thus that the primal feasible solution thereby constructed is optimal.

First observe that we can regroup and rewrite the dual constraints as follows:
w_1 ≤ k_1
and
w_i ≤ min(α^{i-1}k_i, α^{i-2}h_{i-1} + w_{i-1})   (i = 2,3,...).
Then the optimal solution to (D) is clearly given recursively by
w*_1 = k_1
and
w*_i = min(α^{i-1}k_i, α^{i-2}h_{i-1} + w*_{i-1})   (i = 2,3,...).

The objective is now to find a solution (P*, I*) ∈ X which, together with w*, satisfies the complementary slackness relations. That is, we wish to find a (P*, I*) satisfying
(α^{i-1}k_i − w*_i)P*_i = 0   (8)
(α^{i-1}h_i + w*_i − w*_{i+1})I*_i = 0   (9)
and
(I*_{i-1} + P*_i − I*_i − d_i)w*_i = 0   (10)
for i = 1,2,.... Since w*_i > 0 for all i (since k_i > 0 for all i), by equation (10) (P*, I*) needs to satisfy
I*_{i-1} + P*_i − I*_i = d_i   (11)
for all i.
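As a numerical illustration of this recursion and of the construction that follows (a sketch with arbitrarily chosen costs and demands, computed for finitely many periods only; it is not data from the paper), one can compute w*, read off the index sets N and N' that are defined next, and build a complementary primal schedule:

```python
import numpy as np

# Illustrative data (assumed for this sketch, not from the paper).
T, alpha = 12, 0.9                                   # periods computed; the true problem is infinite
k = np.array([3.0, 1.0, 2.5, 2.5, 1.2, 3.0, 2.0, 1.5, 2.8, 1.0, 2.0, 2.0])   # k_i > 0
h = np.full(T, 0.4)                                                           # h_i > 0
d = np.array([1.0, 2.0, 0.0, 1.5, 1.0, 0.5, 2.0, 1.0, 0.0, 1.0, 1.0, 2.0])   # d_i >= 0

# Dual recursion: w*_1 = k_1, w*_i = min(alpha**(i-1)*k_i, alpha**(i-2)*h_{i-1} + w*_{i-1}).
w = np.empty(T)
w[0] = k[0]
for i in range(1, T):                                # 0-based index i represents period i+1
    w[i] = min(alpha**i * k[i], alpha**(i - 1) * h[i - 1] + w[i - 1])

# Index sets of the construction below: N (production strictly required) and
# N' (periods where w*_i = alpha**(i-1)*k_i); any N ⊂ M ⊂ N' is admissible.
N_set = {1} | {i + 1 for i in range(1, T) if w[i] < alpha**(i - 1) * h[i - 1] + w[i - 1] - 1e-12}
N_prime = {1} | {i + 1 for i in range(1, T) if abs(w[i] - alpha**i * k[i]) < 1e-12}
M = sorted(N_prime)                                  # one admissible choice of production epochs

# Complementary primal schedule: each epoch produces the demand up to the next
# epoch, and inventories follow from equation (11).
P, I = np.zeros(T), np.zeros(T)
for e, nxt in zip(M, M[1:] + [T + 1]):
    P[e - 1] = d[e - 1:nxt - 1].sum()
for i in range(T):
    I[i] = (I[i - 1] if i > 0 else 0.0) + P[i] - d[i]

primal_cost = sum(alpha**i * (k[i] * P[i] + h[i] * I[i]) for i in range(T))
print("w* =", np.round(w, 4))
print("N =", sorted(N_set), "  N' =", sorted(N_prime))
print(f"primal cost = {primal_cost:.4f}   dual value = {float(d @ w):.4f}")
```

For these data the printed primal cost and dual value coincide up to rounding, in line with the per-block identity used in the proof of theorem 5.1 below.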

So (P*, I*) will satisfy the primal inequalities as strict equalities. Hence if we solve equation (8) for P*, then I* is determined by (11).

Now define N_1 = N'_1 = 1 and, recursively,
N_ℓ = arg min{i ≥ N_{ℓ-1} + 1 : w*_i < α^{i-2}h_{i-1} + w*_{i-1}}   (ℓ = 2,3,...)
and
N'_ℓ = arg min{i ≥ N'_{ℓ-1} + 1 : w*_i = α^{i-1}k_i}   (ℓ = 2,3,...).
Furthermore, let
N = {N_1, N_2, ...}
and
N' = {N'_1, N'_2, ...}.
Clearly, from the definition of w*, N ⊂ N'. Now define a production epoch associated with a feasible production schedule P as a period i where P_i > 0, i.e. a period where we produce.

Theorem 5.1 Let M be the set of production epochs corresponding to a feasible production schedule P*, where N ⊂ M ⊂ N'. Then P* is an optimal production schedule. Moreover, all optimal production schedules to (P) can be characterized by such a set M.

Proof: Write M = {M_1, M_2, ...} with M_1 < M_2 < .... Let
P*_{M_ℓ} = Σ_{i=M_ℓ}^{M_{ℓ+1}−1} d_i
be the production associated with production epoch M_ℓ (ℓ = 1,2,...), and P*_i = 0 (i ∉ M). The corresponding inventories follow from equations (11):
I*_i = Σ_{j=i+1}^{M_{ℓ+1}−1} d_j   (i = M_ℓ, ..., M_{ℓ+1} − 1; ℓ = 1,2,...).
Note that I*_{M_{ℓ+1}−1} = 0 for ℓ = 1,2,.... It is easy to check (since N ⊂ M) that I* satisfies equations (9). Using M ⊂ N' we can also explicitly write the dual solution w*:
w*_i = α^{M_ℓ−1}k_{M_ℓ} + Σ_{j=M_ℓ}^{i−1} α^{j−1}h_j   (i = M_ℓ, ..., M_{ℓ+1} − 1; ℓ = 1,2,...).   (12)
What remains to be shown is that (a) (P*, I*) ∈ 𝒳, and (b) w* and (P*, I*) satisfy transversality. We will show both of these simultaneously by rewriting the objective

function value of w* in such a way that it can easily be seen to be equal to the primal objective function value of (P*, I*):

Σ_{i=1}^∞ d_iw*_i = Σ_{ℓ=1}^∞ Σ_{i=M_ℓ}^{M_{ℓ+1}−1} d_i ( α^{M_ℓ−1}k_{M_ℓ} + Σ_{j=M_ℓ}^{i−1} α^{j−1}h_j )
= Σ_{ℓ=1}^∞ ( α^{M_ℓ−1}k_{M_ℓ} Σ_{i=M_ℓ}^{M_{ℓ+1}−1} d_i + Σ_{i=M_ℓ+1}^{M_{ℓ+1}−1} Σ_{j=M_ℓ}^{i−1} α^{j−1}h_jd_i )
= Σ_{ℓ=1}^∞ ( α^{M_ℓ−1}k_{M_ℓ}P*_{M_ℓ} + Σ_{j=M_ℓ}^{M_{ℓ+1}−1} α^{j−1}h_j Σ_{i=j+1}^{M_{ℓ+1}−1} d_i )
= Σ_{ℓ=1}^∞ ( α^{M_ℓ−1}k_{M_ℓ}P*_{M_ℓ} + Σ_{j=M_ℓ}^{M_{ℓ+1}−1} α^{j−1}h_jI*_j )
= Σ_{i=1}^∞ α^{i−1}(k_iP*_i + h_iI*_i).

So we can conclude that (P*, I*) ∈ 𝒳. Furthermore, lemma 3.5 now says that w* and (P*, I*) satisfy the transversality condition. Thus (P*, I*) is an optimal solution for (P). The second part of the claim follows easily from theorem 3.7, which states that any optimal solution of (P), together with w*, must satisfy the complementary slackness conditions. □

As in the finite dimensional variant of the production planning problem studied in this section, the optimal primal solutions turn out to have the property that production only takes place when inventories are zero, and always takes place for an integer number of periods ahead. Theorem 4.2 allows us to interpret the ith component w*_i of the optimal dual solution as an upper bound on the optimal cost of meeting one more unit of demand in period i. In the case that d_i > 0, it also represents a lower bound on the optimal cost reduction resulting from one less unit of demand in period i.

References

[1] J.M. Borwein. Direct theorems in semi-infinite convex programming. Mathematical Programming, 21:301-318, 1981.

[2] J.M. Borwein. Semi-infinite programming: How special is it? In A.V. Fiacco and K.O. Kortanek, editors, Semi-Infinite Programming and Applications. Springer Verlag, Berlin, Germany, 1983.

[3] A. Charnes, W.W. Cooper, and K.O. Kortanek. Duality in semi-infinite programming and some works of Haar and Carathéodory. Management Science, 9:209-228, 1963.

[4] E.V. Denardo. Dynamic Programming: Models and Applications. Prentice-Hall, Englewood Cliffs, New Jersey, 1982.

[5] R.J. Duffin, R.G. Jeroslow, and L.A. Karlovitz. Duality in semi-infinite programming. In A.V. Fiacco and K.O. Kortanek, editors, Semi-Infinite Programming and Applications. Springer Verlag, Berlin, Germany, 1983.

[6] J. Dugundji. Topology. Allyn and Bacon, Boston, Massachusetts, 1966.

[7] R. Grinold. Infinite horizon programs. Management Science, 18:157-170, 1971.

[8] R. Grinold. Finite horizon approximations of infinite horizon linear programs. Mathematical Programming, 12:1-17, 1977.

[9] R. Grinold. Convex infinite horizon programs. Mathematical Programming, 25:64-82, 1983.

[10] R. Grinold and D.S.P. Hopkins. Computing optimal solutions for infinite horizon mathematical programs with a transient stage. Operations Research, 21:179-187, 1972.

[11] R.C. Grinold and D.S.P. Hopkins. Duality overlap in infinite linear programs. Journal of Mathematical Analysis and Applications, 41:333-335, 1973.

[12] P. Jones, J. Zydiak, and W. Hopp. Stationary dual prices and depreciation. Mathematical Programming, 41:357-366, 1988.

[13] H.E. Romeijn, R.L. Smith, and J.C. Bean. Duality in infinite dimensional linear programming. Mathematical Programming, 53:79-97, 1992.

[14] I.E. Schochetman and R.L. Smith. Infinite horizon optimization. Mathematics of Operations Research, 14:559-574, 1989.

[15] I.E. Schochetman and R.L. Smith. Convergence of best approximations from unbounded sets. Journal of Mathematical Analysis and Applications, 166(1):112-128, May 1992.

Appendix

Lemma A.1 Every countable constraint system having the property that every constraint contains at most a finite number of variables may be transformed into an equivalent lower staircase system of the form (1).

Proof: First transform all inequality constraints into equality constraints by adding slack and surplus variables. Note that this does not destroy the property that every constraint contains at most a finite number of variables. Suppose that the constraints and variables are labelled by the positive integers. The following algorithm accomplishes the desired transformation (where index v denotes the current variable, index c the current constraint, index i the largest index of a variable having nonzero coefficient encountered so far, and index k the number of variable blocks created so far):

Transformation

Step 0. Set v = 1, c = 1, i = 0, k = 0.

Step 1. Find the first constraint in the set {c, c + 1, ...} such that the coefficient of x_v is nonzero. Without loss of generality, assume that this is constraint c.

Step 2. If the last variable having nonzero coefficient in constraint c is x_j with j ≤ i, continue with step 3. Otherwise, set k = k + 1, define the kth variable block to contain variables i + 1, ..., j (and thus n_k = j − i), and set i = j and m_k = 0.

Step 3. Use constraint c to eliminate x_v from constraints {c + 1, c + 2, ...}. Set v = v + 1, c = c + 1, m_k = m_k + 1, and return to step 1.

It is clear that all steps of the algorithm are well-defined. In particular, in step 2 there is a last variable having nonzero coefficient in constraint c, since every constraint contains at most a finite number of variables. Furthermore, since the transformed constraint matrix is lower block triangular, and also upper triangular (with respect to the actual elements of the matrix), the matrix is lower staircase. This can also be seen from the fact that in step 3 the variable x_v is always in the last or one-before-last variable block. Finally, if desired, the equality system can be rewritten as an inequality constraint system of the form (1). □
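For a finite system the elimination above is easy to carry out explicitly. The following simplified sketch (our own illustration, not part of the paper; it records the variable blocks but omits the per-block constraint counts m_k) applies steps 1-3 to a dense equality system A x = b:

```python
import numpy as np

def to_lower_staircase(A, b, tol=1e-12):
    """Finite-dimensional sketch of the transformation in Lemma A.1: permute and
    combine the rows of the equality system A x = b so that, together with the
    returned variable blocks, the coefficient matrix becomes lower staircase."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    m, n = A.shape
    blocks = []                      # (first, last) variable index of each block (0-based)
    c, i = 0, -1                     # current constraint, largest variable index seen so far
    for v in range(n):               # current variable
        if c >= m:
            break
        # Step 1: first constraint in {c, c+1, ...} with a nonzero coefficient on x_v
        rows = np.flatnonzero(np.abs(A[c:, v]) > tol)
        if rows.size == 0:
            continue
        r = c + int(rows[0])
        A[[c, r]] = A[[r, c]]
        b[[c, r]] = b[[r, c]]
        # Step 2: open a new variable block if constraint c reaches beyond variable i
        j = int(np.flatnonzero(np.abs(A[c]) > tol)[-1])
        if j > i:
            blocks.append((i + 1, j))
            i = j
        # Step 3: use constraint c to eliminate x_v from all later constraints
        for r2 in range(c + 1, m):
            if abs(A[r2, v]) > tol:
                factor = A[r2, v] / A[c, v]
                A[r2] -= factor * A[c]
                b[r2] -= factor * b[c]
        c += 1
    return A, b, blocks
```

This is only an illustration of the bookkeeping; the lemma itself concerns countably infinite systems, where the elimination is applied constraint by constraint.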

Lemma 3.2 For all x ∈ X, y ∈ Y,
C(x) ≥ B(y) + limsup_{k→∞} y_{k+1}'A_{k+1,k}x_k.

Proof: We have
C(x) = Σ_{i=1}^∞ c_i'x_i
= lim_{k→∞} Σ_{i=1}^k c_i'x_i
≥ limsup_{k→∞} Σ_{i=1}^k (A_{ii}'y_i + A_{i+1,i}'y_{i+1})'x_i   from (2)
= limsup_{k→∞} Σ_{i=1}^k (x_i'A_{ii}'y_i + x_i'A_{i+1,i}'y_{i+1})
= limsup_{k→∞} ( Σ_{i=1}^k (x_{i-1}'A_{i,i-1}'y_i + x_i'A_{ii}'y_i) + x_k'A_{k+1,k}'y_{k+1} )
= limsup_{k→∞} ( Σ_{i=1}^k (A_{i,i-1}x_{i-1} + A_{ii}x_i)'y_i + x_k'A_{k+1,k}'y_{k+1} )
≥ limsup_{k→∞} ( Σ_{i=1}^k b_i'y_i + x_k'A_{k+1,k}'y_{k+1} )   from (1)
= B(y) + limsup_{k→∞} y_{k+1}'A_{k+1,k}x_k,
which proves the lemma. □

Lemma 3.5 For all x ∈ X, y ∈ Y the following three statements are equivalent:
(i) x and y satisfy the complementary slackness relations, i.e.
(A_{i,i-1}x_{i-1} + A_{ii}x_i − b_i)'y_i = 0 for i = 1,2,...
and
(c_i − A_{ii}'y_i − A_{i+1,i}'y_{i+1})'x_i = 0 for i = 1,2,...;
(ii) C(x) = B(y) + lim_{k→∞} x_k'A_{k+1,k}'y_{k+1};
(iii) C(x) = B(y) + liminf_{k→∞} x_k'A_{k+1,k}'y_{k+1}.

Proof:
(i) ⇒ (ii) Consider the proof of lemma 3.2. If the complementary slackness relations are satisfied, both inequalities are satisfied as equalities. Moreover, all infinite sums converge, implying that 'lim sup' can everywhere be replaced by 'lim', yielding (ii).
(ii) ⇒ (i) Once again consider the proof of lemma 3.2. By (ii), both inequalities have to be satisfied as equalities. The first inequality is satisfied as an equality only if
lim_{k→∞} Σ_{i=1}^k (c_i − A_{ii}'y_i − A_{i+1,i}'y_{i+1})'x_i = 0,

which can only hold if
(c_i − A_{ii}'y_i − A_{i+1,i}'y_{i+1})'x_i = 0 for all i = 1,2,...,
since each of these terms is nonnegative by (2) and the fact that x_i ≥ 0 for all i. We also have that 'lim sup' in the proof of lemma 3.2 may be replaced by 'lim', so that the second inequality is satisfied as an equality only if
lim_{k→∞} Σ_{i=1}^k (A_{i,i-1}x_{i-1} + A_{ii}x_i − b_i)'y_i = 0,
which can only hold if
(A_{i,i-1}x_{i-1} + A_{ii}x_i − b_i)'y_i = 0 for all i = 1,2,...,
i.e. the remaining complementary slackness relations are satisfied.
(ii) ⇒ (iii) Obvious.
(iii) ⇒ (ii) Statement (iii) is equivalent to
liminf_{k→∞} x_k'A_{k+1,k}'y_{k+1} = C(x) − B(y).
By lemma 3.2,
C(x) − B(y) ≥ limsup_{k→∞} x_k'A_{k+1,k}'y_{k+1},
so
liminf_{k→∞} x_k'A_{k+1,k}'y_{k+1} ≥ limsup_{k→∞} x_k'A_{k+1,k}'y_{k+1}
and thus
liminf_{k→∞} x_k'A_{k+1,k}'y_{k+1} = limsup_{k→∞} x_k'A_{k+1,k}'y_{k+1} = lim_{k→∞} x_k'A_{k+1,k}'y_{k+1}. □