Using Sequential Approximations in the L-Shaped and Generalized Programming Algorithms for Stochastic Linear Programs

John R. Birge
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, Michigan 48109

Technical Report No. 83-12

Abstract

Stochastic programs with continuous random variables are usually approximated by programs in which a discrete distribution replaces the continuous one. Algorithms can be applied to these discretized problems, and bounds can be found on the value of the solution of the original problem. To obtain a more precise bound on the solution value, the problem is generally re-solved with a finer discrete approximation. In this paper, we present a method for achieving optimality within any pre-defined error bound that can be applied directly to the L-shaped and generalized programming algorithms for stochastic linear programs.

1. Introduction

A variety of algorithms can be applied to stochastic linear programs with recourse when the random variables are discretely distributed. When these variables are allowed continuous distributions, however, the methods are generally unable to solve the problem efficiently in a direct implementation. The standard procedure, as in Kall [4], is to approximate the continuous random variables by a discrete distribution and to solve the problem iteratively, taking finer and finer partitions of the random variable in the approximation. We give below a method that solves the continuous problem in a single application of each of two algorithms. We consider the problem:

(1)  minimize z = cx + Q(x)
     subject to Ax = b,
                x ≥ 0,

where Q(x) = E_ξ[Q(x, ξ)] and

(2)  Q(x, ξ) = min qy
     subject to Wy = ξ - Tx,
                y ≥ 0.

This problem represents the standard stochastic linear program with recourse in which the random variables are restricted to the right-hand-side coefficients ξ ∈ Ξ.
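As a concrete illustration of (1)-(2), the sketch below evaluates the recourse cost in the simple-recourse special case, where W = [1, -1] lets the optimal second-stage solution be read off in closed form without an LP solver. The one-dimensional data (q_plus, q_minus, T) and the two-point scenario set are illustrative assumptions, not taken from the paper.

```python
def Q_sub(x, xi, q_plus=2.0, q_minus=1.0, T=1.0):
    """Optimal value of subproblem (2) in the simple-recourse case
    W = [1, -1], q = (q_plus, q_minus): shortage and surplus are
    penalized linearly, so the optimal y is available in closed form."""
    shortage = xi - T * x
    return q_plus * max(shortage, 0.0) + q_minus * max(-shortage, 0.0)

def Q_expected(x, scenarios):
    """Q(x) = E[Q(x, xi)] for a discretely distributed xi,
    given as (probability, value) pairs."""
    return sum(p * Q_sub(x, xi) for p, xi in scenarios)

# a two-point discrete approximation of the distribution of xi
scenarios = [(0.5, 0.0), (0.5, 2.0)]
print(Q_expected(1.0, scenarios))  # 0.5*1.0 + 0.5*2.0 = 1.5
```

With a general recourse matrix W, each evaluation of Q_sub would instead require solving the linear program (2).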

The difficulty in finding Q(x) is that the solutions of an infinite number of subproblems (2) may be required in order to calculate the expectation. Several methods have been suggested for approximating Q(x) (viz. [1], [2], [3], [5]). These methods may be incorporated into our modification to solve (1) in a single-pass procedure. We assume that we can obtain successively better approximations of Q(x) by upper and lower bounding; that is, we assume there exist sequences {Q_L^k(x)} and {Q_U^k(x)} such that

Q_L^{k+1}(x) ≥ Q_L^k(x),   Q_U^{k+1}(x) ≤ Q_U^k(x),

lim_{k→∞} Q_L^k(x) = Q(x),   and   lim_{k→∞} Q_U^k(x) = Q(x),

for all x such that Ax = b, x ≥ 0. These approximations may be found by discretizations of Ξ.

2. The L-Shaped Algorithm

We first consider the L-shaped algorithm of Van Slyke and Wets [7]. The kth master problem in this algorithm is:

(3)  Minimize z = cx + θ

     Subject to:
(3.1)    Ax = b,
(3.2)    D_ℓ x ≥ d_ℓ,   ℓ = 1, ..., s,
(3.3)    E_ℓ x + θ ≥ e_ℓ,   ℓ = 1, ..., t,
         x ≥ 0,

where the constraints in (3.2) induce feasibility and the constraints in (3.3) form an outer envelope of Q(x). We adapt the construction of these constraints to the successive approximation.

We let (x^k, θ^k) be optimal in (3). We use x^k as an input to solve (2) for all ξ used in the lower approximation of Q(x). If any subproblem (2) is not feasible, then we add a feasibility constraint D_{s+1} x ≥ d_{s+1}, let s = s + 1, k = k + 1, and re-solve (3). (This process does not require independent solutions of all subproblems (2) [8].)
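Before turning to the optimality cuts, it may help to see how the bounding sequences themselves can be computed. A standard construction (cf. [2], [5]) partitions the support of ξ: Jensen's inequality applied cell-by-cell to the conditional means yields a lower bound, and the Edmundson-Madansky inequality on cell endpoints yields an upper bound, with refinement of the partition tightening both. The sketch below is purely illustrative, using a convex stand-in for Q(x, ·) and ξ uniform on [0, 1].

```python
def jensen_lower(x, cells, Q):
    """Lower bound on E[Q(x, xi)]: Jensen's inequality on each cell of a
    partition of the support; cells = [(probability, conditional mean)]."""
    return sum(p * Q(x, m) for p, m in cells)

def em_upper(x, intervals, Q):
    """Edmundson-Madansky upper bound: on each cell [a, b] with conditional
    mean m, a convex Q(x, .) lies below its chord through the endpoints."""
    total = 0.0
    for p, a, b, m in intervals:
        t = (m - a) / (b - a)
        total += p * ((1.0 - t) * Q(x, a) + t * Q(x, b))
    return total

Q = lambda x, xi: abs(xi - x)        # convex stand-in for Q(x, .)
coarse = [(1.0, 0.5)]                # one cell for xi uniform on [0, 1]
fine = [(0.5, 0.25), (0.5, 0.75)]    # split the cell at 1/2
lo_coarse = jensen_lower(0.5, coarse, Q)        # 0.0
lo_fine = jensen_lower(0.5, fine, Q)            # 0.25: refining tightens it
up = em_upper(0.5, [(1.0, 0.0, 1.0, 0.5)], Q)   # 0.5
```

The monotonicity assumed of {Q_L^k} and {Q_U^k} corresponds here to the fact that splitting a cell never loosens either bound.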

We let the lower approximation found be Q_L^k(x). If θ^k < Q_L^k(x^k), then x^k is not optimal in (1). We let F^k(ξ) be the distribution function used in this lower approximation of Q(x) and let π^k(ξ) be the optimal dual multipliers for subproblem (2). Setting

E_{t+1} = ∫ π^k(ξ) T dF^k(ξ)   and   e_{t+1} = ∫ π^k(ξ) ξ dF^k(ξ),

the constraint E_{t+1} x + θ ≥ e_{t+1} is added to (3.3), t is set to t + 1, k is set to k + 1, and (3) is again solved. The constraints in (3.3) then represent an outer linearization or lower approximation of Q(x).

If θ^k ≥ Q_L^k(x^k), then we solve (2) for any additional values of ξ necessary to find Q_U^k(x^k). Again, if an infeasibility is found, a constraint is added to (3.2) and we repeat the process. After finding Q_U^k(x^k), we check whether θ^k ≥ Q_U^k(x^k). If so, (x^k, θ^k) is an optimal pair, and we are done. If not, we have optimized cx + Q_U^k(x), but we are not sure whether x^k optimizes cx + Q(x). In this case, we refine the approximations Q_U^k(x^k) and Q_L^k(x^k) and update k, until either θ^k ≥ Q_U^k(x^k) or θ^k < Q_L^k(x^k). Since lim_{k→∞} Q_L^k(x) = lim_{k→∞} Q_U^k(x), one of these conditions must be obtained. In the former case, (x^k, θ^k) is optimal, and in the latter case, another constraint is added to (3.3). The algorithm converges because of the convergence of the approximations. We, of course, have bounds for x* optimal in (1) of

(4)  cx^k + θ^k ≤ cx* + Q(x*) ≤ cx^k + Q_U^k(x^k)

at every iteration. The algorithm may be terminated whenever the interval in (4) is sufficiently small.

3. The Generalized Programming Method

A new algorithm for (1) based on generalized linear programming has been proposed by Nazareth and Wets [6]. For this algorithm, it is more convenient to write (1) as

(5)  minimize z = cx + Ψ(χ)
     subject to Ax = b,
                Tx - χ = 0,
                x ≥ 0,

where Ψ(χ) = E_ξ[ψ(χ, ξ)] and

ψ(χ, ξ) = min qy
          subject to Wy = ξ - χ,
                     y ≥ 0.

We assume again that converging approximations Ψ_U^k(χ) and Ψ_L^k(χ) are available. The generalized linear programming algorithm proceeds by noting that χ can be written as a convex combination of extreme points of its domain, which we assume is bounded. In our modification of this algorithm, the restricted master problem of (5) becomes

(6)  minimize z = cx + Σ_{i=1}^k λ_i Ψ_U^k(χ^i)

     subject to
(6.1)    Ax = b,
(6.2)    Tx - Σ_{i=1}^k λ_i χ^i = 0,
(6.3)    Σ_{i=1}^k λ_i = 1,
         λ_i ≥ 0, i = 1, ..., k,   x ≥ 0.
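To make the structure of (6) concrete, consider the degenerate case of scalar x, T, and χ with constraint (6.1) absent: substituting x = Σ λ_i χ^i / T from (6.2) makes the objective linear in λ, so the optimum over the simplex (6.3) is attained at a single column. All data below are illustrative, and the direct search stands in for a proper LP solve of (6).

```python
def restricted_master_1d(c, T, columns):
    """Scalar sketch of restricted master (6): columns hold pairs
    (chi_i, psi_u_i), with psi_u_i the upper approximation Psi_U(chi_i).
    Substituting x = sum(lam_i * chi_i) / T from (6.2), the objective is
    linear in lam, so the best vertex of the simplex is one column.
    Assumes chi_i / T >= 0 so that x >= 0 holds automatically."""
    def col_cost(i):
        chi, psi = columns[i]
        return (c / T) * chi + psi
    i_best = min(range(len(columns)), key=col_cost)
    chi, _ = columns[i_best]
    lam = [1.0 if i == i_best else 0.0 for i in range(len(columns))]
    return chi / T, lam, col_cost(i_best)   # x^k, lambda^k, z^k

xk, lamk, zk = restricted_master_1d(1.0, 1.0, [(2.0, 3.0), (1.0, 3.5)])
print(xk, zk)  # column (1.0, 3.5) wins: 1*1.0 + 3.5 = 4.5
```

With the full constraint set (6.1), the restricted master is of course solved as a linear program, and its dual multipliers drive the subproblem described next.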

Let the optimal solution of (6) be (x^k, λ^k), and let the optimal dual multipliers be (π^k, σ^k, ρ^k), corresponding to (6.1), (6.2), and (6.3), respectively. The subproblem associated with (6) is then

(7)  minimize Ψ(χ) + σ^k χ.

Let χ^k be optimal in (7). If Ψ_U^k(χ^k) + σ^k χ^k < ρ^k, then x^k is not optimal in (5); χ^k and Ψ_U^k(χ^k) are added to (6), k is updated, and (6) is solved again. If Ψ_U^k(χ^k) + σ^k χ^k ≥ ρ^k, then x^k is optimal relative to Ψ_U^k but is not necessarily optimal relative to Ψ. Ψ_L^k(χ^k) is then computed. If Ψ_L^k(χ^k) + σ^k χ^k ≥ ρ^k, then x^k is optimal in (5). If, however, Ψ_L^k(χ^k) + σ^k χ^k < ρ^k, then the approximations Ψ_U^k and Ψ_L^k are refined and (7) is re-solved, as long as we still have Ψ_U^k(χ^k) + σ^k χ^k ≥ ρ^k and Ψ_L^k(χ^k) + σ^k χ^k < ρ^k. This refinement and re-optimization is repeated until either Ψ_U^k(χ^k) + σ^k χ^k < ρ^k or Ψ_L^k(χ^k) + σ^k χ^k ≥ ρ^k; one of these conditions must eventually be met because of the convergence of the approximations to the true distribution. In the former case, we again update (6) and repeat the procedure; in the latter case, the optimality condition for (5) has been reached.

Bounds may also be obtained in this problem. Clearly, for (x*, χ*) optimal,

(8)  cx* + Ψ(χ*) ≤ cx^k + Σ_{i=1}^k λ_i^k Ψ_U^k(χ^i) = z^k.

By duality, we also have

(9)  cx* + Ψ(χ*) ≥ z^k - ρ^k + Ψ_L^k(χ^k) + σ^k χ^k.

(8) and (9) may then be used to obtain sufficiently tight bounds on z*.
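The three-way decision in the procedure above, driven by the upper and lower approximations of Ψ, can be isolated as a small test: a candidate column is surely improving when even its upper approximation prices favorably, surely priced out when even its lower approximation does not, and otherwise the bounds must be refined. The numeric arguments in the example calls are illustrative only.

```python
def pricing_test(psi_u, psi_l, sigma, chi, rho):
    """Pricing test for a candidate tender chi, using the upper (psi_u)
    and lower (psi_l) approximations of Psi(chi) and the duals of (6).
    Returns the action the algorithm takes next."""
    if psi_u + sigma * chi < rho:
        return "add column"   # surely improving, since Psi <= Psi_U
    if psi_l + sigma * chi >= rho:
        return "optimal"      # surely priced out, since Psi >= Psi_L
    return "refine"           # bounds too loose: tighten and retry

print(pricing_test(1.0, 0.5, 0.0, 1.0, 2.0))   # add column
print(pricing_test(3.0, 2.5, 0.0, 1.0, 2.0))   # optimal
print(pricing_test(3.0, 1.0, 0.0, 1.0, 2.0))   # refine
```

Because Ψ_L^k and Ψ_U^k converge to Ψ, the "refine" branch can only be taken finitely often before one of the conclusive branches fires.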

4. Conclusion

Procedures incorporated into two algorithms for stochastic linear programming have been presented that solve the stochastic linear program with continuous random variables in a single pass. The procedures take advantage of the outer linearization of the L-shaped method and the inner linearization of the generalized programming method. They extend previous results on approximations for stochastic linear programs that converge monotonically to the true value.

References

[1] J. Birge and R. Wets, "Approximations and Error Bounds in Stochastic Programming", Proceedings of the Symposium on Inequalities in Statistics and Probability, 1983, to appear.

[2] C. Huang, I. Vertinsky and W. Ziemba, "Bounds on the Expectation of a Convex Function of a Random Variable: With Application to Stochastic Programming", Operations Research 25 (1977), pp. 315-325.

[3] P. Kall, "Approximations to Stochastic Programs with Complete Fixed Recourse", Numerische Mathematik 22 (1974), pp. 333-339.

[4] P. Kall and D. Stoyan, "Solving Stochastic Programming Problems with Recourse Including Error Bounds", Math. Operationsforsch. Statist. Ser. Optimization (1982).

[5] A. Madansky, "Inequalities for Stochastic Linear Programming Problems", Management Science 6 (1960), pp. 197-204.

[6] L. Nazareth and R. Wets, "Algorithms for Stochastic Programs: The Case of Nonstochastic Tenders", IIASA Working Paper WP-83-5, Laxenburg, Austria, 1983.

[7] R. Van Slyke and R. Wets, "L-Shaped Linear Programs with Applications to Optimal Control and Stochastic Programming", SIAM Journal on Applied Mathematics 17 (1969), pp. 638-663.

[8] R. Wets, "Stochastic Programming: Solution Techniques and Approximation Schemes", in Mathematical Programming: The State of the Art 1982, Springer-Verlag, Berlin, 1983.
