COMPUTING KARMARKAR'S PROJECTIONS IN STOCHASTIC LINEAR PROGRAMMING

John R. Birge
Hengyong Tang

Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, MI 48109-2117

Technical Report 93-23
August 1993

Computing Karmarkar's Projections in Stochastic Linear Programming

John R. Birge
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, MI 48109, U.S.A.

Hengyong Tang
Mathematics Department
Liaoning University
Shenyang, Liaoning 110036, P.R. China

August 18, 1993

Abstract

In this paper we compute Karmarkar's projections in stochastic linear programming quickly, using the Moore-Penrose g-inverse and matrix factorization, so that the work of computing (A D^2 A^T)^{-1} is reduced.

Keywords: Stochastic Linear Programming, Two-Stage Problem with Recourse, Karmarkar's Algorithm, Karmarkar's Projection, Moore-Penrose G-inverse, Matrix Factorization.

1 Introduction

Since Karmarkar proposed a new polynomial-time algorithm for linear programming in 1984 [1], researchers have applied it to special large-scale linear programs. One such class is block-angular linear programs, in particular two-stage linear programs with recourse [2] [3] [4] [5] [6] [7]. In all variations of Karmarkar's algorithm, the major work is the repeated computation of (A D^2 A^T)^{-1}. Karmarkar-type algorithms applied to stochastic linear programming either reduce the number of dense columns in the coefficient matrix or separate the dense columns explicitly from the other (nondense) columns to decrease the computational work; [2] discusses these approaches in detail. In this paper we compute (A D^2 A^T)^{-1} using the Moore-Penrose g-inverse and QR factorization, taking advantage of the sparsity of A to decrease the computational work.
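The per-iteration bottleneck named above can be made concrete. The following is a minimal dense numpy sketch (random data, with sizes chosen only for illustration) of forming A D^2 A^T and solving for the projection vector; it ignores all problem structure:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 12
A = rng.standard_normal((m, n))        # full row rank (almost surely)
d = rng.uniform(0.5, 2.0, size=n)      # positive diagonal of D
b = rng.standard_normal(m)

# Form M = A D^2 A^T and solve M h_y = b; solving the system is
# preferable to forming the inverse (A D^2 A^T)^{-1} explicitly.
M = (A * d**2) @ A.T                   # A D^2 A^T without building D
h_y = np.linalg.solve(M, b)

assert np.allclose(M @ h_y, b)
```

Each interior point iteration repeats this solve with a new D, which is what motivates exploiting the structure of A in the sections below.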

In Section 2 we review the structure of the stochastic linear programs to be discussed. In Section 3 we review some properties of the Moore-Penrose g-inverse and QR factorization. Karmarkar's algorithm is formulated in Section 4. In Section 5 we give a numerical method for computing (AD)^+ in order to compute (A D^2 A^T)^{-1}. The complexity of the algorithm is analyzed in Section 6.

2 Stochastic linear programming

We consider the two-stage stochastic linear program with recourse defined over a discrete probability space [2]:

    min  c_0^T x_0 + Q(x_0)
    s.t. A_0 x_0 = b_0                                        (1)
         x_0 >= 0,

where

    Q(x_0) = sum_{i=1}^N p_i Q(x_0, xi(w_i)),

    Q(x_0, xi(w_i)) = min  d_i^T x_i
                      s.t. W_i x_i = b_i - T_i x_0            (2)
                           x_i >= 0.

Here xi(w) is a random vector defined on the discrete probability space (Omega, F, P), and p_i is the probability of w_i. x_0 is the n_0-dimensional decision vector of the first stage, and x_i is the n_i-dimensional decision vector of the second stage. A_0, b_0, c_0, W_i, T_i, b_i and d_i are matrices of size m_0 x n_0, m_0 x 1, n_0 x 1, m_i x n_i, m_i x n_0, m_i x 1 and n_i x 1, respectively, with m_i <= n_i, i = 0, 1, ..., N. N is the number of scenarios. For purposes of discussion, we assume A_0 and W_i have full row rank.

The deterministic equivalent formulation of problems (1)-(2) is

    min  c_0^T x_0 + sum_{i=1}^N c_i^T x_i
    s.t. A_0 x_0 = b_0
         T_i x_0 + W_i x_i = b_i,   i = 1, ..., N             (3)
         x_i >= 0,                  i = 0, ..., N,

where c_i = p_i d_i. Let

    A = [ A_0                  ]
        [ T_1  W_1             ]
        [ ...        ...       ]
        [ T_N             W_N  ]

with b = (b_0^T, b_1^T, ..., b_N^T)^T and c = (c_0^T, c_1^T, ..., c_N^T)^T. Problem (3) can be written as

    min  c^T x
    s.t. A x = b                                              (4)
         x >= 0,

where A, b and c are matrices of size m x n, m x 1 and n x 1, respectively, with

    m = sum_{i=0}^N m_i,    n = sum_{i=0}^N n_i.

Its dual problem is

    max  b^T y
    s.t. A^T y <= c,                                          (5)

where y is the m-dimensional dual variable vector.

3 Moore-Penrose g-inverse

In this section we review some properties of the Moore-Penrose generalized inverse (g-inverse) [8] [9] which are useful in the following sections. We consider a real m x n matrix A and let A^+ denote its Moore-Penrose g-inverse.

Character 1
a) A^+ A A^T = A^T A A^+ = A^T.
b) A A^T (A^T)^+ = (A^T)^+ A^T A = A.
c) (A^T A)^+ = A^+ (A^T)^+.
d) (A A^T)^+ = (A^T)^+ A^+.

Character 2
a) If rank(A) = n, then A^+ = (A^T A)^{-1} A^T.
b) If rank(A) = m, then A^+ = A^T (A A^T)^{-1}.
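Characters 1 and 2 can be checked numerically. The sketch below verifies a) and d) of Character 1 and b) of Character 2 for a random full-row-rank matrix, using numpy's pinv for A^+ (the matrix and its sizes are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))            # rank(A) = m = 3 almost surely
Ap = np.linalg.pinv(A)                     # Moore-Penrose g-inverse A^+

# Character 1 a): A^+ A A^T = A^T A A^+ = A^T
assert np.allclose(Ap @ A @ A.T, A.T)
assert np.allclose(A.T @ A @ Ap, A.T)

# Character 1 d): (A A^T)^+ = (A^T)^+ A^+
assert np.allclose(np.linalg.pinv(A @ A.T), np.linalg.pinv(A.T) @ Ap)

# Character 2 b): rank(A) = m implies A^+ = A^T (A A^T)^{-1}
assert np.allclose(Ap, A.T @ np.linalg.inv(A @ A.T))
```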

Character 3 Let A = BC, where A, B and C are m x n, m x r and r x n matrices, respectively, and rank(A) = rank(B) = rank(C) = r. Then A^+ = C^+ B^+.

Let A be an m x n matrix with rank(A) = r <= n <= m. Using Householder transformations, A can be factorized as

    A = Q ( U )  = Q_r U,                                     (6)
          ( 0 )

where Q is an m x m orthogonal matrix, U is an r x n upper trapezoidal matrix with rank(U) = r, and Q_r is a matrix composed of the first r columns of Q. If r = n, then U = R is an upper triangular matrix and (6) is written as

    A = Q ( R )  = Q_n R.                                     (7)
          ( 0 )

If rank(A) = r <= m <= n, equalities (6) and (7) become

    A = ( U  0 ) Q = U Q_r                                    (8)

and

    A = ( R  0 ) Q = R Q_m,                                   (9)

where U and R are lower trapezoidal and lower triangular matrices, respectively, and Q_r is a matrix composed of the first r rows of Q.

4 Karmarkar's algorithm

We focus on a variation of Karmarkar's algorithm generally called the dual affine scaling method [11] [2]. This algorithm requires that the dual problem (5) have an interior feasible solution y^0.

Algorithm 1 (A, b, c, y^0, stopping criterion, 0 < gamma < 1)
1. k = 0.
2. Stop if the optimality criterion is satisfied.
3. v^k = c - A^T y^k.
4. D_k = diag{ 1/v_1^k, ..., 1/v_n^k }.
5. h_y = (A (D_k)^2 A^T)^{-1} b.

6. h_v = -A^T h_y.
7. alpha = gamma * min{ -v_i^k / (h_v)_i : (h_v)_i < 0, i = 1, ..., n }.
8. y^{k+1} = y^k + alpha h_y.
9. x^{k+1} = -(D_k)^2 h_v.
10. k = k + 1; go to 2.

The major computational work of the algorithm is computing (A (D_k)^2 A^T)^{-1}. Since D_k is an n x n full-rank diagonal matrix, rank(A D_k) = m, and by Character 2

    (A D_k)^+ = (A D_k)^T ( (A D_k)(A D_k)^T )^{-1},

so that

    ((A D_k)^+)^T (A D_k)^+ = ( (A D_k)(A D_k)^T )^{-1} (A D_k)(A D_k)^T ( (A D_k)(A D_k)^T )^{-1}
                            = (A (D_k)^2 A^T)^{-1}.

Thus step 5 of Algorithm 1 can be written as

    h_y = ((A D_k)^+)^T (A D_k)^+ b.

In the following section we discuss computing (A D_k)^+ quickly. For simplicity, we write D_k as D.

5 Computing (AD)^+

Partition A as

    A = [ A_0                  ]
        [ T_1  W_1             ]
        [ ...        ...       ]
        [ T_N             W_N  ]

and, correspondingly, partition D into

    D = diag( D_0, D_1, ..., D_N ).

Thus

    AD = [ A_0 D_0                          ]
         [ T_1 D_0  W_1 D_1                 ]
         [   ...              ...           ]
         [ T_N D_0                W_N D_N   ],

where A_0 D_0 and W_i D_i are full row rank matrices.
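The identity above, together with the staircase structure of AD, can be illustrated numerically. This sketch builds a small random instance (block sizes and data are made up for illustration), computes h_y through (AD)^+, and checks it against a direct solve with A D^2 A^T; numpy's dense pinv stands in for the structured computation developed next:

```python
import numpy as np

rng = np.random.default_rng(2)
N, m0, n0 = 3, 2, 4                     # scenarios and (equal) block sizes

# Staircase matrix A = [A_0; T_1 W_1; ...; T_N W_N].
A0 = rng.standard_normal((m0, n0))
Ts = [rng.standard_normal((m0, n0)) for _ in range(N)]
Ws = [rng.standard_normal((m0, n0)) for _ in range(N)]
m, n = (N + 1) * m0, (N + 1) * n0
A = np.zeros((m, n))
A[:m0, :n0] = A0
for i in range(1, N + 1):
    A[i * m0:(i + 1) * m0, :n0] = Ts[i - 1]
    A[i * m0:(i + 1) * m0, i * n0:(i + 1) * n0] = Ws[i - 1]

b = rng.standard_normal(m)
v = rng.uniform(0.5, 2.0, size=n)       # interior dual slack v > 0
D = np.diag(1.0 / v)                    # D = diag(1/v_1, ..., 1/v_n)

# h_y via the g-inverse identity h_y = ((AD)^+)^T (AD)^+ b ...
ADp = np.linalg.pinv(A @ D)
h_y = ADp.T @ (ADp @ b)

# ... agrees with the direct normal-equations solve.
h_direct = np.linalg.solve(A @ D @ D @ A.T, b)
assert np.allclose(h_y, h_direct)
```

The dense pinv here costs as much as the direct solve; Algorithm 2 below obtains (AD)^+ from a block QR factorization instead, which is where the savings arise.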

Algorithm 2 (QR factorization)

1. Compute the QR factorization of A_0 D_0,

       A_0 D_0 = ( R_0  0 ) Q_0.

   Thus

       AD = [ R_0        0                                ]
            [ T_1^{0,1}  T_1^{0,2}  W_1 D_1               ] Q̄_0,
            [    ...        ...               ...         ]
            [ T_N^{0,1}  T_N^{0,2}                W_N D_N ]

   where (T_i^{0,1}, T_i^{0,2}) = T_i D_0 Q_0^T and

       Q̄_0 = diag( Q_0, I, ..., I ).

   Set k = 1.

2. Compute the QR factorization of (T_k^{k-1,2}, W_k D_k),

       (T_k^{k-1,2}, W_k D_k) = ( R_k  0 ) Q_k.

   Thus

       AD = [ R_0                                                                   ]
            [ T_1^{0,1}      R_1                                                    ]
            [    ...              ...                                               ]
            [ T_k^{0,1}      ...      R_k                                           ]
            [ T_{k+1}^{0,1}  ...      T_{k+1}^{k,1}  T_{k+1}^{k,2}  W_{k+1} D_{k+1} ] Q̄_k ... Q̄_1 Q̄_0,
            [    ...                                                       ...      ]
            [ T_N^{0,1}      ...      T_N^{k,1}      T_N^{k,2}             W_N D_N  ]

   where (T_i^{k,1}, T_i^{k,2}) = (T_i^{k-1,2}, 0) Q_k^T for i > k and

       Q̄_k = diag( I, Q_k, I ),

   with identity blocks of the appropriate sizes.

3. If k = N, terminate; otherwise set k = k + 1 and go to 2.

The result of the algorithm is

    AD = ( R  0 ) Q = R Q^m,                                  (10)

where

    R = [ R_0                                    ]
        [ T_1^{0,1}  R_1                         ]
        [ T_2^{0,1}  T_2^{1,1}  R_2              ]
        [    ...                     ...         ]
        [ T_N^{0,1}  T_N^{1,1}   ...       R_N   ]

is a full-rank lower triangular square matrix,

    Q = Q̄_N ... Q̄_1 Q̄_0

is an orthogonal matrix, Q^m is a matrix composed of the first m rows of Q, and Q̃_{s+1} below denotes a matrix composed of the first several rows of Q_{s+1}. From (10) we can compute (AD)^+.

Theorem 1

    (AD)^+ = (Q^m)^T ( Q^m (Q^m)^T )^{-1} R^{-1}.

proof. Since rank(AD) = rank(R) = rank(Q^m) = m, Character 3 gives

    (AD)^+ = (Q^m)^+ R^+ = (Q^m)^T ( Q^m (Q^m)^T )^{-1} R^T ( R R^T )^{-1} = (Q^m)^T ( Q^m (Q^m)^T )^{-1} R^{-1}.

Since the Q_i (i = 0, 1, ..., s) are orthogonal matrices, we have

    ( Q^m (Q^m)^T )^{-1} = diag( I, ..., I, ( Q̃_{s+1} Q̃_{s+1}^T )^{-1} ).

This completes the proof of the theorem.

Theorem 1 shows that, to compute (AD)^+, we only have to carry out the QR factorization of AD and compute (Q̃_{s+1} Q̃_{s+1}^T)^{-1} and R^{-1}. Q̃_{s+1} Q̃_{s+1}^T is a very small matrix and R is a full-rank lower triangular square matrix, so the major work in computing (AD)^+ is the QR factorization of AD. Since this factorization is carried out block by block, the computational work is greatly decreased.

6 Arithmetic complexity

For the purpose of the discussion we suppose that m_0 = m_1 = ... = m_N and n_0 = n_1 = ... = n_N.

Theorem 2 If (A D_k)^+ is computed using Algorithm 2, then the arithmetic complexity of computing Karmarkar's projections is O(N m_0^3 + N^2 m_0^2 (n_0 - m_0) + N^2 m_0^3).

proof. The major operations in computing (A (D_k)^2 A^T)^{-1} are the following. At stage i of Algorithm 2 the block (T_i^{i-1,2}, W_i D_i) to be factorized has size m_0 x (n_0 + i(n_0 - m_0)), so its QR factorization needs about

    2 m_0^2 ( n_0 + i (n_0 - m_0) - m_0 / 3 )

arithmetic operations [10]. Summing over i = 1, ..., N, the QR factorization of AD needs about

    2 N m_0^2 ( n_0 - m_0 / 3 ) + N (N + 1) m_0^2 ( n_0 - m_0 )

arithmetic operations. Computing R^{-1} needs O((N + 1)^2 m_0^3) arithmetic operations. Thus the arithmetic complexity of computing (A (D_k)^2 A^T)^{-1} is O(N m_0^3 + N^2 m_0^2 (n_0 - m_0) + N^2 m_0^3). This completes the proof of the theorem.

Algorithm 1 finds an optimal solution to problems (4) and (5) in O(sqrt(m) L) iterations [12] [13], where L is a measure of the problem's size. So the arithmetic complexity of the algorithm in which Algorithm 2 is used to compute (A (D_k)^2 A^T)^{-1} is

    O( sqrt(m) ( N m_0^3 + N^2 m_0^2 (n_0 - m_0) + N^2 m_0^3 ) L ),

whereas the arithmetic complexity of the general Karmarkar algorithm is O(m^{3.5} L) = O(sqrt(m) N^3 m_0^3 L).

References

[1] N. Karmarkar, "A new polynomial-time algorithm for linear programming," Combinatorica, 4, 373-395, 1984.

[2] J. R. Birge and D. F. Holmes, "Efficient solution of two-stage stochastic linear programs using interior point methods," Computational Optimization and Applications, 1, 245-276, 1992.

[3] J. R. Birge and L. Qi, "Computing block-angular Karmarkar projections with applications to stochastic programming," Management Science, 34:12, 1472-1479, 1988.

[4] J. Arantes and J. Birge, "Matrix structure and interior point methods in stochastic programming," presentation at the Fifth International Conference on Stochastic Programming, Ann Arbor, MI, 1989.

[5] J. R. Birge, R. M. Freund and R. J. Vanderbei, "Prior reduced fill-in in the solution of equations in interior point algorithms," Technical Report, Sloan School of Management, M.I.T., Cambridge, MA, 1990.

[6] I. Lustig, R. Marsten and D. Shanno, "Computational experience with a primal-dual interior point method for linear programming," Linear Algebra and its Applications, 152, 191-222, 1991.

[7] I. Lustig, J. Mulvey and T. Carpenter, "Formulating stochastic programs for interior point methods," Operations Research, 39:5, 757-770, 1991.

[8] Xuchu He, Basic Theory and Algorithms of the Generalized Inverse (in Chinese), Shanghai Science and Technology Press, Shanghai, 1985.

[9] C. R. Rao and S. K. Mitra, Generalized Inverse of Matrices and Its Applications, Wiley, New York, 1971.

[10] G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, MD, 1983.

[11] I. Adler, N. Karmarkar, M. G. C. Resende and G. Veiga, "An implementation of Karmarkar's algorithm for linear programming," Mathematical Programming, 44, 297-355, 1989.

[12] R. D. C. Monteiro and I. Adler, "Interior path following primal-dual algorithms. Part I: Linear programming," Mathematical Programming, 44, 27-41, 1989.

[13] R. D. C. Monteiro and I. Adler, "Interior path following primal-dual algorithms. Part II: Convex quadratic programming," Mathematical Programming, 44, 43-66, 1989.