COMPUTING KARMARKAR'S PROJECTIONS QUICKLY USING MATRIX FACTORIZATION

John R. Birge
Hengyong Tang

Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, MI 48109-2117

Technical Report 93-24

August 1993

Computing Karmarkar's Projections Quickly Using Matrix Factorization

John R. Birge
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, MI 48109, U.S.A.

Hengyong Tang
Mathematics Department
Liaoning University
Shenyang, Liaoning 110036, P.R. China

August 18, 1993

Abstract

In this paper we compute Karmarkar's projections quickly using the Moore-Penrose g-inverse and matrix factorization, so that the work of computing (A^T D^2 A)^{-1} is decreased.

Keywords: Linear Programming, Karmarkar's Algorithm, Karmarkar's Projection, Moore-Penrose G-inverse, Matrix Factorization.

1 Introduction

In 1984, Karmarkar [1] proposed a new polynomial-time algorithm for linear programming based on repeated projective transformations. The algorithm generates a sequence of interior points converging to an optimal solution. His work has sparked tremendous interest and inspired others to modify his algorithm or to investigate similar methods for linear programming, and many variants of Karmarkar's algorithm have been proposed. In every variant, the major work is the repeated computation of (A^T D^2 A)^{-1}, or the solution of the linear system A^T D^2 A y = b, where D changes at each iteration. Decreasing the work of Karmarkar's algorithm therefore requires computing (A^T D^2 A)^{-1} quickly.

Many papers address this problem using rank-one updates, approximate scaling matrices, updated Cholesky factorizations, and so on [1] [2] [3] [4]; [5] and [6] give efficient solutions for block-angular linear programs, especially two-stage stochastic linear programs. In this paper we decrease the work of computing (A^T D^2 A)^{-1} using the Moore-Penrose g-inverse and matrix factorization. In Section 2 we review some properties of the Moore-Penrose g-inverse. Karmarkar's algorithm is formulated in Section 3. In Section 4 we give a method for computing (A^T D^2 A)^{-1} using the Moore-Penrose g-inverse and matrix factorization. The complexity of the algorithm is analysed in Section 5.

2 Moore-Penrose g-inverse

In this section we review some properties of the Moore-Penrose generalized inverse (g-inverse) [7] [8] that are used in the following sections. We consider a real m x n matrix A and let A^+ denote its Moore-Penrose g-inverse.

Property 1
a) A^+ A A^T = A^T A A^+ = A^T.
b) A A^T (A^T)^+ = (A^T)^+ A^T A = A.
c) (A^T A)^+ = A^+ (A^T)^+.
d) (A A^T)^+ = (A^T)^+ A^+.

Property 2
a) If rank(A) = n, then A^+ = (A^T A)^{-1} A^T.
b) If rank(A) = m, then A^+ = A^T (A A^T)^{-1}.

Property 3
Let A = BC, where A, B and C are m x n, m x r and r x n matrices respectively, and rank(A) = rank(B) = rank(C) = r. Then A^+ = C^+ B^+.
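As a small numerical illustration (added here, not part of the original paper; it assumes NumPy and randomly generated data), the following sketch checks Property 2 and Property 3 for a full-rank factorization A = BC:

import numpy as np

# Sketch only: verify Property 2 and Property 3 on random full-rank factors.
rng = np.random.default_rng(0)
m, n, r = 6, 4, 3

B = rng.standard_normal((m, r))          # full column rank (m x r), almost surely
C = rng.standard_normal((r, n))          # full row rank (r x n), almost surely
A = B @ C                                # rank(A) = r

# Property 2: B^+ = (B^T B)^{-1} B^T and C^+ = C^T (C C^T)^{-1}.
B_plus = np.linalg.inv(B.T @ B) @ B.T
C_plus = C.T @ np.linalg.inv(C @ C.T)

# Property 3: A^+ = C^+ B^+ for the full-rank factorization A = BC.
print(np.allclose(np.linalg.pinv(A), C_plus @ B_plus))   # expected: True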

3 Karmarkar's algorithm

For the purpose of discussion, we consider, without loss of generality, the linear programming problem

    max  c^T x                                                   (1)
    s.t. Ax <= b

where c and x are n-vectors, b is an m-vector, and A is a full rank m x n matrix with m > n and c != 0. Problem (1) has an interior feasible solution x^0. We focus on a variant of Karmarkar's algorithm generally called the dual affine scaling method [2].

Algorithm 1 (A, b, c, x^0, stopping criterion, 0 < γ < 1)
1. k = 0.
2. Stop if the optimality criterion is satisfied.
3. v^k = b - A x^k.
4. D_k = diag(1/v_1^k, ..., 1/v_m^k).
5. h_x = (A^T (D_k)^2 A)^{-1} c.
6. h_v = -A h_x.
7. α = γ · min{ -v_i^k / (h_v)_i : (h_v)_i < 0, i = 1, ..., m }.
8. x^{k+1} = x^k + α h_x.
9. k = k + 1, go to 2.

The major computational work of the algorithm is computing (A^T (D_k)^2 A)^{-1}. Since D_k is an m x m full rank diagonal matrix, rank(D_k A) = n, and

    (D_k A)^+ = ((D_k A)^T D_k A)^{-1} (D_k A)^T,

so that, using the symmetry of A^T (D_k)^2 A,

    (D_k A)^+ ((D_k A)^+)^T = (A^T (D_k)^2 A)^{-1} (D_k A)^T D_k A (A^T (D_k)^2 A)^{-1} = (A^T (D_k)^2 A)^{-1}.

Thus step 5 of Algorithm 1 can be written as h_x = (D_k A)^+ ((D_k A)^+)^T c. In the following section we discuss how to compute (D_k A)^+ quickly. For simplicity we write D_k as D, and in general we let rank(A) = r <= n.
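To make the steps of Algorithm 1 concrete, here is a minimal NumPy sketch of one iteration (an illustration added here, not code from the paper); it forms A^T (D_k)^2 A explicitly and solves with a dense factorization, whereas Section 4 develops the cheaper route, and the function name, default γ, and boundedness assumption are ours:

import numpy as np

def dual_affine_scaling_step(A, b, c, x, gamma=0.95):
    """One iteration of Algorithm 1 (steps 3-8); assumes x is strictly interior
    and that some component of h_v is negative (bounded step)."""
    v = b - A @ x                                # step 3: positive slacks
    D = np.diag(1.0 / v)                         # step 4: D_k = diag(1/v_i^k)
    M = A.T @ D @ D @ A                          # A^T (D_k)^2 A
    h_x = np.linalg.solve(M, c)                  # step 5: h_x = (A^T (D_k)^2 A)^{-1} c
    h_v = -A @ h_x                               # step 6
    neg = h_v < 0
    alpha = gamma * np.min(-v[neg] / h_v[neg])   # step 7: damped ratio test
    return x + alpha * h_x                       # step 8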

4 Matrix factorization

Proposition 1 Let D be a full rank diagonal matrix and let A be an m x n matrix with A = BC, where B is a full rank m x r matrix and C is a full rank r x n matrix. Then

    (DA)^+ = C^+ (DB)^+ = C^T (C C^T)^{-1} (B^T D^2 B)^{-1} B^T D.

Proof. Since D is a full rank square matrix, rank(DA) = rank(A) = r and rank(DB) = rank(B) = r. Applying Property 3 to DA = (DB)C and then Property 2 to DB and C gives the required result. This completes the proof of the proposition.

Using Gaussian elimination we factorize A into A = LU, where L is an m x r unit lower trapezoidal matrix and U is an r x n full rank upper trapezoidal matrix. Thus

    (DA)^+ = U^T (U U^T)^{-1} (L^T D^2 L)^{-1} L^T D,

and (U U^T)^{-1} does not change at each iteration. If r = n, then U is a full rank upper triangular matrix and we have

    (DA)^+ = U^{-1} (L^T D^2 L)^{-1} L^T D.

So the major work in computing (DA)^+ is the computation of (L^T D^2 L)^{-1}. We partition L into

    L = ( L_1 )
        ( L_2 )

where L_1 is an r x r unit lower triangular matrix and L_2 is an (m - r) x r matrix. Correspondingly we partition D into

    D = ( D_1   0  )
        (  0   D_2 )

where D_1 is an r x r full rank diagonal matrix and D_2 is an (m - r) x (m - r) diagonal matrix. Thus

    L^T D^2 L = (L_1^T  L_2^T) ( D_1^2   0    ) ( L_1 )
                               (  0     D_2^2 ) ( L_2 )
              = L_1^T D_1^2 L_1 + L_2^T D_2^2 L_2
              = L_1^T D_1^2 L_1 + \sum_{i=1}^{m-r} d_{r+i}^2 l_i l_i^T            (2)

where D_2 = diag{d_{r+1}, ..., d_m} and L_2^T = (l_1, ..., l_{m-r}).

L_1^T D_1^2 L_1 is already in Cholesky factorization form, so a Cholesky factorization of L^T D^2 L can be computed from (2) by a series of rank-one updates: only m - r rank-one updates are needed, whereas m rank-one updates have to be done to compute a Cholesky factorization of A^T D^2 A in the general Karmarkar's algorithm [3]. So the computational work is decreased. As in [3], each rank-one update can be performed by the method of Fletcher and Powell [9].
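The following NumPy sketch (added for illustration, not code from the paper) checks the formula of Proposition 1 with B = L, C = U and the splitting (2) on assumed data; L and U are generated directly rather than by Gaussian elimination, and a practical implementation would instead apply Fletcher-Powell rank-one updates to a Cholesky factor as described above:

import numpy as np

rng = np.random.default_rng(1)
m, r = 7, 4                                    # illustrative sizes; here n = r

# Assumed factors: L unit lower trapezoidal (m x r), U nonsingular upper triangular (r x r).
L = np.tril(rng.standard_normal((m, r)), -1)
L[np.arange(r), np.arange(r)] = 1.0
U = np.triu(rng.standard_normal((r, r))) + 5.0 * np.eye(r)
A = L @ U
d = rng.uniform(0.5, 2.0, size=m)              # positive diagonal of D
D = np.diag(d)

# Proposition 1 with B = L, C = U: (DA)^+ = U^T (U U^T)^{-1} (L^T D^2 L)^{-1} L^T D.
M = L.T @ D @ D @ L
DA_plus = U.T @ np.linalg.inv(U @ U.T) @ np.linalg.inv(M) @ L.T @ D
print(np.allclose(DA_plus, np.linalg.pinv(D @ A)))         # expected: True

# Splitting (2): L^T D^2 L = L_1^T D_1^2 L_1 + sum of m - r rank-one terms.
L1, L2, d1, d2 = L[:r], L[r:], d[:r], d[r:]
M2 = L1.T @ np.diag(d1**2) @ L1
for i in range(m - r):
    M2 += d2[i]**2 * np.outer(L2[i], L2[i])
print(np.allclose(M, M2))                                   # expected: True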

5 Complexity of the algorithm

Proposition 2 If the computation of (A^T (D_k)^2 A)^{-1} is done by LU factorization, then Algorithm 1 solves Problem (1) in no more than mn^2 + O(n^{2.5}(m - n)R) arithmetic operations, where R is a measure of the problem's size.

Proof. The major work of the algorithm is as follows. The LU factorization, performed once, needs mn^2 - (m + n)n^2/2 + n^3/3 = O(mn^2) arithmetic operations [11]. Algorithm 1 finds an optimal solution to Problem (1) in O(n^{1/2} R) iterations [12] [13], and the m - n rank-one updates in each iteration need O(n^2(m - n)) arithmetic operations, so the updates need O(n^{2.5}(m - n)R) arithmetic operations in total. Hence Algorithm 1 solves Problem (1) in at most mn^2 + O(n^{2.5}(m - n)R) arithmetic operations. This completes the proof of the proposition.

In general R is very large, so O(m^3 R) >> mn^2 + O(n^{2.5}(m - n)R), especially when m - n is small. Using approximations of α^k and v^k, the complexity of Algorithm 1 can be reduced [12] [13].
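Purely to illustrate the comparison above (the dimensions and R below are assumed sample values, not figures from the paper), the bounds can be evaluated numerically:

# Illustrative evaluation of the operation-count bounds in Section 5.
m, n, R = 1000, 990, 100                  # assumed sample values
lu_cost = m * n**2                        # one-time LU factorization, O(mn^2)
update_cost = n**2.5 * (m - n) * R        # rank-one updates over all iterations
full_cost = m**3 * R                      # bound without the factorization
print(f"factored: {lu_cost + update_cost:.2e}   unfactored: {full_cost:.2e}")
# the factored bound is smaller, and the gap widens as m - n shrinks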

References

[1] N. Karmarkar, "A new polynomial time algorithm for linear programming," Combinatorica, 4, 373-395, 1984.

[2] I. Adler, N. Karmarkar, M. G. C. Resende, and G. Veiga, "An implementation of Karmarkar's algorithm for linear programming," Mathematical Programming, 44, 297-355, 1989.

[3] D. F. Shanno, "Computing Karmarkar projections quickly," Mathematical Programming, 41, 61-71, 1988.

[4] I. Adler, N. Karmarkar, M. G. C. Resende, and G. Veiga, "Data structures and programming techniques for the implementation of Karmarkar's algorithm," ORSA Journal on Computing, 1(2), 1989.

[5] J. R. Birge and L. Qi, "Computing block-angular Karmarkar projections with application to stochastic programming," Management Science, 34(12), 1472-1479, 1988.

[6] J. R. Birge and D. F. Holmes, "Efficient solution of two-stage stochastic linear programs using interior point methods," Computational Optimization and Applications, 1, 245-276, 1992.

[7] Xuchu He, Basic Theory and Algorithms of Generalized Inverses (in Chinese), Shanghai Science and Technology Press, Shanghai, 1985.

[8] C. R. Rao and S. K. Mitra, Generalized Inverse of Matrices and Its Applications, Wiley, New York, 1971.

[9] R. Fletcher and M. J. D. Powell, "On the modification of LDL^T factorizations," Mathematics of Computation, 28, 1067-1087, 1974.

[10] Y. Ye and M. Kojima, "Recovering optimal dual solutions in Karmarkar's polynomial algorithm for linear programming," Mathematical Programming, 39, 305-317, 1987.

[11] G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Maryland, 1983.

[12] R. D. C. Monteiro and I. Adler, "Interior path following primal-dual algorithms. Part I: Linear programming," Mathematical Programming, 44, 27-41, 1989.

[13] R. D. C. Monteiro and I. Adler, "Interior path following primal-dual algorithms. Part II: Convex quadratic programming," Mathematical Programming, 44, 43-66, 1989.