AN INFEASIBLE START PREDICTOR CORRECTOR METHOD FOR SEMI-DEFINITE LINEAR PROGRAMMING

Chih-Jen Lin and Romesh Saigal
Department of Industrial & Operations Engineering
The University of Michigan
Ann Arbor, Michigan 48109-2117

Technical Report 95-31
December 1995

An Infeasible Start Predictor Corrector Method for Semi-definite Linear Programming*

Chih-Jen LIN and Romesh SAIGAL
Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109-2117, USA

December 21, 1995

Abstract

In this paper we present an infeasible start path following predictor corrector method for the semidefinite linear programming problem. This method does not assume that the dual pair of semidefinite programs have feasible solutions, and, in at most O(|log(ε/(ρ²δ(A, b, C)))| n) iterations of the predictor corrector method, finds either an approximate solution to the dual pair or shows that there is no optimal solution with duality gap zero in a well defined bounded region. The nonexistence of optimal solutions is detected in a finite number of iterations. Here ε is a measure of non-optimality and infeasibility of the pair of solutions found, and is generally chosen to be small; δ(A, b, C) is a function of the data of the problem; and ρ is a measure of the size of the region searched, and is generally large. The method we present generalizes a method for linear programming developed by Mizuno. We give some preliminary computational experience for this method, and compare its performance (measured by the number of iterations) with that of the code SP of Vandenberghe and Boyd, which is

*Supported by the grant number CCR-9321550 from the National Science Foundation.

based on a potential reduction strategy, and the code SDPA of Fujisawa and Kojima, which is based on a path following and potential reduction strategy.

Key words: Linear programming, Semidefinite programming, Interior point methods, Path following, Predictor corrector method, Infeasible start.

Abbreviated title: Semi-definite linear programming

1 Introduction

This paper considers the dual pair of semidefinite linear programs:

    minimize   C • X
    subject to A_i • X = b_i for every i = 1, ..., m    (1)
               X ⪰ 0

and,

    maximize   Σ_{i=1}^m b_i y_i
    subject to Σ_{i=1}^m A_i y_i + S = C    (2)
               S ⪰ 0

where A_i for i = 1, ..., m and C are n × n symmetric matrices, A • B = trace(A^T B), and X ⪰ 0 means that X is a symmetric and positive semidefinite matrix while X ≻ 0 means that it is a symmetric and positive definite matrix. We assume that the matrices A_i, i = 1, ..., m are linearly independent. The relationship between this pair of dual problems is now well understood. In case both the primal and the dual have interior feasible solutions, there exist optimal solutions to each problem which satisfy the strong duality theorem, and thus the duality gap for such solutions is zero, Alizadeh [1]. Otherwise there are examples where the duality gap is not zero and the primal or the dual problem may not attain its respective optimal solution; see for example Alizadeh [1], Freund [5]. In this paper we consider an infeasible start predictor corrector method which will find, if one exists, an optimal solution with duality gap zero within the set

    {(X, S): 0 ⪯ X ⪯ ρI, 0 ⪯ S ⪯ ρI},    (3)

when the method is started with the initial matrices X_1 = ρI, y_1 = 0 and S_1 = ρI. We do not assume that the primal and dual have feasible solutions, and we detect the non-existence of such an optimal solution, in the set (3), in a finite number of iterations. The method we present here is a generalization of the method for linear programming presented by Mizuno [14] (we generalize algorithm 2 of his paper). See also section 5.10 of the book Saigal [21]. We show that in at most O(|log(ε/(ρ²δ(A, b, C)))| n) iterations of the predictor corrector method, where δ(A, b, C) is a function of the data A_i, i = 1, ..., m, b, and C, either the method will discover

that there is no optimal solution with duality gap zero in the set (3), or find a solution X(ε), y(ε), S(ε) to the dual pair for which the duality gap X(ε) • S(ε) ≤ nε and the infeasibility measures satisfy

    √(Σ_{i=1}^m (A_i • X(ε) − b_i)²) ≤ δ(A, b, C)ε and ||Σ_{i=1}^m A_i y_i(ε) + S(ε) − C||_F ≤ δ(A, b, C)ε.

In this method, the corrector step solves a linear system of equations that differs from that of the predictor only in the right hand side. Our method is similar to the one presented by Potra and Sheng [19], but differs in the predictor and the corrector directions used. The predictor direction used by Potra and Sheng is determined by the equation

    X^{−1/2}(XΔS + ΔX S)X^{1/2} + X^{1/2}(ΔS X + S ΔX)X^{−1/2} = −2X^{1/2}SX^{1/2},

which generates, in the case of a feasible start method, one of the directions from a class introduced by Kojima, Shindo and Hara [10] and analyzed by Monteiro [16]. We use the direction generated by the equation

    ΔX S + X ΔS = σtI − XS, where 0 < σ < 1,

and use the symmetric part of ΔX in the method. The corrector direction in [19] is determined by solving a system similar to ours, but requires the solution of a new linear system, while we solve the same linear system as the predictor, with a different right hand side. Under the assumption that the primal and the dual have optimal solutions with no duality gap, they prove a polynomial bound similar to ours on the number of predictor and corrector steps required by their method. The predictor corrector strategy is designed to keep the iterates of the method in the following neighborhood of the central trajectory:

    N(t, β) = {(X, y, S): X ≻ 0, S ≻ 0, ||S^{1/2}XS^{1/2} − tI||_F ≤ βt}    (4)

where ||B||_F = √(Σ_{i=1}^n Σ_{j=1}^n B_{ij}²) is the Frobenius norm of the matrix B and β > 0 is a constant. This same neighborhood has been used in establishing polynomial convergence results for feasible start methods by Lin and Saigal [11], Monteiro [16] and Zhang [26].
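For small instances, membership in the neighborhood (4) is easy to check numerically. The following sketch (the function name and the eigendecomposition route to S^{1/2} are our own choices, not from the paper) tests the Frobenius-norm condition:

```python
import numpy as np

def in_neighborhood(X, S, t, beta):
    """Test (X, S) against N(t, beta): X, S positive definite and
    ||S^{1/2} X S^{1/2} - t I||_F <= beta * t."""
    if np.linalg.eigvalsh(X).min() <= 0 or np.linalg.eigvalsh(S).min() <= 0:
        return False
    w, V = np.linalg.eigh(S)                  # S = V diag(w) V^T
    S_half = V @ np.diag(np.sqrt(w)) @ V.T    # symmetric square root of S
    E = S_half @ X @ S_half - t * np.eye(X.shape[0])
    return np.linalg.norm(E, 'fro') <= beta * t

# the infeasible start X1 = S1 = rho*I with t1 = rho^2 lies exactly on the
# central trajectory, so it is in N(t1, beta) for any beta > 0
rho, n = 5.0, 4
on_path = in_neighborhood(rho * np.eye(n), rho * np.eye(n), rho**2, 0.005)
```

Since S^{1/2}XS^{1/2} = ρ²I = t_1 I at the start, the residual E is exactly zero there.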
There are several primal-dual algorithms based on the potential reduction strategy, see for example [1, 8, 10, 23], and codes based on some of these are available in the public

domain, for example Fujisawa and Kojima [6] and Vandenberghe and Boyd [24]. Using a MATLAB based code of the method of this paper, which we call INPC, we present some preliminary computational results on the number of iterations taken by the three codes INPC, SP [24] and SDPA [6] to solve six different problems. One of these test problems has problem instances for which the primal-dual pair has no optimal solution. All three codes were able to detect these instances. Just as in the case of linear programming, where predictor corrector path following methods are seen to be the most effective, our results lend support to the statement that a similar situation may hold for semidefinite linear programming as well.

Besides the introduction this paper has six other sections. In section 2 we present the method investigated in this paper; in section 3 some basic results are presented. In section 4 we analyze the method and in section 5 we prove its global convergence. In section 6 we present the algorithm we implement, the six problems we solve, and the numerical results, and discuss some implementation issues. Finally, in section 7 we present our conclusions.

2 The Method

We now present the predictor corrector method we will discuss in this paper.

Step 0 Let ρ > 0 be large, λ ≥ 1, 0 < β < 1 and 1 > σ1 > σ > 0 be given constants; θ_1 = 1, X_1 = ρI, y_1 = 0, S_1 = ρI and t_1 = (X_1 • S_1)/n = ρ². Set k = 1.

Step 1 Predictor Step: Solve

    A_i • ΔX_k = −λ(A_i • X_k − b_i), i = 1, ..., m
    Σ_{i=1}^m A_i Δy_i^k + ΔS_k = −λ(Σ_{i=1}^m A_i y_i^k + S_k − C)    (5)
    ΔX_k S_k + X_k ΔS_k = σt_k I − X_k S_k

for (ΔX_k, Δy_k, ΔS_k), and define

    X̄_k = X_k + (α/2)(ΔX_k + ΔX_k^T)
    ȳ_k = y_k + αΔy_k
    S̄_k = S_k + αΔS_k    (6)
    t̄_k = (X̄_k • S̄_k)/n
    θ_{k+1} = (1 − αλ)θ_k.

Step 2 Corrector Step: Solve

    A_i • ΔX̄_k = 0 for all i = 1, ..., m
    Σ_{i=1}^m A_i Δȳ_i^k + ΔS̄_k = 0    (7)
    ΔX̄_k S̄_k + X̄_k ΔS̄_k = t̄_k I − X̄_k S̄_k

for (ΔX̄_k, Δȳ_k, ΔS̄_k), and define

    X_{k+1} = X̄_k + (1/2)(ΔX̄_k + ΔX̄_k^T)
    y_{k+1} = ȳ_k + Δȳ_k    (8)
    S_{k+1} = S̄_k + ΔS̄_k
    t_{k+1} = (X_{k+1} • S_{k+1})/n.

Step 3 Set k = k + 1. If 3X_k • S_k < ρθ_k(||X_k||_F + ||S_k||_F), stop: there is no optimal solution with duality gap zero in the set {(X, S): ρI ⪰ X ⪰ 0, ρI ⪰ S ⪰ 0}. Otherwise, go to Step 1.

Both the systems (5) and (7) have a unique solution, and they differ only in the right hand side. Also, the solutions ΔS_k and ΔS̄_k are symmetric (a consequence of our assumption on the matrices of the problem), but ΔX_k and ΔX̄_k may not be symmetric. We use the symmetric part of these directions in the predictor and corrector steps.

3 Basic Results

We present here the basic results we need about matrices and norms. Given an n × n matrix B we define its 2-norm as ||B||₂ = max_{||x||₂=1} ||Bx||₂, and it is easily seen that if the matrix

B is symmetric, then ||B||₂ = max_i |λ_i|, where λ_i, i = 1, ..., n, are the n real eigenvalues of B. For a given n × n matrix B, we define vec(B) = (B_{·1}^T, B_{·2}^T, ..., B_{·n}^T)^T, where B_{·j} is the jth column of the matrix B, and note that ||vec(B)||₂ = ||B||_F. Given an m × m matrix A and an n × n matrix B, we define the Kronecker product A ⊗ B as the matrix

    [ A_{1,1}B  ...  A_{1,m}B ]
    [   ...     ...    ...    ]
    [ A_{m,1}B  ...  A_{m,m}B ]

The following are some well known properties of the Kronecker product; see for example Bellman [2]:

1. vec(AXB) = (B^T ⊗ A)vec(X).
2. (A ⊗ B)^T = A^T ⊗ B^T.
3. (A ⊗ B)^{−1} = A^{−1} ⊗ B^{−1}.
4. (A ⊗ B)(C ⊗ D) = AC ⊗ BD.
5. λ(A ⊗ B) = {α_i β_j : α_i ∈ λ(A), β_j ∈ λ(B)}, where λ(A) is the set of all eigenvalues of A.

The following are well known properties of the trace:

1. If A ⪰ 0, then trace(A) ≥ 0.
2. If A ⪰ B, then trace(A) ≥ trace(B).
3. If C = A + B, then trace(C) = trace(A) + trace(B).
4. trace(A) = Σ_{i=1}^n λ_i(A), where λ_i(A) are the eigenvalues of A.
5. det(A) = Π_{i=1}^n λ_i(A), where λ_i(A) are the eigenvalues of A.

We state some lemmas needed in this paper. The proofs can be found in the cited references.

Lemma 1 Let A and B be arbitrary n × n matrices, with B nonsingular. Then ||AB||_F ≤ ||A||_F ||B^TB||₂^{1/2} and ||AB||_F ≤ ||A^TA||₂^{1/2} ||B||_F.

Proof: Lemma 1, [11]. ∎

For a given n × n matrix B which has all eigenvalues real, we define λ_i(B), i = 1, ..., n, to be the n real eigenvalues of B, and λ_max(B) = max_i λ_i(B), and similarly λ_min(B). We can prove:

Lemma 2 Let A, B and C be n × n symmetric matrices with A = B + C. Then λ_min(A) ≥ λ_min(B) − ||C||_F and λ_max(A) ≤ λ_max(B) + ||C||_F.

Proof: Lemma 2, [11]. ∎

Lemma 3 Let A and B be n × n matrices with A symmetric and B nonsingular. Then ||A||_F ≤ (1/2)||BAB^{−1} + (BAB^{−1})^T||_F.

Proof: Lemma 3.3, Monteiro [16]. ∎

Another technical lemma follows:

Lemma 4 Let A, B and C be n × n matrices such that A = B + C and trace(B^TC) ≥ 0. Then ||B||_F ≤ ||A||_F and ||C||_F ≤ ||A||_F.

Proof: Lemma 4, [11]. ∎

4 Analysis and Convergence of the Method

In this section we will prove that the method generates a sequence of points such that t_k goes to zero at the linear rate (1 − α(1 − σ1)), with α = O(1/n). We now prove a basic lemma which shows that the infeasibility decreases after each predictor and corrector step.

Lemma 5 For each k = 1, 2, ...

1. A_i • X_k − b_i = θ_k(A_i • X_1 − b_i), i = 1, ..., m.
2. Σ_{i=1}^m A_i y_i^k + S_k − C = θ_k(Σ_{i=1}^m A_i y_i^1 + S_1 − C).

Proof: We will show part 1; the proof of part 2 is similar. Since A_i • ΔX̄_k = 0 and A_i is symmetric,

    A_i • X_{k+1} − b_i = A_i • X̄_k − b_i
    = A_i • X_k + αA_i • ΔX_k − b_i
    = (1 − αλ)(A_i • X_k − b_i)
    = (1 − αλ)θ_k(A_i • X_1 − b_i),

and the result follows by an induction argument, since θ_{k+1} = (1 − αλ)θ_k. ∎

Define the m × n² matrix A whose ith row is the vector vec(A_i)^T. Then the solutions of the systems (5) and (7) can be computed by solving the equivalent linear system

    [ A    0    0   ] [ vec(ΔX_k) ]   [ −λ(A vec(X_k) − b)               ]
    [ 0    A^T  I   ] [ Δy_k      ] = [ −λ(A^T y_k + vec(S_k) − vec(C))  ]    (9)
    [ F_k  0    G_k ] [ vec(ΔS_k) ]   [ vec(R_k)                         ]

where F_k = S_k ⊗ I, G_k = I ⊗ X_k and R_k is the appropriate right hand side matrix (R_k = σt_k I − X_k S_k for the predictor system (5)).

Lemma 6 Let X_k and S_k be symmetric and positive definite, and define D_k = F_k^{−1/2}G_k^{1/2}. Then

1. D_k² = F_k^{−1}G_k.
2. D_k is symmetric.

Proof: Since X_k and S_k are symmetric and positive definite, F_k^{−1/2} = S_k^{−1/2} ⊗ I and G_k^{1/2} = I ⊗ X_k^{1/2} are well defined, and thus D_k is well defined. Now part (1) follows from property (4), and part (2) from property (2), of the Kronecker product. ∎

Lemma 7 There exist (v, W), with W symmetric, such that Σ_{i=1}^m A_i v_i + W = C, and a symmetric U such that A_i • U = b_i, i = 1, ..., m. Let P_k = D_k A^T(AD_k²A^T)^{−1}AD_k with D_k as in Lemma 6. Then, for each

k = 1, ...,

    Δy_k = −(AD_k²A^T)^{−1}AF_k^{−1}vec(−X_kS_k + σt_kI) − λθ_k(AD_k²A^T)^{−1}AD_k²vec(S_1 − W)
           − λθ_k(AD_k²A^T)^{−1}A vec(X_1 − U) − λθ_k(y_1 − v),

    D_k vec(ΔS_k) = P_k G_k^{−1/2}F_k^{−1/2}vec(−X_kS_k + σt_kI) − λθ_k(I − P_k)D_k vec(S_1 − W)
           + λθ_k P_k D_k^{−1}vec(X_1 − U),

    D_k^{−1}vec(ΔX_k) = (I − P_k)G_k^{−1/2}F_k^{−1/2}vec(−X_kS_k + σt_kI) + λθ_k(I − P_k)D_k vec(S_1 − W)
           − λθ_k P_k D_k^{−1}vec(X_1 − U).

Proof: It is readily confirmed that there exist (v, W) such that Σ_{i=1}^m A_i v_i + W = C and, since A_i, i = 1, ..., m, are linearly independent, there exists U such that A_i • U = b_i, i = 1, ..., m. Using simple algebra and the definitions of U, v, W, system (9) can be re-written as

    [ A    0    0   ] [ vec(ΔX_k + λθ_k(X_1 − U)) ]   [ 0    ]
    [ 0    A^T  I   ] [ Δy_k + λθ_k(y_1 − v)      ] = [ 0    ]
    [ F_k  0    G_k ] [ vec(ΔS_k + λθ_k(S_1 − W)) ]   [ R̂_k ]

where R̂_k = vec(−X_kS_k + σt_kI) + λθ_k F_k vec(X_1 − U) + λθ_k G_k vec(S_1 − W). Then

    Δy_k = −(AF_k^{−1}G_kA^T)^{−1}AF_k^{−1}R̂_k − λθ_k(y_1 − v)
    = −(AD_k²A^T)^{−1}AF_k^{−1}vec(−X_kS_k + σt_kI) − λθ_k(AD_k²A^T)^{−1}AD_k²vec(S_1 − W)
      − λθ_k(AD_k²A^T)^{−1}A vec(X_1 − U) − λθ_k(y_1 − v),

    D_k vec(ΔS_k) = D_k(−A^T(Δy_k + λθ_k(y_1 − v)) − λθ_k vec(S_1 − W))
    = P_k G_k^{−1/2}F_k^{−1/2}vec(−X_kS_k + σt_kI) − λθ_k(I − P_k)D_k vec(S_1 − W)
      + λθ_k P_k D_k^{−1}vec(X_1 − U),

    D_k^{−1}vec(ΔX_k) = D_k^{−1}(−λθ_k vec(X_1 − U) + F_k^{−1}(R̂_k − G_k vec(ΔS_k + λθ_k(S_1 − W))))
    = G_k^{−1/2}F_k^{−1/2}vec(−X_kS_k + σt_kI) − D_k vec(ΔS_k)
    = (I − P_k)G_k^{−1/2}F_k^{−1/2}vec(−X_kS_k + σt_kI) + λθ_k(I − P_k)D_k vec(S_1 − W)
      − λθ_k P_k D_k^{−1}vec(X_1 − U),

and we are done. ∎

We now prove two important propositions.

Proposition 8 Let U and W be as in Lemma 7, with −ρI ⪯ U ⪯ ρI and −ρI ⪯ W ⪯ ρI. For some k, let (X_k, S_k) ∈ N(t_k, β), E_k = S_k^{1/2}X_kS_k^{1/2} − t_kI, and θ_kρ(||X_k||_F + ||S_k||_F) ≤ 3X_k • S_k. Then

    ||X_k^{−1/2}ΔX_kS_k^{1/2}||_F ≤ B(n)√t_k,
    ||X_k^{1/2}ΔS_kS_k^{−1/2}||_F ≤ B(n)√t_k,
    ||S_k^{1/2}ΔX_kX_k^{−1/2}||_F ≤ √((1+β)/(1−β)) B(n)√t_k,

where B(n) = (1/√(1−β))(β + (1 − σ)√n + 6λn) is an increasing function of n.

Proof: Using Lemmas 1 and 2, the neighborhood condition, and the facts S_1 − W ⪰ 0 and X_1 − U ⪰ 0, we first obtain

    ||P_k G_k^{−1/2}F_k^{−1/2}vec(−X_kS_k + σt_kI)||₂ ≤ ||G_k^{−1/2}F_k^{−1/2}vec(−X_kS_k + σt_kI)||₂
    = ||X_k^{−1/2}(σt_kI − X_kS_k)S_k^{−1/2}||_F
    ≤ ||X_k^{−1/2}S_k^{−1/2}||₂ ||σt_kI − S_k^{1/2}X_kS_k^{1/2}||_F
    ≤ ((1 − σ)√n + β)√(t_k/(1 − β)),

    ||D_k vec(S_1 − W)||₂ = ||X_k^{1/2}(S_1 − W)S_k^{−1/2}||_F ≤ ||S_1 − W||₂ ||X_k||_F ||X_k^{−1/2}S_k^{−1/2}||₂ ≤ 2ρ||X_k||_F/√((1 − β)t_k),

    ||D_k^{−1}vec(X_1 − U)||₂ = ||X_k^{−1/2}(X_1 − U)S_k^{1/2}||_F ≤ ||X_1 − U||₂ ||S_k||_F ||X_k^{−1/2}S_k^{−1/2}||₂ ≤ 2ρ||S_k||_F/√((1 − β)t_k),

so that, using θ_kρ(||X_k||_F + ||S_k||_F) ≤ 3X_k • S_k = 3nt_k,

    λθ_k(||D_k vec(S_1 − W)||₂ + ||D_k^{−1}vec(X_1 − U)||₂) ≤ (2λ/√((1 − β)t_k)) θ_kρ(||X_k||_F + ||S_k||_F) ≤ 6λn√(t_k/(1 − β)).

Therefore, from Lemma 7, and since ||P_k||₂ ≤ 1 and ||I − P_k||₂ ≤ 1,

    ||X_k^{1/2}ΔS_kS_k^{−1/2}||_F = ||D_k vec(ΔS_k)||₂ ≤ ((1 − σ)√n + β)√(t_k/(1 − β)) + 6λn√(t_k/(1 − β)) = B(n)√t_k,

and the same bound holds for ||X_k^{−1/2}ΔX_kS_k^{1/2}||_F = ||D_k^{−1}vec(ΔX_k)||₂. Finally,

    ||S_k^{1/2}ΔX_kX_k^{−1/2}||_F ≤ ||S_k^{1/2}X_k^{1/2}||₂ ||X_k^{−1/2}ΔX_kS_k^{1/2}||_F ||S_k^{−1/2}X_k^{−1/2}||₂ ≤ √((1+β)/(1−β)) B(n)√t_k. ∎

Corollary 9 For some k let t_k > 0, (X_k, S_k) ∈ N(t_k, β), and θ_kρ(||X_k||_F + ||S_k||_F) ≤ 3X_k • S_k. Then

    (1 − α + ασ − (α²/n)B(n)²)t_k ≤ t̄_k ≤ (1 − α + ασ + (α²/n)B(n)²)t_k.

That is,

    1 − α + ασ − (α²/n)B(n)² ≤ t̄_k/t_k,    (10)
    t̄_k/t_k ≤ 1 − α + ασ + (α²/n)B(n)².    (11)

Proof: Note that nt_k = X_k • S_k, and ΔS_k • ΔX_k = ΔS_k • ΔX_k^T = ΔX_k • ΔS_k (since ΔS_k is symmetric). Using (6), we see that

    nt̄_k = X̄_k • S̄_k = X_k • S_k + α(ΔX_k • S_k + ΔS_k • X_k) + α²ΔX_k • ΔS_k.

Also, from (5), ΔX_k • S_k + ΔS_k • X_k = nσt_k − nt_k. Now ΔX_k • ΔS_k = trace(ΔX_k^TΔS_k) = trace(S_k^{1/2}ΔX_k^TX_k^{−1/2} · X_k^{1/2}ΔS_kS_k^{−1/2}), and the result follows from Proposition 8. ∎

Proposition 10 For some k let (X_k, S_k) ∈ N(t_k, β), E_k = S_k^{1/2}X_kS_k^{1/2} − t_kI, and θ_kρ(||X_k||_F + ||S_k||_F) ≤ 3X_k • S_k. Then

    ||X̄_k^{−1/2}ΔX̄_kS̄_k^{1/2}||_F ≤ M(α, n)√t_k,
    ||X̄_k^{1/2}ΔS̄_kS̄_k^{−1/2}||_F ≤ M(α, n)√t_k,
    ||S̄_k^{1/2}ΔX̄_kX̄_k^{−1/2}||_F ≤ √((1+β)/(1−β)) M(α, n)√t_k,

where M(α, n) is an increasing function of α, determined by β, σ and B(n) through the computation below.

Proof: We will only show this for ||X̄_k^{−1/2}ΔX̄_kS̄_k^{1/2}||_F. The first inequality below follows from Lemma 4, and using the same arguments as those of Proposition 8, and inequalities (10),

(11), we obtain the other inequalities below:

    ||X̄_k^{−1/2}ΔX̄_kS̄_k^{1/2}||_F ≤ ||X̄_k^{−1/2}(t̄_kI − X̄_kS̄_k)S̄_k^{−1/2}||_F ≤ ||X̄_k^{−1/2}S̄_k^{−1/2}||₂ ||t̄_kI − S̄_k^{1/2}X̄_kS̄_k^{1/2}||_F.

Expanding X̄_kS̄_k = (X_k + (α/2)(ΔX_k + ΔX_k^T))(S_k + αΔS_k), using the predictor equation (5), and bounding each resulting term by Proposition 8 together with (10) and (11), every term of ||t̄_kI − S̄_k^{1/2}X̄_kS̄_k^{1/2}||_F is at most a constant multiple of t_k; collecting the constants yields the stated bound with the function M(α, n). The remaining two inequalities follow in the same way. ∎

Lemma 11 If α*, σ1 satisfy

    σ − 2M(α*, n)B(n)/n − α*B(n)²/n ≥ 0,
    σ + 2M(α*, n)B(n)/n + α*B(n)²/n ≤ σ1,

then for every α such that 0 < α ≤ α*, and every k with (X_k, S_k) ∈ N(t_k, β) and θ_kρ(||X_k||_F + ||S_k||_F) ≤ 3X_k • S_k,

    (1 − α)X_k • S_k ≤ X_{k+1} • S_{k+1} ≤ (1 − α(1 − σ1))X_k • S_k.

Proof: We expand X_{k+1} • S_{k+1}:

    X_{k+1} • S_{k+1} = (X̄_k + (1/2)(ΔX̄_k + ΔX̄_k^T)) • (S̄_k + ΔS̄_k)
    = X̄_k • S̄_k + ΔX̄_k • S̄_k + X̄_k • ΔS̄_k + ΔX̄_k • ΔS̄_k
    = X̄_k • S̄_k + ΔX̄_k • ΔS̄_k    (note: ΔX̄_k • S̄_k + X̄_k • ΔS̄_k = 0, by taking the trace in (7))
    = (1 − α + ασ)X_k • S_k + α²ΔX_k • ΔS_k + ΔX̄_k • ΔS̄_k.

Since

    |ΔX_k • ΔS_k| = |vec(ΔX_k)^T vec(ΔS_k)| = |(D_k^{−1}vec(ΔX_k))^T(D_k vec(ΔS_k))|
    ≤ ||X_k^{−1/2}ΔX_kS_k^{1/2}||_F ||X_k^{1/2}ΔS_kS_k^{−1/2}||_F ≤ B(n)²t_k,

and, by Propositions 8 and 10, |ΔX̄_k • ΔS̄_k| ≤ 2αM(α, n)B(n)t_k, we have

    X_{k+1} • S_{k+1} ≤ (1 − α + ασ + 2αM(α, n)B(n)/n + α²B(n)²/n)X_k • S_k,
    X_{k+1} • S_{k+1} ≥ (1 − α + ασ − 2αM(α, n)B(n)/n − α²B(n)²/n)X_k • S_k.

Since M(α, n) is an increasing function of α, the hypotheses give

    (1 − α)X_k • S_k ≤ X_{k+1} • S_{k+1} ≤ (1 − α(1 − σ1))X_k • S_k for 0 < α ≤ α*. ∎

We now obtain a condition that guarantees that X_{k+1} and S_{k+1} are symmetric and positive definite.

Lemma 12 For some k let (X_k, S_k) ∈ N(t_k, β), E_k = S_k^{1/2}X_kS_k^{1/2} − t_kI, θ_kρ(||X_k||_F + ||S_k||_F) ≤ 3X_k • S_k, and

    1 − (β + √((1+β)/(1−β))(αB(n) + M(α, n))) > 0.

Then X_{k+1} and S_{k+1} are symmetric and positive definite.

Proof: We show this for X_{k+1}. It is symmetric by construction, and it is positive definite if and only if S_k^{1/2}X_{k+1}S_k^{1/2} is. We note that

    S_k^{1/2}X_{k+1}S_k^{1/2} = S_k^{1/2}(X_k + (α/2)(ΔX_k + ΔX_k^T) + (1/2)(ΔX̄_k + ΔX̄_k^T))S_k^{1/2}.

Then, by Lemma 2,

    λ_min(S_k^{1/2}X_{k+1}S_k^{1/2}) ≥ t_k − ||E_k||_F − ||S_k^{1/2}((α/2)(ΔX_k + ΔX_k^T) + (1/2)(ΔX̄_k + ΔX̄_k^T))S_k^{1/2}||_F.

Using the results of Propositions 8 and 10, we obtain

    λ_min(S_k^{1/2}X_{k+1}S_k^{1/2}) ≥ (1 − β − √((1+β)/(1−β))(αB(n) + M(α, n)))t_k > 0,

and our result follows. To show that S_{k+1} is positive definite, consider X_k^{1/2}S_{k+1}X_k^{1/2}; using the same argument as above, the lemma follows. ∎

We now obtain a lemma used for showing that X_{k+1} and S_{k+1} are in the neighborhood N(t_{k+1}, β).

Lemma 13 Let the conditions of Lemma 12 be satisfied. Then

    ||S_{k+1}^{1/2}X_{k+1}S_{k+1}^{1/2} − t_{k+1}I||_F ≤ [(1+β)(2αB(n)M(α, n) + M(α, n)²)/(2(1 − α)) + 2αB(n)M(α, n)/((1 − α)n)] t_{k+1}.

Proof: By simple algebra we see that

    X_{k+1}S_{k+1} = X̄_kS̄_k + ΔX̄_kS̄_k + X̄_kΔS̄_k + ((ΔX̄_k^T − ΔX̄_k)/2)S̄_k + ((ΔX̄_k + ΔX̄_k^T)/2)ΔS̄_k
    = t̄_kI + ((ΔX̄_k^T − ΔX̄_k)/2)S̄_k + H_k,

where H_k = ((ΔX̄_k + ΔX̄_k^T)/2)ΔS̄_k, and, using the symmetry of S̄_k and ΔS̄_k,

    nt_{k+1} = X_{k+1} • S_{k+1} = nt̄_k + ΔX̄_k • ΔS̄_k.

Then, from Lemma 3 (applied with B = S_k^{1/2}, and using ||S_k^{1/2}S_{k+1}^{−1/2}||₂², which is bounded via Lemma 11, to pass between the metrics at S_k and S_{k+1}),

    ||S_{k+1}^{1/2}X_{k+1}S_{k+1}^{1/2} − t_{k+1}I||_F
    ≤ (1/2)||S_k^{1/2}(X_{k+1}S_{k+1} − t_{k+1}I)S_k^{−1/2} + S_k^{−1/2}(S_{k+1}X_{k+1} − t_{k+1}I)S_k^{1/2}||_F · ||S_k^{1/2}S_{k+1}^{−1/2}||₂²
    ≤ (||S_k^{1/2}H_kS_k^{−1/2}||_F + (1/√n)|ΔX̄_k • ΔS̄_k|) ||S_k^{1/2}S_{k+1}^{−1/2}||₂²,

since the skew-symmetric term ((ΔX̄_k^T − ΔX̄_k)/2)S̄_k cancels in the symmetrization and t̄_k − t_{k+1} = −(1/n)ΔX̄_k • ΔS̄_k. Now, for ΔX = ΔX_k or ΔX̄_k and ΔS = ΔS_k or ΔS̄_k, we note that

    |ΔX • ΔS| = |trace(ΔX^TΔS)| = |trace(S_k^{1/2}ΔX^TΔSS_k^{−1/2})| ≤ ||X_k^{−1/2}ΔXS_k^{1/2}||_F ||X_k^{1/2}ΔSS_k^{−1/2}||_F,

and also

    ||S_k^{1/2}ΔXΔSS_k^{−1/2}||_F ≤ ||S_k^{1/2}ΔXX_k^{−1/2}||_F ||X_k^{1/2}ΔSS_k^{−1/2}||_F,

and the lemma follows from Propositions 8 and 10, and Lemma 11. ∎

5 Main Convergence Theorem

We now show that the infeasible start method presented in Section 2 converges to an optimal solution or discovers that there is no optimal solution with duality gap zero in a predefined region. We first show the latter, in the next theorem.

Theorem 14 Let α* be as in Lemma 11, and let α ≤ α*. Also, for some k, let ρθ_k(||X_k||_F + ||S_k||_F) > 3X_k • S_k, but for all l < k let ρθ_l(||X_l||_F + ||S_l||_F) ≤ 3X_l • S_l. Then there is no optimal solution with duality gap zero in the set {(X, S): ρI ⪰ X ⪰ 0, ρI ⪰ S ⪰ 0}.

Proof: From Lemmas 5 and 11 and mathematical induction, for l < k we have

    θ_{l+1}ρ²n = (1 − αλ)θ_lρ²n ≤ (1 − αλ)X_l • S_l ≤ (1 − α)X_l • S_l ≤ X_{l+1} • S_{l+1}    (note: λ ≥ 1).

Now, assume the contrary, and let an optimal solution ρI ⪰ X* ⪰ 0, ρI ⪰ S* ⪰ 0 exist with X* • S* = 0. Also, for X̂ = θ_kX_1 + (1 − θ_k)X*, ŷ = θ_ky_1 + (1 − θ_k)y* and Ŝ = θ_kS_1 + (1 − θ_k)S*, using Lemma 5 it can be seen that

    A_i • (X_k − X̂) = 0 for every i = 1, ..., m,
    Σ_{i=1}^m (y_i^k − ŷ_i)A_i + S_k − Ŝ = 0,

and thus we have (X̂ − X_k) • (Ŝ − S_k) = 0. Note that, since X_k ⪰ 0,

    ||X_k||_F = √(trace(X_k²)) = √(Σ_i λ_i(X_k)²) ≤ Σ_i λ_i(X_k) = I • X_k,

and we have (using X_1 = ρI, S_1 = ρI)

    θ_kρ(||X_k||_F + ||S_k||_F) ≤ θ_kρ(I • X_k + I • S_k)
    = θ_kS_1 • X_k + θ_kX_1 • S_k
    ≤ (θ_kX_1 + (1 − θ_k)X*) • S_k + (θ_kS_1 + (1 − θ_k)S*) • X_k
    = X̂ • S_k + Ŝ • X_k
    = X̂ • Ŝ + X_k • S_k
    = θ_k²X_1 • S_1 + θ_k(1 − θ_k)(X_1 • S* + S_1 • X*) + (1 − θ_k)²X* • S* + X_k • S_k
    ≤ θ_k²nρ² + 2θ_k(1 − θ_k)nρ² + X_k • S_k
    ≤ 2θ_knρ² + X_k • S_k
    ≤ 3X_k • S_k,

which contradicts ρθ_k(||X_k||_F + ||S_k||_F) > 3X_k • S_k. ∎

We are now ready to prove the main convergence theorem.

Theorem 15 Let β = 0.005, λ = 1, σ = 0.5 and σ1 = 0.9, and let α ≤ α* with α* as in Lemma 11. Then for every k = 1, 2, ..., (X_k, y_k, S_k) ∈ N(t_k, β). Thus, if X_1 = ρI, y_1 = 0, S_1 = ρI with ρ ≥ 1, then for every ε > 0, after at most O(|log(ε/(ρ²δ(A, b, C)))| n) iterations of the predictor-corrector method, either the method stops by detecting ρθ_k(||X_k||_F + ||S_k||_F) > 3X_k • S_k, in which case the problem has no optimal solution with duality gap zero in the set {(X, S): ρI ⪰ X ⪰ 0, ρI ⪰ S ⪰ 0}; or a

solution (X(ε), y(ε), S(ε)) is found with X(ε) • S(ε) ≤ nε, √(Σ_{i=1}^m (A_i • X(ε) − b_i)²) ≤ δ(A, b, C)ε, and ||Σ_{i=1}^m y_i(ε)A_i + S(ε) − C||_F ≤ δ(A, b, C)ε.

Proof: It is readily confirmed that the above choice of constants satisfies the hypotheses of Lemmas 11 and 12; hence Theorem 14 applies. Now assume the method continues without detecting infeasibility. It is readily confirmed that ||E_{k+1}||_F ≤ βt_{k+1} and t_{k+1} ≤ (1 − α(1 − σ1))t_k, and thus the sequence belongs to the neighborhood N(t_k, β). Now let ε > 0 be arbitrary and choose δ(A, b, C), a function of the data A_i, i = 1, ..., m, b and C, so that

    √(Σ_{i=1}^m (A_i • X_1 − b_i)²) ≤ ρδ(A, b, C) and ||Σ_{i=1}^m A_iy_i^1 + S_1 − C||_F ≤ ρδ(A, b, C).

By Lemma 5, after k iterations the infeasibility measures are (1 − αλ)^k times their initial values, and t_k ≤ (1 − α(1 − σ1))^{k−1}t_1 = (1 − α(1 − σ1))^{k−1}ρ². With α(1 − σ1) < 1 and αλ < 1, taking

    k ≥ max{(3/(α(1 − σ1))) log(ρ²δ(A, b, C)/ε), (3/(αλ)) log(ρδ(A, b, C)/ε)} + 1

gives X_k • S_k ≤ nε, √(Σ_{i=1}^m (A_i • X_k − b_i)²) ≤ δ(A, b, C)ε and ||Σ_{i=1}^m y_i^kA_i + S_k − C||_F ≤ δ(A, b, C)ε, and, since α = O(1/n), the required result is obtained. ∎

6 An Implementation

6.1 Introduction

In this section we present some preliminary results of an implementation of the path following predictor corrector method presented in this paper. In linear programming, nearly all efficient codes (for example, [12, 13, 25, 27]) implement a predictor corrector path following method, and these have been found to be very effective. For semidefinite linear programming, due to the nonlinear nature of the problem, the numerical behavior of these methods may differ from that for linear programming. We implement a variant of the method presented in Section 2 within the MATLAB environment and compare the number of its iterations with those attained by the two codes, SP of Vandenberghe and

Boyd [24] and SDPA of Fujisawa and Kojima [6]. Our results indicate that for small size problems this strategy may be good for semidefinite linear programming as well.

Semidefinite programming is used for solving some specific problems, for example [7, 9]. In [7], a version of a path following method is used to solve some semidefinite linear programs that arise as bounding problems in a branch and bound strategy for some combinatorial optimization problems. To date, no systematic tests of predictor corrector path following methods for semidefinite programming have been performed. In this section we report numerical tests on six different types of problems and compare them with two semidefinite programming solvers, SDPA [6] and SP [24]. These solvers use a potential reduction strategy, with SDPA basically a path-following method which utilizes the logarithmic barrier function as a merit function when a corrector step is performed.

6.2 Algorithm

We now present the specific algorithm we implement. For a given matrix A, we define |A|_∞ = max_{i,j} |A_{ij}|.

Step 0 Set ρ = 1000 max{|A_1|_∞, ..., |A_m|_∞, |b|_∞, |C|_∞}; X_1 = ρI, y_1 = 0, S_1 = ρI; ε > 0; t_1 = (X_1 • S_1)/n; β = 1, λ = 1.0, θ_1 = 1 and σ = 0.

Step 1 Predictor Step: Solve

    Σ_{j=1}^m A_i • (S_k^{−1}A_jX_k)Δy_j = −A_i • S_k^{−1}(σt_kI − S_kX_k + λ(Σ_{j=1}^m y_j^kA_j + S_k − C)X_k) − λ(A_i • X_k − b_i), i = 1, ..., m
    ΔS_k = −Σ_{i=1}^m Δy_iA_i − λ(Σ_{i=1}^m y_i^kA_i + S_k − C)
    ΔX_k^T = S_k^{−1}(σt_kI − S_kX_k − ΔS_kX_k)

Step 2 Step Selection: Set l = 1 and ᾱ_1 = max{α: X_k + (α/2)(ΔX_k + ΔX_k^T) ⪰ 0, S_k + αΔS_k ⪰ 0, α ≤ 1}. If ᾱ_1 = 1, a feasible solution has been found. Set α_1 = 0.99ᾱ_1.

Step 3 Set X_{k,l} = X_k + (α_l/2)(ΔX_k + ΔX_k^T)

    y_{k,l} = y_k + α_lΔy_k
    S_{k,l} = S_k + α_lΔS_k

Step 4 Corrector Step: Set X̄_k = X_{k,l}, ȳ_k = y_{k,l}, S̄_k = S_{k,l}, t̄_k = (X̄_k • S̄_k)/n and solve

    Σ_{j=1}^m A_i • (S̄_k^{−1}A_jX̄_k)Δȳ_j = −A_i • S̄_k^{−1}(t̄_kI − S̄_kX̄_k), i = 1, ..., m
    ΔS̄_k = −Σ_{i=1}^m Δȳ_iA_i
    ΔX̄_k^T = S̄_k^{−1}(t̄_kI − S̄_kX̄_k − ΔS̄_kX̄_k)

Step 5 Set

    X_{k+1,l} = X̄_k + (1/2)(ΔX̄_k + ΔX̄_k^T)
    y_{k+1,l} = ȳ_k + Δȳ_k
    S_{k+1,l} = S̄_k + ΔS̄_k
    t_{k+1,l} = (X_{k+1,l} • S_{k+1,l})/n

Step 6 If (X_{k+1,l}, y_{k+1,l}, S_{k+1,l}) ∈ N(t_{k+1,l}, β), then set X_{k+1} = X_{k+1,l}, y_{k+1} = y_{k+1,l}, S_{k+1} = S_{k+1,l}, t_{k+1} = t_{k+1,l}, θ_{k+1} = (1 − α_l)θ_k, and go to Step 7. Otherwise, if l ≤ 2, set α_{l+1} = α_l/2, l = l + 1, and go to Step 3. If l > 2, select

    α_{l+1} = 0.99 max{α: X_k + (α/2)(ΔX_k + ΔX_k^T + ΔX̄_k + ΔX̄_k^T) ⪰ 0, S_k + α(ΔS_k + ΔS̄_k) ⪰ 0},

and set

    X_{k+1} = X_k + (α_{l+1}/2)(ΔX_k + ΔX_k^T + ΔX̄_k + ΔX̄_k^T)
    y_{k+1} = y_k + α_{l+1}(Δy_k + Δȳ_k)
    S_{k+1} = S_k + α_{l+1}(ΔS_k + ΔS̄_k)
    t_{k+1} = (X_{k+1} • S_{k+1})/n
    θ_{k+1} = (1 − α_{l+1})θ_k

Step 7 Set k = k + 1. If 3X_k • S_k < ρθ_k(||X_k||_F + ||S_k||_F), stop: there is no optimal solution with duality gap zero in the set {(X, S): ρI ⪰ X ⪰ 0, ρI ⪰ S ⪰ 0}. Otherwise, go to Step 1.
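For very small instances, one predictor step of the scheme above can be sketched end to end. The sketch below assembles the Newton system in the Kronecker form (9) instead of the smaller m × m Schur complement system that the implementation actually forms (the two are equivalent; the Kronecker form is just shorter to write), takes the symmetric part of ΔX_k, and then takes 0.99 of the largest step preserving positive definiteness, as in Step 2. Function names are illustrative:

```python
import numpy as np

def vec(M):
    # column-stacking vec, so that vec(AXB) = (B^T kron A) vec(X)
    return M.flatten(order='F')

def max_step(X, D):
    # largest alpha with X + alpha*D still positive semidefinite
    w, V = np.linalg.eigh(X)
    Xih = V @ np.diag(1.0 / np.sqrt(w)) @ V.T      # X^{-1/2}
    lmin = np.linalg.eigvalsh(Xih @ D @ Xih).min()
    return np.inf if lmin >= 0 else -1.0 / lmin

def predictor_step(Ais, b, C, X, y, S, lam=1.0, sigma=0.0):
    n, m = X.shape[0], len(Ais)
    N = n * n
    Amat = np.vstack([vec(Ai) for Ai in Ais])      # the m x n^2 matrix A
    F = np.kron(S, np.eye(n))                      # F = S kron I
    G = np.kron(np.eye(n), X)                      # G = I kron X
    t = np.trace(X @ S) / n
    K = np.zeros((2 * N + m, 2 * N + m))           # block system (9)
    K[:m, :N] = Amat
    K[m:m + N, N:N + m] = Amat.T
    K[m:m + N, N + m:] = np.eye(N)
    K[m + N:, :N] = F
    K[m + N:, N + m:] = G
    rd = sum(yi * Ai for yi, Ai in zip(y, Ais)) + S - C   # dual residual
    rhs = np.concatenate([-lam * (Amat @ vec(X) - b),
                          -lam * vec(rd),
                          vec(sigma * t * np.eye(n) - X @ S)])
    sol = np.linalg.solve(K, rhs)
    dX = sol[:N].reshape(n, n, order='F')
    dX = (dX + dX.T) / 2                           # symmetric part of dX
    dy, dS = sol[N:N + m], sol[N + m:].reshape(n, n, order='F')
    alpha = 0.99 * min(1.0, max_step(X, dX), max_step(S, dS))
    return X + alpha * dX, y + alpha * dy, S + alpha * dS, alpha
```

With λ = 1, a step of size α multiplies the primal and dual residuals exactly by (1 − α), which is what the θ update in the method records.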

6.3 Implementation Issues

The method presented in Section 2 prescribes a step size that is too small to be practical; the practical algorithm is the one of subsection 6.2. In Step 2 of that algorithm, the step size chosen during the predictor step is 0.99 of the largest step size that preserves the positive definiteness of the resulting matrices. If, after the corrector step, the resulting iterates lie in the required neighborhood, this step size is accepted. Otherwise, it is reduced to half its previous value and used again in a predictor step. This process is continued until the iterate after the corrector step lies in the required neighborhood. As can be confirmed from Theorem 15, after at most O(log(n)) such reductions in the step size, the resulting iterate will necessarily lie in the required neighborhood. Also, each corrector step after an adjustment in the step size solves a linear system with the same coefficient matrix and a different right hand side, and is thus relatively inexpensive. In this implementation we only allow l = 1, 2, 3, and our computational experience suggests that, for the small problems we tested, this number is on average very close to 1 and never greater than 2. This is one aspect of the implementation that needs further investigation for large scale problems.

We choose the value β = 1. Since ||S_1^{1/2}X_1S_1^{1/2} − t_1I||_F = 0, (X_1, S_1) belongs to the required neighborhood N(t_1, β). In Step 6, when l ≤ 2 and (X_{k+1,l}, y_{k+1,l}, S_{k+1,l}) ∉ N(t_{k+1,l}, β), the predictor step size is reduced to α_{l+1} = α_l/2. To maintain a larger step size, we also implemented a "two corrector step" strategy, in which we apply another corrector step, hoping that this will bring the resulting iterate into the required neighborhood, by solving

    Σ_{j=1}^m A_i • (S̄_k^{−1}A_jX̄_k)Δŷ_j = −A_i • S̄_k^{−1}(t_{k+1,l}I − S_{k+1,l}X_{k+1,l}), i = 1, ..., m
    ΔŜ_k = −Σ_{i=1}^m Δŷ_iA_i
    ΔX̂_k^T = S̄_k^{−1}(t_{k+1,l}I − S_{k+1,l}X_{k+1,l} − ΔŜ_kX̄_k)

and then checking to see if (X_{k+1,l} + (1/2)(ΔX̂_k + ΔX̂_k^T), y_{k+1,l} + Δŷ_k, S_{k+1,l} + ΔŜ_k) ∈ N(t_{k+1,l}, β).
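The reason the repeated corrector solves are cheap is that every right hand side shares the same m × m coefficient matrix, with entries M_ij = A_i • (S̄_k^{−1}A_jX̄_k). A sketch of forming that matrix once so it can be reused with fresh right hand sides (the function name is ours):

```python
import numpy as np

def schur_matrix(Ais, S, X):
    # M[i, j] = A_i • (S^{-1} A_j X); built once per (X, S) pair and then
    # reused for every predictor/corrector right hand side at that iterate
    Sinv = np.linalg.inv(S)
    cols = [Sinv @ Aj @ X for Aj in Ais]
    return np.array([[np.trace(Ai @ B) for B in cols] for Ai in Ais])

# each additional corrector step then costs only one m x m solve:
#   dy = np.linalg.solve(M, new_rhs)

# sanity check: at X = S = I the matrix reduces to the Gram matrix
# trace(A_i A_j) of the data, which is symmetric positive semidefinite
rng = np.random.default_rng(0)
Ais = [(lambda Z: (Z + Z.T) / 2)(rng.standard_normal((3, 3))) for _ in range(4)]
Mmat = schur_matrix(Ais, np.eye(3), np.eye(3))
```

Forming the m matrices S^{−1}A_jX dominates the cost, which is the O(mn³) + O(m²n²) count discussed in subsection 6.6.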
In our tests we discovered that almost always (X_{k+1,1}, y_{k+1,1}, S_{k+1,1}) ∈ N(t_{k+1,1}, β), and we seldom resorted to the use of a second corrector step.
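The divergence test of Step 7 is a one-liner; a minimal sketch (the function name is ours):

```python
import numpy as np

def step7_detects_infeasibility(X, S, rho, theta):
    # stop when 3 X•S < rho * theta * (||X||_F + ||S||_F)
    gap = 3.0 * np.trace(X @ S)
    bound = rho * theta * (np.linalg.norm(X, 'fro') + np.linalg.norm(S, 'fro'))
    return gap < bound

# at the start, X = S = rho*I and theta = 1, so the test cannot fire:
# 3 X•S = 3 n rho^2 while the right hand side is only 2 sqrt(n) rho^2
rho, n = 10.0, 4
fired = step7_detects_infeasibility(rho * np.eye(n), rho * np.eye(n), rho, 1.0)
```

As θ_k shrinks along feasible problems the right hand side decays, so the test only fires when the duality gap stalls relative to θ_k.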

We have implemented the algorithm using MATLAB version 4.2, and all our test runs are on a SUN SPARC 20 platform.

6.4 Test Problems

1. SLP (randomly generated semidefinite programming problems):

    min  C • X
    s.t. A_i • X = b_i, i = 1, ..., m
         X ⪰ 0

For this problem, we randomly generate X ≻ 0, S ≻ 0 and then use them to generate A_i, b, and C. The generated problems have feasible pairs of duals.

2. MNORM (matrix norm minimization):

    min ||A_0 + x_1A_1 + ... + x_kA_k||₂

where each A_i is a p × q matrix. This problem appears in section 3.1 of [23]. For this problem, we randomly generate the matrices A_0, ..., A_k.

3. MCUT (max-cut problem):

    max  L • X
    s.t. diag(X) = (1/4)e
         X ⪰ 0

where

(a) L, X are n by n matrices,
(b) L = Diag(Ae) − A, where A is the weighted adjacency matrix and e is the vector of all ones.

The max-cut problem is to maximize the total weight of the edges cut by a partition of the nodes of an edge-weighted undirected graph. This formulation is a relaxation of the max-cut problem; the details can be found in section 3.1 of [7]. Here, we randomly generate A and restrict the problem to a connected graph.

4. ETP (educational testing problem):

    max  Σ_{i=1}^n d_i
    s.t. A − diag(d) ⪰ 0
         d ≥ 0

where A is symmetric positive definite. This is a statistics problem which appears in section 2 of [23]. We randomly generate a symmetric positive definite matrix A.

5. MCN (minimizing the condition number of a symmetric positive-definite matrix):

    min  r
    s.t. ν > 0, νI ⪯ M(x) ⪯ νrI

where M(x) = M_0 + Σ_{i=1}^m x_iM_i and, for each i, M_i is an n × n symmetric matrix. This problem appears on page 38 of [3]. It can be seen that if M_0 ≻ 0, the resulting semidefinite formulation has dual pairs with feasible interior solutions. Here we randomly generate M_0 ≻ 0 and M_1, ..., M_m.

6. LTI (linear time-invariant systems):

    min  r
    s.t. −A^TP − PA ⪰ 0
         I ⪯ P ⪯ rI

where A and P are n × n matrices. The problem we test here is a special linear time-invariant system in control theory. This problem appears on page 65 of [3]. Note that the semidefinite formulation (1) and (2) of this problem has m = O(n²) > n. This makes it different from the five other test problems we have considered here.
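The generation scheme for problem 1 (draw the interior points X ≻ 0 and S ≻ 0 first, then choose the data consistent with them so that both duals are feasible) can be sketched as follows; the function name and the particular distributions are our own choices:

```python
import numpy as np

def random_slp(n, m, seed=0):
    rng = np.random.default_rng(seed)
    def sym(M):
        return (M + M.T) / 2
    def pos_def():
        M = rng.standard_normal((n, n))
        return M @ M.T + n * np.eye(n)     # comfortably positive definite
    X, S = pos_def(), pos_def()            # target primal/dual interior points
    Ais = [sym(rng.standard_normal((n, n))) for _ in range(m)]
    y = rng.standard_normal(m)
    b = np.array([np.trace(Ai @ X) for Ai in Ais])   # makes X primal feasible
    C = sum(yi * Ai for yi, Ai in zip(y, Ais)) + S   # makes (y, S) dual feasible
    return Ais, b, C, X, y, S
```

By construction A_i • X = b_i and Σ_i y_iA_i + S = C with X, S ≻ 0, so the generated pair of duals has interior feasible solutions.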

In summary, MNORM, MCUT and ETP are feasible problems. We guarantee feasibility of SLP and MCN by generating data appropriately. However, for LTI, guaranteeing feasibility is difficult; thus 11 of the 25 generated problems turned out to be infeasible.

6.5 Numerical Results

1. In Table 1 the stopping criterion for SDPA and INPC (INfeasible Predictor Corrector algorithm, a code implementing the algorithm presented in this section) is:

    (C • X_k − b^Ty_k)/(1 + |b^Ty_k|) ≤ 10^{−6}.

For SP, the stopping criterion is

    C • X_k − b^Ty_k ≤ 10^{−6} C • X_k, if C • X_k > 0, b^Ty_k > 0, or
    C • X_k − b^Ty_k ≤ −10^{−6} b^Ty_k, if C • X_k < 0, b^Ty_k < 0.

The row of each table labeled "Original size" describes the problem size by the parameters of the original problem described in the previous subsection, and the row labeled "Semidefinite size" reports the parameters m, n of the resulting semidefinite formulation (1) and (2).

2. From Table 1 we note that the solver SP takes the fewest iterations for the problems SLP, MNORM, MCUT and ETP, and the number of iterations taken by INPC is close to this number. For MCN and LTI the solver INPC takes the fewest iterations. We believe that this is so because a phase 1 is not needed in infeasible start methods like INPC.

3. In Table 2, we use a relative error of 10^{−8} to attain higher accuracy on these same problems. As in Table 1, INPC takes the fewest iterations on the three problems for which SP needs phase 1 iterations. We note that INPC takes fewer extra iterations than SP and SDPA to increase the accuracy. This suggests that INPC exhibits good local convergence in the final iterations of the method. As an example of this, consider the

following output showing iterations 12 through 16 of the algorithm:

    k    C•X_k − b^Ty_k    l    α_l           √(Σ_i(A_i•X_k − b_i)²)    ||Σ_i y_i^kA_i + S_k − C||_F
    12   0.00889525        1    0.79691863    0.00543080                0.00085869
    13   0.00076522        1    0.91367733    0.00046880                0.00007412
    14   0.00001792        1    0.97655447    0.00001099                0.00000174
    15   0.00000022        1    0.98780850    0.00000013                0.00000002
    16   0.00000001        1    0.97607478    0.00000001                0.00000000

We note that, for this output, we move along the predictor direction up to 0.99 of the step to the boundary. To obtain a higher asymptotic convergence rate, we would need to gradually increase this fraction to 1.

4. In both tables, the average l is very close to 1, and we observed that it was never greater than 2 for any of the 75 problems we solved.

6.6 Some Observations

1. When m ≈ n, O(m²n²) + O(mn³) = O(n⁴) multiplications are needed to calculate AF_k^{−1}G_kA^T. In this case, the computational bottleneck is not the inversion of the matrix AF_k^{−1}G_kA^T, which requires O(m³) multiplications, but the number of multiplications required to generate it. Sometimes we encountered numerical difficulties while solving for the predictor and corrector directions in Steps 1 and 4 of the algorithm. During Step 4 we solve the following system of linear equations:

    Σ_{j=1}^m A_i • (S̄_k^{−1}A_jX̄_k)Δȳ_j = −A_i • S̄_k^{−1}(t̄_kI − S̄_kX̄_k), i = 1, ..., m
    ΔS̄_k = −Σ_{i=1}^m Δȳ_iA_i
    ΔX̄_k^T = S̄_k^{−1}(t̄_kI − S̄_kX̄_k − ΔS̄_kX̄_k).

The computed directions ΔX̄_k, Δȳ_k, ΔS̄_k acceptably satisfied the second equality above, but the error in A_i • ΔX̄_k, i = 1, ..., m was sometimes unacceptably large. We

TABLE 1. Relative error = 10^{-6}.

                     SLP     MNORM   MCUT    ETP     MCN     LTI
  Original size      m = 30  k = 30  n = 50  n = 25  m = 20  n = 10
                     n = 40  p = 20                  n = 40  q = 20
  Semidefinite size  m = 30  m = 31  m = 50  m = 25  m = 22  m = 56
                     n = 40  n = 40  n = 50  n = 50  n = 80  n = 30
  Problems tested    10      10      10      10      10      25(1)
  SP (phase 1)       1       -       -       -       1.2     5.36
  SP (phase 2)       16.1    -       -       -       16.1    15.07
  SP (total)         17.1    12.1    11.9    20.9    17.3    20.44
  SDPA               19.7    23      21.5    27.7    25.4    24.93
  INPC               17.9    15.1    14.7    24.2    15.9    17.36
  Average l          1.02    1.07    1.01    1.08    1.03    1.02

1. 11 of the 25 problems were discovered to be infeasible by all solvers; this column reports the iterations on only the 14 feasible problems.
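The stopping criteria used to produce Table 1 can be sketched in a few lines. This is an illustrative reconstruction of the tests described in the text, not the actual solver code; the function names, and the convention that `cx` stands for C • X_k and `by` for b^T y_k, are assumptions:

```python
def sdpa_inpc_converged(cx, by, tol=1e-6):
    """Relative duality-gap test described for SDPA and INPC:
    (C . X_k - b^T y_k) / (1 + |b^T y_k|) < tol."""
    return (cx - by) / (1.0 + abs(by)) < tol

def sp_converged(cx, by, tol=1e-6):
    """Test described for SP: the gap is measured relative to
    whichever objective value has a safe sign."""
    if cx > 0 and by > 0:
        return cx - by < tol * cx
    if cx < 0 and by < 0:
        return cx - by < -tol * by
    return False  # neither sign condition applies; keep iterating
```

Scaling the gap by 1 + |b^T y_k| keeps the test meaningful both when the optimal value is near zero and when it is large.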

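The Step 4 linear system discussed in the observations of Section 6.6 can be illustrated on a tiny instance: form the m × m matrix with entries A_i • (S_k^{-1} A_j X_k), solve for Δy, and back-substitute for ΔS and ΔX. The sketch below (m = 2, with 2 × 2 symmetric matrices and made-up data) also checks the property A_i • ΔX = 0, whose loss in floating point is the numerical difficulty reported in the text; all helper names are illustrative assumptions:

```python
def matmul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def frob(A, B):
    """Frobenius inner product A . B = trace(A^T B)."""
    return sum(A[i][j] * B[i][j] for i in range(len(A)) for j in range(len(A)))

def msub(A, B):
    return [[A[i][j] - B[i][j] for j in range(len(A))] for i in range(len(A))]

def inv2(S):
    """Inverse of a 2x2 matrix (enough for this illustration)."""
    d = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [[S[1][1] / d, -S[0][1] / d], [-S[1][0] / d, S[0][0] / d]]

def solve2(M, r):
    """Solve a 2x2 linear system by Cramer's rule."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - r[1] * M[0][1]) / d,
            (M[0][0] * r[1] - M[1][0] * r[0]) / d]

# Made-up data: m = 2 symmetric constraint matrices, S positive definite.
A = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]
S = [[2.0, 0.0], [0.0, 3.0]]
X = [[1.0, 0.0], [0.0, 1.0]]
t = 1.0
Sinv = inv2(S)

# Residual matrix t*I - S*X that drives the step.
R = msub([[t, 0.0], [0.0, t]], matmul(S, X))

# M[i][j] = A_i . (S^{-1} A_j X);  rhs_i = -A_i . S^{-1}(t I - S X).
M = [[frob(A[i], matmul(matmul(Sinv, A[j]), X)) for j in range(2)]
     for i in range(2)]
rhs = [-frob(A[i], matmul(Sinv, R)) for i in range(2)]

dy = solve2(M, rhs)
dS = [[-(dy[0] * A[0][i][j] + dy[1] * A[1][i][j]) for j in range(2)]
      for i in range(2)]
dX = matmul(Sinv, msub(R, matmul(dS, X)))

# In exact arithmetic A_i . dX = 0 for all i; losing this property in
# floating point is the effect reported in the text.
residuals = [frob(A[i], dX) for i in range(2)]
```

On this well-conditioned toy instance the residuals are at rounding level; on large, ill-conditioned instances they can grow, which is what was observed.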
TABLE 2. Relative error = 10^{-8}.

                     SLP     MNORM   MCUT    ETP     MCN     LTI
  Original size      m = 30  k = 30  n = 50  n = 25  m = 20  n = 10
                     n = 40  p = 20                  n = 40  q = 20
  Semidefinite size  m = 30  m = 31  m = 50  m = 25  m = 22  m = 56
                     n = 40  n = 40  n = 50  n = 50  n = 80  n = 30
  Problems tested    10      10      10      10      10      25(1)
  SP (phase 1)       1       -       -       -       1.2     5.36
  SP (phase 2)       19.5    -       -       -       19.8    18.07
  SP (total)         20.5    15.4    15.2    25.3    21      23.43
  SDPA               22.1    25.7    24.6    200(2)  28.2    28
  INPC               19.1    16.4    16.2    26.1    17.1    18.78
  SP diff            3.4     3.3     3.3     4.4     3.7     2.99
  SDPA diff          2.4     2.7     3.1     -       2.8     3.07
  INPC diff          1.2     1.3     1.5     1.9     1.2     1.42
  Average l          1.02    1.06    1.01    1.15    1.02    1.04

1. As in Table 1, 11 of the 25 problems are infeasible; this column reports the iterations on only the 14 feasible problems.
2. SDPA exceeded its maximal iteration number of 200.
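The two evaluation orders for A_i • (S_k^{-1} A_j X_k) discussed in the observations rest on the identity A_i • (S^{-1} A_j X) = (S^{-1} A_i) • (A_j X), which holds whenever A_i and S are symmetric; the two orders agree in exact arithmetic, and their floating-point discrepancy is the error effect that was observed. A minimal check on made-up symmetric data (all names and values are illustrative assumptions):

```python
def matmul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def frob(A, B):
    """Frobenius inner product A . B = trace(A^T B)."""
    return sum(A[i][j] * B[i][j] for i in range(len(A)) for j in range(len(A)))

def inv2(S):
    """Inverse of a 2x2 matrix (enough for this illustration)."""
    d = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [[S[1][1] / d, -S[0][1] / d], [-S[1][0] / d, S[0][0] / d]]

# Made-up symmetric data: S positive definite, A1 and A2 symmetric.
S  = [[2.0, 0.5], [0.5, 3.0]]
A1 = [[1.0, 1.0], [1.0, 2.0]]
A2 = [[2.0, 0.0], [0.0, 1.0]]
X  = [[1.0, 0.5], [0.5, 1.0]]
Sinv = inv2(S)

# Order 1: B_j = (S^{-1} A_j) X first, then A_i . B_j.
v1 = frob(A1, matmul(matmul(Sinv, A2), X))
# Order 2: S^{-1} A_i and A_j X first, then (S^{-1} A_i) . (A_j X).
v2 = frob(matmul(Sinv, A1), matmul(A2, X))
# v1 and v2 agree up to rounding; on large ill-conditioned problems the
# rounding difference between the two orders can become significant.
```

Both quantities equal trace(A_1 S^{-1} A_2 X), using the symmetry of A_1 and S; only the rounding behavior differs.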

noticed that sometimes calculating A_i • (S_k^{-1} A_j X_k), i, j = 1, ..., m, by first computing B_j = (S_k^{-1} A_j) X_k and then A_i • B_j gave a larger error than first computing S_k^{-1} A_i and A_j X_k and then (S_k^{-1} A_i) • (A_j X_k).

2. Different values of ρ gave different results for INPC. In Table 2 the average number of INPC iterations for MNORM was reduced from 16.4 to 11 when using ρ = max(||A_1||, ..., ||A_m||, ||b||, ||C||). For the tests reported here we chose the large value ρ = 1000 max(||A_1||, ..., ||A_m||, ||b||, ||C||) so as to correctly detect infeasibility. For feasible problems like MNORM, MCUT and ETP a smaller ρ can be used, and we expect INPC to then solve the problems in fewer iterations.

3. For some feasible problems, like ETP, an initial feasible solution is readily available, but the path following strategy may fail to converge unless the given solution is "close" to the central trajectory. The ability to exploit such a solution may be important for infeasible start algorithms like INPC, which start with a large ρ and the initial solution X_1 = ρI, y_1 = 0, S_1 = ρI, which lies on the central trajectory; this can result in more iterations than might be needed if the given initial solution could be used effectively.

4. In Step 7 we use β X_k • S_k < ρ θ_k (||X_k||_F + ||S_k||_F) to detect infeasibility. For LTI, 11 of the 25 problem instances were detected as infeasible by all three codes. SDPA took an average of 8.64 iterations, and INPC an average of 9.09 iterations, to detect the infeasibility of these 11 problems.

7 Conclusions and Future Work

In this paper we have presented an infeasible start predictor corrector path following method for semidefinite programming, which is a generalization of the method of Mizuno [14]. We do not assume that the dual pair of semidefinite linear programs have feasible solutions, and show that, in time that is a low order polynomial in n, the data, and |log(ε)|, the method either detects that there is no optimal solution with duality gap zero in the set

  {(X, S) : 0 ⪯ X ⪯ ρI, 0 ⪯ S ⪯ ρI},

or finds an approximate solution X(ε), y(ε), S(ε) such that X(ε) • S(ε) < nε, (Σ_{i=1}^m (A_i • X(ε) − b_i)^2)^{1/2} < δ(A, b, C) ε and ||Σ_{i=1}^m y_i(ε) A_i + S(ε) − C||_F < δ(A, b, C) ε, where δ(A, b, C) is at most an exponential function of the data A_i, i = 1, ..., m, b and C. We have also implemented an algorithm based on this method in a MATLAB environment, and show the potential of this approach by comparing the number of iterations it takes to solve six different types of problems with the number of iterations taken by the codes SP [24] and SDPA [6].

In the future, we will combine the MATLAB code with C subroutines and generate a front end like [4]. We will also consider special methods to exploit the sparse and block diagonal structure of the many matrices of these problems, and investigate how such methods can be used effectively when solving the linear systems encountered during the predictor and corrector steps. We will also investigate the potential use of the iterative methods presented in Saigal [22].

References

[1] F. Alizadeh, "Interior point methods in semidefinite programming with applications to combinatorial optimization", SIAM Journal on Optimization, 5(1995), 13-51.

[2] R. Bellman, Introduction to Matrix Analysis, Society for Industrial and Applied Mathematics, Philadelphia, 1995.

[3] S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, Society for Industrial and Applied Mathematics, Philadelphia, 1994.

[4] S. Boyd and S. Wu, "sdpsol: a parser/solver for semidefinite programs with matrix structure: user's guide", Technical Report, Information Systems Laboratory, Stanford University, November 1995.

[5] R. M. Freund, "Complexity of an algorithm for finding an approximate solution of a semi-definite program with no regularity assumption", Technical Report Number OR

302-94, Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA, December 1994.

[6] K. Fujisawa and M. Kojima, "SDPA (SemiDefinite Programming Algorithm): user's manual", Research Reports on Information Sciences, Ser. B: Operations Research B-308, Dept. of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, December 1995.

[7] C. Helmberg, F. Rendl, R. J. Vanderbei and H. Wolkowicz, "An interior-point method for semidefinite programming", Manuscript, Program in Statistics and Operations Research, Princeton University, 1994. To appear in SIAM Journal on Optimization.

[8] F. Jarre, "An interior-point method for minimizing the maximum eigenvalue of a linear combination of matrices", SIAM Journal on Control and Optimization, 31(1993), 1360-1377.

[9] C. R. Johnson, B. Kroschel and H. Wolkowicz, "An interior-point method for approximate positive semidefinite completions", CORR Report 95-21, Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada, July 1995.

[10] M. Kojima, S. Shindoh and S. Hara, "Interior-point methods for the monotone linear complementarity problem in symmetric matrices", Research Reports on Information Sciences, Ser. B: Operations Research B-282, Dept. of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, April 1994 (revised October 1995). To appear in SIAM Journal on Optimization.

[11] C. J. Lin and R. Saigal, "A predictor corrector method for semi-definite linear programming", Technical Report, Dept. of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109-2117, October 1995 (revised December 1995).

[12] I. J. Lustig, R. E. Marsten and D. F. Shanno, "Interior point methods for linear programming: computational state of the art", ORSA Journal on Computing, 6(1994), 1-14.

[13] S. Mehrotra, "On the implementation of a primal-dual interior point method", SIAM Journal on Optimization, 2(1992), 575-601.

[14] S. Mizuno, "Polynomiality of infeasible-interior-point algorithms for linear programming", Mathematical Programming, 67(1994), 109-120.

[15] S. Mizuno, M. J. Todd and Y. Ye, "On adaptive-step primal-dual interior-point algorithms for linear programming", Mathematics of Operations Research, 18(1993), 964-981.

[16] R. D. C. Monteiro, "Primal-dual path following algorithms for semidefinite programming", Manuscript, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, September 1995.

[17] Y. E. Nesterov and A. Nemirovskii, Interior Point Polynomial Methods in Convex Programming: Theory and Applications, Society for Industrial and Applied Mathematics, Philadelphia, 1994.

[18] Y. E. Nesterov and M. J. Todd, "Primal-dual interior-point methods for self-scaled cones", Technical Report 1125, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York 14853-3801, 1995.

[19] F. A. Potra and R. Sheng, "A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming", Technical Report, Department of Mathematics, University of Iowa, Iowa City, IA 52242, October 1995.

[20] R. Saigal, "Tracing homotopy paths in polynomial time", talk given at the ORSA/TIMS Joint National Meeting, Washington, D.C.; Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109, 1988.

[21] R. Saigal, Linear Programming: A Modern Integrated Analysis, Kluwer Academic Publishers, Norwell, MA, 1995.

[22] R. Saigal, "An infinitely summable series implementation of interior point methods", Mathematics of Operations Research, 20(1995), 570-616.

[23] L. Vandenberghe and S. Boyd, "Semidefinite programming", Technical Report, Information Systems Laboratory, Stanford University, 1994. To appear in SIAM Review.

[24] L. Vandenberghe and S. Boyd, "SP: software for semidefinite programming, user's guide", Technical Report, Information Systems Laboratory, Stanford University, November 1994.

[25] R. J. Vanderbei, "LOQO: an interior point code for quadratic programming", Technical Report, Program in Statistics and Operations Research, Princeton University, Princeton, NJ 08544, February 1995.

[26] Y. Zhang, "On extending primal-dual interior point algorithms from linear programming to semidefinite programming", Technical Report TR95-22, Dept. of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, MD 21228-5398, October 1995.

[27] Y. Zhang, "User's guide to LIPSOL: linear programming interior point solvers v0.3", Technical Report, Dept. of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, MD 21228-5398, July 1995.