A SIMPLE PROOF OF PRIMAL AFFINE SCALING METHOD

Romesh Saigal
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, MI 48109-2117

Technical Report 92-60
December 1992
Revised January 1993

A Simple Proof of Primal Affine Scaling Method

Romesh SAIGAL
Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109-2117, USA

December 17, 1992 (revised January 5, 1993)

Abstract

In this paper we present a simpler proof of the result of Tsuchiya and Muramatsu on the convergence of the primal affine scaling method. We show that the primal sequence generated by the method converges to the interior of the optimum face and the dual sequence to the analytic center of the optimal dual face, when the step size implemented in the procedure is bounded by 2/3. This paper is in the spirit of Monteiro, Tsuchiya and Wang [10].

Key words: Linear Programming, affine scaling methods, interior point methods.

Abbreviated title: Proof of affine scaling method

1 Introduction

We consider here the linear programming problem:

    minimize   c^T x
    subject to Ax = b        (LP)
               x >= 0

where A is an m x n matrix and b and c are appropriate vectors. We also assume that:

Assumption 1 The linear program has an interior solution.

Assumption 2 The objective function is not constant on the feasible region.

Assumption 3 The matrix A has rank m.

On the basis of the global convergence analysis by the use of a local potential function, developed by Tsuchiya in the papers [13] and [14], and the analysis of the reduction of the potential function developed by Dikin [7], Dikin [6] proved the global convergence of the long step affine scaling method when the step size is restricted by 1/2. Stimulated by [7], but independently, Tsuchiya and Muramatsu [15] proved that the method converges to the optimum solution if the step size is restricted by 2/3; and that, in this case, the primal sequence converges to the relative interior of the optimal face of the primal and the dual sequence to the analytic center of the optimal face of the dual. Hall and Vanderbei [8] present an example which shows that 2/3 is tight, in the sense that with a larger step size the dual sequence may not converge. This note is written in the hope that it will bring out the basic idea of the proof of [15]; it is written in the spirit of Monteiro, Tsuchiya and Wang [10], and presents some of their arguments.

2 Primal Affine Scaling Method

We refer the reader to Barnes [2] and Vanderbei, Meketon and Freedman [16] for more on this method. The iterative scheme of the method is the following, and is known as the long step version:

Step 0 Let x^0 be an interior point solution, let 0 < alpha < 1, and let k = 0.

Step 1 Tentative Solution to the Dual:

    y^k = (A X_k^2 A^T)^{-1} A X_k^2 c.

Step 2 Tentative Dual Slack:

    s^k = c - A^T y^k.

If s^k <= 0 then STOP. The solution is unbounded.

Step 3 Min-Ratio Test:

    theta_k = min { ||X_k s^k|| / (x_j^k s_j^k) : x_j^k s_j^k > 0 } = ||X_k s^k|| / phi(X_k s^k),

where phi(x) = max_j x_j. If theta_k = 1 set alpha = 1.

Step 4 Next Interior Point:

    x^{k+1} = x^k - alpha theta_k (X_k^2 s^k) / ||X_k s^k||.

Step 5 Iterative Step: If x_j^{k+1} = 0 for some j, then STOP; x^{k+1} is an optimal solution. Otherwise set k = k + 1 and go to Step 1.

It is shown in Vanderbei and Lagarias [17] that if the algorithm is finite, i.e., stops in Step 5, then an optimal solution has been found.

2.1 Ellipsoidal Approximating Problem

Barnes [2] has shown that this algorithm is generated by solving the following ellipsoidal approximating problem: at the kth iterate, given the interior point x^k > 0, the direction v^k = X_k^2 s^k to the next iterate x^{k+1}, used in Step 4, is obtained by scaling the solution of:

    maximize   c^T v
    subject to Av = 0        (EAP)
               ||X_k^{-1} v||^2 <= 1.
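Steps 1 through 4 above can be sketched numerically. The following is a minimal illustration (not part of the paper); the function name and the use of dense NumPy linear algebra are assumptions made for the sake of a small, self-contained example.

```python
import numpy as np

def affine_scaling_step(A, c, x, alpha=0.5):
    """One iteration of the long-step primal affine scaling method (Steps 1-4).

    A : (m, n) constraint matrix with full row rank,
    x : current interior point (x > 0),
    alpha : step size in (0, 1).
    Returns (x_next, y, s), the next iterate and the tentative dual pair.
    """
    X = np.diag(x)
    # Step 1: tentative dual solution  y = (A X^2 A^T)^{-1} A X^2 c
    y = np.linalg.solve(A @ X @ X @ A.T, A @ X @ X @ c)
    # Step 2: tentative dual slack  s = c - A^T y
    s = c - A.T @ y
    if np.all(s <= 0):
        raise ValueError("problem is unbounded")
    Xs = x * s                              # the vector X_k s^k
    # Step 3: min-ratio test  theta = ||X s|| / phi(X s),  phi(x) = max_j x_j
    theta = np.linalg.norm(Xs) / np.max(Xs)
    # Step 4: next interior point  x+ = x - alpha * theta * X^2 s / ||X s||
    x_next = x - alpha * theta * (x * Xs) / np.linalg.norm(Xs)
    return x_next, y, s
```

On a hypothetical problem such as minimizing x_1 + x_2 over x_1 + x_2 + x_3 = 1, x >= 0, repeated calls from the interior point (1/3, 1/3, 1/3) drive c^T x^k monotonically toward the optimal value 0 while every iterate stays strictly positive, since the min-ratio test caps the largest multiplicative decrease of any coordinate at alpha < 1.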

2.2 Least Squares Problem

The dual estimate y^k, calculated in Step 1, can be readily seen to be generated by the following "least squares" problem:

    y^k = argmin_y ||X_k s||  subject to  s = c - A^T y,        (LSP)

and we note that this estimate, given the primal feasible solution x^k, is chosen to minimize the complementary slackness violation without the nonnegativity constraint on the dual slack s.

3 Properties of the Sequences

Let {x^k}, {y^k}, {s^k} and {v^k} = {X_k^2 s^k} be the sequences generated by the primal affine scaling algorithm. If these sequences are finite, then the last point of the first sequence solves the linear program, or the problem has an unbounded solution. Thus we henceforth assume that they are not finite. We now establish some important properties of these sequences.

Theorem 1 x^k > 0 and theta_k >= 1 for all k. Also,

    c^T v^k = c^T X_k^2 s^k = ||X_k s^k||^2 = ||X_k^{-1} v^k||^2

for all k.

Proof The first part readily follows. To see the second part, first observe that, from the definition, c^T v^k = c^T X_k^2 s^k, and

    c^T X_k^2 s^k = c^T X_k (I - X_k A^T (A X_k^2 A^T)^{-1} A X_k) X_k c = ||P_k X_k c||^2

where P_k = I - X_k A^T (A X_k^2 A^T)^{-1} A X_k is the projection matrix onto the null space N(A X_k) of the matrix A X_k, with P_k = P_k^2 = P_k^T. Now P_k X_k c = X_k (c - A^T (A X_k^2 A^T)^{-1} A X_k^2 c) = X_k s^k and we are done.
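The identities of Theorem 1 can be checked numerically. The sketch below (an illustration, not part of the paper; the random data are an assumption) builds the projection P_k and verifies P_k X_k c = X_k s^k, c^T v^k = ||X_k s^k||^2 = ||X_k^{-1} v^k||^2, and the feasibility of v^k for (EAP):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 5
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)
x = rng.uniform(0.5, 2.0, n)          # an arbitrary interior point
X = np.diag(x)

# Tentative dual and slack (Steps 1 and 2)
y = np.linalg.solve(A @ X @ X @ A.T, A @ X @ X @ c)
s = c - A.T @ y
v = X @ X @ s                          # v^k = X_k^2 s^k

# P_k = I - X A^T (A X^2 A^T)^{-1} A X projects onto the null space of A X
P = np.eye(n) - X @ A.T @ np.linalg.solve(A @ X @ X @ A.T, A @ X)

Xs = X @ s
assert np.allclose(P @ (X @ c), Xs)        # P_k X_k c = X_k s^k
assert np.isclose(c @ v, Xs @ Xs)          # c^T v^k = ||X_k s^k||^2
assert np.isclose(Xs @ Xs, np.linalg.norm(np.linalg.inv(X) @ v) ** 2)
assert np.allclose(A @ v, 0)               # v^k is feasible for (EAP)
```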

Theorem 2 {c^T x^k} is a strictly decreasing sequence with

    c^T x^{k+1} = c^T x^k - alpha theta_k ||X_k s^k||.

Either this sequence diverges to -infinity, or it is bounded, and thus converges, with ||X_k s^k|| -> 0 as k -> infinity and infinity > sum_{k=1}^infinity c^T (x^k - x^{k+1}).

Proof It is readily seen, from Step 4 and Theorem 1, that

    c^T x^{k+1} = c^T x^k - alpha theta_k (c^T X_k^2 s^k) / ||X_k s^k|| = c^T x^k - alpha theta_k ||X_k s^k||.

From Assumption 2, ||X_k s^k|| != 0, and so {c^T x^k} is strictly decreasing. Hence it either diverges to -infinity or is bounded. In the latter case, since every bounded monotone sequence converges and alpha theta_k >= alpha > 0, ||X_k s^k|| -> 0. Now

    sum_{k=1}^infinity c^T (x^k - x^{k+1}) = c^T x^1 - c*,

where lim_{k->infinity} c^T x^k = c*, and we are done.

We now prove some preliminary results, and then establish the convergence of the sequences {x^k}, {y^k} and {s^k}.

4 Preliminary Results

In this section we prove some preliminary results and two theorems related to the ellipsoidal approximating problem discussed in Section 2.1.

Lemma 3 Let r_j and s_j > 0 for each j = 1, ..., l be arbitrary real numbers. Then

    |sum_{j=1}^l r_j| / sum_{j=1}^l s_j <= max_j |r_j|/s_j.

Proof Let |r_k|/s_k = max_j |r_j|/s_j. Then for each j = 1, ..., l, s_j |r_k| >= s_k |r_j|. Adding all these l inequalities we have

    |r_k| sum_{j=1}^l s_j >= s_k sum_{j=1}^l |r_j| >= s_k |sum_{j=1}^l r_j|

and we are done.
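Lemma 3 is a mediant-style inequality, and a quick randomized check (an illustration only; the random data are an assumption) confirms it for arbitrary numerators and positive denominators:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    l = rng.integers(1, 8)
    r = rng.standard_normal(l)            # arbitrary reals r_j
    s = rng.uniform(0.1, 5.0, l)          # positive denominators s_j
    lhs = abs(r.sum()) / s.sum()
    rhs = np.max(np.abs(r) / s)
    assert lhs <= rhs + 1e-12             # Lemma 3
```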

Theorem 4 For every x > 0, ||(A X^2 A^T)^{-1} A X^2 p|| <= q(A, p), where q(A, p) > 0 depends only on A and p.

Proof By the Cauchy-Binet theorem and Cramer's rule we can write the ith component as

    ((A X^2 A^T)^{-1} A X^2 p)_i = [ sum_J (x_{j_1} ... x_{j_m})^2 det(A_J) det(A_J^{(i)}) ] / [ sum_J (x_{j_1} ... x_{j_m})^2 (det(A_J))^2 ]

where A^{(i)} is the matrix obtained by replacing the ith row of A by p^T, and the sums are over all possible J, the ordered sets of m elements 1 <= j_1 < ... < j_m <= n. Since a term in the denominator is zero only when the corresponding term in the numerator is zero, from Lemma 3 we can use

    q(A, p) = max { |det(A_J^{(i)})| / |det(A_J)| : det(A_J) != 0 }

and we are done.

Given a k x l matrix Q of rank k, a k-vector c != 0 and an l x l diagonal matrix D with positive diagonal entries, consider the problem:

    maximize   c^T x
    subject to x^T Q D Q^T x <= 1.        (1)

The following result establishes a relation between the objective function value and the solution vector of this problem.

Theorem 5 Let x̄ solve problem (1). There exists a constant rho(Q, c) > 0 such that

    |c^T x̄| >= rho(Q, c) ||x̄||.

Proof Let c, B_1, ..., B_{k-1} be a basis for R^k with B = (B_1, ..., B_{k-1}) an orthonormal basis for the orthogonal complement of the one dimensional subspace spanned by c. Thus c^T B = 0 and B^T B = I. Expressing the columns of Q in this basis we obtain

    Q = c a^T + B R

where a is an l-vector and R is a (k-1) x l matrix. Since Q has full rank k, so does (a, R). Also, let

    x = u c + B v

where u is a scalar. Rewriting problem (1) in the variables u and v we get

    c^T x = u c^T c,
    x^T Q D Q^T x = (u c^T c)^2 a^T D a + 2 (u c^T c) a^T D R^T v + v^T R D R^T v,

and if w = (c^T c) u, problem (1) is equivalent to

    maximize   w
    subject to w^2 a^T D a + 2 w a^T D R^T v + v^T R D R^T v <= 1.

Let lambda be the multiplier on the constraint. Then the optimality conditions for this problem can be stated as follows:

    1 + lambda (2 w a^T D a + 2 a^T D R^T v) = 0        (2)
    w R D a + R D R^T v = 0        (3)

and we note that Equation (3) has the solution

    v = -w (R D R^T)^{-1} R D a.

Let w̄ be the solution obtained by substituting this v into Equation (2). Then, after computing x̄, we note that

    x̄ / (c^T x̄) = x̄ / w̄ = c / (c^T c) - B (R D R^T)^{-1} R D a.

Thus, noting that ||B|| <= 1, we get

    ||x̄|| / |c^T x̄| <= 1/||c|| + ||(R D R^T)^{-1} R D a||

and if, from Theorem 4, q(R, a) bounds the second term of the right hand side above, then, since both R and a are functions of Q and c alone, we get our result with rho(Q, c) = 1/(1/||c|| + q(R, a)).

We now establish an important corollary to the preceding theorem. Given an x > 0, consider the ellipsoidal approximating problem (EAP) of Section 2.1.

Corollary 6 There exists a constant rho(A, c) > 0 such that for every x > 0, if v̄ solves the above problem, then

    c^T v̄ >= rho(A, c) ||v̄||.

Proof Let the vectors {Q_1, ..., Q_{n-m}} be an orthonormal basis for the null space N(A) of A; i.e., for Q = (Q_1, ..., Q_{n-m}), Q^T Q = I. Since, in the above problem, v lies in N(A), we note that c^T v = c_n^T v, where c_n is the projection of c onto the null space of A. Define c̄ and u such that

    c_n = Q c̄,    v = Q u.

Expressing the problem in the variables u we obtain the equivalent problem:

    maximize   c̄^T u
    subject to u^T Q^T X^{-2} Q u <= 1.

Let u^k solve this problem. From Theorem 5, there is a constant rho(Q, c̄) > 0 such that

    c^T v^k = c̄^T u^k >= rho(Q, c̄) ||u^k|| = rho(Q, c̄) ||v^k||

since ||v^k||^2 = (u^k)^T Q^T Q u^k = ||u^k||^2. Since Q and c̄ are functions of A and c alone, we are done.

Let R be an arbitrary r x l matrix with rank <= r, and let b lie in the range R(R) of R. Also let z in R^l be any vector. Consider the following problem:

    minimize   ||z - x||^2
    subject to Rx = b.

We can then prove:

Theorem 7 There is a constant q(R) > 0 such that for every z, the solution z̄ of the above problem satisfies

    ||z - z̄|| <= q(R) ||Rz - b||.
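In the full-row-rank case, the solution of the problem in Theorem 7 has the closed form z̄ = z - R^T (R R^T)^{-1} (Rz - b), from which the bound follows with q(R) = ||R^T (R R^T)^{-1}||. A numerical check (an illustration only; the random data are an assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.standard_normal((3, 6))           # full row rank (generically)
b = R @ rng.standard_normal(6)            # guarantees b lies in the range of R
z = rng.standard_normal(6)

# Closed-form projection of z onto {x : Rx = b}
pinv_term = R.T @ np.linalg.solve(R @ R.T, np.eye(3))
z_bar = z - pinv_term @ (R @ z - b)

q_R = np.linalg.norm(pinv_term, 2)        # the constant q(R) of Theorem 7
assert np.allclose(R @ z_bar, b)          # z_bar is feasible
assert np.linalg.norm(z - z_bar) <= q_R * np.linalg.norm(R @ z - b) + 1e-10
```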

Proof Let R̄ be an r̄ x l submatrix of the rows of R which has full row rank. Since b lies in R(R), {x : Rx = b} = {x : R̄x = b̄} for the corresponding subvector b̄ of b. Then it is readily seen that z̄ solves the problem if and only if for some y,

    2(z - z̄) - R̄^T y = 0,
    R̄ z̄ = b̄,

so that z - z̄ = R̄^T (R̄ R̄^T)^{-1} (R̄ z - b̄), and the result is readily seen by setting q(R) = ||R̄^T (R̄ R̄^T)^{-1}||.

The following simple lemma on the natural logarithmic function is important.

Lemma 8 Let w in R^q and 0 <= lambda < 1 be such that w_j <= lambda for each j. Then

    sum_j log(1 - w_j) >= -e^T w - ||w||^2 / (2(1 - lambda)).

Proof The result follows as a consequence of the following two facts. When a <= 0,

    log(1 - a) >= -a - a^2/2 >= -a - a^2/(2(1 - lambda)).

When 0 <= a <= lambda,

    log(1 - a) = -a - a^2/2 - a^3/3 - ... >= -a - (a^2/2)(1 + a + a^2 + ...) = -a - a^2/(2(1 - a)) >= -a - a^2/(2(1 - lambda)).

The result now readily follows from the above two facts.

5 Convergence of the Primal Sequence

We assume here that the sequence {c^T x^k} is bounded. We now prove that the primal sequence {x^k} converges. We prove this in the next theorem.

Theorem 9 Let Assumptions 1 through 3 hold, let {c^T x^k} be bounded, and let the sequence {x^k} be generated by the primal affine scaling algorithm. Then x^k -> x*.

Proof Consider the approximating problem of Section 2.1. Note that

    alpha theta_k v̄^k = x^k - x^{k+1},

where v̄^k = v^k / ||X_k s^k|| is the solution to the ellipsoidal approximating problem (EAP) of Section 2.1. We will now show that sum_k alpha theta_k ||v̄^k|| < infinity, and thus that the sequence {x^k} converges. From Theorem 2 and Corollary 6,

    infinity > c^T x^1 - c* = sum_{k=1}^infinity c^T (x^k - x^{k+1}) = sum_{k=1}^infinity alpha theta_k c^T v̄^k >= rho(A, c) sum_{k=1}^infinity alpha theta_k ||v̄^k||

and thus {x^k} converges, say to x*, and we are done.

6 More on the Sequences

We have seen in the previous section that the primal sequence converges. Let lim_{k->infinity} x^k = x*,

    N = {j : x_j* = 0},    B = {j : x_j* > 0},

and let F be the face of P = {x : Ax = b, x >= 0} such that x* lies in the relative interior of F; i.e., F is the smallest face of P containing x*. We now prove an important property of the sequence {x^k}.

Theorem 10 There is a delta > 0 such that for each k = 1, 2, ...

    (c^T x^k - c*) / ||x^k - x*|| >= delta    and    (c^T x^k - c*) / (e^T x_N^k) >= delta.

Proof Using the results of Theorem 2 and Corollary 6, with q = |N| the number of elements in N and M = 1/rho(A, c),

    c^T x^k - c* = sum_{j=0}^infinity c^T (x^{k+j} - x^{k+j+1})
                >= (1/M) sum_{j=0}^infinity ||x^{k+j} - x^{k+j+1}||
                >= (1/M) || sum_{j=0}^infinity (x^{k+j} - x^{k+j+1}) ||
                 = (1/M) ||x^k - x*||.

Thus (c^T x^k - c*) / ||x^k - x*|| >= 1/M. Also, since e^T x_N^k <= sqrt(q) ||x_N^k|| <= sqrt(q) ||x^k - x*||, we also have (c^T x^k - c*) / (e^T x_N^k) >= 1/(sqrt(q) M), and hence our result with delta = min{1/M, 1/(sqrt(q) M)}.

The following is a well known result on the convergence rate of the sequence {c^T x^k - c*}.

Theorem 11 There exists an L >= 1 such that for all k >= L

    c^T x^{k+1} - c* <= (1 - alpha/sqrt(n)) (c^T x^k - c*).

Proof Consider the problem (EAP) of Section 2.1. We note that v̄^k = X_k^2 s^k / ||X_k s^k|| solves this problem and that (x^k - x*) / ||X_k^{-1}(x^k - x*)|| is feasible for it. Thus, from this fact and Theorem 1,

    ||X_k s^k|| = c^T v̄^k >= (c^T x^k - c*) / ||X_k^{-1}(x^k - x*)||.        (4)

Now ||X_k^{-1}(x^k - x*)||^2 = ||e_B - X_{B,k}^{-1} x_B*||^2 + ||e_N||^2, and since X_{B,k}^{-1} x_B* -> e_B, there is an L >= 1 such that for all k >= L, ||e_B - X_{B,k}^{-1} x_B*||^2 <= n - q. Thus, for each k >= L, ||X_k^{-1}(x^k - x*)|| <= sqrt(n). Now, as

    c^T x^{k+1} - c* = c^T x^k - c* - alpha theta_k ||X_k s^k||

and theta_k >= 1, the result follows from equation (4).

Now define

    D = {(y, s) : A_B^T y = c_B, A_N^T y + s_N = c_N, s_B = 0}

to be the set of all tentative dual solutions that are complementary to each solution in F. We can readily establish the following:

Lemma 12 D != emptyset.

Proof From Step 1 and Theorem 4, the sequences {y^k} and {s^k} are bounded, and thus have cluster points y* and s* respectively. From Theorem 2, since X_k s^k -> 0 and x_B^k -> x_B* > 0, we have s_B* = 0. The result now follows since A^T y* + s* = c.

Consider the sequence {u^k} defined by

    u^k = X_k s^k / (c^T x^k - c*).

We now state an important result about this sequence. Its proof is given in the appendix.

Theorem 13 The sequence {u^k} has the following properties:

1. It is bounded.

2. There is an L >= 1 such that for each k >= L,

(a) ||u^k||^2 = ||u_N^k||^2 + gamma_k and sum_{k=L}^infinity |gamma_k| < infinity,
(b) e^T u_N^k = 1 + delta_k and sum_{k=L}^infinity |delta_k| < infinity,
(c) phi(u_N^k) >= 1/(2q),
(d) phi(u^k) = phi(u_N^k).

7 Local Potential Function

To show the convergence of the dual sequence it is necessary to control the step size; in particular, the step size alpha <= 2/3. This result is achieved by the use of a local version of the potential function, which we now introduce. For any x > 0 with c^T x - c* > 0 and N a subset of {1, ..., n} with q = |N|, define

    F_N(x) = q log(c^T x - c*) - sum_{j in N} log x_j.

For the sequence {x^k} we can prove:

Theorem 14 There exists an L >= 1 such that for all k >= L

    F_N(x^{k+1}) - F_N(x^k) = q log(1 - tau_k ||w_N^k||^2 - tau_k (2 delta_k/q + gamma_k)) - sum_{j in N} log(1 - tau_k w_j^k),

where w_N^k = u_N^k - e/q, sigma_k = alpha/phi(u^k), tau_k = q sigma_k/(q - sigma_k), and delta_k, gamma_k are as in Theorem 13, so that sum_{k=L}^infinity |delta_k| < infinity and sum_{k=L}^infinity |gamma_k| < infinity.

Proof Since c^T x^{k+1} - c* = c^T x^k - c* - alpha theta_k ||X_k s^k||, from Theorem 1 and simple algebra we see that

    (c^T x^{k+1} - c*) / (c^T x^k - c*) = 1 - (alpha/phi(u^k)) ||u^k||^2 = 1 - sigma_k ||u^k||^2        (5)

and, for each j in N,

    x_j^{k+1} / x_j^k = 1 - sigma_k u_j^k.

From Theorem 13 part 2(a), there is an L >= 1 such that for all k >= L,

    (c^T x^{k+1} - c*) / (c^T x^k - c*) = 1 - sigma_k ||u_N^k||^2 - sigma_k gamma_k.

Now let k >= L, and note that from Theorem 13 part 2(b),

    ||w_N^k||^2 = ||u_N^k||^2 - (2/q) e^T u_N^k + 1/q = ||u_N^k||^2 - 1/q - 2 delta_k/q,

and we have our result by observing that

    1 - sigma_k ||u_N^k||^2 - sigma_k gamma_k = (1 - sigma_k/q)(1 - tau_k ||w_N^k||^2 - tau_k (2 delta_k/q + gamma_k))

and, for each j in N,

    1 - sigma_k u_j^k = (1 - sigma_k/q)(1 - tau_k w_j^k),

so that the factor q log(1 - sigma_k/q) cancels.

8 Convergence of the Dual Sequence

We are now ready to prove that with a step size alpha <= 2/3 the dual sequence converges to the analytic center of the optimal face of the dual polytope. Throughout this section we will assume that the sequence {c^T x^k} is bounded, and we define the analytic center of the optimal dual face as the solution (y*, s*) of the following problem:

    maximize   sum_{j in N} log s_j
    subject to (y, s) in D        (ACP)
               s_N > 0.

The solution of (ACP) must satisfy the following K.K.T. conditions:

    A_B x_B + A_N x_N = 0
    S_N^{-1} e + x_N = 0
    A_B^T y = c_B
    A_N^T y + s_N = c_N
    s_N > 0, s_B = 0.

We now establish a lemma.

Lemma 15 If the analytic center defined by (ACP) exists, it is unique.

Proof The uniqueness follows from the strict concavity of the log function.

We are now ready to prove our main theorem.

Theorem 16 Let the sequence generated by the affine scaling algorithm be infinite, and let the step size alpha <= 2/3. Then

1. x^k -> x*,
2. y^k -> y*,
3. s^k -> s*, and s_N* > 0.

The limits are optimal solutions of their respective problems, and satisfy the strict complementarity condition. In addition, the dual sequence converges to the analytic center of the optimal dual face, and the primal sequence to the relative interior of the optimal primal face.

Proof From Lemma 8, Theorems 13 and 14 and the fact that for any a < 1, log(1 - a) <= -a, we obtain an L such that for each k >= L

    F_N(x^{k+1}) - F_N(x^k) <= -q tau_k (||w_N^k||^2 + 2 delta_k/q + gamma_k) + tau_k e^T w_N^k + tau_k^2 ||w_N^k||^2 / (2(1 - lambda_k))
        = tau_k ||w_N^k||^2 (-q + tau_k/(2(1 - lambda_k))) - tau_k (q gamma_k + delta_k)
        = tau_k ||w_N^k||^2 (-q + alpha/(2(1 - alpha) phi(u_N^k))) - tau_k (q gamma_k + delta_k),

where lambda_k = 1 - (1 - alpha)/(1 - sigma_k/q) satisfies 0 <= tau_k w_j^k <= lambda_k < 1 for each j in N and tau_k/(1 - lambda_k) = sigma_k/(1 - alpha) = alpha/((1 - alpha) phi(u_N^k)).

From Theorem 13 part 2(c) it can be shown that the sequence {tau_k} is bounded, and bounded away from zero. Now, as ||w_N^k|| = ||u_N^k - e/q||, we note that

    phi(u_N^k) >= (1 + ||w_N^k||) / q.

Thus we have

    F_N(x^{k+1}) - F_N(x^k) <= tau_k ||w_N^k||^2 (-q + (alpha q)/(2(1 - alpha)(1 + ||w_N^k||))) - tau_k (q gamma_k + delta_k).        (6)

For each alpha <= 2/3, alpha/(2(1 - alpha)) <= 1. Hence

    F_N(x^{k+1}) - F_N(x^k) <= -(q tau_k ||w_N^k||^3)/(1 + ||w_N^k||) - tau_k (q gamma_k + delta_k).

From Theorem 10, F_N(x^k) is bounded below, so that sum_{k=L}^infinity (F_N(x^{k+1}) - F_N(x^k)) > -infinity; and, from Theorem 13 parts 2(a) and (b), 0 <= sum_{k=L}^infinity tau_k (q |gamma_k| + |delta_k|) < infinity. Thus we must have ||w_N^k|| -> 0, and we get

    lim_{k->infinity} u_N^k = e/q.        (7)

Now consider the sequences {y^k}, {s^k}, {q x_j^k / (c^T x^k - c^T x*)} for each j in N and {q (x_j^k - x_j*) / (c^T x^k - c^T x*)} for each j in B. Let s^k -> s̄ on some subsequence K' of {1, 2, ...}. From Theorem 4, Steps 1 and 2, and Theorem 10, these sequences are all bounded. Thus, on some common subsequence K contained in K',

    y^k -> ȳ,    s^k -> s̄,
    q x_j^k / (c^T x^k - c^T x*) -> a_j for each j in N,
    q (x_j^k - x_j*) / (c^T x^k - c^T x*) -> b_j for each j in B.

In view of (7), a_j > 0 for each j in N. Also, since s_j^k = u_j^k (c^T x^k - c*) / x_j^k, we note from (7) that lim_{k in K} s_j^k = a_j^{-1} for each j in N. Now, from A_N x_N^k + A_B (x_B^k - x_B*) = 0, we see that A_N a + A_B b = 0, and thus x_N = -a, x_B = -b, y = ȳ, s_N = s̄_N (with s̄_j = a_j^{-1}) and s_B = 0 solve the K.K.T. conditions for the analytic center problem (ACP). Thus s^k converges to the analytic center on each convergent subsequence, and so parts (2) and (3) of the

theorem are established by Lemma 15. We finish the proof by observing that x* is primal feasible, (y*, s*) is dual feasible, and the pair satisfies the complementary slackness condition (by Theorem 2); thus they are optimal solutions of the respective problems. Strict complementarity holds since s_N* > 0.

The following is a sharp bound on the convergence rate of {c^T x^k - c*}.

Theorem 17 Let alpha <= 2/3. Then

    lim_{k->infinity} (c^T x^{k+1} - c*) / (c^T x^k - c*) = 1 - alpha.

Proof Follows readily from Theorem 13 parts 2(a), (c) and (d), equation (5) and equation (7).

We now prove another theorem, which shows convergence to optimality for a slightly larger step size.

Theorem 18 Let 2/3 < alpha < (2q + 2)/(3q + 2). Then there exists a subsequence K of {1, 2, ...} such that

1. lim_{k in K} x^k = x*,
2. lim_{k in K} y^k = y*,
3. lim_{k in K} s^k = s*, and s_N* > 0.

Thus x* and (y*, s*) are optimum solutions of the dual pair. Also, they satisfy the strict complementarity condition.

Proof As a result of Theorem 10, F_N(x^k) is bounded below. Thus either F_N(x^{k+1}) - F_N(x^k) >= 0 for each k in some subsequence K', or F_N(x^{k+1}) - F_N(x^k) -> 0. In either case, from Equation (6), on some subsequence K'

    lim inf_{k in K'} (-q + (alpha q)/(2(1 - alpha)(1 + ||w_N^k||))) >= 0.

Let K contained in K' be such that as k in K goes to infinity, x^k -> x*, y^k -> y*, s^k -> s*, u_N^k -> u* and ||w_N^k|| -> theta-hat >= 0. Then

    -q + (alpha q)/(2(1 - alpha)(1 + theta-hat)) >= 0,

that is, theta-hat <= alpha/(2(1 - alpha)) - 1 = (3 alpha - 2)/(2(1 - alpha)). Letting alpha < (2q + 2)/(3q + 2), it is readily confirmed that theta-hat < 1/q. Since ||u* - e/q|| = theta-hat < 1/q and e^T u* = 1, we can conclude that u* > 0, and we are done.
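Theorem 17's asymptotic rate can be observed numerically. The sketch below (an illustration, not part of the paper; the tiny LP and the helper name `step` are assumptions) runs the long-step iteration on a small degenerate problem and watches the gap-reduction ratio approach 1 - alpha:

```python
import numpy as np

def step(A, c, x, alpha):
    """One long-step affine scaling iteration (Steps 1-4 of Section 2)."""
    X2 = x * x
    y = np.linalg.solve(A @ (X2[:, None] * A.T), A @ (X2 * c))
    s = c - A.T @ y
    Xs = x * s
    theta = np.linalg.norm(Xs) / np.max(Xs)
    return x - alpha * theta * x * Xs / np.linalg.norm(Xs)

# min x1 + x2  s.t.  x1 + x2 + x3 = 1, x >= 0; optimal value c* = 0, N = {1, 2}
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 1.0, 0.0])
alpha, x = 0.4, np.full(3, 1.0 / 3.0)
gaps = []
for _ in range(80):
    gaps.append(c @ x)            # current gap c^T x^k - c*
    x = step(A, c, x, alpha)
ratios = [g1 / g0 for g0, g1 in zip(gaps, gaps[1:])]
# Theorem 17: the gap reduction ratio tends to 1 - alpha
assert abs(ratios[-1] - (1 - alpha)) < 1e-2
```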

9 Appendix

We now prove Theorem 13.

Proof To see part (1), note that from Theorem 1, for any fixed (y, s) in D (D is nonempty by Lemma 12),

    ||X_k s^k||^2 = c^T v^k = (A^T y + s)^T v^k = s_N^T v_N^k = (X_{N,k} s_N)^T (X_{N,k} s_N^k) <= phi(x_N^k) ||s_N|| ||X_k s^k||.

Thus ||X_k s^k|| <= phi(x_N^k) ||s_N||, and from Theorem 10 we obtain ||u^k|| <= M ||s_N|| for M = 1/delta.

To see 2(a), set gamma_k = ||u_B^k||^2, so that ||u^k||^2 = ||u_N^k||^2 + gamma_k. From Corollary 6 and the above, for any (y, s) in D,

    ||v^k|| <= (1/rho(A, c)) c^T v^k = (1/rho(A, c)) s_N^T v_N^k <= (1/rho(A, c)) phi(x_N^k) ||s_N|| ||X_{N,k} s_N^k||.

Also,

    ||u_B^k|| = ||X_{B,k}^{-1} v_B^k|| / (c^T x^k - c*) <= ||X_{B,k}^{-1}|| ||v^k|| / (c^T x^k - c*) <= ||X_{B,k}^{-1}|| (1/rho(A, c)) phi(x_N^k) ||s_N|| ||u_N^k||.

Since x_B^k -> x_B* > 0 and {u_N^k} is bounded, there are a constant M̄ > 0 and an L >= 1 such that for every k >= L,

    gamma_k = ||u_B^k||^2 <= M̄^2 phi(x_N^k)^2.

The sum sum_{k=L}^infinity gamma_k is then finite, since from Theorem 10 and Theorem 11 there is an L >= 1 such that for all k >= L,

    phi(x_N^k) <= ||x^k - x*|| <= M (c^T x^k - c*) <= M (1 - alpha/sqrt(n))^{k-L} (c^T x^L - c*).

To see part 2(b), let (ȳ^k, s̄^k) solve the following problem:

    minimize ||s^k - s||  subject to  (y, s) in D.

From Theorem 7, there is a constant N̄ > 0 such that ||s^k - s̄^k|| <= N̄ ||s_B^k||. Since {s^k} is bounded, so is {s̄^k}; let ||s^k|| <= Ñ and ||s̄^k|| <= Ñ for each k. Also, since s^k solves the least squares problem (LSP) and (ȳ^k, s̄^k) is feasible for it,

    ||X_{B,k} s_B^k||^2 + ||X_{N,k} s_N^k||^2 <= ||X_{N,k} s̄_N^k||^2.

Now,

    ||X_{B,k} s_B^k||^2 <= ||X_{N,k} s̄_N^k||^2 - ||X_{N,k} s_N^k||^2 = (X_{N,k}(s̄_N^k - s_N^k))^T (X_{N,k}(s̄_N^k + s_N^k))

    <= phi(x_N^k)^2 ||s̄_N^k - s_N^k|| ||s̄_N^k + s_N^k|| <= 2 Ñ N̄ phi(x_N^k)^2 ||s_B^k||.

Since ||s_B^k|| <= ||X_{B,k}^{-1}|| ||X_{B,k} s_B^k||, this gives ||X_{B,k} s_B^k|| <= 2 Ñ N̄ ||X_{B,k}^{-1}|| phi(x_N^k)^2, and thus

    ||s^k - s̄^k|| <= N̄ ||s_B^k|| <= 2 Ñ N̄^2 ||X_{B,k}^{-1}||^2 phi(x_N^k)^2.

Now

    c^T x^k - c* = c^T (x^k - x*) = (s̄^k + A^T ȳ^k)^T (x^k - x*) = (s̄_N^k)^T x_N^k.

Thus, from Theorem 10,

    |delta_k| = |e^T u_N^k - 1| = |(s_N^k - s̄_N^k)^T x_N^k| / (c^T x^k - c*) <= ||x_N^k|| ||s_N^k - s̄_N^k|| / (c^T x^k - c*) <= 2 M Ñ N̄^2 ||X_{B,k}^{-1}||^2 phi(x_N^k)^2,

and there exists an L >= 1 such that for all k >= L, |delta_k| < 1. The required sum can be shown to be finite by the same argument as for the sum in 2(a).

To see part 2(c), using the result of part 2(b), there exists an L* >= L such that |delta_k| <= 1/2 for all k >= L*, and thus

    phi(u_N^k) >= e^T u_N^k / q = (1 + delta_k)/q >= 1/(2q).

Also, since {u^k} is bounded, phi(u_N^k) <= ||u_N^k|| is bounded above.

To see part 2(d), using the results of 2(a) and 2(c), we note that there is an L' >= L* such that for all k >= L',

    phi(u_B^k) <= ||u_B^k|| <= M̄ phi(x_N^k) < 1/(2q) <= phi(u_N^k),

and we are done, with L = max{L*, L'}, since phi(u^k) = max{phi(u_B^k), phi(u_N^k)}.

References

[1] I. Adler, N. K. Karmarkar, M. G. C. Resende and G. Veiga, "An implementation of Karmarkar's algorithm for linear programming," Mathematical Programming 44 (1989) 297-335.

[2] E. R. Barnes, "A variation on Karmarkar's algorithm for solving linear programming problems," Mathematical Programming 36 (1986) 174-182.

[3] J. R. Birge and C. Rosa, "A simplified proof of the convergence of affine scaling," Technical Report, Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, Michigan, 1991.

[4] I. I. Dikin, "Iterative solution of problems of linear and quadratic programming," Soviet Mathematics Doklady 8 (1967) 674-675.

[5] I. I. Dikin, "On the convergence of an iterative process," Upravlyaemye Sistemi 12 (1974) 54-60 (in Russian).

[6] I. I. Dikin, "The convergence of dual variables," Technical Report, Siberian Energy Institute, Irkutsk, Russia, December 1991.

[7] I. I. Dikin, "Determination of an interior point of one system of linear inequalities," Kibernetika and System Analysis 1 (1992).

[8] L. A. Hall and R. J. Vanderbei, "Two-thirds is sharp for affine scaling," Technical Report SOR-92/9, Department of Civil Engineering and Operations Research, Princeton University, Princeton, New Jersey 08544, September 1992.

[9] N. Karmarkar, "A new polynomial-time algorithm for linear programming," Combinatorica 4 (1984) 373-395.

[10] R. D. C. Monteiro, T. Tsuchiya and Y. Wang, "A simplified global convergence proof of the affine scaling algorithm," Technical Report, Department of Systems and Industrial Engineering, University of Arizona, Tucson, AZ 85721, USA, October 1992.

[11] M. Muramatsu and T. Tsuchiya, "A convergence analysis of a long-step variant of the projective scaling algorithm," Research Memorandum 454, The Institute of Statistical Mathematics, 4-6-7 Minami-Azabu, Minato-ku, Tokyo 106, Japan, November 1992.

[12] P. Tseng and Z. Q. Luo, "On the convergence of the affine-scaling algorithm," Mathematical Programming 56 (1992) 301-319.

[13] T. Tsuchiya, "Global convergence of the affine-scaling methods for degenerate linear programming problems," Mathematical Programming 52 (1991) 377-404.

[14] T. Tsuchiya, "Global convergence property of the affine scaling method for primal degenerate linear programming problems," Mathematics of Operations Research 17 (1992) 527-557.

[15] T. Tsuchiya and M. Muramatsu, "Global convergence of a long-step affine scaling algorithm for degenerate linear programming problems," Research Memorandum 423, The Institute of Statistical Mathematics, 4-6-7 Minami-Azabu, Minato-ku, Tokyo 106, Japan, January 1992 (revised September 1992).

[16] R. J. Vanderbei, M. J. Meketon and B. A. Freedman, "A modification of Karmarkar's linear programming algorithm," Algorithmica 1 (1986) 395-407.

[17] R. J. Vanderbei and J. C. Lagarias, "I. I. Dikin's convergence result for the affine-scaling algorithm," in J. C. Lagarias and M. J. Todd, eds., Mathematical Developments Arising from Linear Programming: Proceedings of a Joint Summer Research Conference held at Bowdoin College, Brunswick, Maine, USA, June/July 1988, volume 114 of Contemporary Mathematics, pages 109-119, American Mathematical Society, Providence, Rhode Island, USA, 1990.