HIGHER ORDER SEPARATION THEOREMS AND A DESCENT ALGORITHM FOR P-MATRIX LCPS

Katta G. Murty
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, Michigan 48109, USA

January 1984
Technical Report 84-5

ABSTRACT

We discuss some generalizations of the strict separation property in the linear complementarity problem associated with a P-matrix. Using them, we develop a principal pivoting descent method for this problem, in which a distance measure decreases strictly in each step.

KEYWORDS: Linear Complementarity Problem, Complementary Cones, Separation Property, P-matrix, Nearest Point Problem, Principal Pivoting Method.

1. INTRODUCTION

We consider the LCP (q,M) of order n, which is to find w = (w_1,...,w_n)^T, z = (z_1,...,z_n)^T satisfying

    w − Mz = q
    w ≥ 0, z ≥ 0                                   (1)
    w^T z = 0

where M is a given P-matrix of order n. If q ≥ 0, (w = q, z = 0) is a solution of (1), so we assume q ≱ 0. A square matrix is said to be a P-matrix if all its principal subdeterminants are strictly positive. In this case it is well known that (1) has a unique solution for each q ∈ R^n [8,11]. When M is positive definite (PD) or positive semi-definite (PSD), polynomially bounded ellipsoid algorithms have been developed for solving the LCP (q,M) [2,6]. These algorithms use the convexity of the set E = {z : z^T(Mz + q) ≤ 0} and the fact that any point in the intersection of E with the polyhedron {z : z ≥ 0, Mz + q ≥ 0} is on the boundary of E and leads to the solution of the LCP (q,M). When M is a P-matrix but not even PSD, the set {z : z^T(Mz + q) ≤ 0} is not in general convex, and because of this the ellipsoid algorithm cannot be used to solve these problems. Even though the LCP (q,M) is known to have a unique solution when M is a P-matrix, no polynomially bounded algorithm for finding it is known (except when M is also PSD). The general LCP is known to be strongly NP-complete [1], but the complexity of the special case when M is a P-matrix is not known. Developing a polynomially bounded algorithm for the LCP (q,M) when M is a general P-matrix is one of the important unresolved theoretical problems in linear complementarity. Such an algorithm would seem to require some type of convexification.

Here we discuss an algorithm for the LCP (q,M) when M is a P-matrix that requires the solution of a finite sequence of LCPs, each associated with a positive definite matrix. So each problem in the sequence can be solved in polynomial time using the ellipsoid algorithms of [2,6]. As we move from one problem in the sequence to the next, a distance measure decreases strictly, and the method terminates with the solution of the LCP (q,M) when this distance measure reaches zero. Viewed in terms of this distance measure, this algorithm is therefore a descent algorithm. Its computational complexity is not known. It is being studied, and it is hoped that a polynomially bounded variant of this approach will be found.
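To make these definitions concrete, the following sketch (not part of the original report; helper names are ours) checks the P-matrix property by enumerating all principal subdeterminants and verifies a candidate solution of (1). The enumeration is exponential in n and is meant only as an illustration.

```python
# A minimal sketch: brute-force P-matrix test and verification of an LCP solution.
# Helper names are ours; the principal-minor enumeration is exponential in n.
from itertools import combinations
import numpy as np

def is_P_matrix(M, tol=1e-12):
    """True if every principal subdeterminant of M is strictly positive."""
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if np.linalg.det(M[np.ix_(idx, idx)]) <= tol:
                return False
    return True

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check w = Mz + q >= 0, z >= 0 and w'z = 0 for the LCP (q, M) in (1)."""
    w = M @ z + q
    return (w >= -tol).all() and (z >= -tol).all() and abs(w @ z) <= tol

# Example with a P-matrix that is not positive semidefinite.
M = np.array([[1.0, -3.0],
              [0.0,  1.0]])
q = np.array([-1.0, 2.0])
print(is_P_matrix(M))                                # True
print(is_lcp_solution(M, q, np.array([1.0, 0.0])))   # z = (1, 0) gives w = (0, 2)
```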

2. HIGHER ORDER SEPARATION THEOREMS

For any matrix D, we denote its jth column by D.j. Let I denote the unit matrix of order n. For j = 1 to n, {I.j, −M.j} is the jth complementary pair of column vectors in (1), and each vector in this pair is the complement of the other. A complementary set of vectors is an ordered set {A.1, ..., A.n} where A.j ∈ {I.j, −M.j} for j = 1 to n, and the matrix A with column vectors A.1, ..., A.n is called a complementary matrix for (1). Since M is a P-matrix, all the complementary sets of vectors are linearly independent, and every complementary matrix is a complementary basis for (1) [8,9]. Associated with a complementary matrix A is the complementary cone Pos(A) = {Ay : y ≥ 0}. Since M is a P-matrix, the class of complementary cones partitions R^n [8,11]. The complementary basis corresponding to a complementary cone containing q is a complementary feasible basis for (1), and the basic solution of (1) corresponding to that basis is the solution of the LCP (q,M) [8,9].
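As an illustration of these definitions (again with our own helper names, and a brute-force enumeration that is practical only for small n), the sketch below builds the 2^n complementary matrices and locates a complementary cone containing q; reading off the corresponding basic solution solves the LCP.

```python
# A minimal sketch: enumerate complementary matrices, choosing I.j or -M.j for each j,
# and test whether q lies in the complementary cone Pos(A) = {Ay : y >= 0}.
from itertools import product
import numpy as np

def complementary_matrix(M, choice):
    """choice[j] = 0 picks I.j, choice[j] = 1 picks -M.j."""
    n = M.shape[0]
    I = np.eye(n)
    return np.column_stack([I[:, j] if choice[j] == 0 else -M[:, j] for j in range(n)])

def cone_containing_q(M, q, tol=1e-9):
    """Return the first (choice, y) found with A y = q and y >= 0."""
    n = M.shape[0]
    for choice in product((0, 1), repeat=n):
        A = complementary_matrix(M, choice)
        y = np.linalg.solve(A, q)    # every complementary matrix is nonsingular here
        if (y >= -tol).all():
            return choice, y
    return None

M = np.array([[1.0, -3.0],
              [0.0,  1.0]])
q = np.array([-1.0, 2.0])
print(cone_containing_q(M, q))   # choice (1, 0): columns -M.1 and I.2, so z_1 = y_1, w_2 = y_2
```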

A subcomplementary set associated with a nonempty proper subset J ⊂ {1,...,n} is a set {A.j : j ∈ J} with A.j ∈ {I.j, −M.j} for each j ∈ J. Given any two sets E, F, let E\F denote the set of all elements of E not in F. If {A.j : j ∈ J} is a subcomplementary set associated with J, and B.j is the complement of A.j, then the subcomplementary set {B.j : j ∈ J} is known as the complement of the subcomplementary set {A.j : j ∈ J}.

The well known strict separation property associated with a P-matrix M states that if {A.i : i ∈ {1,...,j−1,j+1,...,n}} is a subcomplementary set, then the unique hyperplane in R^n containing the origin 0 and all the vectors in this subcomplementary set strictly separates the left out complementary pair of vectors {I.j, −M.j} [8]. Here we will present a generalization of this result for a subcomplementary set associated with any nonempty proper subset of {1,...,n}.
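The following sketch is a numerical check, under our own naming conventions, of the classical strict separation property just stated: the normal of the hyperplane through 0 and the n−1 chosen columns is obtained from a null space computation, and the two inner products with the left out pair should have strictly opposite signs.

```python
# A minimal sketch: verify the classical strict separation property for the
# complementary pair left out of a subcomplementary set.
import numpy as np

def separation_check(M, choice, j_out):
    """choice[j] in {0,1} picks I.j (0) or -M.j (1) for each j != j_out.
    Returns the signs of a'I.j_out and a'(-M.j_out), where a is normal to the
    hyperplane through 0 and the n-1 chosen columns; opposite signs mean strict separation."""
    n = M.shape[0]
    I = np.eye(n)
    cols = [I[:, j] if choice[j] == 0 else -M[:, j] for j in range(n) if j != j_out]
    # Normal vector a spans the 1-dimensional null space of the (n-1) x n matrix with rows cols.
    _, _, Vt = np.linalg.svd(np.array(cols))
    a = Vt[-1]
    return np.sign(a @ I[:, j_out]), np.sign(a @ (-M[:, j_out]))

M = np.array([[1.0, -3.0],
              [0.0,  1.0]])
# Leave out j = 0 and keep the column -M.2 (the entry of choice at j_out is ignored).
print(separation_check(M, choice=[0, 1], j_out=0))   # opposite signs, e.g. (1.0, -1.0)
```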

THEOREM 1: Let M be a P-matrix of order n and let J, J̄ be a partition of {1,...,n} with J, J̄ both nonempty. Let {A.j : j ∈ J}, {A.j : j ∈ J̄} be the corresponding partition of a complementary set of vectors. Let {B.j : j ∈ J̄} be the complement of the subcomplementary set {A.j : j ∈ J̄}. If H is a hyperplane in R^n satisfying

i) H contains the origin 0 and all the vectors in the subcomplementary set {A.j : j ∈ J},

ii) all the vectors in the subcomplementary set {A.j : j ∈ J̄} lie in one of the closed half-spaces, H⁺, defined by H,

then at least one of the vectors in {B.j : j ∈ J̄} lies strictly on the other side of H, in the other open half-space H⁻ defined by H.

PROOF: Consider the system

    w − Mz = 0                                   (2)

Perform principal pivot steps in (2) to transform the complementary set of vectors {A.j : j ∈ J ∪ J̄} into the set of unit vectors. This is a nonsingular linear transformation that preserves separation properties. If u_j denotes the variable in (2) associated with A.j and v_j denotes its complement, this transforms (2) into

    u − M̄v = 0                                   (3)

where M̄ is also a P-matrix, because it is a principal pivot transform of the P-matrix M [12]. Let M̄_J̄J̄ denote the principal submatrix of M̄ corresponding to the subset J̄. Let H̄ = {x : Σ_{j=1}^{n} a_j x_j = 0} be the transform of H. Since A.j is transformed into I.j, by (i) we have a_j = 0 for each j ∈ J, and by (ii) we have a_J̄ = (a_j : j ∈ J̄) ≥ 0. So a = (a_j) ≥ 0, and since H̄ is a hyperplane, a ≠ 0, that is, a_J̄ ≩ 0. (For a vector y = (y_j), y ≩ 0 means that each y_j is nonnegative and at least one y_j is strictly positive.) For j ∈ J̄, B.j is now transformed into −M̄.j, and (a^T(−M̄.j) : j ∈ J̄) = −a_J̄^T M̄_J̄J̄. Since M̄_J̄J̄ is itself a P-matrix and a_J̄ ≩ 0, by a theorem of D. Gale and H. Nikaido [4] at least one of the components of a_J̄^T M̄_J̄J̄ is strictly positive, that is, a^T(−M̄.j) < 0 for at least one j ∈ J̄. That is, at least one of the −M̄.j for j ∈ J̄ lies in the open half-space H̄⁻ = {x : Σ_{j=1}^{n} a_j x_j < 0}, which does not contain the unit vectors. In terms of the original space, this implies that at least one of the B.j, j ∈ J̄, is contained in the open half-space H⁻ defined by H that does not contain the complementary set of vectors {A.j : j ∈ J ∪ J̄}.
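The proof above rests on the principal pivot transform. The sketch below implements one common block formula for it (the operation studied in [12]); the function name is ours.

```python
# A minimal sketch of the principal pivot transform with respect to an index set alpha,
# written in one common sign convention.  For a P-matrix every principal submatrix is
# nonsingular, so the transform is always defined, and the result is again a P-matrix.
import numpy as np

def principal_pivot_transform(M, alpha):
    """Return the principal pivot transform of M with respect to the index set alpha."""
    n = M.shape[0]
    alpha = list(alpha)
    beta = [j for j in range(n) if j not in alpha]
    Maa, Mab = M[np.ix_(alpha, alpha)], M[np.ix_(alpha, beta)]
    Mba, Mbb = M[np.ix_(beta, alpha)], M[np.ix_(beta, beta)]
    Maa_inv = np.linalg.inv(Maa)
    P = np.empty_like(M, dtype=float)
    P[np.ix_(alpha, alpha)] = Maa_inv
    P[np.ix_(alpha, beta)] = -Maa_inv @ Mab
    P[np.ix_(beta, alpha)] = Mba @ Maa_inv
    P[np.ix_(beta, beta)] = Mbb - Mba @ Maa_inv @ Mab
    return P

M = np.array([[1.0, -3.0],
              [0.0,  1.0]])
print(principal_pivot_transform(M, alpha=[0]))   # [[1, 3], [0, 1]], again a P-matrix
```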

THEOREM 2: Let M be a P-matrix of order n, let J be a nonempty proper subset of {1,...,n}, and let {A.j : j ∈ J} be a subcomplementary set of vectors. Let H be a hyperplane in R^n that contains the origin 0 and all the vectors in the set {A.j : j ∈ J}. Then H strictly separates at least one of the left out complementary pairs of vectors {I.j, −M.j} for j ∈ J̄ = {1,...,n}\J.

PROOF: Choose the subcomplementary set {A.j : j ∈ J̄} arbitrarily and transform the system (2) into (3) as in the proof of Theorem 1. Using the notation in the proof of Theorem 1, suppose this transforms H into H̄ = {x : Σ_{j=1}^{n} a_j x_j = 0}. Since A.j is transformed into I.j and H contains A.j for j ∈ J, H̄ must contain I.j for j ∈ J, that is, a_j = 0 for all j ∈ J. Since H̄ is a hyperplane, we must have a ≠ 0, that is, a_J̄ = (a_j : j ∈ J̄) ≠ 0. Define M̄_J̄J̄ as in the proof of Theorem 1; it is a P-matrix as noted there. By the sign nonreversal theorem for P-matrices of D. Gale and H. Nikaido [4], if (y_j : j ∈ J̄) = a_J̄^T M̄_J̄J̄, then a_j y_j > 0 for at least one j ∈ J̄. Since a_j = 0 for j ∈ J, these facts imply that there exists at least one j ∈ J̄ for which a^T I.j and a^T(−M̄.j) have strictly opposite signs, that is, H̄ strictly separates the complementary pair of vectors {I.j, −M̄.j}. In terms of the original space, this implies that H strictly separates the complementary pair of vectors {I.j, −M.j} for that j ∈ J̄.
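Both proofs invoke the sign nonreversal property of P-matrices due to Gale and Nikaido [4]: a P-matrix reverses the sign of no nonzero vector. The sketch below, our own illustration, tests this numerically on random vectors.

```python
# A minimal sketch: the Gale-Nikaido sign nonreversal property [4].
# For a P-matrix M, no nonzero y satisfies y_i (My)_i <= 0 for every i.
import numpy as np

def reverses_sign(M, y, tol=1e-12):
    """True if y_i (My)_i <= 0 for every i, i.e. M reverses the sign of y."""
    return np.all(y * (M @ y) <= tol)

rng = np.random.default_rng(0)
M = np.array([[1.0, -3.0],
              [0.0,  1.0]])
# Random nonzero vectors: none should have its sign reversed by a P-matrix.
print(any(reverses_sign(M, rng.standard_normal(2)) for _ in range(10_000)))   # False
```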

3. THE ALGORITHM

Consider the LCP (1), where M is a P-matrix. The algorithm obtains a sequence of complementary bases A^0, A^1, ..., each one differing from the previous one in exactly one column vector. Because of this feature, the method belongs to the class of principal pivoting methods [3,5].

For d ∈ Γ = {I.1, ..., I.n, −M.1, ..., −M.n}, let V(d) = 0 ∈ R^n if d^T q ≤ 0, or d(d^T q)/||d||^2 if d^T q > 0. V(d) is the point nearest to q on the ray of d, {dδ : δ ≥ 0}. The point nearest to q in the complementary cone Pos(A) = Pos{A.1, ..., A.n} is 0 iff V(A.j) = 0 for each j = 1 to n. Since q is contained in a complementary cone, there exists a complementary cone Pos(A) such that V(A.j) ≠ 0 for at least one j. So there exists a d ∈ Γ for which V(d) ≠ 0. Find the d that minimizes ||V(d) − q|| among d ∈ Γ, and find any complementary basis A^0 that contains this d as a column vector. Use A^0 as the initial complementary basis in step 1 of the algorithm.
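The following is a sketch of the initialization just described, under the assumption that the basis is completed with unit columns once the best ray is found (the text allows any completion); helper names are ours.

```python
# A minimal sketch of the initialization: compute V(d) for the 2n candidate columns,
# pick the d minimizing ||V(d) - q||, and build a complementary basis containing it.
import numpy as np

def V(d, q):
    """Nearest point to q on the ray {d*delta : delta >= 0}."""
    t = d @ q
    return np.zeros_like(q) if t <= 0 else d * (t / (d @ d))

def initial_basis(M, q):
    """Return (A0, choice): a complementary basis containing the best single ray."""
    n = M.shape[0]
    I = np.eye(n)
    candidates = [(j, 0, I[:, j]) for j in range(n)] + [(j, 1, -M[:, j]) for j in range(n)]
    j_best, pick, _ = min(candidates, key=lambda c: np.linalg.norm(V(c[2], q) - q))
    choice = [0] * n            # complete the basis with unit columns, except at j_best
    choice[j_best] = pick
    A0 = np.column_stack([I[:, j] if choice[j] == 0 else -M[:, j] for j in range(n)])
    return A0, choice

M = np.array([[1.0, -3.0],
              [0.0,  1.0]])
q = np.array([-1.0, 2.0])
print(initial_basis(M, q))
```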

GENERAL STEP r + 1, FOR r ≥ 0: This step begins with the complementary basis A^r obtained at the end of the previous step (with A^0 if r = 0). Find the point nearest to q (in terms of the usual Euclidean distance) in the complementary cone Pos(A^r). As shown in [10], this nearest point problem can be posed as an LCP associated with a positive definite matrix, and can be solved in polynomial time by the ellipsoid methods discussed in [2,6]. If q ∈ Pos(A^r), this nearest point is q itself, A^r is a complementary feasible basis for (1), and the method terminates. Otherwise, let x^r be the point nearest to q in Pos(A^r). Let x^r = A^r a^r where a^r = (a^r_1, ..., a^r_n)^T ≥ 0. By the choice of A^0, x^r ≠ 0 for any r, and so a^r ≩ 0, that is, J_r = {j : a^r_j > 0} ≠ ∅. By the results in [7,10], x^r is the orthogonal projection of q onto the subspace {x : x = Σ_{j∈J_r} λ_j A^r.j, λ_j real for j ∈ J_r}. Define the distance in this step to be δ_r = ||x^r − q||. Let E_r be the ball {x : ||x − q|| ≤ δ_r} and let H_r be the tangent plane to E_r at its boundary point x^r, H_r = {x : (x − x^r)^T(x^r − q) = 0}. Clearly Pos{A^r.j : j ∈ J_r} ⊂ H_r. So H_r is a hyperplane containing the origin, and it separates Pos(A^r) from q. Let Δ_r = {j : j ∈ {1,...,n}\J_r, A^r.j and its complement are strictly separated by H_r}. By Theorems 1 and 2, Δ_r ≠ ∅. Select a j ∈ Δ_r and let A^{r+1} be the complementary basis whose columns are {A^r.1, ..., A^r.j−1, B^r.j, A^r.j+1, ..., A^r.n}, where B^r.j is the complement of A^r.j. By the results in [7,10], the point nearest to q in the complementary cone Pos(A^{r+1}) will be strictly closer to q than x^r. With A^{r+1}, go to the next step.

The distance measure δ_r decreases strictly in each step, and since there are only a finite number of complementary bases, the algorithm is clearly finite. One can get different variants of this algorithm by choosing j from Δ_r according to different rules. One can consider the least index rule, in which the j chosen from Δ_r is always the least; a cyclical rule like the least recently considered rule popular in implementations of the simplex algorithm; or some other rule. We can also consider a block principal pivoting method in which A^{r+1} is obtained from A^r by replacing each A^r.j, j ∈ Δ_r, by its complement in a block principal pivot step. The computational complexity of each of these variants is currently under investigation.
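The following sketch implements one general step under stated substitutions: the point nearest to q in Pos(A^r) is computed with a nonnegative least squares routine rather than the positive definite LCP formulation of [10], and the least index rule is used to choose j from Δ_r; all helper names are ours.

```python
# A minimal sketch of one general step of the descent algorithm described above.
# scipy's nonnegative least squares stands in for the nearest point computation of [10].
import numpy as np
from scipy.optimize import nnls

def general_step(M, q, choice, tol=1e-9):
    """One step: returns ('solved', z) if q is in Pos(A^r), else an updated choice vector."""
    n = M.shape[0]
    I = np.eye(n)
    A = np.column_stack([I[:, j] if choice[j] == 0 else -M[:, j] for j in range(n)])
    a, resid = nnls(A, q)                  # x^r = A a is the point nearest to q in Pos(A^r)
    if resid <= tol:                       # q lies in Pos(A^r): read off the solution of (1)
        z = np.where(np.array(choice) == 1, a, 0.0)
        return 'solved', z
    x = A @ a
    normal = x - q                         # H_r = {x' : (x' - x)'(x - q) = 0}; it contains 0
    delta = []                             # Delta_r: pairs strictly separated by H_r
    for j in range(n):
        if a[j] <= tol:                    # only j outside J_r
            s1 = normal @ (A[:, j] - x)
            s2 = normal @ ((I[:, j] if choice[j] == 1 else -M[:, j]) - x)
            if s1 * s2 < -tol:
                delta.append(j)
    j = min(delta)                         # least index rule
    new_choice = list(choice)
    new_choice[j] = 1 - new_choice[j]      # replace A^r.j by its complement B^r.j
    return 'pivot', new_choice

M = np.array([[1.0, -3.0],
              [0.0,  1.0]])
q = np.array([-1.0, 2.0])
print(general_step(M, q, choice=[0, 0]))   # swaps one column or reports the solution
```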

References

1. S. J. Chung, "A Note on the Complexity of LCP: The LCP is Strongly NP-complete", Technical Report 79-2, Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan (1979).

2. S. J. Chung and K. G. Murty, "Polynomially Bounded Ellipsoid Algorithms for Convex Quadratic Programming", in Nonlinear Programming 4, O. L. Mangasarian, R. R. Meyer and S. M. Robinson (editors), Academic Press (1981), pp. 439-485.

3. G. B. Dantzig and R. W. Cottle, "Positive (Semi-) Definite Matrices and Mathematical Programming", ORC 63-18, University of California, Berkeley, California (1963).

4. D. Gale and H. Nikaido, "The Jacobian Matrix and Global Univalence of Mappings", Mathematische Annalen, 159 (1965), pp. 81-93.

5. R. L. Graves, "A Principal Pivoting Simplex Algorithm for Linear and Quadratic Programming", Operations Research, 15, No. 3 (May-June 1967), pp. 482-494.

6. M. K. Kozlov, S. P. Tarasov and L. G. Khachiyan, "Polynomial Solvability of Convex Quadratic Programming", Soviet Mathematics Doklady, 20, No. 5 (1979).

7. K. G. Murty, "On the Linear Complementarity Problem", Operations Research Verfahren, 31 (1978), pp. 425-439.

8. K. G. Murty, "On the Number of Solutions to the Complementarity Problem and Spanning Properties of Complementary Cones", Linear Algebra and Its Applications, 5 (1972), pp. 65-108.

9. K. G. Murty, Linear Complementarity, manuscript, to appear.

10. K. G. Murty and Y. Fathi, "A Critical Index Algorithm for Nearest Point Problems on Simplicial Cones", Mathematical Programming, 23 (1982), pp. 206-215.

11. H. Samelson, R. M. Thrall and O. Wesler, "A Partition Theorem for Euclidean n-space", Proceedings of the American Mathematical Society, 9 (1958), pp. 805-807.

12. A. W. Tucker, "Principal Pivotal Transforms of Square Matrices", SIAM Review, 5 (1963), p. 305.