THE UNIVERSITY OF MICHIGAN
INFORMATION SYSTEMS LABORATORY
Department of Electrical Engineering
College of Engineering

Technical Report ISL-65-3

COVERING PROBLEMS: DUALITY RELATIONS AND A NEW METHOD OF SOLUTION

by Eugene L. Lawler

May, 1965

This research was supported in part by NSF Grant GP-2778, and in part by Air Force Contract AF 30(602)-3546.

Table of Contents

Abstract
1  Introduction
2  Definitions and Elementary Results
3  Dualization Properties of Cover Mapping
4  Computations with Minimal Sets; Enumeration of Minimal Covers
5  Computation of Optimal Covers: Equal Costs
6  Computation of Optimal Covers: Unequal Costs
7  A Refinement
8  Comparison with Other Methods of Solution
References

Abstract

This paper is concerned with "covering" problems of the following type: Let a strictly positive n-vector c and an m×n matrix A of 0's and 1's be given. Let E = (1,1,...,1) be an m-vector of 1's. Find a solution vector y = (y_1, y_2, ..., y_n) which will minimize the inner product cy subject to Ay ≥ E and y_j = 0 or 1 (j = 1,2,...,n). There is an interesting duality between the constraints of covering problems and their solutions. This duality is explored in this paper and is found to furnish the basis for a possible new method of solution, which is described in detail.

1. Introduction

Let a strictly positive n-vector c (the cost vector) and an m×n matrix A of 0's and 1's (the constraint matrix) be given. Let E = (1,1,...,1) be an m-vector of 1's. A covering problem is formulated as follows: Find a solution vector y = (y_1, y_2, ..., y_n) which will minimize

cy = Σ_{j=1}^n c_j y_j

subject to Ay ≥ E and y_j = 0 or 1 (j = 1,2,...,n).

Problems of this type arise in a variety of combinatorial contexts, e.g. in switching theory and in operations research. The following interpretation may be suggestive. Suppose a set S = {s_1, s_2, ..., s_m} of m elements and an arbitrary collection of n subsets S_1, S_2, ..., S_n are given. If c = (1,1,...,1), and A = ||a_ij|| is defined by the rule

a_ij = 1 if s_i ∈ S_j,
     = 0 otherwise,

then an optimal solution to the covering problem indicates the smallest collection of subsets that can be chosen, subject to the restriction that each element of S is a member of at least one of the chosen subsets. This is under the interpretation that

y_j = 1 if S_j is chosen,
    = 0 otherwise.
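To make the formulation concrete, here is a small Python sketch (an illustration of the above, not code from the report) that builds the constraint matrix A from a family of subsets and finds an optimal cover by brute-force enumeration. The element set and subsets are assumptions chosen for illustration; they happen to produce the matrix used as the example in Section 2.

    from itertools import product

    S = [1, 2, 3, 4]                                # the elements s_1, ..., s_m
    subsets = [{3, 4}, {1, 2, 4}, {2, 3}, {1, 2}]   # the subsets S_1, ..., S_n
    m, n = len(S), len(subsets)

    # Constraint matrix: a_ij = 1 iff s_i is a member of S_j.
    A = [[1 if S[i] in subsets[j] else 0 for j in range(n)] for i in range(m)]
    c = [1] * n                                     # equal unit costs

    def is_cover(y):
        """Check Ay >= E: every element lies in at least one chosen subset."""
        return all(sum(A[i][j] * y[j] for j in range(n)) >= 1 for i in range(m))

    # Brute force over all 2^n (0,1)-vectors; practical only for tiny n.
    best = min((y for y in product((0, 1), repeat=n) if is_cover(y)),
               key=lambda y: sum(cj * yj for cj, yj in zip(c, y)))
    print(best)                                     # e.g. (0, 1, 1, 0), cost 2

Brute-force enumeration is exponential in n, which is precisely what the method developed in this report is intended to avoid.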

There is an interesting duality between the constraints of covering problems and their solutions. This duality is explored in this paper, and is found to furnish the basis for a possible new method of solution, which is described in detail. The solution method is iterative in nature, and requires a number of iterations no greater than the cost of an optimal solution. The amount of computation required at each iteration is variable, and may be impracticably large in some cases.

Some conventions adopted in this paper are as follows. With the exception of cost vectors, all vectors and matrices are assumed to be composed of 0's and 1's. The vector E is composed entirely of 1's, and is of whatever dimension is appropriate in context. The vector ε^k has 1 in the k-th position and 0's elsewhere:

(ε^k)_j = 0 if j ≠ k,
        = 1 if j = k.

The complement of ε^k, denoted ε̄^k, is the vector with 0 in the k-th component and 1's elsewhere. The inner product of the two n-vectors c and y is denoted cy:

cy = Σ_{j=1}^n c_j y_j.

Row i of matrix A is denoted A_i and column j is denoted A^j. The vector a is said to be greater than or equal to the vector b if and only if a is greater than or equal to b, component by component:

a ≥ b ⟺ a_j ≥ b_j (j = 1,2,...,n).

The number of elements contained in set S is denoted |S|.

2. Definitions and Elementary Results

A covering problem is unaffected by a reordering of the rows of the constraint matrix A. Therefore, it is appropriate to conceive of A as an unordered set of n-vectors: A = {a^(1), a^(2), ..., a^(m)}. We shall adopt this convention for most of the remainder of this paper.

If A is a set of n-vectors, a cover of A is a (0,1)-vector y which has a nonzero scalar product with each vector in A:

a^(i)y ≥ 1 (i = 1,2,...,m),
y_j = 0 or 1 (j = 1,2,...,n).

A cover y is minimal iff there exists no distinct cover y' such that y' ≤ y. A cover y is optimal iff there exists no distinct cover y' such that cy' < cy. Every optimal cover is a minimal cover, and every minimal cover is an optimal cover with respect to some cost vector c.

We let Cov A denote the set of covers of A. For example, if

A = {(0,1,0,1), (0,1,1,1), (1,0,1,0), (1,1,0,0)}     (2.1)

then

Cov A = {(0,1,1,0), (0,1,1,1), (1,0,0,1), (1,0,1,1), (1,1,0,0), (1,1,0,1), (1,1,1,0), (1,1,1,1)}.

We let the subset of minimal vectors of a set S be denoted Min S. Thus the set of minimal covers of A is denoted Min Cov A. For A in (2.1),

Min Cov A = {(0,1,1,0), (1,0,0,1), (1,1,0,0)}.

It is easily shown that the minimal elements of A determine the covers of A. Thus, for all A,

Cov A = Cov Min A.     (2.2)

Cov A has the property that if y is contained in Cov A, and y' ≥ y, then y' is also contained in Cov A. Any set which has this property will be said to be closed. We define the closure of A, denoted Cl A, to be the smallest closed set in which A is included. Or, equivalently,

Cl A = {a' | a' ≥ a, for some a ∈ A},

where a' is required to be a (0,1)-vector. For example, for A in (2.1),

Cl A = {(0,1,0,1), (0,1,1,1), (1,0,1,0), (1,0,1,1), (1,1,0,0), (1,1,0,1), (1,1,1,0), (1,1,1,1)}.

We have already observed that, for all A,

Cov A = Cl Cov A.     (2.3)

To this we can add the identity

Cov A = Cov Cl A,     (2.4)

and also

Cl A = Cl Min A.     (2.5)

The closure operation is consistent with the topological notion of closure, in that

Cl ∅ = ∅ (∅ the empty set),
A ⊆ Cl A,
Cl Cl A = Cl A,
Cl (A ∪ B) = Cl A ∪ Cl B.     (2.6)

Finally, note that every closed set forms a semilattice with respect to the l.u.b. (least upper bound) operation "∨", defined as follows:

(a ∨ b)_j = max {a_j, b_j}.

(A semilattice satisfies the associative, commutative, and idempotent laws

with respect to its single operation. If finite, it has a unique largest element with respect to the partial ordering induced under the definition a ≤ b iff a ∨ b = b. In the case at hand, a ≤ b iff a_j ≤ b_j for j = 1,2,...,n.) If vectors are represented by computer words, with a single bit devoted to each vector component, the join operation is easily effected by the LOGICAL OR operation.
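The word representation is easy to realize in a modern language. The sketch below (my own illustration, not the report's) packs each (0,1)-vector into the bits of a Python integer, so that the join a ∨ b is the bitwise OR a | b, the ordering a ≤ b is the test a | b == b, and Min S is computed by discarding dominated vectors.

    def pack(v):
        """Pack a (0,1)-tuple into an integer, component j -> bit j."""
        return sum(bit << j for j, bit in enumerate(v))

    def unpack(x, n):
        return tuple((x >> j) & 1 for j in range(n))

    def leq(a, b):
        """a <= b in the componentwise partial ordering."""
        return a | b == b

    def min_set(vs):
        """Min S: keep only the vectors that dominate no distinct member."""
        vs = set(vs)
        return {v for v in vs if not any(w != v and leq(w, v) for w in vs)}

    a, b = pack((0, 1, 0, 1)), pack((1, 1, 0, 0))
    print(unpack(a | b, 4))                                 # the join (1, 1, 0, 1)
    print({unpack(v, 4) for v in min_set({a, b, a | b})})   # Min drops the join

These helpers are reused in the sketches accompanying the later sections.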

3. Dualization Properties of Cover Mapping

The Cov operation defines a mapping from sets of n-vectors to closed sets of n-vectors. Three important properties of this mapping are the following:

(1) Cov A = Cov B ⟺ Cl A = Cl B ⟺ Min A = Min B
(2) Cov Cov A = Cl A
(3) Cov (Cl A ∪ Cl B) = Cov Cl A ∩ Cov Cl B,
    Cov (Cl A ∩ Cl B) = Cov Cl A ∪ Cov Cl B.

Suppose the domain of Cov is restricted to the class C of closed sets. Properties (1), (2), and (3) state that Cov: C → C is a one-to-one, self-inverse transformation which interchanges the set operations of union and intersection. In other words, Cov defines a "dual" automorphism or dualization transformation quite analogous to ordinary set complementation. Compare, where A' denotes the complement of A:

(1) A' = B' ⟺ A = B
(2) (A')' = A
(3) (A ∪ B)' = A' ∩ B', (A ∩ B)' = A' ∪ B'.

(Note, however, that the Boolean laws of operation with respect to 0 and 1 do not hold for Cov. I.e., it is not generally true for closed sets that A ∩ Cov A = ∅, A ∪ Cov A = Cov ∅.)

The reader may be interested to observe the effect of the Cov operation on a specific class of closed sets. The table below lists all closed sets of 2-vectors and their images under the Cov transformation:

A                               Cov A
∅                            →  {(0,0), (0,1), (1,0), (1,1)}
{(1,1)}                      →  {(0,1), (1,0), (1,1)}
{(0,1), (1,1)}               →  {(0,1), (1,1)}
{(1,0), (1,1)}               →  {(1,0), (1,1)}
{(0,1), (1,0), (1,1)}        →  {(1,1)}
{(0,0), (0,1), (1,0), (1,1)} →  ∅

The dualization properties are now stated and proved as theorems. In each case, the word "vector" is to be interpreted exclusively as an "n-vector of 0's and 1's." A and B are arbitrary (not necessarily closed) sets.

Theorem 3.1. Cov A = Cov B ⟺ Cl A = Cl B.

PROOF: (a) ⟸ is trivial. (b) ⟹ is proved by showing that Cl A ≠ Cl B ⟹ Cov A ≠ Cov B. Let b be a vector such that b ∉ Cl A, but b ∈ Cl B. Define a vector y as follows:

y_j = 0 if b_j = 1,
y_j = 1 if b_j = 0.

Clearly, by = 0, so y ∉ Cov Cl B. But y ∈ Cov Cl A. For consider any a ∈ Cl A. It must be the case that there is some j such that a_j = 1, b_j = 0, and y_j = 1. If this were not so, it would be the case that a ≤ b, and b

would be contained in Cl A. Hence ay ≥ 1 for all a ∈ Cl A, and so y ∈ Cov Cl A. Q.E.D.

Theorem 3.2. Cov Cov A = Cl A.

PROOF: (a) a ∈ Cl A ⟹ ay ≥ 1, for all y ∈ Cov A ⟹ a ∈ Cov (Cov A), by definition. Hence Cl A ⊆ Cov Cov A.

(b) a ∈ Cov Cov A ⟹ ay ≥ 1, for all y ∈ Cov A ⟹ a ∈ Cl A, for if this were not so, there would be a set B = Cl A ∪ {a} ≠ Cl A such that Cov B = Cov A, in violation of Theorem 3.1. Hence Cov Cov A ⊆ Cl A. Q.E.D.

Theorem 3.3. (a) Cov (Cl A ∪ Cl B) = Cov Cl A ∩ Cov Cl B.
(b) Cov (Cl A ∩ Cl B) = Cov Cl A ∪ Cov Cl B.

PROOF: (a) The first identity is quite simple, and in fact Cov (A ∪ B) = Cov A ∩ Cov B. Proof is left to the reader.

(b) In the case of the second identity, it is easy to establish that Cov (Cl A ∩ Cl B) ⊇ Cov Cl A ∪ Cov Cl B. In order to prove the converse, suppose that there is a vector y such that y ∈ Cov (Cl A ∩ Cl B), but y ∉ Cov Cl A and y ∉ Cov Cl B. This means that there is an a ∈ Cl A and a b ∈ Cl B, such that ay = by = 0.

But a ∨ b ∈ Cl A ∩ Cl B, and (a ∨ b)y ≥ 1, by assumption. From this it follows that either ay ≥ 1 or by ≥ 1, which is a contradiction. Q.E.D.
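Because the theorems quantify over finite collections of (0,1)-vectors, they can be verified exhaustively for small n. The following Python check (mine, not part of the report) confirms Theorem 3.2 and both identities of Theorem 3.3 for every pair of sets of 2-vectors, with vectors packed as bit masks.

    from itertools import combinations

    N = 2
    ALL = range(1 << N)                # all (0,1)-vectors of length N

    def cov(A):
        """Cov A: vectors with a nonzero inner product with every a in A."""
        return {y for y in ALL if all(a & y for a in A)}

    def cl(A):
        """Cl A: all vectors >= some a in A."""
        return {b for b in ALL for a in A if a | b == b}

    sets = [set(s) for r in range((1 << N) + 1) for s in combinations(ALL, r)]
    for A in sets:
        assert cov(cov(A)) == cl(A)                               # Theorem 3.2
        for B in sets:
            assert cov(cl(A) | cl(B)) == cov(cl(A)) & cov(cl(B))  # Theorem 3.3(a)
            assert cov(cl(A) & cl(B)) == cov(cl(A)) | cov(cl(B))  # Theorem 3.3(b)
    print("identities verified for n =", N)

Note that cov(A) and cl(A) return ordinary Python sets, so the set operators | and & realize the unions and intersections appearing in the theorem statements.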

4. Computations with Minimal Sets; Enumeration of Minimal Covers

Although it is advantageous to state theorems in terms of closed sets, it is very disadvantageous to carry out actual computations on them. A much better idea is to rely on the identity

Min A = Min B ⟺ Cl A = Cl B

and to represent every set A by Min A. The most commonly required computations are those of the union and intersection of two sets. The following two theorems show how these may be computed, using minimal representations.

Theorem 4.1. Min (Cl A ∪ Cl B) = Min (A ∪ B).

PROOF: Trivial.

Theorem 4.1 states that the minimal elements of Cl A ∪ Cl B can be computed by simply removing the nonminimal elements from A ∪ B.

Theorem 4.2. Min (Cl A ∩ Cl B) = Min (A ∨ B). (By definition, A ∨ B = {a ∨ b | a ∈ A, b ∈ B}.)

PROOF: Let

C = Min (Cl A ∩ Cl B),
D = Min (A ∨ B).

(a) c ∈ C ⟹ c ≥ a, c ≥ b, for some a ∈ A, b ∈ B ⟹ c ≥ (a ∨ b) ≥ d, for some d ∈ D.

(b) d ∈ D ⟹ d ≥ a, d ≥ b, for some a ∈ A, b ∈ B ⟹ d ∈ (Cl A ∩ Cl B) ⟹ d ≥ c, for some c ∈ C.

(c) For every c^(1) ∈ C there is, by (a), a d^(1) ∈ D such that c^(1) ≥ d^(1). And, by (b), there is a c^(2) ∈ C such that d^(1) ≥ c^(2), so that

c^(1) ≥ d^(1) ≥ c^(2). But c^(1) = c^(2), because C contains only minimal elements. Hence c^(1) = d^(1), and C ⊆ D. A similar argument, starting with an element of D, shows that D ⊆ C. Hence C = D. Q.E.D.

Theorem 4.2 states that the minimal elements of Cl A ∩ Cl B can be found by forming the set of all vectors of the form a ∨ b, and then removing all nonminimal vectors from the set.

The minimal covers of a set A can be computed by applying Theorem 4.2 as follows: Let A = {a^(1), a^(2), ..., a^(m)}. Then

Min Cov A = Min Cov ∪_{i=1}^m {a^(i)}, by definition
          = Min Cov ∪_{i=1}^m Cl {a^(i)}, by (2.4) and (2.5)
          = Min ∩_{i=1}^m Cov Cl {a^(i)}, by Theorem 3.3
          = Min ∩_{i=1}^m Cl Cov {a^(i)}, by (2.3) and (2.4)
          = Min ∩_{i=1}^m Cl Min Cov {a^(i)}, by (2.5)
          = Min ∨_{i=1}^m Min Cov {a^(i)}, by Theorem 4.2.

The method of computation is completely specified by the above, except for the determination of Min Cov {a^(i)}, for i = 1,2,...,m. But this is quite simple. Suppose a^(i) contains r 1's. Then Min Cov {a^(i)} contains r vectors, each containing a single "1" in one of the r positions occupied by 1's in a^(i).

E.g., if a^(i) = (1,1,0,0,1,0), then

Min Cov {a^(i)} = {(1,0,0,0,0,0), (0,1,0,0,0,0), (0,0,0,0,1,0)}.

In practice, the computations would be carried out as follows. Let A^(k) = {a^(1), a^(2), ..., a^(k)} (k = 1,2,...,m).

(1) Compute Min Cov A^(1) = Min Cov {a^(1)}.
(2) Set k = 2.
(3) Compute Min Cov {a^(k)}.
(4) Compute Min Cov A^(k) = Min {y ∨ y' | y ∈ Min Cov A^(k−1), y' ∈ Min Cov {a^(k)}}.
(5) If k < m, increment k and return to step (3). If k = m, the computation is completed, and Min Cov A = Min Cov A^(m).

As an example, let A = {(1,0,1,1), (0,1,0,1), (1,1,0,0)}. Then

Min Cov {a^(1)} = Min Cov A^(1) = {(0,0,0,1), (0,0,1,0), (1,0,0,0)},
Min Cov {a^(2)} = {(0,0,0,1), (0,1,0,0)},
Min Cov A^(2) = {(0,0,0,1), (0,1,1,0), (1,1,0,0)},
Min Cov {a^(3)} = {(0,1,0,0), (1,0,0,0)},
Min Cov A = Min Cov A^(3) = {(0,1,0,1), (0,1,1,0), (1,0,0,1), (1,1,0,0)}.

The reader may be able to recognize the equivalence of this algorithm and "Petrick's Method" [6].
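The five-step procedure translates directly into code. The sketch below (my own rendering of the algorithm, not code from the report) uses the bit-mask representation of Section 2 and reproduces the example just given.

    from itertools import product

    def min_set(vs):
        vs = set(vs)
        return {v for v in vs if not any(w != v and (w | v) == v for w in vs)}

    def unit_covers(a, n):
        """Min Cov {a}: one unit vector for each 1 of a."""
        return {1 << j for j in range(n) if (a >> j) & 1}

    def minimal_covers(A, n):
        """Min Cov A, built up one constraint vector at a time (steps (1)-(5))."""
        covers = {0}                   # before any constraints, 0 covers all
        for a in A:
            covers = min_set(y | u for y, u in product(covers, unit_covers(a, n)))
        return covers

    def pack(v):
        return sum(bit << j for j, bit in enumerate(v))

    A = [pack(v) for v in [(1, 0, 1, 1), (0, 1, 0, 1), (1, 1, 0, 0)]]
    covers = minimal_covers(A, 4)
    print(sorted(tuple((y >> j) & 1 for j in range(4)) for y in covers))
    # [(0,1,0,1), (0,1,1,0), (1,0,0,1), (1,1,0,0)]

If some constraint vector is the zero vector, unit_covers returns the empty set and minimal_covers correctly returns no covers at all.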

5. Computation of Optimal Covers: Equal Costs

It is now possible to describe an algorithm for computing optimal covers. Initially, the discussion will be limited to the case of equal unit costs, i.e. c = E.

Choose any vector α ∈ A as a reducing vector. If Cov A is to be nonempty, this vector must be nonzero. Without loss of generality, assume that α contains exactly r 1's, and that these are in the first r positions:

α_j = 1 (1 ≤ j ≤ r),
    = 0 (r + 1 ≤ j ≤ n).

Every cover y has the property that either y_1 = 1 or y_2 = 1 or ... or y_r = 1; otherwise, it would be the case that αy = 0. Suppose y is a cover and y_k = 1, where 1 ≤ k ≤ r. This insures that αy ≥ 1, and that ay ≥ 1 for all a ∈ A such that a_k = 1. The remaining components of y must be such that ay ≥ 1 for all a ∈ A such that a_k = 0.

Let A_(j) denote the subset of all vectors a in A such that a_j = 0:

A_(j) = {a ∈ A | a_j = 0}.

If y is a cover of A, then y ∧ ε̄^j (the vector obtained by setting y_j to 0) is a cover of A_(j). Conversely, if y is a cover of A_(j), then y ∨ ε^j (the vector obtained by setting y_j to 1) is a cover of A. Moreover, if y is a minimal cover of A_(j) (from which it follows that y_j = 0), the cost of y ∨ ε^j is greater by exactly 1 than the cost of y:

c(y ∨ ε^j) = cy + 1.

Still assuming that the reducing vector α has 1's in the first r positions, let A^(1) be a set with the property that

Cov A^(1) = ∪_{j=1}^r Cov A_(j).

Any cover y of A^(1) is a cover of at least one of the A_(j) (1 ≤ j ≤ r). If it is a cover of A_(k), then y ∨ ε^k is a cover of A. Moreover, if y is an optimal cover of A^(1), it is an optimal (and hence minimal) cover of A_(k), and the cost of y is exactly 1 less than the cost of y ∨ ε^k. Every optimal cover of A^(1) yields an optimal cover of A, and every optimal cover of A can be obtained from an optimal cover of A^(1), whose cost is exactly 1 less.

The strategy for a computational procedure now becomes apparent. From the set A^(0) = A, a reducing vector α^(0) is chosen. Based upon this choice, a set A^(1) is computed. From A^(1) a reducing vector α^(1) is chosen, and a set A^(2) is computed. Sets A^(3), A^(4), ... are computed, until finally a set A^(N) is obtained for which the zero vector is the optimal cover. (A^(N) is the null set.) Then an optimal cover of A^(N−1) can be constructed from the optimal cover of A^(N), an optimal cover of A^(N−2) can be constructed from an optimal cover of A^(N−1), ..., and an optimal cover of A = A^(0) can be constructed from an optimal cover of A^(1). The process is clearly finite, and requires exactly as many iterations as the cost of an optimal cover of A. (Note that not only does the cost of an optimal cover decrease at each iteration, but

Cov A^(0) ⊂ Cov A^(1) ⊂ ... ⊂ Cov A^(N) = Cov ∅,

and

Cl A^(0) ⊃ Cl A^(1) ⊃ ... ⊃ Cl A^(N) = Cl ∅ = ∅.)

All that remains is to describe the computation of A^(p+1) from A^(p). Given any vector α ∈ A^(p), A^(p+1) should be such that

Cov A^(p+1) = ∪_(j) Cov A^(p)_(j), by definition
            = Cov (∩_(j) Cl A^(p)_(j)), by Theorem 3.3
            = Cov Min ∩_(j) Cl A^(p)_(j), by (2.2),

where each union and intersection is over all j such that α_j = 1. Hence, let

A^(p+1) = Min ∩_(j) Cl A^(p)_(j) = Min ∨_(j) A^(p)_(j) (by Theorem 4.2),

where "∩" and "∨" are again over all j such that α_j = 1. Thus, A^(p+1) can be computed by taking r-fold l.u.b.'s of vectors, one vector being drawn from each set A^(p)_(j) for which α_j = 1.

The following example may serve to illustrate the computations. For purposes of display, matrix representation rather than set representation is used. At each step, a row vector with as few 1's as possible is chosen as the reducing vector, since this tends to reduce the number of l.u.b. operations. Let

A^(0) = | 0 1 0 0 1 0 |
        | 0 1 0 1 0 0 |
        | 1 0 0 1 0 0 |
        | 0 0 1 0 1 0 |
        | 0 0 1 0 0 1 |
        | 1 0 0 0 0 1 |

Choosing α^(0) = (0,1,0,0,1,0), obtain

A^(0)_(2) = | 1 0 0 1 0 0 |
            | 0 0 1 0 1 0 |
            | 0 0 1 0 0 1 |
            | 1 0 0 0 0 1 |

A^(0)_(5) = | 0 1 0 1 0 0 |
            | 1 0 0 1 0 0 |
            | 0 0 1 0 0 1 |
            | 1 0 0 0 0 1 |

and

A^(1) = | 1 0 0 1 0 0 |
        | 0 1 1 1 1 0 |
        | 0 0 1 0 0 1 |
        | 1 0 0 0 0 1 |

Choosing α^(1) = (1,0,0,1,0,0), obtain

A^(1)_(1) = | 0 1 1 1 1 0 |
            | 0 0 1 0 0 1 |

A^(1)_(4) = | 0 0 1 0 0 1 |
            | 1 0 0 0 0 1 |

and

A^(2) = | 0 0 1 0 0 1 |

Choosing α^(2) = (0,0,1,0,0,1), obtain

A^(2)_(3) = [ ] (null matrix)

A^(2)_(6) = [ ]

and

A^(3) = [ ]

The optimum cover of A^(3) is y^(3) = (0,0,0,0,0,0), which is a cover of A^(2)_(3). Hence

y^(2) = y^(3) ∨ ε^3 = (0,0,1,0,0,0)

is an optimal cover of A^(2). Since y^(2) is a cover of A^(1)_(1),

y^(1) = y^(2) ∨ ε^1 = (1,0,1,0,0,0)

is an optimal cover of A^(1). Since y^(1) is a cover of A^(0)_(2),

y^(0) = y^(1) ∨ ε^2 = (1,1,1,0,0,0)

is an optimum cover of A, and the problem is solved. Note that by carrying out the backward construction in all possible ways, one could compute all optimal covers of A.
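The complete reduction procedure for equal costs is summarized in the following recursive Python sketch (my own rendering, not code from the report; it assumes Cov A is nonempty, i.e., that A contains no zero vector). A reducing vector with as few 1's as possible is chosen, A^(p+1) is formed from the r-fold l.u.b.'s, and the backward construction locates a set A_(j) that the recursively obtained cover already covers.

    from itertools import product
    from functools import reduce

    def min_set(vs):
        vs = set(vs)
        return {v for v in vs if not any(w != v and (w | v) == v for w in vs)}

    def optimal_cover(A, n):
        """One optimal cover of A, as a bit mask over n variables."""
        A = min_set(A)
        if not A:
            return 0                       # the zero vector covers the null set
        alpha = min(A, key=lambda a: bin(a).count("1"))    # fewest 1's
        js = [j for j in range(n) if (alpha >> j) & 1]
        subs = [[a for a in A if not (a >> j) & 1] for j in js]
        # A^(p+1) = Min of the r-fold joins, one vector from each A_(j);
        # if some A_(j) is empty, the join set is empty and so is A^(p+1).
        A_next = (min_set(reduce(lambda x, y: x | y, combo)
                          for combo in product(*subs))
                  if all(subs) else set())
        y = optimal_cover(A_next, n)
        for j, Aj in zip(js, subs):        # backward construction
            if all(a & y for a in Aj):
                return y | (1 << j)

    rows = [(0,1,0,0,1,0), (0,1,0,1,0,0), (1,0,0,1,0,0),
            (0,0,1,0,1,0), (0,0,1,0,0,1), (1,0,0,0,0,1)]
    A = {sum(bit << j for j, bit in enumerate(r)) for r in rows}
    y = optimal_cover(A, 6)
    print(tuple((y >> j) & 1 for j in range(6)))   # an optimal cover of cost 3

The recursion depth equals the cost of an optimal cover, in agreement with the iteration count stated above.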

6. Computation of Optimal Covers: Unequal Costs

Let us consider the case in which the cost coefficients are not necessarily equal. As an example, suppose there is some vector α ∈ A such that

α_1 = α_2 = 1, α_3 = α_4 = ... = α_n = 0,

and that c_1 = 1, c_2 = 3. We obtain the sets A_(1), A_(2), and A^(1) as described above. Then the following situation may occur: The vector y^(1) is an optimal cover of A_(1), and cy^(1) = 12. The vector y^(2) is an optimal cover of A_(2), and cy^(2) = 11; y^(2) is therefore an optimal cover of A^(1). But

c(y^(1) ∨ ε^1) = cy^(1) + 1 = 13

and

c(y^(2) ∨ ε^2) = cy^(2) + 3 = 14.

Thus, the suboptimal cover y^(1) of A^(1) yields an optimal cover of A, and the optimal cover y^(2) of A^(1) yields a suboptimal cover of A.

This situation can be corrected by defining certain auxiliary variables, constraints, and cost coefficients in such a way that an optimal cover of A^(1) differs in cost from an optimal cover of A by a fixed amount. In the case at hand, let (in matrix notation),

Ā_(1) = | A_(1)  0 |,        Ā_(2) = | A_(2)  0 |
                                     | 0      1 |

ȳ = (y; z_1) = (y_1, y_2, ..., y_n, z_1),
c̄ = (c; d_1) = (c_1, c_2, ..., c_n, d_1),

where d_1 = c_2 − c_1 = 2. Let Ā^(1) = Min (Ā_(1) ∨ Ā_(2)). A vector ȳ = (y; z_1) is a cover of Ā_(1) iff y is a cover of A_(1), but it is a cover of Ā_(2) iff y is a cover of A_(2) and z_1 = 1. The cost coefficient d_1 of z_1 exactly compensates for the difference in costs in converting these two types of covers into covers of A. It follows that an optimal cover of Ā^(1) yields an optimal cover of A.

Suppose, without loss of generality, that

α_j = 1 (1 ≤ j ≤ r),
    = 0 (r + 1 ≤ j ≤ n),

and that

min {c_j | 1 ≤ j ≤ r} = c_r.

A general method of construction can be described as follows (again in matrix notation):

Ā_(j) = | A_(j)  0   |    (j < r),        Ā_(r) = | A_(r)  0 |
        | 0      ε^j |

ȳ = (y; z) = (y_1, y_2, ..., y_n, z_1, z_2, ..., z_{r−1}),
c̄ = (c; d) = (c_1, c_2, ..., c_n, d_1, d_2, ..., d_{r−1}),

where

d_j = c_j − min {c_j | 1 ≤ j ≤ r} = c_j − c_r.

Here the extra row of Ā_(j) has 0's in the y-components and the unit vector ε^j in the z-components. Under this construction, if ȳ = (y; z) is a cover of Ā^(1) and of Ā_(j), then y ∨ ε^j is a cover of A, and

c(y ∨ ε^j) = cy + dz + c_r = cy + c_j.

This is not, however, necessarily the most economical way to equalize costs. It is necessary to provide only one auxiliary variable for each distinct cost coefficient d_j, thereby reducing the size of Ā^(1).

The following example may clarify the ideas described above. Let

A^(0) = | 0 1 0 0 1 0 |
        | 0 1 0 1 0 0 |
        | 1 0 0 1 0 0 |
        | 0 0 1 0 1 0 |
        | 0 0 1 0 0 1 |
        | 1 0 0 0 0 1 |

as before, and c = (6,5,4,3,2,1). Choosing α^(0) = (0,1,0,0,1,0), obtain

Ā^(0)_(2) = | 1 0 0 1 0 0 0 |
            | 0 0 1 0 1 0 0 |
            | 0 0 1 0 0 1 0 |
            | 1 0 0 0 0 1 0 |
            | 0 0 0 0 0 0 1 |

Ā^(0)_(5) = | 0 1 0 1 0 0 0 |
            | 1 0 0 1 0 0 0 |
            | 0 0 1 0 0 1 0 |
            | 1 0 0 0 0 1 0 |

and

Ā^(1) = | 1 0 0 1 0 0 0 |
        | 0 1 1 1 1 0 0 |
        | 0 0 1 0 0 1 0 |
        | 1 0 0 0 0 1 0 |
        | 0 1 0 1 0 0 1 |

with c̄^(1) = (6,5,4,3,2,1,3), since d_1 = c_2 − c_5 = 3. Choosing α^(1) = (1,0,0,1,0,0,0), obtain

Ā^(1)_(1) = | 0 1 1 1 1 0 0 0 |
            | 0 0 1 0 0 1 0 0 |
            | 0 1 0 1 0 0 1 0 |
            | 0 0 0 0 0 0 0 1 |

Ā^(1)_(4) = | 0 0 1 0 0 1 0 0 |
            | 1 0 0 0 0 1 0 0 |

and

Ā^(2) = | 0 0 1 0 0 1 0 0 |
        | 1 1 0 1 0 1 1 0 |
        | 1 0 0 0 0 1 0 1 |

with c̄^(2) = (6,5,4,3,2,1,3,3), since d_2 = c_1 − c_4 = 3. Choosing α^(2) = (0,0,1,0,0,1,0,0), obtain

Ā^(2)_(3) = | 1 1 0 1 0 1 1 0 0 |
            | 1 0 0 0 0 1 0 1 0 |
            | 0 0 0 0 0 0 0 0 1 |

Ā^(2)_(6) = [ ] (null matrix)

and

Ā^(3) = [ ], with c̄^(3) = (6,5,4,3,2,1,3,3,3).

Now ȳ^(3) = (0,0,0,0,0,0,0,0,0) is the optimum cover of Ā^(3), and is a cover of Ā^(2)_(6). Hence

ȳ^(2) = ȳ^(3) ∨ ε^6 = (0,0,0,0,0,1,0,0)

is an optimal cover of Ā^(2). Since ȳ^(2) is a cover of Ā^(1)_(4),

ȳ^(1) = ȳ^(2) ∨ ε^4 = (0,0,0,1,0,1,0)

is an optimal cover of Ā^(1). Since ȳ^(1) is a cover of Ā^(0)_(5),

y^(0) = ȳ^(1) ∨ ε^5 = (0,0,0,1,1,1)

is an optimal cover of A, with a cost of 6. (At each backward step the auxiliary component introduced at that iteration is dropped.)
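The cost-equalizing construction can be grafted onto the equal-cost sketch. The Python below is again my own rendering, not the report's program, and assumes strictly positive integer costs. At each iteration one auxiliary variable z_j, with cost d_j = c_j − c_r, is attached to every 1-position j of the reducing vector except a cheapest position r, by appending a zero column and the extra row that forces z_j = 1; the auxiliaries are stripped during the backward construction.

    from itertools import product
    from functools import reduce

    def min_set(vs):
        vs = set(vs)
        return {v for v in vs if not any(w != v and (w | v) == v for w in vs)}

    def optimal_cover(A, costs):
        """One optimal cover of A, as a bit mask over len(costs) variables."""
        A = min_set(A)
        if not A:
            return 0
        n = len(costs)
        alpha = min(A, key=lambda a: bin(a).count("1"))
        js = [j for j in range(n) if (alpha >> j) & 1]
        r = min(js, key=lambda j: costs[j])      # cheapest position: no auxiliary
        costs2, parts = list(costs), []
        for j in js:
            Aj = [a for a in A if not (a >> j) & 1]
            if j != r:
                z = len(costs2)
                costs2.append(costs[j] - costs[r])   # d_j = c_j - c_r
                Aj.append(1 << z)                    # extra row forcing z_j = 1
            parts.append((j, Aj))
        A_next = (min_set(reduce(lambda x, y: x | y, combo)
                          for combo in product(*(p[1] for p in parts)))
                  if all(p[1] for p in parts) else set())
        y = optimal_cover(A_next, costs2)
        for j, Aj in parts:                      # backward construction
            if all(a & y for a in Aj):
                return (y & ((1 << n) - 1)) | (1 << j)   # drop the auxiliaries

    rows = [(0,1,0,0,1,0), (0,1,0,1,0,0), (1,0,0,1,0,0),
            (0,0,1,0,1,0), (0,0,1,0,0,1), (1,0,0,0,0,1)]
    A = {sum(bit << j for j, bit in enumerate(r)) for r in rows}
    y = optimal_cover(A, [6, 5, 4, 3, 2, 1])
    print(tuple((y >> j) & 1 for j in range(6)))   # (0,0,0,1,1,1), cost 6

This sketch attaches one auxiliary per non-cheapest position; as noted above, one auxiliary per distinct d_j would suffice and would keep the augmented sets smaller.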

7. A Refinement

The algorithm can be made somewhat more efficient, and in such a way that the number of variables is reduced with each iteration. We restrict our discussion to the case of equal costs; the extension to unequal costs is immediate.

It has been observed that if the reducing vector α is such that

α_j = 1 (1 ≤ j ≤ r),
    = 0 (r + 1 ≤ j ≤ n),

then for any cover y, either

(1) y_1 = 1, or
(2) y_2 = 1, or
...
(r) y_r = 1.

These possibilities suggested covering problems with the sets A_(1), A_(2), ..., A_(r), where A_(j) is obtained from A by removing those vectors a for which a_j = 1. The possibilities are not, however, mutually exclusive. I.e., a cover of A_(1) can also be a cover of A_(2). Now consider the following:

(1) y_1 = 1, or
(2) y_1 = 0, y_2 = 1, or
(3) y_1 = 0, y_2 = 0, y_3 = 1, or
...
(r) y_1 = y_2 = ... = y_{r−1} = 0, y_r = 1.

These possibilities are disjoint and exhaustive. Moreover, for each one the values of y_1, ..., y_j are fixed, thereby effectively removing them from the problem. We can, therefore, obtain a reduction in dimension from n for A^(0) to n−1 for A^(1), and from n−k for A^(k) to n−k−1 for A^(k+1). However, we find it simpler to retain n as the dimension, merely setting to 0 the vector components that correspond to variables whose values are fixed.

Let Â_(j) be formed from A_(j) by setting to 0 all vector components k for which α_k = 1 and k < j. For example, if, in matrix representation, α = (1,0,1,1,0) and

A_(4) = | 1 0 1 0 1 |
        | 1 1 0 0 0 |
        | 1 0 0 0 1 |

then

Â_(4) = | 0 0 0 0 1 |
        | 0 1 0 0 0 |
        | 0 0 0 0 1 |

and, of course,

Min Â_(4) = | 0 0 0 0 1 |
            | 0 1 0 0 0 |

The set Â^(1) can be computed from Â_(1), Â_(2), ..., Â_(r) exactly as before. And from an optimal cover of Â^(1), an optimal cover of A can be constructed exactly as before.

This refinement not only reduces the effective number of variables; it also tends to reduce the size of the minimal sets. For example, with a reducing vector α = (1,1,1,0,0), vectors of A_(2) that differ only in component 1 coincide in Â_(2), and vectors of A_(3) that differ only in components 1 and 2 coincide in Â_(3), so that Min Â_(2) and Min Â_(3) may be strictly smaller than Min A_(2) and Min A_(3).
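In the bit-mask representation, the refinement costs one masking operation per set. The sketch below (my own illustration) forms Min Â_(j) from A and a reducing vector alpha, clearing exactly the components k with α_k = 1 and k < j.

    def min_hat(A, alpha, j):
        """Min Â_(j): drop vectors with bit j set; clear alpha's 1-bits below j."""
        fixed = alpha & ((1 << j) - 1)          # 1-positions of alpha below j
        hat = {a & ~fixed for a in A if not (a >> j) & 1}
        return {v for v in hat
                if not any(w != v and (w | v) == v for w in hat)}

    # With alpha = (1,0,1,1,0) packed as bits {0, 2, 3}, the case j = 3 (the
    # fourth variable) clears components 1 and 3 of every remaining vector.

Because distinct vectors of A_(j) can coincide after the masking, Min Â_(j) is often smaller than Min A_(j), as in the example above.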

For any given problem, this refinement may not improve computations significantly. If the reader cares to rework the example of the previous section, he will note little change. Questions such as which vector to choose as a reducing vector, how r-fold l.u.b.'s can be computed most effectively, and so forth, lie chiefly in the realm of heuristics. Anyone with experience programming a computer is likely to have a number of ideas of his own.

8. Comparison with Other Methods of Solution

The method of solution described here has not as yet been programmed for a computer or tested in any way, although plans are being made to do this. The method does need to be compared in efficiency with other algorithms, including:

(1) integer linear programming algorithms,
(2) "alternating path" algorithms,
(3) enumeration methods,
(4) "partitioning" or "branch and bound" methods.

It is evident that covering problems as defined here are simply a special class of integer linear programming problems. Cobham, et al. [1,2], have attempted to apply integer programming algorithms, and most recently Balinski [3] has begun systematic experimentation. Results appear to be fairly encouraging.

"Alternating path" methods stem from the theory of linear graphs, in which there exist fairly efficient algorithms by this name for covering the vertices of a graph with a subset of edges, or covering the edges with a subset of the vertices. Edmonds [4] and Ray-Chaudhuri [5] have attempted to extend these algorithms to more general types of covering problems. However, in the author's opinion, these algorithms show little promise of computational efficiency.

Under the heading of enumerative methods, we might include the method of S. R. Petrick [6], which was given an unusual formulation in Section 4. Petrick's method is satisfactory for hand computation with small problems, but is inefficient for machine computation with large problems. This is

because the number of minimal covers that must be enumerated can become quite astronomical.

Partitioning or "branching" methods are most closely associated with the names of J. P. Roth [7] and E. J. McCluskey [8]. This approach decomposes the original problem into two or more subproblems with separate constraint matrices. However, instead of combining these subproblems into a single problem, these problems are solved separately, with further decompositions and simplifications. The approach is very closely related to the method presented here. In fact, some combination of the two approaches might eventually prove to be most fruitful.

Finally, it is necessary to mention that the research reported here was stimulated by a recent paper of J. F. Gimpel [9]. In this paper, Gimpel presented a method for "reducing" the matrix of a covering problem in the case in which some row contained exactly two 1's. The extension of this idea to a general algorithm, and the explication of the underlying duality relations, are original to this author.

References

[1] A. Cobham, R. Fridshal, and J. H. North, "An Application of Linear Programming to the Minimization of Boolean Functions," Proceedings Second Annual Symposium on Switching Circuit Theory and Logical Design, AIEE Publication S-134, pp. 3-9 (1961).

[2] A. Cobham and J. H. North, "Extensions of the Integer Programming Approach to the Minimization of Boolean Functions," IBM Research Report RC-915, Yorktown Heights, New York (April, 1963).

[3] M. L. Balinski, private communication (January, 1965).

[4] J. Edmonds, "On Minimum Element-Covering from a Finite Class of Weighted Sets," presented at the April Meeting of the American Mathematical Society, New York, New York, Paper 578-42 (April, 1961).

[5] D. K. Ray-Chaudhuri, "An Algorithm for a Maximum Cover of an Abstract Complex," Canadian Journal of Mathematics, vol. 15, pp. 11-24 (1963).

[6] S. R. Petrick, "A Direct Determination of the Irredundant Forms of a Boolean Function from the Set of Prime Implicants," AFCRC-TR-56-110, Air Force Cambridge Research Center (April, 1956).

[7] J. P. Roth and E. G. Wagner, "Algebraic Topological Methods for the Synthesis of Switching Systems, Part III: Minimization of Nonsingular Boolean Trees," IBM Journal of Research and Development, vol. 4, pp. 326-344 (October, 1959).

[8] J. B. Pyne and E. J. McCluskey, Jr., "An Essay on Prime Implicant Tables," Journal of the Society for Industrial and Applied Mathematics, vol. 9, no. 4, pp. 604-631 (December, 1961).

[9] J. F. Gimpel, "A Reduction Technique for Prime Implicant Tables," Technical Report No. 40, Department of Electrical Engineering, Princeton University (August, 1964).
