RANDOM METHODS FOR IDENTIFYING NONREDUNDANT CONSTRAINTS

by

Robert L. Smith (1)
Jan Telgen (2)

Technical Report No. 81-4

1. Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109.
2. Center for Operations Research and Econometrics, 1348 Louvain-la-Neuve, Belgium.

April 1981

Abstract

A method to generate uniformly distributed points in a bounded region is used to develop a class of random methods to identify redundancy in systems of linear constraints. Nonredundant constraints are identified directly and the remaining constraints are labeled as redundant. Bounds on the probability that constraints are incorrectly labeled as redundant are derived. As a result, a formula is constructed giving the number of iterations of the method necessary to guarantee, within a given probability, that all constraints will be correctly labeled. Experimental results with some computationally attractive variants of this class of methods are presented.

1. INTRODUCTION

Various methods to identify (non)redundant constraints in systems of linear (in)equalities have been suggested in the literature. Most of these are somehow based upon the simplex method, usually via the minimization of slack variables (e.g., Thompson, Tonge and Zionts [1966], Lisy [1971], Gal [1975, 1978], Telgen [1979]). Other methods invoke the simplex method to enumerate all extreme points of the set of feasible solutions (e.g., Balinsky [1961], Mattheis [1973]), and use this information to identify redundant constraints.

Since all methods mentioned above require the complete updated simplex tableau at every iteration, their usefulness is limited to very small problems. Therefore preprocessors have been developed that work with the original problem data (e.g., Brearley, Mitra and Williams [1975], Bradley, Brown and Graves [1980]). One drawback of these preprocessors (e.g., the REDUCE option in many commercial mathematical programming packages) is that they generally do not identify all redundancy, since they are restricted to very simple operations.

Here we consider methods to identify redundant constraints based on procedures to generate random interior points in a bounded region. Basically, from such an interior point the methods search in a randomly generated direction. The constraint hit first can be shown to be nonredundant. After a large number of trials (i.e., random directions) from different interior points, the constraints that have not been hit are labeled as redundant. The first such method was suggested in Boneh and Golan [1979]. Independently, Smith [1980] developed part of the same method as a procedure to generate uniformly distributed points in a bounded region. In this paper we build upon the latter results to give a class of random methods to identify nonredundant constraints.

Section 2 contains a statement of the general method and the theorems required to establish its validity. Also in that section we prove that the hit points are uniformly distributed over the boundary of the polytope. This result enables us to conclude that the probability of hitting a given constraint is proportional to the area of the corresponding facet of the feasible polytope. As a consequence we construct stopping criteria for the method, guaranteeing (with a prespecified probability) identification of all nonredundant constraints with facets of a minimum prespecified size. In Section 3 we give some experimental results for computationally attractive variants of the method, such as

the one in which only coordinate directions are taken. The test problems are taken from Karwan, Telgen and Zionts [1981], an extensive computational study on redundancy identification, where results for other methods are readily available. The random methods perform relatively well; in only a fraction of the time required by the fastest of the other methods, our method identified more than 80% of all nonredundant constraints. Moreover, its relative performance is much better on larger problems. In the conclusion we discuss possible extensions of this approach to other areas (e.g., nonlinear systems) and to other problems (e.g., optimization).

2. THE ALGORITHM AND ITS PROBABILISTIC ANALYSIS

We start with a feasible region S defined by a system of inequality constraints

(2.1)  g_i(x) ≤ b_i,  i = 1, 2, ..., m

with x ∈ R^n and b_i ∈ R for all i. We could require a regularity condition on the g_i's so that the region S has content and surface area. However, to simplify the exposition we will restrict discussion here to the case where S is defined by a system of linear inequalities

(2.2)  Ax ≤ b

with A ∈ R^{m×n}, b ∈ R^m, and x ∈ R^n. Rows of A will be denoted by a_i^T, i = 1, 2, ..., m, and without loss of generality we will assume that ||a_i|| = 1. We define the feasible region

(2.3)  S = {x ∈ R^n | Ax ≤ b}.

A point x^0 ∈ R^n is called an interior point of S if Ax^0 < b. It is assumed that such a point exists; if some of the constraints form implicit equalities, then a point in the relative interior of S will suffice.

Now the algorithm consists of three parts: the generation of interior points of S (Steps 1 and 3), the identification of nonredundant constraints (Step 2), and a stopping criterion (Step 4). The algorithm proceeds by passing lines through interior points and identifying constraints struck by the lines as nonredundant. The points of intersection of the lines with the boundary of S are called hit points. The lines generated serve the dual purpose of providing the next interior iteration points.

Initialization: Find an interior point X^0. Set k = 0.

Step 1: Generate a random direction D^k uniformly distributed over a direction set D (a centrally symmetric subset of the boundary of a unit hypersphere).

Step 2: Determine

(2.4)  λ_i = (b_i − a_i^T X^k) / (a_i^T D^k),  i = 1, 2, ..., m

(2.5)  λ^+ = min_i {λ_i | λ_i > 0},  attained for i = i^+

(2.6)  λ^− = max_i {λ_i | λ_i < 0},  attained for i = i^−

Identify constraints i^+ and i^− as nonredundant.

Step 3: Generate U ∈ R from a uniform distribution on the interval (0, 1) and set

X^{k+1} = X^k + (λ^− + U(λ^+ − λ^−)) D^k.

Step 4: If a stopping criterion is satisfied, stop; otherwise set k = k + 1 and proceed with Step 1.

Several comments are in order.

(1) Determining a point uniformly distributed on a unit hypersphere (or a part thereof) in Step 1 may be accomplished by setting D^k = N/||N||, where N is an n-component vector of standard normal random variables (Knuth [1969]).

(2) The proof that the constraints identified in Step 2 are nonredundant is very simple and therefore omitted (see also Boneh and Golan [1979]). In fact it can be shown more generally that the constraint closest to an iteration point of S is nonredundant; then by considering the hit points as

limiting cases of iteration points, the result used here follows immediately. Obviously uniqueness of the minima in (2.5) and (2.6) is required, but that holds with probability one for the points generated by the random method.

(3) The points X^0, X^1, X^2, ... correspond to the iteration points, while Y^{k+} = X^k + λ^+ D^k and Y^{k−} = X^k + λ^− D^k correspond to the hit points.

Different choices for the direction set D lead to different versions of the algorithm. The version of Boneh and Golan [1979] corresponds to D equal to the entire hypersphere. Several versions are explored as to their relative computational merits in Section 3. In any case, so long as D is centrally symmetric (i.e., −d ∈ D whenever d ∈ D), the iteration points are asymptotically uniformly distributed within S (Smith [1981]). More important for our purposes, however, is that the hit points are asymptotically uniformly distributed over the boundary of S. A case where this is directly evident is where we restrict iteration points to an ε-strip around the boundary of S. Letting ε → 0, the iteration and hit points coincide and the uniformity of the hit points over the boundary follows. This algorithm, informally referred to as "Shake and Bake," incidentally may have considerable computational merit. The result below will serve as the basis for establishing stopping criteria for the general class of random algorithms.

THEOREM 1: The hit points Y^+ and Y^− of the algorithm are asymptotically uniformly distributed over the boundary of S.

PROOF: We shall argue the result separately for the two sequences <Y^{k+}> and <Y^{k−}>. Without loss of generality, consider the sequence <Y^{k+}> = Y^{0+}, Y^{1+}, Y^{2+}, .... This is a continuous state Markov chain, and by a result in Smith [1981] it suffices to show that the transition probability density function f is symmetric, i.e., that f(p, q) = f(q, p) for all p, q on the boundary of S. Let W, U, and V be the iteration points giving rise to the two successive hit points p and q (see Figure 1). Now we go from hit point p to hit point q (in that order) iff we go from iteration point W to iteration point U to iteration point V (in that order). But the order W to U to V is just as likely as the order V to U to W since the direction set D is symmetric. Hence the likelihood of going from p to q is the same as the likelihood of going from q to p, and the result follows.

[Figure 1: two successive hit points p and q on the boundary of S, with intermediate iteration points W, U, V]

It is immediate from Theorem 1 that the probability p_i of identifying constraint i as nonredundant during a given iteration is proportional to the surface area a_i of the corresponding facet (this presumes that the algorithm has iterated long enough to achieve uniformity). In fact, we have

(2.7)  p_i = a_i / Σ_j a_j,  i = 1, 2, ..., ℓ

where there are ℓ ≤ m nonredundant constraints in all.

It is argued in Smith [1981] that a random permutation of the indices of the points generated, Y^(0), Y^(1), ..., Y^(s), for a large sample size s will behave as if they were independent as well as identically distributed uniform. Since our algorithm for nonredundant constraint identification is permutation invariant, we may model the hit points Y^k for large s as i.i.d. random variables. In particular, the events corresponding to constraints identified as nonredundant are modeled as independent over the iterations.

Let K be the number of independent and uniformly distributed hit points Y^k over the boundary of S necessary to identify all ℓ facets (nonredundant constraints) of S. (Since the two hit points Y^+ and Y^− are strongly correlated, we will be conservative and identify only one hit point per iteration.)

THEOREM 2: The cumulative distribution function of K is

(2.8)  F(k) = P(K ≤ k) = 1 − Σ_i (1 − p_i)^k + Σ_{i<j} (1 − p_i − p_j)^k − Σ_{i<j<h} (1 − p_i − p_j − p_h)^k + ... ± (1 − p_1 − p_2 − ... − p_ℓ)^k

(Note: the last term is zero) and the expected number of iterations E(K) necessary to identify all facets is:

(2.9)  E(K) = Σ_i (p_i)^{-1} − Σ_{i<j} (p_i + p_j)^{-1} + Σ_{i<j<h} (p_i + p_j + p_h)^{-1} − ... ± (p_1 + p_2 + ... + p_ℓ)^{-1}.

PROOF: Let q_i be the number of points that identify facet i out of a total of k points. Then K > k if and only if q_1 = 0 or q_2 = 0 or ... q_ℓ = 0; thus

P(K > k) = P(q_1 = 0 or q_2 = 0 or ... q_ℓ = 0).

Let

P_i = P(q_i = 0) = (1 − p_i)^k
P_ij = P(q_i = 0 and q_j = 0) = (1 − p_i − p_j)^k
P_ijh = P(q_i = 0 and q_j = 0 and q_h = 0) = (1 − p_i − p_j − p_h)^k

etc. Let

Q_1 = Σ_i P_i,  Q_2 = Σ_{i<j} P_ij,  Q_3 = Σ_{i<j<h} P_ijh,  ...

Note that Q_1 has ℓ terms in all, Q_2 has (ℓ choose 2) terms, Q_3 has (ℓ choose 3) terms, etc.

Then from the inclusion-exclusion theorem (see Feller [1968], Vol. 1, p. 99) we get

P(q_1 = 0 or q_2 = 0 or ... q_ℓ = 0) = Q_1 − Q_2 + Q_3 − ... ± Q_ℓ.

From F(k) = 1 − Q_1 + Q_2 − Q_3 + ... our first result (2.8) follows. As for the second result,

E(K) = Σ_{k=0}^∞ P(K > k)
     = Σ_{k=0}^∞ [ Σ_i (1 − p_i)^k − Σ_{i<j} (1 − p_i − p_j)^k + Σ_{i<j<h} (1 − p_i − p_j − p_h)^k − ... ± (1 − p_1 − p_2 − ... − p_ℓ)^k ].

Interchanging the order of summation, which is justified since all terms within each inner sum are nonnegative, yields

E(K) = Σ_i Σ_{k=0}^∞ (1 − p_i)^k − Σ_{i<j} Σ_{k=0}^∞ (1 − p_i − p_j)^k + Σ_{i<j<h} Σ_{k=0}^∞ (1 − p_i − p_j − p_h)^k − ... ± Σ_{k=0}^∞ (1 − p_1 − p_2 − ... − p_ℓ)^k.

The result then follows from the fact that Σ_{k=0}^∞ x^k = 1/(1 − x) for |x| < 1.

THEOREM 3:

(2.10)  E(K) ≤ Σ_{i=1}^ℓ 1/p_i

PROOF: Since the negative terms in (2.9) are larger than the next positive terms, we get an upper bound on E(K) by neglecting the second and all succeeding terms. Hence (2.10) follows.

Note: this bound could be obtained more directly by considering E(K) for a specific sequence of identified facets, say i = 1 → 2 → 3 → ... → ℓ. Then E(K) is the sum of ℓ (independent) geometric random variables and the result follows.

We can also provide a lower bound on E(K) that is more restrictive than the obvious lower bound of ℓ.

THEOREM 4:

(2.11)  E(K) ≥ ℓ Σ_{i=1}^ℓ 1/i ≈ ℓ ln ℓ  for large ℓ.

PROOF: First observe from (2.9) that E(K) attains its minimal value when all p_i are equal, i.e., p_i = 1/ℓ, i = 1, 2, ..., ℓ. We are then reduced to the so-called (coupon) collector's problem. See Feller [1968], Vol. 1, p. 225 for a proof that E(K) takes on the form stated.
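The identities in Theorems 2-4 are easy to check numerically. The sketch below (our illustration, not part of the original report) evaluates (2.9) by summing over all nonempty facet subsets and compares it with the coupon-collector value of Theorem 4 and the upper bound of Theorem 3 for a regular polytope with equal facet probabilities.

```python
from itertools import combinations

def expected_hits_to_cover(p):
    """E(K) from the inclusion-exclusion formula (2.9):
    an alternating sum of 1/sum(p_S) over all nonempty subsets S of facets."""
    total = 0.0
    for size in range(1, len(p) + 1):
        sign = 1.0 if size % 2 == 1 else -1.0
        for subset in combinations(p, size):
            total += sign / sum(subset)
    return total

# Regular polytope with ell equal facets: p_i = 1/ell.
ell = 5
p = [1.0 / ell] * ell
ek = expected_hits_to_cover(p)

# Coupon-collector value ell * (1 + 1/2 + ... + 1/ell); Theorem 4 holds with
# equality in the equal-probability case.
cc = ell * sum(1.0 / i for i in range(1, ell + 1))
print(round(ek, 6), round(cc, 6))        # both 11.416667

# Upper bound of Theorem 3: sum of 1/p_i = ell^2 = 25.
print(ek <= sum(1.0 / pi for pi in p))   # True
```

For unequal p_i the subset sum has 2^ℓ − 1 terms, which is exactly why the simpler bounds (2.10) and (2.11) are useful in practice.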

Using Theorem 2 and with knowledge of p_1, p_2, ..., p_ℓ, we could establish a minimum number of iterations k necessary to identify all nonredundant constraints with probability 1 − α. However, neither the p_i nor even ℓ is typically known. An alternative is establishing such a k when all p_i ≥ p_min, where p_min is prespecified and small. The following lemma is a first step toward that goal.

LEMMA 5:

F_L(k) := 1 − Σ_i (1 − p_i)^k ≤ F(k) ≤ 1 − Σ_i (1 − p_i)^k + Σ_{i<j} (1 − p_i − p_j)^k =: F_U(k).

The error ε involved in estimating F(k) by F_L(k) is at most

ε ≤ (ℓ choose 2) (1 − 2 p_min)^k.

PROOF: The first line of inequalities follows from the Bonferroni inequalities (Feller [1968], p. 110). The bound for ε follows from ε = F(k) − F_L(k) ≤ F_U(k) − F_L(k) = Σ_{i<j} (1 − p_i − p_j)^k ≤ (ℓ choose 2)(1 − 2 p_min)^k.

The error involved in the estimate F(k) ≈ F_L(k) is by Lemma 5 negligibly small in most cases. For example, for ℓ = 100 constraints and p_min = 0.01, ε < 0.001 for k > 750. We shall employ this approximation in the theorem below.

THEOREM 6: Let k* be the number of iterations necessary to identify all ℓ nonredundant constraints with probability 1 − α. Set r = (1/ℓ)/p_min. Then, for rℓ large,

k* ≤ γ(r, α, ℓ) ℓ ln ℓ,  where γ(r, α, ℓ) = r(1 + ln(1/α)/ln ℓ).

PROOF: We want F(k) ≥ 1 − ℓ(1 − p_min)^k = 1 − ℓ(1 − 1/(rℓ))^k ≥ 1 − α. Hence k* ≤ ln(α/ℓ)/ln(1 − 1/(rℓ)) ≈ −rℓ ln(α/ℓ) for rℓ large. Therefore k* ≤ rℓ(ln ℓ − ln α) = r(1 + ln(1/α)/ln ℓ) ℓ ln ℓ.

Theorem 6 provides a stopping rule on the number of iterations k* to perform. Note that γ(r, α, ℓ) is the inflation factor over the number of iterations required in the regular polytope case of p_i = 1/ℓ for all i = 1, 2, ..., ℓ, which we know to vary as ℓ ln ℓ. Note also that γ(r, α, ℓ) ≈ r for large ℓ; that is, if the smallest facet area is 1/r-th of the average for a regular polytope, then the number of iterations required is r times as great. The order of magnitude is, however, still ℓ ln ℓ. As an example, for ℓ = 100 nonredundant constraints, r = 10, and α = 0.05, we get k* ≈ 7600 iterations. Finally, we should mention that the dimension of the space, n, has no influence on any of the results derived here.

3. EXPERIMENTAL RESULTS

To apply the method described in the previous section we need an initial interior point in S. If not readily available, this may be obtained from a simplex Phase I procedure. Such a procedure is required as well if there are (explicit or implicit) equality constraints in the system.

The experimental results given below are taken from Karwan, Telgen and Zionts [1981]. The tests involved both randomly generated problems and structured problems taken from the literature. The random test problems were generated using a modified version of LPGENR (Michaels and O'Neill [1975]) according to the following table.

Set   No. of Variables   No. of Constraints   Degeneracy   Planned Redundancy
1     10                 20                   None         None
2     10                 20                   50%          None
3     10                 30                   None         10 constraints
4     10                 30                   50%          10 constraints
5     10                 30                   None         None
6     10                 30                   50%          None

Table 1: Randomly Generated Test Problems

Notes:
- 50% degeneracy means that in two different extreme points only 5 variables have a nonzero value.
- Sets 3 and 4 were obtained from sets 1 and 2 by the addition of linear combinations of constraints.
- Each set contains 5 problems.

The characteristics of the structured problems are summarized in Table 2.

Problem   No. of Variables   No. of Constraints   Source
A         12                 26                   Disbursement problem, Nykamp and Spronk [1978]
B         10                 22                   Disbursement problem, Nykamp and Spronk [1978]
C         20                 29                   Diet problem, Dantzig [1963]
F         15                 37                   Production planning, Meyerman [1979]
G         23                 59                   Production planning, Meyerman [1966]
I         5                  29                   Production planning, Tischer [1968]

Table 2: Structured Test Problems

Admittedly these test problems are very small, but most redundancy identification methods are not efficient on much larger problems. In Karwan, Telgen and Zionts [1981] it is concluded that the method by Lotfi [1981], using components from the methods by Zionts, Gal and Telgen, performs best in identifying all redundant (and nonredundant) constraints. We will use this method to compare with the following variants of the random method developed in the preceding section:

(a) "basic version": purely random directions (D equal to a unit hypersphere), determination of hit points along the generated direction only.

(b) "simple coordinate version": random directions parallel to the coordinate axes (D equal to the intersection of a unit hypersphere with the coordinate axes), determination of hit points along the generated direction only.

(c) "multiple coordinate version": random directions parallel to the coordinate axes, determination of hit points in all coordinate directions.

We started the random methods from an interior point obtained by perturbing the starting point of Lotfi's procedure (a feasible extreme point of S). In neither of the tables below, where the results are summarized, do we take into account the time and effort required to obtain a starting point.
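For concreteness, the iteration of Section 2 under variants (a) and (b) can be sketched as follows. This is our own illustrative reconstruction, not the code used in the experiments; the region is assumed to be given as Ax ≤ b with normalized rows and an interior starting point, and the test polytope (a unit square plus one redundant constraint) is a made-up example.

```python
import numpy as np

def random_method(A, b, x, n_iter=5000, coordinate=False, seed=0):
    """One run of the random method: repeatedly shoot a random line from the
    current interior point x, mark the two constraints hit first in each
    direction (Step 2) as nonredundant, and move to a uniformly distributed
    point on the chord (Step 3)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    hit = np.zeros(m, dtype=bool)
    for _ in range(n_iter):
        if coordinate:                        # variant (b): random signed coordinate axis
            d = np.zeros(n)
            d[rng.integers(n)] = rng.choice([-1.0, 1.0])
        else:                                 # variant (a): uniform on the unit hypersphere
            d = rng.standard_normal(n)
            d /= np.linalg.norm(d)
        ad = A @ d
        with np.errstate(divide="ignore"):
            lam = (b - A @ x) / ad            # eq. (2.4); +-inf where a_i^T d = 0
        lam_plus = lam[ad > 0].min()          # eq. (2.5)
        lam_minus = lam[ad < 0].max()         # eq. (2.6)
        hit[(ad > 0) & np.isclose(lam, lam_plus)] = True
        hit[(ad < 0) & np.isclose(lam, lam_minus)] = True
        x = x + (lam_minus + rng.uniform() * (lam_plus - lam_minus)) * d  # Step 3
    return hit

# Unit square 0 <= x1, x2 <= 1 plus the redundant constraint x1 <= 2.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 0.0]])
b = np.array([1.0, 0.0, 1.0, 0.0, 2.0])
hit = random_method(A, b, np.array([0.5, 0.5]))
print(hit)   # the four facets of the square are hit; x1 <= 2 never is
```

The redundant constraint can never be the first one struck along any direction (x1 ≤ 1 always binds earlier), so it is correctly left unlabeled, exactly the mechanism the method relies on.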

[Table 3: Summary of results on the randomly generated test problems]

[Table 4: Summary of results on the structured test problems]

In addition to the results mentioned in Tables 3 and 4, we found that most of the nonredundancy identified by the random method was identified in the earlier stages of applying the method. As typical values, we mention the percentages of nonredundancy found in the first quartile (0.25 seconds) of applying the random method to the randomly generated test problems: for variants (a), (b) and (c) these were 88.6%, 96.0% and 96.8% respectively (averaged over 30 problems). Also, the last nonredundant constraint identified on the structured problems was identified after only 0.11 seconds on the average.

In evaluating these results it is immediately clear that variant (a) is dominated by variants (b) and (c) as a consequence of the computational savings achieved by using coordinate directions (see also Karwan, Telgen and Zionts [1981]). Variant (b) performed slightly better than variant (c), which may be caused by the generation of more iteration points in variant (b). In comparing the random method with Lotfi's method on the randomly generated problems, we see that at least 80% of all nonredundant constraints were identified in 0.25 seconds CPU time, whereas Lotfi's method took an average of 0.56 seconds to find all nonredundant constraints. On the structured problems the random method took an average of 0.11 seconds to identify 90% of all nonredundant constraints, whereas Lotfi's method took an average of 0.58 seconds. In conclusion, therefore, it is possible to identify a large part of the nonredundancy

(say 80%) in a small amount of time (20% of that required to find all redundancy).

In these experiments we did not use any of the stopping criteria developed in Section 2. It is interesting to note, however, what their application would have implied. Since these stopping criteria require input on uncertain parameters, it is important to note the influence of approximating certain values. Therefore we used Theorem 6 to construct the following table of upper bounds on k* (the number of iterations required to identify all nonredundant constraints with probability 1 − α) for α = .05. To illustrate these bounds we used the characteristics of the randomly generated test problems.

                    r = 5               r = 10
Problem Set     ℓ = m    True ℓ     ℓ = m    True ℓ
1               599      564        1198     1129
2               599      461        1198     923
3               960      564        1919     1129
4               960      461        1919     923
5               960      777        1919     1554
6               960      599        1919     1198

Table 5: Bounds on k*
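The entries of Table 5 can be reproduced from Theorem 6 in the form k* ≈ rℓ ln(ℓ/α). A small sketch, assuming this reading of the bound; the "true ℓ" facet counts of 19, 16, 25 and 20 used below are our own inference from the table entries, not values stated explicitly in the report.

```python
from math import log

def k_star(ell, r, alpha=0.05):
    """Theorem 6 bound on the number of iterations needed to identify,
    with probability 1 - alpha, every facet with p_i >= 1/(r*ell)."""
    return round(r * ell * log(ell / alpha))

# ell = m columns of Table 5 (m = 20 for sets 1-2, m = 30 for sets 3-6).
print(k_star(20, 5), k_star(20, 10))   # 599 1198
print(k_star(30, 5), k_star(30, 10))   # 960 1919

# "True ell" columns, with the inferred facet counts per problem set.
for ell in (19, 16, 25, 20):
    print(ell, k_star(ell, 5), k_star(ell, 10))

# The example following Theorem 6: ell = 100, r = 10, alpha = 0.05.
print(k_star(100, 10))                 # about 7600
```

Note how the bound scales linearly in r: halving the smallest facet area one is willing to guarantee doubles the iteration budget.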

From Table 5 it is seen that it is relatively easy to find all facets with a "large" area (small r). Also, the estimated number of nonredundant constraints should not be too far away from the true value of ℓ (as it is in sets 3 and 4), since an overestimate induces too many unnecessary iterations. Finally, note that it is possible to derive the magnitude of the facet areas that have been hit with probability 1 − α after k iterations: all facets with p_i ≥ 1/(rℓ) have been hit with probability 1 − α for

r ≤ k / (ℓ ln(ℓ/α)).

For example, if k = 360, α = .05, and ℓ = 20, then r ≈ 3; that is, all facets with an area of at least one third of the average facet area have been hit with probability 1 − α.

4. CONCLUSION

Thus far the random methods have been viewed as preprocessors that are capable of identifying nonredundant constraints. The nonidentified constraints can then be omitted in an optimization algorithm, although the solution has to be checked for feasibility afterwards, since the omission of some constraints may not be justified. As an aid in this process we have developed stopping criteria and derived probability statements for the random methods. This approach can be extended further by the inclusion of Bayesian arguments; we hope to report on this in the near future.

In addition to an application as a preprocessor, the random methods could also be used as integral parts of an

optimization procedure, especially in the Bayesian mode approach. This topic is currently under investigation.

Although we described the random methods entirely in terms of linear constraints, they are applicable to nonlinear constraints as well. Boneh and Golan [1979] give an example of a problem with quadratic constraints that could be handled only after removal of the redundant constraints. Furthermore, with some minor modifications, nonconvex feasible regions (and even disconnected ones) can be treated by the same class of random methods.

REFERENCES

1. Balinsky, M. L. [1961], "An Algorithm for Finding all Vertices of a Convex Polyhedral Set," J. Soc. Ind. Appl. Math. 9 (1), 72-88.
2. Boneh, A. and Golan, A. [1979], "Constraint Redundancy and Feasible Points Generator," presented at EURO III, Amsterdam; also in Karwan, Telgen and Zionts [1981].
3. Bradley, G. H., Brown, G. G. and Graves, G. W. [1980], "Structural Redundancy in Large-Scale Optimization Models," Technical Report, Naval Postgraduate School, Monterey, California 93940; also in Karwan, Telgen and Zionts [1981].
4. Brearley, A. L., Mitra, G. and Williams, H. P. [1975], "Analysis of Mathematical Programming Problems Prior to Applying the Simplex Algorithm," Math. Progr. 8, 54-83.
5. Dantzig, G. B. [1963], Linear Programming and Extensions, Princeton University Press.
6. Feller, W. [1968], An Introduction to Probability Theory and its Applications, Vol. I, John Wiley & Sons, Inc.
7. Gal, T. [1975], "Zur Identifikation Redundanter Nebenbedingungen in Linearen Programmen," Zeitschrift für Operations Research 19, 19-28.
8. Gal, T. [1978], Redundancy in Systems of Linear Inequalities Revisited, Discussion Paper 19, Fernuniversität, Hagen, BRD.
9. Karwan, M. H., Telgen, J. and Zionts, S. [1981], Redundancy in Mathematical Programming, Springer Verlag, New York-Heidelberg-Berlin.
10. Knuth, D. E. [1969], The Art of Computer Programming, Addison-Wesley, Reading, Mass.
11. Lisy, J. [1971], "Metody pro Nalezeni Redundantnich Omezeni v Ulohach Linearniho Programovani," Ekonomicko-Matematicky Obzor 7 (3), 285-298.
12. Lotfi, V. [1981], "Redundancy in Mathematical Programming," Dissertation, State University of New York, Buffalo, New York 14260.
13. Mattheis, T. H. [1973], "An Algorithm for Determining Irrelevant Constraints and all Vertices in Systems of Linear Inequalities," Operations Research 21, 247-260.

14. Meyerman, G. L. [1966], "Betekenis van een Aantal Cultuurtechnische Factoren voor de Ontwikkelingsmogelijkheden van Veenkoloniale Akkerbouwbedrijven," Dissertation, Agricultural University Wageningen, The Netherlands.
15. Meyerman, B. G. [1979], Private Communication.
16. Michaels, W. M. and O'Neill, R. P. [1975], "User's Guide for LPGENR," Dept. of Computer Science, Louisiana State University, Baton Rouge, LA.
17. Nykamp, P. and Spronk, J. [1978], "Three Cases in Multiple Criteria Decision Making: an Interactive Multiple Goal Programming Approach," Report 7822, Centre for Research in Business Economics, Erasmus University, Rotterdam.
18. Smith, R. L. [1981], "Efficient Monte Carlo Procedures for Generating Points Uniformly Distributed Over Bounded Regions," Report 81-1, Dept. of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI.
19. Telgen, J. [1979], Redundancy and Linear Programs, Mathematical Centre, Amsterdam.
20. Thompson, G. L., Tonge, F. M. and Zionts, S. [1966], "Techniques for Removing Nonbinding Constraints and Extraneous Variables from Linear Programming Problems," Management Science 12 (7), 588-608.
21. Tischer, H. J. [1968], "Mathematische Verfahren zur Reduzierung der Zeilen- und Spaltenzahl Linearer Optimierungsaufgaben," Dissertation, Zentralinstitut für Fertigungstechnik des Maschinenbaues, Karl-Marx-Stadt, DDR.