
A Steepest Descent Multiplier Adjustment Method for the Generalized Assignment Problem

Nejat Karabakal, Faculty of Business Administration, Bilkent University, 06533 Ankara, Turkey
James C. Bean, Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, Michigan 48109-2117
Jack R. Lohmann, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332

Technical Report 92-11
January 1992 (Revised October, 1993)

Abstract

Multiplier adjustment methods (MAMs) have been used to solve the Lagrangian dual of the generalized assignment problem. We improve the traditional MAM by incorporating a post-optimality analysis for the 0-1 knapsack subproblems based on a dynamic programming formulation. Our approach guarantees a steepest descent improvement at each iteration and eliminates all intermediate knapsack solutions required by the traditional MAM for calculating a step length. We also describe a branch-and-bound algorithm that incorporates the improved MAM as a bounding tool. Computational results obtained with the algorithm are reported.

1 Introduction

The generalized assignment problem (GAP) seeks to optimally assign n tasks to m agents such that each task is assigned to exactly one agent, while each agent is constrained only by the amount of an available resource. Let c_{ij} be the profit of assigning task j to agent i, a_{ij} be the resource required by agent i to perform task j, and b_i be the resource available to agent i. Suppose a 0-1 variable, x_{ij}, is set to one if task j is assigned to agent i, zero otherwise. Then, an integer programming formulation for the GAP is:

maximize   \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}          (1)

subject to \sum_{j=1}^{n} a_{ij} x_{ij} \le b_i,  i = 1, ..., m  (2)

           \sum_{i=1}^{m} x_{ij} = 1,  j = 1, ..., n             (3)

           x_{ij} \in \{0, 1\},  all i, j.                       (4)

Aside from many practical problems that can be put into the above formulation directly, other applications can be reformulated to the GAP structure. Ross and Soland [17] show how to formulate a number of important facility location and location-allocation problems as GAPs. In more sophisticated applications, the GAP appears as a subproblem. Fisher and Jaikumar [4] describe a two-phase vehicle routing heuristic where, in the first phase, a GAP is solved to assign customers to vehicles and, in the second phase, traveling salesman heuristics are invoked to determine individual vehicle routes. The shopping mall design project in Bean et al. [2] could be formulated as a GAP. In that problem, x_{ij} = 1 if store j is located in area i of the mall. Both the GAP and finding a feasible solution to (2)-(4) are NP-hard problems ([5] and [15]).
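To make formulation (1)-(4) concrete, the following minimal sketch (ours, not part of the original study) solves a tiny hypothetical instance by brute-force enumeration; real instances require the bounding and branching machinery developed below.

```python
from itertools import product

def solve_gap_brute_force(c, a, b):
    """Enumerate all m^n assignments of tasks to agents and return the
    best profit and assignment satisfying the capacity constraints (2).
    c[i][j]: profit of assigning task j to agent i
    a[i][j]: resource agent i needs for task j
    b[i]:    resource available to agent i
    Only practical for very small instances (m^n grows quickly)."""
    m, n = len(c), len(c[0])
    best_value, best_assign = None, None
    for assign in product(range(m), repeat=n):  # assign[j] = agent of task j
        load = [0] * m
        for j, i in enumerate(assign):
            load[i] += a[i][j]
        if all(load[i] <= b[i] for i in range(m)):   # constraints (2)
            value = sum(c[i][j] for j, i in enumerate(assign))
            if best_value is None or value > best_value:
                best_value, best_assign = value, assign
    return best_value, best_assign

# A hypothetical instance with two agents and three tasks.
c = [[9, 2, 8], [6, 7, 4]]
a = [[3, 5, 4], [4, 3, 5]]
b = [7, 8]
print(solve_gap_brute_force(c, a, b))   # (24, (0, 1, 0))
```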

A common solution technique for the GAP has been branch-and-bound with upper bounds obtained by various relaxations. The straightforward linear programming relaxation, given by replacing (4) with x_{ij} \ge 0 for all i and j, is rarely used in the literature since it fails to exploit the structure and results in large duality gaps. Ross and Soland [16] used a relaxation replacing (2) with a_{ij} x_{ij} \le b_i, for all i and j. Martello and Toth [14] proposed another relaxation by simply removing the multiple-choice constraints (3). They reported improved performance over the Ross-and-Soland algorithm for problems where the knapsack constraints (2) are tight. Later, several Lagrangian relaxation approaches were proposed for the GAP. Chalmet and Gelders [3] compared the effects of relaxing either (2) or (3) with respect to duality gaps and computational times; a subgradient optimization is used in the search process. Jörnsten and Näsberg [9] used a variable-splitting (Lagrangian decomposition) approach in which two subproblems are generated, one relaxing (2) and the other relaxing (3). Fisher, Jaikumar, and Van Wassenhove [5] obtained a Lagrangian relaxation by dualizing the multiple-choice constraints (3). Instead of using the more common subgradient optimization to solve the Lagrangian dual, they developed a multiplier adjustment method (MAM) for determining good multipliers. MAMs are heuristic algorithms for solving Lagrangian duals by exploiting the special structure of a particular problem. They also reported significant improvements over the Ross-and-Soland and Martello-and-Toth algorithms in their computational tests. Later, Guignard and Rosenwein [7] suggested several enhancements over this MAM, but the basic structure remained the same. In subsequent discussions, we will refer to the Fisher-Jaikumar-Van Wassenhove version of the MAM that incorporates some of the enhancements of Guignard and Rosenwein as the traditional MAM.

To construct the Lagrangian relaxation that dualizes (3), let u_j be the multiplier associated with multiple-choice constraint j. Then

L(u) = \sum_{j=1}^{n} u_j + \max \left\{ \sum_{i=1}^{m} \sum_{j=1}^{n} (c_{ij} - u_j) x_{ij} : x \text{ satisfies (2) and (4)} \right\}.   (5)

For a given u, L(u) is an upper bound on the optimum GAP objective. The tightest bound is obtained by solving the Lagrangian dual problem: min_u L(u). Although it cannot guarantee bounds better than those obtained by a subgradient algorithm, the MAM usually provides faster bound improvements in solving the Lagrangian dual. It starts with given initial multipliers and solves the corresponding Lagrangian relaxation. If the Lagrangian solution, i.e., x satisfying only (2) and (4), is also feasible to (3), it is optimal; otherwise violated multiple-choice constraints give descent directions of L(u) over the multiplier space. The method then finds a step length along a descent direction, updates the multipliers, and iterates. Further discussion is given in Section 2.
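For fixed u, the maximization in (5) separates into m independent 0-1 knapsacks, one per agent. The sketch below is our own Python rendering of this bound computation, assuming integer weights and capacities; the helper knapsack_dp is reused by later sketches.

```python
def knapsack_dp(values, weights, capacity):
    """Standard 0-1 knapsack DP over integer capacities; items with
    nonpositive value are never taken. Returns the optimal objective."""
    f = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        if v <= 0:
            continue                      # taking a nonpositive item never helps
        for beta in range(capacity, w - 1, -1):
            f[beta] = max(f[beta], f[beta - w] + v)
    return f[capacity]

def lagrangian_bound(c, a, b, u):
    """L(u) = sum_j u_j + sum_i max{ sum_j (c_ij - u_j) x_ij : (2),(4) }."""
    m, n = len(c), len(c[0])
    total = sum(u)
    for i in range(m):
        cbar = [c[i][j] - u[j] for j in range(n)]   # Lagrangian profits
        total += knapsack_dp(cbar, a[i], b[i])
    return total
```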

In this paper, we propose algorithmic improvements to increase the effectiveness of the traditional MAM as a bounding tool. These improvements are due to a post-optimality analysis for the 0-1 knapsack subproblems based on a dynamic programming (DP) formulation. In computing the value of L(u) for a given u, m knapsack problems are first solved using forward recursions. Subsequent backward recursions generate useful post-optimality information that enables the MAM to achieve a guaranteed steepest descent improvement at each iteration. It also eliminates all intermediate knapsack solutions required by the traditional MAM for calculating a step length. The post-optimality analysis is explained in Section 3, followed by a description of its use in improving the traditional MAM in Section 4. In Section 5, we present a branch-and-bound algorithm that incorporates the improved MAM in upper bounding. Moreover, we describe a lower bounding heuristic that makes feasible an infeasible Lagrangian solution. Computational results obtained with the algorithm are also reported.

2 Traditional MAM

In this section we describe the traditional MAM and discuss the improvements proposed. First, we introduce some further notation. If (d_1, ..., d_m) is a vector of real numbers, then min2_i{d_i} is the second smallest element of d over i. Similarly, argmin2_i{d_i} designates the index of the second smallest element. Define max2_i{d_i} and argmax2_i{d_i} analogously. Given some solution, x, let p_j = \sum_i x_{ij} for all j.

The traditional MAM, as we define it, is stated in detail in [12]. Given a violated multiple-choice constraint, j*, and a current optimal solution, x, the key calculation in the traditional MAM finds \Delta_{ij*}, the least decrease (increase) in u_{j*} required for setting x_{ij*} = 1 (x_{ij*} = 0) in an optimum knapsack solution of row i if p_{j*} = 0 (p_{j*} > 1). The traditional MAM uses the following approach in determining the \Delta_{ij*} quantities. Define \bar{c}_{ij} = c_{ij} - u_j, and z_i = \sum_j \bar{c}_{ij} x_{ij}. Suppose we removed variable x_{ij*} from knapsack i. We will call the remaining problem with (n - 1) variables a restricted knapsack. Let z_i^R(\beta) be the optimum objective value of the restricted knapsack when its right-hand-side capacity is \beta. Now, if p_{j*} = 0 and x_{ij*} = 0, the least increment in \bar{c}_{ij*} that would cause x_{ij*} = 1 in an alternative optimum solution is given by

\Delta_{ij*} = z_i - \bar{c}_{ij*} - z_i^R(b_i - a_{ij*}).   (6)

Similarly, if p_{j*} > 1 and x_{ij*} = 1, the least decrement in \bar{c}_{ij*} that would cause x_{ij*} = 0 in an optimum solution of knapsack i is given by

\Delta_{ij*} = z_i - z_i^R(b_i).   (7)
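The following sketch shows how (6) and (7) translate into restricted-knapsack re-solves, reusing knapsack_dp from the previous sketch; the function name and interface are our own, not the authors' code.

```python
def delta_traditional(cbar_i, a_i, b_i, jstar, x_ijstar):
    """Delta_{ij*} by restricted-knapsack re-solves, equations (6)-(7).
    cbar_i, a_i: Lagrangian profits and weights of knapsack row i.
    x_ijstar: value of x_{ij*} in the current optimum of that row."""
    z_i = knapsack_dp(cbar_i, a_i, b_i)            # current row optimum
    rest_c = cbar_i[:jstar] + cbar_i[jstar + 1:]   # drop variable x_{ij*}
    rest_a = a_i[:jstar] + a_i[jstar + 1:]
    if x_ijstar == 0:
        if a_i[jstar] > b_i:
            return float('inf')                    # x_{ij*} can never be 1
        # Equation (6): least increase in cbar_{ij*} making x_{ij*} = 1.
        return z_i - cbar_i[jstar] - knapsack_dp(rest_c, rest_a,
                                                 b_i - a_i[jstar])
    # Equation (7): least decrease in cbar_{ij*} making x_{ij*} = 0.
    return z_i - knapsack_dp(rest_c, rest_a, b_i)
```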

In the Fisher-Jaikumar-Van Wassenhove strategy it is never necessary to increase a multiplier; that is, for all j, p_j \le 1 is maintained throughout. Decreasing multiplier u_{j*} for p_{j*} = 0 improves the bound by \Delta_{i*j*} at each iteration, where i* = argmin_i{\Delta_{ij*}}. However, Guignard and Rosenwein showed that, although such a strategy forces primal feasibility, it restricts the search over the multiplier space in solving the Lagrangian dual. Therefore, we choose not to impose any restriction on p_j and allow multipliers to be both decreased and increased. Increasing multiplier u_{j*} for p_{j*} > 1 in the Guignard-Rosenwein version improves the bound by (p_{j*} - 1)\Delta_{i*j*} (see also [8]).

The MAM that serves as a basis for our improvements is a hybrid of the techniques of Fisher-Jaikumar-Van Wassenhove and Guignard-Rosenwein. Hence, it has a different bound improvement rate than either. The following result, which establishes the new bound improvement scheme, is used for selecting the steepest descent bound improvement in Section 4. Let \Delta = min2_i{\Delta_{ij*}} be the step length chosen by this hybrid, traditional algorithm, where the min operator ranges over

all i, if p_{j*} = 0
{i | x_{ij*} = 1}, if p_{j*} > 1.

Proposition 1 Adjusting multipliers from u^k to u^{k+1} reduces the upper bound from L(u^k) to L(u^{k+1}), where

L(u^{k+1}) = L(u^k) - \Delta_{i*j*}, if p_{j*} = 0
L(u^{k+1}) = L(u^k) - (p_{j*} - 2)\Delta - \Delta_{i*j*}, if p_{j*} > 1,

and i* designates the index attaining the smallest \Delta_{ij*} over the same index set.

Proof: See [12] (refereed with this paper). \square

Proposition 1 shows that the bound improvement is not strictly monotonic for the traditional MAM; hence, the method may stall at a certain bound value for a number of iterations without making any progress. We observe the following issues that impair the performance of the traditional MAM:

1. In computing a step length along a candidate descent direction, m restricted knapsack problems must be solved if p_{j*} = 0, and p_{j*} restricted knapsack problems must be solved if p_{j*} > 1.

2. A candidate descent direction is selected by scanning the set {j | p_j \ne 1} in any given order. The descent direction selected is the first encountered that gives a nonzero step length. Once it is found, all remaining candidate directions are discarded from further consideration. Although this strategy reduces the number of restricted knapsack solutions, it can result in slow convergence.

This paper proposes improvements for both issues with the help of a 0-1 knapsack post-optimality analysis described in the next section. The post-optimality analysis directly generates the \Delta_{ij} quantities for all i, j and hence eliminates the restricted knapsack solutions. Also, with the \Delta_{ij} quantities readily available as a table, it is possible to select a direction/step length pair that guarantees a steepest descent improvement of L(u).

3 A Post-optimality Analysis for the 0-1 Knapsack

In this section, we concentrate on one of the m 0-1 knapsack problems and suppress the i-index in the variables for clarity. Consider the following problem, defining s_j as the complement of x_j (s_j = 1 - x_j):

z = \max \sum_{j=1}^{n} c_j x_j

subject to \sum_{j=1}^{n} a_j x_j \le b

x_j + s_j = 1, for j = 1, ..., n

x_j \in \{0, 1\}, for all j.

We assume that b and a_j are integers for all j. The post-optimality analysis is based on a DP solution of 0-1 knapsack problems ([6], [18]). Let the state space consist of the set of ordered pairs (j, \beta), where j enumerates variables (1 \le j \le n) and \beta enumerates the knapsack capacity (0 \le \beta \le b). The knapsack function, f_j(\beta), is defined to be the optimum value of a subproblem which uses variables 1 through j with a knapsack capacity of \beta. Consider a DP network in which nodes are states and arcs represent variable selection decisions. Arc lengths are assigned the objective values of the corresponding variables. A standard recursion is used to find f_j(\beta) = the longest path from state (0, 0) to (j, \beta) for each feasible state (j, \beta) in the DP network. The longest path length from (0, 0) to (n, b) is denoted z. Similarly, we can make a backward pass to compute g_j(\beta), defined as z minus the length of the longest path from state (j, \beta) to (n, b). Specifically,

Boundary conditions:

g_{n-1}(\beta) = z, for b - a_n < \beta \le b
g_{n-1}(\beta) = \min\{z, z - c_n\}, for 0 \le \beta \le b - a_n

Recursive equations: For j = n - 2, ..., 1,

g_j(\beta) = g_{j+1}(\beta), for b - a_{j+1} < \beta \le b
g_j(\beta) = \min\{g_{j+1}(\beta), g_{j+1}(\beta + a_{j+1}) - c_{j+1}\}, for 0 \le \beta \le b - a_{j+1}.

Theorem 1 Suppose x_j = 0 in the current optimum solution. Assuming that all other objective coefficients remain unchanged, the minimum \delta such that c_j + \delta would cause x_j = 1 in an alternative optimum is given by \Delta_j, where

\Delta_1 = g_1(a_1) - c_1   (8)
\Delta_j = \min_{a_j \le \beta \le b} \{g_j(\beta) - f_{j-1}(\beta - a_j) - c_j\}, for j = 2, ..., n-1   (9)
\Delta_n = z - f_{n-1}(b - a_n) - c_n.   (10)

Proof: If x_j = 0 then s_j = 1 in the current solution. Consider the DP network in which the longest path from (0, 0) to (n, b) has an arc corresponding to s_j = 1 (see Figure 1). Let one of the arcs representing x_j be from (j - 1, \beta - a_j) to (j, \beta). Define z_1 and z_2 as the longest path lengths from (0, 0) to (j - 1, \beta - a_j) and from (j, \beta) to (n, b), respectively. Let z_R be the length of the longest path from (0, 0) to (n, b) constrained to include the arc from (j - 1, \beta - a_j) to (j, \beta). Then

z_R = z_1 + c_j + z_2 \le z.

In order for x_j to be selected in an alternative optimum solution, z_R should be at least as large as z. To enforce this selection, we have to increase c_j by at least z - z_R, where

z - z_R = z - (z_1 + c_j + z_2) = (z - z_2) - z_1 - c_j = g_j(\beta) - f_{j-1}(\beta - a_j) - c_j.

Since there may be more than one arc corresponding to the variable x_j in the DP network, it is necessary to look for the minimum increment of c_j over all these arcs:

\min_{a_j \le \beta \le b} \{z - z_R\} = \min_{a_j \le \beta \le b} \{g_j(\beta) - f_{j-1}(\beta - a_j) - c_j\} = \Delta_j

as given by equation (9). Consider the boundary conditions. For j = 1, g_1(a_1) = \min g_1(\beta) over a_1 \le \beta \le b and z_1 = 0; substitution gives equation (8). For j = n, f_{n-1}(b - a_n) = \max f_{n-1}(\beta) over 0 \le \beta \le b - a_n and z_2 = 0; substitution gives equation (10). \square

[Figure 1: Illustration of the post-optimality analysis. The DP network is drawn with the variable index j on the horizontal axis and the knapsack capacity \beta on the vertical axis; the longest path from (0, 0) to (n, b) and an arc for x_j from (j - 1, \beta - a_j) to (j, \beta) are highlighted.]

Corollary 1 Suppose x_j = 1 in the current optimum solution. Assuming that all other objective coefficients remain unchanged, the minimum \delta such that c_j - \delta would cause x_j = 0 in an optimum solution is given by \Delta_j, where

\Delta_1 = g_1(0)
\Delta_j = \min_{0 \le \beta \le b} \{g_j(\beta) - f_{j-1}(\beta)\}, for j = 2, ..., n-1
\Delta_n = z - f_{n-1}(b).

Proof: If x_j = 1 then s_j = 0 in the current solution. The result directly follows by interchanging x_j and s_j in the theorem. \square

It is interesting to note that the procedure implied by the above theorem and its corollary is similar to finding total floats in activity networks by the critical path method. The floats define the \Delta_j values. This idea is adopted from an algorithm for the capital constrained equipment replacement problem (see [10] and [11]). It is similar to analyses in Lee [13] and Van Hoesel and Wagelmans [19].
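The analysis above can be rendered compactly in code. The following sketch (our own 0-based Python, not the authors' implementation) makes one forward pass to build f, one backward pass to build g with the recursions of this section, traces back one optimal solution, and then computes \Delta_j for every variable from Theorem 1 or Corollary 1; exact-capacity states that are unreachable carry -infinity.

```python
import math

def knapsack_post_optimality(c, a, b):
    """One forward and one backward DP pass over a 0-1 knapsack.
    Returns (z, x, delta): the optimum z, one optimal solution x, and
    delta[j] = minimum change in c[j] that flips x[j] in some optimal
    solution (Theorem 1 when x[j] = 0, Corollary 1 when x[j] = 1)."""
    n, NEG = len(c), -math.inf
    # Forward pass: f[j][beta] = longest path from (0,0) to state (j,beta),
    # i.e., best profit of variables 0..j-1 at exactly beta capacity used.
    f = [[NEG] * (b + 1) for _ in range(n + 1)]
    f[0][0] = 0
    for j in range(n):
        for beta in range(b + 1):
            f[j + 1][beta] = f[j][beta]                       # s_j arc
            if beta >= a[j] and f[j][beta - a[j]] != NEG:     # x_j arc
                f[j + 1][beta] = max(f[j + 1][beta],
                                     f[j][beta - a[j]] + c[j])
    z = max(f[n])
    # Backward pass: g[j][beta] = z minus the longest path from (j,beta)
    # to the terminal state, using the recursions of Section 3.
    g = [[z] * (b + 1) for _ in range(n + 1)]
    for j in range(n - 1, -1, -1):
        for beta in range(b + 1):
            g[j][beta] = g[j + 1][beta]
            if beta + a[j] <= b:
                g[j][beta] = min(g[j][beta],
                                 g[j + 1][beta + a[j]] - c[j])
    # Trace back one optimal solution.
    x = [0] * n
    beta = max(range(b + 1), key=lambda t: f[n][t])
    for j in range(n - 1, -1, -1):
        if (beta >= a[j] and f[j][beta - a[j]] != NEG
                and f[j][beta - a[j]] + c[j] == f[j + 1][beta]):
            x[j], beta = 1, beta - a[j]
    # Delta_j: minimize z - z_R over every arc of variable j.
    delta = [math.inf] * n
    for j in range(n):
        if x[j] == 0:        # Theorem 1: cost of forcing an x_j arc in
            for beta in range(a[j], b + 1):
                if f[j][beta - a[j]] != NEG:
                    delta[j] = min(delta[j],
                                   g[j + 1][beta] - f[j][beta - a[j]] - c[j])
        else:                # Corollary 1: cost of forcing an s_j arc in
            for beta in range(b + 1):
                if f[j][beta] != NEG:
                    delta[j] = min(delta[j],
                                   g[j + 1][beta] - f[j][beta])
    return z, x, delta
```

For instance, with c = [3, 4], a = [2, 3], b = 4 the routine returns z = 4, x = [0, 1], and delta = [1, 1]: raising c[0] by 1 (or lowering c[1] by 1) creates an alternative optimum in which the other item is taken.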

4 Improved MAM

The results of the 0-1 knapsack post-optimality analyses combined with Proposition 1 enable us to improve the traditional MAM. The improved MAM is given by the following algorithm:

Step 0: Initialize the multipliers by either u_j^0 = max2_i{c_{ij}} ([5]) or u_j^0 = max_i{c_{ij}} ([7]). Solve the m knapsack problems to obtain the Lagrangian solution, x^0. Perform the post-optimality analysis for each knapsack to compute the \Delta_{ij}^0 quantities. Set the iteration counter k = 0.

Step 1: Define p_j = \sum_i x_{ij}^k, i1(j) = argmin_i{\Delta_{ij}^k}, and i2(j) = argmin2_i{\Delta_{ij}^k} for all j. Let S = {j | p_j \ne 1 and \Delta_{i2(j)j}^k > 0} and go to Step 2.

Step 2: If S = \emptyset, stop; otherwise select the index j* \in S such that

j* = argmax_{j \in S} \left\{ \max_{\{j \in S | p_j = 0\}} \Delta_{i1(j)j}^k, \; \max_{\{j \in S | p_j > 1\}} \left[ (p_j - 2)\Delta_{i2(j)j}^k + \Delta_{i1(j)j}^k \right] \right\}.

Construct a descent direction d^k = (d_1^k, ..., d_n^k), where

d_j^k = +1, if j = j* and p_{j*} = 0
d_j^k = -1, if j = j* and p_{j*} > 1
d_j^k = 0, otherwise.

Step 3: Letting the step length \lambda = \Delta_{i2(j*)j*}^k, adjust the multipliers: u^{k+1} = u^k - \lambda d^k. Solve the knapsack problems followed by the post-optimality analyses to compute the respective x^{k+1} and \Delta_{ij}^{k+1} quantities. Set k = k + 1, and go to Step 1.

In Step 2, this method evaluates all possible descent directions and chooses the steepest for the next multiplier update. The extra computational effort introduced by the post-optimality analyses is compensated by 1) eliminating all restricted knapsack solutions for determining step lengths, and 2) increasing the rate of bound reductions through a steepest descent improvement at each iteration.
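A compact sketch of Steps 1-2 (our own code and names): given the current solution x and the \Delta table produced by post-optimality passes such as knapsack_post_optimality above, it scores every candidate direction by the bound reduction predicted by Proposition 1, restricting the row set accordingly, and returns the steepest one with its step length.

```python
import math

def select_direction(x, delta):
    """Steps 1-2 of the improved MAM (a sketch). x[i][j] is the current
    Lagrangian solution, delta[i][j] the post-optimality table. Returns
    (jstar, d_jstar, step) for the steepest descent update
    u[jstar] -= step * d_jstar, or None when S is empty."""
    m, n = len(x), len(x[0])
    best = None               # (predicted bound reduction, j, d_j, step)
    for j in range(n):
        p = sum(x[i][j] for i in range(m))
        if p == 1:
            continue          # constraint j is satisfied
        rows = range(m) if p == 0 else [i for i in range(m) if x[i][j] == 1]
        d = sorted(delta[i][j] for i in rows)
        if len(d) < 2 or d[1] <= 0 or math.isinf(d[1]):
            continue          # zero (or unusable) step length: j not in S
        d1, d2 = d[0], d[1]   # smallest and second smallest Delta
        # Predicted reduction in L(u), from Proposition 1.
        reduction = d1 if p == 0 else (p - 2) * d2 + d1
        if best is None or reduction > best[0]:
            best = (reduction, j, +1 if p == 0 else -1, d2)
    return None if best is None else best[1:]
```

Step 3 then adjusts u[jstar] by the returned step and sign and re-solves the m knapsacks with their post-optimality analyses.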

5 Branch-and-Bound Enumeration

If the Lagrangian solution does not provide an optimal GAP solution, one or more multiple-choice constraints are violated. We branch from such solutions using a strategy suggested by Bean [1]. First, we select a violated multiple-choice constraint with the largest multiplier value. Since any feasible solution includes exactly one out of the m variables from this constraint, we create m branches. At each branch, one variable is fixed at one, and the rest are automatically set to zero.

We also use the following heuristic in an attempt to attain feasibility given a Lagrangian solution x that violates (3) (a sketch of this heuristic appears at the end of this section):

Step 1: Store in a list all the variables x_{ij} such that x_{ij} = 1 and p_j > 1 in the current solution. Sort the list in ascending c_{ij}/a_{ij} order.

Step 2: Starting from the top of the list, set x_{ij} = 0 and increase the slack space of knapsack i by a_{ij}. Repeat until p_j \le 1 for all j.

Step 3: Using only the variables x_{ij} with p_j = 0, solve all knapsack problems optimally.

If the heuristic procedure stops with p_j = 1 for all j, the final solution is feasible and the incumbent is checked for improvement. Otherwise, the attempt to make the Lagrangian solution feasible fails. Computational experiments showed that the heuristic is more effective with Lagrangian solutions returned by a subgradient algorithm. Therefore, following the termination of the MAM, we perform a few subgradient iterations with multipliers initialized by the output of the MAM. While subsequent subgradient iterations improve the MAM upper bound at the root node for certain types of problems, similar improvements were rarely observed for the descendant nodes, where the output multipliers of the MAM are very close to optimum. Therefore, the primary advantage of the subgradient iterations is to tighten the lower bound by enhancing the above heuristic, and thus to improve the overall branch-and-bound enumeration. A study of the quality of the lower bounds from the heuristic and the upper bounds from the MAM and subgradient iterations follows the experiment description.

Our branch-and-bound algorithm was programmed in C and compared against the Martello-Toth branch-and-bound algorithm coded in FORTRAN (program MTG in [15]). The programs were compiled using the cc and xlf compilers, respectively, on an IBM RS/6000,

with optimization switches turned on. Random problems were generated according to the distributions suggested in [14] and [15]. The results are shown in Table 1. The statistics were obtained over a sample of five problems. The letters A, B, C, and D refer to the four different problem generations:

A. Parameters a_{ij} and c_{ij} are integers uniformly random in [5, 25] and [1, 40], respectively. Let \bar{x} be a solution of (1)-(4) with (2) replaced with a_{ij} x_{ij} \le b_i, for all i, j. Then, b_i = 9(n/m) + \max_i \{\sum_j a_{ij} \bar{x}_{ij}\}.

B. Same as A, but b_i is set to 70% of the value given in A.

C. Same as A, but b_i = 0.8 \sum_j a_{ij}/m.

D1. Parameters a_{ij} uniformly random in [1, 100], b_i = 0.8 \sum_j a_{ij}/m, and c_{ij} = a_{ij} + K, where K is uniformly random in [-10, 10].

D2. Same as D1, but c_{ij} is uniformly random in [a_{ij}, a_{ij} + 20].

Class A problems, first suggested by Ross-Soland [16], usually admit many feasible solutions and are easier than the others. Problems of classes B, C, and D have increasingly tighter capacity constraints and become increasingly difficult to solve. In addition, class D problems have a correlation between objective values and knapsack weights (often observed in real-world applications) and are the most difficult. The results in Table 1 show that our algorithm is comparable with the Martello-Toth algorithm on the easy problems of class A. However, our algorithm performs better on the more difficult problems of classes B, C, and D. Note that performance will degrade if larger knapsacks are solved. See [5] for a discussion of this issue.

Turning to the quality of the upper and lower bounds, we ran the same set of problems disabling branching and obtained root node bounds. Table 2 shows the average deviation from the optimal value of a) the lower bounding heuristic, b) the MAM upper bound, and c) the MAM followed by 50 subgradient iterations initialized by the final MAM multipliers. For all problem types and sizes, one can observe that the final upper bounds are closer to the true optimal value than the lower bounds. Moreover, the quality of the MAM bound tends to deteriorate with increasing problem difficulty, resulting in greater improvements by the additional subgradient iterations to reach the final node bound.
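As referenced earlier in this section, here is a minimal sketch of the feasibility heuristic in our own Python. The helper knapsack_select and the use of the raw profits c_{ij} in the Step 1 ratio are our assumptions; the paper may intend the Lagrangian-adjusted profits.

```python
def knapsack_select(values, weights, capacity):
    """0-1 knapsack DP that also returns the chosen item indices."""
    n = len(values)
    f = [[0] * (capacity + 1) for _ in range(n + 1)]
    for j in range(n):
        for beta in range(capacity + 1):
            f[j + 1][beta] = f[j][beta]
            if weights[j] <= beta and values[j] > 0:
                f[j + 1][beta] = max(f[j + 1][beta],
                                     f[j][beta - weights[j]] + values[j])
    chosen, beta = [], capacity
    for j in range(n - 1, -1, -1):
        if f[j + 1][beta] != f[j][beta]:   # item j must be in the optimum
            chosen.append(j)
            beta -= weights[j]
    return chosen

def make_feasible(x, c, a, b):
    """Feasibility heuristic of Section 5 (a sketch). Returns a feasible
    solution or None. We break ties with the raw profits c; the paper's
    ratio may instead use the Lagrangian-adjusted profits."""
    m, n = len(x), len(x[0])
    x = [row[:] for row in x]              # work on a copy
    # Step 1: over-assigned variables, sorted by ascending c/a ratio.
    over = sorted((c[i][j] / a[i][j], i, j)
                  for i in range(m) for j in range(n)
                  if x[i][j] == 1 and sum(x[k][j] for k in range(m)) > 1)
    # Step 2: unassign from the top of the list until no task is duplicated.
    for _, i, j in over:
        if sum(x[k][j] for k in range(m)) > 1:
            x[i][j] = 0
    # Step 3: fill each knapsack's slack using still-unassigned tasks only.
    free = [j for j in range(n) if all(x[i][j] == 0 for i in range(m))]
    for i in range(m):
        slack = b[i] - sum(a[i][j] for j in range(n) if x[i][j] == 1)
        picked = knapsack_select([c[i][j] for j in free],
                                 [a[i][j] for j in free], slack)
        for k in picked:
            x[i][free[k]] = 1
        free = [j for k, j in enumerate(free) if k not in picked]
    # Feasible iff every task is assigned exactly once (p_j = 1 for all j).
    if all(sum(x[i][j] for i in range(m)) == 1 for j in range(n)):
        return x
    return None
```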

6 Summary

We describe a branch-and-bound algorithm featuring a more effective MAM than those of Fisher, Jaikumar, and Van Wassenhove [5] and Guignard and Rosenwein [7]. The MAM's effectiveness stems from a post-optimality analysis we developed for the 0-1 knapsack subproblems. This MAM is embedded in a branch-and-bound algorithm adapting the multiple choice branching strategy of Bean [1]. Computational tests indicate that the algorithm is capable of solving nontrivial problems with up to 500 variables in reasonable times. The code is available from the first author.

Acknowledgment

This research was supported in part by National Science Foundation grants PYI-G-DMC8352346, DDM-9018515 and DDM-9202849 to the University of Michigan.

References

[1] Bean, J. C., "A Lagrangian Algorithm for the Multiple Choice Integer Program," Operations Research, 32 (1984), 1185-1193.

[2] Bean, J. C., C. E. Noon, S. M. Ryan, and G. J. Salton, "Selecting Tenants in a Shopping Mall," Interfaces, 18 (1988), 1-9.

[3] Chalmet, L. and L. Gelders, "Lagrangean Relaxations for a Generalized Assignment-Type Problem," In M. Roubens (editor), Advances in Operations Research, North-Holland, Amsterdam (1977), 103-109.

[4] Fisher, M. L. and R. Jaikumar, "A Generalized Assignment Heuristic for Vehicle Routing," Networks, 11 (1981), 109-121.

[5] Fisher, M. L., R. Jaikumar, and L. N. Van Wassenhove, "A Multiplier Adjustment Method for the Generalized Assignment Problem," Management Science, 32 (1986), 1095-1103.

[6] Gilmore, P. C. and R. E. Gomory, "The Theory and Computation of Knapsack Functions," Operations Research, 14 (1966), 1045-1074.

[7] Guignard, M. and M. B. Rosenwein, "An Improved Dual Based Algorithm for the Generalized Assignment Problem," Operations Research, 37 (1989), 658-663.

[8] Guignard, M. and M. B. Rosenwein, "An Application-Oriented Guide for Designing Lagrangean Dual Ascent Algorithms," European Journal of Operational Research, 43 (1989), 197-205.

[9] Jörnsten, K. and M. Näsberg, "A New Lagrangian Relaxation Approach to the Generalized Assignment Problem," European Journal of Operational Research, 27 (1986), 313-323.

[10] Karabakal, N., "Capital Rationing Replacement Problem," Ph.D. Dissertation, Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, 1991.

[11] Karabakal, N., J. R. Lohmann, and J. C. Bean, "Parallel Replacement Under Capital Rationing Constraints," forthcoming in Management Science.

[12] Karabakal, N., J. C. Bean, and J. R. Lohmann, "A Steepest Descent Multiplier Adjustment Method for the Generalized Assignment Problem: Technical Report," Technical Report 92-11, Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, 1992.

[13] Lee, I.-S., "On Sensitivity Analysis for Shortest Path Dynamic Programming," Working Paper, University of California, Los Angeles, (1986).

[14] Martello, S. and P. Toth, "An Algorithm for the Generalized Assignment Problem," In J. P. Brans (editor), Operational Research '81, North-Holland, Amsterdam (1981), 589-603.

[15] Martello, S. and P. Toth, Knapsack Problems: Algorithms and Computer Implementations, John Wiley and Sons, 1990.

[16] Ross, G. T. and R. M. Soland, "A Branch and Bound Algorithm for the Generalized Assignment Problem," Mathematical Programming, 8 (1975), 91-103.

[17] Ross, G. T. and R. M. Soland, "Modeling Facility Location Problems as Generalized Assignment Problems," Management Science, 24 (1977), 345-357.

[18] Toth, P., "Dynamic Programming Algorithms for the Zero-One Knapsack Problem," Computing, 25 (1980), 29-45.

[19] Van Hoesel, C. P. M. and A. P. M. Wagelmans, "Sensitivity Analysis of the Economic Lot Sizing Problem," Report 9165/A, Erasmus University Rotterdam, The Netherlands, (1991).

Table 1: Computational experience with random problems. Times are in CPU seconds on an IBM RS/6000. Five problems for each line.

                     Martello-Toth                Karabakal-Bean-Lohmann
 m   n  Type      Min    Median       Max        Min    Median       Max
 5  20    A      0.03      0.04      0.06       0.10      0.14      0.17
          B      0.26      1.36      6.17       0.25      0.61      0.86
          C      0.19      1.35      5.48       0.25      0.68      1.11
         D1      4.62      7.84     12.74       6.26     12.79     16.42
         D2     88.44    452.21    515.81       0.87      4.33      5.10
 5  30    A      0.04      0.06      0.09       0.15      0.27      1.31
          B      3.73     64.40    293.73       0.56      1.88      3.85
          C      8.51    141.51    367.59       0.53      4.14      8.11
         D1     45.78    569.40    780.23      19.17     69.01     95.41
         D2      TIME      TIME      TIME      30.18     65.35     95.25
 5  40    A      0.04      0.07      0.53       0.26      0.39      3.41
          B    181.36    814.87     1000*       1.21      7.46     23.64
          C    226.25     1000*     1000*       4.49      6.55     57.58
         D1    973.40     1000*     1000*     287.96     1000*     1000*
         D2         -         -         -     203.43    462.26    642.53
 5  50    A      0.05      0.08     30.61       0.27      0.81      5.39
          B     45.92     1000*     1000*       0.59     20.01    158.85
          C      TIME      TIME      TIME      16.24     30.77     42.92
         D1      TIME      TIME      TIME     474.50    999.14     1000*
         D2         -         -         -     543.59     1000*     1000*
10  20    A      0.03      0.07      0.10       0.16      0.49      1.82
          B      0.06     22.24     50.80       0.32      0.50      2.45
          C      0.38      7.63     29.27       0.12      0.62      1.32
         D1     12.29     65.40    153.91       3.80      7.85     12.10
         D2      TIME      TIME      TIME       3.19      6.11     14.19
10  30    A      0.07      0.10     11.57       0.26      0.30      6.12
          B      0.15    202.67     1000*       1.91      2.66      6.45
          C      TIME      TIME      TIME       3.43     12.25     20.55
         D1      TIME      TIME      TIME      94.57    156.75    437.52
         D2         -         -         -      76.91    125.61    510.90
10  40    A      0.07      0.10      0.11       0.48      1.92      4.40
          B      0.1-      1000     1000*       4.75      7.39    153.94
          C         -         -         -       9.16     29.31     41.92
         D1         -         -         -     987.25     1000*     1000*
         D2         -         -         -       TIME      TIME      TIME
10  50    A      0.09      0.95      4.46       0.55      1.74     12.63
          B      TIME      TIME      TIME       7.07     25.70     75.32
          C         -         -         -      23.69     70.43    355.64

TIME indicates that none of the problems in the sample could be solved within 1000 seconds.
* Problem exceeded the 1000 second time limit.

Table 2: Average bound deviations from the optimum at the root node. Five problems for each line.

                Average Deviation of   Average Deviation of    Average Deviation of
                     Lower Bound           Upper Bound             Upper Bound
 m   n  Type     (Heuristic) (%)           (MAM) (%)         (MAM + Subgradient) (%)
 5  20    A           0.00                   0.00                       -
          B           0.93                   1.68                     0.69
          C           1.79                   1.16                     1.02
         D1           2.53                   1.40                     0.60
         D2           1.82                   3.06                     1.46
 5  30    A           0.08                   0.00                       -
          B           1.83                   0.87                     0.39
          C           2.26                   2.17                     0.98
         D1           2.30                   3.15                     0.51
         D2           1.69                   5.26                     0.40
 5  40    A           0.06                   0.05                     0.00
          B           2.95                   0.93                     0.47
          C           3.13                   1.71                     0.72
         D1           1.53                   6.03                     0.33
         D2           2.31                  12.13                     0.54
 5  50    A           0.29                   0.02                     0.00
          B           2.79                   1.41                     0.37
          C           1.80                   1.61                     0.61
         D1           1.80                   8.89                     0.28
         D2           1.17                  16.76                     0.27
10  20    A           0.84                   0.00                       -
          B           1.61                   1.06                     0.51
          C           0.09                   1.20                     0.28
         D1           1.92                   1.88                     0.87
         D2           3.29                   2.31                     0.99
10  30    A           0.56                   0.00                       -
          B           1.47                   0.38                     0.12
          C           3.74                   1.30                     1.16
         D1           2.79                   3.95                     0.49
         D2           2.47                  11.24                     0.67
10  40    A           0.49                   0.00                       -
          B           1.30                   0.57                     0.29
          C           1.96                   1.37                     0.34
         D1           4.14                  19.19                     1.43
         D2           3.57                  20.07                     0.48
10  50    A           0.37                   0.00                       -
          B           0.70                   0.58                     0.10
          C           1.77                   1.05                     0.33