Division of Research
Graduate School of Business Administration
The University of Michigan

GENERATING OPTIMAL DUAL SOLUTIONS
WITH SIMPLE PRIMAL HEURISTICS

Working Paper No. 422

Hans M. Schneeberger

FOR DISCUSSION PURPOSES ONLY
None of this material is to be quoted or reproduced without the expressed permission of the Division of Research.

March 1985

Abstract

This paper develops results to solve the Lagrangean dual problem with simple primal heuristics. The analysis is motivated by the fact that current solution procedures for the dual problem can become computationally burdensome. The results are used on the "nearest unvisited city heuristic" to solve the travelling salesman dual problem and on the "bang for a buck ratio" to solve the knapsack dual problem.


1. Introduction

This paper investigates procedures to generate optimal dual solutions with simple primal heuristics. Research to date indicates that it is unlikely that NP-hard optimization problems can be solved without explicitly or implicitly enumerating all possible feasible solutions [2]. At the same time, any implicit enumeration such as branch and bound, or dynamic programming with branch and bound, requires the use of upper and lower bounds on the optimal solution. The performance of any algorithm using upper and lower bounds depends almost exclusively on the bounding procedures used. The important criteria for these bounding procedures are the computational effort required to generate the bounds and the tightness of the bounds generated.

In order to generate upper bounds one can apply a heuristic to the original optimization problem to obtain a feasible solution. The objective function value of such a solution can then serve as an upper bound. Successful upper bounding strategies have been such that upper bounds are computed only a very few times in the course of an optimization. Hence the computational effort to generate an upper bound has not been critical to the performance of implicit enumeration algorithms. Lower bounds, on the other hand, are often computed at every node in a branch and bound tree or at every state in a dynamic program. The computational efficiency of lower bounding procedures therefore becomes a major factor in the performance of implicit enumeration algorithms. The fine tuning of such enumeration algorithms often involves trade-offs between the time spent computing lower bounds in order to work with a smaller problem and the time spent working with larger problems in the main routine [4].

Lower bounds which have been used successfully are the objective function value of an LP-relaxation of the original problem and the objective function value of the Lagrangean dual to the original problem. It is well known that the solution to the dual problem provides a lower bound which is at least as tight as the objective function value of the LP-relaxation [3]. However, solving the dual problem can involve considerably more computational effort than solving the LP-relaxed problem. This is often the case if one needs to resort to subgradient optimization to obtain the dual solution, as noted by Fisher [1]. Fisher [1] concludes that the development of multiplier adjustment methods to circumvent subgradient optimization holds great promise for future research.

In order to avoid the computational burden of solving the dual problem without abandoning the tight bounds provided by a dual solution, recent work has focused on procedures which approximate the optimal solution to the dual problem [5, 6]. In general, such approximation procedures exploit well-known properties of a given solution by adjusting the Lagrangean multipliers such that the solution under consideration is optimal for the computed multipliers. It should also be noted that such multiplier adjustment methods provide an update of the upper bound each time a lower bound is computed. Potts and van Wassenhoven [5] reported on a successful application of such a multiplier adjustment method in a branch and bound algorithm developed to solve the single machine scheduling problem of minimizing weighted completion time. Schneeberger [6] showed that the multiplier adjustment method used does not just approximate the dual solution but actually provides the optimal solution to the dual problem.

This paper is an extension of [6] and develops results and procedures to solve the dual of NP-hard problems using multiplier adjustment methods. We illustrate the use of these results on two well-known problems, the travelling salesman problem and the knapsack problem. In the next section we develop results characterizing optimal solutions to the integer programming dual problem. In section three we illustrate the use of the results established in the previous section by developing an optimal multiplier adjustment method for the travelling salesman problem. In section four we illustrate the use of the same results by developing an optimal multiplier adjustment method for the knapsack problem. The paper concludes in section five.

2. Sufficient saddlepoint conditions

In this section we develop conditions sufficient to verify a saddlepoint, that is, sufficient conditions to verify an optimal dual solution. Consider the following optimization problem, where x is a vector of integers:

    min_x f(x)
    s.t. g(x) ≤ 0.

The Lagrangean dual to this problem can be stated as:

    max_u min_x {f(x) + ug(x)}.

Let h(u) be the objective function value of the Lagrangean dual evaluated at u, that is:

    h(u) = min_x {f(x) + ug(x)}.

Note that the x which solves h(u) may not be unique; let X(u) be the set of all x which solve h(u).
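As a toy illustration (ours, not from the paper), h(u) and X(u) can be computed by enumeration on a small one-dimensional instance; the functions f(x) = x², g(x) = 1 − x and the finite ground set below are assumptions made purely for this sketch:

```python
# Toy enumeration of the dual function h(u) and the minimizer set X(u)
# for a hypothetical instance: min f(x) s.t. g(x) <= 0 over integer x.
def f(x):
    return x * x

def g(x):
    return 1 - x                     # feasibility means x >= 1

X_RANGE = range(-3, 6)               # small finite ground set of integers

def h(u):
    """Dual function value: min over x of f(x) + u*g(x)."""
    return min(f(x) + u * g(x) for x in X_RANGE)

def X(u):
    """Set of all x attaining h(u) (up to a small tolerance)."""
    best = h(u)
    return {x for x in X_RANGE if abs(f(x) + u * g(x) - best) < 1e-9}

primal_opt = min(f(x) for x in X_RANGE if g(x) <= 0)    # best feasible value

# Weak duality: h(u) never exceeds the primal optimum for u >= 0.
assert all(h(i / 10) <= primal_opt for i in range(31))
```

On this instance the dual happens to be tight: h(1) = 1 equals the primal optimum, and X(1) = {0, 1} contains a feasible point.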

Theorem 1: Let h(u*) = max_u h(u). Then there exists an x* ∈ X(u*) such that g(x*) ≤ 0.

Proof: Suppose the theorem is not true, that is, h(u*) = max_u h(u) and there does not exist an x ∈ X(u*) such that g(x) ≤ 0. This implies that for all x ∈ X(u*) there exists a Δ ≥ 0 such that u*g(x) < (u* + Δ)g(x), which implies h(u*) < max_u h(u). This contradicts the assumption and hence completes the proof.

Theorem 1 leads immediately to the following lemma.

Lemma: The following two conditions are sufficient to verify a saddlepoint at (x°, u°):

    i)  g(x°) ≤ 0;
    ii) for any Δ ≥ 0, x ∈ X(u° − Δ) implies g(x) ≥ 0.

Proof: Since g(x°) ≤ 0, h(u° + Δ) ≤ h(u°). In addition, from Theorem 1 we know that the second condition implies h(u° − Δ) ≤ h(u°), which completes the proof.

The preceding observations facilitate the development of optimal multiplier adjustment methods, as illustrated in the next two sections. We will generate a feasible solution x° ∈ X(u°), that is, a solution which belongs to the set of solutions solving h(u°). Once we have found such a solution, we compute f(x°) and g(x°) and find u° such that

    f(x°) + u°g(x°) = max_u {f(x°) + ug(x°)}.

3. Travelling salesman problem

In this section we show that the nearest unvisited city heuristic can be used to compute the optimal solution to the travelling salesman dual problem. Let i = 1,2,...,N be the cities to be visited and let C be the N x N distance

matrix, with element c_ij denoting the distance between city i and city j. The travelling salesman problem can then be formulated as follows:

    min Σ_{i=1}^{N} Σ_{j=1}^{N} c_ij x_ij                        (1)

    s.t. Σ_{i=1}^{N} x_ij = 1,   j = 1,2,...,N                   (2)

         Σ_{j=1}^{N} x_ij = 1,   i = 1,2,...,N                   (3)

Note that x_ij is a zero/one variable indicating which links are to be selected in a tour. For the remaining part of our discussion we relax constraint (3) such that the sum over all j of x_ij is less than or equal to one, rather than equal to one. This relaxation does not affect the optimal solution or any of the solution procedures. However, it does simplify our presentation, since the dual variables of this constraint can now only take on values greater than or equal to zero. The Lagrangean dual problem to [(1), (2), (3)] can then be stated as follows:

    max_u min_x { Σ_{i=1}^{N} Σ_{j=1}^{N} (c_ij + u_i) x_ij − Σ_{i=1}^{N} u_i }     (4)

    s.t. (2)

For any given vector of multipliers u, problem (4) can be stated as:

    h(u) = min_x { Σ_{i=1}^{N} Σ_{j=1}^{N} (c_ij + u_i) x_ij − Σ_{i=1}^{N} u_i }    (5)

    s.t. (2)
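To make the bound in (5) concrete, here is a small numerical check (ours): the 4-city distance matrix is a made-up example, and the brute-force tour enumeration is used only to verify that h(u) stays below the optimal tour length.

```python
from itertools import permutations

BIG = 10**6                      # large diagonal: a tour never uses x_ii
C = [[BIG, 2, 9, 10],            # hypothetical 4-city distance matrix
     [1, BIG, 6, 4],
     [15, 7, BIG, 8],
     [6, 3, 12, BIG]]
N = len(C)

def h(u):
    """Problem (5): constraint (2) decouples over columns, so each column j
    simply contributes its smallest (c_ij + u_i)."""
    return sum(min(C[i][j] + u[i] for i in range(N)) for j in range(N)) - sum(u)

def tour_length(order):
    return sum(C[order[t]][order[(t + 1) % N]] for t in range(N))

best_tour = min(tour_length((0,) + p) for p in permutations(range(1, N)))

# For any u >= 0, h(u) is a lower bound on the optimal tour length.
for u in ([0, 0, 0, 0], [1, 0, 2, 0], [3, 1, 0, 2]):
    assert h(u) <= best_tour
```

With u = 0 the bound is the plain sum of column minima; the multiplier adjustment scheme described next tightens it.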

The objective function value h(u) can easily be obtained by scanning all the columns j = 1,2,...,N, setting the x_ij which corresponds to the lowest (c_ij + u_i)-value in a given column equal to one and all other x_ij's in the same column equal to zero, and then computing

    Σ_{i=1}^{N} Σ_{j=1}^{N} (c_ij + u_i) x_ij − Σ_{i=1}^{N} u_i.

Consider the following multiplier adjustment scheme. Let k_j be the difference between the lowest and the second lowest c_ij-value in column j. Start the nearest unvisited city heuristic with the column which corresponds to the largest k_j-value. Next consider the column with the next largest k_j-value, and so on. That is, set the x_ij which corresponds to the lowest c_ij-value in the column with the largest k_j-value equal to one and all other x_ij's in the same column equal to zero. Next set the x_ij which corresponds to the lowest c_ij-value in the column with the second largest k_j-value equal to one and all other x_ij's in the same column equal to zero, and so on. If at any stage the nearest unvisited city does not correspond to the nearest city (or cities), we increase the necessary u_i-values until the (c_ij + u_i) corresponding to the nearest unvisited city is equal to the (c_ij + u_i) corresponding to the nearest city (or cities). We then recompute k_j for the whole column, using the (c_ij + u_i)-values rather than the c_ij-values, and start the nearest unvisited city heuristic again with the column corresponding to the largest k_j-value. Note that the particular sequence in which the columns are chosen for evaluating the nearest city guarantees starting over at most N times. Let u° denote the multipliers obtained by this scheme.
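A rough sketch of this scheme in code (one illustrative reading, ours, not the author's implementation; the 4-city matrix, the tie-breaking by lowest index, and the restart cap are assumptions of the sketch):

```python
BIG = 10**6
C = [[BIG, 2, 9, 10],            # hypothetical 4-city distance matrix
     [1, BIG, 6, 4],
     [15, 7, BIG, 8],
     [6, 3, 12, BIG]]
N = len(C)
EPS = 1e-9

def h(u):
    """Column-scanning evaluation of h(u)."""
    return sum(min(C[i][j] + u[i] for i in range(N)) for j in range(N)) - sum(u)

def adjust_multipliers():
    u = [0.0] * N
    for _ in range(N + 1):                    # the column order bounds restarts
        cost = lambda i, j: C[i][j] + u[i]
        def k(j):
            # gap between the lowest and second lowest (c_ij + u_i) in column j
            v = sorted(cost(i, j) for i in range(N))
            return v[1] - v[0]
        order = sorted(range(N), key=k, reverse=True)   # largest k_j first
        used = set()                          # rows already assigned a link
        restarted = False
        for j in order:
            nearest = min(cost(i, j) for i in range(N))
            i_star = min((i for i in range(N) if i not in used),
                         key=lambda i: cost(i, j))      # nearest unvisited city
            if cost(i_star, j) > nearest + EPS:
                # Raise the u_i of every nearer row until it ties with the
                # nearest unvisited city, then recompute the k_j and restart.
                target = cost(i_star, j)
                for i in range(N):
                    if cost(i, j) < target - EPS:
                        u[i] += target - cost(i, j)
                restarted = True
                break
            used.add(i_star)
        if not restarted:
            return u
    return u

u0 = adjust_multipliers()
```

On this instance the scheme returns u0 with h(u0) = 21, which matches the optimal tour length, so the dual gap is closed.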

Theorem 2: u° solves max_u h(u); that is, u° solves the dual to the travelling salesman problem [(1), (2), (3)].

Proof: First note that Σ_{j=1}^{N} x_ij ≤ 1 for all i = 1,2,...,N, and hence condition (i) of the Lemma is satisfied. Further note that any decrease in any of the multipliers will cause Σ_{j=1}^{N} x_ij > 1, and hence condition (ii) of the Lemma is satisfied as well, which completes the proof.

4. Knapsack Problem

In this section we show that a simple heuristic for the knapsack problem can be used to compute the optimal solution to the knapsack dual problem. Let i = 1,2,...,N be the elements to be considered for loading into the knapsack and let a_i and p_i denote the return and the size, respectively, of element i. In addition, let the elements be labeled according to decreasing a/p-ratios and let C be the capacity of the knapsack. The problem can then be stated as follows:

    max_S Σ_{i∈S} a_i                  (6)

    s.t. Σ_{i∈S} p_i ≤ C               (7)

where S is the subset of the N elements which are loaded into the knapsack. The Lagrangean dual problem to [(6), (7)] can be stated as follows:

    min_u { max_S [ Σ_{i∈S} (a_i − u p_i) + uC ] }    (8)
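The knapsack dual (8) is easy to explore numerically; the sketch below (ours, with made-up three-element data and our own names) evaluates the inner maximization and locates the minimizing multiplier on a grid:

```python
from itertools import combinations

# Hypothetical returns a, sizes p, and capacity CAP (illustrative only).
a, p, CAP = [60, 100, 120], [10, 20, 30], 50
N = len(a)

def h(u):
    """Inner maximization of (8): an element enters S exactly when its
    adjusted return (a_i - u*p_i) is positive."""
    return sum(max(a[i] - u * p[i], 0) for i in range(N)) + u * CAP

# Best integer packing, found by brute force, for comparison.
best = max(sum(a[i] for i in s)
           for r in range(N + 1)
           for s in combinations(range(N), r)
           if sum(p[i] for i in s) <= CAP)

# For any u >= 0, h(u) is an upper bound on the best packing.
grid = [i / 10 for i in range(101)]
assert all(h(u) >= best for u in grid)
u_star = min(grid, key=h)
```

Here best = 220 while the grid minimum is h(4.0) = 240, and the minimizing multiplier 4.0 is exactly the a/p-ratio of the critical element, as characterized by Theorem 3 below.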

For any given multiplier u, problem (8) can be stated as follows:

    h(u) = max_S [ Σ_{i∈S} (a_i − u p_i) + uC ]       (9)

The objective function value h(u) can be obtained simply by defining S° = {i | (a_i − u p_i) > 0} and computing Σ_{i∈S°} (a_i − u p_i) + uC. Let element k be such that

    i)  Σ_{i=1}^{k−1} p_i ≤ C
    ii) Σ_{i=1}^{k} p_i > C

Theorem 3: u° = a_k/p_k solves min_u h(u); that is, u° solves the dual to the knapsack problem [(6), (7)].

Proof: The proof can easily be established using Theorem 1.

It can now easily be established that for any knapsack problem the bound obtained using a Lagrangean relaxation is always equal to the bound obtained using an LP-relaxation.

Lemma: The objective function value of the knapsack dual problem is always equal to the objective function value of an LP-relaxation of the knapsack problem.

Proof: Consider the dual problem h(u) at u°. The objective function value at this point equals

    Σ_{i=1}^{k−1} a_i + u° (C − Σ_{i=1}^{k−1} p_i)    (10)

Let Δp = C − Σ_{i=1}^{k−1} p_i. Note that Δp indicates the amount of "space" left in the knapsack. Further, u° = a_k/p_k. Hence equation (10) can be rewritten as

    Σ_{i=1}^{k−1} a_i + a_k (Δp/p_k).

By definition of k, 0 ≤ Δp/p_k < 1. It is easy to see that applying an LP-relaxation to the knapsack problem will yield a solution such that all the elements loaded into the knapsack have a/p-ratios higher than those of the elements not loaded into the knapsack. In addition, the element which is only partially loaded has an a/p-ratio which is lower than the ratio of any other element in the knapsack, but higher than the a/p-ratio of any element not loaded into the knapsack.

5. Conclusion

In this paper we developed results to solve the dual of NP-hard optimization problems with relatively simple procedures. The research was motivated by the fact that current methods to solve the dual problems, e.g., subgradient optimization, can become computationally burdensome. The results were applied to two problems, the travelling salesman problem and the knapsack problem. A further example can be found in [6]. It appears that the ideas developed in this paper can be used to solve any dual problem, which would imply that the computationally burdensome subgradient methods can be avoided for the most part.

REFERENCES

1. Fisher, M.L., "The Lagrangian Relaxation Method for Solving Integer Programming Problems," Management Science, Vol. 27, No. 1 (1981), pp. 1-18.

2. Garey, M.R., and D.S. Johnson, "Computers and Intractability," San Francisco, CA: W.H. Freeman and Company, 1979.

3. Lasdon, L.S., "Optimization Theory for Large Systems," New York, NY: Macmillan Publishing Co., 1970.

4. Morin, T.L., and R.E. Marsten, "Branch-and-Bound Strategies for Dynamic Programming," Operations Research, Vol. 24, No. 4 (1976), pp. 611-27.

5. Potts, C.N., and L.N. van Wassenhoven, "An Algorithm for Single Machine Sequencing with Deadlines to Minimize Total Weighted Completion Time," European Journal of Operational Research, No. 12 (1983), pp. 379-87.

6. Schneeberger, H.M., "On Potts' and van Wassenhoven's Algorithm for Single Machine Sequencing with Deadlines to Minimize Total Weighted Completion Time," Working Paper, Graduate School of Business Administration, The University of Michigan, 1985.