An Integer Programming Algorithm With Network Cuts for Solving the Assembly Line Balancing Problem
Working Paper No. 321
F. Brian Talbot, The University of Michigan
James H. Patterson, The University of Missouri-Columbia
Division of Research, Graduate School of Business Administration, The University of Michigan, December 1982
FOR DISCUSSION PURPOSES ONLY: None of this material is to be quoted or reproduced without the express permission of the Division of Research.

Abstract

In this paper, we describe an integer programming algorithm for assigning tasks on an assembly line to work stations in such a way that the number of work stations is minimal for the rate of production desired. The procedure insures that no task is assigned to a work station before all tasks which technologically must be performed before it have been assigned (precedence restrictions are not violated), and that the total time required at each work station performing the tasks assigned to it does not exceed the time available (cycle time restrictions are not violated). The procedure is based on a systematic evaluation (enumeration) of all possible task assignments to work stations. Significant portions of the enumeration process are performed implicitly, however, by utilizing tests described in the paper which are based on the specific structure of the line balancing problem. An artifice termed a network cut is also developed which eliminates from explicit consideration the assignment of tasks to work stations where such assignments would not lead to improved line balances. Results reported demonstrate that the procedure can obtain optimal balances for assembly lines with between fifty and one-hundred tasks in a reasonable amount of computation time and with modest computer storage requirements.

1. Introduction

An integer programming algorithm is presented for determining the minimum number of work stations required to balance a paced assembly line, subject to constraints on the sequence in which tasks may be performed, and subject to a limit (the cycle time) on the time which can be expended in performing tasks at each work station. The cycle time in these problems is based on a production plan or an output rate specified to meet forecasted demand. If the demand is 120 units per hour, for example, then the cycle time is thirty seconds, or one-half minute per work station. A related balancing problem, which is not considered in this paper, involves the determination of a minimal cycle time (or maximum output rate) for a fixed line length or number of work stations. In this problem, precedence constraints must not be violated, and the given number of work stations must not be exceeded in determining the minimum cycle time balance. Heuristic techniques for solving the first problem can be generalized to solve the second by successively increasing the cycle time until a balance is achieved for the stipulated number of work stations. Our procedure can also be applied in an analogous fashion to solve this latter problem. Indicative of the type of problem encountered in balancing an assembly line is the one given in Figure I, adapted from [3]. In this network representation of the assembly line balancing (ALB) problem, nodes indicate tasks to be performed, and arcs denote the order in which the tasks must be performed. For example, in Figure I, task six cannot be begun until tasks three and four have both been completed. Associated with each task is the time required for its performance, Ti. The cycle time C gives the maximum amount of time allowed at a work station to perform the tasks assigned to it. We will assume in the development of the procedure that the nodes of the network are numbered such

Figure I
Precedence Diagram of Bowman's [3] Problem with Dummy Terminal Node Appended
C = Cycle Time = 20
Key: (i) = Task Number; Ti = Time Required for Performance of Task i

that if task m precedes task n, m < n. Furthermore, we will assume that a unique terminal dummy task with label N and time zero is appended to the network as shown in Figure I (task 9). We assume that this is the only task with zero time. Such node numbering schemes in a network are not unique, and in fact determine the order in which tasks are considered for assignment to a work station. As is the case with many of the recent attempts to solve this problem with the use of integer programming, our approach is based on the Balas implicit enumeration algorithm [2]. Our procedure differs from these other approaches, however, in that we use integer variables rather than 0-1 (binary) variables. This results in a significant reduction in computer storage requirements vis-a-vis these other methods. Other optimization approaches for solving this problem are described in [5, 6, 8, and 13]. Notation which will be used to describe the procedure is given in Table I. In Section 2 of the paper, we state the integer programming formulation of this problem and contrast it to the existing binary programming formulations. In Section 3 the basic solution procedure is presented, and in Section 4 we describe several techniques for exploiting problem structure and accelerating the convergence of our approach. Computational experience in solving a series of test problems found in the open literature is reported in Section 5. Concluding remarks are given in Section 6.

Table I
Definition of Terms (symbols listed alphabetically)

Ai : A nonnegative integer variable which is equal to the number of the work station to which task i is assigned. (Ai = 0 indicates that task i has not been assigned to a work station.)
Bi : The station assignment for task i in the current best solution. Initially, BN is set equal to H.
C : Cycle time.
H : The number of stations in a heuristic solution to the ALB problem. (H equals N if a heuristic solution is not known.)
Ij : Idle time at station j; i.e., C minus the sum of the task times for tasks currently assigned to station j.
i : Task number (1 ≤ i ≤ N).
j : Station number (1 ≤ j ≤ H).
M : A lower bound on the number of work stations required.
N : Number of tasks to be assigned. (Also equal to the identifying number of the unique dummy terminal task without successors.)
Pi (Si) : The set of tasks which must precede (succeed) task i.
Pi* (Si*) : The set of tasks which must immediately precede (succeed) task i.
Ti : Time required to perform task i.
T* : Maximum task time in a given problem; T* = max over i of {Ti}.
Ui (Li) : An upper (lower) bound specifying the highest (lowest) numbered station into which task i may be assigned based on cycle time and precedence restrictions.
Wj : The set of all tasks which can be assigned to station j by virtue of task precedence constraints.
Wj' : The subset of tasks in Wj that are actually assigned to station j.

2. Integer Programming Formulation

Conceptually, the assembly line balancing (ALB) problem is formulated as follows:

Minimize: AN

Subject to:

    Σ (i ∈ Wj') Ti ≤ C    for j = 1,...,H    (1)

    Am ≤ An    for all m ∈ Pn, n = 1,...,N    (2)

The objective is to minimize the number of work stations, or equivalently, AN, which is the station to which the unique terminal task N is assigned. Constraints (1) insure that the sum of the task times for tasks assigned to a particular station does not exceed the cycle time, C. Constraints (2) insure that all technological predecessors of a given task are performed before it is performed. The primary obstacle to implementing this formulation directly is the difficulty in operationally specifying (1). Prior integer programming attempts [3, 15, 20, 22] have led to formulations using 0-1 variables, which do permit the explicit representation of cycle, occurrence, and precedence constraints. The consequence of using 0-1 variables, however, is that the resulting formulation contains a large number of variables and constraints. This is demonstrated in [15], where four 0-1 formulations of the ALB problem are compared by means of an eight-task example problem. The approach involving the fewest variables and constraints requires 29 variables and 23 constraints for the given eight-task problem. For much larger industrial-size problems (say, 50 to 100 tasks), this conservatively translates into formulations involving literally hundreds of variables and constraints. Furthermore, if the

search procedure for problem solution works "down" from a heuristic line length, H, both the number of variables and the number of constraints needed in these formulations are a function of the best known heuristic solution. The larger H is relative to the lower bound number of work stations, M, the larger is the resulting formulation. The proposed solution method eliminates the need to explicitly formulate (1) and (2), which significantly reduces the computer storage requirements relative to prior integer programming approaches. The number of variables, Ai, is simply equal to the number of assembly tasks, N. Constraints are not explicitly formulated at all in our procedure. Precedence relationships are maintained by directly testing an immediate predecessor array, Pi*. Furthermore, cycle time restrictions are invoked through the evaluation of an idle time vector, I. Unique task assignment to a work station is insured by assigning tasks in numerical order, starting with task one and terminating with task N. Thus, the basic proposed formulation requires computer storage of only four vectors, each of dimension no greater than 1×N, and one N×K matrix, where K is the number of immediate predecessors for the task with the maximum number of immediate predecessors. The vectors are A, I, T, and U, and the matrix is P*. Depending upon the type of heuristic rules employed, a few additional 1×N vectors may also be required.

3. The Solution Procedure

The basic solution methodology will now be described. First, upper (Ui) and lower (Li) bounds on the stations into which a task may be assigned are determined. The problem is then solved by using our modified Balas algorithm, which guarantees that an optimal solution will be found.
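To make constraints (1) and (2) concrete, the following short Python sketch (ours, not part of the paper) checks a station-assignment vector A against the cycle-time and precedence constraints; the four-task network used in the example is hypothetical:

```python
from collections import defaultdict

def is_feasible(A, T, preds, C):
    """Check a station-assignment vector A (A[i] = station of task i, 1-based)
    against cycle-time constraints (1) and precedence constraints (2).
    preds[i] lists all predecessors of task i (tasks are 0-indexed here)."""
    load = defaultdict(int)
    for i, station in enumerate(A):
        load[station] += T[i]
    cycle_ok = all(v <= C for v in load.values())            # constraints (1)
    prec_ok = all(A[m] <= A[n]                               # constraints (2)
                  for n in range(len(A)) for m in preds[n])
    return cycle_ok and prec_ok

# Hypothetical 4-task network: 1 -> 3, 2 -> 3, 3 -> 4, with C = 5
T = [2, 3, 4, 1]
preds = [[], [], [0, 1], [0, 1, 2]]
print(is_feasible([1, 1, 2, 2], T, preds, 5))   # True
print(is_feasible([1, 1, 1, 2], T, preds, 5))   # False: station 1 is overloaded
```

Note that a predecessor may share a station with its successor (Am = An), since tasks within a station are performed sequentially; this is why (2) uses a weak inequality.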

The Determination of Upper and Lower Bounds on Task Assignment

In order to restrict the enumeration of possible work task assignments, an upper bound on the latest feasible station into which a task can be assigned is determined as follows:

    Ui = H - [(Ti + Σ (k ∈ Si) Tk) / C]    for i = 1,...,N-1    (3)

and UN = H - 1, where [x] denotes the smallest integer ≥ x. This states that the upper bound on the terminal task is one station less than the current best known solution H. Analogously, (4) identifies the earliest possible station to which a task can be assigned by virtue of cycle time and precedence restrictions:

    Li = [(Ti + Σ (k ∈ Pi) Tk) / C]    for i = 1,...,N    (4)

The bounds provided by (3) and (4) are similar to those given in [15]. We use the bounds somewhat differently, however. In [15], the bounds are used to reduce the number of 0-1 variables required in problem formulation. We use these bounds directly to specify the range of integer values Ai can assume in assigning task i to a work station. Stronger bounds than those given by (3) and (4) may be found when the magnitude of C approaches the maximum task time, T*. This is fortunate, since our experience indicates that a problem becomes increasingly difficult to solve when the number of stations in which assignments can be made increases. This, of course, is precisely what happens when C decreases, approaching T*. When C approaches T*, many of the tasks with performance times close in magnitude to T* typically must be assigned to a station alone. To identify such tasks, calculate Ui and Li using (3) and (4). Then, taking each task i in order, from 1 to N, determine if there exists a task k with Tk > 0 such that Ti + Tk ≤ C, where i ≠ k. Note that the only k's that need to be

considered are those where Lk ≤ Ui and Uk ≥ Li, and where either {k ∈ Pi* or k ∈ Si*}, or {k ∉ Pi and k ∉ Si}. If no such k exists, then set Ti = C. When all tasks have been evaluated in this fashion, recalculate Ui and Li using (3) and (4). For the example problem in Figure I, the sets of lower and upper bounds as calculated by (4) and (3) are {1,2,2,2,3,3,3,3,4} and {4,4,5,7,7,7,7,7,7}, when H = 8. The corresponding improved lower and upper bounds obtained by using the procedure described are {1,2,3,3,3,4,4,4,5} and {3,4,5,7,7,7,7,7,7}. It is interesting to note that for the example problem, use of this improved procedure increases the lower bound number of work stations from four to five. Since a heuristic solution of AN = 5 is easily obtained, the revised lower bound verifies that the heuristic solution is optimal. This improved bounding method can thus aid the solution procedure by providing an improved bound on the minimum number of work stations required, as well as by reducing the number of station assignments which have to be explicitly evaluated for tasks when C approaches T*.

Optimizing Procedure

Our adaptation of the Balas algorithm can be summarized as follows: (1) nonnegative integer variables Ai are used, rather than 0-1 variables; (2) the order in which variables are considered for augmentation is prespecified by the node numbering scheme, rather than by using binary infeasibility tests; and (3) we exploit problem structure to expedite fathoming and backtracking. Initially, a unique terminal dummy task with TN = 0 is appended to the network. Lower and upper bounds are then computed, and the idle time vector, I, is initialized to C for all stations j, j = 1,2,...,(H-1). In developing the upper bounds, Ui, H is set equal to N if a heuristic solution is not known. BN is initialized to H regardless of how H is determined.
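For illustration, the bound computations (3)-(4) and the task-time inflation test for C near T* can be sketched in Python as follows. This is our rendering, not the paper's FORTRAN; the four-task chain is hypothetical, integer task times are assumed so that [x] can be computed exactly, and preds/succs hold full (transitive) precedence sets while preds_imm/succs_imm hold the immediate ones:

```python
def ceil_div(a, b):
    """[a/b]: the smallest integer >= a/b, for positive integers."""
    return -(-a // b)

def bounds(T, preds, succs, C, H):
    """Equations (3) and (4): latest (U) and earliest (L) feasible stations.
    The dummy terminal task is handled by the special case U_N = H - 1."""
    N = len(T)
    L = [ceil_div(T[i] + sum(T[k] for k in preds[i]), C) for i in range(N)]
    U = [H - ceil_div(T[i] + sum(T[k] for k in succs[i]), C) for i in range(N - 1)]
    U.append(H - 1)    # U_N = H - 1
    return L, U

def inflate_lone_tasks(T, preds, succs, preds_imm, succs_imm, C):
    """If no task k can share a station with task i, set T_i = C.  A partner k
    must be an immediate predecessor/successor of i, or entirely unrelated to
    i by precedence; T_k > 0 excludes the dummy task."""
    T = list(T)
    for i in range(len(T)):
        partners = [k for k in range(len(T)) if k != i and T[k] > 0
                    and (k in preds_imm[i] or k in succs_imm[i]
                         or (k not in preds[i] and k not in succs[i]))]
        if not any(T[i] + T[k] <= C for k in partners):
            T[i] = C
    return T

# Hypothetical chain 1 -> 2 -> 3 -> 4 (task 4 is the zero-time dummy), C = 5, H = 4
T = [3, 4, 2, 0]
preds = [[], [0], [0, 1], [0, 1, 2]]
succs = [[1, 2, 3], [2, 3], [3], []]
print(bounds(T, preds, succs, C=5, H=4))   # ([1, 2, 2, 2], [2, 2, 3, 3])
```

Note that the inflation pass is order-dependent: an inflated Ti feeds the checks for later tasks, mirroring the paper's instruction to take each task i in order from 1 to N.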

Augmentation then begins. Task 1 is assigned to station 1: A1 = 1, and I1 is reduced by T1. Next, task 2 is assigned to its earliest precedent- and cycle-time-feasible station. The earliest precedent-feasible station is j* = max{Ai | i ∈ P2*}, or j* = 1 if P2* is empty. The earliest cycle-time-feasible station is determined by scanning I from j* to U2 for the lowest numbered station j' such that Ij' ≥ T2. Task 2 is then assigned to station j' by setting A2 = j' and subtracting T2 from Ij'. Tasks 3 through N are assigned in similar fashion. If augmentation proceeds to assign task N to a work station (i.e., an improved solution is found), the Ui are redefined by subtracting (BN - AN) from Ui, for i = 1,...,N. The incumbent best solution then becomes Bi = Ai, i = 1,...,N. Ij is reinitialized to C, j = 1,...,AN-1, and augmentation again begins with task 1. However, augmentation may not proceed to task N. As augmentation progresses, for some task i there may not exist a station j in the interval [max{Ak | k ∈ Pi*}, Ui] where Ij ≥ Ti, because of the tightened upper bounds. Consequently, backtracking occurs: the idle time of station Ai-1 is increased by Ti-1, and stations Ai-1 + 1 through Ui-1 are scanned for the lowest numbered station j* such that Ti-1 ≤ Ij*. If no such j* exists, backtracking proceeds to task i-2, and so on. If j* exists, then Ai-1 = j* and Ij* is reduced by Ti-1. Optimality is then assured either when AN = LN, or when an attempt is made to backtrack below task one. In the terminology of implicit enumeration, the latter is equivalent to failing to complete a partial solution in which the left-most variable has been complemented and underlined [4].

4. Exploiting Problem Structure

The ALB problem has a well-defined structure which can be exploited to improve the performance of the basic algorithm. In addition, our formulation of

the problem imposes further structure which can lead to improvements. We now describe four such improvements which reduce the computation time required to solve a given problem.

Backtracking Using Chains

Backtracking can be expedited when chains exist in an assembly network. A chain is an ordered set of consecutively numbered tasks such that each task is an immediate predecessor of the task which follows it in the set. As an example, Figure I has two chains, {1,2,3} and {8,9}. If the search for a feasible station assignment fails for task n, ordinarily backtracking proceeds to task n-1: task n-1 is reassigned to a station beyond An-1, and the search for a feasible assignment for n begins anew. But if (n-1) ∈ Pn*, then the reassignment of n-1 to a station later than An-1 only reduces the search interval for task n. Task n, in this case, can only fail again to be assigned earlier than An. When chains exist, a considerable amount of searching can be avoided by directly backtracking to the highest numbered task not in the chain. Using the chain {1,2,3} from Figure I, it can be seen that if task 3 fails to be assigned, then backtracking can proceed immediately to the fictitious task number 0; i.e., if task 3 cannot be assigned, then the incumbent solution BN is optimal.

Backtracking Using Network Cuts

In order to describe additional means for improving algorithm efficiency, an artifice called a network cut will be introduced. This is a notion that has proven to be useful in resource-constrained project scheduling [18, 19], and draws its name from the manner in which a problem is portrayed on a Gantt chart. The notation used in describing a cut is given in Table II.
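Before turning to the cut-based refinements, the basic Section 3 procedure (assign tasks in numerical order to the earliest precedence- and cycle-time-feasible station, backtrack on failure, and tighten the incumbent whenever a full assignment is found) can be rendered compactly in recursive form. This sketch is ours, omits the Ui/Li bounds, chains, and cuts, and uses a hypothetical four-task network:

```python
def line_balance(T, preds_imm, C):
    """Minimize the number of stations: depth-first assignment of tasks
    0..N-1 in numerical order, trying stations in increasing order and
    allowing at most one new station to be opened per step."""
    N = len(T)
    best = {'count': N + 1, 'assign': None}
    A = [0] * N          # A[i] = station of task i (1-based), 0 = unassigned
    idle = [C] * N       # idle time per potential station

    def dfs(i, used):    # used = highest station number opened so far
        if used >= best['count']:
            return       # cannot improve the incumbent
        if i == N:
            best['count'], best['assign'] = used, A[:]
            return
        jmin = max([A[k] for k in preds_imm[i]] + [1])      # precedence floor
        for j in range(jmin, min(used + 1, best['count'] - 1) + 1):
            if idle[j - 1] >= T[i]:                         # cycle-time check
                A[i] = j
                idle[j - 1] -= T[i]
                dfs(i + 1, max(used, j))
                idle[j - 1] += T[i]                         # backtrack
                A[i] = 0

    dfs(0, 0)
    return best['count'], best['assign']

# Hypothetical 4-task network: 1 -> 3, 2 -> 3, 3 -> 4, with C = 5
print(line_balance([2, 3, 4, 1], [[], [], [0, 1], [2]], 5))   # (2, [1, 1, 2, 2])
```

The structural rules of Section 4 all act on this search tree: chains and cuts decide how far to jump back, and the idle-time tests prune branches before they are expanded.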

Table II
Notation Used to Describe Network Cuts

A(i) : The current partial schedule of task assignments; A(i) = {A1,A2,...,Ai} when task i+1 is being considered for station assignment.
is : is = max{i | Li ≤ s}.
is* : is* = max{i | i ∈ Qs and where Ai was ≤ s at least once since task i-1 was last assigned}.
Qs : The ordered set of all tasks 1 through is. That is, Qs = {1,2,3,...,is}.
s : Cut s; a station number in the interval [1,LN) which meets conditions C1 and C2, as specified.
Ys(z) : The zth saved partial schedule at cut s, where z = 1,2,...,Z. That is, Ys(z) = A(is) for some z.

A cut s is a station number that (a) is used to identify when certain fathoming and backtracking rules may be applied; and (b) is a parameter used in the application of the rules themselves. Station s is a cut if there exists a task i such that:

C1. Li = s + 1.
C2. For all k > i, Lk > s.

Figure II illustrates the application of C1 and C2 to the example problem. Open bars extend from Li to Ui for each task i, representing all possible station assignments as defined by the improved bounds. Cuts are identified by the dashed lines at stations 1, 2, 3, and 4. Figure II also illustrates the computation of is and Qs. For each s, s = 1,2,3,4, the set Qs consists of all tasks 1 through is, where is is the largest task number such that the earliest station into which the task can be assigned is less than or equal to s. For example, i1 = 1, Q1 = {1}; i2 = 2, Q2 = {1,2}; i3 = 5, Q3 = {1,2,3,4,5}; i4 = 8, Q4 = {1,2,3,4,5,6,7,8}. Although not shown in Figure II, it is possible that some task k in the interval [1,is] will not have the earliest station into which it can be assigned less than or equal to s. That is, Lk > s. Such tasks are included in Qs even though their earliest station assignments are greater than s. Including all tasks 1 through is in Qs is required so that optimum solutions are not excluded when employing the cut rules for fathoming partial solutions. We shall now explore several ways in which cuts may be used to expedite backtracking and fathoming. First, we will consider backtracking. Suppose backtracking has progressed to task is for some s. It is then possible to immediately backtrack to task i' = max{i | i ∈ Qs and Ai > s}, because backtracking

Figure II
Gantt Chart of Bowman's Problem Illustrating Network Cuts
(Tasks 1-9 are plotted against stations 1-8; open bars extend from Li to Ui, and dashed lines mark the cuts at stations s = 1, 2, 3, and 4.)

to any task i'' ∈ Qs, where i'' > i', could only result in reassignments for i'', i''+1,...,is that either have no effect on idle time Ij for j > s, or else decrease Ij for j > s, from that which was available before backtracking to is. (See Appendix for proofs.) We note that this backtracking procedure is a function of the current partial schedule of task assignments A(i), whereas backtracking utilizing chains is dependent only on the task numbering scheme. Hence, these procedures complement one another.

Fathoming Using Cuts

Fathoming rules are considered next. Ordinarily, A(i) is fathomed when task i+1 fails to find an Ij ≥ Ti+1, where j is in the interval [max{Ak | k ∈ P*i+1}, Ui+1]. If i = is, then two additional fathoming techniques may be applied. The first makes use of the fact that if A(is) is to lead to an improved solution, AN < BN, then (5) must hold:

    Σ (j = 1 to s) Ij ≤ C·UN - Σ (m = 1 to N) Tm    (5)

This simply states that the idle time incurred prior to cut s cannot exceed the total idle time permitted by the current upper bound on the problem. Rule (5) is applied immediately following the feasible assignment of task is. If it fails, A(is) is fathomed and backtracking proceeds to is - 1, since the reassignment of is to a later station will not satisfy (5). If (5) is satisfied, then augmentation continues with is + 1. In either event, the computational cost of invoking (5) is minimal, since the right-hand side of (5) is a constant, and the left is a simple sum.
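As an illustration, cut identification (conditions C1 and C2) and the idle-time budget test (5) might be coded as follows. The Python sketch is ours; L holds the improved lower bounds of the Bowman example, and the printed values reproduce the is and Qs computations above:

```python
def is_cut(s, L):
    """Station s is a cut if some task i has L_i = s + 1 while every
    higher-numbered task k has L_k > s (conditions C1 and C2).
    L is indexed 0..N-1 by task; stations are 1-based."""
    N = len(L)
    return any(L[i] == s + 1 and all(L[k] > s for k in range(i + 1, N))
               for i in range(N))

def i_sub_s(s, L):
    """i_s = max{i | L_i <= s}, returned as a 1-based task number."""
    return max(i + 1 for i in range(len(L)) if L[i] <= s)

def passes_rule_5(idle, s, C, U_N, T):
    """Rule (5): idle time accumulated in stations 1..s may not exceed the
    total idle time C*U_N - sum(T) allowed by the incumbent bound."""
    return sum(idle[:s]) <= C * U_N - sum(T)

# Improved lower bounds for Bowman's problem: L = {1,2,3,3,3,4,4,4,5}
L = [1, 2, 3, 3, 3, 4, 4, 4, 5]
print([s for s in range(1, 5) if is_cut(s, L)])    # [1, 2, 3, 4]
print([i_sub_s(s, L) for s in range(1, 5)])        # [1, 2, 5, 8]
```

Both printed lists match the paper's example: cuts occur at stations 1 through 4, with i1 = 1, i2 = 2, i3 = 5, and i4 = 8.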

A second fathoming technique relies on the generation and comparison of partial schedules in a manner analogous to their use in [19]. Basically, the current schedule A(is) is compared to a previously saved schedule at is, Ys(z), for some z, z = 1,...,Z. If it is determined that A(is) could potentially result in an improved solution for the problem, then A(is) is saved in Ys(z), for some z, and augmentation continues with is + 1. In this case, A(is) is called a good partial schedule. However, in many cases it can be shown that A(is) cannot lead to an improved solution. When this occurs, A(is) is called an inferior partial schedule: backtracking progresses to task is. Following the development in [19], a partial schedule A(is) is inferior to a saved schedule Ys(z) if (6) holds:

    Am = A'm for all m ∈ Qs such that A'm > s, where Am ∈ A(is) and A'm ∈ Ys(z).    (6)

This simply indicates that if the only difference between the partial schedule A(is) and the saved schedule Ys(z) is that tasks in Ys(z) that were assigned to stations at or before s are now assigned in A(is) to different stations, either at or before s or later than s, then an improved solution to the problem cannot result. The reason is that the rearrangement of task assignments to stations s and before cannot provide improved assignment opportunities for tasks is+1, is+2,...,N over those which were available when Ys(z) was fathomed. And reassignments later than s merely reduce the station time that tasks is+1, etc., compete for. The number of good partial schedules generated for most problems is very large. Hence, there is a need to find a mechanism for specifying Z for each cut s. The larger Z is, the more likely it is to find inferior schedules. But storage can become a limiting factor, and there is a computational expense involved in evaluating many schedules. On the basis of experience gained with

project scheduling problems, we have set Z = 10 for all s. Operationally, the ten most recently generated good schedules are saved. Rule (6) is invoked immediately following the feasible assignment of task is. If A(is) is inferior, backtracking commences with is. If A(is) is good, it is saved in Ys(z), and augmentation continues with is + 1. When a schedule A(is) is found where Ai ≤ s for all i ∈ Qs, the ALB problem may be partitioned. We call A(is) a perfect partial schedule in this instance, since it is impossible to find an improved schedule for tasks i ∈ Qs, in the sense that it could lead to an improved solution for the problem. Hence, when a perfect partial schedule is found, optimality is assured when an attempt is made to backtrack to task is. Thus, the problem has been partitioned into two subproblems: a solved problem for tasks i ∈ Qs, and an unsolved problem for tasks i ∉ Qs. Figure III illustrates partitioning for the example problem. Here, A(i1) = {1}, and A(i2) = {1,2}. Once A(i2) is specified, for example, enumeration is restricted to tasks 3,...,9. If an attempt is made to backtrack to task 2, optimality is assured. (For this trivial problem, optimality is actually verified by comparison to the improved lower bound, i.e., when A9 = L9 = 5.)

Additional Tests Using Idle Time

In this section we introduce two additional expediting rules which are based on a knowledge of the minimum amount of idle time which must be present in each work station. The first rule discussed is a "look ahead" rule which permits the algorithm to fathom a partial solution when it can be shown that the idle time incurred in the partial solution A(is) exceeds permissible levels for an overall improved solution. This rule is thus a stronger version of (5). The second rule permits improved backtracking from a cut when it can be shown that the maximum level for the partial solution idle time has been exceeded.
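Rule (6) is garbled in surviving copies of the paper. Under one plausible reading (A(is) is inferior to Ys(z) when the two schedules agree on every task that the saved schedule placed after station s, so the remaining differences only rearrange work at or before the cut, or push it beyond the cut), the comparison reduces to a one-line check. The sketch and its example schedules are ours:

```python
def is_inferior(A_cur, Y_saved, s):
    """One reading of rule (6): the current partial schedule cannot beat the
    saved one if they agree on every task the saved schedule assigned beyond
    station s.  The remaining tasks then see no more capacity after the cut
    than was available when Y_s(z) was examined."""
    return all(a == y for a, y in zip(A_cur, Y_saved) if y > s)

# Hypothetical partial schedules over tasks 1..4, with the cut at s = 2
print(is_inferior([2, 1, 2, 3], [1, 2, 2, 3], s=2))   # True: pre-cut rearrangement only
print(is_inferior([1, 2, 2, 4], [1, 2, 2, 3], s=2))   # False: differs beyond the cut
```

In the first comparison only tasks at or before the cut have moved, so the current schedule offers the remaining tasks nothing new; in the second, a task sits at a different post-cut station, so the schedules are not comparable under this rule.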

Figure III
Illustration of a Perfect Partial Schedule
(Tasks 1-9 are plotted against stations 1-8.)

By solving the following knapsack problem it is possible to determine the minimum amount of idle time, Mj, which must be present in each station j:

    Mj = minimum of: C - Σ (i ∈ Wj) TiXi    (7)

    subject to: Σ (i ∈ Wj) TiXi ≤ C    (8)

where Xi = 1 if task i is assigned to station j, and Xi = 0 otherwise. Here, Wj can be obtained by simply identifying tasks i with Li ≤ j and Ui ≥ j. For example, from Figure II, W3 = {1,2,3,4,5}. Equation (5) can now be replaced with (9):

    Σ (j = 1 to s) Ij ≤ C·UN - Σ (m = 1 to N) Tm - Σ (j = s+1 to N) Mj    (9)

The stronger test provided by (9) obviously comes at some computational expense. However, the knapsack problem itself is highly structured and usually yields solutions very quickly using a simple enumeration procedure. (Tasks in Wj are relabeled by non-increasing task time. Then the Xi are augmented to 1 in order of task number. Backtracking is in reverse order of task number.) On our test problems, this idle time test has been very effective in general, but specifically when the interaction of large task times and strong precedence relationships in a problem "force" the total idle time to be distributed into certain work stations. If the partial solution A(is) is fathomed due to excessive idle time by (5) or (9), then instead of backtracking to task is - 1, it is possible to backtrack directly to task is*. The proof of this assertion follows from the observation that it is impossible to reassign any task i' ∈ {i | i > is* and i ∈ Qs} to a station before s + 1, given the partial solution A(is). (If it were possible, it would

already have occurred during the enumeration process, given the convention of assigning tasks to the lowest numbered work station that is precedent and cycle-time feasible.) Hence, backtracking to any task i' cannot reduce the excessive idle time occurring in stations 1 to s, which was the original reason why fathoming by (5) or (9) took place. The effect of this backtracking procedure is most pronounced when the knapsack routine identifies a relatively uniform distribution of idle time across all stations, or when total idle time is close to zero. Under these conditions, it often eliminates from explicit evaluation hundreds of partial solutions which would not have led to improved solutions to the problem.

5. Computational Experience

The algorithm described in Section 3 and each of the problem structure exploiting techniques described in Section 4 were programmed in FORTRAN IV and tested on an Amdahl 470/V8 computer (H compiler, OPT = 2). The results from applying this program to a series of test problems appearing in the open literature are given in Table III. The cycle times selected include both those that have previously appeared in the open literature, and additional times which were selected in an effort to obtain problems where the optimal solution is not equal to the theoretical lower bound. Problems with the latter characteristic were sought because without a proven lower bound a problem is generally more difficult to solve optimally. It is, of course, not always possible to select a cycle time which will give an optimal solution greater than the theoretical minimum number of stations. The Kilbridge and Wester problem, in fact, has an optimal solution equal to the theoretical minimum number of stations for the complete range of cycle times, T* ≤ C ≤ Σ Ti.
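The knapsack computation (7)-(8) and the strengthened test (9) can be sketched by brute-force subset enumeration, which is adequate for the small Wj sets that arise (the paper's own routine, which orders tasks by non-increasing time before enumerating, is faster but computes the same Mj). The Python below is our illustration, with hypothetical data:

```python
from itertools import combinations

def min_station_idle(times, C):
    """M_j of problems (7)-(8): C minus the largest subset sum of the task
    times in W_j that does not exceed C."""
    best = 0
    for r in range(1, len(times) + 1):
        for combo in combinations(times, r):
            if best < sum(combo) <= C:
                best = sum(combo)
    return C - best

def passes_rule_9(idle, s, C, U_N, T, M):
    """Rule (9): the idle time in stations 1..s must also leave room for the
    unavoidable idle time M[j] of every station beyond the cut.
    M is indexed 0..N-1 by station (M[0] is station 1)."""
    return sum(idle[:s]) <= C * U_N - sum(T) - sum(M[s:])

print(min_station_idle([3, 4, 2], C=5))   # 0: the load 3 + 2 fills the station
print(min_station_idle([4, 4], C=5))      # 1: the best attainable load is 4
```

When every Mj is zero, rule (9) collapses to rule (5); the test gains power exactly in the situation the paper describes, where large task times force idle time into particular stations.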

Table III
Computational Results

Problem      Number     Cycle   Optimal Number   CPU Time**
             of Tasks   Time    of Stations      With Cuts      Basic Approach   With Cuts,
                                                                Without Cuts     Heur. Start

Merten           7        6        6+             .022 (.022)     .021             .021 H
                          7 L      5              .024            .023             .021 H
                          8        5+             .022            .020             .022 H
                         10 L      3              .022            .020             .019 H
                         15        2              .016            .016             .017 H
                         18 L      2              .017            .016             .015 H
Jaeschke         9        6        8+             .028 (.028)     .026             .025 H
                          7 L      7+             .027            .025             .024 H
                          8        6+             .023 (.026)     .020             .022 H
                         10 L      4              .024            .020             .020 H
                         18 L      3              .020            .020             .021 H
Jackson         11        7        8+             .030            .028             .026 H
                          9        6              .025            .022             .021 H
                         10 L      5              .026            .026             .026
                         13        4              .027            .021             .024 H
                         14 L      4              .022            .022             .021 H
                         21 L      3              .025            .020             .022 H
Mitchell        21       14 L      8              .055            .048             .049 H
                         15        8+             .039            .076             .038 H
                         21        5              .045            .044             .042
                         26 L      5              .039            .035             .039 H
                         35        3              .037            .038             .039
                         39 L      3              .033            .033             .032 H
Heskia          28      138 L      8              .059            .058             .046 H
                        205        5              .867           6.881             .044 H
                        216 L      5              .041            .058             .041 H
                        256        4              .046            .050             .043 H
                        324 L      4              .041            .040             .040 H
                        342        3              .039            .041             .036 H
Sawyer          30       25       14+            1.106           > 8              1.100
                         27 L     13+             .524           > 8               .522
                         30       12+            1.555           > 8               .169 H
                         36       10+             .048           > 8               .046 H
                         41 L      8              .292           6.052             .293
                         54        7+             .048           > 8               .050 H
                         75 L      5              .043            .047             .042 H
Kilbridge       45       57 L     10             5.828           > 8               .082 H
& Wester                 79        7              .221           > 8               .218
                         92        6              .216           > 8               .215
                        110 L      6              .063            .062             .063 H
                        138        4              .089            .473             .086
                        184 L      3              .071            .067             .064
Tonge           70      176 L     21+            > 8             > 8              > 8
                        364 L     10              .100            .100             .102
                        410        9              .101            .099             .098 H
                        468 L      8              .100            .097             .097 H
                        527        7              .098            .099             .094 H

Table III (continued)
Computational Results

Problem      Number     Cycle    Optimal Number   CPU Time**
             of Tasks   Time     of Stations      With Cuts      Basic Approach   With Cuts,
                                                                 Without Cuts     Heur. Start

Arcus           83       50.48      16+            .470           > 8              .281 H
                         58.53 L    14             .127            .122            .125 H
                         68.42 L    12             .126            .125            .123 H
                         75.71      11+            .126 (.527)     .122            .126 H
                         84.12      10+            .750           > 8              .774 H
                         88.98       9             .126            .118            .121 H
                        108.16       8+            .254           > 8              .251 H
Arcus          111       57.55 L    27             > 8            > 8              .180 H
                         88.47      18+           1.921           > 8              .171 H
                        100.27      16+           6.101           > 8             3.142 H
                        107.43 L    15+           2.883           > 8             2.880 H
                        113.78 L    14             .166            .163            .160 H
                        170.67 L     9             .162            .161            .159 H

* Problem sources are given in the references. See the Tonge reference for a description of the Mitchell problem.
** Amdahl 470/V8 CPU time in seconds, including input and output, to find and verify the optimal solution.
+ The optimal number of stations is greater than the theoretical minimum number of stations.
( ) For the four problems reporting a time in parentheses, the optimal solution equals the improved lower bound. The number in parentheses indicates the solution time when the improved lower bound is not used.
L The cycle time previously reported in the literature.
H The heuristic solution is also optimal.

The cycle times previously appearing in the literature are indicated by an "L" in Table III. Optimal solutions greater than the theoretical minimum number of stations are noted by a "+". Four of the sixty problems have optimal solutions equal to our improved lower bound. These problems are identified as having three solution times, with the number in parentheses the time required when this improved bound is not used by the optimizing procedure. The solution times reported are total CPU times in seconds for reading the problem data, finding and verifying the optimal solution, and printing solution results. Each problem-cycle time combination was solved with and without the cut-related exploiting rules, as indicated by the nomenclature "With Cuts" and "Basic Approach Without Cuts." In all cases, no initializing heuristic solution was employed: H was initially set equal to N and the algorithm worked "down" from this solution to the optimal solution. With the exception of the Mitchell (C = 15) problem, there is little to differentiate the basic approach results from the basic approach with the cut features appended for the Merten, Jaeschke, Mitchell, and Heskia problems. This is consistent with our experience in general: the basic approach provides cost-effective optimal solutions to problems with up to about 25 tasks. For problems in this range, the computational overhead involved in invoking the cut-related rules usually balances the benefits derived from the reduction in enumeration the rules afford. Hence, the solution times with and without cuts are usually roughly equivalent for problems of this size. (Several runs of the program on the same set of problems yielded time variations of ±.003 seconds. Hence, care should be taken in inferring algorithmic efficiency differences when solution time differences are less than .006 seconds.) For problems containing more than 25 tasks, the power of cut-based rules becomes more evident.
For many of the larger test problems, the optimal solution could not be found and verified without these rules within the maximum permitted CPU time of 8 seconds, although the problems could be solved well within this limit when the cut rules were invoked. The extreme example of this is the Sawyer problem (C = 36 and C = 54). Within the imposed eight-second time limit, optimal solutions could not be found and verified for only two problems, the Tonge-70 (C = 176) and the Arcus-111 (C = 57.55). In the former case, the optimal solution of 21 stations was actually found within .072 seconds, but it could not be verified as optimal within 8 seconds. In the latter case, the best solution found was 29 stations before the program was terminated.

These results are very encouraging and compare favorably with results obtained from the branch and bound algorithm recently developed by Magazine and Wee [11, 12, 13]. In Table 1 of [13], they report solution times (using an IBM (VM) 370) for 27 of the 60 problems noted by an "L" in our Table III. (They did not solve the Arcus-83 problem.) They obtained optimal solutions to all 27 problems, but did not verify optimality for the Sawyer (C = 27) and Tonge (C = 176) problems. They solved the Sawyer and Tonge problems heuristically in .316 and 3.013 seconds, respectively. The Arcus-111 (C = 57.55) problem, which we were not able to solve optimally in 8 seconds, was solved in .396 seconds with their branch and bound algorithm. They also solved three other problems using heuristic rules, Mitchell (C = 14), Sawyer (C = 41), and Kilbridge and Wester (C = 57), but were able to verify optimality in each case, because the heuristic solution equals the theoretical minimum number of stations. The total reported time for the 26 problems (i.e., without Arcus C = 57.55) for which both approaches obtained (but may not have verified) optimal solutions is 7.053 seconds from [13] and 7.824 seconds from Table III, plus the Tonge (C = 176) time of .072 seconds.

These time comparisons are not intended to demonstrate that one approach is faster (better) than another. Such an inference would be unfounded, since the times are reported for different computers and timing routines. Rather, they demonstrate that both are effective procedures for obtaining optimal solutions to many realistically sized ALB problems. Other attractive features of our approach are not evident from the data presented in Table III. First, our approach requires only a modest amount of computer storage. For example, the program used in this paper requires less than 140K bytes of storage and is dimensioned to solve problems with up to 120 tasks, where each task may have up to 25 immediate predecessor tasks. (If necessary, storage requirements could be reduced further by eliminating such user-convenience items as Gantt charts.) Second, our procedure does not require the user to set a critical limit or implement other rules for eliminating portions of the search, as is frequently the case in employing dynamic programming or branch and bound methods to solve this problem. This is because the exact storage requirements are known in advance with our procedure, which is typically not the case with these other approaches. Furthermore, the performance of our approach is frequently improved through the use of well-known heuristics such as rank positional weight, maximum task time first (called IUFFD in [13]), and others [17]. Initializing our procedure with a simple heuristic solution, rather than setting H = N, reduces the upper bounds Ui, which in turn reduces the feasible search intervals for each task. Using a heuristic start typically results in faster solution times, because the intervals are smaller, or because the heuristic solution can be verified as optimal directly through a bounds test.
Also, since a heuristic such as maximum task time first requires very little computational effort and little computer core to program, the costs of using such a technique are negligible.
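The greedy heuristics named above all share one skeleton: open a station, repeatedly add the precedence- and cycle-time-feasible task ranked highest by some priority rule, and open the next station when nothing else fits. A minimal Python sketch under that reading (the data and function name are illustrative, not the paper's code; maximum task time first uses Ti as the key, and a rule such as "upper bound first" would simply substitute a different key):

```python
def greedy_balance(times, preds, cycle_time, key):
    """Open stations one at a time; in each, repeatedly assign the
    feasible unassigned task with the largest `key` value.
    `times`: task -> duration; `preds`: task -> set of predecessors."""
    unassigned = set(times)
    stations = []
    while unassigned:
        load, station = 0, []
        while True:
            done = set(times) - unassigned   # tasks already assigned
            candidates = [t for t in unassigned
                          if preds.get(t, set()) <= done
                          and load + times[t] <= cycle_time]
            if not candidates:
                break
            t = max(candidates, key=key)
            station.append(t)
            load += times[t]
            unassigned.remove(t)
        if not station:   # guard: a task exceeds C or precedence is cyclic
            raise ValueError("no feasible assignment for remaining tasks")
        stations.append(station)
    return stations

# Hypothetical five-task line, cycle time 7, maximum task time first:
times = {1: 4, 2: 3, 3: 5, 4: 2, 5: 4}
preds = {3: {1}, 4: {1, 2}, 5: {3, 4}}
print(greedy_balance(times, preds, 7, key=lambda t: times[t]))
# [[1, 2], [3, 4], [5]]
```

For this small example the greedy balance uses three stations, which matches the theoretical minimum (total work 18 with C = 7), so a bounds test would verify it as optimal immediately.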

In Table III, under the column "Heur. Start With Cuts," total CPU solution times are given when the procedure is initialized with a heuristic solution provided by a rule called "upper bound first" [17]. This rule assigns precedence- and cycle-time-feasible tasks to work stations in order of nonincreasing Ui (instead of nonincreasing Ti, as does the maximum task time first rule). The CPU times shown include the additional time required to obtain the heuristic solution. Using this heuristic initializing procedure, our routine with the cut-based rules found and verified optimal solutions to 59 of the 60 problems in a total time of 12.660 seconds. The optimal solution to the Tonge (C = 176) problem was found in .072 seconds as before, but was not verified optimal within 8 seconds.

6. Conclusions

An integer programming algorithm for determining the minimum number of work stations to assemble a product subject to precedence and cycle time constraints is described. The algorithm is based on a systematic evaluation of all possible assignments of work tasks to work stations. Network cuts, idle time tests, and the use of network chains greatly enhance the efficacy of the approach described. Computational results on a series of test problems found in the open literature are reported and are quite favorable. The general conclusion to be reached is that the procedure can obtain optimal balances in a reasonable amount of time for assembly lines consisting of fifty or fewer tasks. While the algorithm is capable in certain instances of optimally solving assembly lines with one hundred or more tasks, such problems are generally characterized by a large cycle time in relation to the maximum task time. For those problems in which the cycle time is greater than 125 to 150 percent of the maximum task time, for example, the procedure is likely to produce an optimal solution in a reasonable amount of computation time.

Acknowledgements

The authors express their appreciation to the referees for providing a number of constructive comments on this paper. Our special thanks go to Michael Magazine, University of Waterloo, for his careful reading of the drafts and his insightful suggestions for improving the final manuscript.

Appendix

Theorem 1

If A(is) is inferior to any saved schedule in Ys(z), z = 1,...,Z, then it is impossible to assign tasks is + 1,...,N such that AN < BN.

Proof

We first note that to be saved in Ys(z), a schedule must have been considered good earlier in the enumeration process. Following the saving of Ys(z), the enumeration procedure continued augmenting with tasks is + 1, etc., until Ys(z) was fathomed, either because an improved solution to the problem was found or because backtracking took place. Ultimately a new schedule A(is) for tasks m ∈ Qs was calculated. If (6) holds, then there are identical assignments in A(is) and Ys(z) for all tasks m that had assignments Am > s in Ys(z). Thus the only tasks that are of interest, that is, those that could potentially provide improved scheduling opportunities for tasks is + 1,...,N, are tasks m* ∈ Qs with assignments Am* < s. These tasks can be reassigned to stations before (Am* < s) or after (Am* > s) station s. In the former case, given the definitions of s and is, there does not exist a task i > is such that Li < s. Therefore the reassignment of any m* to any Am* < s cannot affect the scheduling of tasks is + 1,...,N. In the latter case, the assignment of any m* to any Am* > s would only decrease the station time in a station s + 1, s + 2, etc., from what it was when Ys(z) was fathomed. But this would simply make it more difficult for at least one task is + 1,...,N to be assigned. Thus, A(is) cannot lead to a solution better than that which was found with the completion of Ys(z).

Theorem 2

When a (perfect partial) schedule A(is) is found where Am < s for all m ∈ Qs, optimality is assured when an attempt is made to backtrack to task is.

Proof

Given the definitions of Li, is, and s, there does not exist a task i > is that can be assigned to a station less than s + 1. If all tasks m ∈ Qs have been assigned stations Am < s, their reassignments could be to either stations < s or > s. In the former case, the reassignment of any m ∈ Qs cannot have any effect on the reassignment of tasks is + 1, etc., given the definitions of Li, is, and s. In the latter case, the reassignment of any m ∈ Qs can only reduce the station time available in some station > s, thus making it more difficult to reassign at least one task is + 1, etc., within its upper bound. Since the reassignment of tasks m ∈ Qs cannot improve the reassignment opportunities for any task i > is, and in particular task N, there is no benefit to be derived from backtracking to any task m ∈ Qs. Thus, when an effort is made to backtrack to is ∈ Qs, the incumbent solution BN is optimal.

Theorem 3

If backtracking proceeds to is, then it is possible to backtrack directly to task i' = max {i | i ∈ Qs and Ai > s}.

Proof

The reason that backtracking proceeded to is was that it was impossible to feasibly assign some task i > is within its upper bound Ui. Thus, to improve the assignment opportunity for such a task i, backtracking

to i'' < is must increase the time available for assignment in at least one station > s. The proof follows by showing that this time cannot be increased by backtracking to any i'' > i'. Backtracking to any task i'' ∈ Qs, where i'' > i', results in the reassignment of i'', i'' + 1,...,is to either a station < s or a station > s. In the former case there is no effect on the time in stations > s. In the latter case, the time available is actually decreased in at least one station > s.

Theorem 4

If the partial solution A(is) is fathomed due to (5) or (9), then it is possible to backtrack directly to is.

Proof

If (5) or (9) causes backtracking, it is due to excessive idle time in stations < s, or equivalently, to too much task time being assigned to stations > s in the current partial solution A(is). Thus, the only way to satisfy (5) or (9) is to reassign at least one task i ∈ Qs that currently has an assignment Ai > s to a station < s. The Theorem follows by showing that it is impossible to reassign a task i > is to a station < s until backtracking progresses to is. The key to understanding the Theorem and the proof is to understand how the algorithm assigns tasks and backtracks. First, tasks are considered for assignment in increasing numerical order and are assigned to the earliest precedence- and cycle-time-feasible station. By the definition of is, this means that when is was last assigned, tasks is + 1, is + 2,..., i were all, in turn, assigned to stations > s, because their earliest feasible station assignments were > s. It will be shown now that backtracking to any

i ∈ Rs = {is + 1, is + 2,...,i} cannot decrease the earliest feasible station assignment to < s for any k ∈ Rs.

In general, backtracking proceeds in reverse numerical order, and since assignment is in increasing numerical order, backtracking to any task i can only affect the reassignments of tasks > i. When the algorithm backtracks to any task i, it attempts to reassign this task from Ai, its current assignment, to A'i, where Ai < A'i ≤ Ui. Assuming the assignment is made, the time available in station Ai is increased by Ti units and decreased by Ti units in A'i. Of particular interest is that the reassignment of i has no impact on the time available for task assignments in stations < Ai. Specifically, it does not increase the time available in stations < Ai for tasks > i.

Assume now that the partial solution A(is) has been fathomed due to (5) or (9) and that we backtrack directly to any i' ∈ Rs. Further assume that it is possible to reassign i' to A'i', where Ai' < A'i' ≤ Ui'. (If it is not possible to reassign i', then we backtrack to i' - 1, i' - 2, etc., until we find a task that can be reassigned, or backtracking proceeds to is. In the latter case the Theorem is trivial. So we assume that a task is found that can be reassigned. We will still call it i', to keep the notation simple.) Because Ai' > s, the discussion on backtracking makes it clear that i' cannot be reassigned to a station < s, nor does its reassignment have any impact on the time available in stations < s. Because there has been no increase in the time available in stations < s, and since the previous earliest station assignment for i' + 1 was > s, i' + 1 cannot now be reassigned to a station < s. By induction, neither can i' + 2, i' + 3,...,i be reassigned to stations < s. Hence, the task time assigned to stations > s cannot be reduced without backtracking to is, or lower, which proves the Theorem.
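For readers who want the flavor of the assign-and-backtrack mechanics these proofs reason about, the following is a stripped-down enumeration in Python. It is a sketch only: tasks are tried in increasing numerical order, each placed in the earliest precedence- and cycle-time-feasible station, and backtracking bumps a task to a later station, exactly as described above; but the bounds Li and Ui, the network cuts, and the fathoming tests (5) and (9) that make the paper's algorithm efficient are omitted, and the data are hypothetical.

```python
def feasible(times, preds, cycle_time, n_stations):
    """Depth-first search for a feasible balance with a fixed number of
    stations. `times`: task -> duration; `preds`: task -> predecessors."""
    tasks = sorted(times)          # assignment in increasing numerical order
    loads = [0] * n_stations       # time already assigned to each station
    assign = {}                    # task -> station index

    def dfs(k):
        if k == len(tasks):
            return True
        t = tasks[k]
        # earliest precedence-feasible station: no earlier than any predecessor
        earliest = max((assign[p] for p in preds.get(t, ())), default=0)
        for s in range(earliest, n_stations):
            if loads[s] + times[t] <= cycle_time:
                assign[t] = s
                loads[s] += times[t]
                if dfs(k + 1):
                    return True
                loads[s] -= times[t]   # backtrack: bump t to a later station
                del assign[t]
        return False

    return dfs(0)

def min_stations(times, preds, cycle_time):
    """Smallest station count admitting a feasible balance
    (assumes every task time is at most the cycle time)."""
    n = 1
    while not feasible(times, preds, cycle_time, n):
        n += 1
    return n

# Hypothetical five-task line, cycle time 7:
times = {1: 4, 2: 3, 3: 5, 4: 2, 5: 4}
preds = {3: {1}, 4: {1, 2}, 5: {3, 4}}
print(min_stations(times, preds, 7))  # 3
```

The paper's procedure instead works "down" from an upper bound H on the number of stations and prunes with the bounds and cut rules; the sketch keeps only the assignment order and backtracking discipline that Theorems 3 and 4 exploit.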

REFERENCES

[1] Arcus, A. L. "An Analysis of a Computer Method of Sequencing Assembly Line Operations." Ph.D. Dissertation, University of California, 1963.

[2] Balas, E. "An Additive Algorithm for Solving Linear Programs with Zero-One Variables." Operations Research 13, no. 3 (July-August 1965): 517-546.

[3] Bowman, E. H. "Assembly Line Balancing by Linear Programming." Operations Research 8, no. 3 (May-June 1960): 385-389.

[4] Geoffrion, A. M. "An Improved Implicit Enumeration Approach for Integer Programming." Operations Research 17, no. 3 (May-June 1969): 137-151.

[5] Gutjahr, A. L., and Nemhauser, G. L. "An Algorithm for the Line Balancing Problem." Management Science 11, no. 2 (November 1964): 308-315.

[6] Held, M.; Karp, R. M.; and Shareshian, R. "Assembly Line Balancing--Dynamic Programming with Precedence Constraints." Operations Research 11, no. 3 (May-June 1963): 442-460.

[7] Heskiaoff, H. "An Heuristic Method for Balancing Assembly Lines." Western Electric Engineer 12, no. 3 (October 1968): 9-16.

[8] Jackson, J. R. "A Computing Procedure for a Line Balancing Problem." Management Science 2, no. 3 (April 1956): 261-272.

[9] Jaeschke, G. "Eine allgemeine Methode zur Lösung kombinatorischer Probleme." Ablauf- und Planungsforschung 5 (1964): 133-153.

[10] Kilbridge, M. D., and Wester, L. "A Heuristic Method of Assembly Line Balancing." Journal of Industrial Engineering 12, no. 4 (July-August 1961): 292-298.

[11] Magazine, M. J., and Wee, T. S. "An Iterative Improvement Heuristic for Bin Packing and Assembly Line Balancing Problems." Working Paper, Department of Management Sciences, University of Waterloo, 1979.

[12] ________. "Generalization of Bin Packing Heuristic to the Assembly Line Balancing Problem." Working Paper, Department of Management Sciences, University of Waterloo, 1979.

[13] ________. "An Efficient Branch and Bound Algorithm for the Assembly Line Balancing Problem." Working Paper, Department of Management Sciences, University of Waterloo, 1979.

[14] Merten, P. "Assembly Line Balancing by Partial Enumeration." Ablauf- und Planungsforschung 8 (1967): 429-433.

[15] Patterson, James H., and Albracht, Joseph J. "Assembly Line Balancing: Zero-One Programming with Fibonacci Search." Operations Research 23, no. 1 (January-February 1975): 166-172.

[16] Sawyer, J. F. H. Line Balancing. Washington, D.C.: Machinery and Allied Products Institute, 1970.

[17] Talbot, F. Brian. "An Integer Programming Algorithm for the Single Model Assembly Line Balancing Problem." Working Paper No. 171, Division of Research, Graduate School of Business Administration, The University of Michigan, 1978.

[18] ________. "An Integer Programming Algorithm for the Resource-Constrained Project Scheduling Problem." Ph.D. Dissertation, Pennsylvania State University, 1976.

[19] ________, and Patterson, James H. "An Efficient Integer Programming Algorithm with Network Cuts for Solving Resource-Constrained Scheduling Problems." Management Science 24, no. 11 (July 1978): 1163-1174.

[20] Thangavelu, S. R., and Shetty, C. M. "Assembly Line Balancing by Zero-One Integer Programming." AIIE Transactions 3, no. 1 (March 1971): 61-68.

[21] Tonge, F. M. A Heuristic Program for Assembly Line Balancing. New York: Prentice-Hall, 1961.

[22] White, W. W. "Comments on a Paper by Bowman." Operations Research 9, no. 2 (March-April 1961): 274-276.