Division of Research
Graduate School of Business Administration
The University of Michigan

SINGLE MACHINE SCHEDULING TO MINIMIZE WEIGHTED EARLINESS SUBJECT TO NO TARDY JOB

Working Paper No. 423

Suresh Chand, Purdue University
and
Hans Schneeberger, The University of Michigan

FOR DISCUSSION PURPOSES ONLY
None of this material is to be quoted or reproduced without the expressed permission of the Division of Research.

April 1985

ABSTRACT This paper considers the Single-Machine Scheduling Problem where the objective is to minimize the total weighted earliness subject to no tardy jobs. Such problems arise in "just-in-time" kind of production environments where customers accept delivery of their jobs exactly at the specified due dates -- not before and not after. (On the shop floor, a work station receiving partially completed parts from other work stations is a customer.) If a job is completed earlier than its due date then the shop holds the job and incurs holding cost for earliness. Jobs are not permitted to be tardy. Both exact and approximate methods are developed to solve this problem.

1. Introduction

This paper considers a Single-Machine Scheduling problem that a production shop is likely to face in a "just-in-time" kind of production environment. We assume that the due dates for the jobs are known and that the customers accept delivery of their jobs exactly at the specified due dates. The shop incurs a holding cost for early jobs; jobs are not allowed to be tardy. An ideal solution for such a scheduling problem would be to complete the jobs exactly at their due dates. However, if the shop capacity is tight and the due dates for several jobs are clustered together, then it may not be possible to schedule the jobs to finish exactly at their due dates. As a result, the shop may have to schedule some jobs early and incur a holding cost for earliness. Thus, the scheduling problem under consideration requires minimizing the total holding cost for early jobs (or the total weighted earliness) subject to no tardy jobs.

Let P_i, U_i, d_i, and C_i denote the processing time, weight, due date, and completion time, respectively, for job i. Let N be the number of jobs. The problem of finding the optimal values of C_1, C_2, ..., C_N can be formulated as follows:

    Minimize   sum_{i=1}^{N} (d_i - C_i) U_i                      (1)

    subject to C_i - d_i <= 0   for i = 1, 2, ..., N.             (2)
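As a quick illustration of objective (1), the following sketch (with illustrative data of our choosing, not from the paper) computes the total weighted earliness of a feasible schedule:

```python
# Total weighted earliness of a feasible schedule (illustrative data).
# Each job i has weight U[i], due date d[i], and completion time C[i];
# feasibility, as in constraint (2), requires C[i] <= d[i].

def weighted_earliness(U, d, C):
    """Objective (1): sum of (d_i - C_i) * U_i over all jobs."""
    assert all(c <= dd for c, dd in zip(C, d)), "a job is tardy"
    return sum((dd - c) * u for u, dd, c in zip(U, d, C))

# Hypothetical 3-job instance: job 1 finishes one period early,
# jobs 2 and 3 finish exactly at their due dates.
U = [2, 1, 3]
d = [10, 12, 15]
C = [9, 12, 15]

print(weighted_earliness(U, d, C))  # -> 2
```

Only the early job contributes to the objective, which is why an uncongested shop with well-spread due dates can often achieve cost zero.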

We will call this problem the Weighted Earliness Problem (WE-Problem). The problem with the added constraint that no machine idle time is allowed, that is,

    C_i <= sum_{j=1}^{N} P_j   for i = 1, 2, ..., N,

will be referred to as the Constrained Weighted Earliness Problem (CWE-Problem). In both these problems and in the rest of the paper, we assume that jobs are not allowed to be preempted.

Note that the scheduling criteria introduced above are not regular measures of performance (see page 13 of Baker [1] for a definition of regular measures). Most scheduling research to date has considered only problems with regular measures. For example, Baker [1] claims that nearly all important scheduling criteria use regular measures of performance. Thus, this paper identifies and analyzes an important class of scheduling problems which does not use regular measures.

In the next section, we analyze the CWE-Problem and give several results. Section 3 analyzes the WE-Problem and gives a heuristic algorithm to solve the problem. Section 4 gives a dynamic programming algorithm for solving the WE-Problem. Section 5 presents computational results. The paper closes in Section 6 with a brief summary and conclusions. Several theorems and proofs are given in the appendix.

2. Analysis of the CWE-Problem

In this section, we prove several important results for the CWE-Problem. We first establish that the CWE-Problem is equivalent to a well-researched single-machine scheduling problem where the objective is to minimize the total weighted completion time subject to no tardy jobs. (We will call this problem

the Constrained Weighted Completion Time Problem or the CWCT-Problem.) Most of the results for the CWE-Problem are developed by using the known results for the CWCT-Problem. The CWCT-Problem is stated below:

    Minimize   sum_{i=1}^{N} A_i C_i

    subject to C_i - d_i <= 0   for i = 1, 2, ..., N,

where A_i is the known weight for job i. The reader is referred to Burns [2], Emmons [4], Heck and Roberts [5], Lenstra et al. [6], Miyazaki [7], Schneeberger [8], Shanthikumar and Buzacott [9], and Smith [10] for several results and algorithms for the CWCT-Problem.

Equivalent CWCT-Problems

A given CWE-Problem will be considered equivalent to a CWCT-Problem if these differ from each other only by a constant term in the objective function. The following result, proved in Chand and Schneeberger [3], suggests a transformation process to find equivalent CWCT-Problems for a given CWE-Problem. (We repeat the proof here for the sake of completeness.)

THEOREM 1: The CWE-Problem with weights U_i, i = 1, 2, ..., N, is equivalent to the CWCT-Problem with weights A_i = q P_i - U_i, i = 1, 2, ..., N, for any value of q.

Proof: Let P_[i] denote the processing time of the job in position i. Then C_[i] can be expressed as:

    C_[i] = sum_{j=1}^{i} P_[j].

The objective function for the CWCT-Problem can be written as:

    V = sum_{i=1}^{N} (q P_i - U_i) C_i

      = sum_{i=1}^{N} U_i (-C_i) + q sum_{i=1}^{N} P_i C_i

      = sum_{i=1}^{N} U_i (d_i - C_i) - sum_{i=1}^{N} U_i d_i + q sum_{i=1}^{N} P_i C_i

      = (the objective function for the CWE-Problem) - sum_{i=1}^{N} U_i d_i + q sum_{i=1}^{N} P_i C_i.

Note that

    q sum_{i=1}^{N} P_i C_i = q sum_{i=1}^{N} P_[i] ( sum_{j=1}^{i} P_[j] ) = constant

and

    sum_{i=1}^{N} U_i d_i = constant.

Thus, the objective functions for the CWE- and the CWCT-Problems differ only by a constant. This completes the proof.

Later on, we extend the analysis for the CWCT-Problem in Chand and Schneeberger [3] to show that the CWE-Problem is NP-hard. As in [3], we also

identify some special cases of the CWE-Problem that can be solved optimally in polynomial time. Before that, we make some general observations about solving the CWE-Problem.

Procedures for Solving the CWE-Problem

Several exact and approximate procedures have been reported for solving the CWCT-Problem [2,4,6,7,8,9,10]. Most of these procedures require A_i > 0 for all i = 1, 2, ..., N. These procedures can be extended to solve the CWE-Problem by transforming it into an equivalent CWCT-Problem with q large enough such that A_i = q P_i - U_i > 0 for i = 1, 2, ..., N. Many procedures for solving the CWCT-Problem require testing the condition P_k/A_k > P_j/A_j. The following lemma gives an efficient procedure to test this condition for the CWE-Problem.

Lemma 1: With A_i = q P_i - U_i and A_i > 0 for i = 1, 2, ..., N, the condition P_k/A_k > P_j/A_j is equivalent to P_k/U_k < P_j/U_j.

Proof for this lemma is straightforward, so it has been omitted. Using this result, we extend the Smith-heuristic [10] for the CWCT-Problem to solve the CWE-Problem. In the rest of the paper, we define R_i = P_i/U_i.

Modified Smith-heuristic for the CWE-Problem

(a) Arrange the jobs in the order of increasing due dates. It is assumed that all jobs are on time in this schedule; if not, then the problem does not have a feasible solution.

(b) Find all the jobs with their due dates greater than or equal to the total processing time for all jobs yet to be scheduled; these jobs are feasible candidates for the last position in the sequence.

(c) Find the job with the smallest R_i-value among all the feasible jobs and schedule it last.

(d) Reduce the set of jobs yet to be scheduled by one, by removing the job just scheduled in (c). Go to (b) until all jobs have been scheduled by this method.

As in Burns [2], it can be shown that this algorithm gives a locally pairwise optimal schedule. The computational time is bounded by O(N log N).

Complexity of the CWE-Problem

Schneeberger [8] showed that the CWCT-Problem with A_i = q P_i - U_i, q > 0, U_i > 0, for i = 1, 2, ..., N, is NP-hard even for the special case when U_1 = U_2 = ... = U_N. Since the CWCT-Problem with A_i = q P_i - U_i, i = 1, 2, ..., N, is equivalent to the CWE-Problem with weights U_1, U_2, ..., U_N, the CWE-Problem is also NP-hard. And, it remains NP-hard even when U_1 = U_2 = ... = U_N.

As in Chand and Schneeberger [3], we identify several cases of the CWE-Problem that can be solved optimally by using the modified Smith-heuristic. We will refer to these as the easy problems. In the description of these problems, jobs are numbered such that P_1 <= P_2 <= ... <= P_N.

Easy Problem 1: The weights for different jobs are an increasing convex function of their processing times and

    (P_2 - P_1)/(U_2 - U_1) <= P_N/U_N.

Easy Problem 2: The weights for different jobs are an increasing concave function of their processing times and R_N > R_i, i = 1, 2, ..., N-1.

Easy Problem 3: d_i >= sum_{j=1}^{N} P_j for i = 1, 2, ..., N.

Easy Problem 4: For every pair of jobs i and j, if R_i > R_j then d_i <= d_j.

See Figures 1 and 2 for a pictorial description of easy problems 1 and 2; proofs for these follow from Theorem 2 in the Appendix. Note that easy problems 1 and 2 imply that the CWE-Problem with P_1 = P_2 = ... = P_N is also an easy problem. For easy problems 3 and 4, the sequence found by the modified Smith-heuristic satisfies the conditions of Lemma 2 (in the Appendix) and therefore it is optimal.

3. Analysis and a Heuristic Algorithm for the WE-Problem

We first argue that the WE-Problem is NP-hard. Note that any CWE-Problem can be transformed into a WE-Problem simply by replacing the due dates exceeding sum_{j=1}^{N} P_j by sum_{j=1}^{N} P_j. Since the CWE-Problem is NP-hard, the WE-Problem is also NP-hard.

We now propose an extension of the Smith-heuristic to solve the WE-Problem. Later, we will identify cases of the WE-Problem for which this

[Figure 1: Easy CWE-Problem with Convex Cost - U_i plotted as an increasing convex function of P_i]

[Figure 2: Easy CWE-Problem with Concave Cost - U_i plotted as an increasing concave function of P_i]

heuristic is optimal. In the description of this algorithm and in the rest of the paper, we let [1,N] = {1, 2, ..., N}. Recall that R_i = P_i/U_i.

Modified Smith-heuristic (MSH) for the WE-Problem: The algorithm starts scheduling the jobs from the last position. Let S denote the set of jobs scheduled and YS = [1,N] - S the set of jobs yet to be scheduled.

Step 1: (Initialization) S = {}, YS = [1,N], T = max_{i in [1,N]} {d_i}, and Z = N.

Step 2: (Finding the job j to be scheduled in position Z with C_j = T) Find job j such that R_j = min {R_i | d_i >= T, i in YS}.

Step 3: C_j = T, S = S + {j}, YS = YS - {j}, Z = Z - 1, and T = min {C_j - P_j, max_{i in YS} {d_i}}.

Step 4: Stop if Z = 0; otherwise go to Step 2.

Theorem 3 in the Appendix gives sufficient conditions for the optimality of a sequence found by this heuristic for the WE-Problem. It is straightforward to see that the following three problems satisfy the condition in Theorem 3 and therefore these are easy WE-Problems.

Case 1: R_1 = R_2 = ... = R_N.

Case 2: For any pair of jobs i and j, if R_i > R_j then d_i <= d_j.

Case 3: d_1 = d_2 = ... = d_N.

In the next section, we present a dynamic programming algorithm for the WE-Problem.

4. Dynamic Programming Algorithm for the WE-Problem

We develop a Dynamic Goal Programming (DGP) algorithm for the WE-Problem. The algorithm is similar to the Bicriteria-DP algorithm except that we are not concerned with obtaining the efficient frontier. Rather, we seek the solution that minimizes the first criterion subject to a certain minimum level of achievement on the second criterion. We will refer to these as the objective function and the goal constraint, respectively. The objective is to minimize the total weighted earliness, that is:

    Minimize   sum_{i=1}^{N} (d_i - C_i) U_i,

and the goal constraint enforces the time-zero feasibility, that is,

    min_{i in [1,N]} (C_i - P_i) >= 0.

In addition, we have the regular constraint (2) requiring no tardy jobs.

The DGP algorithm starts by scheduling the jobs from the last position. Let S denote the set of scheduled jobs. Because of the possibility of machine idle time in the optimal solution, several sequences of the jobs in S can be candidates for optimality. The starting time of the first job in S will

depend on the machine idle time. For this reason, our DP formulation requires storing all points on the efficient frontier for state S instead of just one function value for it. The two dimensions that define the efficient frontier are v and s, where v corresponds to the objective function value and s corresponds to the achievement level of the goal constraint, or the start time of the first job. We set v = infinity if s < 0. The efficient frontier for a state S may look as follows:

[Figure: Efficient frontier for a state S, plotted in the (s, v) plane]

Note that the points which are not on the efficient frontier for S cannot be optimal for the entire problem. Thus, in the DGP formulation, we only have to store the points on the efficient frontier.

The DGP algorithm starts with S = {} and the corresponding v = 0 and s = max_{i in [1,N]} {d_i}. We add jobs to S by using the reaching operation described below and find efficient frontiers for larger and larger S sets. The algorithm stops when we have the efficient frontier for S = {1, 2, ..., N}.

Reaching Operation in DGP: Let S^j denote the sequence of jobs in S for the j-th data point on the efficient frontier for S. Let v(S^j) and s(S^j) denote the objective function value for the jobs in S^j and the start time of the first job, respectively. Let K(S^j) denote the jobs which are candidates for scheduling as immediate predecessors to the jobs in S^j

in the optimal solution to the entire problem. A procedure to find K(S^j) is proposed below. Let

    YS = {1, 2, ..., N} - S,
    t = min {s(S^j), max_{i in YS} {d_i}},
    L = {l | d_l >= t, l in YS}, and
    Delta = min_{i in L} {P_i};

then

    K(S^j) = {k | d_k >= t - Delta, k in YS}.                     (3)

For the case where t = s(S^j), Delta is used to allow for machine idle time immediately before the first job in S^j. In this case, it is easy to see that, in the optimal sequence for the entire problem, this machine idle time cannot exceed Delta. For the case where t < s(S^j), Delta is used to allow for the machine idle time in excess of s(S^j) - t.

Consider reaching from the j-th data point of the efficient frontier of state (S-k) to state S. This reaching operation is performed only if k is in K((S-k)^j). Let sequence S^j = (k, (S-k)^j); then s(S^j) and v(S^j) can be computed as follows:

    s(S^j) = min {s((S-k)^j) - P_k, d_k - P_k}                    (4)

    v(S^j) = v((S-k)^j) + (d_k - P_k - s(S^j)) U_k                (5)
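As a small sketch (function and variable names are our own), equations (4) and (5) translate directly into code; the values below reproduce stage-2 entries of Table 1 for the 5-job example of Section 4:

```python
# Reaching operation (4)-(5): place job k in front of a partial sequence
# whose first job currently starts at time s_prev and whose accumulated
# weighted earliness is v_prev.  (P_k, U_k, d_k) describe job k.

def reach(s_prev, v_prev, P_k, U_k, d_k):
    """Return (s_new, v_new) after scheduling job k first."""
    s_new = min(s_prev - P_k, d_k - P_k)            # equation (4)
    if s_new < 0:
        return s_new, float("inf")                  # infeasible start time
    v_new = v_prev + (d_k - P_k - s_new) * U_k      # equation (5)
    return s_new, v_new

# Jobs of the 5-job example of Section 4: job -> (P_i, U_i, d_i).
jobs = {1: (2, 6, 11), 2: (4, 2, 7), 3: (2, 3, 12),
        4: (5, 2, 17), 5: (2, 4, 18)}

# State {4} has (v, s) = (0, 12); put job 1 in front -> sequence 1-4.
print(reach(12, 0, *jobs[1]))   # -> (9, 0)
# State {4} again; put job 5 in front -> sequence 5-4.
print(reach(12, 0, *jobs[5]))   # -> (10, 24)
```

Both results match the corresponding (v, s) entries for sequences 1-4 and 5-4 in Table 1.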

After performing all the possible reaching operations to state S, we have several sequences of the jobs in S with the corresponding v(.)- and s(.)-values. The efficient frontier for S can be found by dropping all the dominated sequences and retaining only the nondominated ones. (Sequence S^b dominates sequence S^a if v(S^a) >= v(S^b) and s(S^a) <= s(S^b). Thus, S^a and S^b both belong to the efficient frontier only if either v(S^a) > v(S^b) and s(S^a) > s(S^b), or v(S^a) < v(S^b) and s(S^a) < s(S^b).)

We now give the steps of the DGP algorithm. In this, we let n denote the stage, defined as the number of scheduled jobs.

Step 1: (Initialization) n = 0, S = {}, S^1 = the empty sequence, v(S^1) = 0, s(S^1) = max_{i in [1,N]} {d_i}.

Step 2: (Reaching) Find K(S^j) for every data point S^j on the efficient frontier for state S and for every state S at stage n (using (3)).

Step 3: (Efficient Frontiers for States at Stage n+1) The states for consideration at stage (n+1) are found by combining the elements of K(S^j) with S for every data point S^j on the efficient frontier for S and every state S at stage n. Complete the reaching operation using (4) and (5), retain the nondominated points, and find efficient frontiers for all states at stage (n+1). Set n = n+1.

Step 4: Go to Step 2 if n < N; stop otherwise. Find the minimum objective function value and the corresponding sequence from the efficient frontier for the set of all jobs.
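The DGP algorithm searches this space efficiently. As an independent cross-check (our own sketch, not part of the paper), a brute-force enumeration works for tiny instances: for each sequence, the best completion times are obtained by scheduling every job as late as possible with a backward pass, since later completions can only reduce earliness.

```python
from itertools import permutations

# Brute-force check for the WE-Problem: enumerate all sequences and,
# for each, set latest feasible completion times by a backward pass.
# Suitable only for tiny instances (N! sequences).

def best_schedule(jobs):
    """jobs: list of (P_i, U_i, d_i). Returns (min cost, best order)."""
    best_cost, best_order = float("inf"), None
    n = len(jobs)
    for order in permutations(range(n)):
        C = [0] * n
        C[n - 1] = jobs[order[n - 1]][2]            # last job ends at its due date
        for pos in range(n - 2, -1, -1):            # latest completions, backward
            nxt = order[pos + 1]
            C[pos] = min(jobs[order[pos]][2], C[pos + 1] - jobs[nxt][0])
        if C[0] - jobs[order[0]][0] < 0:            # first job would start before 0
            continue
        cost = sum((jobs[j][2] - C[pos]) * jobs[j][1]
                   for pos, j in enumerate(order))
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_cost, best_order

# 5-job instance used in the example of this section (jobs numbered 1..5).
jobs = [(2, 6, 11), (4, 2, 7), (2, 3, 12), (5, 2, 17), (2, 4, 18)]
cost, order = best_schedule(jobs)
print(cost, [j + 1 for j in order])   # -> 11 [2, 3, 1, 4, 5]
```

The brute-force optimum agrees with the DGP result reported below for the same instance.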

Example: Consider the following 5-job WE-Problem:

    i    P_i    U_i    d_i
    1     2      6     11
    2     4      2      7
    3     2      3     12
    4     5      2     17
    5     2      4     18

Table 1 summarizes the steps of the solution procedure. Note that the points on the efficient frontier for set S in column (7) are found after completing all the reaching operations to S. The optimal sequence is 2-3-1-4-5 with the objective function value equal to 11. It is interesting to see that each state has only one point on the efficient frontier.

5. Computational Results

The objective of the computational study in this section is to analyze the relative performance of the approximate and exact methods for several different scenarios. The best scenarios are those for which the problem data satisfy any of the conditions of cases 1, 2, and 3; that is, where the problem data is such that the modified Smith-heuristic will give an optimal sequence. The experiment is designed such that the problems generated satisfy cases 1, 2, and 3 on one extreme and then violate these to different degrees. To
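The modified Smith-heuristic (MSH) of Section 3 can be transcribed in a few lines (the data layout and variable names are our own). On this 5-job instance it happens to find the same sequence and objective value as the DGP algorithm:

```python
# Modified Smith-heuristic (MSH) for the WE-Problem: fill positions from
# the last one, always giving the open slot (completion time T) to the
# eligible job (due date >= T) with the smallest ratio R_i = P_i / U_i.

def msh(jobs):
    """jobs: dict {job: (P, U, d)}. Returns (cost, sequence) or None."""
    unscheduled = set(jobs)
    T = max(d for _, _, d in jobs.values())   # completion time of open slot
    sequence, cost = [], 0
    while unscheduled:
        eligible = [i for i in unscheduled if jobs[i][2] >= T]
        if not eligible:
            return None                       # no feasible candidate
        j = min(eligible, key=lambda i: jobs[i][0] / jobs[i][1])
        cost += (jobs[j][2] - T) * jobs[j][1]  # weighted earliness of job j
        sequence.append(j)
        unscheduled.remove(j)
        start = T - jobs[j][0]                 # start time of job j
        if unscheduled:
            T = min(start, max(jobs[i][2] for i in unscheduled))
    if start < 0:
        return None                            # first job would start before 0
    return cost, sequence[::-1]                # sequence was built back to front

jobs = {1: (2, 6, 11), 2: (4, 2, 7), 3: (2, 3, 12),
        4: (5, 2, 17), 5: (2, 4, 18)}
print(msh(jobs))   # -> (11, [2, 3, 1, 4, 5])
```

Agreement on one instance is of course no guarantee in general; the computational results below quantify how far MSH can fall from the optimum.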

Table 1: DGP Solution

    Stage n  State S      Reached from  Sequence S^j  v(S^j)  s(S^j)  On frontier?  K(S^j)
    0        {}           -             (empty)           0      18   Yes           4,5
    1        {4}          -             4                 0      12   Yes           1,3,5
             {5}          -             5                 0      16   Yes           3,4
    2        {1,4}        4             1-4               0       9   Yes           3,5
             {3,4}        4             3-4               0      10   Yes           1,5
             {4,5}        4             5-4              24      10   No            -
                          5             4-5               2      11   Yes           1,3
             {3,5}        5             3-5               0      10   Yes           1,2,4
    3        {1,3,4}      1-4           3-1-4             9       7   No            -
                          3-4           1-3-4             6       8   Yes           2,5
             {1,4,5}      1-4           5-1-4            36       7   No            -
                          4-5           1-4-5             2       9   Yes           3
             {3,4,5}      3-4           5-3-4            32       8   No            -
                          4-5           3-4-5             5       9   Yes           1
                          3-5           4-3-5            14       5   No            -
             {1,3,5}      3-5           1-3-5             6       8   Yes           2,4
             {2,3,5}      3-5           2-3-5             0       5   Yes           1,4
    4        {1,2,3,4}    1-3-4         2-1-3-4           9       5   Yes           5
             {1,3,4,5}    1-3-4         5-1-3-4          46       6   No            -
                          1-4-5         3-1-4-5          11       7   Yes           2
                          3-4-5         1-3-4-5          17       7   No            -
                          1-3-5         4-1-3-5          24       3   No            -
             {1,2,3,5}    1-3-5         2-1-3-5           6       3   Yes           4
                          2-3-5         1-2-3-5          36       3   No            -
             {2,3,4,5}    2-3-5         4-2-3-5          24       0   Yes           1
    5        {1,2,3,4,5}  2-1-3-4       5-2-1-3-4        51       3   No            -
                          3-1-4-5       2-3-1-4-5        11       3   Yes           -
                          2-1-3-5       4-2-1-3-5       inf      <0   No            -
                          4-2-3-5       1-4-2-3-5       inf      <0   No            -

keep the computational work within reasonable limits, we kept the processing time constant for all jobs (P_i = 5). The weights and due dates were generated by using three control parameters (alpha, beta, and D) as described below. These parameters are allowed to assume the following values:

    alpha = 0.0, 0.25, 0.50, 0.75, 1.0
    beta  = 0.0, 0.2, 0.4, 0.6, 0.8, 1.0
    D     = 0, 25, 50, 75, 85, 95, 100

alpha: Control parameter for the variance in the P_i/U_i ratios. The variance increases with increasing alpha. The lower limit (alpha = 0) implies that the ratio P_i/U_i is constant for all jobs. This implies the best scenario (case 1), and hence the modified Smith-heuristic will give an optimal sequence for alpha = 0.

beta: Control parameter for the correlation of P_i/U_i and d_i. This parameter is used to control the correlation between the P_i/U_i ratios and the due dates. If beta is at its lower limit (beta = 0), then R_i < R_j implies d_i > d_j, which implies the best scenario (case 2), and hence the modified Smith-heuristic will give an optimal sequence.

D: Control parameter for the due dates. A small value of D implies that the due dates are uniformly distributed over certain intervals of time, and a large value of D means the due dates are clustered towards the end of the scheduling horizon. The upper limit

for D will give identical due dates for all jobs (case 3) and hence an optimal sequence for the modified Smith-heuristic.

The due dates and weights were generated as follows. Note that N denotes the number of jobs.

i) d_i = 5N + U(D, 100), where U(a,b) denotes the uniform distribution between a and b. The term 5N has been added to guarantee feasibility. It is easy to see that all due dates are identical (equal to 5N + 100) for D = 100. For D = 0, the due dates are uniformly distributed between 5N and 5N + 100.

ii) U_i = P_i / R_i, where

    R_i = 1 - alpha * x * d_i / (100 + 5N)   if x < 1.0,
    R_i = 1 + alpha * x * d_i / (100 + 5N)   if x >= 1.0,

    x = U(beta, 1 + beta).

By using formula ii), we can generate weights (U_i's) while controlling the variance of the P_i/U_i ratios and also the correlation between P_i/U_i and d_i. For alpha = 0, P_i/U_i = 1 for all jobs; the variance of the P_i/U_i ratios increases with increasing alpha. For beta = 0, x < 1, and the correlation between P_i/U_i and d_i will be negative. Note that because d_i < 5N + 100, P_i/U_i > 0 for all jobs.

We used N = 10 jobs in this experiment. For each combination of parameter values, we generated 10 problems and solved each problem by using the DGP and MSH procedures. In total we used 5 x 6 x 7 x 10 = 2100 test problems. The results for alpha = 0, beta = 0, and D = 100 are not reported because the heuristic is optimal for all these cases. To our surprise, we found that the heuristic is optimal for beta = 1 also; beta = 1 implies that R_i is a linearly increasing function of d_i. (We do not yet have a proof for this empirical observation.) The results for beta = 1 also are not being reported.
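The generator can be sketched as follows. Since the printed weight formula is only partially legible in our copy, the piecewise rule in the code is an assumed reading, chosen to match the stated properties: R_i = 1 when alpha = 0, variance of R_i growing with alpha, negative correlation with d_i when beta = 0, and U_i > 0 throughout.

```python
import random

# Test-problem generator sketched from Section 5 (the weight rule is an
# assumed reading of the source, not a verbatim transcription).

def generate_instance(n=10, alpha=0.5, beta=0.5, D=50, rng=random):
    P = [5] * n                                    # constant processing times
    d = [5 * n + rng.uniform(D, 100) for _ in range(n)]
    U = []
    for i in range(n):
        x = rng.uniform(beta, 1 + beta)
        sign = -1 if x < 1.0 else +1               # assumed piecewise rule
        R = 1 + sign * alpha * x * d[i] / (100 + 5 * n)
        U.append(P[i] / R)                         # R > 0 since d_i < 5n + 100
    return P, U, d

P, U, d = generate_instance(alpha=0.0)
# With alpha = 0 every ratio P_i/U_i equals 1: the "best scenario" (case 1).
print(all(abs(p / u - 1) < 1e-9 for p, u in zip(P, U)))   # -> True
```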

Table 2: Percentage Increase in the Average Objective Function Value for MSH

    D     alpha   beta=.2   beta=.4   beta=.6   beta=.8
    0      .25      0.0       0.7       0.0       0.0
           .50      0.0       2.1       0.0       0.0
           .75      0.0       3.5       0.0       0.0
          1.00      0.0       5.1       0.6       0.0
    25     .25      0.0       0.0       0.0       0.0
           .50      0.0       0.0       0.0       0.0
           .75      0.1       0.1       1.0       0.7
          1.00      3.7       3.8       7.6       3.9
    50     .25      0.0       0.2       0.0       0.0
           .50      0.0       4.2       1.3       0.0
           .75      2.2      11.8       4.4       4.6
          1.00     19.2      24.9      11.3      13.8
    75     .25      0.0       0.0       0.0       0.0
           .50      1.5       0.7       0.5       0.0
           .75      9.2       6.5       2.4       0.0
          1.00     43.8      51.2       6.7       1.7
    85     .25      0.0       0.0       0.0       0.0
           .50      0.0       2.5       0.0       0.0
           .75      1.3       6.3       3.6       4.3
          1.00     21.7      54.5      26.9      18.2
    95     .25      0.0       1.1       0.4       1.0
           .50      0.0       5.5       4.4       5.2
           .75      0.5      15.3      11.6      11.2
          1.00     60.6     134.2      42.2      28.8

Table 3: Summary of Percentage Increase in the Average Objective Function Value for Different Parameters

Table 3A: % Increase for Different alpha-values

    alpha       0.00   0.25   0.50   0.75   1.00
    % Increase  0.00   0.14   0.93   3.23  20.90

Table 3B: % Increase for Different beta-values

    beta        0.00   0.20   0.40   0.60   0.80   1.00
    % Increase  0.00   6.82  13.92   5.20   3.89   0.00

Table 3C: % Increase for Different D-values

    D              0     25     50     75     85     95    100
    % Increase  0.75   1.30   6.11   7.76   8.70  20.12   0.00

Table 4: CPU-Time in Seconds for DGP

    D     alpha   beta=.2   beta=.4   beta=.6   beta=.8
    0      .25     0.059     0.062     0.065     0.063
           .50     0.059     0.062     0.063     0.063
           .75     0.059     0.062     0.063     0.064
          1.00     0.058     0.062     0.063     0.063
    25     .25     0.111     0.123     0.122     0.129
           .50     0.126     0.138     0.142     0.148
           .75     0.114     0.122     0.131     0.132
          1.00     0.132     0.144     0.151     0.152
    50     .25     0.735     0.786     0.771     0.780
           .50     0.678     0.739     0.726     0.741
           .75     0.790     0.832     0.860     0.890
          1.00     0.782     1.020     1.170     1.179
    75     .25     4.238     4.416     4.211     4.271
           .50     5.310     5.437     5.541     5.263
           .75     5.815     6.049     6.323     5.548
          1.00     6.202     7.041     7.433     6.626
    85     .25     6.547     6.772     6.405     5.873
           .50     6.946     7.235     6.938     6.545
           .75     7.747     8.244     8.153     7.670
          1.00     8.157    11.317    10.127     9.190
    95     .25     6.243     7.366     8.029     8.630
           .50     6.810     8.857     9.366     9.517
           .75     8.855     9.247     9.470     9.595
          1.00    11.900    10.540    10.052     9.937

The CPU-time for MSH was between 0.005 and 0.006 seconds for all problems solved.

Table 2 gives the percentage increase in the average objective function value (average computed over 10 problems) for the MSH procedure over the DGP algorithm. Table 3 summarizes the results in Table 2 for different parameters. Table 4 reports the CPU run times for these two procedures.

Summary and Analysis of the Computational Results: Table 3A shows that the performance of the Smith-heuristic deteriorates with increasing alpha, or, in other words, with the increasing variance of R_i. This heuristic does well for problems with a low variance of R_i. This result is as expected.

Table 3B shows that the heuristic performs well for extreme values of beta, i.e., for the cases when there is a strong linear relationship between the R_i and d_i values. For beta values in the mid-range (around .5), the relationship between R_i and d_i is weak and the heuristic performs poorly. While we have proved in the paper that the heuristic is optimal for beta = 0, we are not yet able to prove the same for beta = 1.

Table 3C shows that the heuristic gives very good results if either the due dates for all jobs are identical or if these are uniformly distributed over a large interval. The heuristic performs poorly if the due dates are clustered together within a small interval. This result points to a weakness of the heuristic: it does not look ahead while scheduling a job; that is, it does not consider the effect of scheduling a job on the earliness of other jobs. It is easy to see that the machine idle time for the heuristic between the beginning of the first job and the completion of the last job is lower than (or equal to) the machine idle time for the optimal solution. The results in Table 3C imply that the heuristic, while attempting to reduce this idle time, has a tendency to schedule a less costly job (a job with small U_i) at the expense of increased earliness for costly jobs.

With respect to CPU times, the heuristic is very fast. The DGP algorithm is inherently slow. Based on these results, we do not recommend using the DGP algorithm for solving large problems (N > 15) in practice.

Overall, we conclude that the heuristic does well when the due dates are relatively uniformly distributed over a wide range, R_i and d_i have a strong linear relationship, and R_i has a low variance. There is a need to develop better heuristics for the cases when these conditions are not met.

6. Conclusions

This paper introduced a new single-machine scheduling problem where the objective is to minimize the weighted earliness subject to no tardy jobs. This problem is relevant for manufacturing companies using "just-in-time" or "pull" type approaches. It should be remarked that most problems addressed in the scheduling literature so far relate to "push" type approaches. Both optimal and heuristic procedures are suggested to solve this problem. The optimal procedure is developed to provide a benchmark to study the performance of the heuristics; we do not recommend using this procedure to solve large problems in practice because of its excessive computational time. Situations where the heuristic performs well are identified. Indeed, there is a need to develop better heuristics for several realistic situations. We expect the heuristic and the computational results in this paper to provide a good starting point for developing better heuristics.

This paper discusses single-level problems, although most real problems in manufacturing are multi-level in nature. One possible way to solve a multi-level problem would be to decompose it into several single-level problems by

using the Lagrangean relaxation approach. The results of this paper could then be used to solve the single-level problems. However, much work is required to develop this approach.

References

1. Baker, K. R., Introduction to Sequencing and Scheduling, John Wiley and Sons, Inc., 1974.

2. Burns, R. N., "Scheduling to Minimize the Weighted Sum of Completion Times with Secondary Criteria," Naval Research Logistics Quarterly, Vol. 23, No. 1 (1976), pp. 725-729.

3. Chand, S. and H. Schneeberger, "A Note on Smith's Schedule to Minimize the Weighted Sum of Completion Times with Maximum Allowable Tardiness," Working Paper, Purdue University.

4. Emmons, H., "Note on a Scheduling Problem with Dual Criteria," Naval Research Logistics Quarterly, Vol. 22, No. 4 (1975), pp. 615-616.

5. Heck, H. and S. Roberts, "A Note on the Extension of a Result on Scheduling with Secondary Criteria," Naval Research Logistics Quarterly, Vol. 19, No. 3 (1972), pp. 403-405.

6. Lenstra, J. K., A. H. G. Rinnooy Kan, and P. Brucker, "Complexity of Machine Scheduling Problems," Annals of Discrete Mathematics, Vol. 1 (1979), pp. 343-362.

7. Miyazaki, Shigeji, "One Machine Scheduling Problem with Dual Criteria," Journal of the Operations Research Society of Japan, Vol. 24, No. 1 (1981), pp. 37-50.

8. Schneeberger, H., "Job Shop Scheduling in Pull Type Production Environments," Ph.D. Thesis, Krannert Graduate School of Management, Purdue University, West Lafayette, Indiana, May 1984.

9. Shanthikumar, J. G. and J. A. Buzacott, "On the Use of Decomposition Approaches in a Single Machine Scheduling Problem," Journal of the Operations Research Society of Japan, Vol. 25, No. 1 (1982), pp. 29-47.

10. Smith, W. E., "Various Optimizers for Single Stage Production," Naval Research Logistics Quarterly, Vol. 3, No. 1 (1956), pp. 56-66.

Appendix

Recall that [i] denotes the index of the job in position i and that R_i = P_i/U_i.

THEOREM 2: Let the jobs be labelled such that P_1 <= P_2 <= ... <= P_N. Any CWE-Problem with weights U_i > 0 for all i = 1, 2, ..., N, can be solved optimally using the modified Smith-heuristic if the following two conditions are met:

    a) P_N/U_N <= P_{N-1}/U_{N-1} <= ... <= P_1/U_1

    b) max_{i in [1,N-1]} (P_{i+1} - P_i)/(U_{i+1} - U_i) <= P_N/U_N

Proof: We need the following result from Schneeberger [8]: any CWCT-Problem with weights A_i > 0 for all i = 1, 2, ..., N, can be solved optimally using the Smith-heuristic if P_i > P_j implies A_i <= A_j for all pairs of jobs i and j. Consider the equivalent CWCT-Problem with A_i = q P_i - U_i, q sufficiently large such that A_i > 0 for all i = 1, 2, ..., N. We show that there exists a value of q such that A_1 >= A_2 >= ... >= A_N >= 0 under conditions (a) and (b).

Let 1/q = P_N/U_N. Then from (a) we have

    (1/q)(U_i/P_i) <= 1   for i = 1, 2, ..., N,

or

    q P_i - U_i >= 0   for i = 1, 2, ..., N,

or

    A_i >= 0   for i = 1, 2, ..., N.

From (b) we have

    1/q >= (P_{i+1} - P_i)/(U_{i+1} - U_i)   for i = 1, 2, ..., N-1,

or

    q P_i - U_i >= q P_{i+1} - U_{i+1}   for i = 1, 2, ..., N-1,

or

    A_i >= A_{i+1}   for i = 1, 2, ..., N-1,

or A_1 >= A_2 >= ... >= A_N. Thus, the Smith-heuristic is optimal for the equivalent CWCT-Problem. Because the sequence found by the modified Smith-heuristic for the CWE-Problem is the same as the one found by the Smith-heuristic for the equivalent CWCT-Problem, the modified Smith-heuristic is optimal for the CWE-Problem under conditions (a) and (b). This completes the proof.

Lemma 2: If the sequence found by the modified Smith-heuristic for the CWE-Problem has R_[1] >= R_[2] >= ... >= R_[N], then the sequence is optimal.

Proof for this is trivial, so we skip it.
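The q construction in the proof of Theorem 2 can be checked numerically. The instance below is hypothetical, chosen by us only because it satisfies conditions (a) and (b):

```python
# Numerical check of the q-construction in Theorem 2 on a hypothetical
# 4-job instance with P_1 <= ... <= P_N and increasing weights.
P = [1, 2, 3, 4]
U = [1, 11, 21, 31]

# (a) P_N/U_N <= ... <= P_1/U_1: the ratios are nonincreasing in i.
ratios = [p / u for p, u in zip(P, U)]
assert all(ratios[i] >= ratios[i + 1] for i in range(3))

# (b) max slope (P_{i+1}-P_i)/(U_{i+1}-U_i) <= P_N/U_N.
slopes = [(P[i + 1] - P[i]) / (U[i + 1] - U[i]) for i in range(3)]
assert max(slopes) <= P[-1] / U[-1]

# With 1/q = P_N/U_N, the CWCT weights A_i = q P_i - U_i come out
# nonnegative and nonincreasing, exactly as the proof claims.
q = U[-1] / P[-1]
A = [q * p - u for p, u in zip(P, U)]
print(A)   # -> [6.75, 4.5, 2.25, 0.0]
```

Here A_1 >= A_2 >= A_3 >= A_4 = 0, so Schneeberger's condition (larger P_i implies smaller A_i) holds and the Smith-heuristic solves the equivalent CWCT-Problem optimally.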

Lemma 3: The modified Smith-heuristic for the WE-Problem is optimal if

    d_1 = d_2 = ... = d_N = d >= sum_{j=1}^{N} P_j.

Proof: Let t_I = d - sum_{j=1}^{N} P_j; t_I denotes the total amount of the machine idle time. In order to minimize the total weighted earliness with identical due dates, it is easy to see that the entire idle time of t_I periods should be scheduled in the beginning in an optimal solution. The first job starts at the end of period t_I and all jobs are scheduled from t_I to d without any additional machine idle time. Scheduling the jobs from t_I to d is like solving a CWE-Problem, for which the modified Smith-heuristic is optimal. This completes the proof.

THEOREM 3: Consider the following sequence found by the modified Smith-heuristic for the WE-Problem. The sequence has several production blocks, as shown in the figure below:

[Figure 3: Example for Theorem 3 - a sequence of production blocks Block 1, Block 2, ..., Block M along the time axis]

The sequence is optimal if the R-values for jobs within each production block are in a decreasing order.

Proof: If the sequence has only one production block (with R_[1] >= R_[2] >= ... for the jobs in the block), then, from Lemma 3, it is optimal for the relaxed WE-Problem with the due date for every job increased to max_{i in [1,N]} {d_i}. Clearly, it is optimal for the WE-Problem also.

If the sequence has n production blocks, then consider a relaxed WE-Problem with n identical machines, all jobs in a production block to be processed on one machine, and one machine assigned to the jobs in one production block only. Using the Smith-heuristic for every machine, it is easy to see that each machine has one production block and the resulting schedule is optimal for the relaxed problem. The start and finish times for the jobs in the optimal schedule for the relaxed problem are the same as those in the schedule in Figure 3, implying that the sequence in Figure 3 is optimal for the WE-Problem.

THEOREM 4: Let d_max = max_{i in [1,N]} {d_i}. If C_1*, C_2*, ..., C_N* is optimal for the WE-Problem, then C'_i* = d_max - C_i*, i = 1, 2, ..., N, is optimal for the weighted flow time problem with nonsimultaneous job arrivals, where the arrival time for job i is r_i = d_max - d_i and the weight for job i is equal to U_i.

Proof: Using r_i = d_max - d_i and C'_i = d_max - C_i in the WE-Problem leads to

    min   sum_{i=1}^{N} (C'_i - r_i) U_i

    s.t.  C'_i - r_i >= 0,   i = 1, 2, ..., N,

which is the same as the weighted flow time problem with nonsimultaneous job arrivals. If C_i*, i in [1,N], is optimal for the WE-Problem, then C'_i* = d_max - C_i* is optimal for the corresponding weighted flow time problem.