THE UNIVERSITY OF MICHIGAN
INDUSTRY PROGRAM OF THE COLLEGE OF ENGINEERING

SEQUENCING PROBLEMS WITH DUE DATE ORIENTED OBJECTIVE FUNCTIONS

Jon M. Moore

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the University of Michigan, Department of Industrial Engineering, 1967.

December, 1967
IP-796

PREFACE

To Professor Richard C. Wilson, who has been a most stimulating chairman and an outstanding advisor;

To Professors R. M. Thrall, E. L. Lawler and D. H. Wilson for their advice and suggestions;

To the Industry Program of the College of Engineering at the University of Michigan for the final preparation;

To my wife for help and patience in preparing the initial draft;

To my mother and father for years of help and encouragement;

Sincere thanks.

J. Michael Moore

TABLE OF CONTENTS

                                                                      Page
PREFACE ........ ii
LIST OF APPENDICES ........ v
LIST OF SYMBOLS ........ vi

Chapter
1  INTRODUCTION ........ 1
   1.1 Introduction to Job Shop Scheduling Problems ........ 1
   1.2 Job Shop Scheduling Models to be Considered ........ 4
   1.3 Objective Function Forms Considered ........ 6
   1.4 The fi's are Continuous, Monotone Non-decreasing ........ 7
   1.5 The fi's are (Single) Step Functions ........ 9
2  THE DEVELOPMENT OF THE REDUCTION THEOREM ........ 11
   2.1 Introduction ........ 11
   2.2 Two Job Sequencing Relations ........ 13
       Case I: Di < Dk ........ 13
       Case II: Dk < Di ........ 16
   2.3 Reduction Theorem ........ 22
3  SEQUENCING PROBLEMS WITH IDENTICAL LOSS FUNCTIONS ........ 35
   3.1 Introduction ........ 35
   3.2 Two Job Sequencing Rules ........ 35
   3.3 Single-Time Dependent Jobs ........ 38
   3.4 Further Characterization Results ........ 42
4  SEQUENCING PROBLEMS WITH LINEAR LOSS FUNCTIONS ........ 48
   4.1 Introduction ........ 48
   4.2 Two Job Sequencing Rules ........ 48
   4.3 Further Characterization Results ........ 50
   4.4 Schedules for Three Jobs ........ 57

TABLE OF CONTENTS (CONT'D)

Chapter                                                               Page
5  AN n JOB, ONE MACHINE SEQUENCING ALGORITHM FOR MINIMIZING THE NUMBER OF LATE JOBS ........ 63
   5.1 Introduction ........ 63
   5.2 The Algorithm ........ 64
   5.3 Theoretical Development ........ 68
   5.4 Minimizing the Maximum Deferral Cost ........ 75
6  DYNAMIC PROGRAMMING ALGORITHMS FOR THE SEQUENCING OF JOBS SUBJECT TO DEADLINES ........ 77
   6.1 Introduction ........ 77
   6.2 One Machine, Multiple Deadlines and Priorities ........ 78
   6.3 Two Machines in Series, Common Deadline ........ 79
   6.4 Single Machine, Common Deadline, Linear Deferral Costs ........ 81
   6.5 Machines in Parallel ........ 83

APPENDICES ........ 85
BIBLIOGRAPHY ........ 97

LIST OF APPENDICES

Appendix                                                              Page
I    Basic Mathematical Theory for Chapter 2 ........ 86
II   Basic Mathematical Theory for Chapter 3 ........ 88
III  Additional Mathematical Theory for Chapter 3 ........ 90
IV   Proof of Single Time Dependence for Identical Quadratic Loss Functions ........ 92
V    Proof That R(t) Is Transitive ........ 93

LIST OF SYMBOLS

Ji            The job associated with the index i in some set of jobs.
ti            The processing time of job Ji.
Di            The due date of job Ji.
ri            The priority or reward associated with job Ji.
fi            The penalty or loss function associated with Ji, where fi(x) represents the cost of being x units of time late in the completion of job Ji.
J             Represents an arbitrary set of jobs, {J1 ... Jn}.
S             Represents a schedule or well ordering of an arbitrary set of jobs.
L(i,k,t,T)    Represents the loss associated with scheduling a job Ji at time t and another job Jk, T units of time after the completion of Ji.
Mi            = ti + t - Di is the amount of time by which Ji meets or fails to meet its due date in a schedule of the form given above.
Nk            = ti + tk + t + T - Dk is the amount of time by which Jk meets or fails to meet its due date in a schedule of the form given above.
R(t,T)        The relation defined on a set of jobs, J, where (Ji,Jk) ∈ R(t,T) iff L(i,k,t,T) ≤ L(k,i,t,T) for arbitrary Ji, Jk ∈ J.
Ji <<(t,T) Jk Indicates that (Ji,Jk) ∈ R(t,T).
Ji << Jk      Indicates that (Ji,Jk) ∈ R(t,T) for all t and all T ≥ 0.
R             The relation defined on a set of jobs, J, where (Ji,Jk) ∈ R iff Ji << Jk for arbitrary Ji, Jk ∈ J.
P(S,t)        The sum of the lateness penalties incurred over a set of jobs, J, scheduled according to S, where the first job commences at time t.
Ji <<(a) Jk   Indicates that (Ji,Jk) ∈ R(t,0) for all t.
R'            The relation defined on a set of jobs, J, where (Ji,Jk) ∈ R' iff Ji <<(a) Jk for arbitrary Ji, Jk ∈ J.

D             The relation defined on a set of jobs, J, as follows: (1) if Ji ∈ J, then (Ji,Ji) ∈ D; (2) if Ji, Jk are adjacent time dependently associated, then (Ji,Jk) and (Jk,Ji) ∈ D.
Ei            An adjacent time dependent equivalence class with index i, associated with the partition of a set of jobs, J, by the equivalence relation D.
τi,k(T)       The value t* for a pair of single-time dependent jobs such that Ji <<(t,T) Jk for t < t* and Jk <<(t,T) Ji for t > t*.
E             The set of jobs completed on or before their due dates in some schedule S.
L             The set of jobs completed after their due dates in some schedule S.
A             The ordered set of jobs defined by ordering the elements of the set E (defined above) according to their order in the schedule S.
R             The ordered set of jobs defined by ordering the elements of the set L (defined above) according to their order in the schedule S.
              An arbitrary permutation of the set R.
P             The ordered set obtained by applying the permutation to the set R.
AD            The ordered set obtained by ordering the jobs in the set E (defined above) according to their due dates.
aj            The processing time for a job on the first of two machines in series.
bj            The processing time for a job on the second of two machines in series.
pj            Linear deferral cost coefficient for a job.

CHAPTER 1

INTRODUCTION

1.1 Introduction to Job Shop Scheduling Problems

The problems considered here are in the general area of job shop scheduling. Such problems occur in "job shops," which denote special types of manufacturing or service environments. While there is no generally accepted definition of a job shop, the following definition is given to point out some of its distinctive characteristics:

A job shop consists of a collection of service centers, each composed of a finite set of servers that perform operations or services for a set of entities called jobs. These jobs are unique in the sense that: (a) they are identified with a given customer; (b) they cannot generally be stockpiled for sale or service.

The term "job" was used in the above definition and can be thought of as a customer order consisting of a partially ordered set of operations or services to be performed in order to accomplish a well defined objective specified by the customer. In addition to the operations, there may be a delivery commitment (deadline) established by the shop, the customer, or both. Some good examples of job shops are computer or data processing centers, construction companies, repair shops, companies that manufacture primarily to "special order," and hospitals.

The scheduling of a job shop refers to the sequence of decisions that determines the operations on which the servers in each center are to be engaged as a function of time.

The result of scheduling a job shop can be represented in the traditional form of a Gantt Chart. The majority of the job shop scheduling problems considered in the literature concern methods for producing Gantt Charts that optimize a well defined scheduling objective. Some of these objectives are listed below. In each case a rather general statement of the objective is given (one that a shop manager might specify), followed by a series of possible specific mathematical representations of it.

1. Good customer relations should be maintained by seeing that jobs are delivered reasonably on time. Some traditional representations of this objective are:
(a) Minimize the number of late jobs.
(b) Minimize the total lateness over all jobs.
(c) Minimize the maximum lateness.
(d) An arbitrary objective of this form is as follows: let J1 ... Jn be jobs with deadlines D1 ... Dn and loss functions f1 ... fn, where fi(t) is the cost of being t units of time late in the completion of job Ji. The objective is to minimize F(f1(C1-D1), ..., fn(Cn-Dn)), where Ci is the completion time of job Ji in the schedule being evaluated and F is monotone non-decreasing in each of its arguments. For example, in (a) and (b),
F(f1(C1-D1), ..., fn(Cn-Dn)) = Σ (i=1 to n) fi(Ci-Di)
where fi is a unit step function at Ci = Di for (a) and a unit slope ramp function commencing at Ci = Di for (b).

2. The workload at each service center should be reasonably level to avoid excessive overtime and idleness. This objective has been represented as:
(a) Minimize the total idle time for all servers.
(b) Minimize the total make-span (the time from the start of the first operation until the completion of the last operation).

3. In-process inventory costs should be minimized. This objective can assume many forms depending on the "financial" structure of the organization.
(a) Minimize the mean number of jobs in the shop.
(b) Minimize the mean work completed or in progress on all jobs in the shop.
(c) Minimize the mean total work on jobs in the shop.
(d) Minimize the mean work remaining on jobs in the shop.

4. Minimize the shop congestion and flow times for jobs. Some common representations of this objective are:
(a) Minimize mean flow time.
(b) Minimize flow time variance.
(c) Minimize the probability that the flow time for any job is greater than some given constant.
(d) Minimize maximum flow time.
(e) Minimize the mean number of jobs in the shop.
(f) Minimize the variance of the number of jobs in the shop.

While this list of scheduling objectives is by no means exhaustive, most real-life objectives will fall into one of these classes.

The problem, then, is to devise some method for constructing Gantt Charts which optimizes the appropriate performance measure(s). The complexity of this problem depends on the following factors:

1. The number of facility types or service centers and whether this number is stochastic or deterministic.
2. The number and type of servers in each center and whether this number is stochastic or deterministic.
3. The type of possible routing patterns for jobs being processed by the shop.
4. The job arrival process.
5. The total number of jobs to be scheduled.
6. The number of operations to be scheduled and whether this number is stochastic or deterministic.
7. The performance measure to be optimized.
8. The presence or lack of "special shop properties" (for example, due-dates are proportional to processing times, etc.).

It is necessary to specify each of these factors before a reasonably well defined scheduling problem can emerge.

1.2 Job Shop Scheduling Models to be Considered

In order to obtain a reasonably workable and practical model, some simplifying assumptions are generally made concerning the factors listed above. The work in this dissertation falls into a class of job shop scheduling studies in which the following assumptions are made:

1. The number of facility types or service centers is a known constant.
2. The servers in each center are identical and their number is a known constant.
3. The job arrival process consists of one batch arrival of all jobs to be scheduled.
4. The operations for each job are fixed and known.

In particular, a special subclass of these problems referred to as sequencing problems is considered, where it is assumed that the shop has only one facility type or service center and that each job coming into the shop consists of one operation to be performed on that facility. In order to obtain analytic methods for optimizing the various performance measures in models of this type, the following assumptions are made:

1. The processing times (including set up and tear down) are known and assumed to be sequence independent.
2. If deadlines are associated with any jobs, these are known.
3. Once a job (operation) is started on a given server, it is processed to completion without interruption.
4. There are no machine breakdowns.
5. Only one operation can be performed by a server at one time.
6. A server is not permitted to be idle if jobs are queued for service.

This thesis deals primarily with single server systems, although service centers with m (not necessarily identical) servers in parallel or 2 (non-identical) servers in series are also considered.

There are many known results for the single server case. Here the problem is to sequence a finite set of jobs through the server so as to optimize one of the previously mentioned objective functions. Most of these problems have been solved in the sense that for each job in the queue, an index may be calculated from its due-date and processing time. The performance measure is optimized by ordering the jobs according to these indices. A good summary of the state of the art in this area may be found in Conway, Maxwell, and Miller.(1) Little is known, however, about these problems if the objective function is due-date oriented (i.e. some realization of the first general objective previously specified). This work deals solely with these problems.

1.3 Objective Function Forms Considered

It was previously pointed out that a general objective for sequencing jobs subject to deadlines could be represented by a function of the following form:

Minimize F(f1(C1-D1), ..., fn(Cn-Dn))

The two general types of problems considered here are:

1. F(f1(C1-D1), ..., fn(Cn-Dn)) = Σ (i=1 to n) fi(Ci-Di)

2. F(f1(C1-D1), ..., fn(Cn-Dn)) = max(f1(C1-D1), ..., fn(Cn-Dn))

An efficient solution method for the latter problem is presented in Chapter 5 when the fi's are monotone non-decreasing functions. This result is an extension of a theorem obtained by Smith.(15)
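As an illustration of the two objective forms just listed, the following sketch (not from the dissertation; the job data and loss functions are made-up examples) evaluates a given sequence on one machine. With unit step losses the sum form counts the late jobs (objective 1(a)); with unit-slope ramp losses it gives total lateness (1(b)); the max form with ramp losses gives maximum lateness (1(c)).

```python
# Illustrative sketch: evaluating the sum form and the max form of
# F(f1(C1-D1), ..., fn(Cn-Dn)) for one sequence on a single machine.

def completion_times(t, sequence):
    """Completion time of each job when processed back to back from time 0."""
    C, clock = {}, 0.0
    for j in sequence:
        clock += t[j]
        C[j] = clock
    return C

def sum_form(f, t, D, sequence):
    C = completion_times(t, sequence)
    return sum(f(C[j] - D[j]) for j in sequence)

def max_form(f, t, D, sequence):
    C = completion_times(t, sequence)
    return max(f(C[j] - D[j]) for j in sequence)

step = lambda x: 1 if x > 0 else 0     # unit step at the due-date
ramp = lambda x: max(0.0, x)           # unit-slope ramp after the due-date

t = {1: 2.0, 2: 1.0, 3: 3.0}           # hypothetical processing times
D = {1: 2.0, 2: 4.0, 3: 3.0}           # hypothetical due-dates
seq = [1, 2, 3]
print(sum_form(step, t, D, seq),       # number of late jobs
      sum_form(ramp, t, D, seq),       # total lateness
      max_form(ramp, t, D, seq))       # maximum lateness
```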

The former problem is considered for two classes of penalty functions: one in which the fi's are continuous, monotone non-decreasing and the other in which the fi's are (single) step functions. These are discussed separately.

1.4 The fi's are Continuous, Monotone Non-decreasing

This problem (and in some cases a more generalized form) has been considered for certain types of penalty functions by many authors (References 3, 5, 7, 9, 11, 13, 14 and 15). Held and Karp,(5) Lawler,(7) and Root(11) all present dynamic programming formulations of these problems for arbitrary fi's. Unfortunately, the practical applicability of these methods is limited in that they are generally computationally infeasible for problems of twenty jobs or more. McNaughton(9) and Smith(15) present nice solutions for problems in which the penalty functions are linear and all due-dates are equal. However, attempts to extend these results to models in which the facility has m servers in parallel(11) or in which the due-dates are different(3,11,13) have met with little success. Problems in which the loss functions are quadratic have also been considered,(14) but not solved.

The primary objective in the first part of this work was to determine under what conditions a problem of this type can be easily solved and to show what the solution would be. The main result is the "reduction theorem" at the end of Chapter 2.

This theorem states that for a certain class of these problems there exists an equivalence relation on the set of jobs to be scheduled. This relation induces a partition of the jobs into a set of equivalence classes, say E1 ... Ep, on which a well ordering is defined, so that there exists an optimal schedule (or sequence) for the jobs of the form (Ei1, ..., Eip), where the pairs of classes (Ei1, Ei2), ..., (Eip-1, Eip) belong to the well ordering.

The importance of this theorem is twofold. First, it follows that in these cases the only real problem involved in finding optimal schedules is that of scheduling jobs within these equivalence classes. It is further shown that the "nice" results in this area(9,11,15) arise when the latter problem is trivial. In this sense, they all follow directly from the reduction theorem. Second, it broadens the practical applicability of exact techniques(8) such as integer or dynamic programming by possibly reducing the original problem to a series of proper subproblems.

On the basis of this result, Chapters 3 and 4 consider the characteristics of an optimal schedule for the jobs in such an equivalence class. In order to obtain meaningful results, only problems in which f1 = f2 = ... = fn = f are considered, where f is a polynomial. It is shown that the reduction theorem can be applied to this class of problems but that little can be said concerning the characteristics of optimal schedules unless f is of the form ax + bx² (a, b > 0). Further, it is shown that the most highly structured of these problems is that in which f = ax. Several characteristics that an optimal schedule for a problem of this type will have are determined. However, these are not sufficient to guarantee that any schedule having them will be optimal. This is shown by considering in detail problems involving just three jobs. The investigation of the problems in this class ends on that rather negative note and it is this author's opinion that further analytic work in this area is likely to meet with only limited success.

The discussion of the three job problem, however, should provide a valuable source of counter-examples to proposed algorithms in this area.(3,13)

1.5 The fi's are (Single) Step Functions

This problem has received little attention in the literature and the only known pertinent result is that obtained by Smith:(15) all jobs can be completed by their due-dates if and only if they are all on time in the schedule obtained by ordering the jobs in order of increasing due-dates. Chapter 5 contains a simple algorithm for sequencing the jobs so as to minimize the number of late jobs (the fi's are unit step functions). While the author was unable to find similar algorithms for minimizing the number of late jobs when the service center has m servers in parallel or for minimizing the weighted number of late jobs through a single server (fi is a step function of height ri > 0), efficient dynamic programming algorithms for these problems are presented in Chapter 6.

Results not included here have also been obtained concerning the problem of sequencing jobs subject to some arbitrary partial ordering. A generalization of Smith's result was found, showing that there exists a well ordering of the jobs such that the jobs can all be completed on time consistent with the partial ordering if and only if they are all on time in the schedule defined by the well ordering. A rather simple algorithm was developed for determining this well ordering.

In addition, a subclass of problems was specified for which an efficient dynamic programming algorithm can be used to find a schedule that minimizes the weighted number of late jobs subject to the condition that the jobs completed on time be consistent with the partial ordering. A final result defines one class of problems for which the dynamic programming results of the previous sections can be used to minimize the weighted number of late jobs subject to the condition that all jobs be scheduled in a manner consistent with the partial ordering.
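Smith's feasibility result quoted in Section 1.5 can be illustrated with a short sketch (assumed data, not from the dissertation): all jobs can be completed by their due-dates if and only if none is late when the jobs are sequenced in order of non-decreasing due-date.

```python
# Illustrative sketch of the earliest-due-date feasibility test.

def all_on_time_possible(jobs):
    """jobs: list of (processing_time, due_date) pairs."""
    clock = 0.0
    for t, d in sorted(jobs, key=lambda job: job[1]):   # earliest due-date first
        clock += t
        if clock > d:
            return False
    return True

print(all_on_time_possible([(2, 3), (1, 2), (4, 9)]))   # True
print(all_on_time_possible([(2, 3), (3, 4), (1, 6)]))   # False
```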

CHAPTER 2

THE DEVELOPMENT OF THE REDUCTION THEOREM

2.1 Introduction

The problems considered here belong to a general class of problems in which a finite set of jobs must be sequenced through a single facility, minimizing some function of the lateness penalties incurred. Conway, Maxwell, and Miller(1) and Smith(15) contain good summaries of the known results in this area. Specifically, the special problem of minimizing the sum of the lateness penalties(3,5,7,9,11,13,14) is treated when the penalties are independent (i.e. the penalty incurred by a job does not depend on the penalties incurred by other jobs). The cases of linear penalty functions,(3,9,11,13) quadratic penalty functions,(14) and general penalty functions(5,7) have been considered in the literature. Held and Karp(5) present a dynamic programming algorithm for solving these problems, which, unfortunately, is generally computationally infeasible for problems of 20 jobs or more.

This chapter and the two that follow concern conditions under which an optimal solution to such a problem can be obtained by solving a series of proper subproblems. The results, then, will broaden the practical applicability of exact algorithms, such as dynamic or integer programming.

The problem is to sequence n jobs, J1 ... Jn, with known processing times, t1 ... tn, and due-dates, D1 ... Dn, through a single production facility.

For each job, Ji, there is a penalty or loss function, fi, where fi(x) is the cost of being x units of time late in the completion of job Ji. The objective is to find a sequence for the set of jobs which minimizes the sum of the lateness penalty costs incurred over all jobs. It is assumed that the processing times, which are defined to include the set-up and tear-down times, are independent of the sequence. In addition, the n jobs are to be available for production throughout the scheduling period. Once production begins, the facility operates continuously until all jobs are completed. Finally, no lot-splitting is allowed, so that production on a job continues from start to finish without interruption.

This chapter deals with the special class of these problems where the following assumptions are made concerning the penalty functions:

(1) fi(x) = 0 for x ≤ 0; fi(x) > 0 for x > 0
(2) fi is continuous*
(3) fi is non-decreasing

The first and third assumptions imply that there is no penalty until a job is late, at which time a positive penalty is incurred which does not decrease with time. The continuity assumption rules out the possibility of jumps in the loss functions. In fact, as is illustrated in later chapters, the scheduling methods change considerably when this assumption is not satisfied.

*Mathematically, as will be shown in Appendix I, it is necessary that fi be absolutely continuous. However, continuous functions which occur in real-life problems are normally absolutely continuous.
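A minimal sketch (an assumed example, not from the dissertation) of a loss function satisfying assumptions (1)-(3): zero until the job is late, then positive, continuous, and non-decreasing. The particular form a·x + b·x² is the one studied in Chapter 3.

```python
# One concrete loss function satisfying assumptions (1)-(3).

def make_loss(a=1.0, b=0.5):
    def f(x):
        return a * x + b * x * x if x > 0 else 0.0   # no penalty before the due-date
    return f

f = make_loss()
print(f(-2.0), f(0.0), f(3.0))   # 0.0 0.0 7.5
```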

The theorems presented here concern the characterization of certain schedules which are optimal with respect to the specified performance criterion. The main result concerns the conditions under which a special equivalence relation can be defined on the set of jobs, J = {J1 ... Jn}. This relation induces a partition of the jobs such that an optimal schedule for the complete set is obtained by finding an optimal schedule for the jobs in each equivalence class.

2.2 Two Job Sequencing Relations

Definition: Let L(i,k,t,T) represent the loss associated with jobs Ji and Jk in a schedule where the processing of job Ji commences at time t and that of job Jk commences at time t + ti + T. Further, let Mi = ti + t - Di and Nk = ti + tk + t + T - Dk represent the lateness of jobs Ji and Jk respectively in a schedule of the form just defined. Thus,
L(i,k,t,T) = fi(Mi) + fk(Nk)
L(k,i,t,T) = fk(Mk) + fi(Ni)

The following set of results concerns the conditions under which a general inequality can be obtained between L(i,k,t,T) and L(k,i,t,T). Without loss of generality, assume that ti ≤ tk throughout this section.

CASE I: Di < Dk

LEMMA 2.1 Given jobs Ji and Jk with Di < Dk. If fi ≥ fk, fi' is non-decreasing, and fi' ≥ fk', then L(i,k,t,T) ≤ L(k,i,t,T) for all T ≥ 0, -∞ < t < +∞.

Proof:
L(i,k,t,T) = fi(Mi) + fk(Nk)
L(k,i,t,T) = fk(Mk) + fi(Ni)

If Nk ≤ 0, then fk(Nk) = fk(Mk) = 0 and L(i,k,t,T) ≤ L(k,i,t,T) since fi(Mi) ≤ fi(Ni). Hence, assume Nk > 0.

Since Di < Dk, Nk = ti + tk + t + T - Dk < Ni = ti + tk + t + T - Di, and for Nk > 0 we have Ni > 0. For fi ≥ fk, fk(Nk) ≤ fi(Ni), and there are now three cases to consider.

(1) If Mi ≤ 0, then fi(Mi) = 0 and L(i,k,t,T) = fk(Nk) ≤ fk(Mk) + fi(Ni) = L(k,i,t,T).

(2) If Mi > 0 and Mk ≤ 0, then fk(Mk) = 0 and
L(i,k,t,T) = fi(Mi) + fk(Nk)
L(k,i,t,T) = fi(Ni) = fi(Nk) + [fi(Ni) - fi(Nk)]
Since fi(Nk) ≥ fk(Nk), it is sufficient to show that [fi(Ni) - fi(Nk)] ≥ fi(Mi). But Mk = tk + t - Dk ≤ 0, or tk + t ≤ Dk. Hence Mi = ti + t - Di ≤ tk + t - Di ≤ Dk - Di. Consequently, it is sufficient to show that [fi(Ni) - fi(Nk)] ≥ fi(Dk - Di) ≥ fi(Mi). This follows from the fact that Ni - Nk = Dk - Di, fi' is non-decreasing, and Nk > 0. A formal proof is obtained by applying the theorem presented in Appendix I.

(3) If Mi > 0 and Mk > 0, then fk(Mk) > 0 and
L(i,k,t,T) = fi(Mi) + fk(Nk)
L(k,i,t,T) = fk(Mk) + fi(Ni) = fk(Mk) + fi(Mi) + [fi(Ni) - fi(Mi)]
In this case, it is sufficient to show that fk(Mk) + fi(Ni) - fi(Mi) ≥ fk(Nk), or fi(Ni) - fi(Mi) ≥ fk(Nk) - fk(Mk). This follows from the fact that Ni ≥ Nk, Ni - Mi = tk + T ≥ ti + T = Nk - Mk, and fi' ≥ fk'. A formal proof results from applying the theorem presented in Appendix I.

Therefore, L(i,k,t,T) ≤ L(k,i,t,T) for all T ≥ 0 and all t.

Reviewing the proof of this lemma, it is interesting to point out how each sufficient condition concerning the loss functions affects the result. The three stated conditions were:
(1) fi ≥ fk
(2) fi' is non-decreasing
(3) fi' ≥ fk'
If the first condition is not satisfied, L(i,k,t,T) ≤ L(k,i,t,T) if Nk ≤ 0 (i.e. T + t ≤ Dk - ti - tk). If fi ≥ fk but the second condition is not satisfied, then L(i,k,t,T) ≤ L(k,i,t,T) for t ≤ Di - ti. If only the third condition is not satisfied, then L(i,k,t,T) ≤ L(k,i,t,T) for t ≤ max((Di - ti), (Dk - tk)).
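The pairwise loss L(i,k,t,T) defined above, and the conclusion of Lemma 2.1, can be spot-checked numerically. The following sketch is not from the dissertation; the two jobs and their loss functions are invented so that the lemma's hypotheses (ti ≤ tk, Di ≤ Dk, fi ≥ fk, fi' non-decreasing, fi' ≥ fk') hold.

```python
# L(i,k,t,T) per Section 2.2, and a spot check of Lemma 2.1 on one pair of jobs.

def pairwise_loss(first, second, t, T):
    """first/second are (processing_time, due_date, loss_function) triples."""
    (t1, d1, f1), (t2, d2, f2) = first, second
    M = t + t1 - d1              # lateness of the job scheduled first
    N = t + t1 + T + t2 - d2     # lateness of the job scheduled second
    return f1(M) + f2(N)

f_i = lambda x: 2 * x * x if x > 0 else 0.0   # fi(x) = 2x^2
f_k = lambda x: x * x if x > 0 else 0.0       # fk(x) = x^2
J_i = (1.0, 3.0, f_i)   # ti = 1, Di = 3
J_k = (2.0, 5.0, f_k)   # tk = 2, Dk = 5

for t in (0.0, 2.0, 4.0, 10.0):
    for T in (0.0, 1.0):
        assert pairwise_loss(J_i, J_k, t, T) <= pairwise_loss(J_k, J_i, t, T)
print("Lemma 2.1 inequality holds on the sampled (t, T) values")
```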

CASE II: Dk < Di

The following points in time are significant in the analysis to follow: (Dk - ti - tk), (Di - ti - tk), (Dk - tk), (Di - tk) and (Di - ti). Since ti ≤ tk and Dk < Di,
(Dk - ti - tk) ≤ (Dk - tk), (Di - ti - tk);
(Di - tk) > (Dk - tk), (Di - ti - tk);
(Di - ti) ≥ (Di - tk).
There is no general relation, however, between (Dk - tk) and (Di - ti - tk). The two possible cases are shown below.
(1) (Dk-ti-tk) ≤ (Dk-tk) ≤ (Di-ti-tk) < (Di-tk) ≤ (Di-ti)
(2) (Dk-ti-tk) < (Di-ti-tk) < (Dk-tk) < (Di-tk) ≤ (Di-ti)
As before,
L(i,k,t,T) = fi(Mi) + fk(Nk)
L(k,i,t,T) = fk(Mk) + fi(Ni)
There are five general cases to consider.

(1) If t ≤ Dk - ti - tk with T ≥ 0, then ti + tk + t - Dk ≤ 0 and hence fi(Mi) = fk(Mk) = 0. In this case,
L(i,k,t,T) = fk(Nk)
L(k,i,t,T) = fi(Ni)

For Lemma 2.1, it was necessary to assume that fi ≥ fk. In this case, however, no comparison can be made between L(i,k,t,T) and L(k,i,t,T) under this assumption in that Nk > Ni. Hence, it will be assumed throughout this discussion that fi ≤ fk. Consequently, L(k,i,t,T) ≤ L(i,k,t,T) for the case being considered.

(2) (Dk - ti - tk) < t ≤ (Di - ti - tk) with T ≥ 0. In this case, if fk' is non-decreasing, then L(k,i,t,T) ≤ L(i,k,t,T) for all T ≥ 0.
L(i,k,t,T) = fk(Nk)   since Mi ≤ 0
L(k,i,t,T) = fk(Mk) + fi(Ni)
Clearly L(k,i,t,T) ≤ L(i,k,t,T) when t ≤ Dk - tk (i.e. Mk ≤ 0). If t > Dk - tk, then rewrite the expression for L(i,k,t,T):
L(i,k,t,T) = fk(Ni) + [fk(Nk) - fk(Ni)]
Since fk(Ni) ≥ fi(Ni), it is sufficient to show that fk(Nk) - fk(Ni) ≥ fk(Mk). This follows from the fact that fk(Nk) - fk(Ni) ≥ fk(Nk - T) - fk(Ni - T) = fk(Nk - T) (since Ni - T ≤ 0 for t ≤ Di - ti - tk), fk' is non-decreasing, and Nk - T ≥ Mk. A formal proof results by applying the theorem of Appendix I.

(3) (Di - ti - tk) < t ≤ Di - tk with T ≥ 0. As in the previous case, L(k,i,t,T) ≤ L(i,k,t,T) for all T ≥ 0 if fk' is non-decreasing.
L(i,k,t,T) = fk(Nk)
L(k,i,t,T) = fk(Mk) + fi(Ni)

If t ≤ Dk - tk, then L(k,i,t,T) ≤ L(i,k,t,T). If t > Dk - tk, then write L(i,k,t,T) = fk(Ni) + [fk(Nk) - fk(Ni)], and it is sufficient to show that (fk(Nk) - fk(Ni)) ≥ fk(Mk). This follows directly from the theorem of Appendix I in that fk(Nk) - fk(Ni) ≥ fk(Nk - Ni) = fk(Di - Dk), fk' is non-decreasing, and Mk ≤ Di - Dk for t ≤ Di - tk.

(4) Di - tk < t ≤ Di - ti with T ≥ 0. Here, the above development would hold except that (Di - Dk) < Mk ≤ (Di - Dk) + (tk - ti). Consequently, no general comparison can be made between (fk(Nk) - fk(Ni)) and fk(Mk). In this case, then, there is no general inequality between L(k,i,t,T) and L(i,k,t,T).

(5) t > Di - ti with T ≥ 0.
L(i,k,t,T) = fi(Mi) + fk(Nk)
L(k,i,t,T) = fk(Mk) + fi(Ni)
Since Mi = ti + t - Di < tk + t - Dk = Mk, Ni = ti + tk + T + t - Di < ti + tk + T + t - Dk = Nk, and fi ≤ fk, it follows that fi(Ni) ≤ fk(Nk) and fi(Mi) ≤ fk(Mk).

Therefore, it is necessary to compare fk(Mk) - fi(Mi) and fk(Nk) - fi(Ni), or fi(Ni) - fi(Mi) and fk(Nk) - fk(Mk). But Ni - Mi = tk + T ≥ ti + T = Nk - Mk, while fi ≤ fk; further, Mk > Mi. Hence, no general inequality exists between L(i,k,t,T) and L(k,i,t,T).

The results of this analysis are summarized below. When Dk < Di, no comparison can be made between L(i,k,t,T) and L(k,i,t,T) unless fk ≥ fi.
(a) For fk ≥ fi, L(k,i,t,T) ≤ L(i,k,t,T) when t ≤ Dk - tk and T ≥ 0.
(b) For fk ≥ fi and fk' non-decreasing, L(k,i,t,T) ≤ L(i,k,t,T) when t ≤ Di - tk and T ≥ 0.
(c) No general inequality exists for t > Di - tk and T ≥ 0.

The statement that no general inequality exists between L(i,k,t,T) and L(k,i,t,T) for certain ranges of values of t, T should be clarified: for given fi, fk, t, T, there exist values of ti, tk, Di, Dk for which L(i,k,t,T) < L(k,i,t,T) and others for which L(k,i,t,T) < L(i,k,t,T).

So far the discussion has centered on sufficient conditions for L(i,k,t,T) ≤ L(k,i,t,T) or vice-versa for various values of t, T. One important necessary condition for such a relation to hold is given below.

LEMMA 2.2 Given Ji and Jk with arbitrary ti, tk, Di, Dk, fi, and fk; L(i,k,t,T) ≤ L(k,i,t,T) for all t and all T ≥ 0 only if Di ≤ Dk.

Proof: Suppose Dk < Di and, without loss of generality, T = 0. Let t be such that ti + tk + t - Di = 0, that is, t = Di - ti - tk. Hence,
ti + t - Di = Mi = -tk < 0
ti + tk + t - Dk = Nk = Di - Dk > 0
But tk + t - Dk = Mk = Di - Dk - ti. If Mk ≤ 0,
L(i,k,t,T) = fk(Di - Dk) > 0
L(k,i,t,T) = 0
and the proof is complete. Otherwise Di - Dk - ti > 0, say Di - Dk - ti = δ where δ > 0. Then let t = Di - ti - tk - δ and
L(i,k,t,T) = fk(Di - Dk - δ) > 0
L(k,i,t,T) = 0
and hence there exists t such that L(k,i,t,T) < L(i,k,t,T) for arbitrary T ≥ 0.

The results obtained so far are summarized in the diagram on the next page. These results are not particularly encouraging and should indicate the potential complexity of these problems. However, the fact that there are conditions for which L(i,k,t,T) ≤ L(k,i,t,T) for all T ≥ 0 can be used in problems with a special property to obtain the main result in this chapter, a reduction theorem for decomposing the original problem into a series of subproblems.

Diagram 1. Sequencing Rules for Two Jobs, Ji and Jk (ti ≤ tk assumed).

For Di ≤ Dk: if fi ≥ fk fails, L(i,k,t,T) ≤ L(k,i,t,T) is guaranteed only for t ≤ Dk - ti - tk - T; if fi ≥ fk holds but fi' is not non-decreasing, the inequality is guaranteed for t ≤ Di - ti; if only fi' ≥ fk' fails, it is guaranteed for t ≤ max((Di - ti), (Dk - tk)); if all three conditions hold, L(i,k,t,T) ≤ L(k,i,t,T) for all T ≥ 0 and -∞ < t < +∞.

For Dk < Di: unless fk ≥ fi, no general inequality exists; for fk ≥ fi, L(k,i,t,T) ≤ L(i,k,t,T) is guaranteed for t ≤ Dk - tk; if in addition fk' is non-decreasing, it is guaranteed for t ≤ Di - tk; no general inequality exists for t > Di - tk.
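The decision logic of Diagram 1 can be written down directly. The sketch below is my reading of the chart and of the recaps in the text, not code from the dissertation: the caller states which conditions on the loss functions hold, and the function reports the range of start times t over which a general inequality is guaranteed (assuming ti ≤ tk).

```python
# A sketch of the two-job sequencing rules summarized in Diagram 1.

def guaranteed_range(ti, tk, Di, Dk, T,
                     fi_ge_fk=False, fi_prime_nondec=False, fi_prime_ge_fk_prime=False,
                     fk_ge_fi=False, fk_prime_nondec=False):
    if Di <= Dk:                                   # Case I: Ji scheduled first
        if fi_ge_fk and fi_prime_nondec and fi_prime_ge_fk_prime:
            return ("L(i,k) <= L(k,i)", "all t")
        if fi_ge_fk and fi_prime_nondec:
            return ("L(i,k) <= L(k,i)", f"t <= {max(Di - ti, Dk - tk)}")
        if fi_ge_fk:
            return ("L(i,k) <= L(k,i)", f"t <= {Di - ti}")
        return ("L(i,k) <= L(k,i)", f"t <= {Dk - ti - tk - T}")
    else:                                          # Case II: Jk scheduled first
        if fk_ge_fi and fk_prime_nondec:
            return ("L(k,i) <= L(i,k)", f"t <= {Di - tk}")
        if fk_ge_fi:
            return ("L(k,i) <= L(i,k)", f"t <= {Dk - tk}")
        return ("no general inequality", None)

print(guaranteed_range(1.0, 2.0, 5.0, 3.0, T=0.0, fk_ge_fi=True, fk_prime_nondec=True))
```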

2.3 Reduction Theorem

Definition: Given a set of jobs, J = {J1 ... Jn}, and values for T ≥ 0 and t, a relation, R(t,T), is defined on the set J as follows: (Ji,Jk) ∈ R(t,T) iff L(i,k,t,T) ≤ L(k,i,t,T). Denote the fact that (Ji,Jk) ∈ R(t,T) by writing Ji <<(t,T) Jk.

Definition: Two jobs, Ji and Jk, are said to be time independent iff Ji <<(t,T) Jk for all -∞ < t < +∞, T ≥ 0, or Jk <<(t,T) Ji for all such t, T. Otherwise, the jobs are said to be time dependent.

Definition: Two jobs, Ji and Jk, are said to be adjacent time independent iff Ji <<(t,0) Jk for all t, or Jk <<(t,0) Ji for all t. Otherwise, the jobs are said to be adjacent time dependent.

It should be obvious that jobs which are time independent are adjacent time independent and those that are adjacent time dependent are time dependent. When Ji and Jk are time independent with Ji <<(t,T) Jk for all -∞ < t < +∞, T ≥ 0, this is denoted by writing Ji << Jk. In like manner, Ji <<(a) Jk denotes that Ji and Jk are adjacent time independent with Ji <<(t,0) Jk for all t. For a set of jobs, J = {J1 ... Jn}, let R denote the relation defined by time independence and R' the relation defined by adjacent time independence.

One might expect that the flow diagram presented in the previous section would be simplified if it were assumed that T = 0 throughout the analysis.

Unfortunately, no simplification arises if this assumption is made. However, the necessary condition stated in Lemma 2.2 for time independence is also necessary for adjacent time independence. This can be seen by noting that the proof of the lemma does not depend on the value of T.

Definition: Given a schedule, S = (Ji1, ..., Jin), for a set of jobs, J = {J1 ... Jn}, with known processing times, t1 ... tn, due-dates, D1 ... Dn, and loss functions, f1 ... fn, let P(S,t) denote the total penalty incurred if the jobs are scheduled as in S with the first job commencing at time t, where the due-dates are defined relative to t = 0:

P(S,t) = Σ (r=1 to n) fir(t + ti1 + ... + tir - Dir)

LEMMA 2.3 Given a schedule, S = (Ji1, ..., Jik-1, Jik, Jik+1, ..., Jin), and a schedule S' = (Ji1, ..., Jik-1, Jik+1, Jik, ..., Jin), then P(S',t) ≤ P(S,t) iff
L(ik+1, ik, t + ti1 + ... + tik-1, 0) ≤ L(ik, ik+1, t + ti1 + ... + tik-1, 0)

Proof: The jobs preceding Jik and the jobs following Jik+1 have the same completion times in S and S'. Hence
P(S',t) = P(S,t) + L(ik+1, ik, t + ti1 + ... + tik-1, 0) - L(ik, ik+1, t + ti1 + ... + tik-1, 0)

Hence P(S',t) ≤ P(S,t) iff L(ik+1, ik, t + ti1 + ... + tik-1, 0) ≤ L(ik, ik+1, t + ti1 + ... + tik-1, 0).

Corollary: If Jik+1 <<(a) Jik, then P(S',t) ≤ P(S,t) for all t.

Proof: L(ik+1, ik, t, 0) ≤ L(ik, ik+1, t, 0) for all t.

LEMMA 2.4 Given a schedule, S = (J1, ..., Jn), in which there are two jobs, Ji and Jk, with i < k but Jk << Ji and tk ≤ ti. The schedule S', obtained by interchanging Ji and Jk in the schedule S, is such that P(S',t) ≤ P(S,t).

Proof: Write
P(S,t) = f1(t + t1 - D1) + f2(t + t1 + t2 - D2) + ... + fi(t + t1 + ... + ti - Di) + ... + fk(t + t1 + ... + tk - Dk) + ... + fn(t + t1 + ... + tn - Dn)
       = L1 + L2 + ... + Ln

P(S',t) = L1 + L2 + ... + Li-1 + fk(t + t1 + ... + ti-1 + tk - Dk) + fi+1(t + t1 + ... + ti-1 + tk + ti+1 - Di+1) + ... + fk-1(t + t1 + ... + ti-1 + tk + ti+1 + ... + tk-1 - Dk-1) + fi(t + t1 + ... + ti-1 + tk + ti+1 + ... + tk-1 + ti - Di) + Lk+1 + ... + Ln

But the sum of the terms for Ji and Jk in P(S',t) is no greater than Li + Lk, since L(k,i,t',T) ≤ L(i,k,t',T) for all T ≥ 0 (here t' = t + t1 + ... + ti-1 and T = ti+1 + ... + tk-1). And since tk ≤ ti and the fi's are non-decreasing, the term for each of the jobs Ji+1, ..., Jk-1 in P(S',t) is no greater than the corresponding term Li+1, ..., Lk-1. Hence P(S',t) ≤ P(S,t).

At this point it is necessary to assume that the scheduling problems to be considered have the property that R' is a transitive relation. While this property is quite reasonable, it is not obvious that all scheduling problems in the general class previously defined have it. However, the author has not been able to find a numerical example where R' is not transitive.

LEMMA 2.5 Given a set of jobs, J = {J1 ... Jn}, where Ji and Jk are adjacent time dependent but are adjacent time independent with all other jobs in the set, J' = J - {Ji, Jk}. Then J' can be partitioned into two sets, B and A, such that
Jq ∈ B iff Jq <<(a) Ji and Jq <<(a) Jk
Jq ∈ A iff Ji <<(a) Jq and Jk <<(a) Jq

Proof: Consider any job Jq ∈ J'. Since Ji and Jq are adjacent time independent, either Ji <<(a) Jq or Jq <<(a) Ji.

(1) If Ji <<(a) Jq, then consider Jk and Jq. As in the case of Ji, either Jk <<(a) Jq or Jq <<(a) Jk. If Jk <<(a) Jq, then Jq ∈ A. If Jq <<(a) Jk, then Ji <<(a) Jq <<(a) Jk and Ji <<(a) Jk since R' is transitive. But Ji and Jk are adjacent time dependent and we have a contradiction. Hence, Jk <<(a) Jq and Jq ∈ A.

(2) If Jq <<(a) Ji, then the same type of argument as above can be used to show that Jq <<(a) Jk and hence Jq ∈ B.

Definition: For a set of jobs, J = {J1 ... Jn}, a pair of jobs, Ji and Jk, are said to be adjacent time dependently associated if there exists a sequence of jobs, (Ji1, ..., Jip), Jiq ∈ J for q = 1, ..., p, such that (Ji, Ji1), (Ji1, Ji2), ..., (Jip-1, Jip), (Jip, Jk) are adjacent time dependent pairs. Such a sequence of jobs, (Ji, Ji1, ..., Jip, Jk), is called an adjacent time dependent string.

Define the relation, D, on the set J as follows:
(1) If Ji ∈ J, then (Ji, Ji) ∈ D.
(2) If Ji, Jk ∈ J are adjacent time dependently associated, then (Ji, Jk) and (Jk, Ji) ∈ D.

LEMMA 2.6 D is an equivalence relation.

Proof: Since D is reflexive and symmetric by definition, it is only necessary to show that D is transitive. If (Ji, Jk) and (Jk, Jq) ∈ D, then there exist adjacent time dependent strings (Ji, Ji1, ..., Jip, Jk) and (Jk, Jk1, ..., Jks, Jq).

Thus (Ji, Ji1, ..., Jip, Jk, Jk1, ..., Jks, Jq) is an adjacent time dependent string and (Ji, Jq) ∈ D.

Definition: Associated with the equivalence relation D is a partition of the set, J, into a set of equivalence classes. Such an equivalence class will be called an adjacent time dependent equivalence class.

LEMMA 2.7 Given a set of jobs, J = {J1 ... Jn}, and a proper subset, J' = {Jp1 ... Jpm}, which is an adjacent time dependent equivalence class. The remaining jobs in J can be partitioned into two sets, B and A, such that
Ji ∈ B iff Ji <<(a) Jp1, ..., Ji <<(a) Jpm
Ji ∈ A iff Jp1 <<(a) Ji, ..., Jpm <<(a) Ji

Proof: Consider any job, Ji ∈ J - J'. Since J' is an adjacent time dependent equivalence class, Ji is adjacent time independent with each job in J'. In particular, either Ji <<(a) Jp1 or Jp1 <<(a) Ji.

If Ji <<(a) Jp1, consider any other job, Jpk ∈ J'. By definition of an adjacent time dependent equivalence class there exists an adjacent time dependent string of the form (Jp1, Jq1, ..., Jqs, Jpk), where Jq1, ..., Jqs ∈ J'. But (Jp1, Jq1), ..., (Jqs, Jpk) are adjacent time dependent pairs. Hence, apply Lemma 2.5 to each pair in the order given to obtain:
Ji <<(a) Jq1, ..., Ji <<(a) Jqs, Ji <<(a) Jpk

Hence Ji <<(a) Jpk for arbitrary Jpk ∈ J'. In like manner, if Jp1 <<(a) Ji, the same argument can be used to show that Jpk <<(a) Ji for arbitrary Jpk ∈ J'. Consequently each job, Ji ∈ J - J', will be uniquely assigned to either set B or set A.

Remarks: Since it was assumed that the relation defined by adjacent time independence is transitive, it should be obvious that for any pair of jobs, Ji ∈ B and Jk ∈ A, Ji <<(a) Jk.

LEMMA 2.8 Given a set of jobs, J = {J1 ... Jn}, and a proper subset, J' = {Jp1 ... Jpm}, which is an adjacent time dependent equivalence class. Any schedule, S, in which Jp1, ..., Jpm are not adjacent can be transformed into a schedule S' in which Jp1, ..., Jpm are adjacent, such that P(S',t) ≤ P(S,t).

Proof: Without loss of generality, assume S is of the form (J1, ..., Jp1, ..., Jp2, ..., Jpm, ..., Jn), where jobs Ji1, ..., Jis lie between jobs Jp1, ..., Jpm. Using Lemma 2.7, each job in the set {Ji1 ... Jis} is a member of either the set B or the set A defined by the set of jobs J'. There are three general cases to consider.

(1) If Ji1 ∈ B, then Ji1 <<(a) Jpk for k = 1, ..., m. In this case, define a new schedule, S1, of the form (J1, ..., Ji1, Jp1, ..., Jp2, ..., Jpm, ..., Jn), and P(S1,t) ≤ P(S,t) by repeated application of the corollary to Lemma 2.3. Now operate on S1.

(2) If (1) does not apply, then Ji1 ∈ A; consider Jis. If Jis ∈ A, then Jpk <<(a) Jis for k = 1, ..., m, and a new schedule, S1, can be defined of the form (J1, ..., Jp1, ..., Jp2, ..., Jpm, Jis, ..., Jn). As in the previous case, P(S1,t) ≤ P(S,t) by repeated applications of the corollary to Lemma 2.3. Now operate on S1.

(3) If neither (1) nor (2) applies, then Ji1 ∈ A and Jis ∈ B. Hence, there must exist a pair of jobs, Jib and Jib+1, such that Jib ∈ A and Jib+1 ∈ B. But Jib+1 <<(a) Jpk for k = 1, ..., m, and Jpk <<(a) Jib for k = 1, ..., m. Further, Jib+1 <<(a) Jib. Hence, Jib may be moved to the right by successive interchanges to obtain a schedule, S1, of the form (J1, ..., Jp1, ..., Jib+1, Jib, ..., Jpm, ..., Jn), and P(S1,t) ≤ P(S,t) by repeated application of the corollary to Lemma 2.3. Now operate on S1.

At each stage, either one of the jobs in the set {Ji1 ... Jis} is eliminated from the subsequence (Jp1, ..., Jpm), or a pair of jobs in the set {Ji1 ... Jis} is interchanged so that the set is one interchange closer to being ordered in such a way that the jobs in B precede the jobs in A. Since there are a finite number of jobs, the process terminates with a schedule, Sr, of the form (J1, ..., J'i1, ..., J'iq, Jp1, ..., Jpm, J'iq+1, ..., J'is, ..., Jn), in which Jp1, ..., Jpm are adjacent and J'ik ∈ B for k = 1, ..., q and J'ik ∈ A for k = (q+1), ..., s. Now let S' = Sr and P(S',t) = P(Sr,t) ≤ ... ≤ P(S,t).

LEMMA 2.9 Given a set of jobs, J = {J1 ... Jn}, with a proper subset, J' = {Jp1 ... Jpm}, which is an adjacent time dependent equivalence class. Any schedule, S, can be transformed into a schedule of the form S' = (B, J', A), where B and A are defined for the equivalence class J', with P(S',t) ≤ P(S,t).

Proof: Consider a schedule, S, and assume that S is not of the form (B, J', A). If the jobs in the set J' are not adjacent in the schedule S, apply Lemma 2.8 to obtain a new schedule, S1, in which they are adjacent; S1 will be of the form (B1, J', A1) with P(S1,t) ≤ P(S,t). There are now two steps that must be performed to obtain the schedule of the desired form.

1. If B1 ⊆ B, go to step 2. Otherwise, consider the set of jobs B1 ∩ A. Pick the last such job, Jq, in the current schedule; clearly Ji <<(a) Jq for all Ji ∈ B and Jpk <<(a) Jq for all Jpk ∈ J'. Now move the job Jq to the right by successive interchanges until a job, Jr, is encountered where Jr ∈ A. Since Jq was the last job in the current schedule which was a member of B1 ∩ A, the new schedule, S2, so defined must be of the form (B2, J', A2) where
B2 = B1 - {Jq}
A2 = A1 ∪ {Jq}
and P(S2,t) ≤ P(S1,t) by repeated applications of the corollary to Lemma 2.3. Now use S2 as the current schedule and repeat this step. This process terminates in a finite number of steps with a schedule, Sr, where Sr is of the form (Br, J', Ar) and Br ⊆ B, with P(Sr,t) ≤ ... ≤ P(S1,t) ≤ P(S,t).

2. Consider the set Ar; if Ar ∩ B = ∅, the schedule Sr is of the form (B, J', A) and the proof is complete. If Ar ∩ B ≠ ∅, pick the first job, Jq, in the current schedule where Jq ∈ Ar ∩ B. Clearly Jq <<(a) Ji for all Ji ∈ A and Jq <<(a) Jpk for all Jpk ∈ J'. Now move the job Jq to the left by successive interchanges until a job, Jr, is encountered where Jr ∈ B. Since Jq was the first job in the current schedule which was a member of Ar ∩ B, the new schedule, Sr+1, so defined must be of the form (Br+1, J', Ar+1) where
Ar+1 = Ar - {Jq}
Br+1 = Br ∪ {Jq}
and P(Sr+1,t) ≤ P(Sr,t) by repeated applications of the corollary to Lemma 2.3. Now use Sr+1 as the current schedule and repeat this step. This process terminates in a finite number of steps with a schedule, Sr+s, where Sr+s is of the form (Br+s, J', Ar+s) and Ar+s ∩ B = ∅. But Br+s ∩ A = ∅, and hence Sr+s is of the form (B, J', A) with P(Sr+s,t) ≤ ... ≤ P(Sr+1,t) ≤ ... ≤ P(S1,t) ≤ P(S,t).

Definition: Given a set of jobs, J = {J1 ... Jn}, for which there are at least two adjacent time dependent equivalence classes. For two such classes, Ei and Ek, we say that Ei precedes Ek, denoted Ei << Ek, if Jq <<(a) Jr for Jq ∈ Ei and Jr ∈ Ek.

Remark: The relation "precedes" as defined above is obviously transitive and antisymmetric. In fact, for a set of m equivalence classes for a set of jobs, J, this relation constitutes a well ordering of the equivalence classes.

LEMMA 2.10 Given a set of jobs, J = {J1 ... Jn}, for which there are at least two adjacent time dependent equivalence classes, Ei and Ek, where Ei << Ek. Then any schedule, S, can be transformed into a schedule of the form S' = (Bi, Ei, (Ai ∩ Bk), Ek, Ak), where Bi and Ai correspond to Ei and Bk and Ak correspond to Ek, with P(S',t) ≤ P(S,t).

Proof: Since Ei << Ek, Bi ⊆ Bk and Ak ⊆ Ai. By Lemma 2.9 the schedule, S, can be transformed into a schedule of the form S1 = (Bi, Ei, Ai), where Ek ⊆ Ai since Ei << Ek, and P(S1,t) ≤ P(S,t). Now consider the set Ai. Using Lemma 2.9 and the principle of optimality, the schedule S1 can be transformed into a schedule of the form S2 = (Bi, Ei, B'k, Ek, A'k), where B'k and A'k are defined for Ek relative to the set of jobs Ai. But B'k = Bk ∩ Ai and A'k = Ak ∩ Ai. Hence the schedule, S2, can be transformed into a schedule of the form S' = (Bi, Ei, (Ai ∩ Bk), Ek, (Ai ∩ Ak)), or S' = (Bi, Ei, (Ai ∩ Bk), Ek, Ak), and P(S',t) ≤ P(S2,t) ≤ P(S1,t) ≤ P(S,t).

THEOREM 2.1 (Reduction Theorem): Given a set of jobs, J = {J1 ... Jn}, for which there are m adjacent time dependent equivalence classes, E1, ..., Em. Any schedule, S, can be transformed into a schedule of the form S' = (Ei1, Ei2, ..., Eim), where Ei1 << Ei2 << ... << Eim and P(S',t) ≤ P(S,t).

Proof: Using Lemma 2.10, the schedule, S, can be transformed into a schedule of the form S1 = (Bi1, Ei1, (Ai1 ∩ Bi2), Ei2, Ai2), where P(S1,t) ≤ P(S,t). However, since Ei1 << Ei2 << ... << Eim, it follows that there does not exist a job, Jq ∈ J, such that for some job Jr ∈ Ei1, Jq <<(a) Jr and Jq ∉ Ei1. Hence, Bi1 = ∅. Further, since the set of Eik's constitutes a partition of the set J, there cannot exist a job, Jq ∈ J, where for jobs Jr ∈ Ei1 and Js ∈ Ei2, Jr <<(a) Jq <<(a) Js and Jq ∉ Ei1 or Ei2. Consequently, S1 is of the form (Ei1, Ei2, Ai2). Now operate on Ai2 to obtain a schedule of the form S2 = (Ei1, Ei2, Ei3, Ei4, Ai4), and P(S2,t) ≤ P(S1,t) ≤ P(S,t), etc. This process terminates in a finite number of steps with the schedule Sq = S' and P(S',t) = P(Sq,t) ≤ ... ≤ P(S1,t) ≤ P(S,t).

Corollary: If R' is a well ordering of the jobs, then the schedule defined by R' is optimal.

Proof: If R' is a well ordering, there exists a sequence of jobs, Ji1 <<(a) Ji2 <<(a) ... <<(a) Jin, and the schedule (Ji1, ..., Jin) is clearly optimal.
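The reduction idea can be sketched in code. The sketch below is not the dissertation's own procedure: it assumes the setting of Chapter 3, where every job has the same loss function f (here f(x) = x² for x > 0, else 0), so that two jobs are adjacent time dependent exactly when the one with the smaller processing time has the later due-date. Jobs are grouped into adjacent time dependent equivalence classes, the classes are placed in their natural order, and each (small) class is then solved by enumeration; the class-ordering key and the within-class enumeration are illustrative choices.

```python
# Sketch of Theorem 2.1: partition into equivalence classes, order the
# classes, and solve the residual problem within each class.

from itertools import permutations

def f(x):
    return x * x if x > 0 else 0.0

def total_penalty(seq, jobs, start=0.0):
    clock, total = start, 0.0
    for j in seq:
        clock += jobs[j][0]
        total += f(clock - jobs[j][1])
    return total

def dependent(a, b):
    """Adjacent time dependence for identical loss functions (Section 3.2)."""
    (ta, Da), (tb, Db) = a, b
    return (ta < tb and Db < Da) or (tb < ta and Da < Db)

def schedule(jobs):
    """jobs: dict name -> (processing_time, due_date)."""
    classes = []                                    # connected components of D
    for j in jobs:
        linked = [c for c in classes if any(dependent(jobs[j], jobs[k]) for k in c)]
        merged = {j}.union(*linked) if linked else {j}
        classes = [c for c in classes if c not in linked] + [merged]
    classes.sort(key=lambda c: min(jobs[j][1] for j in c))   # class order
    sequence, clock = [], 0.0
    for c in classes:
        best = min(permutations(sorted(c)),
                   key=lambda p: total_penalty(p, jobs, clock))
        sequence.extend(best)
        clock += sum(jobs[j][0] for j in c)
    return sequence

jobs = {"J1": (2, 5), "J2": (4, 3), "J3": (5, 9), "J4": (7, 8)}   # assumed data
order = schedule(jobs)
print(order, total_penalty(order, jobs))
```

Here {J1, J2} and {J3, J4} form the two equivalence classes; the reduction lets each pair be sequenced independently instead of enumerating all 24 orderings.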

The results obtained by McNaughton(9) and Smith(15) on scheduling jobs with linear deferral costs follow directly from the corollary using the following lemma.

LEMMA 2.11 If L(i,k,t,0) = ai(ti + t) + ak(ti + tk + t) for all Ji, Jk and t ≥ 0, and = 0 for t < 0, then Ji <<(a) Jk iff ti/ai ≤ tk/ak.

Proof:
L(i,k,t,0) = ai(ti + t) + ak(ti + tk + t)
L(k,i,t,0) = ak(tk + t) + ai(ti + tk + t)
L(k,i,t,0) - L(i,k,t,0) = ai tk - ak ti
which is non-negative iff ti/ai ≤ tk/ak. Hence, R' is a well ordering of the jobs and the result follows from the corollary to the reduction theorem.

Incidentally, this type of problem provides an excellent example of two jobs that are adjacent time independent but not time independent. Consider Ji and Jk where ti/ai < tk/ak but ak > ai. Assuming that t is sufficiently large,
L(k,i,t,T) - L(i,k,t,T) = ai tk - ak ti + (ai - ak)T
which is negative for sufficiently large T.

This theorem, then, demonstrates that there will be at least one optimal schedule of the form specified above. This reduces the problem to one of determining the optimal sequence within each adjacent time dependent equivalence class. One property that such an optimal sequence will have was given by Lemma 2.4. There are other properties, however, which at least one optimal schedule will possess if additional assumptions are made concerning the fi's. These are the subject of the next chapter.
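A minimal sketch (with made-up data, not from the dissertation) of the ordering given by Lemma 2.11: with linear deferral costs accruing from time zero at rate ai, sequencing the jobs by non-decreasing ti/ai is optimal, since R' is then a well ordering.

```python
# Ratio ordering of Lemma 2.11, with each job paying a_i per unit of time
# until its completion.

def ratio_rule(jobs):
    """jobs: list of (name, processing_time, cost_rate a)."""
    return sorted(jobs, key=lambda job: job[1] / job[2])

def total_deferral_cost(sequence):
    clock, cost = 0.0, 0.0
    for _, t, a in sequence:
        clock += t
        cost += a * clock          # a_i times the completion time of job i
    return cost

jobs = [("J1", 3, 1.0), ("J2", 1, 2.0), ("J3", 2, 1.0)]
order = ratio_rule(jobs)
print([name for name, _, _ in order], total_deferral_cost(order))
```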

CHAPTER 3

SEQUENCING PROBLEMS WITH IDENTICAL LOSS FUNCTIONS

3.1 Introduction

The problem of sequencing two jobs, Ji and Jk, with known processing times, ti and tk, due-dates, Di and Dk, and loss functions, fi and fk, was considered in the previous chapter. For ti ≤ tk and Di ≤ Dk, it was shown that fi ≥ fk, fi' ≥ fk', and fi' non-decreasing were sufficient conditions for Ji << Jk. When ti ≤ tk and Dk < Di, however, fk ≥ fi was a sufficient condition for Jk <<(t,T) Ji only for limited values of t. Hence, it would appear that a more structured problem might emerge if the assumption were made that the loss functions for the set of jobs to be sequenced are the same.

This chapter, then, considers the general problem posed in the previous chapter in the special case where f1 = f2 = ... = fn = f. The assumptions of that chapter concerning f will be retained and the additional assumption made that f' is non-decreasing. The latter means that at least as much penalty is incurred during the n-th unit of lateness for a job as was incurred during the (n-1)st unit of lateness.

3.2 Two Job Sequencing Rules

Under the assumption that the loss functions are identical and that f' is non-decreasing, the results of the previous chapter concerning the relation between L(i,k,t,T) and L(k,i,t,T) can be refined. There are two general cases to consider.

(1) If ti ≤ tk and Di ≤ Dk, then L(i,k,t,T) ≤ L(k,i,t,T) for all T ≥ 0.

Proof: For fi = fk = f, it follows that fi ≥ fk and fi' ≥ fk'. Hence, the sufficient conditions stated in the previous chapter for Ji << Jk are satisfied.

Consequently, since ti ≤ tk and Di ≤ Dk are sufficient conditions for time independence, ti < tk and Dk < Di or vice-versa are necessary conditions for time dependence.

(2) If ti < tk and Dk < Di, the results of the previous chapter obtained when fk ≥ fi hold, since fk = fi = f. Consequently, L(k,i,t,T) ≤ L(i,k,t,T) for t ≤ Di - tk, T ≥ 0. Unfortunately, there is still no general relation that can be derived for t > Di - tk, T ≥ 0. Further, the situation does not change if one considers the special case where T = 0.

The question is, then, if ti < tk and Dk < Di, what are sufficient conditions for the time dependence or adjacent time dependence of Ji and Jk? To determine such conditions, recall
L(i,k,t,T) = f(Mi) + f(Nk)
L(k,i,t,T) = f(Mk) + f(Ni)
and L(k,i,t,T) ≤ L(i,k,t,T) when t ≤ Di - tk. For Ji and Jk to be time dependent, it is necessary to find t and T ≥ 0 such that L(i,k,t,T) < L(k,i,t,T). Sufficient conditions for the existence of such t, T are expressed in the lemma below.

LEMMA 3.1 Given two jobs, Ji and Jk, where ti < tk, Dk < Di and fi = fk = f. If lim (y → ∞) f'(y)/f'(x+y) = 1 for arbitrary x, 0 < x < ∞, then Ji and Jk are time dependent.

Proof: Assume that L(k,i,t,T) ≤ L(i,k,t,T) for t = Di - ti, and
L(i,k,(Di - ti),T) = f(tk + T + (Di - Dk))
L(k,i,(Di - ti),T) = f(tk - ti + Di - Dk) + f(tk + T)
Now let
x1 = min((tk - ti + Di - Dk), (tk + T))
x2 = max((tk - ti + Di - Dk), (tk + T))
x3 = tk + T + (Di - Dk)
and x1 ≤ x2 < x3, x1 + x2 > x3. Hence,
L(i,k,(Di - ti),T) = f(x3)
L(k,i,(Di - ti),T) = f(x1) + f(x2)
Consider t > Di - ti and let y = t - (Di - ti). Hence,
L(i,k,t,T) = f(x3 + y) + f(y)
L(k,i,t,T) = f(x1 + y) + f(x2 + y)
The question now is under what conditions does there exist y > 0 such that f(x1 + y) + f(x2 + y) > f(x3 + y) + f(y). It is shown in Appendix II that a sufficient condition for this to hold is that lim (y → ∞) f'(y)/f'(x+y) = 1 for 0 < x < ∞.

It should be pointed out that the above proof does not depend on the value of T. Hence, the sufficient conditions stated for time dependence are also sufficient for adjacent time dependence.

Hence, for loss functions having the property that lim (y → ∞) f'(y)/f'(x+y) = 1, a necessary and sufficient condition for the time dependence (and adjacent time dependence) of two jobs is ti < tk and Dk < Di, or vice-versa. The class of functions having the property is quite broad. For example, all polynomials with a finite number of terms belong to this class. Functions such as an exponential function, however, do not.

LEMMA 3.2 The reduction theorem can be applied to sequencing problems in which all the loss functions are the same function, f, and f' is non-decreasing with lim (y → ∞) f'(y)/f'(x+y) = 1 for 0 < x < ∞.

Proof: It is sufficient to show that the relation defined by adjacent time independence is transitive. Consider Ji <<(a) Jk and Jk <<(a) Jp. Since ti ≤ tk, Di ≤ Dk and tk ≤ tp, Dk ≤ Dp are necessary and sufficient conditions for these adjacent time independence relations to hold, it follows that ti ≤ tp, Di ≤ Dp and thus Ji <<(a) Jp.

This, then, defines one rather broad class of sequencing problems for which the reduction theorem holds.

3.3 Single-Time Dependent Jobs

Definition: Two jobs, Ji and Jk, are said to be single-time dependent if there exists t* such that for arbitrary T ≥ 0, L(i,k,t,T) ≤ L(k,i,t,T) for t < t* and L(k,i,t,T) ≤ L(i,k,t,T) for t > t*, or vice-versa.

In order to investigate the question of sufficient conditions for two jobs to be single-time dependent, consider Ji and Jk where ti < tk and Dk < Di.

If there exists t' such that L(i,k,t',T) < L(k,i,t',T), then t' > Di - tk, since L(k,i,t,T) ≤ L(i,k,t,T) for t ≤ Di - tk. The question, then, is what are sufficient conditions for L(i,k,t,T) ≤ L(k,i,t,T) for all t ≥ t'. There are two general cases to consider.

(1) Consider Di - tk < t' ≤ Di - ti.

a. For t' ≤ t ≤ Di - ti,
L(i,k,t,T) = f(ti + tk + t + T - Dk)
L(k,i,t,T) = f(tk + t - Dk) + f(ti + tk + t + T - Di)
Let
x1 = min((ti + tk + t' + T - Di), (tk + t' - Dk))
x2 = max((ti + tk + t' + T - Di), (tk + t' - Dk))
x3 = ti + tk + t' + T - Dk
y = t - t'
and 0 < x1 ≤ x2 < x3. Now
L(i,k,t,T) = f(x3 + y)
L(k,i,t,T) = f(x1 + y) + f(x2 + y)
The question is what are sufficient conditions for f(x1 + y) + f(x2 + y) ≥ f(x3 + y) for 0 < y < ∞. Appendix III shows that f'/f non-increasing is such a condition.

b. Now consider t > Di - ti. For f'/f non-increasing, L(k,i,t,T) ≥ L(i,k,t,T) for t' ≤ t ≤ Di - ti. In particular, L(k,i,Di - ti,T) ≥ L(i,k,Di - ti,T), or L(k,i,Di - ti,T) = L(i,k,Di - ti,T) + λ where λ ≥ 0. By assuming that λ = 0 (i.e. t' = Di - ti), this becomes a special case of the second general case and is treated below.

(2) Now consider t' ≥ Di - ti. For t ≥ t',
L(i,k,t,T) = f(ti + t - Di) + f(ti + tk + t + T - Dk)
L(k,i,t,T) = f(tk + t - Dk) + f(ti + tk + t + T - Di)
Let
x1 = min((ti + tk + t' + T - Di), (tk + t' - Dk))
x2 = max((ti + tk + t' + T - Di), (tk + t' - Dk))
x3 = ti + tk + t' + T - Dk
x4 = ti + t' - Di
y = t - t'
Hence 0 ≤ x4 < x1 ≤ x2 < x3 and x1 + x2 > x3 + x4, with
L(i,k,t,T) = f(x4 + y) + f(x3 + y)
L(k,i,t,T) = f(x1 + y) + f(x2 + y)
and the question is under what conditions is f(x1 + y) + f(x2 + y) ≥ f(x4 + y) + f(x3 + y) for all y ≥ 0. It should be obvious now that (1)b is a special case of this situation, obtained by simply considering t' = Di - ti (i.e. x4 = 0).

The author has been unable to determine a "nice" set of sufficient conditions for which f(x1 + y) + f(x2 + y) ≥ f(x4 + y) + f(x3 + y) for all y ≥ 0. The class of functions for which this does hold is likely to be rather limited in that even most polynomials do not appear to belong to this class.

Appendix IV contains a proof that general quadratic functions with positive coefficients have this property, but the following counter-example illustrates that general cubic equations do not. It should be noted that, since the results obtained above do not depend on the actual value that T assumes, no simplification arises if T = 0.

Counter-example for cubic equations

Let f(x) = 10x + x³ and assume x1 = x2 = 1 and x4 = 0. (The example will illustrate that x1 and x2 need not be equal and that x4 need not be zero. These values were selected to simplify the computation.)
f(x1) + f(x2) = 22 and f(x3) + f(x4) = 10 x3 + x3³
Solving the equation 10 x3 + x3³ = 22 by radicals,
x3 = ∛(11 + √(121 + (10/3)³)) + ∛(11 - √(121 + (10/3)³))
and x3 ≈ 1.703. Hence, let x3 = 1.7 and clearly 0 ≤ x4 < x1 ≤ x2 < x3, where x1 + x2 > x3 + x4 with f(x1) + f(x2) > f(x3) + f(x4). Now consider y = 1, and
f(x1 + y) + f(x2 + y) = 56 < f(x3 + y) + f(x4 + y) = 57.68
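A short numerical check (not in the dissertation) of the cubic counter-example, with f(x) = 10x + x³, x1 = x2 = 1, x4 = 0, and x3 = 1.7 as chosen above:

```python
# Verifying that the inequality holding at y = 0 is reversed at y = 1.

def f(x):
    return 10 * x + x ** 3

x1, x2, x3, x4 = 1.0, 1.0, 1.7, 0.0
print(f(x1) + f(x2), f(x3) + f(x4))                   # 22.0 vs 21.913
y = 1.0
print(f(x1 + y) + f(x2 + y), f(x3 + y) + f(x4 + y))   # 56.0 vs 57.683
```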

The following lemma is based on these results.

LEMMA 3.3 If two jobs, Ji and Jk, have processing times ti < tk, due-dates Dk < Di, and loss functions fi = fk = f of the form ax + bx² (a, b > 0), then Ji and Jk are single-time dependent.

Proof: Consider f' = a + 2bx, and for 0 < x' < ∞,
lim (y → ∞) f'(x'+y)/f'(y) = lim (y → ∞) (a + 2b(x'+y))/(a + 2by) = lim (y → ∞) (a/y + 2bx'/y + 2b)/(a/y + 2b) = (0 + 0 + 2b)/(0 + 2b) = 1
Hence, Ji and Jk are time dependent by Lemma 3.1. Further,
f'/f = (a + 2bx)/(ax + bx²)
is clearly non-increasing, and f is of the form ax + bx². Therefore, the conditions of both cases 1 and 2 are satisfied and Ji and Jk are single-time dependent.

3.4 Further Characterization Results

Definition: Let Ji and Jk be single-time dependent jobs where ti < tk and Dk < Di. For fixed T ≥ 0, let τk,i(T) denote the value of t for which L(k,i,t,T) ≤ L(i,k,t,T) for t < τk,i(T) and L(i,k,t,T) ≤ L(k,i,t,T) for t > τk,i(T).

LEMMA 3.4 τk,i(T) is a non-decreasing function of T.

Proof: Consider T1 < T2 and let y = T2 - T1. From the definition of τk,i(T), it is sufficient to show that if L(k,i,t,T1) ≤ L(i,k,t,T1), then L(k,i,t,T2) ≤ L(i,k,t,T2), and hence τk,i(T1) ≤ τk,i(T2).

L(k,i,t,T2) = f(tk + t - Dk) + f(ti + tk + t + T2 - Di)
            = L(k,i,t,T1) + f(ti + tk + t + T2 - Di) - f(ti + tk + t + T1 - Di)
and
L(i,k,t,T2) = L(i,k,t,T1) + f(ti + tk + t + T2 - Dk) - f(ti + tk + t + T1 - Dk)
Let
xi = ti + tk + t + T1 - Di
xk = ti + tk + t + T1 - Dk
y = T2 - T1
and
L(k,i,t,T2) = L(k,i,t,T1) + ∫[xi, xi+y] f'(x)dx
L(i,k,t,T2) = L(i,k,t,T1) + ∫[xk, xk+y] f'(x)dx
But xi < xk since Dk < Di, and f' is non-decreasing; hence
∫[xi, xi+y] f'(x)dx ≤ ∫[xk, xk+y] f'(x)dx
Consequently, for L(k,i,t,T1) ≤ L(i,k,t,T1), L(k,i,t,T2) ≤ L(i,k,t,T2).

Corollary: If f' = c (a constant), then τk,i(T) = τk,i (a constant) for all T ≥ 0.

Proof: Let τk,i = τk,i(0) and consider the proof of the above lemma. If f' = c, then
∫[xi, xi+y] f' = ∫[xk, xk+y] f'

But L(k,i, τk,i, 0) = L(i,k, τk,i, 0). Hence

L(k,i, τk,i, T) = L(i,k, τk,i, T) for all T ≥ 0

by the proof of the above lemma. Since f is continuous,

L(i,k, τk,i + ε, 0) ≤ L(k,i, τk,i + ε, 0) for all ε > 0

by definition of τk,i. Consequently,

L(i,k, τk,i + ε, T) ≤ L(k,i, τk,i + ε, T) for all T ≥ 0

by the proof of Lemma 3.4. Hence, by definition, τk,i(T) = τk,i for all T ≥ 0.

The results obtained in this chapter can now be used to determine an additional characteristic that some optimal schedules will have.

LEMMA 3.5 Given a schedule, S = (J1 ... Jn), where J1 is scheduled at time t0, having jobs Ji and Jk, i < k, which are single-time dependent with τi,k(T). If

τi,k( Σ_{q=i+1}^{k-1} tq ) < t0 + Σ_{j=1}^{i-1} tj,

then the schedule, S', defined by interchanging Ji and Jk has P(S',t0) < P(S,t0).

Proof: For τi,k(T) to exist, it is necessary that tk < ti and Di < Dk. Let

x1 = Σ_{q=1}^{i-1} tq,   x2 = Σ_{q=i+1}^{k-1} tq.

P(S,t0) = P((J1 ... Ji-1), t0) + P((Ji+1 ... Jk-1), t0 + x1 + ti) + P((Jk+1 ... Jn), t0 + x1 + x2 + ti + tk)

+ L(i,k, t0 + x1, x2)
 = P1 + P2 + P3 + Li,k

P(S',t0) = P1 + P((Ji+1 ... Jk-1), t0 + x1 + tk) + P3 + L(k,i, t0 + x1, x2)
 = P1 + P2' + P3 + Lk,i.

But P2' ≤ P2 since tk < ti, and Lk,i < Li,k since τi,k(x2) < t0 + x1. Hence, P(S',t0) < P(S,t0).

Corollary: Given a set of jobs, J = {J1 ... Jn}, with adjacent time dependent equivalence classes E1 << E2 << ... << Ep. If Ji, Jk ∈ Em are single-time dependent with τi,k(T) and

τi,k( Σ_{q: Jq ∈ Em, q ≠ i,k} tq ) < Σ_{r=1}^{m-1} Σ_{p: Jp ∈ Er} tp,

then there is an optimal schedule in which Jk precedes Ji.

Proof: Consider any schedule S in which Ji precedes Jk. Since τi,k(T) is non-decreasing, the above lemma can be applied to obtain S' with P(S',0) ≤ P(S,0).

It is also reasonable to consider the consequences of

τi,k(0) > Σ_{q: Jq ∈ Em, q ≠ i,k} tq + Σ_{r=1}^{m-1} Σ_{p: Jp ∈ Er} tp

in the scheduling problem defined in the above corollary. Intuitively, one would expect a "dual" of the above corollary to hold, namely for Ji to precede Jk in some optimal schedule. However, there is no such general result. While counter examples are difficult to construct, one is given below for quadratic loss functions.

Counter example for quadratic loss functions

Let f(x) = x² and consider the problem of sequencing three jobs, Ji, Ji+1, Jk, starting at t = 0, with

ti = 3, ti+1 = 1/2, tk = 2  and  Di = 3/2, Di+1 = 2.2, Dk = 2.

τi,k(0) can be found as follows:

L(i,k,t,0) = (t + 3/2)² + (t + 3)²
L(k,i,t,0) = t² + (t + 7/2)²

Equating and solving for t, τi,k(0) = 1/2. Hence, τi,k(0) ≥ ti+1. (Counter examples in which the strict inequality holds can also be constructed.)

L(i,k,0,0) = 9/4 + 9 = 45/4 < L(k,i,0,0) = 49/4

Hence, P((Ji, Jk, Ji+1), 0) < P((Jk, Ji, Ji+1), 0). In like manner,

L(k,i+1,0,0) = 0 + .09 < L(i+1,k,0,0) = 0 + .25
 P((Jk, Ji+1, Ji), 0) < P((Ji+1, Jk, Ji), 0)

L(i+1,i,2,0) = .09 + 16 < L(i,i+1,2,0) = 49/4 + (3.3)²
 P((Jk, Ji+1, Ji), 0) < P((Jk, Ji, Ji+1), 0)

L(i,i+1,0,0) = 9/4 + 1.69 = 3.94 < L(i+1,i,0,0) = 4
 P((Ji, Ji+1, Jk), 0) < P((Ji+1, Ji, Jk), 0)

Hence there are only three possible optimal schedules:

P((Jk, Ji+1, Ji), 0) = 0 + .09 + 16 = 16.09
P((Ji, Ji+1, Jk), 0) = 9/4 + 1.69 + 49/4 = 3.94 + 12.25 = 16.19

P((Ji, Jk, Ji+1), 0) = 9/4 + 9 + (3.3)² = 22.14

and the schedule (Jk, Ji+1, Ji) is optimal. The general result does hold, however, for linear loss functions, as will be demonstrated in the next chapter. A brute-force check of the counter example above is sketched below.
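Since only three jobs are involved, the counter example can also be verified by enumerating all six schedules. The sketch below (plain Python) uses the data given above, with the labels i, i+1, k written as 'i', 'i1', 'k'.

from itertools import permutations

# Data of the counter example: quadratic loss f(x) = x**2 on positive lateness.
t = {'i': 3.0, 'i1': 0.5, 'k': 2.0}   # processing times
D = {'i': 1.5, 'i1': 2.2, 'k': 2.0}   # due-dates

def total_loss(schedule, start=0.0):
    """Sum of squared lateness over the schedule, started at time `start`."""
    loss, clock = 0.0, start
    for job in schedule:
        clock += t[job]
        loss += max(0.0, clock - D[job]) ** 2
    return loss

for s in permutations(t):
    print(s, round(total_loss(s), 2))
# The minimum, 16.09, is attained by ('k', 'i1', 'i'), i.e. (Jk, Ji+1, Ji),
# in agreement with the computation above.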

CHAPTER 4

SEQUENCING PROBLEMS WITH LINEAR LOSS FUNCTIONS

4.1 Introduction

The previous chapter dealt with problems having identical loss functions. For these problems an additional characteristic of an optimal schedule was obtained by deriving a restriction on the ordering of single-time dependent jobs. For problems where the loss function is of the form f(x) = ax + bx², x ≥ 0, all time dependent jobs are single-time dependent. It was shown, however, that the "dual" of the derived restriction did not hold for all problems in this class. In particular, a counter example was given for f(x) = x².

This chapter considers problems having loss functions of the form f(x) = ax, x ≥ 0, and shows that the "dual" restriction holds for problems of this type. While these problems are the most structured of those considered so far, it will be shown that, even in this case, no easy solution technique is likely to arise.

4.2 Two Job Sequencing Rules

Without loss of generality, it will be assumed that f(x) = x, x ≥ 0 (i.e., a = 1). Based on the results of the previous chapter, the following observations can be made.

(1) For two jobs, Ji and Jk, the conditions ti < tk and Di < Dk are necessary and sufficient for Ji << Jk or Ji <<(a) Jk. This follows from the fact that f'(x) = 1 and consequently f'(x)/f'(x+y) = 1.

(2) As a consequence of the above, the reduction theorem holds for problems of this type.

(3) All time dependent jobs are single-time dependent, since f is of the form ax + bx², x ≥ 0.

(4) For any pair of time dependent jobs, τi,k(T) = τi,k(0) for all T ≥ 0, since f'(x) = 1.

Remark: Since the necessary and sufficient conditions for time independence and adjacent time independence are the same, and since τi,k(T) = τi,k(0) for all T ≥ 0, it follows that these concepts are equivalent. The terms time independence and dependence actually used in this chapter are completely interchangeable with those of adjacent time independence and dependence.

In addition to (4), an even stronger statement can be made concerning τi,k(0).

Notation: Let τi,k denote τi,k(0).

LEMMA 4.1 Consider two time dependent jobs, Ji and Jk, where τk,i exists. Then τk,i = Di - tk, for Di defined relative to t = 0.

Proof: Since τk,i exists, it is necessary that ti < tk and Dk < Di. Now consider t = Di - tk, T = 0, and

L(k,i,Di-tk,0) = f(tk+t-Dk) + f(ti+tk+t-Di) = Di - Dk + ti
L(i,k,Di-tk,0) = f(ti+t-Di) + f(ti+tk+t-Dk) = f(ti-tk) + Di - Dk + ti = 0 + Di - Dk + ti.

Hence, L(k,i,Di-tk,0) = L(i,k,Di-tk,0).

Consider ε > 0 such that Di - tk + ε < Di - ti. Then

L(k,i,Di-tk+ε,0) = Di - Dk + ε + ti + ε = L(k,i,Di-tk,0) + 2ε
L(i,k,Di-tk+ε,0) = L(i,k,Di-tk,0) + ε.

Hence, L(i,k,Di-tk+ε,0) < L(k,i,Di-tk+ε,0) for every such ε, and consequently τk,i = Di - tk by definition.

Remark: Since τk,i(T) is a constant, it follows that the relation R(t,T) is independent of the value of T (i.e., R(t,0) = R(t,T) for all T ≥ 0). For the remainder of this chapter, let R(t) denote R(t,0) and Ji << (t) Jk denote Ji << (t,0) Jk.

LEMMA 4.2 The relation R(t) is transitive.

Proof: It is necessary to show that if Ji << (t) Jk and Jk << (t) Jq, then Ji << (t) Jq. The proof involves considering all possible combinations of reasons for Ji << (t) Jk and Jk << (t) Jq. It is presented in detail in Appendix V.

Having these results, a final characteristic of an optimal schedule can be obtained.

4.3 Further Characterization Results

LEMMA 4.3 Given a schedule, S = (J1 ... Jn), where J1 is scheduled at time t0, having the following properties:

(1) J1 << (t0) J2 << (t0+t1) J3 << ... << Jn-1 << (t0 + Σ_{i=1}^{n-2} ti) Jn

-51(2) JL and Jn are time dependent where Tn,1 > t2 + t3 + *oo + tn-1 + to A new schedule, S', can then be defined in which Jn precedes J1 and P(ST,to) < P(S,to) Proof: By induction on the number of jobs separating J1 and Jn Initial: Consider the schedule S = (J1,J2,J3) where J1 << (to) J2 << (to+tl) J3 and T3,1 > t2 + to Since T3,1 > t2 + to, it follows that: (1) J3 << (to) J1 (2) t1 < t3, D3 < D1 K and 73,1 = Dl-t3 > t2+to or DL > t2 + t3 + to For J << (to) J1 and J1 << (to) J2, it follows that J3 << (to) J2 by the transitivity of R(t) But J2 << ( totl) J3 and hence J2 and J- are time dependent with to < T 3,2 < to + tl Hence (3) t2 < t3 3 D3 < D2, and t0 < 73,2 < to + tl or t + t3 < D < to + tl + t3 In this case, it will be shown that P( (J3,J2`Jl)'to) < P( (JV 2' 3 )t to or S' = (J3, J21Jl) P( (J1,J2,J3),to) = f(tl+to-Dl) + f(tl+t2+to-D2) + f(tl+t 3+t+t -D3 ) 12~~o3 3

-52P( (J3,J2,JL) to) = f(t3+t -D3) + f(t 3+t2+t -D ) + f(tl+t2t3+t o-D1) Since D1 > t2 + t3 + to > t2 + tl + to P( (Jl,2,J3),to) = f(tl+t2+to-D2) + f(tl+t2+t3+to -D3) Since D2 < to + tL + t3 and D1 > to + t2 + t3, it follows that if t1 < t2 then D2 < to + t1 + t3 < to + t + t3 to + t2 D1 or D2 < D1. Further, if t2 < t1, then D1 < D2 since J1 << (to) J2 (i.e. t2 < tl, and D2 < D1 implies J2 << J1) These two cases are considered separately. (a) For t2 < t1 and D1 < D2 to + tl + t3 > D2 > D1 > t2 + t3 + to and since t1 < t3 f(t +t+to -D2) = f(t2+t3+to -D2) = o Hence, P( (J1,J2,J3), t) = f(t1+t2+t3+tO-D3) 11)t -D2+t 3 t + t2 + t3 + to - D3 since D3 < D2 < to + t1 + t3 and P( (J3,J2J1),to) = f(t3+to-D3) + f(ti+t2+t3+to-D1) = f(t3+to-D3) + t1 + t2 + t3 + to - D1 Clearly P( (J3,J2J1),to) < P( (JlJ2,J3),to) if f(t3 + to-D3) = 0 since D3 < D1. Therefore, assume f(t3 + to -D3) = t3 + to - D3 > 0O P( (J3,J2, )to) = t3+t -D3+t +t2+t +t -D1 or = t3+to-Dl1 P( (J1,J2,J3),to)

-53But DL > t3 + to + t2 and hence P( (J3,J2,Jl)to) P( (J1 J2,J3) t o) (b) For tl < t2 and D2 < D1, D3 < D2 < toutl+t3 < to+t2+t3o D1 P( (J1,J2,J3),to) = f(tl+t2+to-D2) + tl+t2+t3+t-D3 P( (J3,J2,J1),to ) = f(t3+to-D3) + to+t2+t 3-D + f(tl+t2+t3+to-D1) P( (J1,J2,J3),to) - P( (J3,J2,J1)'to) = [f(t +t2+to-D2) + t1 - D3] - [f(t3+to-D3) - D2 + f(tl+t2+t3+t -D1) ] or = t1 + (D2-D3) + f(tl+t2+to-D2) - f(t3+t -D3) - f(tl+t2+t3+to-D1) In the worst case, assume f(tl+t2+to-D ) = 0 and that the terms preceded by negative signs are positive. Then P( (Jl J2 J3) t o) - P( (J3,J2Jl1),to) > t1 + (D2-D3) + D3-t3-t+D-t-t2 -t3 -t - (Dl-t2-t3-to) + (D2-t3-to) > 0 since D1 > to+t2+t3 and D2 > to+t3 Hence, P( (J3,J2,J1),to0) < P( (1,J2,J3),to)

-54Induction: Consider the schedule, S = (J1 o ~Jn) where n-2 J1 << (t (t ) J2 << (t+ t) Jn i=l and Tn 1 > to + t2 + ~~ + tn-1_ Following the reasoning used in the proof for the initial case, tl < tn, D D< D1, and D1 - tn > to + t2 +... + tnl1 or D1 > to + t2 +.. + tn_l + tn and hence, D1 > to + t +.oo + tnl. It will now be shown that for any job Ji c {J2. eJn_1} either J1 or Jn may be interchanged with Ji resulting in a schedule S1 where P(Sl,to) < P(Sto) or Di > D1. In the former case, J1 and Jn can be interchanged to obtain the schedule S' where P(S',to) < P(S1,to) by the induction hypothesis and hence P(S',to) < P(S,to). If the latter case holds for all Ji c {J2.. Jn-4, then each of these jobs is early in the schedules, S = (J1.~.Jn) and S' = (JnJ2...Jn-lJl) and n-l P( (Jloo.Jn)to) = L(l,n,to, E ti) i=2 P( (J J2 o Jn-lJl),to) = L(n,l,t1, z ti) i=2 and P(S',to) < P(S,to) since Tn 1 > to Now for Ji E {J2. ooJnl}, consider the following possibilities: (1) If Ji << J1, then J1 and Ji can be interchanged to form S1 and the proof is complete. (2) If T 1li exists and T 1,i < to, then J1 and Ji can be interchanged to form S1. (3) If Til exists, then Ti,1 = D1 - ti

But D1 - ti > t0 + Σ_{k=2, k≠i}^{n} tk ≥ t0 + Σ_{k=2}^{i-1} tk,

and the induction hypothesis can be employed to interchange J1 and Ji, resulting in the schedule S1.

The only remaining possibilities are:

(4) If J1 << Ji, then D1 < Di.

(5) If τ1,i exists, then D1 < Di and ti < t1.

Hence, either J1 and Ji are interchanged, or Di > D1.

Corollary: Given a set of jobs, J = {J1 ... Jn}, with time dependent equivalence classes E1 << E2 << ... << Ep. If Ji, Jk ∈ Em are time dependent with τk,i and

τk,i > Σ_{r=1}^{m-1} Σ_{p: Jp ∈ Er} tp + Σ_{q: Jq ∈ Em, q ≠ i,k} tq,

then there is an optimal schedule in which Jk precedes Ji.

Proof: Consider any schedule, S, in which Ji precedes Jk. By the above lemma a schedule, S', can be defined in which Jk precedes Ji and P(S') ≤ P(S).

In summary, the following statements can now be made concerning an optimal schedule for a set of jobs, J = {J1 ... Jn}, where fi = fk = f is of the form f(x) = ax, x ≥ 0.

(1) If J is decomposed into time dependent equivalence classes E1 ... Em, where E1 << E2 << ... << Em, there is an optimal schedule of the form (E1 ... Em).

-56(2) Consider an arbitrary time dependent equivalence class Ei o If, in a schedule of the form given above, the jobs in the set Ei are to be scheduled in the time interval. (to, to + Z tq ) q3JqcEi then there is an optimal schedule) (Ji ~ 1 i ) for these jobs having the following propertieso k-l1 (a) Jik<< (t0 + z t ) Ji rk r kl=l (b) If Jik Jq then Jik precedes Ji ~ (c) If Jik and Jiq are time dependent with T iq, k < to then Jik precedes Ji ~ Further, Jik precedes Ji if min(q,k)-l T ii <to + ~ ti~ qlik r=l r qk r=l q rqk,q then Jiq precedes Jik o Further, Jiq precedes Jik max(k, q) if q,k t + z ti r=k r~k, q

Unfortunately, as is shown in the next section, the conditions specified above do not always determine an optimal schedule. An optimal schedule can always be found by brute force (i.e., generate and evaluate all schedules satisfying the conditions specified), but this approach is not recommended. The best techniques should arise from the use of these restrictions in conjunction with dynamic programming methods such as those given by Held and Karp.(5)

4.4 Schedules for Three Jobs

In order to demonstrate that there are non-optimal schedules satisfying the conditions of the previous section, consider the problem of sequencing three jobs, J1, J2, and J3, starting at t = 0. Without loss of generality, assume that t1 ≤ t2 ≤ t3; there are then six general cases to consider, which depend on the relative values of the due-dates.

(1) D1 < D2 < D3

The schedule (J1,J2,J3), which satisfies the conditions of the previous section, is always optimal. All other schedules violate these conditions.

(2) D1 < D3 < D2

J1 << J2 and J1 << J3, but J2 and J3 are time dependent. Hence, either (J1,J2,J3) or (J1,J3,J2) is optimal, depending on the relative values of L(2,3,t1,0) and L(3,2,t1,0). The optimal schedule so selected satisfies the conditions specified, while all other schedules will violate at least one of them.

-58(3) D2 < D1< D3 J2 << J3 and J1 << J3, but J2 and J1 are time dependent. This case is similar to (2) and a unique optimal schedule, consistent with the stated conditions, is determined by the ordering of J1 and J2 at t O= 0 (4) D2 < D3 < D1 J2 << J3, but J1,J2 and Jl, J3 are time dependent pairs with 72,1 = D1- t2 D1 - t3 = 73,1 There are three schedules consistent with J2 << J3 S1 = (J1,J2J3) S2 = (J2,J3,J1), and S3 = (J2,Jl'J3) (a) If T3 1 < 0, then J1 precedes J3 and either S1 or S3 is optimal. This is determined by the ordering of J1 and J2 at t = 0 which produces the only schedule consistent with the conditions of the previous section. (b) If T3,1 > 0, then 7T21 > 0 and J2 << (0) J1 This is similar to the above case except that S2 or S3 is selected as the optimal schedule on the basis of the ordering of J1 and J3 at t = t2 o (5) D3 < D1 < D2 J1 << J2, but J1,J3 and J3,J2 are time dependent pairs with 3,1 = D1 - t3 < T3,2 = D2 - t3 There are three schedules consistent with J1 << J2 ~ S1 = (J1,J2J3), S2 = (Jlj3,J2), and S3 = (J3,1,J2)

-59(a) If 7T31 -< 0, then J1 precedes J3 and either S1 or S2 is optimal depending on the ordering of J2 and J3 at t = tl (b) If T 3,2 > tl, then J3 precedes J2 and either S2 or S3 is optimal depending on the ordering of J1 and J3 at t =0 (c) If T 3,1> 0 and T3,2 < tl, then J3 << (O)J1 and J2 << (tl)J3 which means that only S1 and S3 need be considered. Further, S1 and S3 both satisfy all the conditions specified in the previous section. P(Sl,O) = f(tl-Dl) + f(tl+t2-D2) + f(tl+t2+t3-D3) = 0 + f(tl+t2-D2) +tl+t2+t3-D3 and P(S3,0) = f(t3-D3) + f(tl+t3-D1) + f(tl+t2+t3-D2) = f(t3-D3) + tl+t3-Dl + t1+t2+t3-D2 since 0 < D1 - t3 < D2 - t3 t1 or t1 < t2 t3 < D1 < D2 < t t1 + t3 and D3 < D1. P(Sl,O) - P(S3,0) = f(tl+t2-D2) - f(t3-D3) + DL - D3 + D2- t - t3 and there are four possible sets of values for this difference: (1) (D2-tl-t3) < 0 + (D1-D3) > 0 if f(tl+t2-D2) = f(t3-D3) = 0 (2) (D2-tl-t3) < 0 + (D1-t3) > 0 if f(tl+t2-D2) = 0, f(t3-D3) $ 0

-60(3) (D1-D3) > 0 + (t2-t3) < 0 if f(tl+t2-D2) f 0, f(t3-D3) = 0 (4) (Dl-t3) > 0 + (t2-t3) < 0 if f(tl+t2-D2) $ 0, f(t3-D3) 0 o In each case, the value of the difference can be positive or negative depending on the relative values of the variables involved. Computationally, then, there is no simple rule for selecting S1 or S3 as the optimal schedule. A second example of this type of situation is shown in the last case. (6) D3 < D2 < D1 All pairs of jobs are time dependent pairs with T3,2 = D2 - t3 < T3,1 = D1 - t3 < T2,1 = D1 - t2 Consider the schedule, (J1,J3,J2) If J1 << (o)J3, then T3,2 < T3,1 K 0 and J2 << (tl)J3. Hence, (J1,J3,J2) cannot be optimal. This leaves five possible optimal schedules. S1 = (J1,J2,J3); S2 = (J2,J1,J3); S3 (J2,J3,J1) S4 = (J3,JlJ2) and S5 = (J3 J2,J1)(a) If T3,1 < 0, then T 32 < 0 and J1 and J2 precede J3 X Hence, either S1 or S2 is optimal depending on the ordering of J1 and J2 at t = 0 (b) If T 31 = D1 - t3 > t2, then2,1 = D1 - t2 > t3 and J2 and J3 precede J1. Hence, either S3 or S5 is optimal depending on the ordering of J2 and J3 at t = 0 (c) Thus, assume 0 < T3,l = DL-t3 < Dl-t2 = 2,1 and 3,2 = D2t3 < 3,1 = Dl-t3 < t2

-611. If T3,2 < 0, then J2 precedes J3 and it is only necessary to consider S1, S2, and S3. But for T 2,1 > 0 J2 << (O)J1 and for 3,l < t2 J1 << (t2)3 Hence, S2 is optimal. 2. If 3 2 > tl, then J3 precedes J2 and it is only necessary to consider S and S5. But for T21 < t3 J1 << (t3)J2 and S4 is optimal. 3. Hence, assume 73,2 = D2-t3 > 0 and 73,2 < tl Since J3 << (O)J2, J2 << (O)J1, and J1 << (t3)J2, the only schedules that need be considered are S2 = (J2,J1J3) and S4 =(J3,'Jl2) both of which satisfy the conditions of the previous section. P(S 2O) = f(t2-D2) + f(t2+tl-D1) + f(tl+t2+t3-D3) = 0 + f(t2+t1-D1) + tl+t2+t3-D3 P(S4,O) = f(t3-D3) + f(t3+tl-Dl) + f(tl+t2+t3-D2) = f(t3-D3) + f(t3+tl-Dl) + tl+t2+t3-D2 P(S4,O) - P(S2,O) = f(t3-D3) + f(t3+tl-D1) - f(t2+tl-Dl) + (D3-D2)

and there are six possible sets of values for the difference: (1) (D3-D2) < 0 if f(t3-D3) = f(t3+tl-D1) = f(t2+tl-D1) = 0 and P(S4,0) is optimal. (2) (t3-D3) + (D3-D2) = t3-D2 > 0 if f(t3-D3) O0 f(t3+tl-Dl) f(t2+tl-D1) = 0 and P(S2,0) is optimal. (3) (t1+t2Dl1) > o + (D3-D2) < 0 if f(t2+tl-Dl) I 0 f(t3-D3) = f(t34t1-D1) = 0 (4) (tl+t2-D1) > 0 + (t3-D2) < 0 if f(t3+tl-D1) 0 f(t3-D3), f(t2+t1-Dl) / 0 (5) (t3-t2) > 0 + (D3-D2) < 0 if f(t3-D3) = 0 f(t3+tl-D1)J f(t2+tl-D1) 0 (6) (t3-t2) > 0 + (t3-D2) < 0 if f(t3-D3) f(t3+tl-D1), f(t2+tl-D1) / 0 In the last four cases, the value of the difference can be either positive or negative depending on the relative values of the variables involved. The problem of sequencing just three jobs, then, provides several examples where a non-optimal schedule can satisfy the conditions of the previous section. Further, there appears to be no straight-forward way of selecting the optimal schedule other than by evaluating each schedule satisfying those conditions.

CHAPTER 5

AN n JOB, ONE MACHINE SEQUENCING ALGORITHM FOR MINIMIZING THE NUMBER OF LATE JOBS

5.1 Introduction

The problems considered in this paper belong to a general class of problems in which a finite set of jobs must be sequenced through a single facility, minimizing some function of the lateness penalties incurred. Reference 5 contains one algorithm for solving these problems, a dynamic programming technique formulated by Held and Karp. While this method is applicable to the problem defined here, it is generally computationally infeasible for problems of 20 jobs or more. The algorithm developed here, however, consists of only two sorting operations performed on the total set of jobs and a maximum of n(n+1)/2 additions and comparisons. Consequently, this method will be computationally feasible for very large problems and can be performed manually on many smaller problems.

The problem is to sequence n jobs, J1 ... Jn, with known processing times, t1 ... tn, and due-dates, D1 ... Dn, through a single production facility in such a way as to minimize the number of late jobs. The processing times, which are defined to include set-up and tear-down times, are independent of the sequence. It is assumed that the n jobs are available for production throughout the scheduling period and that the production facility operates continuously until all jobs are completed. No lot-splitting is allowed, so that production on a job continues from start to finish without interruption.

It is assumed that each job can be completed by its due-date, defined relative to t = 0, if the processing of that job were to begin immediately (i.e., ti ≤ Di for all i). If this were not the case, the problem size would be reduced by eliminating those jobs that cannot be completed on time regardless of the sequence.

5.2 The Algorithm

The algorithm consists of a decision rule for dividing the set of jobs into two subsets. This division produces an optimal schedule if the jobs in the first set are sequenced according to their due-dates and are followed by the jobs in the second set in any order.

Step 1: Order the set of jobs according to the shortest processing time rule and call the resulting ordering ((Ji1 ... Jin), where ti1 ≤ ... ≤ tin) the current sequence.

Step 2: Using the current sequence, find the first late job, Jiq, and go to Step 3. If no such job is found, the algorithm terminates with an optimal schedule obtained by ordering the jobs in the current sequence according to their due-dates, followed by the set of jobs that have been rejected, in any order.

Step 3: Reorder the jobs Ji1 ... Jiq according to the due-date rule, producing a sequence of the form

(Jj1 ... Jjq, Jiq+1 ... Jin) where Dj1 ≤ ... ≤ Djq.

There are two cases to consider:

(1) If all the jobs in the subsequence Jj1 ... Jjq are early, define the total sequence as the current sequence and go to Step 2.

(2) Otherwise, reject the job Jiq and remove it from the sequence (Jj1 ... Jjq, Jiq+1 ... Jin). Now go to Step 2 with the resulting sequence as the current sequence.

Example

The process is illustrated in the following numerical example.

Job              J1   J2   J3   J4   J5   J6   J7   J8
Processing Time  10    6    3    1    4    8    7    6
Due-Date         35   20   11    8    6   25   28    9

Step 1: Ordering the jobs according to the shortest processing time rule results in the following current sequence:

Job              J4   J3   J5   J2   J8   J7   J6   J1
Processing Time   1    3    4    6    6    7    8   10
Due-Date          8   11    6   20    9   28   25   35
Completion Time   1    4    8   --   --   --   --   --

Step 2: The result of computing the first three completion times indicates that J5 is the first late job. Therefore, Step 3 is performed.

Step 3: The result of re-ordering the first three jobs according to the due-date rule is shown below.

Job              J5   J4   J3   J2   J8   J7   J6   J1
Processing Time   4    1    3    6    6    7    8   10
Due-Date          6    8   11   20    9   28   25   35
Completion Time   4    5    8   14   20   --   --   --

As indicated above, the first three jobs are now early. Hence, Step 2 is re-performed using the above sequence.

Step 2: The first late job is now J8, as shown in the table above. Consequently, Step 3 is performed.

Step 3: Ordering the first five jobs according to the due-date rule gives the sequence below:

Job              J5   J4   J8   J3   J2   J7   J6   J1
Processing Time   4    1    6    3    6    7    8   10
Due-Date          6    8    9   11   20   28   25   35
Completion Time   4    5   11   14   20   --   --   --

Since J8 and J3 are both late, J8 is rejected and the new current sequence is shown below:

                 Current Sequence                       Rejected Jobs
Job              J5   J4   J3   J2   J7   J6   J1       J8
Processing Time   4    1    3    6    7    8   10        6
Due-Date          6    8   11   20   28   25   35        9
Completion Time   4    5    8   14   21   29   --

Step 2: The first late job is now J6.

Step 3: Re-ordering the first six jobs according to due-dates gives the following sequence:

                 Current Sequence                       Rejected Jobs
Job              J5   J4   J3   J2   J6   J7   J1       J8
Processing Time   4    1    3    6    8    7   10        6
Due-Date          6    8   11   20   25   28   35        9
Completion Time   4    5    8   14   22   29   --

Since J7 is late, J6 is rejected, resulting in the following new current sequence:

                 Current Sequence                       Rejected Jobs
Job              J5   J4   J3   J2   J7   J1            J8   J6
Processing Time   4    1    3    6    7   10             6    8
Due-Date          6    8   11   20   28   35             9   25
Completion Time   4    5    8   14   21   31

Step 2: All jobs in the current sequence are now early, and the algorithm terminates with the above sequence, (J5, J4, J3, J2, J7, J1, J8, J6), as an optimal schedule.

On the final step of the algorithm it is not necessary to re-order the current sequence according to the due-date rule; doing so, however, will still produce an optimal schedule. A compact computational restatement of the procedure is sketched below. The proof of the optimality of this algorithm follows.
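The sketch below (plain Python) is a minimal restatement of the rejection idea in a form convenient for computation: process the jobs in due-date order and, whenever a job finishes late, drop the longest job scheduled so far. Its step-by-step equivalence to the procedure above is assumed rather than shown here, but on the eight-job example it reproduces the schedule just obtained.

def min_late_jobs(jobs):
    """jobs: list of (name, processing_time, due_date).
    Returns (on_time_sequence, rejected_jobs)."""
    accepted = []      # on-time jobs, kept in due-date order
    rejected = []
    clock = 0
    for job in sorted(jobs, key=lambda j: j[2]):   # due-date order
        accepted.append(job)
        clock += job[1]
        if clock > job[2]:
            # a job is late: drop the accepted job with the largest processing time
            longest = max(accepted, key=lambda j: j[1])
            accepted.remove(longest)
            rejected.append(longest)
            clock -= longest[1]
    return [j[0] for j in accepted], [j[0] for j in rejected]

# The eight-job example of this section:
jobs = [('J1', 10, 35), ('J2', 6, 20), ('J3', 3, 11), ('J4', 1, 8),
        ('J5', 4, 6), ('J6', 8, 25), ('J7', 7, 28), ('J8', 6, 9)]
print(min_late_jobs(jobs))
# (['J5', 'J4', 'J3', 'J2', 'J7', 'J1'], ['J8', 'J6'])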

5.3 Theoretical Development

Definition: A schedule, S, is defined as a specific ordering, (Ji1 ... Jin), of the set of jobs to be processed. For each schedule, S, two sets, E and L, are defined as follows:

Ji ∈ E iff Ci ≤ Di
Ji ∈ L iff Ci > Di

where Ci is the completion time of job Ji in the schedule S.

Definition: For a given schedule, S, let A denote the ordered set defined by ordering the elements of the set E according to their order in the schedule S. Similarly, let R denote the ordered set defined by ordering the elements of the set L according to their order in the schedule S. Finally, let π denote an arbitrary permutation of the set R and P denote the resulting ordered set.

LEMMA 5.1 Given an optimal schedule, S, for a set of jobs, J = {J1 ... Jn}, with known processing times, t1 ... tn, and due-dates, D1 ... Dn, the schedule S* = (A,R) is optimal, as is any schedule of the form S** = (A,P).

Proof: Suppose S = (Ji1 ... Jin) is not of the form (A,R). Then there must exist a pair of adjacent jobs, Jik and Jik+1, in S such that

Cik+1 ≤ Dik+1

and Cik > Dik.

Hence, define a new optimal schedule, S1, by interchanging Jik and Jik+1; S1 will have A1 = A and R1 = R. If S1 is of the form (A,R), we are done. Otherwise, operate on S1 to obtain S2, etc. This process terminates in a finite number of steps with an optimal schedule Sq where Aq = A, Rq = R, and Sq = (A,R) = S*.

LEMMA 5.2 (Smith) Given an optimal schedule, S, of the form (A,R), a new optimal schedule, SD = (AD,R), may be defined by re-ordering the set A according to the due-date rule.

Proof: Suppose A = (Ji1 ... Jip) is not ordered according to the due-date rule. Then there must exist a pair of adjacent jobs, Jik and Jik+1, in A such that Dik+1 < Dik. But

Σ_{j=1}^{k+1} tij ≤ Dik+1 < Dik.

Consequently, the jobs Jik and Jik+1 may be interchanged to obtain a new optimal schedule S1 = (A1,R). If A1 = AD, we are done. Otherwise, operate on S1 to obtain S2, etc. This process terminates in a finite number of steps with a schedule Sq where Aq = AD and hence Sq = SD.

Corollary (Smith): Given a set of jobs, J = {J1 ... Jn}, with known processing times, t1 ... tn, and due-dates, D1 ... Dn, there exists a schedule, S, having no late jobs if and only if there are no late jobs in the schedule, Sd, defined by ordering the set of jobs according to their due-dates.

Proof: "If": obvious. "Only if": If there exists a schedule, S, having no late jobs, then L = ∅ (the empty set) and consequently S = A. Now apply Lemma 5.2 to obtain SD = AD = Sd, which is still optimal.

These lemmas show that there is associated with any optimal schedule, S, an optimal schedule SD = (AD,P). Consequently, the original problem can be restated as one of producing a schedule of this form where the cardinality of the associated set, ED, is maximal.

LEMMA 5.3 Suppose there exists an optimal schedule, S, for a set of jobs, J, where L is not empty. Let J* denote any subset of L. Then for any optimal schedule, S' = (A',R'), for the set of jobs J' = J - J*, an optimal schedule, S" = (A",P"), for the complete set of jobs, J, can be defined by letting A" = A' and L" = J* ∪ L'.

Proof: Without loss of generality, assume that S is of the form (A,R) and consider any optimal schedule, S' = (A',R'), for the set of jobs J'. If the cardinality of the set E', denoted /E'/, equals the cardinality of the set E, denoted /E/, we are done. Therefore assume /E'/ ≠ /E/.

-71(a) If /E'/ < /E/, we can define a new optimal schedule, Sl = (A1,P1), for the set of jobs, J', by letting Al = A and L1 = L - J*. But /E1/ = /E/ > /E'/. Hence, S' is not an optimal schedule for the set of jobs J' — a contradiction. Therefore, /E'/ > /E/ (b) If /E'/ > /E/, we can define a new optimal schedule, S2 = (A2,P2), for the set of jobs, J, by letting A2 = A' and L2 = L'U J But /E2/ = /E'/ > /E/ and hence S is not an optimal schedule for the set of jobs, J — a contradiction. Consequently, /E'/ = /E/ The problem is now one of developing an algorithm which will find a job, Ji E J, such that Ji E L for some optimal schedule when not all of the jobs in the original set can be completed on time (i.e., the schedule Sd has at least one late job). An optimal schedule for the original set of jobs can then be obtained by repeatedly applying the algorithm and eliminating the jobs found. The process will terminate at some point where the set of jobs, J* = (Ji1 o'o Jiq) has been elimanated, but the algorithm fails to find a job in the set J - J* An optimal schedule for the original set of jobs, J, is now defined by letting: E = J - J* L = J* and hence S = (AD, P) where AD is the ordered set obtained by ordering the set E according to the due-date rule. One can show that this schedule is optimal by repeatedly applying lemma 5.3. (i.e., Clearly, Sq = AD is an

-72optimal schedule for the set of jobs J - J*. Then, using lemma 5o3, Sq-1 = (AD, Ji ) is an optimal schedule for the set of jobs {J-J*} U {Jiq}, etc)o Selection Algorithm: Given a set of jobs, J = {J1.. Jn}, with known processing times, t1... tn, and due-dates, D1... Dn, the following algorithm will find a job Ji E J and Ji C L for some optimal schedule if such a job exists. Define the initial current sequence as (J... Jn) assuming that tl < t2 <_... < tn ~ Start: Find the first late job, Jq, in the current sequence where q > 1. (If no such job exists, the current sequence has no late jobs and the algorithm terminates.). Re-order the previous q-l jobs according to the due-date rule to obtain a sequence of the form: (Jil ~.. Jiq-' Jq... Jn) where tq <... <tn Dil <' < Diql and tq > tij for j = 1 o.. q-l Insert Jq into the sequence (Jil... Jiq-1) according to the duedate rule, obtaining the sequence: (Ji... Jik w Jq Jikl... Jiq1 Jq+l Jnj where

-73There are three cases to consider: (1) If all jobs in the sub-sequence (Jil... Jik, Jq Jik+l... Jiq 1) are early, resume the analysis at start using the sequence just defined as the new current sequence. (2) If Jq is late in the above sequence, then Jq C L for some optimal schedule for the jobs in the current sequence. Proof: Consider an optimal schedule of the form, SD = (AD,P) for the set of jobs in the current sequence. If Jq E P, we are done. Hence, assume Jq C AD and that SD is of the form: SD = (Jil... Jr.... Jt' P) and Di1 < D_ < D < Djrtl < * Dit where Jr = Jq In the current sequence tq <... < tn and tij < tq for j = 1... q-1. Thus, the summation of the processing times for the jobs in the set {Jil... Jik} is minimal over the class of all subsets of k jobs from the current sequence having due-dates less than or equal to Dq o Hence, since Jq is late in the sequence (Jil.. Jik Jq), it will be late in all due-date ordered sequences in which it is the k + 1st job. But J1r = Jq is early in the schedule SD and therefore r must be strictly less than k + 1. Further {~ral ~' Jt} n {Ji... Jik}=

Since Dij < D fo j 1...k a Dq < D for j = 1... k and Dq = r + 1. t Thus, there are (k - (r-l)) > 1 jobs in the set of jobs {Jil.o. Jik} that are members of L for the schedule SD. A new optimal schedule, SD, can now be defined. Order the set of jobs, {Jil ~* Jik, according to the shortest processing time rule and select the first r jobs from this set. Replace the initial sequence, (J~1'~r Jjr), in the schedule SD by this set of jobs ordered according to the due-date rule and call this schedule SD. The job Jr = Jq is now a member of LI for the optimal schedule SD (3) If Jq is not late in the above sequence but at least one of the jobs in the sequence (Jik+l... Jiq-1) is late, then Jq E L for some optimal schedule for the set of jobs in the current sequence. Proof: As in the proof for 2), assume we have an optimal schedule, SD, of the form: SD = (Ji1 -' Jr... Jet, P) and D 1< < D r < Dlr+l <... < Dit where Jir = Jq (a) If r < k + 1, the same optimal schedule, SD, can be defined as in (2) where Jq C L* (b) If r > k + 1, the argument developed in the proof for (2) can be used to show that the completion time of Jr = Jq in the schedule SD is greater than or equal to its completion time in the sequence (Jil... Jik, Jq) ~ Therefore, since at least one of the jobs in the set {Jik+l. Jiq l} was late in the original sequence, at least one of these jobs, say Jim, must belong to the set L for the schedule SD

But tim < tq and Dim > Dq. Hence, Jq can be removed from AD and replaced by Jim to form a new optimal schedule SD* where Jq ∈ L*.

This process continues, and each succeeding job is either accepted into the initial due-date ordered sub-sequence or is rejected. The algorithm terminates with a due-date ordered sequence of early jobs, (Ja1 ... Jat), and a set of jobs, Jat+1 ... Jan, that have been rejected. An optimal schedule for the original problem will be SD = (Ja1 ... Jat, P), where P is defined for the set of jobs {Jat+1 ... Jan}.

5.4 Minimizing the Maximum Deferral Cost*

Retaining the assumptions made in the introduction, a new problem can be formulated. Associated with each job is a continuous, bounded, monotonically non-decreasing function, Pi, called a deferral cost, where Pi(t) represents the cost of completing the job at time t. Sequencing problems with these cost structures have been considered by McNaughton(9) and Lawler.(7) Their results, however, concern the problem of minimizing the sum of the deferral costs. The objective here is to find a schedule for which the maximum deferral cost incurred is minimal.

*This extension was suggested by Professor E. L. Lawler, Department of Electrical Engineering, The University of Michigan.

Definition: For each Pi, define a function Pi* as follows:

(1) Pi*(y) = Pi⁻¹(y) if Pi⁻¹(y) (the inverse) exists.
(2) Otherwise,
 (a) Pi*(y) = max {t | Pi(t) = y} if there exists t with Pi(t) = y;
 (b) Pi*(y) = -∞ if Pi(t) > y for all t;
 (c) Pi*(y) = +∞ if Pi(t) < y for all t.

The corollary to Lemma 5.2 can now be used to find an optimal schedule. For arbitrary y ≥ 0, let Di = Pi*(y) and let SD(y) be the schedule obtained by ordering the jobs according to the "due-dates" so defined. In that the Pi's are bounded, there always exists y > 0 such that SD(y) has no "late" jobs.* Since the Pi's are monotonically non-decreasing, the Pi*'s are monotonically non-decreasing, and hence there exists y* > 0 such that:

(1) SD(y*) has no "late" jobs.
(2) For all y < y*, SD(y) has at least one late job.

This value can be found to whatever accuracy is desired by a binary search technique. The schedule SD(y*) has minimal maximum deferral cost. A numerical sketch of this search is given below.

*Actually the bounded condition is stronger than necessary. It is sufficient if there exists y > 0 such that SD(y) has no "late" jobs.
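The following is a minimal numerical sketch of that binary search, under assumptions not in the text: the three deferral cost functions and processing times are hypothetical, Pi*(y) is computed by an inner bisection rather than in closed form, and completion times are only searched up to the total processing time.

# Hypothetical deferral costs: non-decreasing functions of completion time.
P = [lambda t: 0.5 * t,            # job 0
     lambda t: max(0.0, t - 4.0),  # job 1
     lambda t: 0.2 * t * t]        # job 2
t_proc = [3.0, 2.0, 4.0]           # processing times (hypothetical)
HORIZON = sum(t_proc)

def latest_completion(Pi, y, hi=HORIZON, steps=60):
    """Largest t in [0, hi] with Pi(t) <= y, or -inf if there is none."""
    if Pi(0.0) > y:
        return float('-inf')
    lo = 0.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        if Pi(mid) <= y:
            lo = mid
        else:
            hi = mid
    return lo

def feasible(y):
    """Can every job meet the 'due-date' induced by cost level y?
    Order by those due-dates and check for lateness (corollary to Lemma 5.2)."""
    due = [latest_completion(Pi, y) for Pi in P]
    order = sorted(range(len(P)), key=lambda i: due[i])
    clock = 0.0
    for i in order:
        clock += t_proc[i]
        if clock > due[i]:
            return False
    return True

# Outer bisection on the cost level y.
lo, hi = 0.0, max(Pi(HORIZON) for Pi in P)
for _ in range(60):
    mid = (lo + hi) / 2
    if feasible(mid):
        hi = mid
    else:
        lo = mid
print(round(hi, 4))   # about 4.5 for the data above: the minimal maximum deferral cost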

CHAPTER 6 DYNAMIC PROGRAMMING ALGORITHMS FOR THE SEQUENCING OF JOBS SUBJECT TO DEADLINES* 6.1 Introduction The previous chapter considered the problem of finding a sequence for a finite set of jobs that minimized the number of late jobs. It was shown that there is an optimal schedule in which the early jobs precede the late jobs and the former are ordered according to their due-dates. For this problem and others similar to it, a certain "consistency principle" holds. This principle can be stated as follows: Given a set of jobs which are to be processed subject to deadlines, there exists a well ordering of the complete set of jobs, such that for any subset of these jobs, an optimal sequence exists in which the ordering of the jobs completed on time is consistent with the well ordering. Problems of this type can always be solved using the Held and Karp(5) algorithm, but this chapter demonstrates how this consistency principle can be exploited to obtain more efficient solution methods. Three types of problems considered are (1) one machine and jobs with individual deadlines and priorities (previous chapter); (2) two machines in series and jobs with priorities and a common deadline (Johnson(6)); and (3) a single machine and jobs with priorities, *The dynamic programming formulations of the problems in this chapter were suggested by Professor E. L. Lawler, Department of Electrical Engineering, University of Michigan. -77

a common deadline, and individual linear deferral costs (McNaughton(9) and Smith(15)). These problems are dealt with in sections 2, 3, and 4, respectively. A final section discusses multi-machine generalizations.

6.2 One Machine, Multiple Deadlines and Priorities

Consider a set of jobs, J = {J1 ... Jn}, where ti and Di represent the processing time and due-date for job i. In addition, let ri > 0 denote a reward that is earned if job Ji is completed on time. The problem is to find a sequence for these jobs which maximizes the sum of the earned rewards.

The previous chapter dealt with the special case of this problem where r1 = r2 = ... = rn, and the first two lemmas developed there also apply here. Hence, there is an optimal schedule in which the early jobs precede the late jobs and the former are ordered according to their due-dates. The problem, then, is that of selecting the jobs which are to be completed on time. A dynamic programming formulation of this problem is given below, where it is assumed, without loss of generality, that D1 ≤ D2 ≤ ... ≤ Dn.

Definition: Let fi(t) = the maximum attainable total reward for a selection of jobs from the set of jobs {J1 ... Ji}, subject to the constraint that no job is completed later than time t.

A recursion relation for fi(t) can be formulated as follows:

(1) Consider t ≤ Di. If there exists an optimal schedule in which Ji is completed on time, then Ji can be the last job and fi(t) = fi-1(t-ti) + ri. If no such schedule exists, then fi(t) = fi-1(t).

(2) For t > Di ≥ ... ≥ D1, fi(t) = fi(Di), which can be computed in the manner indicated above.

Hence, for i = 1, 2, ..., n:

fi(t) = fi(Di) for t > Di
fi(t) = max {fi-1(t), ri + fi-1(t-ti)} for 0 ≤ t ≤ Di

subject to the boundary conditions

fi(0) = 0
f0(t) = 0
fi(t) = -∞ for t < 0.

The maximum attainable total reward for the complete set of jobs is given by fn(Dn). Assuming that all of the processing times and deadlines are integers, the length of the computation grows no more rapidly than n(Dn), i.e., proportional to the product of the number of jobs and the longest deadline. A small computational sketch of this recursion is given below.
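As an illustration only, the recursion can be written as a short table-filling routine. The sketch below (plain Python, hypothetical data) assumes integer processing times and deadlines and jobs already indexed so that D1 ≤ ... ≤ Dn; it returns fn(Dn).

def max_reward(jobs):
    """jobs: list of (t_i, D_i, r_i), assumed sorted by due-date D_i.
    Returns the maximum total reward earned by on-time jobs."""
    D_max = max(D for _, D, _ in jobs)
    NEG = float('-inf')
    # f[t] holds f_{i-1}(t) for t = 0 .. D_max at the start of each pass.
    f = [0] * (D_max + 1)
    for t_i, D_i, r_i in jobs:
        g = f[:]                     # g will become f_i
        for t in range(0, D_i + 1):
            take = r_i + (f[t - t_i] if t - t_i >= 0 else NEG)
            g[t] = max(f[t], take)
        for t in range(D_i + 1, D_max + 1):
            g[t] = g[D_i]            # f_i(t) = f_i(D_i) for t > D_i
        f = g
    return f[D_max]

# Hypothetical data: (processing time, due-date, reward), sorted by due-date.
jobs = [(2, 3, 5), (3, 5, 4), (2, 6, 6), (4, 7, 8)]
print(max_reward(jobs))   # -> 14 for this data (the last two jobs are kept on time)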

6.3 Two Machines in Series, Common Deadline

The problem just discussed can be viewed as that of a single machine, individual starting times, and a common deadline. We now deal with the problem of two machines in series, a common starting time, and a common deadline, where each job must be processed by both machines, in sequence. Consider a set of jobs, J = {J1 ... Jn}, where for each job Ji, ai denotes its processing time on the first of two machines, and bi its processing time on the second. Let ri denote a reward which is earned if job Ji is completed on the second machine by a deadline T (common to all jobs). The problem is to find a sequence for the set of jobs which maximizes the sum of the earned rewards.

It follows from the results of the previous chapter and those of Johnson(6) that there exists such a maximal sequence in which (1) the jobs completed on time precede the tardy jobs; (2) the jobs are processed in the same order by both machines; and (3) the on-time jobs are ordered according to the following relation: job Jp precedes Jq only if min {ap,bq} ≤ min {aq,bp}.

Once again, the problem consists of making a selection of the jobs which are to be completed on time. Given such a selection, the ordering of these jobs is determined by Johnson's relation;(6) the ordering of the remaining jobs which follow is arbitrary. Without loss of generality, assume that min {aj,bj+1} ≤ min {aj+1,bj} for j = 1 ... n-1.

Let fi(t1,t2) = the maximum attainable reward for a selection of jobs from among J1 ... Ji, subject to the constraint that the completion time of no job is later than time t1 on the first machine or t2 on the second. Following the type of argument used in the previous section, a recursion relation for fi(t1,t2), where t1, t2 ≤ T, can be formulated as follows. For i = 1, 2, ..., n:

fi(t1,t2) = max {fi-1(t1,t2), ri + fi-1(min {t1-ai, t2-ai-bi}, t2-bi)}

subject to the boundary conditions:

f0(t1,t2) = 0
fi(0,0) = 0
fi(t1,t2) = -∞ (t1 < 0 or t2 < 0)

The maximum attainable total reward for the complete set of jobs is, of course, given by fn(T,T). Assuming all processing times are integers, the length of the computation grows no more rapidly than n(T²).

6.4 Single Machine, Common Deadline, Linear Deferral Costs

As a final example of the principle of consistency, consider the problem of a single machine, a common deadline, and linear deferral costs. Consider a set of jobs, J = {J1 ... Jn}, where for each job Ji, ai denotes its processing time and ri a reward which is earned if the job is completed by a deadline T (common to all jobs). In addition, let pi denote a linear deferral cost coefficient. Thus, if the job Ji is completed at a time t ≤ T, the net profit earned for that job (reward minus deferral cost) is ri - pi t.

As a possible example of such a problem, consider the position of a contractor who is free to accept or reject jobs. For each job Ji which is accepted and completed prior to the deadline, a reward ri is earned. However, the deferral of the job causes a cost to be incurred which is determined by the coefficient pi. What selection of jobs maximizes profit?

Given any selection of jobs, such that the sum of their processing times is no greater than T, the jobs should be ordered

according to the ratios pi/ai, the job with the largest ratio being processed first. This result has been found by McNaughton(9) and Smith.(15) Without loss of generality, assume pi/ai ≤ pi+1/ai+1.

Let fi(t1,t2) = the maximum attainable net profit for a selection of jobs from among J1, J2, ..., Ji, subject to the constraint that the starting time of the first job is t1 and the last job is to be completed before t2. An appropriate set of recursion equations is as follows, for t1, t2 ≥ 0 and t2 ≤ T:

fi(t1,t2) = max {fi-1(t1,t2), ri - pi(t1+ai) + fi-1(t1+ai, t2)}

subject to the initial conditions

fi(0,0) = 0
f0(t1,t2) = 0
fi(t1,t2) = -∞ for t1 > t2

The maximum attainable total profit for the complete set of jobs is given by fn(0,T). Assuming all processing times are integers, the length of the computation grows as n(T). A small sketch of this recursion is given at the end of this section.

There is an interesting variation of this problem in which the deadline is not controlling, e.g., T is as large as the sum of all the processing times. In this case, the selection of jobs is controlled solely by the question of whether or not the jobs can be completed before their deferral costs exceed their rewards.
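The sketch below is a minimal memoized version of this recursion on hypothetical data. One simplification is assumed: since the recursion never alters t2, the second argument is pinned to the common deadline T, so only the starting time t1 is carried as a state variable.

from functools import lru_cache

# Hypothetical data: (a_i, r_i, p_i), indexed so that p_i/a_i is non-decreasing.
jobs = [(4, 6, 0.5), (3, 9, 1.0), (2, 8, 2.0)]   # ratios 0.125, 0.333, 1.0
T = 8

def max_profit(jobs, T):
    n = len(jobs)

    @lru_cache(maxsize=None)
    def f(i, t1):
        # Best net profit from a selection among J1..Ji, given that the first
        # selected job starts at time t1 and everything must finish by T.
        if i == 0:
            return 0.0
        a, r, p = jobs[i - 1]
        best = f(i - 1, t1)                              # reject Ji
        if t1 + a <= T:                                  # accept Ji: it runs first
            best = max(best, r - p * (t1 + a) + f(i - 1, t1 + a))
        return best

    return f(n, 0)

print(max_profit(jobs, T))   # -> 8.5 for this data (accept the first two jobs)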

6.5 Machines in Parallel

Each of the problem formulations and solution methods given above can be extended in a very natural way to the situation in which there are many machines, or sets of machines, in parallel, and any given job can be processed by any given machine. In each such extension, the jobs that are assigned to any given machine are processed in an order which is consistent with the ordering obtained by solving the associated single machine problem.

Consider the extension of the problem of multiple deadlines (Section 6.2 above). Let there be given a set of jobs, J = {J1 ... Jn}. For each job Ji, let ai,k denote its processing time on the kth of m machines and ri,k a reward which is earned if processing of the job is performed on machine k and completed prior to the deadline for the job, Di. As before, we assume, without loss of generality, that Di ≤ Di+1 for i = 1, ..., n-1. (Note that it is feasible to have different deadlines on different machines. However, it must be the case that Di,k ≤ Di+1,k for all k.)

Let fi(t1, ..., tm) = the maximum attainable reward for a selection of jobs from among J1, J2, ..., Ji, subject to the constraint that the completion time of no job is later than tk for machine k. Now we have, for i = 1, ..., n:

fi(t1, ..., tm) = fi(t1, ..., tk-1, Di, tk+1, ..., tm)   if tk > Di
fi(t1, ..., tm) = max {fi-1(t1, ..., tm), max_k {ri,k + fi-1(t1, ..., tk - ai,k, ..., tm)}}   otherwise

subject to the boundary conditions

f0(t1, t2, ..., tm) = 0
fi(0, 0, ..., 0) = 0
fi(t1, t2, ..., tm) = -∞ if any tk < 0

The length of the computation implied by these recursion equations grows as m × n × T^m. Some saving, of course, can be realized through exploitation of symmetry in the case in which all machines are identical, i.e., aj,p = aj,q and rj,p = rj,q for all j, p, q.

The formulation of recursion equations for the extensions of the problems described in sections 6.3 and 6.4 is quite similar, and results in computations which grow as m × n × T^2m and m × n × T^m, respectively. One should compare the extension of the deferral cost case with the solution method given by Rothkopf(12) for the multi-machine deferral cost problem without deadlines. In that case, a computation growth of m × n × T^(m-1) is possible.

We note that in the generalizations of the problems of sections 6.3 and 6.4, it is possible to have some variation in the characteristics of the individual machines, provided the existence of a single linear ordering is not interfered with. In the case of sets of two machines in series, it must be possible to find a linear ordering such that, for all k,

min(aj,k, bj+1,k) ≤ min(aj+1,k, bj,k),

and in the case of linear deferral costs,

pj,k / aj,k ≤ pj+1,k / aj+1,k.

The rewards, rj,k, need be related in no particular way. A sketch of the recursion for the multiple-deadline extension is given below.
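As an illustration only, the multiple-deadline recursion can be written as a memoized search over the vector of machine time budgets. The sketch below (plain Python) uses hypothetical two-machine data; jobs are assumed indexed so that D1 ≤ ... ≤ Dn, and the clamping step corresponds to replacing tk by Di whenever tk > Di.

from functools import lru_cache

def max_reward_parallel(jobs, m, horizon):
    """jobs: list of (a_i, r_i, D_i) with a_i = (a_i1..a_im), r_i = (r_i1..r_im),
    assumed sorted by due-date D_i.  Direct memoized form of the recursion above;
    `horizon` bounds the machine clocks."""
    n = len(jobs)

    @lru_cache(maxsize=None)
    def f(i, budgets):
        if i == 0:
            return 0
        a, r, D = jobs[i - 1]
        # Clamp each machine budget at D_i: no job among J1..Ji may finish later.
        budgets = tuple(min(t, D) for t in budgets)
        best = f(i - 1, budgets)                     # Ji not completed on time
        for k in range(m):                           # try Ji as last-so-far on machine k
            if budgets[k] - a[k] >= 0:
                nxt = list(budgets)
                nxt[k] -= a[k]
                best = max(best, r[k] + f(i - 1, tuple(nxt)))
        return best

    return f(n, tuple([horizon] * m))

# Hypothetical two-machine data: (processing times, rewards, due-date).
jobs = [((2, 3), (5, 5), 4),
        ((3, 2), (6, 4), 6),
        ((4, 4), (7, 7), 7)]
print(max_reward_parallel(jobs, m=2, horizon=7))   # -> 18: all three jobs earn their rewards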

APPENDICES -85

APPENDIX I BASIC MATHEMATICAL' THEORY FOR CHAPTER 2 In the first chapter it was assumed that the penalty functions were continuous and non-decreasing. In order to obtain some of the results presented, however, it is necessary to assume that the functions are absolutely continuous. The assumption of absolute continuity rules out certain unusual functions for which the fundamental theorem of calculus does not hold. This is shown by the following theorem. THEOREM: A necessary and sufficient condition that a continuous nondecreasing function, f, be absolutely continuous is that f f'(x) dx = f(b) - f(a) a An excellent discussion of.absolute continuity can be found in (10) Natanson who also provides an example of a function that is continuous, non-decreasing but not absolutely continuous. THEOREM: Given two absolutely continuous, non-decreasing functions, f and g, where f > g, f' > g', and f' is non-decreasing. For arbitrary 0 < xl < x2, O < x3 < x4 where x4 > x2 and (x4 - x3) > (x2- xl), f(x4) - f(x3) > g(x2) - g(xl) Proof: Using the fundamental theorem of calculus, /X4 f'(x)dx = f(x4) - f(x3) 1X2 g'(x)dx = g(xe) - g(xl) 1 -86

-87Let x.i = max(x2,x3) and xk = min(x2,x3), then f(x4) - f(x3) - [g(x2) - g(xl)] fX4 f'(x)dx - f g'(x)dx x. xl Xi X1 Now use the following theorem from real analysis: THEOREM: If f is an integrable function, a and f are real numbers, and E is a measurable set such that, for x in E, a < f(x) <, then L ~ (E) < f fdji < K3i(E) E Hence y 4 f'(x)dx > f'(xi) (x4 - xi) since f' is non-decreasing. x Further, X k xkx - f gt(x) dx _ f f'(x) dx < f'(xk) (xk - xl) x x 1 1 since f' > g' and f' is non-decreasing. But f'(xi) > f'(xk) since xi > xk and (x4 - xi) > (xk - xl) since (x4 - x3) > (x2 - xl) Hence, fS4 f'(x) dx - jxk g'(x) dx > x x i 1 f'(xi) (x4 - xi) - f'(xk) (xk - x1) > 0 or f(x4) - f(x3) > g(x2) - g(x). It(E) denotes, of course, the measure of the set E in the underlying measure space.

APPENDIX II BASIC MATHEMATICAL THEORY FOR CHAPTER 3 THEOREM: Consider an absolutely continuous, non-decreasing function, f, and O < x < ~. If lim f (x+y) 1 f'(y) as y - 0, then for all O < xl < x2 < x3 where xl + x2 > x3 and f(xl) + f(x2) < f(x3) there exists y > 0 such that: f(x1 + Y) + f(x2 + y) > f(x3 + y) + f(y) Proof: Consider y > x3 and using the fundamental theorem of calculus xl+y x2+Y f(x +y) + f(x +y) = l+ f'(x)dx + f f'(x)dx 1 2 o o fX1 f'(x)dx + fY f'(x)dx + fXl+Y f'(x)dx o X1 Y + fJ3 ft(x)dx + fx2+Y f'(x)dx o X f(x3+y) + f(y) = fx3+Y fl(x)dx + fY f'(x)dx O O fx3 f'(x)dx + 1X2 Y f'(x)dx + fx3+Y f,(x)dx O x3 x2+Y + fX1 f'(x)dx + Y f'(x)dx 0o X1 Hence if there exists y such that f x+Y f'(x)dx > fS3+Y f (x)dx Y x2+Y the theorem is proved. -88

-89 - For f' non-decreasing, fXl+Y f'(x)dx > xlf'(y) Y fX3+Y f'(x)dx < (x3 - x2) f'(x3 + y) x2+Y But f (x3+y) lim ( -) 1 as y - oo f'(y) Hence, for every E > 0, there exists M > O0 such that f'(y+x3).(.Y - 1 I < e for y > M f' (y) x1 Now let c = ( 1) > 0 since xl + x2 > x3 x3-x2 or x1 > x3 x2 and select y so that or f' (y+x3) < f' (y) x3-x2 Hence, fX3+Y f'(x)dx < f'(y+x3) (x3-x2) < f'(y)xl < f fy (x)dx x2+Y and f(xl+Y) + f(x2+Y) > f(x3+Y) + f(y)

APPENDIX III ADDITIONAL MATHEMATICAL THEORY FOR CHAPTER 3 THEOREM: For an absolutely continuous, non-decreasing function f, f(x) > 0 for x > 0, if f'/f is non-increasing then for every set of points 0 < xl < x2 < x3 where f(xl) + f(x2) > f(x3), f(xl+y) + f(x2+y) > f(x3+y) for all y > 0 Proof: Since f'/f is non-increasing, fXl+Y f'(x) x x2+y'() dx > fx3+Y f'(x) dx x) 2f(x) - f(x) or log f(x) > log f(x) > log f(x) 2x3+3 xl X2 x3 or log f(xl+y) - log f(xl) > log f(x2+y) - log f(x2) > log f(x3+y) - log f(x3) f(xl+y) f(x2+y) f(x +y) or log (x log > log f(xl) f(x2) f(x3) f(xl+y) f(x2+y) f(x +y) or > > f(xl) f(x2) f(x3) f(x2) f(x3+y) f(x2) f(x3+y) Hence, f(x2+Y) > f(xl)+ f(x3) - f(xl)+f(x2) or f(x2+y) [f(xl) + f(x2)] > f(x2) f(x3+y) -90 -

9 Lor f(x2+y) (x + f(x2+y) > f(x3+y) f (x2) f(xl) But f(xl+y) > - f(x 2+y) f(x2) ) Therefore f(xl+Y) + f(Xe+y) > f(x3+Y)

APPENDIX IV PROOF OF SINGLE TIME DEPENDENCE FOR IDENTICAL QUADRATIC LOSS FUNCTIONS THEOREM: Given a function, f, of the form ax + bx2(a,b > 0) and four points O < x4 < x1 < x2 < x3 such that x1 + x2 > x3 + x4 and f(x ) > f(x3) + f(x4) then for all y > 0 f(xl+y) + f (x2+y) > f (x3+y) + f(x4+y) Proof: f(x1+Y) + f(x2+y) = a(xl y) + b(xl2 2x1y +y2) + a(x +y) + b(x 2 + 2x y + y2) - f(xl) + f(x2) + 2ay + 2by(xl+x2) + 2by2 and in like manner, f(x +Y) + f(x4+y) = f(x3) + f(x4) + 2ay + 2by(x3+x4) + 2by2 But f(xL) + f(x2) > f(x3) + f(x4) and 2by(x1 + x2) > 2by(x3 + x4) Hence f(xl+y) + f(x2+y) > f(x3+y) + f(x4+)oy) -92

APPENDIX V PROOF THAT R(t) IS TRANSITIVE Consider jobs Jr and Js. There are three possible reasons for Jr << (t) Js ~ A. If tr < ts and Dr < Ds' Jr << Js B. If ts < tr and Dr < Ds and t < T r,s = Ds - tr then Jr << (t) Js C. If tr < ts and Ds < Dr and t >Ts = Dr - ts then Jr << (t) Js Now consider Ji << (t) Jk and Jk << (t) Jq. There are three general cases to consider, each having three subcases. A. Consider Ji << (t) Jk for reason A above, and hence, ti < tk, Di < Dk 1. If Jk << (t) Jq by A, then ti tk< t Di <Dk < Dq and Ji << (t) Jq for reason A 2. If Jk << (t) Jq by B, then tq < tk ~ Dk < Dq, and t < Dq tk Hence Di < Dk < Dq If ti < tq, then Ji << (t) Jq for reason A o Therefore, assume t < ti -93

But t < Dq - tk Dq- ti and consequently Ji << (t)Jk for reason B e 3, If Jk << (t) Jq by C, tk < tq, Dq < Dk and t > Dk - tq ~ Hence ti < tk < tq If Di < Dq, then Ji << (t) Jq for reason A Therefore, assume Dq < Di < Dk But t > Dk - tq > Di - tq and hence Ji << (t)Jq for reason C B. Let Ji << (t)Jk for reason B and hence tk < ti, Di < Dk and t < Dk- ti 1. If Jk << (t) Jq by A, then Di < Dk < Dq and tk < tq If ti < tq. then Ji << (t) Jq for reason A Hence, assume tq < ti. But t < Dk - ti < Dq - ti and Ji << (t) Jq for reason B o 2, If Jk << (t) Jq by B, then tq < tky Dk < Dq, and t <Dq - tk' Hence tq < tk < ti and Di < Dk < D q But t < Dk - ti < Dq - ti and Ji << (t) Jq for reason B.

-953. If Jk << (t) Jq by C, then tk < tq, Dq < Dk, and t > Dk - tq Hence, Dk - tq < t < Dk - ti and ti < tq If Di < Dq, then Ji << (t) Jq by reason A Hence, assume Dq < Di But t > Dk - tq > Di - tq and Ji << (t) Jq for reason C. C. Let Ji << (t) Jk for reason C and hence, t. <tk Dk < Di and t > Di - tk.'I f Jk << (t) Jq by A, then ti < tk < tq and Dk < Dq If Di < Dq, then Ji << (t) Jq for reason A Hence, assume Dq < Di But t > Di - tk > Di - tq and Ji << (t)Jq for reason C. 2. If Jk << (t) Jq by B, then tq < tk Dk < Dq, and t Dq - tk Hence, Di - tk < t < Dq - tk and Di < Dq. If ti < tq, then Ji << (t) Jq for reason A

-96 Therefore, assume tq < ti But t < Dq - tk < Dq - ti and Ji << (t) Jq for reason B o 3~ If Jk < (t) Jq by C, then tk < tq) Dq < Dk; and t > Dk - tq Hence ti < tk < tq and Dq < Dk < Di. But t > Di - tk > Di tq and Ji << (t) Jq for reason C o

BIBLIOGRAPHY

1. Conway, R. W., Maxwell, W. L., and Miller, L., Theory of Scheduling, Addison-Wesley (in press).

2. Dantzig, George B., Linear Programming and Extensions, Princeton University Press, Princeton, New Jersey, 1963.

3. Eastman, Willard L., "Comments on a Paper by Schild and Fredman," Management Science, Vol. 11, No. 5 (March 1965), pp. 754-755.

4. Halmos, Paul R., Measure Theory, Van Nostrand Company, Inc., Princeton, New Jersey, 1950.

5. Held, Michael, and Karp, Richard M., "A Dynamic Programming Approach to Sequencing Problems," Journal of the Society for Industrial and Applied Mathematics, Vol. 10, No. 1 (March 1962), pp. 196-210.

6. Johnson, S. M., "Optimal Two- and Three-Stage Production Schedules with Set-Up Times Included," Naval Research Logistics Quarterly, Vol. 1, No. 1 (1954), pp. 61-68.

7. Lawler, Eugene L., "On Scheduling Problems with Deferral Costs," Management Science, Vol. 11, No. 2 (November 1964), pp. 280-288.

8. Manne, A. S., "On the Job-Shop Scheduling Problem," Operations Research, Vol. 8, No. 2 (March-April 1960), pp. 219-223.

9. McNaughton, Robert, "Scheduling with Deadlines and Loss Functions," Management Science, Vol. 6, No. 1 (October 1959), pp. 1-12.

10. Natanson, I. P., Theory of Functions of a Real Variable, Frederick Ungar Publishing Co., New York, 1961.

11. Root, James Gordon, "Scheduling with Deadlines and Loss Functions and the Optimal Selection of Resources to Perform Tasks," Ph.D. Thesis, Ohio State University, 1963.

12. Rothkopf, Michael H., "Scheduling Independent Tasks on Parallel Processors," Management Science, Vol. 12, No. 6 (January 1966), pp. 437-447.

13. Schild, Albert, and Fredman, Irwin J., "On Scheduling Tasks with Associated Linear Loss Functions," Management Science, Vol. 7, No. 3 (April 1961), pp. 280-285.

14. Schild, Albert, and Fredman, Irwin J., "Scheduling Tasks with Deadlines and Non-Linear Loss Functions," Management Science, Vol. 9, No. 1 (October 1962), pp. 73-81.

15. Smith, Wayne E., "Various Optimizers for Single-Stage Production," Naval Research Logistics Quarterly, Vol. 3, Nos. 1-2 (March-June 1956), pp. 59-66.

16. Weisner, Louis, Introduction to the Theory of Equations, The Macmillan Company, New York, 1956.