ALLOCATION OF VARIABILITY AMONG COMPONENTS FOR OPTIMAL SYSTEM PERFORMANCE

Steven J. Erlebacher and Medini R. Singh
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, MI 48109-2117

Technical Report 93-16
July 1993

Abstract

Consider a system of n components that operates only if each of its components operates properly. An inherent variability is present in each component that affects its performance; a component operates properly only if its performance is below a prespecified level. The objective is to maximize the probability that the system operates given an overall level of variability. This problem has applications in several areas including assembly line design, inventory control, semiconductor manufacturing, and quality control. We investigate how the level of variability in the system affects system performance and how variability should be allocated among the components of the system. We demonstrate that an allocation that spreads the variance equally among all components is not optimal and in fact may be the worst allocation if the level of variability in the system is large. These results are helpful for improving system performance via variance reduction.

1 Introduction

Consider a system whose performance depends upon the successful operation of all of its n components. A component operates properly if its performance is at or below a prespecified level. An inherent variability is present in each component which affects its performance. Reducing a component's variability will improve both the component's and the system's performance. However, the best improvement in system performance may only be achieved by reducing the components' variabilities in a certain fashion. Given that variance reduction effort is expensive, it is important to understand which component provides the best leverage for improving system performance. More generally, given an overall level of variability in the system, one is interested in finding the best possible allocation of variability among the components such that optimal system performance is achieved. This problem arises in a number of areas including assembly line design, inventory

control, semiconductor manufacturing, and quality control. These examples are discussed below.

A synchronous assembly line is a serial production system without intermediate buffers where products move from one station to the next in a synchronized fashion every T time units. Such lines are prevalent in the assembly of automobiles and household appliances. The amount of time taken to complete the work at a station is variable due to a number of factors, such as different options installed on each product, unforeseen disruption due to machine or tool failures, variation in worker skill and experience across different tasks, natural human variation in a repetitive manual work environment, and decline in operator speed due to fatigue. Because of the variability in processing time, operators may not be able to complete the work at a station in the prespecified cycle time, T. In such cases, either the entire assembly line is stopped until each station completes its work or the product with unfinished work is tagged and reworked at the end of the line. In the first case, one is interested in maximizing the probability of not stopping the line, while in the second case one is interested in minimizing the probability that a product requires any rework. If X_i is the stochastic processing time at station i, then the objective in both cases is to maximize

∏_{i=1}^n Pr{X_i ≤ T}.   (1)

For performance improvement of synchronous assembly lines, one must understand how decreasing the processing time variability at a station improves the overall system performance given by expression (1). More generally, given an overall level of variability, it is critical to understand how variance should be allocated across the stations on the line such that the performance measure given by expression (1) is maximized.

In the context of inventory control, consider a joint replenishment problem where a portfolio of n items is simultaneously replenished.
If an item runs out before the scheduled replenishment time, a high penalty is incurred due to emergency shipment. The objective is to minimize the probability

of an emergency shipment, or equivalently, to maximize the probability that the run-out time for all items exceeds the scheduled replenishment time. If the demand for each of the n items is random, then it is important to ensure that none of the items stocks out prior to replenishment. It is imperative for the purpose of scheduling replenishments to understand how the variability of the demand for each item affects the likelihood of a stock-out before the portfolio of items is to be ordered again.

In semiconductor manufacturing, different types of chips are often ordered in sets. A set corresponds to a specific number of chips of several different types required to assemble a product, e.g., a circuit board. Due to the complex nature of the wafer fabrication process, the yields for chips are uncertain. One is interested in configuring a batch of wafers such that at its completion a complete set is obtained. Crucial to this batch configuration problem is an understanding of how the variability in yield affects the probability of completing the set.

Reducing process variation is an important aspect of quality control and process improvement in other industries as well. In environments where several processes must be under control to ensure the production of a quality product, it is important to understand how the variability of each component production process affects the overall product quality. For example, Motorola's MicroTac cellular phone has 400 parts (Krajewski & Ritzman, 1993). It is imperative that the process producing each part of the phone is extremely reliable to ensure the manufacture of a reliable phone. Understanding how to improve product quality by reducing process variation among such a large number of processes is not straightforward. Given limited resources to reduce the system variance, it is important to know the best allocation of variance if one is to get the maximum benefit from variance reduction efforts.
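All of the examples above reduce to evaluating expression (1). As a minimal numerical sketch (the station means, variances, and cycle time below are invented for illustration, not taken from the paper), assuming each X_i is normally distributed:

```python
import math

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def system_reliability(means, variances, T):
    """Expression (1): the product over stations of Pr{X_i <= T},
    assuming each X_i is normal with the given mean and variance."""
    p = 1.0
    for mu, v in zip(means, variances):
        p *= normal_cdf((T - mu) / math.sqrt(v))
    return p

# Hypothetical 4-station line with cycle time T = 60 seconds.
means = [50, 52, 48, 55]
variances = [9, 16, 4, 25]
print(system_reliability(means, variances, T=60))
```

Reducing the variance at any single station raises the product, which is the leverage the paper sets out to quantify.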
The focus here is to develop an understanding of how variability across the components affects the system performance. The variety of examples presented above indicates the specific problem instances where such understanding is of crucial importance. The impact of processing time variability on the performance of a synchronous assembly line, the impact of demand variability on the run-out probability of a portfolio of items, the effect of yield variability on the completion probability of a set of chips, and the impact of process parameter variability on the quality of the final product: each of these problems can be analyzed in the framework presented here.

The model presented here addresses the following question: When an inherent amount of variability exists in a system of components, what is the best way to spread the variability through the system to optimize system performance? That is, what is the optimal allocation of variability? We demonstrate that an allocation that spreads the variance evenly among the components is optimal only if the total amount of variability is below a critical variance level. Alternatively, if the total variability exceeds this critical level, an equal allocation is not optimal and may in fact be the worst allocation. While these results may appear surprising, intuition is provided for this behavior. The model also assists in answering the following question: When the overall system variability can be reduced at a cost, which components of the system should be the focus of the variance reduction effort? Thus, our results can guide continuous improvement efforts by determining which components of a system provide the most leverage for improving system performance via variance reduction.

The remainder of the paper is organized as follows. Section 2 provides a description and mathematical formulation of the problem and shows that it is essentially a resource allocation problem with a special structure. Section 3 contains a discussion of the literature related to the problem. The analysis in Section 4 develops results for the simple case of a two component system and serves as a building block for the analysis of the n component system in Section 5.
In Section 6 we discuss the results, indicate areas for further research, and apply the results to an assembly line design problem. Section 7 contains a summary and conclusions.

2 Problem Description and Formulation

A component of the system is defined to be working if its performance is at or below a prespecified threshold, T. Since the system works if and only if each component of the system is working, the probability that the system is reliable is given by expression (1). The exact nature of expression (1) depends upon the distributions of the X_i. We assume that each X_i is normally distributed with mean μ and variance v_i, and that the total variability in the system is a constant, V. To examine the effects of the overall level and allocation of variability in the system, let the v_i be decision variables, and μ, T, and V be parameters of the model. To ensure that each component works on average, we require that the mean performance of each component of the system is below the prespecified threshold, T, i.e., μ < T. If a component has variance v and X_i is normally distributed, then the probability that the component works is given by

g(v) = Φ((T - μ)/√v)

where Φ(·) is the standard normal cumulative distribution function. The function g(v) is continuous and differentiable with lim_{v→0} g(v) = 1 and lim_{v→∞} g(v) = 0.5. A graph of g is shown in Figure 1.

Therefore, the problem of allocating a fixed amount of variance V among n components of a system to maximize the probability that the system works can be expressed by the following optimization problem:

(P1)  Maximize  G(v_1, v_2, ..., v_n) = ∏_{i=1}^n g(v_i)
      subject to  Σ_{i=1}^n v_i = V
                  v_i ≥ 0, ∀i
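The limiting behavior of g claimed above is easy to confirm numerically; the sketch below uses illustrative values T = 10 and μ = 8, chosen only to satisfy μ < T:

```python
import math

T, mu = 10.0, 8.0  # illustrative threshold and mean with mu < T

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def g(v):
    """Probability a single component works: Phi((T - mu)/sqrt(v))."""
    return Phi((T - mu) / math.sqrt(v))

# g is decreasing in v, from 1 (no variability) toward 0.5 (huge variability).
print(g(1e-6), g(1.0), g(1e6))
```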

The objective function G is nonlinear and separable but not additive. However,

log G(v_1, v_2, ..., v_n) = Σ_{i=1}^n log g(v_i)

is nonlinear, separable, and additive. Since the log function is strictly increasing, any results derived for log G(v_1, v_2, ..., v_n) will apply to G(v_1, v_2, ..., v_n) as well. If log g(v) were concave, then (P1) could be transformed to a special case of the resource allocation problem addressed by Zipkin (1980) and Luss & Gupta (1975), who maximize a concave nonlinear-additive objective function with a single linear constraint using an efficient ranking algorithm. In that case the problem (P1) would be trivial to solve since each nonlinear function is the same. To show that log g(v) is not concave, consider its second derivative

d²/dv² log g(v) = [(T - μ) Z(x) / (4 v^{5/2} Φ²(x))] [(3 - x²) Φ(x) - x Z(x)], where x = (T - μ)/√v,   (2)

and Z(·) is the standard normal probability density function. Thus log g(v) is not concave if C(x) = (3 - x²) Φ(x) - x Z(x) > 0 for some positive x. It is shown in Appendix A that for positive x, C(x) has exactly one positive root, x̂, and that C(x) > 0 if and only if x < x̂. Therefore log g(v) is not concave and hence the techniques of Zipkin (1980), Luss & Gupta (1975), and others are not applicable even though the problem (P1) has a resource allocation structure.

3 Related Literature

Since a variety of problems can be analyzed in the framework presented in Section 2, we discuss representative work from a number of application domains.

Sniedovich (1983) considers a class of variance constrained problems with finite strategies, such as stochastic knapsack problems. His work differs from ours since he addresses a discrete optimization problem while we solve a continuous optimization problem. Additionally, Sniedovich is concerned with the selection of variance while we are concerned with the allocation of variance. Lau (1992) uses a simulation study to investigate how the allocation of processing time variance on an asynchronous assembly line affects throughput. His work is similar in spirit to the work here in the sense that it addresses the effect of variance allocation on the performance of a system, though we use analytic techniques to address our problem, while Lau uses simulation. The allocation of higher order moments has also been considered by Lau (1986) and Lau & Martin (1987). Lau (1986) concludes for a two-station line without buffer that it is only necessary to balance the mean and skewness of each station. For longer lines, Lau & Martin (1987) conclude that stations with higher positive skewness should be positioned towards the middle of the line and stations with lower positive skewness should be positioned towards either end of the line.

Several authors including Carraway (1989), Henig (1986), Sniedovich (1981), and Kao (1976) have considered a stochastic version of an assembly line balancing problem. Given a cycle time for the line, a set of tasks with stochastic processing times, and precedence constraints among these tasks, their objective is to assign tasks to stations to minimize the number of stations on a synchronous assembly line. Additionally, the assignment of tasks to stations must be such that the probability that each station completes its assigned tasks within the cycle time exceeds a prespecified minimum probability.
In the context of assembly line design, the problem we address differs in that we take the number of stations on the line as given, put no constraint on the performance of each station, and explicitly account for the likelihood of rework or stopping the line. Furthermore, we are concerned with the assignment of tasks to stations only to the extent that it leads to an overall allocation of variability.

In the context of inventory control, Karmarkar (1981) considers a joint replenishment problem where the objective is to maximize the time until the next replenishment, or run-out time, and the demand for each item is constant and known. Our problem in the context of joint replenishment considers the case where the time between replenishments is given and the objective is to maximize the probability that no item runs out before replenishment. Mendelson, Pliskin & Yechiali (1980) consider a similar problem where the inventory of an item is allocated to several locations to meet demands at each location. The demands occur at epochs, the times between epochs are i.i.d. random variables, the demand for each item is random, and the objective is to maximize the number of epochs until the inventory of at least one item runs out. Their analysis is concerned with long-run performance when the demand during a single epoch is small relative to the overall inventory available. Our framework considers only a single period problem rather than long-run performance.

4 The Two Component Problem

The problem (P1) is first examined for a two component system to obtain crucial insights that will aid in solving the n component problem. For a system of two components, the problem (P1) reduces to:

(P1.1)  Maximize  G(v_1, v_2) = g(v_1) g(v_2)
        subject to  v_1 + v_2 = V   (3)
                    v_1, v_2 ≥ 0

The problem (P1.1) has two decision variables, three constraints (including the non-negativity constraints), and is symmetric in v_1 and v_2. The following theorem states, however, that the

symmetric solution is not always optimal.

Theorem 1 For the problem (P1.1) there exists a lower critical variance level, V_2, such that an unequal variance solution is optimal if the total variance V > V_2. The lower critical variance level is given by

V_2 = 2((T - μ)/x̂)² ≈ 0.70696 (T - μ)²

where x̂ ≈ 1.68196 is the unique positive solution to (3 - x²) Φ(x) = x Z(x). Moreover, an optimal unequal variance solution either occurs at an extreme point or satisfies the equation

γ(v_1) = γ(v_2)   (4)

where

γ(v) = (T - μ) Z((T - μ)/√v) / (2 v^{3/2} Φ((T - μ)/√v)).   (5)

Due to the symmetric nature of the problem (P1.1), this result is somewhat unexpected. Theorem 1 states that the equal variance solution, (v_1 = V/2, v_2 = V/2), is not optimal when the total variance is large enough, i.e., greater than the lower critical variance V_2. (We call V_2 the lower critical variance since we will later introduce an upper critical variance.) To give a "back of the envelope" proof of this theorem, consider a two component system that has a significant amount of variance. In such a system, if all of the variance is allocated to one component, the smallest that G(v_1, v_2) could be is 0.5. However, for an equal variance solution, G(v_1, v_2) could be as small as 0.25, in the limit. Therefore, once one component has been allocated some variance, it is better to keep putting more

of the variance at the same component because the detrimental effect of marginal variability on a component that already has large variance is much less than the detrimental effect of marginal variability on a component that has little or no variance.

The function γ(v) plays a crucial role in determining the optimal allocation. The graph of γ(v) is shown in Figure 2. Clearly γ(v) is unimodal and approaches 0 as v goes to infinity. Additionally, the following property of γ(v) will be used later.

Property 1 The function γ(v) achieves its global maximum at V̄ = ((T - μ)/x̂)² = V_2/2, is increasing on (0, V̄), and is decreasing on (V̄, ∞).

Proof of Property 1: See Appendix B.

Equations (3) and (4) define the first order conditions for the problem (P1.1). The intersection of γ(v_1) with its reflection about the line v_1 = V/2 gives the points satisfying the first order conditions. The graphs of these two functions are shown in Figures 3a and 3b for different values of V. One first order point will always be at v_1 = V/2. The other first order points, if any, always appear as a symmetric pair. Whether there is one first order point or three depends on whether the point of reflection, V/2, is greater than or less than V̄, as illustrated in Figures 3a and 3b, respectively. The behavior of the corresponding objective functions G(v, V - v) is shown in Figures 3c and 3d.

Proof of Theorem 1: The problem (P1.1) is transformed into a single variable optimization problem by substituting V - v_1 for v_2. (In what follows we drop the subscript on v_1 since it is the only decision variable.) The new optimization problem is given by:

(P1.1')  Maximize  R(v, V) = g(v) g(V - v)
         subject to  0 ≤ v ≤ V
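Before analyzing R(v, V) analytically, Theorem 1 can be checked numerically. The sketch below (T and μ are again illustrative choices, not values from the paper) finds x̂ by bisection on C(x), forms V_2, and evaluates R near the equal split for a total variance above V_2:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Z(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def C(x):
    # C(x) = (3 - x^2) Phi(x) - x Z(x); its unique positive root is x_hat.
    return (3.0 - x * x) * Phi(x) - x * Z(x)

# Bisection: C(1) > 0 and C(2) < 0 bracket the root.
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if C(mid) > 0:
        lo = mid
    else:
        hi = mid
x_hat = 0.5 * (lo + hi)                 # approximately 1.68196

T, mu = 10.0, 8.0                       # illustrative parameters
V2 = 2.0 * ((T - mu) / x_hat) ** 2      # lower critical variance level

def g(v):
    return Phi((T - mu) / math.sqrt(v))

def R(v, V):
    return g(v) * g(V - v)

V = 1.5 * V2                            # total variance above V2
eps = 0.01 * V
print(x_hat, V2, R(V / 2, V), R(V / 2 - eps, V))
```

For V > V_2, perturbing away from the equal split in either direction increases R, so the equal split is a local minimum, as Theorem 1 asserts.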

The function R is the objective function from (P1.1) with V - v substituted for v_2. The first component receives v units of variance and the second component receives the remaining V - v units of variance. Differentiating R with respect to v and using equation (5) we obtain

∂R/∂v = R(v, V) (γ(V - v) - γ(v)).

Clearly the first derivative vanishes at all v satisfying

γ(v) = γ(V - v).   (6)

Therefore, an optimal solution to problem (P1.1') must either be an extreme point or satisfy equation (6), which is equivalent to equation (4). If a solution does satisfy equation (6) then either it is the equal variance solution or an unequal variance solution. We show that for V > V_2, the equal variance solution is a local minimum and hence that an unequal variance solution must be optimal. First note that v = V/2 satisfies equation (6), and as a result must be either a local minimum, a local maximum, or a point of inflection. But

∂²/∂v² log R(v, V) |_{v=V/2} = [(T - μ) Z(x) / (2 (V/2)^{5/2} Φ²(x))] [(3 - x²) Φ(x) - x Z(x)]   (7)

where x = (T - μ)/√(V/2). So, the equal variance solution is a local maximum only if C(x) = (3 - x²) Φ(x) - x Z(x) < 0. Recall that C(x) has exactly one positive root, x̂, and that C(x) is positive when

x < x̂ and negative when x > x̂. (We cannot solve for x̂ in closed form since C(x) = 0 is transcendental.) Since V_2 = 2(T - μ)²/x̂², this implies that the equal variance solution is a local maximum only when V < 2(T - μ)²/x̂². Hence, for V > V_2 the equal variance solution is a local minimum and therefore cannot be a global maximum. This proves that for V > V_2, an unequal variance solution is optimal. ∎

Theorem 1 gives us a rule that helps determine the nature of the optimal solution to the problem (P1.1). For V > V_2, an extreme point unequal variance allocation or an unequal variance allocation (v_1, v_2) satisfying equations (3) and (4) is optimal. For V < V_2, Theorem 1 does not guarantee that the optimal solution is obtained simply by allocating the variance equally. We have shown this to be true computationally, however. Additionally, when V < V̄, an equal variance solution must be optimal. This result follows from the facts that the optimal solution must satisfy the first order condition γ(v) = γ(V - v), γ(v) is strictly increasing on (0, V̄), and γ(V - v) is strictly decreasing on (0, V) when V < V̄.

When an unequal variance allocation (v_1, v_2) is optimal, its symmetric pair is also optimal. The following conjecture claims that there is only one such pair.

Conjecture 1 For any V > V_2 there is a unique unequal variance allocation pair (v_1, v_2) satisfying the first order conditions given by equations (3) and (4).

Justification of Conjecture 1: Consider (v_1 + v_2) as a function of γ such that γ(v_1) = γ(v_2) and v_1 ≠ v_2. This plot is shown in Figure 4. Observe that for any γ ∈ (0, γ(V̄)) there is a unique (v_1 + v_2). As a result, for any V > V_2 there can be only one unequal variance allocation (v_1, v_2) satisfying equations (3) and (4), ignoring the symmetric solution (v_2, v_1). □

An important consequence of Conjecture 1 is that for V < V_2 the equal variance solution must be optimal to the problem (P1.1).
This result follows from the fact that v_1 + v_2 > V_2 if γ(v_1) = γ(v_2) and v_1 ≠ v_2, as clearly seen in Figure 4. That is, if an unequal variance solution

satisfies the first order conditions, then the total variance is above the lower critical variance level. Hence, if the variance is below the lower critical variance level, the equal variance solution must be optimal since it is the only point that satisfies equation (4).

We now address the issue of finding the worst solution to the problem (P1.1). When the total variance exceeds the lower critical level V_2, the equal variance allocation was found to be a bad solution. The following theorem indicates that it may be the worst solution.

Theorem 2 For the problem (P1.1) there exists an upper critical variance level V̂_2 > V_2 such that the equal variance allocation is worse than an extreme point allocation if and only if V > V̂_2; moreover, under Conjecture 1, the equal variance allocation is the worst allocation. The upper critical variance level is given by

V̂_2 = ((T - μ)/x̃)²

where x̃ is the unique positive solution to

Φ(x) = Φ²(x√2).   (8)

Consider G(v, V - v) as a function of v for a given V > V_2. As demonstrated in the proof of Theorem 1, the equal variance solution is known to be a local minimum for this function. Whether or not it is a global minimum depends upon whether the total variance exceeds the upper critical variance level, V̂_2, according to Theorem 2. This theorem is illustrated graphically in Figures 5a and 5b.

Proof of Theorem 2: Define g_EV(V) = g²(V/2) and g_EP(V) = g(V). For the equal variance (EV) solution with total variance V, the probability that a two component system works is given

by g_EV(V), while g_EP(V) is the probability that a two component system with total variance V works for an extreme point (EP) solution. If we let x = (T - μ)/√V then the equation g_EP(V) = g_EV(V) reduces to equation (8). We are only concerned with positive roots to equation (8) since positive V have a one-to-one correspondence with positive x. To show that x̃ is the unique positive root to equation (8), let

D(x) = Φ(x) - Φ²(x√2).

First note that D(0) = 0.25 and lim_{x→∞} D(x) = 0. We show below that there exists a number δ such that D'(x) < 0 if and only if x < δ; thus D decreases from 0.25 to a minimum D(δ), which must be negative since D then increases back toward its limit of 0, and hence D(x) = 0 has exactly one root, implying that equation (8) has exactly one root. Differentiating D(x) we obtain

D'(x) = Z(x√2) (e^{x²/2} - 2√2 Φ(x√2)).

It is clear that D'(x) > 0 if and only if e^{x²/2} > 2√2 Φ(x√2). But there is only one positive root to e^{x²/2} = 2√2 Φ(x√2), which we call δ. Therefore D'(x) < 0 if and only if x < δ. Thus we have shown that there is exactly one root to D(x) = 0. (This in turn implies that there is exactly one root to the equation g_EV(V) = g_EP(V).) So, g_EV(V) will decrease at a slower rate than g_EP(V) until V = (T - μ)²/δ², then g_EV(V) will decrease at a faster rate than g_EP(V), cross g_EP(V) at V̂_2 (> (T - μ)²/δ²), and g_EV(V) will be less than g_EP(V) thereafter.
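Equation (8) is transcendental, but its root is easy to bracket numerically. The sketch below (with illustrative T and μ) solves for x̃ by bisection and checks that the equal variance value g_EV and the extreme point value g_EP trade places exactly at the resulting V̂_2:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def D(x):
    # D(x) = Phi(x) - Phi(sqrt(2) x)^2; equation (8) is D(x) = 0.
    return Phi(x) - Phi(math.sqrt(2.0) * x) ** 2

# D(0.5) > 0 and D(1.5) < 0 bracket the unique positive root x_tilde.
lo, hi = 0.5, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if D(mid) > 0:
        lo = mid
    else:
        hi = mid
x_tilde = 0.5 * (lo + hi)

T, mu = 10.0, 8.0                     # illustrative parameters
V2_upper = ((T - mu) / x_tilde) ** 2  # upper critical variance level

def g_EV(V):
    return Phi((T - mu) / math.sqrt(V / 2.0)) ** 2  # equal split

def g_EP(V):
    return Phi((T - mu) / math.sqrt(V))             # extreme point

print(x_tilde, V2_upper)
```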

The fact that the equal variance solution is the global minimum under Conjecture 1 when V > V̂_2 follows from the fact that the only other candidates for the global minimum are an unequal variance solution satisfying the first order conditions or an extreme point solution. We have just shown that the equal variance solution is worse than an extreme point solution. Under Conjecture 1, only one unequal variance solution pair can satisfy the first order conditions to problem (P1.1). Since V > V̂_2 > V_2, we know by Theorem 1 that this unequal variance pair must be the global maximum. ∎

5 The n Component Problem

Building on the results from Section 4, we now address the problem (P1) for the n component system. The first order conditions for the problem (P1) are given by

(g'(v_i)/g(v_i)) G(v_1, ..., v_n) = λ, ∀i
Σ_{i=1}^n v_i - V = 0

which, since g'(v)/g(v) = -γ(v), yields

γ(v_i) G(v_1, ..., v_n) = γ*, ∀i   (9)
Σ_{i=1}^n v_i - V = 0   (10)

Without loss of generality we assume that v_1 ≤ v_2 ≤ ··· ≤ v_n. Suppose (v_1*, v_2*, ..., v_n*, γ*) satisfies the first order conditions; then

γ(v_i*) G(v_1*, ..., v_n*) = γ*, ∀i.   (11)

One approach for finding first order points is to select a "potential" value for γ*, find the v_i that solve equation (11), and then see if any combination of these v_i satisfies equation (10). This approach is difficult since the equation γ(v_i) G(v_1, ..., v_n) = γ* is transcendental. Instead, we will examine some of the properties of γ(v) that will allow us to characterize the optimal solution to the problem (P1). The following lemma characterizes a local optimum to the n component problem.

Lemma 1 Any local optimum for the problem (P1) is such that the variance allocated to each component must take one of two values, v_l or v_h, with v_l ≤ V̄ and v_h ≥ V̄.

Proof of Lemma 1: If (v_1*, ..., v_n*, γ*) satisfies the first order conditions, then it must be true that γ(v_i*) G(v_1*, ..., v_n*) = γ* for all i. Given (v_1*, ..., v_n*) we can treat G(v_1*, ..., v_n*) as a constant. We now show that γ(v) = γ*/G(v_1*, ..., v_n*) has at most two roots. From Property 1 we know that γ(v) is increasing when v < V̄ and decreasing when v > V̄. Thus, given a value of γ*, it is clear that γ(v_i) G(v_1*, ..., v_n*) = γ* has at most two solutions, v_l and v_h. By Property 1 we know that v_l ≤ V̄ and v_h ≥ V̄ since the maximum of γ(v) occurs at V̄. ∎

A result that follows from Lemma 1 is that the optimal solution will have k of the n components with variance v_l, and the remaining n - k components with variance v_h. Since k v_l + (n - k) v_h = V, the optimization problem (P1) can be reformulated as

(P1')  Maximize  g(v_l)^k g((V - k v_l)/(n - k))^{n-k}
       subject to  0 ≤ v_l ≤ V/n
                   1 ≤ k ≤ n - 1, k integer.
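The two-level structure in Lemma 1 rests on the unimodality of γ(v) from Property 1. A numeric sketch (illustrative T and μ; the grid-based root counting is a heuristic check, not a proof) confirms that γ(v) = c has at most two solutions:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Z(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

T, mu = 10.0, 8.0   # illustrative parameters
a = T - mu

def gamma(v):
    # Equation (5): gamma(v) = a Z(a/sqrt(v)) / (2 v^{3/2} Phi(a/sqrt(v))).
    x = a / math.sqrt(v)
    return a * Z(x) / (2.0 * v ** 1.5 * Phi(x))

def num_roots(c, lo=1e-3, hi=50.0, steps=20000):
    # Count sign changes of gamma(v) - c on a grid, i.e. the number of
    # solutions of gamma(v) = c; Lemma 1 relies on this being at most two.
    count, prev = 0, gamma(lo) - c
    for i in range(1, steps + 1):
        v = lo + (hi - lo) * i / steps
        cur = gamma(v) - c
        if prev * cur < 0:
            count += 1
        prev = cur
    return count

V_bar = (a / 1.68196) ** 2   # argmax of gamma, from Property 1
peak = gamma(V_bar)
print(V_bar, peak, num_roots(0.5 * peak), num_roots(1.1 * peak))
```

A level below the peak is hit twice (once on each side of V̄) and a level above it is never hit, which is exactly the v_l / v_h dichotomy used above.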

The problem (P1') is a nonlinear-integer program with two decision variables: v_l, the low variance level, and k, the number of components with the low variance level. The remaining n - k components each have the high variance level v_h = (V - k v_l)/(n - k). Note that if v_l = V/n then v_h = V/n and we get the equal variance solution. The problem (P1') is significantly simpler than problem (P1) since for a given value of k, the optimal value of v_l can be determined using a univariate search. Let v_l[k] be the value of v_l that maximizes the objective function in problem (P1') for a given k. The optimal solution to problem (P1') can then be found by comparing the objective function values for all (k, v_l[k]) pairs. Unfortunately, for a large number of components, this procedure becomes tedious since it must be repeated n - 1 times. The following result allows us to perform the search only once and still obtain the optimal solution.

Theorem 3 If an unequal variance solution is optimal to the problem (P1), then all but one component have low variance.

Proof of Theorem 3: We prove the result by contradiction. Suppose that an unequal variance solution is optimal, k components have low variance, and k < n - 1. Then at least two components have high variance. Now form a two component subproblem by considering components k + 1 and k + 2 in isolation. Define V^(k) = v_k + v_{k+1}. Clearly V^(k+1) = v_{k+1} + v_{k+2} = 2 v_h > 2 V̄ = V_2. Thus by Theorem 1, an unequal variance solution is better for the two component subproblem of components k + 1 and k + 2 considered in isolation. But if the solution to this two component subproblem can be improved, this implies that the original solution to the n component problem is not optimal. (This fact is a result of the separability of G.) Thus we have a contradiction, so k = n - 1. ∎

The consequence of Theorem 3 is that given an instance of the problem (P1), we can determine the structure of the optimal solution. Either an equal variance solution is optimal, or an unequal variance solution is optimal with n - 1 components having the low variance level. In either case, to find the optimal values of the v_i, we only need to apply a univariate search to the problem (P1''),

(P1'')  Maximize  θ(v_l) = g(v_l)^{n-1} g(V - (n - 1) v_l)
        subject to  0 ≤ v_l ≤ V/n.

That is, if v_l* is the optimal solution to (P1''), then v_i = v_l* for i = 1, ..., n - 1 and v_n = V - (n - 1) v_l*.

An important question still remains to be answered. When is an unequal variance solution optimal for an n component problem? Recall that for a two component problem, an unequal variance solution is optimal if V > V_2. We now give an analogous result for the n component problem.

Lemma 2 For the problem (P1), there exists a lower critical variance level

V_n = n V̄ ≈ 0.35348 n (T - μ)²

such that an unequal variance solution is optimal if V > V_n.

Proof of Lemma 2: We prove the result by contradiction. Suppose V > V_n but the equal variance solution is optimal. Consider any two components in isolation. The total variance from these two components is 2V/n > 2V̄ = V_2. So for this two component problem considered in isolation, an unequal variance solution must be optimal by Theorem 1. But since G is separable, this means that the equal variance solution cannot be optimal to the problem (P1). ∎
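The univariate search for (P1'') is straightforward to sketch. A simple grid search (illustrative parameters; the grid resolution is chosen arbitrarily) recovers the n - 1 low / one high structure when V > V_n:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

T, mu = 10.0, 8.0  # illustrative parameters with mu < T

def g(v):
    # Probability a component with variance v works; v = 0 means no variability.
    return 1.0 if v <= 0.0 else Phi((T - mu) / math.sqrt(v))

def solve_P1pp(n, V, steps=20000):
    """Grid search for (P1''): give v_l to n - 1 components and
    V - (n - 1) v_l to the remaining one, maximizing theta(v_l)."""
    best_vl, best_val = 0.0, 0.0
    for i in range(steps + 1):
        vl = (V / n) * i / steps
        val = g(vl) ** (n - 1) * g(V - (n - 1) * vl)
        if val > best_val:
            best_vl, best_val = vl, val
    return [best_vl] * (n - 1) + [V - (n - 1) * best_vl], best_val

V_bar = ((T - mu) / 1.68196) ** 2   # argmax of gamma (Property 1)
n = 3
V = 1.5 * n * V_bar                 # total variance above V_n = n * V_bar
alloc, val = solve_P1pp(n, V)
print(alloc, val)
```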

Theorem 3 characterizes the structure of an optimal unequal variance solution but says nothing about when such a solution may be optimal. Lemma 2, on the other hand, gives a sufficient condition, V > V_n, for an unequal variance solution to be optimal. Together, they provide a sufficient condition for an unequal variance solution to be optimal with n - 1 components having the low variance level. However, if V < V_n, we are unable to conclude from these two results that the equal variance solution is optimal. That is, V < V_n is not a sufficient condition for the equal variance solution to be optimal.

We now explore the differences between the solutions to the two component problem and the n component problem. Consider the function ϕ_2(·) which gives the optimal lower variance for a two component problem for any variance V. That is, v_l* = ϕ_2(V) implies that v_l* must satisfy γ(v_l*) = γ(V - v_l*). The function ϕ_2(V) has a unique value for each V as shown in Figure 6. For V < V_2, ϕ_2(V) is linear with slope 0.5, denoting the equal variance solution. When V > V_2, ϕ_2(V) is decreasing. Now consider all first order points which are candidates for the optimal solution to an n component problem as characterized by Theorem 3. Let ϕ_n(·) be a relation that gives all v_l which satisfy the first order conditions of problem (P1''), that is, γ(v_l*) = γ(V - (n - 1) v_l*). So,

v_l* ∈ ϕ_n(V) ⟺ γ(v_l*) = γ(V - (n - 1) v_l*)
             ⟺ γ(v_l*) = γ((V + v_l*) - n v_l*)
             ⟺ v_l* ∈ ϕ_{n+1}(V + v_l*).

Equating v₁* with φₙ(V), we obtain φₙ(V) = φₙ₊₁(V + φₙ(V)). By induction it is easily shown that

φₖ₊₂(V + k φ₂(V)) = φ₂(V).  (12)

In words, this identity says that if v₁* is the optimal solution to a two component problem with total variance V, then it satisfies the first order conditions of a k + 2 component problem with total variance V + k v₁*. Graphs of φ₃ and φ₄ obtained from equation (12) are also shown in Figure 6. An interesting phenomenon occurs for n > 2. The graph of φₙ "bends back" at Vₙ, implying nonuniqueness of candidate optimal solutions for a range of V just below Vₙ. That is, for a V in this range, there are several potential v₁ satisfying equation (12), each of which is pairwise optimal for all two component subproblems. The equal variance solution in this range may not be optimal. To illustrate this point, consider a three component problem with T = 10, μ = 8, for which the critical variance is V₃ ≈ 1.06045(T - μ)² = 4.24178. Suppose the total variance is V = 4.2. There are three points that satisfy the first order conditions: the equal variance solution, (4.2/3, 4.2/3, 4.2/3), and two unequal variance solutions, (1.3703, 1.3703, 1.4594) and (0.9091, 0.9091, 2.3818). The equal variance solution is a local maximum, the point (1.3703, 1.3703, 1.4594) is a local minimum, and the point (0.9091, 0.9091, 2.3818) is the global maximum. For the specified T and μ, the optimal solution to a two component problem has equal variance if V < V₂ ≈ 0.70696(T - μ)² and has unequal variance otherwise. Notice that all pairwise combinations formed from these solutions are optimal.
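The three first order points in this example can be checked directly. A small numerical sketch, under the paper's normality assumption, with the system reliability G as the product of the per-component probabilities:

```python
from math import erf, sqrt, prod

def P(x):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def G(alloc, T=10.0, mu=8.0):
    # System reliability: each component works with probability P((T - mu)/sqrt(v))
    return prod(P((T - mu) / sqrt(v)) for v in alloc)

equal      = (4.2 / 3, 4.2 / 3, 4.2 / 3)  # local maximum
local_min  = (1.3703, 1.3703, 1.4594)     # local minimum
global_max = (0.9091, 0.9091, 2.3818)     # global maximum
```

The three objective values differ only in the third or fourth decimal place, which is why a careless local search can stall at the equal variance point.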

We now summarize our findings for the n component problem. If V > Vₙ, then an unequal variance solution is optimal with n - 1 components having a low variance, vₗ, and the remaining component having a high variance level vₕ = V - (n - 1)vₗ. For V < Vₙ, we can make no claim as to whether an equal or unequal variance solution is optimal. However, if an unequal variance solution is optimal, it must be true that n - 1 components are allocated the low variance level. In any case, the optimal solution can be found by solving the univariate optimization problem (P1''). These results and the related analysis provide an answer to the question of which allocation of variance is best. The following result, analogous to Theorem 2, indicates when the equal variance allocation is the worst.

Theorem 4  For the problem (P1) there exists an upper critical variance level, V̂ₙ ≥ Vₙ, such that if V > V̂ₙ, then the equal variance allocation is worse than an extreme point allocation; moreover, under Conjecture 1, the equal variance allocation is the worst allocation. The upper critical variance level is given by V̂ₙ = (n - 1)V₂.

Proof of Theorem 4: The proof is in two parts. We first show that an extreme point solution cannot be the global minimum. We then show that under Conjecture 1, an interior point, unequal variance solution cannot be the global minimum.

To show that an extreme point cannot be the global minimum when V > V̂ₙ, we first observe that an extreme point solution has v₁ = 0. (Recall that we have assumed without loss of generality that vᵢ ≤ vᵢ₊₁, i = 1, ..., n - 1.) Furthermore, it must be true that vₙ ≥ V/(n - 1) > V₂. If we form a two component subproblem by considering components 1 and n in isolation, then the total variance for this subproblem is greater than V/(n - 1) > V₂. By Theorem 2, the equal variance solution to this two component subproblem has a smaller objective function value than an extreme point solution. Since (v₁, vₙ) is not the global minimum to this two

component subproblem, the original extreme point solution cannot be a global minimum for the problem (P1).

We now show that an interior point, unequal variance solution cannot be the global minimum under Conjecture 1. Suppose that an interior point, unequal variance solution is the global minimum. Then by Lemma 1, it would have k components with variance vₗ and n - k components with variance vₕ. Since it is an unequal variance solution, 1 ≤ k ≤ n - 1. If vₗ + vₕ > V₂, then by Theorem 2 under Conjecture 1, the equal variance solution must be the worst solution to this two component subproblem, which implies that this unequal variance solution to the n component problem cannot be the global minimum.

We now show that vₗ + vₕ > V₂. Let vₗ = V/n - Δ. Then vₕ = V/n + kΔ/(n - k), so

vₗ + vₕ = 2V/n + (2k - n)Δ/(n - k).

First consider the case 2k - n < 0. Then

vₗ + vₕ = 2V/n + (2k - n)Δ/(n - k)
        > 2V/n + (2k - n)V/((n - k)n),   since Δ < V/n
        = V/(n - k)
        > (n - 1)V₂/(n - k),   since V > V̂ₙ = (n - 1)V₂
        ≥ V₂.

Now consider the case 2k - n ≥ 0. Then

vₗ + vₕ = 2V/n + (2k - n)Δ/(n - k) ≥ 2V/n > 2(n - 1)V₂/n ≥ V₂,

since 2(n - 1)/n ≥ 1 for n ≥ 2. ∎
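Theorem 4 can be spot-checked numerically. The sketch below uses the paper's normality assumption; the instance T = 10, μ = 8, n = 3 with V = 6 (just above the upper critical level) is chosen purely for illustration:

```python
from math import erf, sqrt, prod

def P(x):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def G(alloc, T=10.0, mu=8.0):
    # System reliability; zero-variance components work with certainty (mu < T)
    return prod(P((T - mu) / sqrt(v)) if v > 0 else 1.0 for v in alloc)

T, mu, n = 10.0, 8.0, 3
V2_bar  = 0.70696 * (T - mu) ** 2   # two component critical variance V2
V_upper = (n - 1) * V2_bar          # upper critical level of Theorem 4
V = 6.0                             # exceeds V_upper (about 5.656)

equal   = (V / n,) * n              # equal allocation (2, 2, 2)
extreme = (0.0, 0.0, V)             # all variance on one component
```

For this instance the extreme point allocation beats the equal allocation, as the theorem predicts.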

While we have demonstrated that placing all of the variability at one component is a good policy if the total variability is large enough, this strategy may not always be possible in practice. The following corollary demonstrates that for these levels of variability, allocating most of the variance to as few components as possible is a good strategy.

Corollary 1  For the problem (P1), if the total variance V exceeds the upper critical variance V̂ₙ, then the optimal solution in which k components have low variance and n - k components have high variance is worse than the optimal solution in which k + 1 components have low variance and n - k - 1 components have high variance.

Proof of Corollary 1: See Appendix C.

6 Discussion and Application

The analysis we have presented characterizes the optimal structure of the allocation of variance in a system of n components based upon four parameters: n, T, μ, and V. In fact, these four parameters can be combined into a dimensionless quantity in terms of which all of the results can be stated. For example, an unequal variance solution is optimal if

(T - μ) / √(V/n) < x̄.  (13)

Similarly, the equal variance solution is the worst solution if

(T - μ) / √(V/(2(n - 1))) < x̄,

where x̄ is the solution to equation (8). Thus, the results depend not on the individual values of the parameters n, T, μ, and V, but on the fundamental dimensionless quantities expressed above. This observation further strengthens the validity of Conjecture 1. The quantity (T - μ)/√(V/n) reflects the level of variability in the system relative to the difference between the threshold level and the mean performance level of a component.

We now consider the behavior of the optimal objective function value as a function of the total variance V. As V increases beyond Vₙ, vₗ* → 0 and vₕ* → V. The optimal objective function value for large values of V can therefore be approximated as G(0, ..., 0, V) ≈ 0.5: the zero variance components work with certainty, while the component carrying all the variance works with probability P((T - μ)/√V), which approaches 0.5 from above as V grows. That is, for high levels of variance the optimal probability that the system functions properly is approximately 0.5 and is (relatively) independent of the number of stations in the line. As noted earlier, if the variance is sufficiently large, the equal variance solution may be the worst solution. Graphs of objective function values as a function of V are shown in Figure 7 for the optimal solution as well as the equal variance solution for T = 10, μ = 8, and n = 2. As the variance increases beyond the lower critical variance, the difference between the optimal solution and the equal variance solution increases.

Our analysis assumes that the performance of each component is described by a normally distributed random variable. An interesting question is to what extent the results depend on this assumption. At this time, efforts to develop analogous analytic results for other probability distributions have proved intractable. However, computational results for a number of distributions, including the beta, lognormal, and Weibull, demonstrate that similar results hold for a more general class of distributions.
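The large-V behavior is easy to verify directly; a sketch under the normality assumption:

```python
from math import erf, sqrt

def P(x):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def G_all_on_one(V, T=10.0, mu=8.0):
    # G(0, ..., 0, V): zero-variance components work with certainty,
    # so the system reliability is that of the one component carrying V
    return P((T - mu) / sqrt(V))
```

The value decreases monotonically in V and approaches 0.5 from above, matching the approximation in the text.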

In Section 1 we noted that our problem has applications in several areas, including assembly line design, inventory control, semiconductor manufacturing, and quality control. We now interpret our results in the context of an assembly line design problem. In the framework presented here, a component corresponds to a station on the assembly line, and the performance of a component corresponds to the processing time of a job at a station. As described in Section 1, the operators at each station on a synchronous assembly line each have T time units to complete their assigned work, where T is the cycle time of the line. If an operator at any station is unable to complete his/her work within the cycle time, then either the job is tagged and the work is completed at a rework station at the end of the line, or the line is stopped until all workers have completed their work.

The detrimental impact of variability on an assembly line is linked to worker utilization, ρ = μ/T. In designing a synchronous assembly line, one can minimize the likelihood of reworking jobs or stopping the line by giving each station a small mean processing time relative to the cycle time. This approach, however, will lead to poor worker utilization. Conversely, if the amount of work assigned to each station is such that the mean processing time approaches the cycle time, then worker utilization will be high but the probability that a job requires rework or that the line must be stopped will become significant. If it is desired to have a worker utilization of ρ, then the condition for an unequal variance solution to be optimal, V > nV̄, is equivalent to

c > (1 - ρ) / (x̄ρ),  (14)

where c = √(V/n)/μ is the per station coefficient of variation. According to equation (14), as the desired level of worker utilization increases, the level of system variability that necessitates an unequal allocation of variance decreases. Suppose the assembly line designer desires a worker utilization of 90%, i.e., ρ = 0.9.
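The threshold in equation (14) is simple to evaluate; a small sketch using the root x̄ ≈ 1.68196 from Appendix A (the function name is ours):

```python
x_bar = 1.68196  # positive root of C(x) = 0, from Appendix A

def cov_threshold(rho):
    # Per-station coefficient of variation beyond which an unequal
    # allocation of variance is optimal, per equation (14)
    return (1.0 - rho) / (x_bar * rho)
```

At ρ = 0.9 the threshold is about 0.066, the figure used below; it shrinks as the target utilization rises.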
In that case, an unequal allocation of variance is

optimal if the average coefficient of variation of the processing time at a station is greater than 0.066. This level of variability is smaller than those cited in other studies of assembly lines with variable processing times (Conway, Maxwell, McClain, & Thomas, 1988; Hillier & Boling, 1979; Carnall & Wild, 1976). In this case, the designer of the line should try to allocate as much of the variability as possible to one station.

In an environment where the variability in processing time is due to different options available on a product, it may be advantageous to assign the tasks that are related to options to the same station. To demonstrate this, consider the following simple example based on the four tasks shown in Table 1. Option penetration refers to the percentage of jobs that require the task.

Task   Time (seconds)   Option Penetration (%)   Expected Time
A           30                  100                    30
B           30                  100                    30
C           60                   50                    30
D           60                   50                    30

Table 1: Task Data for 4 Task Example

Now consider two alternative lines. In the first alternative, Line 1, tasks A and C are assigned to Station 1 and tasks B and D are assigned to Station 2. In this case, not only is the total mean processing time at Stations 1 and 2 the same, 60 seconds, but so are the distributions of processing time. In the second alternative, Line 2, tasks A and B are assigned to Station 1 and tasks C and D are assigned to Station 2. In this case, each station again has a mean processing time of 60 seconds, but Station 2 is the only station with any variability in processing time. If the cycle time of the line is between 60 and 90 seconds, then the probability that a job on Line 1 requires rework is 0.75, while the probability that a job on Line 2 requires rework is only 0.25. (These are also the probabilities of stopping the line due to one or both stations requiring processing beyond the cycle time.) If the cycle time of the line is between 90 and 120 seconds, then jobs on Line 1 will never require rework, while the probability

that a job on Line 2 will require rework is still 0.25. Thus as T, and hence T - μ, increases, an equal allocation of variance becomes optimal. This behavior is predicted by Lemma 2, which states that if the total variability exceeds the lower critical variance, an unequal allocation of variance is optimal; an alternative reading is that, for given μ and V, an equal allocation of variance becomes optimal as T increases. Note that in this example, the range of T for which an equal variance allocation is optimal corresponds to a range of worker utilization between 50% and 67%, while the range of T for which an unequal variance allocation is optimal corresponds to a range of worker utilization between 67% and 100%.

Another source of variability in synchronous assembly lines is machine or tool failure. In this environment, one must understand how decreasing these failure rates improves the overall system performance given by expression (1). Theorem 3 and Corollary 1 suggest that it is better to improve the performance of the line by concentrating process improvement efforts one station at a time.

7 Conclusions

We have analyzed the effect of variability on a system of components that works if and only if each component works. The results demonstrate that if the total variance in the system is sufficiently large, an unequal allocation of the variance is optimal, and an equal allocation of the variance can be the worst solution. Additionally, the optimal allocation can be found using a simple univariate optimization technique. Practically, variability should be isolated at as few components as possible. The results can also guide continuous improvement efforts in variance reduction: if the variability of the system can be reduced, they indicate where variance reduction provides the most leverage.

A Roots of C(x)

Consider the function C(x) = (3 - x²)P(x) - xZ(x), where P and Z denote the standard normal distribution and density functions, respectively. C(x) = 0 has exactly one positive root if and only if (3 - x²)P(x) = xZ(x) has one positive root. Call the left side of this equation L(x) and the right side R(x). Clearly,

1. 0 < R(x) < x/√(2π) for x > 0.

2. L(x) < 0 for x > √3.

3. (3 - x²)/2 ≤ L(x) ≤ 3 - x² for 0 < x < √3.

Since (3 - x²)/2 > x/√(2π) for 0 < x < (√(6π + 1) - 1)/√(2π), L(x₀) = R(x₀) is possible only if

x₀ ∈ I = [(√(6π + 1) - 1)/√(2π), √3].

Write a for the left endpoint of I. Since R(a) = aZ(a) < (3 - a²)P(a) = L(a), R(√3) = √3 Z(√3) > (3 - 3)P(√3) = L(√3), and both L(x) and R(x) are continuous on I, there must be at least one solution to L(x) = R(x) on I. We now show that there is exactly one solution on I by showing that L'(x) < R'(x) < 0 on I. Differentiating L(x) and R(x), we obtain

L'(x) = -2xP(x) + (3 - x²)Z(x),
R'(x) = (1 - x²)Z(x).
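The root x̄ can be located by bisection on the interval I, following the sign change established above; a direct numerical sketch:

```python
from math import erf, exp, pi, sqrt

def P(x):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def Z(x):
    # Standard normal density function
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

def C(x):
    return (3.0 - x * x) * P(x) - x * Z(x)

def bisect_root(f, lo, hi, iters=100):
    # Plain bisection; assumes f(lo) and f(hi) have opposite signs
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

a = (sqrt(6.0 * pi + 1.0) - 1.0) / sqrt(2.0 * pi)  # left endpoint of I
x_bar = bisect_root(C, a, sqrt(3.0))
```

The computed root agrees with the value x̄ ≈ 1.68196 reported in the appendix.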

Since 1 - x² < 0 on I and Z(x) > 0 on I, R'(x) < 0 on I. Moreover, on I,

L'(x) = -2xP(x) + (3 - x²)Z(x) ≤ -2.35,
R'(x) = (1 - x²)Z(x) ≥ -0.31.

Therefore L'(x) < R'(x) < 0, so R(x) and L(x) intersect exactly once on I. This proves that C(x) = 0 has one positive root, which we call x̄. Now, since C(x) is continuous for positive x, lim_{x→0⁺} C(x) = 1.5, and lim_{x→∞} C(x) = -∞, we know that C(x) is positive when x < x̄ and negative when x > x̄. Since C(x) = 0 is a transcendental equation, we cannot solve for x̄ explicitly, though through numerical techniques we have found that x̄ ≈ 1.68196.

B Proof of Property 1

The derivative of γ(v) is given by

γ'(v) = (T - μ)Z(x)[(x² - 3)P(x) + xZ(x)] / (4v^(5/2)P(x)²),   where x = (T - μ)/√v.

Therefore, γ'(v) is positive (negative) if C(x) = (3 - x²)P(x) - xZ(x) is negative (positive). In Appendix A it was demonstrated that there exists a number x̄ such that C(x) > 0 if and only if x < x̄. Hence γ'(v) < 0 if and only if x < x̄, i.e., γ(v) is decreasing if and only if v > V̄ ≡ (T - μ)²/x̄². That is, γ(v) is increasing on (0, V̄), achieves its maximum at v = V̄, and is decreasing on (V̄, ∞). ∎
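Property 1 can be checked by direct evaluation. The sketch below assumes γ(v) = (T - μ)Z(x)/(2v^(3/2)P(x)) with x = (T - μ)/√v, the form consistent with the derivative above; with T = 10 and μ = 8, the critical variance V̄ is about 1.414:

```python
from math import erf, exp, pi, sqrt

T, mu = 10.0, 8.0

def P(x):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def Z(x):
    # Standard normal density function
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

def gamma(v):
    # gamma(v) = (T - mu) Z(x) / (2 v^{3/2} P(x)),  x = (T - mu)/sqrt(v)
    x = (T - mu) / sqrt(v)
    return (T - mu) * Z(x) / (2.0 * v ** 1.5 * P(x))

V_bar = 0.35348 * (T - mu) ** 2  # (T - mu)^2 / x_bar^2, about 1.414
```

Sampling γ on either side of V̄ confirms the increase-then-decrease shape asserted by Property 1.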

C Proof of Corollary 1

Consider the optimal solution to the problem in which k components have low variance and n - k components have high variance. Let this solution be

v⁽¹⁾ = (vₗ(k), ..., vₗ(k), vₕ(k), ..., vₕ(k)),

with k low-variance components followed by n - k high-variance components. Now consider v⁽²⁾, a solution to the problem in which k + 1 components have the low variance vₗ(k) and n - k - 1 components have the high variance vₕ' = (V - (k + 1)vₗ(k))/(n - k - 1). That is,

v⁽²⁾ = (vₗ(k), ..., vₗ(k), vₕ', ..., vₕ'),

with k + 1 low-variance components followed by n - k - 1 high-variance components. These two solutions assign the same variance to each of the first k components. Therefore, consider the (n - k) component subproblem with total variance V' = V - kvₗ(k) formed by considering the last n - k components of either solution. The solution to this subproblem formed from v⁽¹⁾ is

ṽ⁽¹⁾ = (vₕ(k), ..., vₕ(k)),

with n - k components; the solution to this subproblem formed from v⁽²⁾ is

ṽ⁽²⁾ = (vₗ(k), vₕ', ..., vₕ'),

with one low-variance component and n - k - 1 high-variance components. Note that ṽ⁽¹⁾ is an equal variance solution to this subproblem. If we can demonstrate that the total variance for this subproblem, V', is greater than the upper critical variance level for the subproblem, then by Theorem 4 it will be true that ṽ⁽²⁾ is a better solution to the subproblem than ṽ⁽¹⁾.

But

V' = V - kvₗ(k) > V̂ₙ - kvₗ(k) = (n - 1)V₂ - kvₗ(k) > (n - 1)V₂ - kV₂ = (n - k - 1)V₂ = V̂ₙ₋ₖ,

where the last strict inequality uses vₗ(k) < V₂. Hence the total variance for the subproblem is greater than the upper critical variance for the subproblem. Therefore ṽ⁽¹⁾ is a worse solution to the subproblem than ṽ⁽²⁾. This implies that v⁽¹⁾ is a worse solution to the original problem than v⁽²⁾, i.e., G(v⁽¹⁾) < G(v⁽²⁾). Recall that v⁽²⁾ is a solution which has k + 1 components with low variance and n - k - 1 components with high variance. If we let

v⁽³⁾ = (vₗ(k+1), ..., vₗ(k+1), vₕ(k+1), ..., vₕ(k+1)),

with k + 1 low-variance components and n - k - 1 high-variance components, be the optimal solution to the problem with total variance V and k + 1 components with low variance, then it must be true that G(v⁽²⁾) ≤ G(v⁽³⁾). Therefore G(v⁽¹⁾) < G(v⁽²⁾) ≤ G(v⁽³⁾), i.e., the optimal solution with k components having low variance is worse than the optimal solution with k + 1 components having low variance. ∎
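Corollary 1 can be illustrated numerically: with n = 3 and total variance above the upper critical level, the best allocation with two low-variance components beats the best with one. A sketch under the normality assumption (T = 10, μ = 8; the grid-search helper is ours):

```python
from math import erf, sqrt

T, mu = 10.0, 8.0

def P(x):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def g(v):
    # Probability a single component works; certain if its variance is zero
    return P((T - mu) / sqrt(v)) if v > 0 else 1.0

def best_with_k_low(V, n, k, grid=4000):
    # Best objective when k components share a low variance vl
    # and the remaining n - k components split the rest equally
    best = 0.0
    for i in range(grid + 1):
        vl = i * (V / n) / grid
        vh = (V - k * vl) / (n - k)
        best = max(best, g(vl) ** k * g(vh) ** (n - k))
    return best

n, V = 3, 6.0  # V exceeds the upper critical level (n - 1) * V2, about 5.656
```

Concentrating the variance on fewer components yields the better of the two structured optima, as the corollary states.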

References

Carnall, C.A. and Wild, Ray. (1976), "The Location of Variable Work Stations and the Performance of Production Flow Lines", International Journal of Production Research 14, 703-710.

Carraway, Robert L. (1989), "A Dynamic Programming Approach to Stochastic Assembly Line Balancing", Management Science 35, 459-471.

Conway, Richard, Maxwell, William, McClain, John O., and Thomas, L. Joseph. (1988), "The Role of Work-in-Process Inventory in Serial Production Lines", Operations Research 36, 229-241.

Henig, Mordechai I. (1986), "Extensions of the Dynamic Programming Method in the Deterministic and Stochastic Assembly-Line Balancing Problems", Computers and Operations Research 13, 443-449.

Hillier, Frederick S. and Boling, Ronald. (1979), "On the Optimal Allocation of Work in Symmetrically Unbalanced Production Line Systems with Variable Operation Times", Management Science 25, 721-728.

Kao, Edward P. C. (1976), "A Preference Order Dynamic Program for Stochastic Assembly Line Balancing", Management Science 22, 1097-1104.

Karmarkar, Uday S. (1981), "Equalization of Runout Times", Operations Research 29, 757-762.

Krajewski, Lee J. and Ritzman, Larry P. (1993), Operations Management: Strategy and Analysis, Addison-Wesley, Reading, Massachusetts.

Lau, Hon-Shiang. (1992), "On Balancing Variances of Station Processing Times in Unpaced Lines", European Journal of Operational Research 56, 345-356.

Lau, Hon-Shiang. (1986), "A Directly Coupled Two Station Line", IIE Transactions 18, 304-

312.

Lau, Hon-Shiang and Martin, G. E. (1987), "The Effects of Skewness and Kurtosis of Processing Times in Unpaced Lines", International Journal of Production Research 25, 1483-1492.

Luss, Hanan and Gupta, Shiv K. (1975), "Allocation of Effort Resources among Competing Activities", Operations Research 23, 360-366.

Mendelson, Haim, Pliskin, Joseph S., and Yechiali, Uri. (1980), "A Stochastic Allocation Problem", Operations Research 28, 687-693.

Sniedovich, Moshe. (1983), "A Class of Variance-Constrained Problems", Operations Research 31, 338-353.

Sniedovich, Moshe. (1981), "Analysis of a Preference Order Assembly Line Problem", Management Science 27, 1067-1080.

Zipkin, Paul H. (1980), "Simple Ranking Methods for Allocation of One Resource", Management Science 26, 34-43.

D Figures

Figure 1. Probability a component works, g(v).

Figure 2. γ(v).

Figure 3. (a) and (b): γ(v₁) and γ(v₂) = γ(V - v₁) for V > V₂ and V < V₂, respectively. (c) and (d): G(v₁, v₂) = G(v₁, V - v₁) for V > V₂ and V < V₂, respectively.

Figure 4. v₁ + v₂ as a function of γ, where γ(v₁) = γ(v₂), v₁ ≠ v₂.

Figure 5. (a) and (b): G(v₁, v₂) = G(v₁, V - v₁). N.B. Vertical axes not to the same scale.

Figure 6. φₙ(V) as a function of V, for n = 2, 3, 4.

Figure 7. θ(vₗ*) and θ(V/n) for n = 2, T = 10, μ = 8.