ON EXTENDING THE SCOPE OF A BOUNDING TECHNIQUE FOR CLOSED QUEUEING NETWORKS

M. M. SRINIVASAN
Department of Industrial & Operations Engineering
The University of Michigan
Ann Arbor, Michigan 48109-2117

July 1985
Revised April 1986

Technical Report 85-25
(To appear in Large Scale Systems Journal)

On extending the scope of a bounding technique for closed queueing networks

Abstract

Queueing network models are widely used to analyze the performance of automated manufacturing facilities and computer systems. These models aid the designers/operators of the system in evaluating the impact of various alternate configurations on the performance of these systems. In many instances, however, the exact analysis of these networks for the measures of performance may not be necessary; some bounds on the performance measures may be adequate. This has motivated research on obtaining bounds on performance measures, and a number of techniques have recently been developed. These techniques have generally obtained bounds on the throughput for networks with a single class of customers, where each node is either a single server fixed-rate type or a delay (infinite server) type. In this paper we extend the scope of these techniques to networks where non-delay nodes are allowed to have service rates which depend on the number of jobs present at the nodes. In addition, efficient means are developed for calculating bounds on the mean queue lengths forming at the nodes in these networks.

1. Introduction:

The performance of complex systems such as flexible manufacturing systems (FMS) is often evaluated using queueing networks. Several studies using such analytical models of FMS have been made in the past [12,15,16]. These models aid the FMS designers in predicting the behaviour of these systems under different configurations. In the FMS context, closed queueing network models are usually preferred over open queueing network models as being more realistic [16], although open models are easier to analyze. The exact solution of these queueing networks is, however, infeasible in general unless certain assumptions are made. These assumptions give rise to a certain class of networks known as Product Form (PF) networks, or separable networks. For these PF networks, it is possible to obtain equilibrium performance measures with relatively little computational effort.

Quite often, exact solutions of these queueing networks may not be needed. One such situation arises in the design phase of a system, where the workload parameters are themselves not usually known with reasonable accuracy. Similarly, when many alternate configurations are to be evaluated, obtaining exact solutions may be unnecessary. In such cases, one would like to obtain approximate solutions fairly quickly. As an example, consider a flexible assembly system where components to be assembled move from one workstation to another, undergoing some operations at each point. Suppose that each workstation would operate proportionately faster depending on the number of assembly operators present at the station. Assuming, for this simple example, that all the available operators are equally capable of working at any station, the task of the supervisor here could be the assignment of additional operators to each workstation at the start of each shift, so as to maximize the throughput of assembled parts. In such a situation, the presence of a technique which can obtain quick, albeit approximate, estimates of throughputs

for several operator assignments would be very helpful to the supervisor. Needless to say, these approximate solutions should take relatively little time to compute, and the resulting errors in approximation should preferably be bounded.

Recent years have witnessed a substantial amount of work on developing techniques for bounding the performance measures of closed queueing networks [3,5,7,11,13,14,17,19]. These bounding techniques require the PF assumption, and they usually consider networks with a single class of customers where each node in the network is either a single server fixed-rate type or a delay (infinite server) type. The bounds obtained are for the cycle time (the mean system residence time) of a job, or alternately, the throughput of the system (the number of job completions by the system in unit time).

In this paper, it is shown how the scope of these bounding techniques can be extended in two ways. First, bounds are obtained on the cycle time/throughput of PF networks where nodes exhibit a certain kind of load dependent behaviour. Specifically, it is required here that the service rate at these nodes is a nondecreasing function of the number of customers present at the nodes. Examples of such nodes, apart from delay nodes, are multiple server nodes and flow-equivalent nodes [2]. Secondly, an effective means of obtaining bounds on the mean queue lengths at the nodes in a PF network is also demonstrated. This method uses the Convolution algorithm [1] and a bounding technique [13] which obtains successively improving bounds (SIB) on the cycle time/throughput.

Previous work:

As noted before, work on obtaining bounds has usually been restricted to networks with fixed rate and delay nodes. The Balanced Job Bounds [19] (BJB) technique obtains a set of simple bounds on the cycle time. However, the bounds

obtained here can be quite loose. A bounding scheme based on the Mean Value Analysis (MVA) algorithm [10] is the technique due to Kriz [7]. However, these bounds are effective only when the network population becomes large, at which point the network begins to behave like an open network. The SIB technique, the Performance Bound Hierarchies [5] (PBH), and the Generalized Quick Bounds [17] (GQB) are also based on the MVA algorithm. These techniques obtain a sequence of increasingly tighter bounds at the expense of increased computational effort. The GQB technique cannot handle delay nodes and does not appear to perform significantly better than the Balanced Job Bounds. The PBH technique produces a sequence of improving bounds which converge to the exact solution, but it requires considerable computational effort to produce tighter bounds, as it uses an iterative algorithm. The SIB technique, on the other hand, obtains closed form expressions for the bounds and is thus much easier to evaluate than the PBH bounds. Further, the SIB technique produces bounds which are usually tight for all population values. Recently, some results have been presented in [3,11,14] which obtain a lower bound on the throughput of networks with multiserver nodes. These techniques depend, in turn, on bounds obtained for a network consisting only of fixed rate and delay nodes. These approaches are reviewed in section 3.

The outline of the rest of this paper is as follows: In section 2 the SIB technique is extended to networks where non-delay nodes are allowed to exhibit a limited form of load dependent behaviour. Section 3 then illustrates how this technique can be applied to networks with more general load dependent behaviour, such as networks with multi-server nodes. Section 4 presents methods for obtaining effective bounds on the mean queue lengths forming at the nodes in networks consisting only of fixed rate and delay nodes.

2. Bounds for networks with a class of load dependent nodes:

2.1 Preliminaries:

Consider a closed PF queueing network with M nodes and a customer population N. We adopt the following notation:

Tm: mean value of the service demand by a customer per visit to node m
vm: mean number of visits to node m to service a typical request
μm(n): service rate at node m when n customers are present at the node
Pm(i|N): marginal queue size probability of i customers at node m with network population N
Qm(N): mean queue length at node m with network population N
wm(N): mean residence time at node m for a request at network population N
W(N): the cycle time for a typical request (the sum of the mean residence times at all nodes) at network population N
X_N: throughput of the network with network population N

Each node in the network is allowed here to display some limited type of load dependent behaviour. Specifically, for n ≥ 0, this has the form

μm(n) = n/(am + bm·n);  am, bm ≥ 0.   (2.1)

It is implicitly assumed here that am + bm > 0. We choose to term these nodes as nodes with a Parametrized Rate function, or PR nodes. In general, for the networks considered here, each node m is thus associated with a set of four data parameters: am, bm, Tm, and vm. Note that if am = 0, equation (2.1) characterizes the behaviour of a fixed rate node, and if bm = 0, this equation characterizes the behaviour of a delay node. For different values of am and bm, a variety of service rate functions can be generated, as indicated in Figure 2.1. In the following discussion, unless otherwise specified, the index for any summation is over the range 1,...,M. We also implicitly assume that we are considering network populations N ≥ 2.

The mean residence time at any node in the network can be written as [10]:

wm(N) = vm·Tm·Σ_{n=1}^{N} (n/μm(n))·Pm(n-1|N-1).   (2.2)

From equations (2.1) and (2.2), we can write

wm(N) = vm·Tm·Σ_{n=1}^{N} (am + bm·n)·Pm(n-1|N-1) = Im + Lm + Lm·Qm(N-1),   (2.3)

where

Lm = bm·vm·Tm,   (2.4a)

and

Im = am·vm·Tm.   (2.4b)

The values Lm and Im are, for convenience, referred to as the 'fixed rate' load and the 'delay' load at node m. The cycle time, W(N), is then

W(N) = Σ_m Lm + Σ_m Im + Σ_m Lm·Qm(N-1).   (2.5)
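Equations (2.3) and (2.5), together with Little's rule (equation (2.11) below), already yield the exact solution by recursion on the population. The following sketch, in Python, implements that recursion for a network of PR nodes; it is included here only as a reference against which the bounds developed below can be checked, and the function and variable names are ours, not part of the original development.

```python
def exact_mva_pr(L, I, N):
    """Exact recursion for a closed PF network of PR nodes (assumes N >= 1).

    L[m] = b_m * v_m * T_m  -- the 'fixed rate' load of node m, eq. (2.4a)
    I[m] = a_m * v_m * T_m  -- the 'delay' load of node m, eq. (2.4b)
    Returns the cycle time W(N), the throughput X_N and the mean queue lengths Qm(N).
    """
    M = len(L)
    Q = [0.0] * M                                           # Qm(0) = 0
    for n in range(1, N + 1):
        w = [I[m] + L[m] + L[m] * Q[m] for m in range(M)]   # eq. (2.3)
        W = sum(w)                                          # eq. (2.5)
        X = n / W                                           # Little's rule
        Q = [X * wm for wm in w]                            # eq. (2.11)
    return W, X, Q
```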

Let

ρm = Lm / Σ_k (Lk + Ik),   (2.6a)

σm = Im / Σ_k (Lk + Ik),   (2.6b)

θm = ρm + σm,   (2.6c)

and define a sequence of terms {Si}, i = 1, 2,..., by

Si = Σ_m ρm^{i-1}·θm.   (2.7)

The expression for the cycle time given by equation (2.5) is then rewritten as

W(N) = (Σ_m (Lm + Im))·(1 + Φ(N-1)),   (2.8)

where

Φ(K) = Σ_m ρm·Qm(K),  K ≥ 0.   (2.9)

For notational convenience, set

D_K = 1 + Φ(K).   (2.10)

In the expression for the cycle time given by equation (2.8), the unknown term is Φ(N-1). A bound on Φ(N-1) then directly gives a bound on the cycle time.

2.2 The bounding technique:

The mean queue length at node m at population K ≥ 0 is given by Little's rule [9] as:

Qm(K) = X_K·wm(K) = K·wm(K)/W(K).   (2.11)

Hence, from equations (2.3), (2.8), (2.9) and (2.11), we express Φ(N-1) as

Φ(N-1) = (N-1)·Σ_m ρm·wm(N-1)/W(N-1) = (N-1)·Σ_m [ρm·θm + ρm²·Qm(N-2)] / (1 + Φ(N-2)),

and, using equations (2.6c) and (2.10), this expression is rewritten as

Φ(N-1) = (N-1)·S2 + ((N-1)/D_{N-2})·Y1(N-2),   (2.12)

where

Y1(N-2) = Σ_m ρm²·Qm(N-2) − S2·Φ(N-2).   (2.13)

Let

A_{K,i} = Π_{j=0}^{i-1} X_{K-j}.   (2.14)

Theorem 2.1 first establishes an expression for the mean queue length forming at

a node with network population N in terms of the throughputs at populations 1 through N.

Theorem 2.1: The mean queue length at a node m is given by

Qm(N) = Q̃m(N)·θm/ρm,   (2.15)

where

Q̃m(N) = Σ_{i=1}^{N} Lm^i·A_{N,i}.   (2.16)

Proof: From equations (2.11) and (2.3), for K > 1,

Qm(K) = X_K·(Lm + Im) + X_K·Lm·Qm(K-1),

and by a repeated application of the above expression for Qm(K), for N ≥ K > 1,

Qm(N) = (Lm + Im)·A_{N,1} + ... + (Lm + Im)·Lm^{N-2}·A_{N,N-1} + Lm^{N-1}·A_{N,N-1}·Qm(1).

Noting that Qm(1) = X_1·wm(1) = X_1·(Lm + Im), we can write

Qm(N) = (Lm + Im)·Σ_{i=1}^{N} Lm^{i-1}·A_{N,i}.

The result follows by noting that (Lm + Im)/Lm = θm/ρm.

A set of simple upper and lower bounds on the cycle time is now established. These are termed level 1 bounds, analogous to [13]. These bounds are obtained by noting that, in equation (2.12), Y1(N-2) ≥ 0. To obtain this, we need the following:

Definition 2.1: Given a sequence X = {xm}, m = 1,..., M, such that x_M ≥ x_{M-1} ≥ ... ≥ x_1 ≥ 0, a sequence Y = {ym}, m = 1,..., M, is said to have a Positive Correspondence with X, denoted as Y(PC)X, if for some k, 1 ≤ k ≤ M, y_M ≥ ... ≥ y_k ≥ 0, and y_{k-1},..., y_1 ≤ 0.

Lemma 2.1: Given sequences U = {um}, X = {xm}, Y = {ym}, m = 1,..., M, such that

(a) um ≥ 0, m = 1,..., M,
(b) x_M ≥ ... ≥ x_1 ≥ 0,
(c) Y(PC)X,
(d) Σ_m um ≤ 1,
(e) Σ_m um·ym ≥ 0,

then

Σ_m xm·um·ym ≥ (Σ_m xm·um)·(Σ_n un·yn).   (2.17)

A proof of Lemma 2.1 is given in Appendix A. Note that if we let Q = {Qm(K)} and ρ = {ρm}, m = 1,...,M, with the nodes indexed in nondecreasing order of ρm, then Q(PC)ρ. In equation (2.17), setting xm = ρm, um = θm, and ym = (ρm/θm)·Qm(K), and noting that θm·ym = ρm·Qm(K), we get the following result, which we state as Theorem 2.2:

Theorem 2.2:

Σ_m ρm²·Qm(K) ≥ (Σ_m ρm·θm)·(Σ_n ρn·Qn(K)).   (2.18)

From Theorem 2.2, we can obtain a set of upper and lower bounds on D_{N-1} (and hence on the cycle time) which we term the level 1 SI bounds. The bounds are expressed as Theorem 2.3. A proof of Theorem 2.3 is given in Appendix B.

Theorem 2.3: Let u be the node such that Lu = max_m {Lm}. Then the level 1 SI bounds on D_{N-1} are given by

D_{N-1}^L ≤ D_{N-1} ≤ D_{N-1}^U,   (2.19)

where

D_{N-1}^L = 1 + Ψ,   (2.19a)

D_{N-1}^U = 0.5·[1 + ξ + SQRT((ξ − 1)² + 4Ψ)],   (2.19b)

with

ξ = (N-1)·ρu,   (2.19c)

Ψ = (N-1)·S2.   (2.19d)

Remark: If we set Im = 0 for m = 1,...,M, we have a network of fixed rate nodes. The Balanced Job (BJ) Bounds technique can obtain bounds on the term D_{N-1} in this case. These bounds require computation of the terms Lu and La, where La = Σ_m Lm/M, and these computations involve a total of about 2M operations. Computing the upper and lower BJ bounds on D_{N-1} then involves 2 more arithmetic operations, for a total of 2M + 2 operations. Now consider the level 1 SI bounds on D_{N-1} obtained using Theorem 2.3. These bounds require computation of the terms S2 and ρu, which involves a total of 3M arithmetic operations; once these terms are obtained, computing D_{N-1}^L requires 2 more operations, and computing D_{N-1}^U requires about 9 more operations, for a total of 3M + 11 operations. Hence the level 1 SI bounds on D_{N-1} require about M + 9 additional operations over that needed by the BJ bounds. However, for this case it is easily shown [13] that the level 1 upper and lower SI bounds are both tighter than the corresponding BJ bounds. Further, the BJ Bounds technique does not handle delay nodes efficiently.
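A minimal sketch of the level 1 SI bounds of Theorem 2.3 is given below, under the assumption that equations (2.6), (2.7) and (2.19) are as reconstructed above; the function name, and the throughput bounds returned through equation (2.8) and Little's rule, are our own additions.

```python
import math

def level1_si_bounds(L, I, N):
    """Level 1 SI bounds (Theorem 2.3) on D_{N-1}, and the implied bounds on X_N.

    Returns (D_low, D_up, X_low, X_up).
    """
    tot = sum(L) + sum(I)                                   # sum_m (L_m + I_m)
    rho = [l / tot for l in L]                              # eq. (2.6a)
    theta = [(l + i) / tot for l, i in zip(L, I)]           # eq. (2.6c)
    S2 = sum(r * t for r, t in zip(rho, theta))             # eq. (2.7) with i = 2
    psi = (N - 1) * S2                                      # eq. (2.19d)
    xi = (N - 1) * max(rho)                                 # eq. (2.19c)
    D_low = 1.0 + psi                                       # eq. (2.19a)
    D_up = 0.5 * (1.0 + xi + math.sqrt((xi - 1.0) ** 2 + 4.0 * psi))   # eq. (2.19b)
    # W(N) = tot * D_{N-1} by eq. (2.8), and X_N = N / W(N); bounds on D_{N-1}
    # therefore translate directly into bounds on the cycle time and throughput.
    return D_low, D_up, N / (tot * D_up), N / (tot * D_low)
```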

Define

α_0 = S2,   (2.20a)

and

α_i = S_{i+2} − Σ_{j=0}^{i-1} S_{i+1-j}·α_j,  i > 0.   (2.20b)

The term Y1(N-2) can now be expressed as the sum of a sequence of nonnegative terms involving the throughputs at populations N-1,...,1 and the terms α_i, i ≥ 0. This gives an expression for the term D_{N-1} which is stated as Theorem 2.4. This theorem makes a statement analogous to a similar statement proved in [13], and its proof is omitted here.

Theorem 2.4: The term D_{N-1} can be written as:

D_{N-1} = 1 + (N-1)·( S2 + ((N-2)/D_{N-2})·α_1 + ((N-2)(N-3)/(D_{N-2}·D_{N-3}))·α_2 + ... + ((N-2)···1/(D_{N-2}···D_1))·α_{N-2} ).   (2.21)

Equation (2.21) for D_{N-1} is used to obtain a sequence of increasingly tighter lower bounds, which we term Successively Improving bounds of higher levels. This is achieved by considering more and more terms from the expression for D_{N-1} in the bound. A level 1 lower SI bound uses the first two terms in the expression. For example, a level 2 lower SI bound, denoted D_{N-1}^{(2)}, makes use of the first three terms from the above expression for D_{N-1} and is obtained from

D_{N-1} ≥ 1 + (N-1)·(S2 + (N-2)·α_1/D_{N-1}).

Solving this quadratic in D_{N-1} gives:

D_{N-1}^{(2)} = 0.5·(1 + (N-1)·α_0 + SQRT(T1)),   (2.22a)

where

T1 = ((N-1)·α_0 + 1)² + 4·(N-1)·(N-2)·α_1.   (2.22b)
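Continuing the earlier sketch, the level 2 lower SI bound of equation (2.22) needs only the additional term S3; the code below assumes the forms of equations (2.20) and (2.22) as reconstructed above.

```python
import math

def level2_lower_si_bound(L, I, N):
    """Level 2 lower SI bound on D_{N-1}, eqs. (2.22a)-(2.22b)."""
    tot = sum(L) + sum(I)
    rho = [l / tot for l in L]
    theta = [(l + i) / tot for l, i in zip(L, I)]
    S2 = sum(r * t for r, t in zip(rho, theta))            # eq. (2.7), i = 2
    S3 = sum(r * r * t for r, t in zip(rho, theta))        # eq. (2.7), i = 3
    alpha0, alpha1 = S2, S3 - S2 * S2                      # eqs. (2.20a), (2.20b) for i = 1
    T1 = (1.0 + (N - 1) * alpha0) ** 2 + 4.0 * (N - 1) * (N - 2) * alpha1   # eq. (2.22b)
    return 0.5 * (1.0 + (N - 1) * alpha0 + math.sqrt(T1))                   # eq. (2.22a)
```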

Now, a sequence of upper SI bounds is obtained by noting that, from equations (2.15) and (2.16) for the mean queue lengths, we can write

Σ_m ρm²·Qm(k) = ρu·Φ(k) − Σ_m (ρu − ρm)·ρm·Qm(k)
             = ρu·Φ(k) − (k/D_{k-1})·(ρu·S2 − S3) − (k/D_{k-1})·Σ_m (ρu − ρm)·ρm²·Qm(k-1),   (2.23)

and so on, for an expression with up to k terms. Hence, including the first two terms from the above expression for Σ_m ρm²·Qm(k) in the expression for Y1(N-2) as given by equation (2.13), we get

D_{N-1} ≤ 1 + (N-1)·S2 + (N-1)·(ρu − S2)·Φ(N-2)/D_{N-2} − ((N-1)(N-2)/(D_{N-2}·D_{N-3}))·(ρu·S2 − S3)
       ≤ 1 + (N-1)·S2 + (N-1)·(ρu − S2)·Φ(N-1)/D_{N-1} − ((N-1)(N-2)/(D_{N-1}·D_{N-3}^U))·(ρu·S2 − S3).

Here, D_{N-3}^U is given by the level one upper bound on D_{N-3} (equation (2.19b)). Solving the resulting quadratic in D_{N-1} gives the level two upper SI bound. The level two bounds require a little more than 3M extra operations over that required by the level one SI bounds. The increase in computation here is mainly due to the calculation of the term S3, which requires about 3M operations. Thus a smooth tradeoff between accuracy and computational effort is achieved by these bounds. The use of these bounds is illustrated by a few examples in this and the following sections.

Computational Remark: Suppose the network has a number of delay nodes. Then, in order to compute the exact cycle time/throughput, these nodes can be replaced by a single delay node whose load is the sum of the loads at the individual delay nodes.

Example 2.1: This is a network with 10 nodes. Table 2.1 gives the mean service time demand per visit, Tm, and the parameters am and bm for each node. Note that nodes 1 through 5 are fixed rate nodes, and node 6 is a delay node. The mean number of visits to each node, vm, is assumed to be 1. The throughputs were computed exactly, and also evaluated by the bounding technique using the level 5 SI bounds, for populations ranging from 1 to 50. Table 2.2 gives the results of these evaluations for some population values.

Node               1     2     3     4     5     6     7     8     9     10
Mean Time Demand   40    35    30    30    20    40    60    70    70    70
am                 0     0     0     0     0     1     0.50  0.50  0.40  0.35
bm                 1     1     1     1     1     0     0.50  0.50  0.60  0.65

Table 2.1: Data parameters for example 2.1

Population       Throughputs
Values        Exact    Lower Bound   Upper Bound
10            0.0128   0.0127        0.0128
20            0.0173   0.0169        0.0174
30            0.0193   0.0187        0.0196
40            0.0203   0.0197        0.0208
50            0.0209   0.0202        0.0217

Table 2.2: Throughput bounds for example 2.1

3. Bounds for networks: nodes with non-decreasing service rates:

In this section, we consider networks where load dependent nodes (other than the PR nodes we considered in section 2) have service rates which are non-decreasing functions of their queue lengths. Examples of such nodes are multiserver nodes and flow equivalent service centers. When these nodes are present in a queueing network, exact solution algorithms require O(MN²) computations to obtain the performance measures. In this section we develop bounds on the throughput/cycle time for these networks which take O(M) computations.

Suppose it is desired to obtain a throughput bound for a given queueing network T. We now consider a new queueing network T' which is constructed as follows: for each PR node, m, in T, we create a PR node in T' which has the same values for the fixed rate load, Lm, and the delay load, Im (refer to equations (2.4a) and (2.4b)). However, each load dependent node which has a non-PR behaviour in T is replaced by one or more PR nodes in T'. Suppose the replacement is made such that the throughput of T' can be guaranteed to be a lower bound on the throughput of T for the range of populations being considered. Then, using

the techniques developed in section 2, a lower bound on the throughput of T' can be obtained with O(M) computations, and this gives a lower bound on the throughput of network T. In the same manner, an alternate network, T'', can be constructed to obtain an upper bound on the throughput of T. In section 3.1 we develop the technique for obtaining throughput/cycle time bounds for these networks. In the special case of a network with multiserver nodes, an alternate scheme to obtain throughput/cycle time bounds is developed in section 3.2.

3.1 A bounding scheme:

Consider a network T with M nodes where, for ease of discussion, we assume that exactly one of the nodes has a non-PR behaviour. The extension to cases where more than one node has a non-PR behaviour is straightforward. Without loss of generality, let the node with the non-PR behaviour be node M. We construct a new network T' that also has M nodes. The first M-1 nodes in T' have identical characteristics to the first M-1 nodes in T. The Mth node in T' has the same mean service time and the same mean number of visits as the Mth node in T. However, the rate functions for these two nodes can be different. Let μM(n) be the rate function for the Mth node in T and let μ'M(n) be the rate function for the Mth node in T'. Now, if we choose a function μ'M(n) that is non-decreasing in n and such that, for n ≥ 0, μ'M(n) ≤ μM(n), then it is possible to show that the throughput of network T' is at most that of network T. This follows from a result obtained in [18] which, in effect, shows that in a network of nodes with non-decreasing rate functions, decreasing the service rate at any node decreases the throughput. (Hence, from Little's rule, the cycle time of network T' is at least that of network T.) Similarly, we can construct a network T'' which is identical to T except that the non-decreasing service rate μ''M(n) for node M is such that μ''M(n) ≥ μM(n), n ≥ 0. Then the throughput of

network T can similarly be shown to be at most equal to that of network T''.

Given the service rate μM(n), the functions μ'M(n) and μ''M(n) are chosen from the parametrized class of functions given by equation (2.1), with suitable choices of aM and bM for the two cases. For example, suppose that node M is a multiserver node with 6 servers. However, suppose that for some reason, the effective service rate realized at the node is as follows:

μM(n) = 1 + 0.6·(n-1);  n = 1,...,6,
μM(n) = 4;  n > 6.   (3.1)

One possible way to obtain μ''M(n) using equation (2.1) would be to solve two equations for the two unknowns aM and bM, obtained by setting μ''M(1) = 1 and μ''M(6) = 4. This gives aM = 0.9 and bM = 0.1. This choice ensures that μ''M(n) ≥ μM(n) for all n. It is possible to choose values of the parameters aM and bM for the function μ'M(n) in a similar manner. Thus, a possible set of choices for the functions μ'M(n) and μ''M(n) which, for n ≤ 20, ensures that μ'M(n) ≤ μM(n) ≤ μ''M(n) is given by

μ'M(n) = n / (0.8333 + 0.2083·n),   (3.2a)

μ''M(n) = n / (0.9000 + 0.1000·n).   (3.2b)

Figure 3.1 plots the functions μM(n), μ'M(n), and μ''M(n) for this case.
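One way to arrive at parameter values such as those in equations (3.2a)-(3.2b) is to make the PR rate function agree with the given rate μM(n) at two populations and then verify the required dominance over the population range of interest. The sketch below does this; the particular matching points used for (3.2a) (n = 2 and n = 20) are our assumption, reverse-engineered from the quoted coefficients, and are not stated in the text.

```python
def fit_pr_parameters(mu, n1, n2):
    """Fit mu_PR(n) = n / (a + b*n) through (n1, mu(n1)) and (n2, mu(n2)).

    Since n / mu_PR(n) = a + b*n is linear in n, two points determine (a, b).
    The resulting PR function must still be checked against mu(n) over the
    population range of interest before it is used as a bounding rate.
    """
    r1, r2 = n1 / mu(n1), n2 / mu(n2)
    b = (r2 - r1) / (n2 - n1)
    a = r1 - b * n1
    return a, b

# The rate function of equation (3.1):
mu_M = lambda n: 1.0 + 0.6 * (n - 1) if n <= 6 else 4.0

a_up, b_up = fit_pr_parameters(mu_M, 1, 6)    # (0.9, 0.1):       eq. (3.2b), dominates mu_M(n)
a_lo, b_lo = fit_pr_parameters(mu_M, 2, 20)   # (0.833.., 0.208..): eq. (3.2a), below mu_M(n) for n <= 20
```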

This is the approach taken here to bound the throughput/cycle time of networks where one or more nodes are allowed to have non-PR behaviour. The method is best illustrated with an example.

Example 3.1: It is desired to estimate the throughput of a network T with 5 nodes; the first three nodes in this network are fixed rate nodes with mean service time demands of 20, 24 and 28 units respectively. Nodes 4 and 5 have load dependent service rates of the form given by equation (3.1), and the mean service time demand at each of these nodes is 120 units. The mean number of visits to each node is equal to 1. The throughput is to be estimated for a network population of 20.

To evaluate the throughput bounds, two alternate networks, T' and T'', are constructed. In these networks, nodes 1, 2 and 3 have the same mean service time demands as in the original network. However, in network T', nodes 4 and 5 are PR nodes with rate function as given by equation (3.2a); in network T'', nodes 4 and 5 are PR nodes with rate function as given by equation (3.2b). The mean number of visits at all nodes is equal to 1 for these networks. Using the techniques developed in section 2, a level 5 lower SI bound on the throughput is obtained for network T', and a level 5 upper SI bound on the throughput is obtained for network T''. Table 3.1 shows the results for some population values.

N    Exact    Lower Bound   Upper Bound
5    0.013    0.012         0.013
10   0.021    0.018         0.022
15   0.026    0.022         0.027
20   0.029    0.025         0.030

Table 3.1: Throughput bounds for example 3.1

3.2 Throughput Bounds for Networks with Multiserver Nodes:

Here, we consider the special case of a network with multiserver nodes. The service rate of a multiserver node, m, with Km servers is given by

μm(n) = n;  n ≤ Km,   (3.3a)

and

μm(n) = Km;  n > Km.   (3.3b)

Some bounding schemes have been proposed recently [3,11,14] which obtain lower bounds on the throughput of such networks with O(M) computations. These are reviewed below.

3.2.1 Previous work:

In a recent paper, Shanthikumar and Yao [11] obtained a lower bound on the throughput for networks with multiserver nodes. This bound is obtained using some results developed by them on likelihood ratio ordering and its preservation under convolution, and essentially works as follows. Suppose each node in the network has Km servers (for fixed rate servers, Km = 1). Let cm = Km − 1, m = 1,...,M, and set c = Σ_m cm. Now, consider an alternate network which consists only of fixed rate servers. Each node in this network has the same mean service time demand as that of the corresponding node in the original network, but has a constant service rate of Km. A lower throughput bound on this network at population N−c then gives a lower throughput bound on the original network at population N. Needless to say, we require that N−c > 0. This method is effective when N−c is not very small.

The technique proposed by Srinivasan [14] operates as follows. Consider a network T with M nodes and suppose that node M is a multiserver node with KM servers. Let the throughput of the multiserver node, when considered in isolation, and with a network population N, be given by X_N(M). Now consider a network T' where we replace this multiserver node by a set of JM fixed rate nodes with suitably chosen fixed rate loads, where JM is a large number. Exactly

one of these JM nodes has a fixed rate load equal to vM·TM/KM, and each of the remaining nodes has a fixed rate load that is much smaller than vM·TM/KM, such that the sum of the loads at these JM nodes is equal to vM·TM. Let the throughput, in isolation, of a Flow Equivalent Service Center (FESC) [2], M1, which is obtained by aggregating these JM nodes, be X_N(M1). It can be easily shown that X_N(M1) ≤ X_N(M). Then, using the results obtained in [18], it can be shown that the throughput of T' is bounded from above by the throughput of T. Hence, a lower bound on the throughput of T' gives a lower bound on the throughput of T. In a similar manner, it is possible to construct a network T'' where node M is replaced by a FESC, M2, with fixed rate loads chosen such that X_N(M2) ≥ X_N(M). An upper bound on the throughput of T'' gives an upper bound on the throughput of T. It is straightforward to generalize this approach to obtain throughput bounds for a network with more than one multiserver node.

Working independently, a technique similar to the above has been proposed recently by Dallery [3]. Here, each multiserver node with Km servers in the original network is replaced by a set of Km fixed rate nodes, each having a mean service time demand of Tm/Km. The Balanced Job lower bound on the throughput of the resulting network gives the desired lower bound on the throughput of the original network. It may be observed that if the Balanced Job Bound is used, the techniques proposed by [14] and [3] would give the same results and would involve essentially the same computational effort. One major problem, however, with the BJB technique is that it does not handle networks with delay nodes efficiently. Also, the BJ bounds are often quite loose. (This was the motivation for the development of alternate schemes which obtain tighter bounds at the expense of increased computational effort.) The technique developed in [3] is intended, though, to be used only in conjunction with the BJ bounds. When

alternate bounding methods such as PBH or SIB are used, the lower throughput bound obtained using the technique given in [14] can be shown to be tighter than the bound obtained using the technique given in [3].

3.2.2 The scheme for networks with multiserver nodes:

The technique outlined here is essentially a modification of the technique proposed in [14]. For ease of discussion, assume as before that the network T consists of M nodes, where nodes 1 through M-1 are PR nodes and node M is a multiserver node with KM servers, mean service time demand TM, and visit ratio vM. The extension to networks with more than one multiserver node is, again, straightforward.

Obtaining the lower throughput bound: To obtain a lower bound on the throughput of network T, we use Theorem 3.1.

Theorem 3.1: Consider two closed systems operating with n customers in each system. The first system consists of one node, M, having mean service time demand TM, and with KM servers. The second system consists of one FESC, M1, which represents the aggregation of a network consisting of one fixed rate node with mean service time demand TM/KM and a delay node with mean service time demand TM·(KM−1)/KM. Let X_n(M) and X_n(M1) respectively denote the throughputs of these two systems. Then X_n(M) ≥ X_n(M1).

Proof: Consider the second system, M1. From the Asymptotic Bound Analysis for networks with delay nodes [8], an upper bound on the throughput of this network at population n is given by X_n^U(M1), where

X_n^U(M1) = min( KM/TM, n/(TM/KM + TM·(KM−1)/KM) ) = min( KM/TM, n/TM ).   (3.4)

Now consider the throughput of the system with the multiserver node. This is given by

X_n(M) = n/TM;  n ≤ KM,   (3.5a)

X_n(M) = KM/TM;  n > KM.   (3.5b)

Comparing equation (3.4) with (3.5a) and (3.5b), the result follows.

Now, if the multiserver node M is replaced by a FESC, M1, obtained as indicated above, then, using the results obtained in [18], it can be shown that a lower bound on the throughput of the resulting network gives a lower bound on the throughput of the original network. In effect, each multiserver node, m, with mean number of visits vm is replaced here by two PR nodes, both with mean number of visits vm, and with mean service time demands chosen as indicated by Theorem 3.1. Since the resulting network consists only of PR nodes, the bounds developed in section 2 can be used.

Obtaining the upper throughput bound: A simple upper bound on the throughput is obtained using arguments similar to those used to obtain the lower throughput bound. Each multiserver node m, with Km servers, is replaced by a fixed rate node with mean service time demand Tm/Km [3,11,14]. In some cases, alternate replacements can do better [14].

The effectiveness of the technique developed here to obtain lower throughput bounds for these networks is illustrated with a couple of examples.
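A sketch of the resulting recipe follows: each multiserver node is replaced, for the lower bound, by a fixed rate node plus a delay node as indicated by Theorem 3.1 and, for the upper bound, by a single fixed rate node. The resulting PR networks can then be handed to the SI bound routines of section 2 (for instance the level 1 sketch given earlier). The function name is ours.

```python
def multiserver_to_pr_loads(demands, visits, servers):
    """Build the two PR networks used to bound a network with multiserver nodes.

    demands[m] = T_m, visits[m] = v_m, servers[m] = K_m (K_m = 1 for a fixed rate node).
    Lower-bound network: node m becomes a fixed rate node of load v_m*T_m/K_m plus
    a delay node of load v_m*T_m*(K_m - 1)/K_m (Theorem 3.1).
    Upper-bound network: node m becomes a fixed rate node of load v_m*T_m/K_m.
    Returns (L_low, I_low), (L_up, I_up).
    """
    L_low, I_low, L_up, I_up = [], [], [], []
    for T, v, K in zip(demands, visits, servers):
        L_low.append(v * T / K)
        I_low.append(v * T * (K - 1) / K)
        L_up.append(v * T / K)
        I_up.append(0.0)
    return (L_low, I_low), (L_up, I_up)

# A lower bound on the throughput of the lower-bound network (e.g. X_low from
# level1_si_bounds) is then a lower bound on the throughput of the original network;
# an upper bound computed on the upper-bound network plays the corresponding role.
```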

Example 3.2: There are 11 nodes in this network. The mean service time demands at these nodes and the number of servers at these nodes are given in Table 3.2. The mean number of visits at each node is assumed to be 1. The exact throughput was calculated at various population levels. Table 3.3 gives the results of the evaluation for some population values.

Node               1    2    3    4    5    6    7    8    9    10   11
Mean Time Demand   120  160  140  100  100  90   72   30   20   16   15
No. of servers     6    5    5    4    4    3    3    2    1    1    1

Table 3.2: Data parameters for Example 3.2

Following the technique developed here, each multiserver node is replaced by a fixed rate node and a delay node, with mean service time demands chosen as indicated by Theorem 3.1. A simple level 1 SI bound on the resulting network gives the desired throughput bound. (As noted earlier, this requires about M more arithmetic operations than the BJ bound does.) This bound is termed the SI bound in Table 3.3. For comparison, the bound obtained using the technique outlined in [11] is also given (termed the SY bound in Table 3.3); the resulting network of fixed rate nodes obtained by that technique was also evaluated using the level 1 SI bound. Although this technique cannot obtain bounds at the initial population values, it produces bounds comparable to the SI bound for large N. Finally, the bound obtained using the technique outlined in [3] (termed the DAL bound) is also presented in Table 3.3. The resulting network of fixed rate servers obtained here was evaluated for the throughput using the level 1 SI bound in this case too. It must be noted, however, that this technique is only intended to be used in conjunction with the BJ bound (which in any case gives a poorer estimate of the throughput than the level 1 SI bound).

N    X_N (actual)   SI bound   SY bound   DAL bound
10   0.011          0.011      -          0.009
20   0.021          0.018      -          0.014
30   0.026          0.023      0.016      0.018
40   0.029          0.025      0.023      0.020
50   0.030          0.027      0.026      0.022

Table 3.3: Throughput bounds for Example 3.2

Example 3.3: As an example of an application of the bounding technique in a manufacturing environment, consider again the problem of the assembly line supervisor who has to man 5 stations with additional operators so as to maximize throughput. Each job completed in this shop requires an exponentially distributed amount of time from each station. These stations are already manned by one or more operators. Assume that each station would operate proportionately faster depending on the number of operators assigned to it. Due to certain limitations, 30 jobs can be in the system at any time, and the completion of one job immediately triggers another into the system. The five stations have mean service time demands of 40, 32, 9, 8, and 6, respectively, and the mean number of visits to each station is 1. In addition, there is some time taken for material movement between stations, which is modelled by a delay station; for a typical cycle of operations, this takes 65 units of time. The numbers of operators assigned at present to the stations (Configuration 0) are, respectively, 4, 3, 1, 1, and 1. The supervisor would like to obtain quick estimates of the improvement in throughput resulting from adding one extra operator either at station 1 (Configuration 1) or at station 2 (Configuration 2).

Using the bounding technique developed above, the level 2 lower and upper

bounds on the throughput for the three resulting configurations are calculated and presented in Table 3.4.

Configuration   Lower bound   Upper bound   Exact throughput
0               0.0822        0.1092        0.0889
1               0.0845        0.1145        0.0913
2               0.0885        0.1174        0.0952

Table 3.4: Throughput bounds for example 3.3

For comparison, the exact throughput values are also presented in Table 3.4. To compare the computational effort involved: calculating the exact throughput for the three configurations took a total of 62 milliseconds, while the bounding technique took less than 2 milliseconds to obtain all the bounds. It is also worth noting that the bounding technique here would probably require only a hand held calculator.

4. Bounds on mean queue lengths:

Techniques now exist for obtaining reasonably tight bounds on the throughput for networks with PR nodes. These bounds can be obtained with considerably less computational effort than that required to compute the throughput/cycle time exactly. This suggests the possibility of achieving good bounds on the mean queue lengths which form at the nodes. Needless to say, these bounds should require little computational effort compared to that required to obtain the exact values.

In section 4.1, some bounds are presented which are very easy to evaluate once bounds on the throughput have been obtained. These bounds appear adequate for nodes which do not experience very high utilizations (e.g., over 80%). For

nodes with high utilizations, an alternate bounding method is developed in section 4.2. This latter method is based on an application of the Convolution algorithm [1] and requires a relatively larger number of arithmetic operations (O(M) operations for each such node) to obtain the mean queue length bounds. The discussion below is restricted, for simplicity, to networks with fixed rate nodes. It is easily extended to networks with delay nodes and nodes with parametrized service rate functions of the type covered in section 2.

4.1 Obtaining simple bounds on mean queue length:

For networks with only single server fixed-rate nodes, the mean queue length at a fixed-rate node, m, is given from equations (2.15) and (2.16) as:

Qm(N) = Σ_{i=1}^{N} Lm^i·A_{N,i}.   (4.1)

For the networks considered here, it can be shown that, for K > 0, we must have X_K ≥ X_{K-1}. Hence

Qm(N) ≤ Σ_{i=1}^{N} (Lm·X_N)^i = Um(N)·(1 − (Um(N))^N)/(1 − Um(N)),   (4.2)

where

Um(N) = Lm·X_N   (4.3)

is the utilization of node m. Let X_N^U denote an upper bound on X_N, and let Um^U(N) = Lm·X_N^U denote the corresponding upper bound on the utilization at node m. Then it is easily seen that

Qm(N) ≤ Um^U(N)·(1 − (Um^U(N))^N)/(1 − Um^U(N)).   (4.4)

Let Lu denote the load at the node(s) with the highest load. As the network population increases, the network begins to behave like an open network with mean arrival rate 1/Lu. For this corresponding open network, any node m with load Lm < Lu will experience a mean queue length Qm = Um/(1−Um), where Um = Lm/Lu would be the utilization of this node in the open network. Hence, for large N, the upper bound on the mean queue lengths given by equation (4.4) gets increasingly closer to the exact value for these nodes.

Let there be k nodes with load Lu. For these nodes, at large N, an upper bound is:

Qu^U(N) = (1/k)·(N − Σ' Qm^L(N)),   (4.5)

where the sum Σ' is taken over the nodes with load less than Lu, and Qm^L(N) is a lower bound on Qm(N). A lower bound on Qm(N) is given by Theorem 4.1. A proof of the theorem is given in Appendix C.

Theorem 4.1: In a closed PF network with only single server fixed rate nodes and delay nodes, the mean queue length, Qm(N), at a fixed rate node m is bounded from below by the term f(Um^L(N)), where Um^L(N) is a lower bound on Um(N) and

f(Um^L(N)) = Um^L(N) · [ (N-1)·(1 − Um^L(N)) + 1 ] / [ (N-1)·(1 − Um^L(N))² + 1 ].   (4.6)
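The simple bounds of this subsection are easy to evaluate once throughput bounds are available. A sketch follows; it assumes equations (4.2)-(4.4) and (4.6) as reconstructed above, and the geometric sum of the upper bound is computed term by term so that it remains a valid (if loose) bound even when the utilization bound is close to one.

```python
def simple_queue_length_bounds(L_m, X_low, X_up, N):
    """Simple bounds on Qm(N) at a fixed rate node with load L_m (section 4.1),
    given bounds X_low <= X_N <= X_up on the network throughput."""
    U_up = L_m * X_up                  # upper bound on the utilization, eq. (4.3)
    U_low = L_m * X_low                # lower bound on the utilization
    q_up = sum(U_up ** i for i in range(1, N + 1))        # eq. (4.4), written as a finite sum
    q_low = U_low * ((N - 1) * (1 - U_low) + 1) / ((N - 1) * (1 - U_low) ** 2 + 1)   # eq. (4.6)
    return q_low, q_up
```

For nodes whose utilization bound approaches one, the upper bound above becomes very loose; those are the nodes for which the method of section 4.2 is intended.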

4.2 Obtaining tighter bounds on mean queue lengths:

Although the bounds presented in section 4.1 are easy to evaluate, they can be quite loose for nodes which have high utilizations. This is apparent since the bounds in section 4.1 were obtained as follows: (i) bounds were developed to express the mean queue lengths at the nodes in terms of their utilizations, and (ii) bounds on these utilizations themselves were used in the expressions developed in (i). It is, however, possible to obtain an exact closed form expression for the mean queue length in terms of a throughput, with some additional computation. This is achieved by means of the Convolution algorithm [1]. For ease of

presentation, only networks with fixed rate servers are considered here. The extension to networks with delay servers is straightforward.

The Convolution algorithm for a network T with M fixed rate single server nodes gives an expression for the normalizing constant g(N,M) as

g(N,M) = g(N,M-1) + L_M·g(N-1,M).   (4.7)

The throughput for this network, X_N, is then given as

X_N = g(N-1,M)/g(N,M).   (4.8)

Suppose it is desired to obtain bounds on the mean queue length at some designated node. Without loss of generality, let this be node M. The mean queue length at this node can be shown to be [4]

Q_M(N) = Σ_{i=1}^{N} L_M^i·g(N-i,M)/g(N,M).   (4.9)
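A direct transcription of equations (4.7)-(4.9) is shown below for reference. It computes g(n,m) by the convolution recursion and then the exact X_N and Q_m(N); the sketch applies equation (4.9) to an arbitrary node, which is valid because the normalizing constant does not depend on the node ordering. (For large loads or populations the g(n,m) values can overflow, so in practice the loads are scaled; that detail is omitted here.)

```python
def convolution_exact(L, N, m):
    """Convolution algorithm for M single server fixed rate nodes with loads L;
    returns the exact X_N (eq. 4.8) and Q_m(N) (eq. 4.9)."""
    M = len(L)
    g = [[0.0] * (M + 1) for _ in range(N + 1)]   # g[n][k]: first k nodes, population n
    for k in range(M + 1):
        g[0][k] = 1.0                             # g(0, k) = 1; g(n, 0) = 0 for n >= 1
    for n in range(1, N + 1):
        for k in range(1, M + 1):
            g[n][k] = g[n][k - 1] + L[k - 1] * g[n - 1][k]     # eq. (4.7)
    X_N = g[N - 1][M] / g[N][M]                                # eq. (4.8)
    Q_m = sum(L[m] ** i * g[N - i][M] for i in range(1, N + 1)) / g[N][M]   # eq. (4.9)
    return X_N, Q_m
```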

Now consider an augmented network T(M) with M+1 nodes, where nodes 1 through M have the same loads as in T and node M+1 has a load L_M. Let X_N^(M) be the throughput of this augmented network. Theorem 4.2 then obtains an expression for Q_M(N) in terms of X_N^(M).

Theorem 4.2: Given a network, T, of M single server fixed rate nodes, the mean queue length at the Mth node, Q_M(N), is given by

Q_M(N) = X_N^(M)·L_M / (1 − X_N^(M)·L_M),   (4.10)

where X_N^(M) is the throughput of the network T augmented by one additional node, M+1, with load L_M.

Proof: The normalization constant of the augmented network, T(M), is given by

g(N,M+1) = g(N,M) + L_M·g(N-1,M+1),   (4.11)

and the throughput of T(M) is

X_N^(M) = g(N-1,M+1)/g(N,M+1).   (4.12)

By repeated application of (4.11) in equation (4.12), we get

X_N^(M) = [g(N-1,M) + L_M·g(N-2,M) + ... + L_M^{N-1}·g(0,M)] / [g(N,M) + L_M·g(N-1,M) + ... + L_M^N·g(0,M)]
        = X_N·(1 + Q_M(N-1)) / (1 + Q_M(N)),   (4.13)

where the last equality is obtained using equations (4.8) and (4.9). From the MVA theorem, an expression relating Q_M(N) to Q_M(N-1) can be obtained as:

Q_M(N) = X_N·L_M + X_N·L_M·Q_M(N-1).   (4.14)

Substituting the expression for Q_M(N-1) resulting from (4.14) into equation (4.13), and simplifying, the desired result is obtained.

Corollary 4.1: Let W^(M)(N) be the cycle time of the network T(M). Then

Q_M(N) = N·L_M / (W^(M)(N) − N·L_M).   (4.15)

Proof: The proof is immediate from an application of Little's rule [9].

To illustrate how this method extends directly to include, for example, delay nodes, consider a network with M single server fixed rate nodes and one delay node. Let this delay node be labelled node 0, and let it have a load L_0. Then the throughput for this network is given by

X_N = (N/L_0)·h(N-1,M)/h(N,M),   (4.16)

where h(·,·) is obtained recursively from [4] as

h(n,m) = h(n,m-1) + (n·L_m/L_0)·h(n-1,m);  n ≥ 1, m ≥ 1,   (4.17)

with h(0,m) = h(n,0) = 1; n, m ≥ 0.

It is easily seen from equations (4.16) and (4.17) that the mean queue length at a fixed rate node in this network is given in terms of an augmented network, as before, by equation (4.10) or (4.15). The mean queue length, Q_0(N), at the delay node is directly obtained from Q_0(N) = X_N·L_0.

In general, in order to use the approach outlined above, a new augmented network is to be constructed for each fixed rate node with a distinct load for which mean queue length bounds are desired. This augmented network should then be analyzed, using a bounding technique, to obtain bounds on its cycle time. Equation (4.15) can then be used to obtain the bounds on the mean queue length.

Now suppose the SIB technique is used. Assume that the level two bounds on the throughput have been obtained. This means that the terms Si, i = 2, 3, must have been calculated, where Si is defined by equation (2.7). Now, to obtain mean queue length bounds for a node n, n ≤ M, with load L_n, the augmented network with M+1 nodes is constructed. The values of the relative utilizations ρ_m^(n), m = 1,..., M+1, for this augmented network are given by

ρ_m^(n) = L_m / (L + L_n),  m = 1,..., M+1,

where L = Σ_k L_k is the total load of the original network and the added node M+1 has load L_{M+1} = L_n. Let

S_i^(n) = Σ_{m=1}^{M+1} (ρ_m^(n))^i,  i = 2, 3.   (4.18)

Then

S_i^(n) = (S_i + ρ_n^i) / (1 + ρ_n)^i.   (4.19)

Hence the terms S_i^(n) are easy to evaluate, given the values for Si and the relative utilizations. The lower bound on the cycle time of the augmented network is then given by equation (2.22a), where the terms α_0 and α_1 are replaced by α_0^(n) and α_1^(n). The terms α_i^(n) are defined as

α_0^(n) = S_2^(n),

and

α_i^(n) = S_{i+2}^(n) − Σ_{j=0}^{i-1} S_{i+1-j}^(n)·α_j^(n),  i > 0.

Using this level 2 bound on the cycle time in equation (4.15), an upper bound on the mean queue length at node n is obtained.
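Putting the pieces together, the sketch below bounds the mean queue length at a designated fixed rate node by building the augmented network, applying the level 2 lower SI bound to its cycle time, and using equation (4.15). It assumes equations (2.22), (4.15), (4.18) and (4.19) as reconstructed above; for simplicity it recomputes S2 and S3 of the augmented network directly rather than through equation (4.19), and it does not treat delay nodes.

```python
import math

def queue_length_upper_bound(L, n_idx, N):
    """Upper bound on Q_n(N) for fixed rate node n_idx via the augmented network
    T(n) of M+1 nodes (Theorem 4.2 / eq. 4.15), using the level 2 lower SI bound
    on the cycle time of T(n)."""
    tot = sum(L)
    Ln = L[n_idx]
    rho = [l / (tot + Ln) for l in L] + [Ln / (tot + Ln)]   # relative utilizations of T(n)
    S2 = sum(r ** 2 for r in rho)                           # eq. (4.18), i = 2
    S3 = sum(r ** 3 for r in rho)                           # eq. (4.18), i = 3
    a0, a1 = S2, S3 - S2 * S2                               # alpha_0^(n), alpha_1^(n)
    T1 = (1.0 + (N - 1) * a0) ** 2 + 4.0 * (N - 1) * (N - 2) * a1
    D_low = 0.5 * (1.0 + (N - 1) * a0 + math.sqrt(T1))      # level 2 lower bound, eq. (2.22a)
    W_low = (tot + Ln) * D_low                              # lower bound on the cycle time of T(n)
    return N * Ln / (W_low - N * Ln)                        # eq. (4.15)
```

The bound is meaningful only when the cycle time lower bound exceeds N·L_n; a lower bound on Q_n(N) can be obtained in the same way from an upper bound on the cycle time of T(n).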

The use of the bounding techniques developed in this section is illustrated below with a few examples. In each case, the approach taken is to use the bounds developed in section 4.1 for nodes with lower utilizations, and the bounds from section 4.2 for nodes with higher utilizations. The basis for determining which to use was, somewhat arbitrarily, set as follows: from the upper bound on the throughput, the upper bound on the utilization at each node was determined using equation (4.3). Nodes with this upper bound value ≤ 0.80 were evaluated for their mean queue length bounds by the method of section 4.1, while the other nodes used the method of section 4.2. The SIB technique was used in all test cases, and the level 5 SI bounds were used to calculate the throughput bounds. For comparison, exact values for the mean queue lengths were also evaluated. The test cases were run on an AMDAHL 5860 running MTS. The time taken by the exact analysis and by the bounding technique was recorded in each case.

Example 4.1: This is the example given in [19]. There are 49 single server fixed rate nodes, with loadings as follows: 1 node at 21, 10 nodes at 20, 4 nodes at 10, 4 nodes at 5, 3 nodes at 4, 11 nodes at 3, 1 node at 2, and 15 nodes at 1, for a total load of 343. The bounds were evaluated at a population of 100. The mean queue length bounds and the exact values of the mean queue lengths for each node with a distinct load are given in Table 4.1. Even though many nodes had identical loads, bounds were evaluated on all 49 nodes. The exact analysis took 17 milliseconds, while the bounding technique took 2 milliseconds.

                        Mean Queue Lengths
Load at node   Exact    Lower bound   Upper bound
21             12.865   10.131        15.754
20             7.957    6.605         8.985
10             0.811    0.768         0.835
5              0.289    0.279         0.295
4              0.218    0.212         0.223
3              0.155    0.151         0.158
2              0.098    0.096         0.100
1              0.047    0.046         0.048

Table 4.1: Mean queue length bounds for example 4.1

Example 4.2: This example is taken from [5]. There are 50 nodes with loads as follows: 1 node at 20, 2 nodes at 19, 5 nodes at 18, 5 nodes at 15, 5 nodes at 10, 8 nodes at 7, 8 nodes at 5, 8 nodes at 4, and 8 nodes at 2, for a total load of 417. The bounds were evaluated at a network population of 50. The exact analysis required 9 milliseconds, while the bounding technique required 2 milliseconds. Table 4.2 compares results for some of these nodes. Again, for comparison purposes, bounds were evaluated for all 50 nodes.

Example 4.3: The final example here is a very unbalanced network with 20 nodes, and loads as follows: 2 nodes at 30, 1 node at 17, 3 nodes at 14, 1 node at 12, 2 nodes at 11, 1 node at 10, 2 nodes at 8, 1 node at 7, 2 nodes at 5, 3 nodes at 4, 1 node at 3, and 1 node at 1, for a total load of 212. Table 4.3 compares results for some of the nodes. The comparisons were made for a

population of 45. The exact analysis required about 3.3 milliseconds, while the bounding technique required about 0.7 milliseconds of CPU time.

                        Mean queue lengths
Load at node   Exact    Lower bound   Upper bound
20             5.181    4.484         5.691
19             4.027    3.531         4.329
18             3.207    2.464         3.749
15             1.781    1.544         1.923
10             0.753    0.701         0.781
7              0.431    0.409         0.443
5              0.274    0.262         0.281
4              0.208    0.200         0.213

Table 4.2: Mean queue length bounds for example 4.2

                        Mean queue lengths
Load at node   Exact    Lower bound   Upper bound
30             18.540   16.650        18.681
17             1.228    1.145         1.308
14             0.831    0.796         0.875
12             0.637    0.617         0.667
10             0.480    0.468         0.500
8              0.350    0.344         0.364
7              0.294    0.289         0.304
4              0.149    0.147         0.154

Table 4.3: Mean queue length bounds for example 4.3

Remark: In many instances, it may not be necessary to evaluate the mean queue lengths at all nodes. (This is certainly true if many nodes have the same loads.) It is especially in such cases that a bounding technique would have an edge over exact analysis methods such as the MVA algorithm, which would have to compute these measures for all nodes, and at all intermediate population values, in any case. In the above examples, however, to compare the computational effort involved, the mean queue length bounds were calculated for all the nodes.

5. Conclusions:

In many situations, the use of bounding techniques for analyzing queueing networks is justified. The scope of bounding techniques for closed PF queueing networks has been extended to enable throughput bounds to be obtained for networks in which the nodes are allowed to display some limited forms of load dependent behaviour. The application of these bounding techniques in analyzing flexible manufacturing systems has been illustrated by some examples. A simple, but effective, means of obtaining bounds on the mean queue lengths that form at the nodes in such networks has also been presented.

Appendix A: Proof of Lemma 2.1

Lemma 2.1: Given sequences U = {um}, X = {xm}, Y = {ym}, m = 1,..., M, such that

(a) um ≥ 0, m = 1,..., M,
(b) x_M ≥ ... ≥ x_1 ≥ 0,
(c) Y(PC)X,
(d) Σ_m um ≤ 1,
(e) Σ_m um·ym ≥ 0,

then

Σ_m xm·um·ym ≥ (Σ_m xm·um)·(Σ_n un·yn).   (A1)

Proof: Let Ȳ = Σ_m um·ym. We need to show that Σ_m xm·um·(ym − Ȳ) ≥ 0. Since Σ_m um ≤ 1 and Ȳ ≥ 0, it must be true that

Σ_m um·ym ≥ (Σ_m um)·(Σ_n un·yn).   (A2)

Further, since Ȳ ≥ 0 and Y(PC)X, there exists some i ≤ M such that ym ≥ Ȳ for all m ≥ i, and ym < Ȳ for m < i. Since x_i ≥ 0, multiplying both sides of (A2) by x_i,

Σ_m x_i·um·ym ≥ (Σ_m x_i·um)·Ȳ.

Hence

0 ≤ Σ_m x_i·um·(ym − Ȳ) = Σ_{m=1}^{i-1} x_i·um·(ym − Ȳ) + Σ_{m=i}^{M} x_i·um·(ym − Ȳ)
  ≤ Σ_{m=1}^{i-1} xm·um·(ym − Ȳ) + Σ_{m=i}^{M} x_i·um·(ym − Ȳ)
  ≤ Σ_{m=1}^{i-1} xm·um·(ym − Ȳ) + Σ_{m=i}^{M} xm·um·(ym − Ȳ),

where the first (respectively, second) inequality follows from the fact that ym − Ȳ < 0 (respectively, ≥ 0) and xm ≤ x_i (respectively, xm ≥ x_i) for all m < i (respectively, m ≥ i). The result follows.

Appendix B: Proof of Theorem 2.3

Theorem 2.3: Let u be the node with the maximum load. Then the level 1 bounds on D_{N-1} are

D_{N-1}^L ≤ D_{N-1} ≤ D_{N-1}^U,   (B1)

where

D_{N-1}^L = 1 + Ψ,   (B1a)

D_{N-1}^U = 0.5·[1 + ξ + SQRT((ξ − 1)² + 4Ψ)],   (B1b)

with

ξ = (N-1)·ρu,   (B1c)

Ψ = (N-1)·S2.   (B1d)

Proof: Equations (2.10) and (2.12) give

D_{N-1} = 1 + Φ(N-1) = 1 + (N-1)·S2 + ((N-1)/D_{N-2})·Y1(N-2).

By construction, S2 > 0. Theorem 2.2 gives Y1(N-2) ≥ 0. So a lower bound is D_{N-1}^L = 1 + (N-1)·S2. To obtain D_{N-1}^U, replace Y1(N-2) by a larger quantity; i.e., from equation (2.13),

Y1(N-2) = Σ_m ρm²·Qm(N-2) − S2·Φ(N-2) ≤ ρu·Σ_m ρm·Qm(N-2) − S2·Φ(N-2) = (ρu − S2)·Φ(N-2).

Hence D_{N-1} ≤ 1 + Ψ + (ξ − Ψ)·Φ(N-2)/D_{N-2}. Since S2 = Σ_m ρm·θm ≤ ρu·Σ_m θm = ρu, it is clear that (ξ − Ψ) ≥ 0. From known results on the 'monotonic' behaviour of these networks [18], it is possible to show that D_{N-1} ≥ D_{N-2}. Hence, noting that D_K = 1 + Φ(K), it is seen that Φ(N-2)/D_{N-2} ≤ Φ(N-1)/D_{N-1}. So, noting that D_{N-1} = 1 + Φ(N-1),

D_{N-1} ≤ 1 + Ψ + (ξ − Ψ)·Φ(N-1)/D_{N-1} = 1 + Ψ + (ξ − Ψ)·(D_{N-1} − 1)/D_{N-1}.   (B2)

Solving the resulting quadratic in D_{N-1}, we have the desired upper bound.

Appendix C: Proof of Theorem 4.1

Theorem 4.1: In a closed PF network with only single server fixed rate nodes and delay nodes, the mean queue length, Qm(N), at a fixed rate node m is bounded from below by the term f(Um^L(N)), where Um^L(N) is a lower bound on Um(N) and

f(Um^L(N)) = Um^L(N) · [ (N-1)·(1 − Um^L(N)) + 1 ] / [ (N-1)·(1 − Um^L(N))² + 1 ].   (C1)

Proof: From Little's rule [9] we can write X_N·W(N)/N = X_{N-i}·W(N-i)/(N-i), 0 ≤ i < N. A direct consequence of the results on 'monotonicity' in PF queueing networks [18] is the relation W(N-i) ≤ W(N), 0 ≤ i < N. Hence

X_{N-i} ≥ X_N·(N-i)/N,  0 ≤ i < N.   (C2)

From equations (C2), (4.1) and (4.3), for a fixed rate node m, we can write

Qm(N) = Σ_{i=1}^{N} Lm^i·A_{N,i} ≥ Σ_{i=1}^{N} Π_{j=0}^{i-1} [Um(N)·(N-j)/N].

For notational ease, set U = Um^L(N), a lower bound on Um(N). Then we can write

Qm(N) ≥ g(U,N) = Σ_{i=1}^{N} U^i·Π_{j=0}^{i-1} (1 − j/N).   (C3)

Noting that Π_{j=0}^{N} (1 − j/N) = 0, the function g(U,N) can be rewritten as

g(U,N) = Σ_{i=1}^{N} U^i·Π_{j=0}^{i-1} (1 − j/N) + U^{N+1}·Π_{j=0}^{N} (1 − j/N) = U + U·Σ_{i=2}^{N+1} U^{i-1}·Π_{j=0}^{i-1} (1 − j/N),

and, setting k = i − 1,

g(U,N) = U + U·Σ_{k=1}^{N} U^k·Π_{j=0}^{k} (1 − j/N) = U + U·Σ_{k=1}^{N} U^k·(1 − k/N)·Π_{j=0}^{k-1} (1 − j/N)
       = U + U·g(U,N) − (U/N)·h(U,N),   (C4)

where

h(U,N) = Σ_{k=1}^{N} k·U^k·Π_{j=0}^{k-1} (1 − j/N)
       = Σ_{k=1}^{N} U^k·Π_{j=0}^{k-1} (1 − j/N) + Σ_{k=1}^{N} (k−1)·U^k·Π_{j=0}^{k-1} (1 − j/N).

So, using equation (C3), and after some elementary algebra,

h(U,N) = g(U,N) + U·Σ_{i=1}^{N-1} i·U^i·Π_{j=0}^{i} (1 − j/N)
       ≤ g(U,N) + U·(1 − 1/N)·Σ_{i=1}^{N-1} i·U^i·Π_{j=0}^{i-1} (1 − j/N)
       ≤ g(U,N) + U·(1 − 1/N)·h(U,N).

Hence

h(U,N) ≤ g(U,N) / (1 − U + U/N).   (C5)

Substituting the inequality for h(U,N) given by (C5) into equation (C4),

g(U,N) ≥ U + U·g(U,N) − (U/N)·g(U,N)/(1 − U + U/N).

Collecting terms and simplifying the above expression gives

g(U,N) ≥ U · [ (N-1)·(1−U) + 1 ] / [ (N-1)·(1−U)² + 1 ].   (C6)

Equations (C3) and (C6) give the desired result.

REFERENCES

1. BUZEN, J.P., "Computational algorithms for closed queueing networks with exponential servers", C. ACM, v 16, no 9, Sept 1973, pp. 527-531.
2. CHANDY, K.M., HERZOG, U., and WOO, L., "Approximate analysis of general queueing networks", IBM J. Res. Dev., v 19, pp. 43-49.
3. DALLERY, Y., and SURI, R., "Approximate disaggregation and performance bounds for queueing networks with multiple-server stations", Joint Performance '86 and ACM Sigmetrics 1986 Conference, Raleigh, 1986.
4. DENNING, P.J., and BUZEN, J.P., "The operational analysis of queueing network models", Computing Surveys, v 10, no 3, Sept 1978, pp. 225-262.
5. EAGER, D.L., and SEVCIK, K.C., "Performance bound hierarchies for queueing networks", ACM TOCS, v 1, no 2, May 1983, pp. 99-115.
6. GORDON, W.J., and NEWELL, G.F., "Closed queueing systems with exponential servers", Operations Research, v 15, no 2, pp. 254-265.
7. KRIZ, J., "Throughput bounds for closed queueing networks", Performance Evaluation, v 4, 1984, pp. 1-10.
8. LAZOWSKA, E.D., ZAHORJAN, J., GRAHAM, G.S., and SEVCIK, K.C., "Computer System Analysis Using Queueing Network Models", Prentice Hall, 1984.
9. LITTLE, J.D.C., "A proof of the queueing formula L = λW", Operations Research, v 9, 1961, pp. 383-387.
10. REISER, M., and LAVENBERG, S.S., "Mean value analysis of closed multichain queueing networks", J. ACM, v 27, no 2, April 1980, pp. 313-322.
11. SHANTHIKUMAR, J.G., and YAO, D.D., "Throughput bounds for closed queueing networks with queue-dependent service rates", July 1985.
12. SOLBERG, J.J., "Quantitative design tools for computerized manufacturing systems", Proc. Sixth North American Metalworking Committee, Florida, April 1978.
13. SRINIVASAN, M.M., "Successively improving bounds on performance measures for product form queueing networks", Tech. Report 85-2, Department of I & OE, University of Michigan, 1985.
14. SRINIVASAN, M.M., "Bounds on performance measures for closed queueing networks: networks with multiserver nodes", Tech. Report 85-37, Department of I & OE, University of Michigan, 1985.
15. STECKE, K.E., "Production planning problems for flexible manufacturing systems", Ph.D. dissertation, School of Industrial Engineering, Purdue University, W. Lafayette, Indiana, 1981.
16. SURI, R., and HILDEBRANT, R.R., "Modelling flexible manufacturing systems using mean value analysis", Tech. Report, August 1983.

17. SURI, R., "Generalized quick bounds for performance of queueing networks", Computer Performance, v 5, no 2, June 1984, pp. 116-120.
18. SURI, R., "A concept of monotonicity and its characterization for closed queueing networks", Operations Research, v 33, no 3, May-June 1985, pp. 606-624.
19. ZAHORJAN, J., SEVCIK, K.C., EAGER, D.L., and GALLER, B., "Balanced job bound analysis of queueing networks", C. ACM, v 25, no 2, Feb 1982, pp. 134-141.

[Figure 2.1: Service rate functions for some PR nodes. The service rate μ(n) = n/(a + b·n) is plotted against the population n (from 0 to 20) for several (a, b) parameter pairs.]

[Figure 3.1: Bounding service rates for a node with rate given by equation (3.1). The actual rate μM(n) and the lower and upper bounding PR rates, μ'M(n) and μ''M(n), are plotted against n.]