THE UNIVERSITY OF MICHIGAN
INDUSTRY PROGRAM OF THE COLLEGE OF ENGINEERING

ANALYSIS OF SYSTEMS OF QUEUES IN PARALLEL

Erhan Çınlar

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the University of Michigan, Department of Industrial Engineering, 1965.

November, 1965

IP-723


Doctoral Committee:
Associate Professor Ralph L. Disney, Chairman
Professor A. Bruce Clarke
Professor John N. Darroch
Associate Professor Herbert P. Galliher
Professor Robert M. Thrall
Associate Professor Richard C. Wilson

ACKNOWLEDGEMENTS I wish to express my sincere gratitude to Professor R. L. Disney who has been a most inspiring counselor. Thanks are also due to Professor A. B. Clarke and Professor J. N. Darroch who read a preliminary version of the manuscript and offered several helpful suggestions. The Industry Program of the College of Engineering at the University of Michigan has provided help in the preparation and printing of this manuscript, and I would like to acknowledge their assistance. iii

TABLE OF CONTENTS

ACKNOWLEDGEMENTS ................................................ iii
LIST OF ILLUSTRATIONS ........................................... v
INTRODUCTION .................................................... 1

PART I. DECOMPOSITION OF A SEMI-MARKOVIAN STREAM INTO R STREAMS

Chapter
I. PRELIMINARIES ................................................ 22
II. DECOMPOSITION OF A SEMI-MARKOVIAN STREAM UNDER A MARKOVIAN DECISION RULE ..... 35
III. DECOMPOSITION OF A SEMI-MARKOVIAN STREAM UNDER A STATE DEPENDENT DECISION RULE ..... 59

PART II. BALKING IN THE QUEUEING SYSTEM SM/M/1

IV. QUEUEING SYSTEM SM/M/1 WITH BALKING ......................... 78
V. STREAM BALKING FROM THE QUEUEING SYSTEM SM/M/1 ............... 98
VI. GENERALIZATIONS AND SPECIALIZATIONS ......................... 122

PART III. APPLICATIONS

VII. A SIMPLE SYSTEM: AN EXAMPLE ................................ 141

BIBLIOGRAPHY .................................................... 158

LIST OF ILLUSTRATIONS

Figure                                                        Page
1. The Overflow Problem ......................................... 4
2. A System of Parallel Queues .................................. 6
3. Form of Some of the Matrices ................................. 85
4. Form of the Matrix P ......................................... 87
5. Matrices in the Overflow Problem ............................. 135
6. A Simple System .............................................. 141

INTRODUCTION

1. HISTORICAL BACKGROUND

Considerable attention has been given to the theory of simple queues. The easy problems have been solved, and the methods that have been developed are sufficiently powerful to analyze simple models consisting of a group of servers with one waiting line in front of them under some special types of inputs and service mechanisms. We refer to the treatises by L. Takacs [31] and T. L. Saaty [26] and their impressive lists of references. Although the subject of queues seems to be approaching a state of maturity, the general problem of a network of queues remains in its infancy. A network of queues is an arrangement of queues in parallel and in series. So far most of the interest has been in queues in series. The solution of such a system requires knowledge of the output process of a queue; furthermore, since this output becomes the input to the next queue, it is required (for reasons of analytical ease) that the output process have independent increments. This greatly limits the kinds of inputs and service times that can be assumed. Invariably, it is assumed that the input to the system is Poisson, all the servers are negative exponential, and all the queues are of infinite length, so that, as P. Burke [4] shows, the outputs are also Poisson processes. Since very little is known about queues with general inputs, and since the output is a dependent process in general, no substantial breakthrough has been made in this field of queues in series. We refer to [10,11,12,13,14,21,24,27] for discussions of series systems.

The field of parallel queues is equally barren. Aside from the studies of Palm and Disney on the overflow problem (which we will mention below), the only work is that of Kingman [17]. Kingman has studied the queueing properties of a system of two queues in parallel subject to a recurrent input where the arriving customers join the queue of shortest length (in case of a tie, decisions are based on the outcome of a coin tossing experiment). Each of the two queues has one negative exponential server and a waiting room of infinite size. No jockeying (changing queues after joining one) is allowed. Kingman shows that a steady state exists and gives the joint distribution of the queue sizes in the steady state by means of a double generating function. A different approach to queueing networks is taken by R. Syski and V. E. Benes. They put forward the idea that a network should be treated as a whole, and not as a combination of separate queues. They argue that this type of approach has an advantage in that it would eventually formulate general laws governing the behavior of the whole complex system of queues. For example, Syski [30] introduces the "generalized congestion process" {X(t), t > 0} whose random variable takes values in an abstract space of "patterns". A pattern describes the state of the network taking into consideration the actual situation prevailing at each queue of the network (number of customers, their waiting times, number of rejected customers, input law, queue disciplines, structure of interconnections, etc.). A partial order is then introduced into the space of all patterns, and the operations

of addition and scalar multiplication of patterns are defined. It is shown that this space forms a σ-complete vector lattice under the order topology, and the expectation of X(t) is defined by means of the McShane integral. In the same vein but with different interests, Benes [1,2] considers the mathematical problems related to the theory of connecting systems. He pays special attention to partial order, and goes on to study the algebraic and topological properties of connecting networks. He introduces three topologies (induced by partial ordering by inclusion, by its dual, and by a distance function on the set of states) and uses them to characterize the properties of non-blocking and rearrangeability. Although this approach to queueing networks may in time prove to be valuable, it will not offer any analytical tools for studying the characteristics of individual queues. Ideally, what is needed is a body of analytical methods that somehow enables one to study each queue in the network individually and yet is also able to bring out the system characteristics such as correlation among the queues. In this paper we contribute several such methods that can be used successfully in the analysis of parallel queues. The approach we have used may be called the method of decomposition. Instead of considering a system of R queues in parallel as a whole, we decompose the system so that each queue may be studied in isolation. This idea of decomposition seems to be the implicit motive in the studies on the decomposition of a Poisson process into R

streams by independent trials, by the "every other" principle, etc. Along the same line, Palm [20] has studied a system, with no storage, of R single server negative exponential queues in parallel subject to a recurrent input, where the overflows from the i-th queue become the arrivals to the (i+1)-st queue. He did this by finding the distribution of the time between overflows from the i-th queue as a function of the distribution of the time between arrivals to that queue. Since the overflows from the i-th queue are the arrivals to the (i+1)-st queue, he was able to repeat this for i = 1, 2, ..., R, starting with the arrivals into the first queue, which are the same as the arrivals into the system.

Figure 1. The Overflow Problem.

Disney [6] has generalized this problem by allowing storage in the system. He emphasized the usefulness of a decomposition approach to the study of such systems of parallel queues. He has shown, by a qualitative proof, that the overflow stream from a finite queue with negative exponential servers subject to a recurrent input is itself a recurrent process. He found the distribution of the time between overflows in the case of Poisson input, negative exponential servers, and a waiting room of size N. However, the problem of determining the distribution of the time between overflows in the case of a recurrent input has been left untreated. Since the overflows from a negative exponential server form a recurrent process

which is not Poisson even if the input is Poisson, one cannot use Disney's methods for the second queue or beyond, because the distribution function of the time between overflows from the second queue is unavailable. Disney's problem (and therefore Palm's problem as its special case) happens to be a special case of one of the problems investigated in this paper, so that we supply the distribution function needed, thus completing Disney's treatment.

2. PROBLEM IN GENERAL

Analysis of a system of parallel queues by the method of decomposition is essentially a study of the decomposition of the arrival process of that system. In general this decomposition of the arrival stream occurs as a result of a series of decisions made either by the customers themselves, or by the system, or by both. Thus the general decomposition problem is actually a series of decompositions effected one after another. To illustrate these ideas we refer to the figure below. A circle represents a point of decision making, either by the customer or by the management. Squares indicate queues, a queue consisting of several servers with a waiting room in front of them. Assume that at decision point D1 customers are sent in one of the three possible directions by some rule such as the "every other" rule. At decision points D2, D3, D4 customers decide whether or not to join the queue in front of them on the basis of the queue size there. For example, given that he has arrived at D3, a customer decides to join queue number three,

Q3, with a probability that depends on the number of people already there in Q3; if he decides not to join Q3, then he proceeds to D4 and the same things are repeated there.

Figure 2. A System of Parallel Queues.

The customers arriving at D5 will include, in general, some customers who needed to be served in one of the queues Q2, Q3, Q4 but were rejected for some reason such as the unavailability of space in the waiting room. Assume the system directs such customers towards D6 to be served in that branch. Notice that the customers arriving at D1 go through a series of decisions until they finally join a queue. These decisions are of a sequential character: only those customers who have been through points D1, D2, D3, D4 are allowed to make a decision at point D5. Hence, the general decomposition problem can be thought of as a series of decompositions effected one after another. Then, the

problem reduces to the investigation of the decompositions effected at each one of this sequence of decision points. For example, assuming that the service times in each queue are independent of each other and no recycling is permitted, we would proceed to analyze the system of Figure 2 as follows. First, given the process of arrivals at D1, we would determine each one of the three streams emerging at D1. Then the processes of arrivals at D2, D7, D8 are all known. Consider only the stream arriving at D2. Some of the customers will join queue number 2, and some will move on to D3. Here we need to analyze the properties of the queueing processes going on at Q2, as well as to characterize the stream of customers that did not join Q2. Once this balking process is characterized, the process of arrivals at D3 is known, and everything that was done at point D2 and Q2 can be repeated there. The same can be done for D4 and Q4 to analyze the processes of interest in Q4, and to characterize the stream of customers who balk at Q4 to become arrivals at D5. Next, given this arrival stream and the decomposition rule used at D5, we can obtain the streams emerging from D5, and continue the analysis further. This method of analysis requires that one determine each of the streams emerging from a decomposition of a known stream by a known rule. Clearly, decompositions by different rules require different tools of analysis. In general, we distinguish two types of decomposition rules.

Decision points D1, D5 are examples of what may be called "junctions". Decisions at such points are about "what direction to take". Although this decision may depend on the kind of service(s) offered in the subsequent queues in each direction, we assume that it is made without actual knowledge of the states of the queueing processes going on. This assumption is quite realistic in cases where the direction to be taken is dictated by the needs of customers for a special kind of service which may be offered only in the queues in certain directions (which is usually the case in industrial applications). Or, sometimes it may be economically infeasible, or even physically impossible, to obtain precise information about the sizes of the queues, etc., in the subsystems in each direction. The second type of decomposition rule is exemplified by D2, D3, D4. Such decisions are about "whether to join the queue". It is assumed that upon arrival a customer makes his decision as to whether to join the queue or balk on the basis of the number of people already present in the queue. Although it is convenient to think of the decision as the customer's decision, in many actual cases it is the system which makes the decision. If we assume that the queue discipline everywhere is first come, first served, then balking on the basis of queue length is equivalent to balking on the basis of expected waiting time, since the expected waiting time is a constant times the number of people already there. In the present paper we concentrate on the problem of characterizing the streams emerging from decompositions of these two types, as well as the queueing properties of queues with such inputs.

Before going on to describe the problems attacked in this paper we should point out some of the difficulties to be encountered. Notice that if we are to be general as to the order of decision points of different types, our analysis will have to assume that arrivals to a decision point are of the most general type of all streams encountered within such systems. In the system of Figure 2, for example, even if the arrivals to D2 form a Poisson process, the arrivals to D5 will form a general stream with dependent interarrival times. Therefore we should be prepared to deal with dependent arrival streams at points D5, D6, .... Another requirement to be met with respect to arrivals is the need to identify the customers with different needs and experiences. Returning to the system of Figure 2 for motivation, let us note that the customers arriving at D5 will include, in general, some customers who needed to be served in one of the queues Q2, Q3, Q4 but were rejected for some reason such as the unavailability of space in the waiting room. Assume the system directs such customers towards D6 to be served in that branch. Then we need somehow to identify such customers at point D5. One way to do this is to attach to each customer a tag on which the states of the queues from which he balked (or was rejected) are recorded. In this way we are able to keep a record of the needs of a customer as well as his experiences within the system. Such a system of identifying customers with their needs and experiences is invaluable for three reasons. First, it is of value whenever the system wants to act (or customers themselves make their

decisions) on the basis of conditions existing in the queues previous to the decision point. Second, it makes it possible to study the correlation between the queues of a system. For example, in the system of Figure 2, the conditions in the queue Q6 are closely related to those existing in queues Q2, Q3, Q4, since all of the customers that needed to be served at one of Q2, Q3, Q4 are directed to Q6 if they cannot be served in the queues of their first choice. Clearly, the load on Q6 is directly correlated with the load on Q2, Q3, Q4, and it may be of interest to study the nature of that correlation. Lastly, this identification of customers with their experiences brings an order to otherwise hopelessly complicated streams within the system. We have already mentioned that arrivals to D5 in Figure 2 will in general form a dependent stream even if the arrivals to D2 form a Poisson stream. But if the service times in Q2, Q3, Q4 are all negative exponentially distributed, then the two dimensional stochastic process composed of the times between arrivals to D5 and the queue sizes in Q2, Q3, Q4 at the instants of these arrivals to D5 turns out to be semi-Markovian. All ordered combinations of possible experiences make up the state space of this semi-Markovian process, and the times between arrivals can be called the times between transitions of the process. If we assume that the arrivals into the system form a semi-Markov process with a finite state space, and if only finite lengths of queues are allowed, then all the semi-Markov processes within the system will have finite state spaces, so that the states of a particular SMP can be numbered 1, 2, 3, ..., M, where M is finite. Then we

can talk about "customers of type j" in referring to those customers whose needs and experiences are identically the same as the combination numbered j. Some decision rules will treat customers of different types differently. An example of this was at D5, where certain types of customers were directed in one direction with a probability equaling unity. Another example would be the balking type of decomposition where the probabilities of balking depend not only on the queue size encountered but also on the type of customer. Dependence or non-dependence on the arrival process is a second criterion in classifying decision rules. Put together with the first classification, according to dependence on the queueing processes, we obtain the following four different types of decision rules in general.

1) Independent of the arrival process and the queueing processes. Examples are: the "every other" rule, assignment by independent trials, assignment by Markovian trials as in D1, etc.

2) Dependent on the arrival process but independent of the queueing processes. Examples are: assignment by customer type as in D5, etc.

3) Independent of the arrival process but dependent on some of the queueing processes. For example, in Palm and Disney's overflow problem customers join the queue if they can find space; i.e., whether they join the queue or overflow is dependent on queue size. Faced with R queues, customers may decide to join the queue with minimal length. Faced with a queue as at D3 of Figure 2, a customer may decide to join the queue or move on to the next queue depending on the size of the queue there.

4) Dependent both on the arrival process and on the queueing processes. Examples are: balking from a queue with a probability that depends both on the type of the customer himself and on the queue size at the instant of arrival. Another example would be balking from a queue on the basis of the virtual waiting time at the instant of arrival, where different types of customers react differently to the same waiting time.

In the present paper we are concerned with decompositions under these four different types of decision rules, as well as with the queueing processes in a system with these types of decision rules. Specifically, we consider systems of parallel queues with the following characteristics:

a) Arrivals into the system form a semi-Markov process with a finite number of states. A semi-Markov process with only one state is equivalent to an ordinary recurrent process, so that renewal type inputs are special cases of the general arrival process we assume.

b) Queue number j has r_j servers whose service times are negative exponentially distributed with mean μ_j; the waiting room is of size N_j, N_j finite. (We also give a generalization for general independent service times.)

c) Decision points are of the following four types:

1. The sequence of decisions forms a Markov chain which is independent of both the arrival process to that point and the queueing processes in the subsystems.

2. Dependent on the arrival process but independent of the queueing processes.

3. Independent of the arrival process but dependent on the size of the queue. Only one queue at a time is considered for possible entry.

4. Same as in 3, but the decision to balk from the queue depends on the customer type also.

d) Travel time between two sequential decision points is constant for all customers.

e) No two streams are ever superimposed to make one stream.

This, then, is the general problem of the paper. As we have mentioned earlier, our method of attack is to study the decomposition problem under each different rule separately. Precise descriptions of these problems and a summary of major results will be given in the next section.

3. ORGANIZATION OF THE PAPER AND SUMMARY OF RESULTS

The processes of interest in this paper have finite state spaces, so that we are able to use matrix theory and notation profitably. In the first half of Chapter I, we define and review concepts such as convergence of matrix sequences and series, derivatives and integrals of matrices over function spaces, etc. In the second half of the chapter, rigorous definitions and some properties of Markov renewal processes and semi-Markov processes are given, and some terminology regarding such processes is introduced. We make no claims of originality in this chapter; our object is to introduce some terminology, and to put together some material to make later references to it easier.

In the remaining chapters, streams of arrivals to decision points are taken to form semi-Markovian processes. We assume they have M states (i.e., there are M types of customers), the first M1 of which are ergodic and the remaining ones transient. Now consider a decision point, and, starting with an arrival, number the consecutive arrivals as 0, 1, 2, 3, .... Let Xn denote the time between the n-th and the (n-1)-st arrivals, and let Zn be the type of the n-th customer arriving at that point. We take the arrival process as well defined; i.e., we assume that

a) the initial distribution

p0(j) = Pr {Z0 = j};   p0(j) ≥ 0 (j = 1, 2, ..., M);   Σ_j p0(j) = 1,

and

b) the mass functions

Aij(x) = Pr {Zn = j, Xn ≤ x | Z_{n-1} = i};   Aij(x) ≥ 0;   Σ_j Aij(∞) = 1;   (i, j = 1, 2, ..., M; x ≥ 0)

are given.
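The following sketch is not part of the dissertation; it is a minimal illustration, under assumed numerical values, of what the pair (p0(j), Aij(x)) specifies. It simulates a two-type semi-Markov arrival stream in which the conditional sojourn laws are taken, purely for concreteness, to be exponential with hypothetical rates lam[i, j]; the transition matrix A, the rates, and the initial distribution p0 are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-type example: Aij = Aij(inf) are the transition
# probabilities, and the conditional sojourn laws Fij are assumed
# exponential, Fij(x) = 1 - exp(-lam[i, j] * x).
A = np.array([[0.6, 0.4],
              [0.3, 0.7]])          # row-stochastic: sum_j Aij(inf) = 1
lam = np.array([[1.0, 2.0],
                [0.5, 1.5]])
p0 = np.array([0.5, 0.5])           # p0(j) = Pr{Z0 = j}

def sample_stream(n, rng=rng):
    """Return customer types Z0..Zn and interarrival times X1..Xn."""
    z = [rng.choice(2, p=p0)]
    x = []
    for _ in range(n):
        i = z[-1]
        j = rng.choice(2, p=A[i])                    # Pr{Zn = j | Z_{n-1} = i} = Aij
        x.append(rng.exponential(1.0 / lam[i, j]))   # sojourn drawn from Fij
        z.append(j)
    return np.array(z), np.array(x)

types, gaps = sample_stream(200)
```

The arrays (types, gaps) returned here play the role of the sequence {Zn, Xn} used throughout the paper.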

In Chapter II, the problem of decomposing such an arrival stream into R streams according to a Markovian decision rule is investigated. Let Yn be the random variable associated with the n-th arrival, so that Yn = j if and only if the n-th customer is sent in the j-th direction (j = 1, 2, ..., R). Then we assume {Yn} is a Markov chain each of whose R states is persistent non-null. We further assume that the decisions neither depend on nor have any effect on the arrival process; i.e., we assume

Pr {Yn = j | Y0, Y1, ..., Y_{n-1}; Z0, ..., Zn; X1, ..., Xn} = Pr {Yn = j | Y_{n-1}}.

Then we show that each one of the R resulting streams is a generalized Markov renewal process with the same state space as the arrival process. Initial distributions and transition matrices are derived for each stream (thus making them well defined), and a complete classification of their states is given, showing that the resulting streams also have M1 ergodic states and M - M1 transient states. Some steady state results and the mean occupancy times are also given. The decomposition rule is a fairly general one. As special cases it includes such often used rules as the "every other" discipline (Yn = j if and only if n = j (mod R)) and the independent assignments rule. With such special rules and Poisson arrivals our results agree with the well known results.
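As an informal companion to the Markovian decision rule of Chapter II, the sketch below splits a simulated stream over R = 2 directions and extracts the sub-stream sent in a fixed direction r, recording its customer types and the times between its consecutive arrivals. The matrices Q and q0 are hypothetical, and the helper assumes the (types, gaps) arrays produced by the sampler sketched above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Markovian decision rule over R = 2 directions:
# Q[i, j] = Pr{Yn = j | Y_{n-1} = i}, chosen independently of the arrivals.
Q = np.array([[0.2, 0.8],
              [0.6, 0.4]])
q0 = np.array([0.5, 0.5])

def decompose(types, gaps, r):
    """Extract the sub-stream sent in direction r from a stream given by
    customer types Z0..Zn and interarrival times X1..Xn.  Returns the types
    of the customers routed to r and the times between their arrivals."""
    y = rng.choice(2, p=q0)                  # decision Y0 for customer 0
    sub_types, sub_gaps, elapsed = [], [], 0.0
    if y == r:
        sub_types.append(types[0])
    for n in range(1, len(types)):
        y = rng.choice(2, p=Q[y])            # Markovian decision Yn
        elapsed += gaps[n - 1]
        if y == r:                           # event E_r occurs at this arrival
            sub_types.append(types[n])
            sub_gaps.append(elapsed)         # time since the previous E_r arrival
            elapsed = 0.0
    return np.array(sub_types), np.array(sub_gaps)
```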

Chapter III treats a decomposition problem of the second type, i.e., the decision rule depends on the arrival process. Arrivals to the decision point are again taken to form an SMP with M1 ergodic and M - M1 transient states. The decision process {Yn; n = 0, 1, 2, ...} is such that

Pr {Yn = j | Y0, ..., Y_{n-1}; Z0, ..., Zn; X1, ..., Xn} = Pr {Yn = j | Zn}   (j = 1, 2, ..., R).

We then show that each one of the R resulting streams is again an SMP with the same state space as the arrival process. Initial distributions and transition matrices are derived for each stream, and a complete classification of their states is given. Steady state results and the mean occupancy times are shown in their relation to the original arrival stream. The decomposition rule is quite general in its dependence on the type of customer. Some special cases of this rule, for example when Pr {Yn = j | Zn} = Pr {Yn = j}, overlap with some special cases of the problem of Chapter II, and we note and compare these results.

In Chapter IV we examine the queueing properties of a finite queue (of maximum length N) with a single negative exponential server subject to an SMP input. As usual the SMP is taken to have M (M finite) states, some of which may be transient. Customers are permitted to balk; i.e., upon their arrival at the decision point they may or may not join the queue. This balking process is taken to be of the most general type, namely, a customer of type j balks with a probability b(j,k) if the number of people in the queue at the instant of his arrival is k. Of course, b(j,k) equals unity if k = N, i.e., if the queue is full. We embed the queue size process at the epochs of arrivals and write the transition matrix for the resulting semi-Markov process {Zn, Sn, Xn}, where Sn is the number of people in the queue just before the arrival of the n-th customer. Then {Zn, Sn} is a Markov chain.
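To make the embedding {Zn, Sn} concrete, here is a rough simulation, not taken from the dissertation, of the single-server finite queue with balking. The service rate MU, the waiting room size N, and the balking function balk_prob standing in for b(j,k) are all assumed values; the types and gaps arguments are the simulated arrival stream from the earlier sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

MU = 1.2   # hypothetical rate of the single negative exponential server
N = 5      # assumed waiting room size (maximum queue length)

def balk_prob(j, k):
    """A made-up b(j, k): balking grows with the queue size k, and type-1
    customers are assumed less patient than type-0 customers."""
    if k >= N:
        return 1.0                    # b(j, N) = 1: the queue is full
    return min(1.0, (0.15 if j == 0 else 0.30) * k)

def embed_at_arrivals(types, gaps):
    """Follow the embedded process (Zn, Sn), Sn being the number present
    just before the n-th arrival.  Returns the embedded states and the
    indices of the customers who balked."""
    s, chain, balkers = 0, [], []
    for n, (j, x) in enumerate(zip(types, np.r_[0.0, gaps])):
        # serve during the interarrival time x; the single server completes
        # successive exponential(MU) services until time runs out
        while s > 0 and x > 0.0:
            d = rng.exponential(1.0 / MU)
            if d > x:
                break
            x -= d
            s -= 1
        chain.append((j, s))                     # embedded state (Zn, Sn)
        if rng.random() < balk_prob(j, s):
            balkers.append(n)                    # customer n balks
        else:
            s += 1                               # customer n joins the queue
    return chain, balkers
```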

We investigate the properties of that chain extensively; we classify its states and show the existence of a limiting distribution. In this way we have the joint distribution of the queue size with the customer type. The marginal distribution of the queue size is then easy to obtain. Aside from the queue size process, we examine and give the distribution of waiting times under the first in, first out servicing discipline. We also investigate the law of the busy period. We give the distribution of the length of a busy period that was started by a customer of type j (j = 1, 2, ..., M), and find the absolute distribution of the length of a busy period from that. A special case where the arrival process is a recurrent input (an SMP with M = 1) has been partly investigated by Finch [9] previously. Finch showed the existence of the steady state distribution of the queue size and gave that distribution in terms of generating functions. A special case where balking probabilities depend only on the queue size can be investigated by setting b(j,k) = b(k) everywhere. A further special case where b(j,k) = 0 whenever k < N gives the properties of a finite queue with no balking.

Chapter V concentrates on the stream of customers who have balked from the queue of Chapter IV. If the k-th balking customer is the n-th arriving one, we set ξk = Zn and ψk = Sn; i.e., ξk is the type of the k-th balking customer and ψk is the state of the

queue size process when he arrived. Further, letting θk be the time between the k-th and the (k-1)-st balkings, we prove that

Pr {ξk = i, ψk = j, θk ≤ x | ξ0, ..., ξ_{k-1}; ψ0, ..., ψ_{k-1}; θ1, ..., θ_{k-1}} = Pr {ξk = i, ψk = j, θk ≤ x | ξ_{k-1}, ψ_{k-1}};

i.e., the balking stream is a Markov renewal process whose state space is the Cartesian product of the state spaces of the arrival process and the queue size process. The one-to-one mapping (ξk, ψk) → ψk·M + ξk maps this space into the set of positive integers, and we can call ψk·M + ξk the type of the k-th balking customer. We completely define the balking process, i.e., give the initial distribution and the transition matrix. Then we give a complete characterization of its states, give some steady state results, investigate the absolute distribution and the mean of the interbalking times, and derive the distribution of the number of customers joining the queue within a balking interval. This, as far as we know, is the first time the stream of customers balking from a queue has ever been investigated. We do this under quite general assumptions. The question "when is the balking stream of renewal type?" is answered by giving necessary and sufficient conditions.
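The re-indexing of the product state space can be illustrated by a small helper; the function names pack and unpack are mine, and the convention that types run over 1, ..., M while queue sizes start at 0 follows the text.

```python
M = 3  # number of customer types (1, ..., M), as in the text

def pack(xi, psi):
    """Map a pair (type xi in 1..M, queue size psi >= 0) to a single
    positive integer, mirroring the mapping (xi_k, psi_k) -> psi_k*M + xi_k."""
    return psi * M + xi

def unpack(code):
    """Invert the mapping: recover (xi, psi) from the packed type."""
    psi, xi = divmod(code - 1, M)
    return xi + 1, psi

assert all(unpack(pack(xi, psi)) == (xi, psi)
           for xi in range(1, M + 1) for psi in range(0, 6))
```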

In Chapter VI the results of Chapters IV and V are extended in some respects, and some special cases are reviewed. The results of Chapters IV and V were for a queue with a single negative exponential server. First, we generalize the results to the case where there are r servers (r < N) in the queueing system. Then we discuss the problems introduced by letting the single server of the queue of Chapter IV have general independent service times. We show that the processes of interest can still be studied by introducing a dummy variable, called the unexpended service time, at the epochs of arrivals. Next, some special cases of the problem of Chapter V are discussed. Palm's overflow problem [20] is reviewed and his formulas are recovered. Disney's extension [6] of Palm's problem is discussed, and the distribution of the time between overflows is obtained, first in the notation of this paper, and then by a formula using no special notation.

Chapter VII is reserved for illustrating the use of the method of this paper in the analysis of a system of parallel queues. A simple system that can nevertheless illustrate the methods of each of the Chapters II, III, IV, V is chosen, and the queueing processes, etc., are analyzed.

4. CONCLUSION

Notice that in all the decompositions studied the inputs are taken to be SMP's, and it turns out that each decomposed stream is also an SMP. So, no matter what the order of decision points within the system is, we are able to decompose the network stream by stream and queue by queue. Furthermore, in doing this we lose no information that may be used in studying the correlation between the queue sizes of the various queues. In this we are indebted to the balking customers for carrying the information about the size of the queue from which they balked, in addition to retaining the information they had when they arrived.

Note that we did not need to keep a record of the directions taken at decompositions of the types examined in Chapters II and III, since for any point within a system of parallel queues there exists only one combination of such directions that leads to that point. This, of course, is a result of not allowing any two streams to merge. Although this paper attacks quite a few of the most common problems encountered in parallel queueing systems, there remain several problems to be solved. One is the general problem of R queues in parallel where customers join the queue of shortest length. Another is the investigation of the queueing properties of a single server queue subject to an MRP input of impatient customers, and the characterization of the stream of customers that renege from that queue. The superposition problem for SMP's remains to be solved, since in many systems several streams are merged to make one stream. To date, this problem has not even been defined.

PART I DECOMPOSITION OF A SEMI-MARKOVIAN STREAM INTO R STREAMS

CHAPTER I

PRELIMINARIES

1. INTRODUCTION

In this paper we make considerable use of the theory of semi-Markov processes and Markov renewal processes. We assume semi-Markovian arrivals into the systems we study. We show that the balking stream, the decomposed streams, etc., are also semi-Markovian. Therefore, we give a rigorous definition and a listing of the more important properties of these processes in Section 3 of this chapter.

The processes of interest in this paper have finite state spaces. Therefore, we are able to use matrix theory and matrix notation profitably. A brief review of those parts of matrix theory which we use will be given next, in Section 2.

We make no claims of originality in this chapter. The section on matrix theory is largely from Mirsky [19], whereas the section on Markov renewal and semi-Markov processes is essentially a condensation of papers by Pyke [22,23] and Smith [28,29] on these topics.

2. SOME RESULTS IN MATRIX ANALYSIS

Our objective here is to define and develop concepts such as convergence of matrix sequences, matrix series, etc., as far as we make use of them in our work. Proofs and further work on these topics are found in Mirsky [19].

Throughout the following we speak of square matrices of degree M over the field of complex numbers.

a. On the location of eigenvalues

Let A be any matrix. The polynomial f(x) = det(xI - A) is called the characteristic function of the matrix A. Its zeros are called the eigenvalues of A. A vector y that satisfies yA = λy is said to be a left hand eigenvector of A (corresponding to the eigenvalue λ).

Let A and B be two matrices of the same degree M over the field of real numbers. Then we will write A < B if the inequality sign holds for all corresponding elements of A and B; i.e., if A = [Aij] and B = [Bij], then A < B if Aij < Bij for all i, j. A matrix A is non-negative if every element of A is a non-negative real number. Let A = [Aij] be a complex matrix, and let B = [Bij] be a non-negative matrix. Then B is said to dominate A if |Aij| ≤ Bij, (i, j = 1, 2, ..., M). (Notation: A << B.)

Theorem I.2.1. Let A = [Aij] be a non-negative matrix with Σ_j Aij < 1 for all i. Then all eigenvalues of A are less than unity in absolute value.

Theorem I.2.2. Let A be a complex matrix and let B be a non-negative matrix such that A is dominated by B (A << B). Let λ and μ be eigenvalues of maximum absolute value of A and B respectively. Then |λ| ≤ μ.
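A quick numerical sanity check of Theorems I.2.1 and I.2.2 (not part of the original text; the matrices are randomly generated only for the purpose of the illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random non-negative matrix with strict row sums < 1 (hypothesis of Theorem I.2.1).
A = rng.random((4, 4))
A = 0.9 * A / A.sum(axis=1, keepdims=True)   # every row now sums to 0.9 < 1

spectral_radius = max(abs(np.linalg.eigvals(A)))
assert spectral_radius < 1.0                  # conclusion of Theorem I.2.1

# Domination (Theorem I.2.2): A dominates the complex matrix C when |C_ij| <= A_ij.
C = A * np.exp(1j * rng.uniform(0, 2 * np.pi, A.shape))   # |C| equals A elementwise
assert max(abs(np.linalg.eigvals(C))) <= spectral_radius + 1e-12
```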

b. Convergence of a sequence of matrices

Definition. The sequence {A_n} (A_n = [A^(n)_ij]) of matrices converges to A = [Aij] (notation: A_n → A as n → ∞, or lim_{n→∞} A_n = A) if lim_{n→∞} A^(n)_ij = Aij (i, j = 1, 2, ..., M). A sequence that does not converge is said to diverge.

Theorem I.2.3. Let A be a matrix such that A << B. Then A^m → 0 if B^m → 0.

Theorem I.2.4. Let A be any matrix. Then A^m → 0 if and only if all the eigenvalues of A are less than unity in absolute value.

c. Convergence of a series of matrices

Definition. The series of matrices A_0 + A_1 + A_2 + ... is said to converge to, or to have the sum, S if the sequence of partial sums {S_n}, S_n = A_0 + A_1 + ... + A_n, converges to S as n → ∞. Clearly, the convergence of Σ_n A_n is the same as the convergence of the series Σ_n A^(n)_ij for all i, j.

Almost all the series we will encounter will be power series. An easy but very useful result is the following.

Theorem I.2.5. Let A be any matrix such that A^m → 0 as m → ∞. Then I - A is non-singular, and the power series I + A + A² + A³ + ... converges to (I - A)^{-1}.

More generally, there exists a very close relation between the matrix power series Σ_{m=0}^∞ f_m A^m and the corresponding scalar power series Σ_{m=0}^∞ f_m z^m.
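Theorem I.2.5 can likewise be checked numerically; the sketch below compares the partial sums of the Neumann series with the inverse computed directly. The scaling factor 0.8 is only there to force the spectral radius below one.

```python
import numpy as np

rng = np.random.default_rng(4)

# A matrix with spectral radius < 1, so that A^m -> 0 (hypothesis of Theorem I.2.5).
A = rng.random((3, 3))
A *= 0.8 / max(abs(np.linalg.eigvals(A)))

# Partial sums S_n = I + A + ... + A^n of the series I + A + A^2 + ...
S, term = np.eye(3), np.eye(3)
for _ in range(200):
    term = term @ A
    S += term

# The partial sums converge to (I - A)^{-1}.
assert np.allclose(S, np.linalg.inv(np.eye(3) - A))
```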

Theorem I.2.6. If all the eigenvalues of A lie within the circle of convergence of the power series φ(z) = Σ_{m=0}^∞ f_m z^m, then the matrix power series Σ_{m=0}^∞ f_m A^m converges absolutely. If at least one eigenvalue of A lies outside the circle of convergence of Σ_{m=0}^∞ f_m z^m, then Σ_{m=0}^∞ f_m A^m diverges.

This theorem is due to Weyr (1887). A more complete result is the following theorem, found by Hensel (1926).

Theorem I.2.7. Let A be a complex matrix. Then the matrix power series Σ_{m=0}^∞ f_m A^m converges if and only if all eigenvalues of A lie within or on the circle of convergence of the power series φ(z) = Σ_{m=0}^∞ f_m z^m and satisfy the further condition that, for every k-fold eigenvalue λ on the circle of convergence, the series φ^(k-1)(λ) is convergent, where φ^(n)(z) is the n-th derivative of φ(z).

Definition. Let A be any matrix such that Σ_m f_m A^m converges, and let φ(z) = Σ_m f_m z^m. Then φ(A) is defined as the sum of the series Σ_m f_m A^m.

d. Differentiation and integration of matrices

Let A(x) = [Aij(x)] be a matrix valued function. Another way of thinking of A(x) is as a matrix whose (i,j) entry is a single valued function Aij(x).

Definition. The matrix A(x) is said to be differentiable if all its elements Aij(x) are differentiable. Its derivative is then defined by the formula

dA(x)/dx = [dAij(x)/dx].

The simple formula d(z^n)/dz = n z^(n-1) fails to hold for every A(x) in general. Instead we have

d/dx (A(x) B(x)) = (dA(x)/dx) B(x) + A(x) (dB(x)/dx).

The following formula can be obtained by successive use of this formula; we have, for n ≥ 1,

d/dx A(x)^n = Σ_{k=0}^{n-1} A(x)^{n-k-1} (dA(x)/dx) A(x)^k.   (I.2.1)

Let A(x) be a matrix such that A(x)^n → 0 as n → ∞ uniformly for all x. Then by Theorem I.2.5 we can write (I - A(x))^{-1} = I + A(x) + A(x)² + A(x)³ + ...; then, by using the formula (I.2.1), we obtain

d/dx (I - A(x))^{-1} = (I - A(x))^{-1} (dA(x)/dx) (I - A(x))^{-1}.   (I.2.2)

Integration of matrices is defined in a way similar to that of differentiation. We denote by ∫ A(x) dx the matrix whose (i,j) entry is ∫ Aij(x) dx. The matrix A(x) is called integrable, continuous, or bounded, respectively, if all its elements have the property in question. One special integral is very helpful. Let A(x) be any matrix such that the Laplace-Stieltjes transforms of its elements exist.

Then

A*(s) = ∫_0^∞ e^{-sx} dA(x)

is called the Laplace-Stieltjes transform of A(x).

e. Kronecker product

Let A = [Aij] and B = [Bij] be any two matrices. Then the Kronecker product of A and B, denoted A ⊗ B, is defined as the partitioned matrix

A ⊗ B = [Aij B] = [ A11 B  ...  A1M B ]
                  [  ...   ...   ...  ]
                  [ AM1 B  ...  AMM B ].

It can be shown (cf. Thrall and Tornheim [32]) that if AC and BD are defined, then

(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).   (I.2.3)

f. Notation

Throughout this paper Roman capital letters A, B, ... will denote non-negative matrices; the notation A(x), B(x), ... will be used for matrix valued functions defined over the non-negative real numbers. The Laplace-Stieltjes transform of a matrix valued function will be denoted by the same letter with an asterisk, e.g., the Laplace-Stieltjes transform of A(x) will be A*(s). Occasionally we will use L{A(x)} for the same thing.
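The mixed-product rule (I.2.3) is easy to verify numerically with numpy.kron; the shapes below are arbitrary, chosen only so that the products AC and BD are defined.

```python
import numpy as np

rng = np.random.default_rng(5)

# Shapes chosen so that AC and BD are defined.
A, C = rng.random((2, 3)), rng.random((3, 2))
B, D = rng.random((4, 2)), rng.random((2, 4))

# The identity (I.2.3): (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.allclose(lhs, rhs)
```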

The identity matrix will be denoted by I irrespective of its degree. E will denote a column vector of ones. Where its size becomes important we will write it as a subscript; e.g., E_n is an n × 1 column vector of ones.

3. SEMI-MARKOV PROCESSES AND MARKOV RENEWAL PROCESSES WITH FINITE STATE SPACES

Consider a stochastic process {Z(t), t ≥ 0} which moves from one to another of a finite number of states, with the successive states forming a Markov chain, and the process staying in a given state a random length of time, the distribution function of which depends on the initial state as well as on the one to be visited next. Such a process is called a semi-Markov process. Associated with the process described above is a Markov renewal process which records, at each time t, the number of times the process {Z(t)} has visited each of the possible states up to time t. We will formalize these descriptions below.

We define a {Z,X} process as any two dimensional stochastic process {Zn, Xn; n ≥ 0} defined on a complete probability space (Ω, F, P) that satisfies X0 = 0 a.s. and

Pr {Z0 = j} = p0(j), j = 1, 2, ..., M;   p0(j) ≥ 0;   p0 E = Σ_{j=1}^M p0(j) = 1;

and

Pr {Zn = j, Xn ≤ x | Z0, Z1, ..., Z_{n-1}; X1, X2, ..., X_{n-1}} = Pr {Zn = j, Xn ≤ x | Z_{n-1}}.

The row vector p0 = [p0(1) p0(2) ... p0(M)] is called the vector of initial probabilities. Next let

Aij(x) = Pr {Zn = j, Xn ≤ x | Z_{n-1} = i},   n ≥ 1,

and let A(x) be the matrix whose (i,j) entry is Aij(x), i, j = 1, 2, ..., M; x ≥ 0. Then the Aij(x) are mass functions satisfying

Σ_{j=1}^M Aij(∞) = 1,   i = 1, 2, ..., M.

Further define

Hi(x) = Σ_{j=1}^M Aij(x),
ηi = ∫_0^∞ x dHi(x),
Aij = Aij(∞),
Fij(x) = Aij^{-1} Aij(x).

Then clearly,

Hi(x) = Pr {Xn ≤ x | Z_{n-1} = i},
Fij(x) = Pr {Xn ≤ x | Z_{n-1} = i, Zn = j},
Aij = Pr {Zn = j | Z_{n-1} = i}.

Now let Tn = X0 + X1 + X2 + ... + Xn, n ≥ 0. The instants Tn are called regeneration points for the process {Zn, Xn}.

Next, define an integer valued stochastic process {N(t), t ≥ 0} by N(t) = sup {n: Tn ≤ t}; i.e., N(t) is the number of regenerations that have occurred within the interval (0, t]. We further define Nj(t) as the number of times Zn = j for 0 ≤ n ≤ N(t). Let

𝒩(t) = [N1(t) N2(t) ... NM(t)].

The stochastic process {𝒩(t), t ≥ 0} is called the Markov renewal process defined by (p0, A(x)). Related to this Markov renewal process is the stochastic process which records the state of the process at each instant. Let Z(t) = Z_{N(t)}, t ≥ 0. Then the process {Z(t), t ≥ 0} is called the semi-Markov process defined by (p0, A(x)). Clearly there is a one-to-one correspondence between any pair of the processes {Zn, Xn}, {𝒩(t)}, and {Z(t)}, so that given one, the other two can be obtained easily. The process {Zn, Xn} is the easiest to formulate and therefore it is used more often. We will say the process {Zn, Xn} is equivalent to a semi-Markov process if there exists a semi-Markov process {Z(t)} defined by (p0, A(x)), where p0 is the vector of initial probabilities and A(x) is the transition matrix for the process {Zn, Xn}. Some of the properties of semi-Markov processes are given below as theorems. Proofs of these and further results are found in Pyke [22,23] and Smith [28,29].
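The passage from a sample path {Zn, Xn} to the processes N(t), Nj(t), and Z(t) can be expressed in a few lines; the sketch below is an illustration under my own conventions (states are numbered from 0, and types/gaps are arrays such as those produced by the earlier sampler), not a construction appearing in the dissertation.

```python
import numpy as np

def semi_markov_state(types, gaps, t):
    """Given a sample path (Z0, Z1, ...) and sojourns (X1, X2, ...),
    return Z(t) = Z_{N(t)} and the counting vector [N_1(t), ..., N_M(t)],
    where N(t) = sup{n : T_n <= t} and T_n = X_1 + ... + X_n."""
    T = np.concatenate(([0.0], np.cumsum(gaps)))          # regeneration points T_0, T_1, ...
    n_t = int(np.searchsorted(T, t, side="right") - 1)    # N(t)
    M = int(max(types)) + 1
    counts = np.bincount(types[: n_t + 1], minlength=M)   # N_j(t) for the visits 0..N(t)
    return types[n_t], counts

# Usage with the sampler sketched earlier:
# z_t, visit_counts = semi_markov_state(types, gaps, t=3.5)
```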

Theorem I.3.1. The process {Zn, Tn; n ≥ 0} is a Markov process, and the process {Zn; n ≥ 0} is a Markov chain. In particular, for 1 ≤ j ≤ M, n ≥ 0,

Pr {Z_{n+1} = j, T_{n+1} ≤ t | Z0, ..., Zn; T1, ..., Tn} = A_{Zn j}(t - Tn),

and

Pr {Z_{n+1} = j | Z0, Z1, ..., Zn} = A_{Zn j}(∞) = A_{Zn j}.

The process {Zn} is said to be the Markov chain corresponding to the process {Zn, Xn} or to the process {Z(t)}. The following properties of the process can easily be obtained from the definitions and the theorem above.

Pr {X_{n+1} ≤ x | Z0, Z1, ..., Zn} = H_{Zn}(x)   (I.3.1)

Pr {X_{n+1} ≤ x | Z0, Z1, ..., Zn, Z_{n+1}} = F_{Zn Z_{n+1}}(x)   (I.3.2)

Pr {X_{n1} ≤ x1, X_{n2} ≤ x2, ..., X_{nk} ≤ xk | Zn, n ≥ 0}
= Pr {X_{n1} ≤ x1, ..., X_{nk} ≤ xk | Z0, Z1, ..., Z_{nk}}
= Pr {X_{n1} ≤ x1 | Z_{n1-1}, Z_{n1}} ... Pr {X_{nk} ≤ xk | Z_{nk-1}, Z_{nk}}
= Π_{i=1}^k F_{Z_{ni-1} Z_{ni}}(xi).   (I.3.3)

The property given by (I.3.3) is of great importance; it states that X_{n1}, X_{n2}, ..., X_{nk} are mutually conditionally independent given Z_{n1-1}, Z_{n1}; Z_{n2-1}, Z_{n2}; ...; Z_{nk-1}, Z_{nk}.

a. Classification of states

We will now give a classification of the states of a Markov renewal process or semi-Markov process in much the same way as is done for Markov chains. We try to retain the terminology for Markov chains as introduced by Feller [8]. For t ≥ 0 let

Bij(t) = Pr {Z(t) = j | Z0 = i},
Gij(t) = Pr {Nj(t) > 0 | Z0 = i},

and

μij = ∫_0^∞ t dGij(t).

According to these definitions, Bij(t) is the probability that a semi-Markov process, initially in state i, is in state j at time t. On the other hand, Gij(t) is the probability distribution of the first passage time from state i to state j. Then μij is the expected value of the first passage time from state i to j; in particular, μii will be called the mean recurrence time of state i.

Definitions: a) State j is said to be reachable from state i if Gij(∞) is positive. b) States i and j are said to communicate if they are reachable from each other. c) Communication is an equivalence relation, and the equivalence classes are called classes.

d) A class is said to be closed if no state outside that class is reachable from any state inside. e) State i is persistent if Gii(∞) = 1. A state i is transient if it is not persistent. f) State i is said to be ergodic if it is persistent and μii is finite.

As can easily be seen, the properties defined here for a semi-Markov process or a Markov renewal process are very closely related to those of the corresponding Markov chain, namely the process {Zn}. In fact we have

Theorem I.3.2. In a semi-Markov process, (i) a state j is persistent (transient) if and only if the state j is persistent (transient) in the corresponding Markov chain; (ii) a class is closed if and only if it is closed in the corresponding Markov chain; (iii) a state j is ergodic if and only if state j is ergodic in the corresponding Markov chain and ηk is finite for all states k in the same class as j.

b. Special cases

A semi-Markov process with X1 = X2 = X3 = ... = 1 is equivalent to a Markov chain. On the other hand, if the Xn are independent and identically distributed as 1 - e^{-λx}, then the process {Z(t)} becomes a Markov process. If the number of states M is equal to one, then the Markov renewal process {𝒩(t)} becomes equivalent to an ordinary renewal process.

c. Generalized semi-Markov processes

In some cases the process {Zn, Xn} is such that the first step transition probabilities

Ãij(x) = Pr {Z1 = j, X1 ≤ x | Z0 = i}

are different from the probabilities

Aij(x) = Pr {Zn = j, Xn ≤ x | Z_{n-1} = i},   n ≥ 2.

In such a case, the process {Z(t)} is called a generalized semi-Markov process; the process {Zn, Xn} is said to be equivalent to a generalized semi-Markov process; and the process {𝒩(t)} is called a generalized Markov renewal process defined by (p0, Ã(x), A(x)).

4. CONCLUSION

In this chapter we have given some definitions and results that will become useful in our work in succeeding chapters. We have limited ourselves to those subjects that recur often enough in the paper, leaving theorems of a special nature to the later chapters where they are needed.

CHAPTER II

DECOMPOSITION OF A SEMI-MARKOVIAN STREAM UNDER A MARKOVIAN DECISION RULE

1. INTRODUCTION

Consider a semi-Markov process {Z(t), t ≥ 0}. Let the times the process {Z(t)} makes its transitions be T0, T1, T2, .... Assume that at each of the points {Tn} one of R possible events, E1, E2, ..., ER, can happen, where the occurrences of successive events form a Markov chain that neither depends on nor has any effect on {Z(t)}. For a fixed r, let the times the event Er happens be τ1, τ2, τ3, ..., and let τ0 = 0. In this chapter we are interested in the process {ξ(t), t ≥ 0}, where ξ(t) = Z(τk) for τk ≤ t < τ_{k+1} (k ≥ 1), and ξ(t) = ξ(0) for 0 = τ0 ≤ t < τ1. We will show that {ξ(t)} is a semi-Markov process with the same state space as {Z(t)}, and we will derive the functions that define the process {ξ(t)} from the ones defining {Z(t)}.

The application we have in mind is the following. Assume the input to a system of R parallel queues is semi-Markovian. Upon arrival a customer is assigned to one of the R queues so that the probability of the n-th customer being assigned to the j-th queue depends only upon where the (n-1)-st customer was assigned. Then the times

T0, T1, T2, ... are the times of arrivals to the system, event Er corresponds to the assignment of an arrival to the r-th queue, and τ1, τ2, ... are the times of arrivals to the r-th queue. Then {ξ(t)} is the arrival process to the r-th queue. This chapter, then, discusses the problem of decomposing a given semi-Markovian stream into R streams by a Markovian decision rule.

2. DEFINITIONS AND NOTATION

Let {Z(t), t ≥ 0} be a semi-Markov process. Let the times {Z(t)} makes its transitions be T0, T1, T2, ..., where 0 = T0 < T1 < T2 < T3 < .... Let Xn = Tn - T_{n-1}, n ≥ 1, and set X0 = 0. Let Zn = Z(Tn + 0), n ≥ 0. Assume there are M states of {Z(t)}, states 1, 2, ..., M1 forming an ergodic class and the remaining ones, M1+1, M1+2, ..., M, being transient states. Let a0 = [a0(1) a0(2) ... a0(M)] be the vector of initial probabilities, i.e.,

a0(j) = Pr {Z0 = j}   (j = 1, 2, ..., M);   a0 ≥ 0;   a0 E = 1.

Define the mass functions

Aij(x) = Pr {Zn = j, Xn ≤ x | Z_{n-1} = i}   (i, j = 1, 2, ..., M),

and let

A(x) = [Aij(x)],   A*(s) = ∫_0^∞ e^{-sx} dA(x).

The process {Z(t)} is well defined once a0 and either of A(x) or A*(s) are known. Further, let

Hi(x) = Σ_j Aij(x),   and   Ki = ∫_0^∞ x dHi(x),   i = 1, 2, ..., M.

Aij(x) is the joint probability that the process {Z(t)}, given that it is in state i at instant T_{n-1}, stays there for a time not exceeding x and then enters the state j. Hi(x) is the distribution of the occupancy time of state i; then Ki becomes the expected value of that occupancy time.

Now assume that at each point Tn one of R possible events, E1, E2, ..., ER, can happen. Let Yn be an integer valued random variable which assumes the value j if the event that happens at time Tn is Ej. We assume {Yn; n = 0, 1, 2, ...} is a Markov chain with stationary transition probabilities

Qij = Pr {Yn = j | Y_{n-1} = i},   i, j = 1, 2, ..., R,

and the initial distribution

q0(j) = Pr {Y0 = j},   j = 1, 2, ..., R.

We let Q be the square matrix of degree R whose (i,j) entry is Qij, and let q0 = [q0(1) ... q0(R)]. We assume that every state of {Yn} is persistent non-null, and that the process {Yn} neither depends on nor has any effect whatsoever on the process {Z(t)}.

Choose an integer r, 1 ≤ r ≤ R, and fix it once and for all. Let the times the event Er happens be τ1, τ2, τ3, ..., with 0 < τ1 < τ2 < ..., and set τ0 = 0. Let θk = τk - τ_{k-1}, k ≥ 1, and set θ0 = 0. We define a new stochastic process {ξ(t), t ≥ 0} such that ξ(t) = ξ0 for 0 ≤ t < τ1 and ξ(t) = Z(τk) for τk ≤ t < τ_{k+1} (k ≥ 1), where

Pr {ξ0 = i} = b0(i),   i = 1, 2, ..., M,

are given. The object of this chapter is to show that the process {ξ(t), t ≥ 0} is a generalized semi-Markov process, to determine it, and to analyze its properties with respect to those of {Z(t)} and {Yn}. Accordingly, let ξk = ξ(τk + 0), k ≥ 1, and define

B̃ij(x) = Pr {ξ1 = j, θ1 ≤ x | ξ0 = i},
Bij(x) = Pr {ξ_{k+1} = j, θ_{k+1} ≤ x | ξk = i},   k ≥ 1,

B̃(x) = [B̃ij(x)],   B(x) = [Bij(x)],

B̃*(s) = ∫_0^∞ e^{-sx} dB̃(x),   B*(s) = ∫_0^∞ e^{-sx} dB(x),

B̃ = B̃(∞) = B̃*(0),   B = B(∞) = B*(0).

3. THE PROCESS {ξ(t)}

In this section we will characterize the process {ξ(t)} and will determine it completely. First we give the following main result.

Theorem II.3.1. The process {ξ(t), t ≥ 0} is a generalized semi-Markov process defined by the triplex (b0, B̃(x), B(x)).

Proof. By the definition of a generalized semi-Markov process we only need to prove that

(i) Pr {ξ(0) = j} = b0(j),   j = 1, 2, ..., M;

(ii) Pr {ξ1 = j, θ1 ≤ x | ξ0} = B̃_{ξ0 j}(x); and

(iii) Pr {ξ_{k+1} = j, θ_{k+1} ≤ x | ξ0, ξ1, ..., ξk; θ1, θ2, ..., θk} = B_{ξk j}(x) for k ≥ 1.

Now, (i) and (ii) follow from the definitions. To prove the statement (iii), assume k ≥ 1. Then,

Pr {ξ_{k+1} = j, θ_{k+1} ≤ x | ξ0, ξ1, ..., ξk; θ1, θ2, ..., θk}
= Σ_{m=1}^∞ Pr {Z_{n+m} = j; X_{n+1} + ... + X_{n+m} ≤ x; Y_{n+1} ≠ r, Y_{n+2} ≠ r, ..., Y_{n+m-1} ≠ r, Y_{n+m} = r | Zn = ξk, Yn = r; ξ0, ..., ξk; θ1, ..., θk}
= Σ_{m=1}^∞ Pr {Z_{n+m} = j; X_{n+1} + ... + X_{n+m} ≤ x; Y_{n+1} ≠ r, ..., Y_{n+m-1} ≠ r, Y_{n+m} = r | Zn = ξk, Yn = r}   (since {Z(t)} is a S.-M.P.)
= Pr {ξ_{k+1} = j, θ_{k+1} ≤ x | ξk} = B_{ξk j}(x),

and the proof is complete.

Corollary II.3.1.A. The process {ξk; k = 0, 1, 2, ...} is a Markov chain.

The proof follows from Theorem I.3.1:

Pr {ξ_{k+1} = j | ξ0, ξ1, ..., ξk} = lim_{x→∞} Pr {ξ_{k+1} = j, θ_{k+1} ≤ x | ξ0, ξ1, ..., ξk}
= lim_{x→∞} Pr {ξ_{k+1} = j, θ_{k+1} ≤ x | ξk}
= Pr {ξ_{k+1} = j | ξk}
= B̃_{ξk j}(∞) if k = 0,   B_{ξk j}(∞) if k ≥ 1.

Corollary II.3.1.B. The two dimensional process {ξk, τk; k = 0, 1, 2, ...} is a Markov process.

The proof again follows from Theorem I.3.1:

Pr {ξ_{k+1} = j, τ_{k+1} ≤ x | ξ0, ..., ξk; τ0, τ1, ..., τk}
= Pr {ξ_{k+1} = j, θ_{k+1} ≤ x - τk | ξ0, ..., ξk; τ0, ..., τk}
= Pr {ξ_{k+1} = j, θ_{k+1} ≤ x - τk | ξk, τk}
= B̃_{ξk j}(x - τk) for k = 0,   B_{ξk j}(x - τk) for k ≥ 1.

This theorem and its corollaries characterize the process {ξ(t)}. Notice that a study of the process {ξn, θn} is equivalent to studying {ξ(t)}, since {ξn, θn} uniquely defines {ξ(t)}. Next we will derive the probability mass functions B̃ij(x) and Bij(x), thus completely determining the processes {ξn, θn} and {ξ(t)}, since the initial distribution vector is already given as b0. Before doing so, however, we will define several useful quantities and derive them.

a. m-step probabilities in {Zn, Xn}

Let the m-step transition probabilities for the process {Zn, Xn} be defined as

A^(m)_ij(x) = δij U(x) for m = 0,   and   A^(m)_ij(x) = Pr {Z_{n+m} = j, X_{n+1} + ... + X_{n+m} ≤ x | Zn = i} for m ≥ 1,

where δij = 0 if i ≠ j, δij = 1 if i = j, is the Kronecker delta, and where U(x) = 0 if x < 0, U(x) = 1 if x ≥ 0, is Heaviside's unit step function. Further, let

A^(m)(x) = [A^(m)_ij(x)],   L{A^(m)(x)} = ∫_0^∞ e^{-sx} dA^(m)(x).

Now, for m = 1, A^(1)_ij(x) = Aij(x) trivially. For m ≥ 2 we have

A^(m)_ij(x) = Pr {Z_{n+m} = j, X_{n+1} + ... + X_{n+m} ≤ x | Zn = i}
= Σ_{h=1}^M ∫_{(0,x)} Pr {Z_{n+m} = j; X_{n+2} + ... + X_{n+m} ≤ x - y; Z_{n+1} = h; y - dy < X_{n+1} ≤ y | Zn = i}
= Σ_{h=1}^M ∫_{(0,x)} Pr {Z_{n+m} = j; X_{n+2} + ... + X_{n+m} ≤ x - y | Z_{n+1} = h} Pr {Z_{n+1} = h; y - dy < X_{n+1} ≤ y | Zn = i}
= Σ_{h=1}^M ∫_0^x A^(m-1)_{hj}(x - y) dA_{ih}(y).

Using matrix notation and taking Laplace-Stieltjes transforms we obtain

L{A^(m)(x)} = I for m = 0,   A*(s) for m = 1,   A*(s) L{A^(m-1)(x)} for m ≥ 2.

From here, by a simple induction on m, we obtain

L{A^(m)(x)} = A*(s)^m,   m ≥ 0,

where, as usual, A*(s)^0 = I.

b. First passage times in {Yn}

Next, related with the Markov chain {Yn}, we define the distribution of the first passage time from the state Ei to the state Ej:

F^(m)_ij = Pr {Y_{n+1} ≠ j, Y_{n+2} ≠ j, ..., Y_{n+m-1} ≠ j, Y_{n+m} = j | Yn = i},   m = 1, 2, 3, ....

Then, defining

Q^(m)_ij = δij for m = 0,   Pr {Y_{n+m} = j | Yn = i} for m ≥ 1,

we easily have (cf. Feller [8])

F^(m)_ij = Qij for m = 1,   Q^(m)_ij - Σ_{n=1}^{m-1} F^(n)_ij Q^(m-n)_jj for m ≥ 2.

From the Chapman-Kolmogorov equations

Q^(m)_ij = Σ_{h=1}^R Q^(m-1)_ih Q_hj   (m ≥ 1),

Q^(m)_ij can be calculated, and then F^(m)_ij can be obtained by the above formulae. Let, however,

qij(z) = Σ_{m=0}^∞ Q^(m)_ij z^m   for |z| < 1,

and

fij(z) = Σ_{m=1}^∞ F^(m)_ij z^m   for |z| ≤ 1,

and further let q(z) = [qij(z)]. Then, since Q^(m)_ij is the (i,j) entry of Q^m, we have

q(z) = Σ_{m=0}^∞ Q^m z^m.

Lemma II.3.2. For |z| < 1, I - zQ is non-singular, and the matrix power series q(z) = Σ_{m=0}^∞ (zQ)^m converges to (I - zQ)^{-1}. Then qij(z) is the (i,j) entry of (I - zQ)^{-1}.

Proof. For |z| < 1, the matrix zQ is dominated by |z|Q. Then, since QE = E, |z|QE = |z|E < E. Thus, by Theorem I.2.1, all eigenvalues of |z|Q are less than unity in absolute value, and hence (by Theorem I.2.4) (|z|Q)^m → 0 as m → ∞. Then (by Theorem I.2.3) (zQ)^m → 0 as m → ∞, since zQ << |z|Q; and hence, by Theorem I.2.5, I - zQ is non-singular and the matrix power series q(z) = Σ_{m=0}^∞ (zQ)^m converges to (I - zQ)^{-1} for all z with |z| < 1. From the definition of q(z), its (i,j) entry is qij(z).

Next, from the formulas for F^(m)_ij, for the generating function fij(z) we have

fij(z) = Qij z + Σ_{m=2}^∞ (Q^(m)_ij - Σ_{n=1}^{m-1} F^(n)_ij Q^(m-n)_jj) z^m
= Σ_{m=1}^∞ Q^(m)_ij z^m - Σ_{n=1}^∞ F^(n)_ij z^n Σ_{m=n+1}^∞ Q^(m-n)_jj z^{m-n}
= qij(z) - δij - fij(z) (qjj(z) - 1),

so that

fij(z) qjj(z) = qij(z) - δij,

and hence

fij(z) = qij(z)/qjj(z) for i ≠ j,   fij(z) = 1 - 1/qjj(z) for i = j.

These equations are well known (cf. Kemperman [15]). Notice that, since {Yn} was assumed to have only persistent non-null states, fij(z) converges for |z| = 1. These results we put below as

Theorem II.3.3. The generating function fij(z) = Σ_{m=1}^∞ F^(m)_ij z^m is given by

fij(z) = qij(z)/qjj(z) if i ≠ j,   fij(z) = 1 - 1/qjj(z) if i = j,

where qij(z) is the (i,j) entry of (I - zQ)^{-1}, (|z| < 1).
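Theorem II.3.3 lends itself to a direct numerical check: compute frr(z) from the closed form 1 - 1/qrr(z) and compare it with a truncated version of the defining series, with F^(m) obtained from the first-passage recursion of the previous subsection. The chain Q, the state r, and the argument z below are hypothetical values chosen for the illustration.

```python
import numpy as np

# Hypothetical decision chain on R = 3 directions (all states persistent).
Q = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.4, 0.4, 0.2]])
r, z = 0, 0.9

# Closed form of Theorem II.3.3: f_rr(z) = 1 - 1/q_rr(z), q(z) = (I - zQ)^{-1}.
q = np.linalg.inv(np.eye(3) - z * Q)
f_closed = 1.0 - 1.0 / q[r, r]

# Truncated series sum_m F^(m)_rr z^m, with F^(m) from the recursion
# F^(1) = Q,  F^(m)_ij = Q^(m)_ij - sum_{n=1}^{m-1} F^(n)_ij Q^(m-n)_jj.
Qpow = [np.eye(3), Q.copy()]       # Qpow[m] = Q^m
F = [Q.copy()]                     # F[m-1]  = F^(m)
f_series = F[0][r, r] * z
for m in range(2, 200):
    Qpow.append(Qpow[-1] @ Q)
    Fm = Qpow[m].copy()
    for n in range(1, m):
        Fm -= F[n - 1] * np.diag(Qpow[m - n])    # columnwise: F^(n)_ij Q^(m-n)_jj
    F.append(Fm)
    f_series += Fm[r, r] * z ** m

assert abs(f_closed - f_series) < 1e-8
```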

c. Derivation of B̃ij(x) and Bij(x)

We now return to the derivation of Bij(x). From the proof of Theorem II.3.1 we already have

Bij(x) = Pr {ξk = j, θk ≤ x | ξ_{k-1} = i}
= Σ_{m=1}^∞ Pr {Z_{n+m} = j; X_{n+1} + ... + X_{n+m} ≤ x; Y_{n+1} ≠ r, ..., Y_{n+m-1} ≠ r, Y_{n+m} = r | Zn = i, Yn = r}.

Since the processes {Zn, Xn} and {Yn} are completely independent of each other, we can write

Bij(x) = Σ_{m=1}^∞ Pr {Z_{n+m} = j, X_{n+1} + ... + X_{n+m} ≤ x | Zn = i} Pr {Y_{n+1} ≠ r, Y_{n+2} ≠ r, ..., Y_{n+m-1} ≠ r, Y_{n+m} = r | Yn = r}
= Σ_{m=1}^∞ A^(m)_ij(x) F^(m)_rr,

or, in terms of Laplace-Stieltjes transforms in matrix notation,

B*(s) = Σ_{m=1}^∞ F^(m)_rr A*(s)^m.

Of course, for this formula to be meaningful, the matrix power series Σ_{m=1}^∞ F^(m)_rr A*(s)^m must converge. This we will prove below. First, however, we give the following

Lemma II.3.4. For the matrix power series Σ_{m=1}^∞ F^(m)_rr A*(s)^m (Re{s} ≥ 0) to converge, it is sufficient that Σ_{m=1}^∞ F^(m)_rr A^m converge.

Proof. Consider the (i,j) entries of A*(s)^m and A^m, i.e., L{A^(m)_ij(x)} and A^(m)_ij = A^(m)_ij(∞), respectively. We have

|L{A^(m)_ij(x)}| = |∫_0^∞ e^{-sx} dA^(m)_ij(x)| ≤ ∫_0^∞ |e^{-sx}| dA^(m)_ij(x) ≤ ∫_0^∞ dA^(m)_ij(x) = A^(m)_ij(∞) = A^(m)_ij

for all s with non-negative real parts. Hence, by Weierstrass' dominated convergence theorem, Σ_{m=1}^∞ F^(m)_rr L{A^(m)_ij(x)} converges absolutely if Σ_{m=1}^∞ F^(m)_rr A^(m)_ij converges. Restating in matrix language we obtain the Lemma.

Theorem II.3.5. The matrix power series B*(s) = Σ_{m=1}^∞ F^(m)_rr A*(s)^m converges to frr(A*(s)) for all s with Re{s} ≥ 0, where frr(z) = Σ_{m=1}^∞ F^(m)_rr z^m.

Proof. If Σ_m F^(m)_rr A*(s)^m converges, its sum is frr(A*(s)) by the definition of frr(A*(s)). To prove that Σ_m F^(m)_rr A*(s)^m converges it is sufficient, in view of Lemma II.3.4, to prove that Σ_m F^(m)_rr A^m converges. Now, by the assumptions about {Z(t)}, the states 1, 2, ..., M1 are ergodic and form one class. By Theorem I.3.2, these states are ergodic and form one class in the corresponding Markov chain {Zn}. Then, since A is the transition matrix for this chain, lim A^n exists. Hence, by Perron's theorem, the eigenvalue of A with maximum absolute value is 1 itself, which is a simple root of the characteristic function. Since state Er is persistent, frr(1) = 1. Hence, by the theorem of Hensel (Theorem I.2.7), Σ_m F^(m)_rr A^m converges. The proof is complete.

This completes the derivation of the transition probabilities Bij(x); actually we have obtained their Laplace-Stieltjes transforms. In matrix notation,

B*(s) = frr(A*(s)),   (II.3.1)

where frr(z) = Σ_{m=1}^∞ F^(m)_rr z^m.
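Formula (II.3.1) can be evaluated numerically once A*(s) is available in closed form. In the sketch below the sojourn laws are assumed exponential, so that A*(s)_ij = Aij λij/(λij + s); the kernel, the decision chain Q, the direction r, and the transform argument s are all invented for the illustration, and the series is simply truncated.

```python
import numpy as np

# Assumed semi-Markov kernel with exponential sojourns, so that the
# Laplace-Stieltjes transform is A*(s)_ij = Aij * lam_ij / (lam_ij + s).
A = np.array([[0.6, 0.4],
              [0.3, 0.7]])
lam = np.array([[1.0, 2.0],
                [0.5, 1.5]])
Q = np.array([[0.2, 0.8],          # hypothetical decision chain, R = 2
              [0.6, 0.4]])
r, s = 1, 0.3

A_star = A * lam / (lam + s)

# B*(s) = f_rr(A*(s)) = sum_m F^(m)_rr [A*(s)]^m, truncated at 300 terms.
Qpow = [np.eye(2), Q.copy()]
F = [Q.copy()]
B_star = F[0][r, r] * A_star
Apow = A_star.copy()
for m in range(2, 300):
    Qpow.append(Qpow[-1] @ Q)
    Fm = Qpow[m].copy()
    for n in range(1, m):
        Fm -= F[n - 1] * np.diag(Qpow[m - n])   # F^(n)_ij Q^(m-n)_jj
    F.append(Fm)
    Apow = Apow @ A_star
    B_star += Fm[r, r] * Apow

# At s = 0 the rows of B*(0) = B would sum to one; for s > 0 they are smaller.
print(B_star, B_star.sum(axis=1))
```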

To complete the identification of the process {ξ(t)} we next derive the first-step transition probabilities B̂_ij(x). Now,

    B̂_ij(x) = Pr{ξ_1 = j, Θ_1 ≤ x | ξ_0 = i}
            = Pr{Z_0 = j} Pr{Y_0 = r}                                                   if x = 0,
            = Σ_{m=1}^∞ Σ_{h=1, h≠r}^R Σ_{k=1}^M Pr{Y_0 = h} Pr{Y_1 ≠ r, Y_2 ≠ r, ..., Y_{m-1} ≠ r, Y_m = r | Y_0 = h}
              · Pr{Z_0 = k} Pr{Z_m = j, X_1+...+X_m ≤ x | Z_0 = k} + Pr{Z_0 = j} Pr{Y_0 = r}   if x > 0.

Hence,

    B̂_ij(x) = a_0(j) q_0(r) U(x) + Σ_{m=1}^∞ Σ_{h=1, h≠r}^R q_0(h) F_hr^(m) Σ_{k=1}^M a_0(k) A_kj^(m)(x).

Note that B̂_ij(x) is independent of i; thus B̂(x) will have all of its rows identically equal to

    a_0 q_0(r) U(x) + Σ_{m=1}^∞ Σ_{h=1, h≠r}^R q_0(h) F_hr^(m) a_0 A^(m)(x).

Taking Laplace-Stieltjes transforms we obtain as a row of B̂*(s)

    a_0 (q_0(r) I + Σ_{h=1, h≠r}^R q_0(h) Σ_{m=1}^∞ F_hr^(m) A*(s)^m).

By a theorem very similar to Theorem II.3.5, Σ_m F_hr^(m) A*(s)^m can be shown to converge to f_hr(A*(s)). Substituting this in the formula, we obtain

    B̂*(s) = E_M ⊗ (a_0 (q_0(r) I + Σ_{h=1, h≠r}^R q_0(h) f_hr(A*(s))))        (II.3.2)

where "⊗" stands for the Kronecker product (cf. page 27). The formulas (II.3.1) and (II.3.2) completely identify the process {ξ(t)} since b_0, the initial distribution vector, is given.

4. THE PROCESS {ξ_n}

The Markov chain {ξ_n}, (cf. Corollary II.3.1.A,) is closely related to the process {ξ(t)}. Especially, the classification of the states of the process {ξ(t)} is very much the same as the classification of the states of the process {ξ_n}. Furthermore, limiting properties of {ξ(t)} depend on {ξ_n} very strongly. In this section, we will examine the process {ξ_n}, classify its states, and find its limiting distribution, leaving the examination of {ξ(t)} to the next section.

First of all, notice that the state space for {ξ_n} is the same as the one for the chain {Z_n}. {ξ_n} is defined by (b_0, B̂, B) where B̂ = B̂*(0) and B = B*(0). Further, in studying {ξ_n} we will only concentrate on {ξ_n; n = 1,2,...} since the first-step transition probabilities do not depend on the initial distribution as given by b_0 anyway.

By the assumptions about the states of the process {Z(t)}, the states 1,2,...,M_1 are ergodic, and the remaining states M_1+1,...,M are transient. Then, by Theorem I.3.2, the states 1,2,...,M_1 are ergodic, and M_1+1,...,M are transient in the chain {Z_n}. Thus the transition matrix A of the chain {Z_n} can be partitioned accordingly to obtain

    A = [ A_1    0  ]
        [ A_3   A_2 ]

where A_1 is obtained by keeping only the first M_1 columns and rows of the matrix A, and the matrices A_2, A_3 are defined accordingly.

Now, since

    A^n = [ A_1^n       0   ]
          [ A_3^(n)   A_2^n ],

we have

    B = Σ_{n=1}^∞ F_rr^(n) A^n = [ Σ_n F_rr^(n) A_1^n             0          ] = [ B_1    0  ]
                                 [ Σ_n F_rr^(n) A_3^(n)   Σ_n F_rr^(n) A_2^n ]   [ B_3   B_2 ].

This partitioning suggests the theorem below.

Theorem II.4.1. State j of the process {ξ_n} is ergodic (transient) if it is ergodic (transient) in the process {Z_n}. Furthermore, all ergodic states in {ξ_n} are in the same class.

Proof. We will prove that the states 1,2,...,M_1 in {ξ_n} are ergodic and form one class, and that the states M_1+1, M_1+2,...,M in {ξ_n} are transient. Since the states 1,2,...,M_1 in {Z_n} are in the same class of ergodic states, the matrix A_1 is regular, i.e., there exists a positive integer k such that A_1^k has no zero elements. Now let h be the smallest integer for which F_rr^(h) is positive. Clearly, h ≥ 1. Then,

    B_1 = Σ_m F_rr^(m) A_1^m ≥ F_rr^(h) A_1^h.

Hence, since A_1^{hk} > 0 (A_1 being regular and hk ≥ k),

    B_1^k ≥ (F_rr^(h) A_1^h)^k = (F_rr^(h))^k (A_1^h)^k > 0.

Thus B_1^n > 0 for some n (for example n = k); hence all of the states 1,2,...,M_1 form one class, and they are all ergodic since not all of the states of a finite Markov chain (i.e., the chain defined by B_1 in part) can be transient.

To show that the states M_1+1,...,M of {ξ_n} are transient we need only to show that B_2^n → 0 as n → ∞. Now, let T A_2 T^{-1} be in the classical canonical form. Then, T A_2^n T^{-1} = (T A_2 T^{-1})^n is upper triangular, and hence so is

    T B_2 T^{-1} = T (Σ_n F_rr^(n) A_2^n) T^{-1}.

Now let the eigenvalues of B_2 be y_{M_1+1},...,y_M and those of A_2 be x_{M_1+1},...,x_M. Clearly, these are the diagonal elements of T B_2 T^{-1} and T A_2 T^{-1} respectively. Hence,

    y_i = Σ_n F_rr^(n) x_i^n.

Now, since the states M_1+1,...,M are transient in {Z_n}, A_2^n → 0 as n → ∞. Thus, (by Theorem I.2.4,) |x_i| < 1; hence,

    |y_i| = |Σ_n F_rr^(n) x_i^n| ≤ Σ_n F_rr^(n) |x_i|^n < Σ_n F_rr^(n) = 1.

But, by Theorem I.2.4 again, |y_i| < 1 implies that B_2^n → 0 as n → ∞. Hence, the states M_1+1,...,M are transient in {ξ_n}. Proof is complete.

From the above theorem we are assured of the existence of a limiting distribution for {ξ_n}. Let a be the limiting distribution vector for {Z_n}, i.e., let a = [a(1) a(2) ... a(M)] where a(j) = lim_{n→∞} Pr{Z_n = j}.

Theorem II.4.2. lim_{n→∞} B^n exists, and lim_{n→∞} B^n = lim_{n→∞} A^n; or, equivalently, lim Pr{ξ_n = j} = lim Pr{Z_n = j} for j = 1,2,...,M.

Proof. The existence of lim B^n is obvious in view of the preceding theorem. Now let A^∞ = lim A^n. Then, clearly, every row of A^∞ is equal to the limiting distribution vector a, and aA = a and aE = 1. Now from B = Σ_n F_rr^(n) A^n we have, since Σ_n F_rr^(n) = 1,

    aB = a Σ_n F_rr^(n) A^n = Σ_n F_rr^(n) aA^n = Σ_n F_rr^(n) a = a.

Since the limiting distribution for {ξ_n} is unique, and aB = a and aE = 1, a is the limiting distribution vector for {ξ_n} also. Thus, every row of B^∞ = lim B^n is equal to a, and hence, B^∞ = A^∞. Proof is complete.

The similarity of the chains {ξ_n} and {Z_n} is obvious since the process {Y_n} is completely independent of the process {Z_n}. The difference in the processes {ξ(t)} and {Z(t)} lies in their respective interval processes. We examine this in the next section.
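Theorem II.4.2 is easy to check numerically. The sketch below is a minimal illustration under assumed data: a hypothetical ergodic chain A and a geometric recurrence-time distribution F_rr^(n) = q p^(n-1) (the independent-trials rule discussed in Example II.6.2 below); the stationary vector a of A is seen to be stationary for B = Σ_n F_rr^(n) A^n as well.

```python
import numpy as np

# Hypothetical ergodic arrival chain A and geometric recurrence times
# F_rr^(n) = q p^(n-1) for the decision chain.
A = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.4, 0.4, 0.2]])
q, p = 0.3, 0.7

# Stationary vector a of A:  aA = a, aE = 1.
w, v = np.linalg.eig(A.T)
a = np.real(v[:, np.argmin(np.abs(w - 1.0))])
a /= a.sum()

# B = sum_n F_rr^(n) A^n, truncated far enough that the tail is negligible.
B = sum(q * p**(n - 1) * np.linalg.matrix_power(A, n) for n in range(1, 500))

print(a @ A)   # equals a
print(a @ B)   # Theorem II.4.2: also equals a (up to truncation error)
```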

5. CLASSIFICATION OF THE STATES OF {ξ(t)}

We have already given the classification of the states of the corresponding Markov chain {ξ_n}. Then, by Theorem I.3.2, the states M_1+1, M_1+2,...,M are all transient in the process {ξ(t)} since they are transient in the corresponding Markov chain {ξ_n}. On the other hand, to say anything about the ergodicity of the states 1,2,...,M_1 we need to examine the mean occupancy times in those states.

Let

    G_i(x) = Pr{Θ_{n+1} ≤ x | ξ_n = i},  n ≥ 1,      η_i = ∫_0^∞ x dG_i(x),
    Ĝ_i(x) = Pr{Θ_1 ≤ x | ξ_0 = i},                  η̂_i = ∫_0^∞ x dĜ_i(x).

G_i(x) is the distribution function of the occupancy times of state i and η_i is its mean value, both defined for steps later than the first one. Ĝ_i(x) and η̂_i are the same things as G_i(x) and η_i except that they are defined for the first step. We have already defined the analogs of G_i(x) and η_i for the process {Z(t)} as H_i(x) and μ_i on page 37. Clearly,

    H_i(x) = Σ_j A_ij(x),   G_i(x) = Σ_j B_ij(x),   Ĝ_i(x) = Σ_j B̂_ij(x).

Now, letting μ, η, η̂ be the column vectors with μ_i, η_i, η̂_i as elements respectively, we have

    μ = - d/ds A*(s) E |_{s=0}.

Now,

    η = - d/ds B*(s) E |_{s=0} = - d/ds (Σ_{n=1}^∞ F_rr^(n) A*(s)^n) E |_{s=0}
      = - Σ_{n=1}^∞ F_rr^(n) [d/ds A*(s)^n]_{s=0} E.

Now, using the formula I.2.1,

    d/ds A*(s)^n = Σ_{k=0}^{n-1} A*(s)^{n-k-1} (dA*(s)/ds) A*(s)^k,

so that

    [d/ds A*(s)^n]_{s=0} E = Σ_{k=0}^{n-1} A^{n-k-1} [dA*(s)/ds]_{s=0} A^k E = Σ_{k=0}^{n-1} A^{n-k-1} [dA*(s)/ds]_{s=0} E,

since A^k E = E for all k. Further noticing that -[dA*(s)/ds]_{s=0} E = μ and that Σ_{k=0}^{n-1} A^{n-k-1} = Σ_{k=0}^{n-1} A^k, we obtain

    η = Σ_{n=1}^∞ Σ_{k=0}^{n-1} F_rr^(n) A^k μ.                            (II.5.1)

Now multiply both sides of (II.5.1) by the limiting distribution vector a, and we get, since aA^k = a,

    aη = Σ_{n=1}^∞ Σ_{k=0}^{n-1} F_rr^(n) aA^k μ = Σ_{n=1}^∞ Σ_{k=0}^{n-1} F_rr^(n) aμ
       = (Σ_{n=1}^∞ n F_rr^(n)) aμ = λ_r aμ                                (II.5.2)

where λ_r = Σ_{n=1}^∞ n F_rr^(n) is the mean recurrence time for state E_r in the Markov chain {Y_n}. Next, multiplying both sides of (II.5.1) by I-A, and noting that (I-A)(I+A+A^2+...+A^{n-1}) = I - A^n, we get

    (I-A)η = Σ_{n=1}^∞ F_rr^(n) (I-A)(I+A+A^2+...+A^{n-1}) μ = Σ_{n=1}^∞ F_rr^(n) (I - A^n) μ = (I - B) μ.     (II.5.3)

Notice that both I-A and I-B are of the same rank, namely of rank M-1. Hence, the non-homogeneous system of linear equations (II.5.3) in the M unknowns η_1, η_2, ..., η_M does have a solution. Since a is independent of the rows of I-A or I-B, the equation system

    (I-A)η = (I-B)μ,
    aη = λ_r aμ                                                            (II.5.4)

has a unique solution for η. (Clearly, one of the first M_1 equations provided by (I-A)η = (I-B)μ is superfluous and can safely be discarded.)
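A minimal computational sketch of (II.5.4) follows, under the assumption that A, B, a, μ and λ_r are already available (for instance from the kind of computations sketched above); the helper name and the choice of which redundant equation to drop are ours, made for illustration only.

```python
import numpy as np

def mean_occupancy_times(A, B, a, mu, lam_r):
    """Solve (I-A) eta = (I-B) mu together with  a eta = lam_r * a mu,
    dropping the first (redundant) equation of the matrix system."""
    M = A.shape[0]
    lhs = np.vstack([(np.eye(M) - A)[1:, :], a])
    rhs = np.concatenate([((np.eye(M) - B) @ mu)[1:], [lam_r * (a @ mu)]])
    return np.linalg.solve(lhs, rhs)
```

The returned vector is η; by (II.5.2) the weighted mean aη automatically equals λ_r aμ, which serves as a useful numerical check.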

We next need the mean occupancy times in the first step. From our derivation of B̂_ij(x) we found that B̂_ij(x) were independent of i. Thus, the expected value of Θ_1 is the same for all values of ξ_0, i.e., η̂_1 = η̂_2 = ... = η̂_M. Then,

    η̂_1 = - d/ds { a_0 (q_0(r) I + Σ_{h≠r} Σ_{m=1}^∞ q_0(h) F_hr^(m) A*(s)^m) } E |_{s=0}
        = - Σ_{h≠r} Σ_{m=1}^∞ q_0(h) F_hr^(m) a_0 Σ_{k=0}^{m-1} A^{m-k-1} [dA*(s)/ds]_{s=0} A^k E
        = Σ_{h≠r} Σ_{m=1}^∞ q_0(h) F_hr^(m) Σ_{k=0}^{m-1} a_k μ                      (II.5.5)

where a_k = a_0 A^k is the row vector of probabilities Pr{Z_k = i}.

Lemma II.5.1. The expected value of Θ_1 is finite.

Proof. The expected value of Θ_1 is given by (II.5.5). To show that it is finite, let μ_max = sup{μ_i; i = 1,2,...,M}, and λ_max = sup{λ_hr; h = 1,2,...,R}, where λ_hr = Σ_m m F_hr^(m) is the mean first passage time from state E_h to E_r in the chain {Y_n}. Then clearly μ ≤ μ_max E, and hence,

    η̂_1 = Σ_{h≠r} Σ_{m=1}^∞ q_0(h) F_hr^(m) Σ_{k=0}^{m-1} a_k μ ≤ μ_max Σ_{h≠r} Σ_{m=1}^∞ q_0(h) F_hr^(m) Σ_{k=0}^{m-1} a_k E
        = μ_max Σ_{h≠r} q_0(h) Σ_{m=1}^∞ m F_hr^(m) = μ_max Σ_{h≠r} q_0(h) λ_hr ≤ μ_max λ_max.

Since states M_1+1,...,M are transient in {Z(t)}, μ_{M_1+1},...,μ_M are finite; and since states 1,2,...,M_1 are ergodic in {Z(t)},

μ_1,...,μ_{M_1} are finite by Theorem I.3.2. Hence, μ_i is finite for all i, and hence μ_max is finite. On the other hand, since all of the states of {Y_n} are persistent non-null, λ_hr is finite for all h, and hence λ_max is finite. Thus, η̂_1 ≤ μ_max λ_max < ∞ also.

Lemma II.5.2. The mean occupancy times η_i are finite.

Proof. From the proof of the preceding lemma, μ_i is finite for all i. Also, since E_r is persistent non-null in {Y_n}, λ_r is finite. Then, the solution η of the system of linear equations (II.5.4) is finite, i.e., η_i is finite for all i.

Theorem II.5.3. State j of the process {ξ(t)} is ergodic (transient) if it is ergodic (transient) in the process {Z(t)}. Furthermore, all ergodic states in {ξ(t)} are in the same class.

Proof follows from Theorems II.4.1 and I.3.2, and Lemmas II.5.1 and II.5.2.

6. SPECIAL CASES AND APPLICATIONS

Consider R queues in parallel. Assume the arrivals into such a system form a semi-Markov process. Each arriving customer is assigned to one of the queues on the basis of the assignment of the last customer. Then, we have shown that the arrivals to the r-th queue form a semi-Markov process.

Example II.6.1. One of the most common rules of assignment is the "every other" rule, i.e., the first customer goes to the first queue, the second customer to the second queue, and so on. In this case, the transition probabilities for the chain {Y_n} are given as

    Pr{Y_{n+1} = j | Y_n = i} = 1 if j ≡ i + 1 (mod R), and 0 otherwise.

Then, clearly,

    F_rr^(n) = 1 for n = R, and 0 otherwise,
    F_1r^(n) = 1 for n = r-1, and 0 otherwise  (r > 1),

and

    f_rr(z) = z^R,     f_1r(z) = z^{r-1}  (r > 1).

Since Pr{Y_0 = 1} = 1, for the r-th queue we obtain B̂*(s) = E_M ⊗ (a_0 A*(s)^{r-1}), (cf. formula II.3.2,) and B*(s) = A*(s)^R, (cf. formula II.3.1.)

This last formula is very well known in at least one special case: when the arrivals form a Poisson process. Then, there exists only one state of {Z(t)}, and the X_n are distributed as 1 - e^{-αx} where α is the mean arrival rate. It is well known that the arrivals to any of the queues form an Erlang-R process; i.e., A*(s) = α/(α+s) and thus B*(s) = [α/(α+s)]^R.
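The Erlang-R specialization can be checked by a short simulation. The sketch below is illustrative only: it assumes a Poisson stream of rate α split over R queues by the "every other" rule, so the spacings seen by any one queue should have the Erlang-R mean R/α and variance R/α².

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, R, n = 2.0, 3, 300_000

gaps = rng.exponential(1.0 / alpha, size=n)   # interarrival times of the pooled stream
arrivals = np.cumsum(gaps)
queue_r = arrivals[::R]                       # every R-th customer is routed to this queue
spacings = np.diff(queue_r)

print(spacings.mean(), R / alpha)             # both close to 1.5
print(spacings.var(), R / alpha**2)           # Erlang-R variance R/alpha^2 = 0.75
```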

Example II.6.2. Another special assignment rule is to assign customers to queues on the basis of independent trials, i.e., {Y_n} is an independent trials process. Then, letting Pr{Y_n = j | Y_{n-1} = i} = Pr{Y_n = j} = q_j, and writing p_r = 1 - q_r, we get

    F_rr^(n) = q_r p_r^{n-1},

so that

    f_rr(z) = q_r z (1 - z p_r)^{-1}.

Thus, by formula II.3.1,

    B*(s) = q_r A*(s) (1 - p_r A*(s))^{-1}.

If the arrival process is a recurrent process, i.e., {Z(t)} has only one state, then the semi-Markov process {ξ(t)} is equivalent to a recurrent process. Then, the Laplace-Stieltjes transform of the distribution of the time between arrivals to the r-th queue is B*(s), for which an explicit formula is given above. This result, even if known, has never been published. A further special case of this, however, is very well known; and that is where the arrivals form a Poisson process. In that case A*(s) = α/(α+s), so that

    B*(s) = q_r (α/(α+s)) (1 - p_r α/(α+s))^{-1} = q_r α / (α + s - p_r α) = q_r α / (q_r α + s),

i.e., the arrivals to the r-th queue are Poisson with parameter q_r α.
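The closing identity can also be verified numerically. A minimal check, assuming Poisson arrivals with rate α and an independent-trials rule with retention probability q_r:

```python
import numpy as np

alpha, q_r = 2.0, 0.3
p_r = 1.0 - q_r
for s in (0.1, 1.0, 5.0):
    A = alpha / (alpha + s)                  # A*(s) for Poisson(alpha) arrivals
    lhs = q_r * A / (1.0 - p_r * A)          # B*(s) from formula II.3.1
    rhs = q_r * alpha / (q_r * alpha + s)    # Poisson with parameter q_r * alpha
    print(s, lhs, rhs)                       # the last two columns agree
```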

7. CONCLUSION

In this chapter we have solved the problem of identifying the process that results from choosing, according to a Markovian rule, some of the points of a known semi-Markov process. Furthermore, we have discussed some of the more important properties of the resulting process, as well as deriving formulas for the mean occupancy times, etc. Semi-Markov processes arise naturally as the balking streams from negative exponential servers subject to recurrent inputs as well as semi-Markovian inputs, as will be shown in Chapter IV. If one is interested in decomposing such a stream into R sub-streams, this chapter provides an analytical tool. One can, by means of the tools provided here, examine each one of the R sub-streams, so that the sub-systems with these streams as inputs may be studied in isolation from each other.

In this chapter the process {Y_n} (the decision process, or the assignment rule) was taken to be completely independent of {Z(t)}, the arrival process. In the next chapter we will discuss the same problem with the process {Y_n} being dependent on the arrival process.

CHAPTER III DECOMPOSITION OF A SEMI-MARKOVIAN STREAM UNDER A STATE DEPENDENT DECISION RULE 1 INTRODUCTION Consider a semi-Markov process {Z(t), t > O}. Let the times the process makes its transitions be To, T1, T2,... Assume that at each one of the points {Tn}, one of R possible events, E1,E2,...,ER, can happen. The probability of Ej happening at the instant Tn is conditional on the state of the process {Z(t)} at that instant, but what event occurs at the instant Tn has no effects on the process {z(t)} For a fixed r, let the times the event Er happens be T1,T2,T3..., and set To = 0. In this chapter we are interested in the process {f(t)}, where:(t) = Z(Tk) for Tk < t < Tk+l, k > 1. We will show that {6(t)} is a semi-Markov process, will identify it, and examine its properties. The application we have in mind is as follows. Assume the input to a system of R queues is a semi-Markov process. Upon arrival, a customer is assigned to one of these R queues depending, in part, on the customer's type, Then {Z(t)} is the arrival process, states of {Z(t)} are the customer types, event Er is the event that a customer is assigned to the r-th queue; To,T1,T2,... are the times of arrivals into the system, and T1,T2,... are the times of arrivals to -59 -

-6o the r-th queue. By means of this chapter, we will be able to find the arrival processes to each one of the R queues, so that, each queue can be studied in isolation. 2o DEFINITIONS AND NOTATION Let {Z(t), t > 0} be a semi-Markov process with M states, the states 1,2, M.,M1 forming one ergodic class, and the remaining ones, M1+l, oo,M, being transient states. Let the times {Z(t)} makes its transitions be To,T1, T2,... where 0 = To < T1 < T2 <... Let X = T -T n > 1, and set Xo = O. Let Zn = Z(Tn+O) Let ao = [a (l) a0(2)... a0(M)] be the vector of initial probabilities, and let A(x) be the transition matrix whose (i,j) entry is A. (x) Pr {Z = j X < x Zn- i} Further, define A*(s) as the Laplace-Stieltjes transform of A(x), i.e., A*(s) = L{A(x)} = e dA(x); let Hi() = dA(x); let Hi(x) =pl Aij(x) and pi = f x dHi(x) Now assume that at each point Tn one of R possible events, EE2,.o.,ER, can happen. Let Yn be an integer valued random variable which assumes the value j if the event that happens at time Tn is Ej o We assume Yn depends only on Zn i.e., Pr {Yn=j Yo'Yl' -,.Ynl; ZoZl,... Zn; XX2,...Xn} = Pr {Yn=j Zn} We assume that what value Yn takes has no effect on the future of the process {Z(t)} or {Y}. Now choose an integer r, 1 r < R, and

-61 - fix it once and for all. Let qj = Pr {Yn to be the column vector whose j-th entry is satisfy 0 < qj < 1; further we assume that ql, q2 o.e " qM1 is positive. This assumption the problem meaningful, (cf. p. 69.) r Z = j}; and define q q. Clearly, q. (j=1,2,...,M) J J at least one of is made in order to make Let the times the event Er happens be T1, T2..., and set To = 0. We define a new stochastic process {f(t), t > 0}, such that, 0J if 0 < t < T ((t) = {~ Z(Tk) if Tk < t < Tk+l k > 1, where Pr {fo=i} = bo(i) i=l,2,...,M, are given We let n = ((Tn+0), n > 0, and Gn = Tn-Tn- n > 1, and set 0o=0 The object of this chapter is to show that the process {f(t)} is a generalized semi-Markov process, to identify it, and examine some of its properties. Accordingly we let B. (x) B. (X) B(x) B*(s) v ) B= B(~) = Pr {l = j, = Pr {n+l = j ' = [Bij (x)], = f e dB(x), o = B*(o), l) <x I o = i}, n+l < x I n = i}, B(x) = [Bij(x)]; B*(s) = f e- dB(x) B = B() = B B = B(~) = B*(O) n>l; Further, let Gi(x) = j=1 Bj (x), Gi(x) = B (); ni =d 1~ x n i = xdGB(x) ^ = f x dGi(x), and ~i = f x dGi(x) 0io

-62 - 3. THE PROCESS {1(t)} In this section we will characterize and define the process I (-t)I Theorem III.3.1. The process {f(t)} is a generalized semi-Markov process defined by the triplex (bo, B(x), B(x)) Proof. Pr {o =i} = bo(i) follows by the definition of bo(i). Pr {l=ji, G1 < x o = i} = Bj (x) again by definition. Next, we need to prove that, for k > 1, Pr { 1k+l=j' Qk+l x o'l''' k; 01'02'''0k} Pr {Uk+l=j G ek+l < x I k Now, Pr {k+l=j Gk+1 < x I o',.. k; '1,Q 2, k} = = m-l Pr Zn+m=j; Xn+l+ +Xn+m < x Yn+14r, Yn+2r,...,Yn+m-lr Yn+m=r I| o'''.. k; Gl'02''',k; Zn = k; Yn = r} >1 Pr {<Zz ~ X.. K X Yn+lSr,...,Y $, = m=l r Znm=j; Xn+l+.'+Xm x n+l' * n+m-l 4 r Yn+m=r Zn = k = r} (since {Z(t)} is a S.-M.P.) = Pr {k+l = j k+l x I k = Bkj(x) The proof is complete. Corollary III.3.1.A. The process {fk; k = 0,1,2,...} is a Markov chain defined by (bo, B, B).

-63 - Proof. Pr {fo=j} = bo(j) by definition. Pr { l=j = Pr {l =j, GI 1 I o} = B j(m ) = B j. Next, for k > 1, Pr k>l = * = r k+ r {~k+lj o'''" kk} = Pr {~k+lj, @k+l - = Corollary III.3.1.B. The process {fkGk; k > O} is a Markov processO Proof follows from the general theory, (see for example the proof of Corollary II.3.1.B.) This theorem and its corollaries completely characterize the process {f(t)} * Next we will derive the matrices B(x) and B(x) from already given things such as A(x), q, etc. First, we define ij(x) = Pr {Zn+lj, Xn+ x Yn r Zn = i} Then, since Y depends only on Zn, n Qi(x) = Pr {Zn+l=j Xn+l x Zn = i} Pr {Yn r I Zn = i} = A.i(x) (l-qi) Let D be the diagonal matrix of degree M whose i-th diagonal entry is qi; and let Q(x) = [Qij(x)], Q*(s) = L {Q(x)} = 00 f e5X dQ(x), Re {s} > 0. Then, we have 0 Q(x) Q*(s) = (I-D) A(x) = (I-D) A*(s) ^

-64 - Next define m-step probabilities Q ) 6i U(x) for m = 0 ij Pr {Zn+m=j Xn++...+Xn+m < x; Yn; r,..Ynm- 4 r Zn=i} for m > 1, where 5ij is the Dirac's delta function and U(x) is the Heaviside's unit step function. Clearly, for m = 1, Q 1)(x) = Qij(x); and for m > 2 we get m > 2 we get Q,(i)(X) = 'a M E f h=l(o x) r n+m Xn+2+ +Xn+m < x-y, y-dy < Xn+ < Y; Ynr, Yn+14r,..'' Yn+m-_lr, Zn+l=h I Zn=i} M = z / h=l (o,x) M x = J h=l Pr {Zn+=j Xn+2+* +Xn+m < x-y; Yn+14r, Yn+2+r,'..' pr ^ n+ 1h} d P r' — Yn+m_lr Zn+l=h} dy Pr {Zn+l=h, Xn+1 Yn4 rlZn=i} Qh-1) (x-y) dQih(y) In matrix Q (x) = notation using Laplace-Stieltjes transforms we have, with [QiJ)(x)], L {Q(m)(x)} = Q* ( s) L {Q(m-l)}(x) Q*(s) for m=o for m=l for m>2. Then, by a simple induction on m, we obtain L {Q(m)(x)} = Q*(s) m >O. Further, we let Q = Q(~) = Q*(O) = (I-D)A*(O) = (I-D)A. One expects that Q ((x) should go to zero uniformly for all ges t infns tha x as m goes to infinity. This is true and we have

-65 - (m) Theorem III.3.2. The m-step probabilities Qij (x) (i,j=l,2,...,M) converge to zero uniformly for all x > 0 as m approaches infinity. Proof. Since 0 < Qij(x) < QirQ)(0) = QM), by Weierstrass' (m) uniform convergence theorem, it is sufficient to show that Qi -> 0 as m - oc By our assumptions, at least one of the quantities qlq2,... qM ' say q, is positive. The states l,2,...,M1 form an ergodic set in {Z(t)}, and hence in {Zn}, (by Theorem 1.3.2.) Thus, lim Pr {Zn+m = N Z j} is positive. Therefore, m -> co nin n M lim Pr Yn+m=r I Z =ij = lim Z= Pr {Yn+m=r Zn+mj Pr {(n+m m _+> c nin n m _4 j1 {+r ZZn+m=j Zn=i} > lim Pr {Yn+m=r Zn+m=N} Pr Zn+m=N I Zn=i} q lim Pr Zn =N I Z i} > 0 m- M3>3 n+m n Consequently, the probability that Yn+h=r for at least one h > 0, given that Zn = i, is equal to one. Hence, the complementary event that Yn+h 4 r for all h > 0, given Zn = i, has probability zero, ieo, Pr {Yn+h r, h >0 Zn i}= 0 Now, (m) (m) 0 < Q ) Z Q )j = ZPr {Z =j Yn+r, h = 0l,...,m-l I Z=i} -Pr {n+hm n Pr {Yn+h~r, h = O,1,...,m-l Zn=i) Then, taking limits as m ->,

-66 - 0n< lim (n) _ 0 < Qm < Pr {Y 4 r, h = 0,1.. | Z = i} = 0 Hence, lim Q() = 0 for all i and j. Proof is complete. m -> Q1 = J We restate this theorem in matrix language as Theorem III.3.2'. The sequence of matrix valued functions (m) Q (x) converges to zero uniformly for all x > 0. Or, equivalently, the sequence {Q*(s) } converges to zero uniformly for all s with Re{s} > 0. We now return to the derivation of Bij(x) and Bij(x). From the proof of Theorem III.3.1 we already have 00 Bi (x) m= Pr {Zn+m=j; X+... +Xn+m < x; Yn+lr,.Yn+m-lJr im=l - - Yn+m=r I Zn = i,Y n = r} Then, B. (x) = Pr {Zm=j; X2+..+X x-y; Yl+r,...,Ylr I Zl=k} m=l k=1 o Pr {Ym=r Zmk} dy Pr {Z1=k, X1 < y Zo = i} co M x (m-l) = m k fl J q Qj (x-y) dAik(y) m=l k=1 o ik Or in matrix notation, using Laplace-Stieltjes transforms, m-l B*(s) Z A*() (s) (s D. m=l Lemma III.3.3. The matrix I-Q*(s) in non-singular, and the matrix power series =o0 Q*(s)m converges to (I-Q*(s))-1 for all s with Re{s} >. Proof. By Theorem III.3.2', Q*(s)m - 0 as m -4 o for all s with non-negative real parts. Then, the lemma follows from Theorem 1.2.5.

Using this lemma, we obtain as the Laplace-Stieltjes transform of the transition matrix B(x)

    B*(s) = A*(s)(I - Q*(s))^{-1} D.                                     (III.3.1)

We next derive B̂_ij(x). We have

    B̂_ij(x) = Pr{ξ_1 = j, Θ_1 ≤ x | ξ_0 = i}
            = Pr{Z_0 = j, Y_0 = r} + Σ_{m=1}^∞ Σ_{h=1}^M Pr{Z_m = j; X_1+...+X_m ≤ x; Y_0 ≠ r, Y_1 ≠ r, ..., Y_{m-1} ≠ r, Y_m = r, Z_0 = h}
            = Pr{Y_0 = r | Z_0 = j} Pr{Z_0 = j} + Σ_{m=1}^∞ Σ_{h=1}^M Pr{Y_m = r | Z_m = j}
              · Pr{Z_m = j; X_1+...+X_m ≤ x; Y_0 ≠ r, ..., Y_{m-1} ≠ r | Z_0 = h} Pr{Z_0 = h}
            = a_0(j) q_j + Σ_{m=1}^∞ Σ_{h=1}^M a_0(h) Q_hj^(m)(x) q_j.

In matrix notation, taking Laplace-Stieltjes transforms, we have

    B̂*(s) = E ⊗ (a_0 D + Σ_{m=1}^∞ a_0 Q*(s)^m D)

where "⊗" stands for the Kronecker product. Noting that Q*(s)^0 = I, we get B̂*(s) = E ⊗ ((Σ_{m=0}^∞ a_0 Q*(s)^m) D); and using Lemma III.3.3, we finally obtain

    B̂*(s) = E ⊗ (a_0 (I - Q*(s))^{-1} D).                               (III.3.2)

This completes the problem of identifying the process {ξ(t)}, since of the defining triplex (b_0, B̂(x), B(x)) we have derived B(x) and B̂(x), and the initial distribution vector b_0 was already given.
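Formulas (III.3.1) and (III.3.2) are direct to evaluate once A*(s) is known at a fixed s. The sketch below uses purely illustrative numbers for A*(s), the selection probabilities q_j, and a_0; it is not tied to any particular queueing system.

```python
import numpy as np

A_s = np.array([[0.5, 0.3],
                [0.2, 0.6]])           # hypothetical A*(s) at some fixed s
q   = np.array([0.4, 0.7])             # q_j = Pr{Y_n = r | Z_n = j}
a0  = np.array([0.5, 0.5])             # hypothetical initial vector

D   = np.diag(q)
Q_s = (np.eye(2) - D) @ A_s            # Q*(s) = (I - D) A*(s)
inv = np.linalg.inv(np.eye(2) - Q_s)   # (I - Q*(s))^{-1}, cf. Lemma III.3.3

B_s      = A_s @ inv @ D               # formula (III.3.1)
Bhat_row = a0 @ inv @ D                # each row of Bhat*(s), by (III.3.2)
print(B_s)
print(Bhat_row)
```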

-68 - 4. THE PROCESS {=} By the Corollary III.3.1.A the process {fn; n = 0,1,2,...} is a Markov chain. In this section we will classify its states and give some limiting results. First, consider the result of setting qj = 0 for some j Then, the j-th column of D will be completely zero. This would cause the j-th columns of B and B to be completely zero, (see the formulas III.3o1 and 111.3.2.) Hence, a state j for which q = 0 cannot be reached from any state of the process {n} We shall call all such states empty states, since {n} can never visit any such state. An empty state is transient in a trivial manner, namely, Pr {fn = = for all n > 1 if j is an empty state. Lemma III.4.1. A state j of the process { n} is transient if it is transient in {Z(t)}, or if q = 0. Proof. If the state j of {Z(t)} is transient, then that state is transient in {Zn} also. For the process { n} to visit a state i, first Zn must be i, and then Y should be r. As n approaches infinity Pr {Zn = i} approaches zero; hence by the preceding argument, Pr {fn=i} approaches zero also. Thus, any state j which is transient in {Z(t)} is transient in { }. On the other hand, if qj = 0, by the paragraph preceding this lemma, the state j is transient in a trivial manner. Actually, these are the only transient states in {n} Theorem III.4.2. State j of the process { n} is ergodic if it is ergodic in {Z(t)} and qj > 0. State j is transient if J

-69 - either it is transient in {z(t)} or q = 0. Furthermore, there exists only one ergodic class in { n} Proof. The second statement is the preceding lemma restated. Now, assume a state k can be reached from a state h in the process {Z(t)} (and therefore in {Zn} ) Then, there exists an N such that N) = Pr {ZN = k Z = h} > 0. Now,assume m = h. Then, there exists an n such that Zn = Cm = h. Then, there exists an i < N such that Pr {:m+=k I =mh} Pr Yr ZN=k Pr ZN=k Zo=h} = q AN) which is positive if qk is positive. Hence, a state k can be reached from a state h in the process {n } if qk > 0 and state k can be reached from h in {Zn}. Now consider the set of all states j such that j is ergodic in {Zn} and qj > 0. Then, all these states can be reached from one another. Hence they are all in the same class. These states must all be ergodic since the complementing set of states are all transient by Lemma III.4.1, and not all of the states of a finite Markov chain can be transient. Proof is complete. We can now give the rational behind an assumption made earlier. We have assumed that at least one of qlq2,... qM is positive. If these quantities were allowed to vanish altogether, then by the preceding theorem there would be only transient states in {?n}' i.e., { n} cannot be a Markov chain. In view of the preceding theorem, the limiting probabilities lim Pr {n =j} exist. Let b be the row vector whose j-th element n — oo is b(j) = lim Pr {% =j} ~ Similarly, let a be the row vector whose j-th entry is a(j) = lim Pr {Zn=j} n — > co

Theorem III.4.3. The limiting probabilities b(j), j = 1,2,...,M, exist and are independent of the initial distribution given by b_0. Furthermore, b = (1/aq) aD, where a is the limiting distribution vector for the chain {Z_n}.

Proof. Existence of the limiting distribution is guaranteed by the preceding theorem. Furthermore, since there is only one ergodic set in {ξ_n}, these limiting probabilities do not depend on the initial conditions. Hence, the unique eigenvector b corresponding to the simple eigenvalue one (bB = b) with bE = 1 is the vector of the steady state distribution. Now, consider the steady state distribution vector a of the process {Z_n}: aA = a, aE = 1. Note that

    a(I - Q) = a(I - (I-D)A) = a - a(A - DA) = a - aA + aDA = aDA.

Then,

    (aD)B = (aD)(A(I-Q)^{-1}D) = (aDA)(I-Q)^{-1}D = a(I-Q)(I-Q)^{-1}D = aD.

Hence, aD is an eigenvector of B corresponding to the simple eigenvalue one. Now, dividing each term of aD by the sum of the terms, aDE = aq, we obtain (1/aq) aD, which is a probability vector besides being an eigenvector for the eigenvalue one; (note that aq = Σ_{j=1}^M a(j) q_j ≥ Σ_{j=1}^{M_1} a(j) q_j > 0 since a(j) > 0 for j = 1,2,...,M_1, and at least one of q_1, q_2,...,q_{M_1} is positive.) From the uniqueness of b, then, b = (1/aq) aD. Proof is complete.

We restate this theorem below in matrix form by first letting A^∞ = lim_{n→∞} A^n and B^∞ = lim_{n→∞} B^n.
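Theorem III.4.3 can be illustrated numerically. In the sketch below the chain A and the selection probabilities q are hypothetical; note that aD is simply the entrywise product a(j) q_j.

```python
import numpy as np

A = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.4, 0.4, 0.2]])
q = np.array([0.2, 0.0, 0.5])          # q_j may vanish for some states

# Limiting vector a of {Z_n}.
w, v = np.linalg.eig(A.T)
a = np.real(v[:, np.argmin(np.abs(w - 1.0))])
a /= a.sum()

b = (a * q) / (a @ q)                  # b = (1/aq) aD
B = A @ np.linalg.inv(np.eye(3) - (np.eye(3) - np.diag(q)) @ A) @ np.diag(q)
print(b)
print(b @ B)                           # b is invariant under B, as the theorem asserts
```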

-71 - 00 00 1 00 Theorem 111.4.3'. B = lim Bn exists, and B = A D --— n — > 00 aq 5. THE CLASSIFICATION OF THE STATES OF { (t)} In this section we will give a complete classification for the states of the process {~(t)}. First, however, we need to examine the occupancy times of each state. The mean occupancy time of state i by the process {~(t)} was defined to be qi for the first step and qi for the succeeding steps, (cf. page 61.) The occupancy time of state i by {Z(t)} was. 'Now let p,, q be the column vectors whose i-th entries are i ' rli ', 'i respectively. Then, dA*(s) = dB*(s) * E and ds s=o ds s=o ds =o dAs) =0 dB* S=0 E ds S=0 Now, B*(s) = d (A*(s) (I - Q*(s))-OD) ~ds ~ ( d A*(s))(I - Q(s))- + A*(s)((- (s)))D dA(s) (IQ*(s))-D + A*(s)(I-Q*(s))-1 ds (I-Q*(s)) D ds ds by formula (1.2.2). Then, ds B. =o ) ds + A(I- dQ ) (I-Q)-1D. ds B*)ds S=O s=o s=O Notice that, dQ*(s) = _d (I-D) A*(s) = (I-D) dA(s) ds ds ds

-72 - so that, - ds B(s) E d *(s)A) Noting that, DE = DE+E-E = E-(I-D)E = E-(I-D)AE = E-QE = (I-Q)E, and putting this in the preceding equation, we obtain d rl - ds B(s) S=C E = (I+A(I-Q)-(I-D))(- - A*(s) S=O E = (I+A(I-Q)1 - A(I-Q)-1D) Ti = (I-B+A(I-Q)1 )k. (III.5.1) Given p, T can be computed from (III.5.1) uniquely as a function of p. It should perhaps be noted that the mean occupancy time of a state i may be positive even though the state i is an empty state. This is possible since ri is the mean time spent in state i given that the process is in state i already whether it is possible to be in state i at all or not. Next we derive the mean occupancy times in the first step. Since Bij(x), (ij = 1,2,...,M,) do not depend on i, Gl(x) = G2(x)... = G(x), so that? = = =; and i, G1(x) G2(x) GMM; and hence, d (a (I-Q*(s))-1D) T1 d- (a (I-Q*(s)) D) E (see formula III.3.2) s=o S=O "o dQ*c(s S= OE-1 = a (-Q*(s))- ( dQ*(s)) (l-Q*(s))) DE = a (Q)1I ds (I-ds

-73 - Now, noting that DE = (I-Q)E, and ds = (I-D d s) 1 i a~ (I-)- (I-D) - s E '91 0 ds __ l = a0 (I-Q)1 (I-D). (III.5.2) Given V, 1= = 2 = M can all be computed from this formula uniquely. Lemma III.5.1. The mean occupancy time of state i is finite for all i = 1,2,...,M in the process { (t)} Proof. Since {Z(t)} had its first M1 states ergodic, and the remaining M-M1 states are transient, Vi is finite (i = 1,2,...,M ) by Theorem 1.3.2. Then, clearly, 1 =... = =M as computed from (III.5.2) are finite. Also, i as obtained from (III.5.1) is finite since V is finite. Hence, the occupancy time of state i has finite expectation both in the first step and in the succeeding ones. Theorem III.5.2. A state j of the process {f(t)} is transient if either J is a transient state in {Z(t)} or q = 0. A state j of { (t)} is ergodic if it is ergodic in {Z(t)} and qj > 0. Furthermore, all ergodic states are in the same class. Proof. The first statement follows from Theorems I.3.2 and III.4.2. By Theorem 11.3.2, a state is ergodic in {f(t)} if it is ergodic in { } and ri, i are finite for all i in the same class with that state. Hence, the second statement follows from Theorems II.3.2, II.4.2, and Lemma III.5.1. The fact that there exists only one ergodic class follows from Theorem III.4.2 in view of Theorem II.3.2.

Before ending this section we should point out that in the processes {f(t)} and {(n} the empty states can be completely discarded. If in the matrices B*(s), B(s), and B all columns which are completely zero and all corresponding rows are thrown out, the remaining matrices will still represent the processes fully. This is clearly desirable whenever these processes are the inputs in other problems, 6, SPECIAL CASES AND APPLICATIONS In multi-server queueing systems, it may happen that certain servers specialize in the service of certain types of customers. This is advantageous if specialization enables the service times to be reduced appreciably. In such cases, there are separate waiting lines forming before different types of servers. The treatment in this chapter enables us to obtain the process of arrivals to each one of the queues. Once the arrival process for a queue is obtained, that queue can be studied in isolation from the rest of the queues. Semi-Markov processes arise naturally as the streams of balking customers from negative exponential servers. There, the states of the semi-Markov process reflect the number of people in the queues from which customers balked. If such a balking stream is the input to a system of queues in parallel, it may be desirable to assign a customer to- a certain queue on the basis of the number of people in the previous queues from which he balked, (or was not accepted.) This chapter, by letting the probability of assigning a customer to a certain

-75 - queue be dependent on the type of the customer, provides a valuable model for such a problem. In the case the number of states, M, is one, the process {Xn} becomes a renewal process; and the dependence of {Yn} on {Z becomes meaningless. Then, the results of this chapter coincides with the results of Chapter II for the case M=l and the decision process is an independent trials process. A special case of interest is where 0 < ql = q2=. = q < 1, i.e, Y does not depend on Zn Then, q = qE, D = qI; Q(s) = (I-D)A*(s) = (l-ql)A*(s) Hence,.letting l-ql = pi B*(s) = A*(s) (I-Q*(s))" D = A*(s) (I-p A*(s))- qlI = qlA*(s)(I-pA*(s) )-, which is the same as the formula obtained in Example 11.6.2. 7. CONCLUSION We have solved the problem of identifying and deriving some of the more important properties of a process which is constructed from a given semi-Markov process by sampling its points by a rule that depended on the nature of points. Both the present chapter and the preceding one treated the problem of decomposing a stream of arrivals into R streams, each one

-76 -of which may become the arrival process for some subsystem. So far, the rules of decomposition, as reflected in the process {Yn}, have been chosen to be independent of the states of the queues involved. In the next chapters, we will take up the case where the process {Yn} depends on the state of the queueing process.

PART II BALKING IN THE QUEUEING SYSTEM SM/M/1

CHAPTER IV QUEUEING SYSTEM SMM/M WITH BALKING 1. INTRODUCTION Consider a single server queue with negative exponentially distributed service times with mean 1/3. Let the capacity of the waiting room be N-1. Let the arrivals into the system form a semiMarkov process with M states, the first M1 of which are ergodic and form one class, and the remaining states, M1+l,...,M, are transient. Upon his arrival, a customer may or may not join the queue depending on the number of people already there and his own type. The object of this chapter is to analyze the queueing properties of such a system. We will examine the queue size process, waiting times, and the busy period, 2. DEFINITIONS AND NOTATION Let the arrival process {Z(t)} be a semi-Markov process with M states. Let the times {Z(t)} makes its transitions, (i.e., the instants of arrivals into the system,) be To,T1,T2,... where 0 = To < T1 < T2 <.... Let Xn = Tn - Tn-_ (n > 1,) and set Xo= 0. Let Zn = Z(Tn); Zn can be referred to as the type of the n-th arrival. We assume that the states 1,2,...,M1 form an ergodic class, and that -78 -

-79 - the remaining states, Ml+l,...,M, are transient. Let ao=[ao(l)...ao(M)] be the vector of initial probabilities, ao > 0, aoE = 1; and let A(x) be the matrix whose (i,j) entry is Aij(x) = Pr {Zn+l=j Xn+ < x Zn=i} n 0 0 As usual, we set A*(s) = e dA(x), and A = A(X) = A*(0). o Let the state of the system at time t be denoted by S(t); we write S(t) = j if the number of people in the system, (number waiting plus the one being served,) at time t is j, j = 0,1,2,...,N. We especially denote by Sn the state of the system just before the arrival of the n-th customer; i.e., Sn = S(Tn-0), n = 0,1,2,.... We take So(j) = Pr {So=j}, S(j) > 0, so(j) 1, as given. j=o An arriving customer is permitted to balk, i.e., upon his arrival a customer may or may not join the system depending on his wish and the number of people already there. Associated with the n-th customer we define a random variable Yn which takes on the values of one or zero according as the n-th customer did or did not join the queue. We assume that Y depends only on Z and S, i.e., n n n Pr {Yn=0 Zo...,Zn; So' '.Sn; Yo'',.Yn-l} = Pr {Yn=0 I Zn, Sn} We let b be the column vector of balking probabilities, so that, the jM+i element of b is b(i,j) where b(i,j) = Pr {Yn0 Zn=i, Sn=j}. In particular, b(i,N) = 1 for all i since everyone must balk when the system is full.

-80 - We let, P(i,j;h,k,x) = Pr {Zn=h, Sn=k, Xn < x I Znl=i, Sn-l=j} P(i,j; h,k) = Pr {Zn=h, Sn=k Z =i S j} Pn(iJ) = Pr {Zn=i, Sn=j} We define P(x) = [Pjk(x)] where the submatrix Pjk(x) has as its (i,h) entry P(i,j;h,k,x). We define P = [Pjk] similarly. We further let Pn be the row vector whose (jM+i) entry is pn(i'j); and let D be the diagonal matrix whose jM+it diagonal entry is b(i,j). 3. THE PROCESS {Zn,Sn} The major process of interest in this chapter is {ZnSn; n = 0,1,2,...}. First, however, we will look into another process, {Zn,Sn,Xn}. Theorem IV.3.1. The process {Zn,Sn,Xn} is equivalent to a semi-Markov process defined by (Po, P(x)). Proof. We need to show that Pr {Zn=h, Sn=k, Xn < x Zo' " Zn-; S,,Sn-_l; X1..,Xn-}= Pr {Zn=h, Sn=k, Xn < x Zn1 Sn1. Now, Pr {Zn=h, Sn=k, Xn < x Zo,...,Zn-1; So,...,Sn-1; X1,...,Xn-1} = = o Pr {Sn=k Zo,..,Zn; So0'.,Sn1l; X o,...,Xn_; Xn=y} dy Pr {Zn=h, Xn < y Z Zo,.,Zn-; So,...,Sn-1; Xo,..,Xnl}

-81 - Sn depends only on Xn, Snl and Yn-1 But Yn-l depends on Z - and S only. Hence, given Zn-al Sn-_l Xn all other information lose their predictive value as far as predicting Sn is concerned, i.e., Pr {Sn=k Z,..,Zn; So,..., Sn_; Xo,...,X = Pr {Sn=k I Zn-l Sn-)l' Xn On the other hand, since {Zn Xn} is equivalent to a semiMarkov process, and the arrival process is independent of the queueing process we get Pr {Zn=h, Xn < y I Z,...,Zn-1; So,..,Sn_-; X,...,Xn-l}= Pr Zn=h, Xn < Y Zn-} Putting these into the above we obtain, Pr Zn=h, Sn=k, Xn < x I Zo,..Zn-l; S,n...,Sn; X,..'.,Xn.i} = -= fxPr {Sn=k I Zn-, Sn1 Xn=y dy Pr {Zn=h Xn y n } 0Pr {Z-=h S=k, X y x | Z<_I Sn-Z = Pr {Zn=h, Sn=k, Xn _< x aZn_1 Snl}1 as was to be demonstrated. A very important corollary of this theorem Corollary IV.3.1.A. The process {Zn Sn; a Markov chain defined by (po, P). Proof. Pr {Zn=h, Sn=k Zm, Sm; m < n} = lim Pr {Zn=h, Sn=k, Xn x | Zm Sm; m < n} lim Pr {Zn=h S =k, X < x I Z Sn-l} x- L n n,- n n= Pr {Zn=h, Sn=k Zn-, Sn-l} = P (Zn-_l Sn_1; h, k) as needed. follows. n = 0,1,2,...} is

-82 - Corollary IV.3.1.B. The process {Sn} is a Markov chain if and only if M = 1, i.e., the arrival process is a recurrent process. Next we will give formulas for obtaining the transition matrix for the Markov chain {Z, S}. First let, 7k(i,j,x) = Pr {Sn+l=j I Y =k, Sn=i, Xn+l=X}, k 0,1. n Then, fi -j+k() 0 for 0 < j < i+k < N+k for j = 0, i < N otherwise, n = 0,1,2.... where e-x ( x)n in(x) = en' Now let rF(x) and Fl(x) be the square matrices of degree N+l whose (i,j) entries are y (i,j,x) and l1(i,j,x) respectively. Returning to the derivation of P(i,j;h,k), from the proof of Theorem IV.3.1, we have P(i,j;h,k,x) = Pr {Zn=h, Sn=k, Xn < x Zn-=i, Sn-1=j} = x = f Pr {Sn=k Znl=i Snl=j, Xn=Y} dy Pr {Zn, =i} 0 Now, Pr {Sn=k I Zn_l=i, Sn-l=j Xn=} = = j Pr {Sn=k, Yn-l=i Zl=i i Snl=j Xn = y} 5=0,1 = Pr {Yn-_1= Zn-l=i Sn-=j} Pr {Sn=k Yn-=j, Xn=Y} 5,1 = b(i,j)?o(jky) + (l-b(i,j)) yl(jk,y).

-83 - Hence we have, x P(i,j;h,k,x) = / {b(i,j) 7 (jk,y) + (l-b(i,j)) 7l(jk,y)} dAih(y). Using matrix notation we obtain X X P(x) = D( S FO(y) e dA(y)) + (I-D)( S rl(y) e dA(y)), o o (IV.3.1) where ~ stands for the Kronecker product. In many cases, it is easier to work with Laplace-Stieltjes transforms. Note that the elements of r (y) and rl(y) are in terms of cO(y), n = 0,1,2,... only. Thus, in giving the Laplace-Stieltjes transform of P(x) we first need x L { On,(y) dA(y)} = 0 00 f e- an(x) dA(x) 0 e en In xn dA(x) 0 )n dn = (-1)n- -Y - A* (As+) n'. ds +P Let D be the differential L { / an(y) dA(y)} = o d (-_P)n operator ds; and let Q- =. Then, al A*(s+). n Now, define (for k = O,1), i -j+k o O and let nk be the matrix whose (i,j) With this notation, from (IV.3.1), for O < j < i+k < N+k for j = O, i < N otherwise, entry is the operator 7k(ij).

    P*(s) = D (Γ̄_0 ⊗ A*(s+β)) + (I-D) (Γ̄_1 ⊗ A*(s+β)).                 (IV.3.2)

Then, the transition matrix can be obtained by P = P(∞) = P*(0). To check that P is a stochastic matrix, i.e., that PE = E, we do the following calculations. (We use the rule I.2.3 repeatedly; also remember that Σ_{n=0}^∞ ᾱ_n = e^{-βD̄}, where e^{-βD̄}[f(s)] = f(s-β) for any function f.)

    P*(s)E = P*(s) E_{M(N+1)} = P*(s)(E_{N+1} ⊗ E_M)
           = D (Γ̄_0 ⊗ A*(s+β))(E_{N+1} ⊗ E_M) + (I-D)(Γ̄_1 ⊗ A*(s+β))(E_{N+1} ⊗ E_M)
           = D ((Γ̄_0 E_{N+1}) ⊗ (A*(s+β) E_M)) + (I-D)((Γ̄_1 E_{N+1}) ⊗ (A*(s+β) E_M))
           = D ((e^{-βD̄} E_{N+1}) ⊗ (A*(s+β) E_M)) + (I-D)((e^{-βD̄} E_{N+1}) ⊗ (A*(s+β) E_M))
           = (D + I - D)(E_{N+1} ⊗ (A*(s) E_M)) = E_{N+1} ⊗ (A*(s) E_M).

Thus, PE = E_{N+1} ⊗ (A E_M) = E_{N+1} ⊗ E_M = E_{M(N+1)}, as was needed for P to be a stochastic matrix. This partially completes the problem of identifying the Markov chain {Z_n, S_n}. To completely define the process {Z_n, S_n} we next give the initial distribution p_0(i,j), i = 1,2,...,M, j = 0,1,2,...,N. Since the initial distributions of {Z_n} and {S_n} are assumed to be independent, we have

    p_0(i,j) = Pr{Z_0 = i} Pr{S_0 = j} = a_0(i) s_0(j).
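The operator formalism collapses to ordinary numbers in the simplest special case. The sketch below is a minimal illustration, assuming M = 1 (so the input is a recurrent, here Poisson(α), stream), exponential(β) services, and hypothetical balking probabilities b(j); for this choice of A the integrals ∫ α_n(y) dA(y) are closed-form, and the transition matrix P of {S_n} can be assembled directly in the spirit of (IV.3.1).

```python
import numpy as np

alpha, beta, N = 1.0, 1.5, 4
b = np.array([0.0, 0.1, 0.3, 0.6, 1.0])      # balking probs b(j); b(N) = 1 when full

# c_n = integral of e^{-beta y} (beta y)^n / n!  dA(y)  for A exponential(alpha)
c = np.array([alpha * beta**n / (alpha + beta)**(n + 1) for n in range(N + 2)])

def next_state_probs(m):
    """Queue size seen by the next arrival, given m customers are present
    just after the current arrival's decision."""
    probs = np.zeros(N + 1)
    for k in range(1, m + 1):
        probs[k] = c[m - k]                  # exactly m-k service completions
    probs[0] = 1.0 - probs[1:].sum()         # the server empties before the next arrival
    return probs

P = np.zeros((N + 1, N + 1))
for j in range(N + 1):
    stay = next_state_probs(j)               # the customer balks
    join = next_state_probs(min(j + 1, N))   # the customer joins (has weight 0 at j = N)
    P[j] = b[j] * stay + (1 - b[j]) * join

print(P.sum(axis=1))                         # each row sums to 1, i.e. PE = E
```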

[Figure 3. Form of Some of the Matrices: the operator matrices Γ̄_0 and Γ̄_1 built from the ᾱ_k, and the diagonal matrix D with entries b(0,1),...,b(N,M). The detailed layouts are not reproduced here.]

-86 - 4h DISTRIBUTION OF QUEUE SIZE Let sn(j) = Pr {Sn=j}. Then, s) n(i,j). Hence, the queue size distribution can be obtained easily once Pn(ij) are known. But the calculation of pn(ij), or the vector n n is easy. By the general theory of Markov chains, pn = pO P (n > 0) The behavior of p as n approaches infinity is of much interest, and we will give such results in a later section. In some cases, the correlation between Zn and Sn may be of interest. We are able to obtain any quantities of interest in measuring this correlation once pn are found. 5o CLASSIFICATION OF THE STATES OF {Zn, Sn} In the following sections we will say the state of {Zn, Sr3 is (i,j) if Zn = i and Sn = j Before classifying the states of {Zn, SrJ, a study of the transition matrix is useful. Pjk was defined to be the submatrix of P whose (i,h) entry is P(i,j;h,k). Now, partition the matrix D accordingly, so that the j-th diagonal submatrix of D is Dj whose i-th diagonal entry is b(i,j). Then, from the formulas of page 83, we can write Pjk D= j k y,,) dA(x) + (I-Dj) yl7(j,k,y) dA(x) (IV.5.1) jk J 0 0jo 0 From the formulas for yo(j,k,y) and 1 (j,k,y) we see that y (j,j+l,y) = 7o(j,j+2,y) =.. = yo(,N,y) = 0, as well as l (jj+2,y) = y (jj+3y) = y (jN,y) = 0. Hence, 1 1 1

P_{j,j+2} = P_{j,j+3} = ... = P_{j,N} = 0 for all j ≤ N-2. Furthermore, we see that none of the submatrices P_{jk} with j > k can be completely zero, no matter what the b(i,j) are. But if b(i,j) = 1 for all i for some fixed j, i.e., if D_j = I, then P_{j,j+1} = 0. There exists at least one j with D_j = I, namely j = N. Let v be the smallest such integer, i.e., let

    v = inf {j : b(i,j) = 1 for i = 1,2,...,M}.

[Figure 4. Form of the Matrix P: the blocks P_{jk} vanish for k > j+1, so P has non-zero blocks only on and below the first superdiagonal.]

Lemma IV.5.1. If v < N, then all states (i,j) with j > v are transient in the chain {Z_n, S_n}.

Proof. Let P_1 be the square submatrix [P_{jk}], j,k = 0,1,...,v, of P, and define P_2 and P_3 in the obvious appropriate manner so that

-88 - P p1 P =!2 0 P3 The zero matrix in the upper right hand corner of P is there since, by the discussion preceding this lemma, Pj,j+2 = Pjj+3 =. = PjN = 0 for all j K N-2 and Pvv+l = 0. Hence no state (i,j) with j > v can be reached from any of the states (h,k) with k < v. But, since P2 is not zero, it is possible to reach the set of states (h,k) with k < v from the states (i,j) with j > v. Hence, all states (i,j) with j > v are transient states in {Zn, Sn}. Proof is complete. Next, let Al(x) be the Nx M1 submatrix of A(x) corresponding to the ergodic class of states in the process {Z(t)}. Then, by a suitable selection of A2(x) and A3(x) we can write A(x) o A(x) = t(x) A3(x)w Putting this in the formula (IV.5.1), we obtain S yo (jk,y) dA (y) 0 L jk = Dj P7 -(j,k,y) dA2(y) -- Q + 0 f 7y(j,k,y) dAl(y) + (I-D., o J 7yl(j,k,y) dA(y) p(l) o jk p(2) p(3) jk jk 0 /0 7o(J,k,) dA3(y) 0 / 7yl(j,k,y) dA3(y) o -.. +

-89 - From this partitioning of Pjk (j,k = 0,1,2,...,N) the following lemma is obvious, Lemma IV.5.2. A state (i,j) of the process {Zn, Sn} is transient if i > M1 The partitioning of the paragraph preceding the Lemma IV.5.2 suggests the next lemma which has Lemma IV.5.1 as a corollary. Let N be the smallest integer such that b(i,N1) = 1 for i = 1,2,...,M1, ie., let N1 = inf {j: b(i,j) = 1, i = 1,2,...,M l. Clearly, such a number exists and N < v Lemma IV.5.3. If N1 < N, then all states (i,j) with j > N1 are transient in the chain {Zn, Sn}. Proof, Consider only P.. Clearly, P is zero whenever jk jk zero whenever P is zero, i.e., whenever k > j + 2. Furthermore, P(1)NI is jk N N1+1 zero since b(i,N1) = 1 for all i = 1,2,...,M1. Hence, P(h,k;i,j) = 0 whenever h < M1, k K N1 and j > N. Thus none of the states (i,j) with j > N1 can be reached from any of the states (h,k) with h < M1, k < N1. But, it is easy to see that it is possible to reach a state (h,k) with h < M1 and k < N1 from any of the states (i,j) with j > N1. Hence, all states (i,j) with j > N1 are transient in the chain {Zn, Sn} We put the Lemmas IV.5.1, IV.5.2, and IV.5.3 together in the form of a theorem. Theorem IV.5.4. A state (i,j) of the process {Zn, Sj is transient if either i > M1 or j > N1.

-90 - Next we will show that these are the only transient states by proving that all states (i,j) with i < M1 and j < N1 are ergodic. In the proof of the following theorem we will write (ij) - (h,k) if the state (h,k) can be reached from the state (i,j). If h is an ergodic state of {Zn}, i.e., if h < M1, then (i,j) -(h,j) for all i = 1,2,...,M and j = 0,1,...,N since state h can be reached from any state i in the process {Zn} On the other hand, if b(i,j) < 1, i.e., if balking is not certain given the state (i,j), then (i,j) -> (i,j+l) since P(i,j;i,j+l) > 0 in that case. With these in mind we will prove the following theorem, thus completing the classification of the states of the process {Zn, Sn} Theorem IV.3.5. All states (i,j) with i < M and j < N1 are ergodic states in the chain {Zn, Sn}; furthermore, they all belong to the same class. Proof. For j < N1, by the definition of N1, there exists at least one k (1 < k M1), say kj, such that b(kj, j) < 1 Now let (h,i) and (k,j) be any two states with h,k < M1 and i,j < N1 If i = j, then (h,i) - (k,j) and (k,j) - (h,i) clearly, since both h and k are ergodic states belonging to the same class in {Z } Assume next that i < j. Then, since Pji is not zero and h,k are ergodic in {Zn}, (k,j) - (h,i). On the other hand, by the comments of the paragraph preceding this theorem, we have (h i) - (kii) i (ki i+l) > (k^ i+l)... > (k,_l j-l) -- (kj_l, ) < (k,y ) Hence, (h,i) - (k,j) also. Thus, (h,i) - (k,j) and (k,j) - (h,i) for all (h,i) and (k,j) with i,j < N1, k,h < M. Therefore, all

-91 - such states belong to the same class and hence are all of the same type. Thus, all these states are ergodic, since otherwise there would be only transient states in view of Theorem IV.5.4. Proof is complete. Corollary IV.5.5A. The process {Zn, Sn} is an irreducible Markov chain if and only if M = M1 and N = N1 Proof, If M = M1 and N = N1, all states communicate with each other, (Theorem IV..5;) hence, {Zn, Sn} is an irreducible chain. On the other hand, if M1 < M, the state (Ml+l,j) is transient; or if N < N, the state (i,Nl+l) is transient (Theorem IV.5.4.) Then, it is impossible that {Zn, Sn} be irreducible when M1 < M or N1 < N. Thus, the condition of the corollary is necessary. In the special case where b(i,j) = 0 except for b(i,N) = 1, i.e., no balking is allowed unless the system is full, clearly, N1 = N. If further M = 1, then the chain {S} is an irreducible Markov chain by the preceding corollary, This is well known (cf. Takacs [31].) 6, STEADY STATE BEHAVIOR OF {Zn) SnJ From the theorems of the preceding section the following is obvious from the general theory of Markov chains. Theorem IV.61l. The limiting probabilities lim Pr{Zn=i,Sn=j}, n- o00 (i=1,2, o,M; j=0,,2,.,N, ) exist and are independent of the initial distribution Po(i,j) Let p(i,j) = lim Pn(i,j) = lim Pr{zn=i, Sn=j}. Clearly, n —>o n-> o0 p(i,j) = O whenever i > M1 or j > N1. Letting p to be the row vector whose (jM+i) entry is p(i,j) we have

    p = lim_{n→∞} p_n = lim_{n→∞} p_0 P^n = p_0 lim_{n→∞} P^n;    pP = p,  pE = 1.

Once P is known, p can be calculated as the unique solution of the system of linear equations pP = p, pE = 1. Then, P^∞ = lim P^n can be constructed easily as a square matrix of degree M(N+1) whose every row is p.

7. WAITING TIMES

If we suppose, in particular, that the customers are served in the order of their arrival, then the waiting time behavior can be immediately deduced from the queue size. Let W_n(x) be the distribution of the waiting time of the n-th customer, given that he has joined the system; and let W_n*(s) = ∫_0^∞ e^{-sx} dW_n(x). Then, W_n*(s) is β^j/(β+s)^j given that the n-th arrival found j customers in the system. Hence,

    W_n*(s) = Σ_{i=1}^M Σ_{j=0}^N (β/(β+s))^j p_n(i,j).

Let W(x) = lim_{n→∞} W_n(x), and let W*(s) = ∫_0^∞ e^{-sx} dW(x). This limiting distribution exists and is given below without proof.

Theorem IV.7.1. The waiting time of a customer in the steady state has the distribution W(x) whose Laplace-Stieltjes transform is

    W*(s) = Σ_{i=1}^M Σ_{j=0}^N p(i,j) (β/(β+s))^j.
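A short computational sketch of this section and of Theorem IV.7.1 follows; it assumes a transition matrix P of {Z_n, S_n} is already in hand (for instance the M = 1 matrix built in the sketch after formula (IV.3.2)), and it stores p in the (jM+i) order used in the text.

```python
import numpy as np

def steady_state(P):
    """Solve pP = p, pE = 1 for the limiting row vector p."""
    n = P.shape[0]
    lhs = np.vstack([P.T - np.eye(n), np.ones(n)])
    rhs = np.concatenate([np.zeros(n), [1.0]])
    return np.linalg.lstsq(lhs, rhs, rcond=None)[0]

def waiting_time_transform(s, p, beta, M=1):
    """W*(s) = sum_{i,j} p(i,j) (beta/(beta+s))^j, with entry index j*M + i."""
    j = np.arange(len(p)) // M           # queue-size index of each entry of p
    return np.sum(p * (beta / (beta + s)) ** j)
```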

-93 - 8$ BUSY PERIODS Let 0 denote the length of a busy period in the steady state, and define G(x) as its distribution function. Let Gn(x) be the joint probability that a busy period is n services long, and that these n service times, V1, V2,..., Vn, add to at most x units of time. Further, let Cm be the remaining busy period at time Tm, and introduce the distribution function F (i,j,x) = Pr {m = V1 + + Vj+n < x Z Z =, Sm = j} for n > 1, and for n = 0, F (iyjyx) = 0 for j 0, i = 1,2,...,M; Pr ( = V +..+V. < x Z i, Sm j} otherwise. m j -- m m In other words, F (ij,x) is the joint probability that the server will be idle for the first time after j+n services, and these services will total at most x units of time, given that the state of the {Zn) Sn} now is Zm = i and S = j. Further let H(i,x) be the distribution function of a busy period started by a customer of type i, and let Hn(i,x) be the joint probability that a busy period started by a customer of type i is n services long and these n services take at most x units of time. Reminding that a(i) = lim Pr{Zm=i}, we write the interrelationships of these functions below. Hn(i,x) = Fn(i,O,x) (IV.8.1) H(i,x) = n=1 H(i,x) = n=o Fn(i,0,x) (IV.8.2) Gn(x) = M a(i) Hn(ix) (IV.8.3) i=l G(x) = Z Gn(x) = 1 a(i) H(i,x) (IV.8.4) n=l i=l

We will first derive Fn(i,j,x); then the above relations can be used to calculate other functions of interest. Now, first let Rl(i,j;h,k,x) = Pr(Zm+l=h, Sm+l=k, X+ < x, Ym= I Zm=i, Smj} Ql(i,j;h,k,x)= Pr{Zm+l=h, Sm+l=k, Xm+1 x, Ym=l I Zm=i, S =j} Then, from the developments of page 82, we have Rl(i,j;h,k,x) = x b(i,j) o (jky) dAih(y) Ql(i,j;h,k,x) = /x (l-b(i,j)) yl(j,k,y) dAih(y) 0 O Further, let Rl(x) and Ql(x) be the square matrices of degree M(N+1) whose (jM+i, kM+h) entries are Rl(i,j;h,k,x) and Ql(i,j;h,k,x) respectively for all i,j,h,k meaningful except for k = 0 for which case we define the (jM+i,h) entries to be zero. Also, let R1(s) and Q1 be the Laplace-Stieltjes transforms of Rl(x) and Ql(x) respectively. These can be obtained readily through the developments of pages 82,83o Returning to the derivation of Fn(ijx) we have, for n=, j >0: F (i,j,x) = Pr {m = V1 +...+ V < x Zm = i, S = j} = x Pr{X+l>y I Z-=i} dyPr{Vl+...+Vj<y} Pr{Ym=0 I Zm i, Sm=j} M N h+ l xPrm+l V+...+V y I Zm+l h, Sm+ =k h=l k=l o M+l dy Pr{Zm+l=h, Sm+l0k, Xm+ y Ym=O I Zm=i St=J y M~+1jk Xm+K y

-95 - -X b(ij) (1 - AiL(y)) Y dy + o 0=1 j' M N x hE1 kZl / Fo(h,k,x-y) dRl(ij;h,k,y) h=l k=-L 0 and further for n > 1, M N x F (i,j,x) E Z f {Fn(hk,x-y) dRl(i,j;h,k,y) + Fn ~i~j~x) h=l k=l o Fn-(h,k,x-y) dQl(i,j;h,k,y)} Now let F (x) be the column vector whose (jM+i) entry is Fn(ij,x), and let K(x) be the column vector whose (jM+i) entry is M e (Py) x b(ij) (1 - A,(Y)) dy o ' Further, let F*(s) and K*(s) be the Laplace-Stieltjes transforms of Fn(x) and K(x) respectively. K*(s) can be computed by the help of the developments in page 83. Then the equations for Fn(i,j,x) can be put in matrix notation as x Fo(x) = f (dR1(y)) Fo(x-y) + K(x) o Fn(x) = x (dRl(y)) Fn(x-y) + /x (dQl(y)) Fn-l(x-y) n > 1 Taking Laplace-Stieltjes transforms everywhere, we obtain Fo(s) = R1(s) Fo(s) + K*(s) (iLv.8.5) F*(s) = R*(s) F*(s) + Q*(s) F* (s) n > 1 n = R1 n 1(s) n- 1 00 n Define the generating function L(s,z) = n=o z Fn(s), zl < 1; then, from (IV.8.5),

    L(s,z) = Σ_{n=0}^∞ z^n R_1*(s) F_n*(s) + Σ_{n=1}^∞ z^n Q_1*(s) F_{n-1}*(s) + K*(s)
           = R_1*(s) L(s,z) + z Q_1*(s) L(s,z) + K*(s),

so that

    (I - R_1*(s) - z Q_1*(s)) L(s,z) = K*(s).                            (IV.8.6)

Lemma IV.8.1. I - R_1*(s) - z Q_1*(s) is non-singular for |z| < 1 and Re{s} ≥ 0.

Proof. Let R_1 = R_1*(0) and Q_1 = Q_1*(0). Clearly, then, R_1*(s) is dominated by R_1 and Q_1*(s) is dominated by Q_1 (i.e., R_1*(s) << R_1, Q_1*(s) << Q_1). Then, since |z| < 1, z Q_1*(s) << |z| Q_1 << Q_1; and hence R_1*(s) + z Q_1*(s) << R_1 + Q_1 for all s with Re{s} ≥ 0 and |z| < 1. Notice that R_1 + Q_1 is the transition matrix P discussed earlier with the first M columns replaced by zeros. Since no row was completely zero in these first M columns, we have (R_1 + Q_1)E < PE = E. Hence all eigenvalues of R_1 + Q_1 are less than one in absolute value (Theorem I.2.1). Then, from Theorem I.2.2, since R_1*(s) + z Q_1*(s) << R_1 + Q_1, all eigenvalues of R_1*(s) + z Q_1*(s) are less than unity in absolute value. Therefore, I - (R_1*(s) + z Q_1*(s)) is non-singular for |z| < 1 and Re{s} ≥ 0.

Using this lemma we can write (IV.8.6) as

    L(s,z) = (I - R_1*(s) - z Q_1*(s))^{-1} K*(s).                       (IV.8.7)
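Formula (IV.8.7) reduces the busy-period analysis to one linear solve per (s, z) pair. A generic sketch, assuming the matrices R_1*(s), Q_1*(s) and the vector K*(s) have already been assembled for the fixed s of interest:

```python
import numpy as np

def busy_period_transform(R1_s, Q1_s, K_s, z):
    """Return L(s,z) = (I - R1*(s) - z Q1*(s))^{-1} K*(s) for a fixed s
    (Re s >= 0) and |z| < 1; Lemma IV.8.1 guarantees the solve succeeds."""
    n = R1_s.shape[0]
    return np.linalg.solve(np.eye(n) - R1_s - z * Q1_s, K_s)
```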

Once L(s,z) is obtained, all the other functions of interest can be computed as follows. Let L_0(s,z) be the M × 1 column vector obtained from L(s,z) by keeping only the first M entries. Then, clearly from the definitions,

    L_0(s,z) = Σ_{n=0}^∞ z^n ∫_0^∞ e^{-sx} dH_n(x)

where H_n(x) is the column vector whose i-th entry is H_n(i,x). Thus ∫_0^∞ e^{-sx} dH_n(x), and hence ∫_0^∞ e^{-sx} dH_n(i,x) as its i-th entry, can be obtained from a Taylor's series expansion of L_0(s,z) around a suitable point z within the unit circle. Calculation of H(i,x) is easier: ∫_0^∞ e^{-sx} dH(i,x) can be obtained as the i-th entry of the vector L_0(s,1). The distribution function G(x) of the busy period can be obtained from

    ∫_0^∞ e^{-sx} dG(x) = a L_0(s,1)

where a is the steady state distribution vector of {Z_n}.

9. CONCLUSION

We have investigated the queueing processes (queue size, waiting times, busy period) of a finite queue with a negative exponential server subject to a semi-Markov process input and a decision rule about joining the queue which depends on both the arrival process and the queue size. Some generalizations with respect to the number of servers and the service time distributions will be given in Chapter VI. Also in that chapter, some special cases, representing specializations with respect to the balking mechanism and the arrival process, will be discussed. In the next chapter we aim at an investigation of the properties of the stream of customers that decided to balk from the queue of this chapter.

CHAPTER V STREAM BALKING FROM THE QUEUEING SYSTEM SM/M/1 1, INTRODUCTION In the preceding chapter we have discussed the properties of a queueing system with a negative exponential server subject to an input which formed a semi-Markov process where customers are permitted to balko In this chapter we will investigate the stream formed by the customers who have chosen not to join the queue. We will show that this stream is equivalent to a semi-Markov process whose state space is the Cartesian product of the state spaces of the arrival process and the queue size process. Some generalizations with respect to the number of servers and the service times, and some special cases such as the overflow problem will be treated in the next chapter. 2. NOTATION AND DEFINITIONS We will use the notation and definitions already introduced in the preceding chapter. We assume the arrivals form a semi-Markov process identified by ao as the vector of initial distribution, and A(x) as the transition matrix. We assume that the arrival process {Zn, Xn} have M states, the first M1 of which are ergodic, and the remaining ones are transient. -98 -

-99 - The state of the system at time t, denoted by S(t), is the number of people in the queueing system at time t. It is assumed that the service times are negative exponentially distributed with mean 1/P. Customers are permitted to balk, not to join the queue. Let the instants of arrivals be T, T1, T2, T3,..., and let the instants of balkings be To0T17T2' - *. We take, for convenience, To = T = 0 Obviously, for every k there exists an n such that Tk = Tn. We let k = Zn and = S if Tk = T. Further, we let Xn = Tn-Tnl and Qn = Tn-1 (n > 1), and 0o = X = 0. We take all other terms introduced in Chapter IV as they are without re-introducing them here. Now let us center our attention on the underlying process {ZnSnXn; n = 0,1,2,...}. Assume that at time Tn there occurred a balking, i.e., Tn = Tm for some m; then R(i,j;h,k,x) = Pr {Zn+l=h, Sn+lk, Xn+l x I Zni, Sn=j, Yn=0}'~ is the joint probability that the arrival process will move from state i to h, and the queue size will change from j to k during the next interarrival time which is at most x long, (i,h = 1,2,..,M; j,k = 0,1,...,N; x > 0). On the other hand, the joint probability that no balking occurs at time Tn and the same kind of changes occur in the process {ZnSn Xn} is given by Q(i,j;h,k,x) = Pr {Zn+l=h, Sn+l=k, Xn+1 x Yn=l Zn=i, Sn =} (i,h=l,2,...,M; j,k=O,l,...,N; x > 0). These probabilities can be

-100 - calculated by the following equations (see page 82) 00 R(i,j;h,k,x) = / 70(j,k,y) d Aih(Y) Q(i,j;h,k,x) = (l-b(i,j)) /yl(j,k,y) d Aih(y) Let R(x) and Q(x) be the square matrices of degree M(N+l) whose (jM+i, kM+h) entries are R(i,j;h,k,x) and Q(i,j;h,k,x) respectively. Then x R(x) = f r (y) 6 dA(y), o o x Q(x) (I-D) ( r (y) 6 dA(y) o 1 Thus, the Laplace-Stieltjes transforms of R(x) and Q(x) can be obtained as F*( s) = rT A*(s+p) Q*(s):- (I-D) (rl A*(s+4)). (V.2.1) (V.2.2) The probabilities Q(i,j;h,k,x) defined were "one-step" (m) probabilities. Now let Q (i,j;h,k,x) be the m-step probabilities defined by i \ 5,h bjk, U(x) for m = 0 (m)(i,jh,k,x) = h jk (x) for m = PrZn+m=h, Sn=k, Xn+...+X < x, n n+m= n+l n+m X Yn=Yn+l.'.Yn+m-_1 1 Zn=i, Sn=j} for m > 1 (m) Q (ij;h,k,x) is the joint probability that the arrival process will move from state i to h, and the queue size will change from j to

-101 - k during the next m interarrival times whose total is at most x units, and none of the customers balk. Further, let Q(m)(x) be the (m) matrix whose (jM+i, kM+h) entry is Q (i,j;h,k,x). Our primary aim in this chapter is to show that the process { nn, Qn} is equivalent to a semi-Markov process. Accordingly we define B(i,j;h,k,x) = Pr{fn+l=h, tn+l=k, Gn+l< x I n=i n=j} and define B(x) as the matrix whose (jM+i, kM+h) entry is B(i,j;h,k,x), and further define B*(s) as the Laplace-Stieltjes transform of B(x). 3. DERIVATION AND THE LIIT OF Q ((x) We have already derived expressions for R*(s) and Q*(s), (i) and hinted at the derivation of Q (x). First, note in passing that R = R*(O) is a stochastic matrix: R*(s)EM(N+1) = (o A*(s+))(EN+1 Ei ) = ( EN+1) $ (A*(s+) EM) - EN+1 e (e D A*(s+P) EM) = E+ (A*(s)EM) thus, RE = EN+1 $ A*(O)EM = EN+l B EM = E But this does not hold for Q = Q*(O), In fact,

-102 - Q*(s)EM(+) ( I-D) ( A* ( s+))(E +1 EM) = (I-D)(('1 EN+1) e (A*(s+P)EM)) = (I-D)(EN+1 (A*(s)E)) so that QE = (I-D)(EN+1 $ A EM) = (I-D)E = E-b. We now show that the probabilities Q(m)(i,j;h,k,x) obey the Chapman-Kolmogorov equations. From the definition, Q( (i,j;h,k,x) = Q(i,j;h,k,x). Now assume m > 2; then, Q(m)(i,j;h,k,x)= PrZm=h, S =k, Xi+X2+...+X < x P2?{Zm k, X1+2+.e+m - x Yo=Y1.=Ym-1=l I Zo=i, So=j} = M x Pr{Zm_l=g, Sm-_l=, Xl+..+X < X-y, Xm7' m-1+g=l =0=o o0 Yo=Y1 ^=h. Ym_21 I Z=i, S ==} dy Pr{Zm=h, Sm=k, Xm< y, YZm_f IZm-K_=g, Sm-l=} M N x g=l ~=o o In matrix notation then, Q(i-i (i,j;g,,x-y) d Q(g,;h,k,y). for m= O for m = 1 (y) for m > 2. (I U(x) Q (x) = Q(X) fX Q(m-1)(x-y) dG 0 Now taking Laplace-Stieltjes transforms, L{Q W)(x)} = {L{Q(m'l)(x)} Q*(s)

-103 - so that by a simple induction on m we obtain L{Q( (x)} = Q*(s)m, m>. (V.3.1) The m-step probabilities Q(m)(i,j;h,k,x) were defined so that they are joint in the event that each one of the m customers joins the queue. Intuitively, it can be seen that this cannot go on forever, ioe., somebody would balk at some instant. We formalize this below Theorem V.3o1. Sequence of mass functions Q(m)(i,j;h,k,x) converge to zero uniformly for all x > 0 as m approaches infinity. Proof. By Weierstrass' uniform convergence theorem, it is sufficient to prove that Q(m)(i,j;h,k) converges to zero as m approaches infinity since 0 < Q(m)(i,j;h,k,x) < Q(m)(i,j;h,k,o) = Q(m)(ij;h,k). Again by the same theorem, since 0 < Q(m)(i,j;h,k) < Zk Q(m)(i,j;h,k), it is sufficient to show that Zh Z Q i(mij;hk) *- 0 as m -*. But now, h Zk Q(m)(i,j;h,k) = Zh k Pr{Zm=h; Sm=k; Yo=. Yml=l I Zo=i, S=j} =Pr {Yo=Y1=... =Ym-l1l Zo=i, S=j}. On the other hand let N1 and M1 be as defined in Chapter IV; then, from Theorem IV.5.5, probability of finding Zn = M1 and Sn = N1 for at least one n is equal to 1 no matter what Zo, So are. Since the probability of balking when Zn=M1 ' Sn = N1 is b(M1,N1) = 1, the probability that Yn=O for at least one n > 0 is 1 no matter what Zo, So are. Hence, the complementary event Y= = Y1 = 1 = 1 has probability zero, i.e.,

-104 -

Pr{Yn = 1, n ≥ 0 | Z0 = i, S0 = j} = 0 for all i,j.

Hence,

lim_{m→∞} Σh Σk Q^(m)(i,j;h,k) = lim_{m→∞} Pr{Y0 = ... = Ym−1 = 1 | Z0 = i, S0 = j} = 0,

and the proof is complete.

We now restate this theorem in matrix language below, without proof.

Theorem V.3.1'. The matrix sequence {Q^(m)(x)} converges to zero uniformly for all x ≥ 0 as m approaches infinity. Equivalently, the matrix sequence {Q*(s)^m} converges to zero uniformly for all s with Re{s} ≥ 0 as m approaches infinity.

This completes our examination of the sequence {Q^(m)(x)} and enables us to start our investigation of the main process of interest in this chapter.

4. BALKING STREAM

Assume that at time Tn a balking has occurred, i.e., Tn = T̄k for some k, and the arrival process and the queue size were in states Zn = ξk = i and Sn = ψk = j respectively. With this knowledge, one can predict the future of {Zn, Xn} and {Zn, Sn, Xn} at all future arrival epochs Tn+1, Tn+2, etc. In particular, one can compute the probabilities related to ξk+1, ψk+1, Θk+1 with only a knowledge of ξk, ψk. The subject of this section is to prove these assertions and to identify the balking process {ξk, ψk, Θk}.
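Before the formal proof, the claim can be illustrated empirically. The following is a minimal simulation sketch; the arrival kernel, the service rate β, the balking probabilities b(i,j) and the sizes M, N below are arbitrary illustrative assumptions, not the text's model. It simulates the SM/M/1 queue with balking, records the pair (ξk, ψk) at each balking instant, and tabulates the empirical one-step transition frequencies of that embedded sequence, an estimate of the matrix B = B*(0) identified below.

```python
import numpy as np

# Simulate an SM/M/1 queue with balking and record (arrival state, queue size)
# at balking instants; all parameters below are illustrative choices.
rng = np.random.default_rng(0)
M, N, beta = 2, 3, 1.2
P_arr = np.array([[0.6, 0.4], [0.3, 0.7]])        # embedded chain of the arrival process
sojourn_rate = np.array([1.0, 2.0])               # exponential sojourn rate in each arrival state
b = np.tile(np.arange(N + 1) / N, (M, 1))         # balking probability b(i, j) = j / N

z, s = 0, 0                                       # arrival state and queue size at an arrival
balk_states = []
for _ in range(200_000):
    if rng.random() < b[z, s]:                    # Y_n = 0: the customer balks
        balk_states.append((z, s))
    else:                                         # Y_n = 1: the customer joins
        s += 1
    x = rng.exponential(1.0 / sojourn_rate[z])    # interarrival time X_{n+1}
    z = rng.choice(M, p=P_arr[z])                 # next arrival state Z_{n+1}
    s = max(s - rng.poisson(beta * x), 0)         # services completed during the gap

# empirical one-step transition frequencies of the recorded pairs (xi_k, psi_k)
idx = [zz * (N + 1) + ss for zz, ss in balk_states]
counts = np.zeros((M * (N + 1), M * (N + 1)))
for a, c in zip(idx[:-1], idx[1:]):
    counts[a, c] += 1
row_sums = counts.sum(axis=1, keepdims=True)
print(np.round(counts / np.where(row_sums == 0, 1, row_sums), 3))
```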

-105 - Theorem V.4.1o The process {(k' tk' Gk; k = 0,1,..} is equivalent to a semi-Markov process defined by the vector of initial probabilities po and the transition matrix B(x). Proof. Since To=To=O, o =Zo and o=So. Thus Pr{fo=i, Vo=j} = PrZo=i, So=j} = o(ij) On the other hand, Pr{;k+l=i, *k+l=j' Gk+l < x I 0 ~o ';"k' o'' tk; G1' '. k} = 00 r = Pr {Z i n+m, Sn+l + o + Xn+m < x, Yn+l = m=l n+m Yn+m_1=l, Yn+m= I o'" ~~' k" l' 5k=Zn; o' ''*k-l' k=Sn; 1,...,Gk; Yn=O} 00 m= Pr {Znnmi S+m=j Xn+ +... + Xn+m < Yn+l = = Yn+m- 1, Yn+m-O I Zn= kn Sn=*k, Yn=0} = Pr{ k+l=i, k+l k+l < x I k' = B ((k' *k;i,j,x) as claimed. Proofs of the following corollaries follow immediately from the general theory of semi-Markov processes. Corollary V.4.1.A. The process {,k', k} is a Markov chain defined by (po B) Corollary V.4.1oB. The process { k' 4k' Tk} is a Markov process. Corollary V.4.1.C. The interbalking times 01,Q2... are mutually conditionally independent given o',1'. and *o,1l,...

-106 - During the proof of the above theorem we already have derived a relation that can be used in computing the transition probabilities. We have B(i,j;h,k,x) = Pr {n+l=h, n+l =k, n+ < x =i, =j} 00k; = Z Pr {Z =h; Sm=k; Xi+.. m=l m m ' 0o M N = E z z m=l g=l Q=o o M N = z z z m=l g=l Q=o ooM N m=l g=l ~=o x Pr o dyPr /x Pr o r dyPr {Zl=g, { Zm=h, {Zl=gh Zl=g,.+xm < Y =O m Sm=k S1=m, Sm=k, Si =i x; Y1=Y2=...-Ym_ 1 I Zo=i, So=j, Y=O } x1 < x-y I Zo=i, So=j, Yo=O} X2+...+X < Y Y 1=..-.=Y; Ym=O I Zl=g, S1=r} X1 < x-y I Zo=i, So=j, Yo=O} X2+..-+X < Y; Yl=-'=Ym-l= I Pr{Ym=O I Zm=h, Sm-k} Jx R(i,j; g,g,x-y) dQ(m-'l)(g ~;hky) b(h,k). 0 1 ~~~yQ 9.lhky bhk. In matrix notation this can be written concisely as 00 x Xo (m-l) B(x) = z f R(x-y) dQ (y)D m=l o Now taking Laplace-Stieltjes transforms, remembering that L{Q(m)(y)} = Q*(s)m by (V.3.1), we obtain B*(s)= z R*(s) Q*(s) D. m=l (v.4.1) 00 Of course we need to show that Z=o Q*(s)m converges. This we do next

-107 -

Lemma V.4.2. I − Q*(s) is non-singular, and the power series Σ_{m=0}^∞ Q*(s)^m converges to (I − Q*(s))^{-1} for all s with Re{s} ≥ 0.

Proof. From Theorem V.3.1', Q*(s)^m → 0 as m → ∞ for all s with Re{s} ≥ 0. Then, the lemma follows from Theorem I.2.5.

Using this lemma in (V.4.1) we obtain

B*(s) = R*(s)(I − Q*(s))^{-1} D.    (V.4.2)

Clearly, B = B*(0) should be a stochastic matrix. To check: B = R(I − Q)^{-1} D; hence BE = R(I − Q)^{-1} DE. Now, from page 102, QE = E − b = E − DE; so that DE = E − QE = (I − Q)E. Hence,

BE = R(I − Q)^{-1}(I − Q)E = RE = E

as needed. Our identification of the process {ξk, ψk, Θk} is now complete, since the initial distribution vector po was already computed in Chapter IV.

5. THE PROCESS {ξn, ψn}

The Markov chain {ξn, ψn} is well defined by po and B (see Corollary V.4.1.A). In this section we will classify its states and give some steady state results.

For every n ≥ 0, there exists a k such that ξn = Zk and ψn = Sk. Hence the state space of {ξn, ψn} is a subset of the state space of the process {Zn, Sn}. Note that, for any n, ξn = i and ψn = j only if there exists a k such that T̄n = Tk, Zk = i, Sk = j, and Yk = 0. Therefore, if a state (i,j) is transient in the process {Zn, Sn}, then that state is transient in {ξn, ψn} also. Thus, the lemma below follows from Theorem IV.5.4.

Lemma V.5.1. A state (i,j) of the process {ξn, ψn} is transient if either i > M1 or j > N1.

However, these are not the only transient states. Assume that for some i and j, b(i,j) = 0. Then, no customer can balk when Zn = i and Sn = j. Hence, it is impossible that such a state ever be visited by the process {ξn, ψn}. This can be seen easily from an examination of the transition matrix B = R(I − Q)^{-1} D. Note that if b(i,j) = 0, then the (jM+i)-th diagonal entry of D is zero, and hence the (jM+i)-th column of B is completely filled with zeros, i.e., such a state can never be reached from any state. In Chapter III we called such a state an "empty state", a state which the process never visits. Clearly, an empty state is a trivial transient state. In fact, such states can safely be discarded from the state space without any loss. Adding the arguments of this paragraph to Lemma V.5.1 we obtain:

Theorem V.5.2. A state (h,k) of the process {ξn, ψn} is transient if (i) h > M1, or (ii) k > N1, or (iii) b(h,k) = 0.

Actually these are the only transient states. We will show this by proving that all other states are ergodic.

Theorem V.5.3. A state (h,k) of the process {ξn, ψn} is ergodic if and only if

-109 - (i) h < M and (ii) k < N1 and (iii) b(h,k) > 0 O Furthermore, all the ergodic states are in the same class. Proof. If anyone of the three conditions is not fulfilled, then by Theorem V.5.2, (h,k) is a transient state. Hence the conditions are necessary, Proof of sufficiency: Consider the set C of all states (h,k) with h < M, k < N1, and b(h,k) > 0 o Let (i,j) and (h,k) belong to C o Then, by Theorem IV.5-5, (h,k) can be reached from (ij) and vice versa in the process {Zn,Sn}; i.e., n=1 Pr {Zn=h, Sn=k Zo=i, S=j} = But now, assume ~o = i, o = j; then, Pr {(m=h: tmk I o=i, o=j} = n=l Pr {Tm=Tn I Zn=h Sn=k} Pr{Zn=h, Sn=k Zo=i So=j} Hence, 00 1m=1 Pr {mh, 4m-k I ~o=i, o =j} Z E b(hk) PrZn=h, Snk I Zo=i Sj} = n=l since b(h,k) > 0 o Thus, (h,k) can be reached from (i) o We have shown that any two states of C can be reached from each other. Hence, C has only one type of states, either all ergodic or all transient. But they cannot be only transient since then, in view of

-110 - Theorem V.5o2, all the states of {~nn} would be transient: an impossibility since in a finite Markov chain not all states are transiento Therefore, all states belonging to C are ergodic. Proof is complete, In view of this last theorem, limiting probabilities exist and are independent of the initial distribution as given by po 0 Now let r(ij) = lim Pr {nn=i, n=j}, and let II be the row vector n —> 0 whose (jM+i) entry is j(i,j) Clearly I is the unique solution of the system of linear equations TB = TI ITE = 1. There exists a very close relationship between In and p where p was defined as the limiting distribution vector for the process {Zn Sn}, i.e., (jM+i) entry of p was p(i,j) = nlio Pr {Zn=i, Sn=j}7 Theorem V.5.4. Limiting probabilities t(ij), (i=l,2,...,M; j=0,l,. o N,) exist and are independent of the initial distribution Po(ij). Furthermore,.T(ij) = p(ij) b(ij) M N z z p(i,j) b(i,j) i=l j=O Proof. Existence and the independence from the initial distribution are obvious. Now, by the definition of p, pP=p Note that P = DR+Q, so that pP = pDR + pQ = p, therefore pDR = p - pQ = p(I-Q). Then, (pD)B = pD(R(I-Q)- D) = (pDR(I-)(I —)-D = pD.

-111 - Hence, pD is an eigenvector of B corresponding to the eigenvalue 1. Then, dividing each element of pD by the sum of its elements, pDE = pb, we obtain p pD; (notice that pb > p(M1,Nl) b(M1,Nl) = p(M1,N1) > 0 since the state (M1,N1) is an ergodic state in {ZnSn} ) Then, clearly, 1 1 (p pD)B pD and, (p pD)E = 1. From the uniqueness of H, then, I = 1 pD. Proof is complete. pb Next, let B = lim B Then, B has all its rows n->o~ identically equal to T. From the last theorem, we also obtain: C* 1 1 ~C B =E E e = pb pD p- PD 6. THE PROCESS {1n ~/n Gn Theorem V.4ol show that {fnnn Gn} is equivalent to a semi-Markov process and is defined by the vector of initial probabilities Po and the transition matrix B(x) whose Laplace-Stieltjes -1 transform is given by B*(s) = R*(s) (I-Q*(s)) D. In this section we will classify the states of this process and give some steady state results, First let, M M N Hi(x) = j1 Aij(x) = hl kO B(i,j;h,k,x) x dH(x) 00 4i = / x dHi(x), n(ij) = x dG(i,j,x) o o

-112 - Then, Hi(x) is the distribution of the occupancy time of state i in the process {Z(t)}, and li is the mean of that occupancy time; G(i,j,x) is the distribution of the occupancy time of the state (i,j) by the process {(n,4n',n}; and l(i,j) is the expected value of that occupancy time. Further, let 4 be the M by 1 column vector of i's, and let I be the column vector whose jM+i entry is r](i,j). Clearly, then, = dA*(s) E, and = dB*(s) E ds s=o ds s=o Now, ds ) ds D = (R*(s)(I-Q*(s)) D) ds dR*(s) (I-Q*(s))-iD + R*(s)(I-Q*(s))-1 dQ*(s) (I-Q*(s))-1 ds ds dR() + R*(s)(I-Q*(s))-1 dQ*(s) (I-Q*(s))-, ds +s )I(sds so that, d B*(s) ds s=c Rememberir d B*(s) I ds s=o Now, from R*(s)E Q*(s)E hence, - dR*(s + R( D ds |S=O ag that (I-Q)E = DE > (dR*(s) + R( ds s=o page 101, En+1 i A*(s)EM (I-D) R*(s)E; I -1Q) dQ*(s ):I-Q,) ds S=O -1 IIs> D, we obtain I-Q)1 dQ*(s) 1 E ds s=o

-113 - dR*(s) E E a -dA*(s) EM = EN+1$I, ds s=o n+ ds I=o and dQ(s)s E (I-D) (EN+1 ) ds N+l S=O Putting these in the above, we have -dB* s) E = (EN+1 ) + R(I-Q)-(I-D)(E +L ) =-dsN S=O n = (I-B+R(I-Q)-)(En+l. (V.6.1) Next is a theorem on the classification of the states of the process {n',n', n} Theorem V.6.1. A state (i,j) of the process {fn'%n'Gn} is ergodic if and only if i < M1 and j < N1 and b(i,j) > 0. Furthermore, all ergodic states are in the same class. A state is transient if it is not ergodic. Proof. By Theorem 1.3.2, a state of a semi-Markov process is transient if it is transient in the corresponding Markov chain. Then, whenever i > M1, or j > N1, or b(i,j) = 0, the state (i,j) is transient since it is transient in {nrJn by Theorem V.5.2. Hence the conditions of the theorem are necessary. To prove the sufficiency, consider the state (ij) with i < M1, j < N1, and b(i,j) > 0. Then, from Theorem V.5.3, (ij) is ergodic in {ntn} * Further, the solution of T given by (V.6.1) shows that 1 is finite since p is finite, i.e., r(h,k) is finite for all h and k. Thus, by Theorem 1.3.2, the state (i,j) is ergodic.

The fact that all ergodic states are in the same class follows from Theorem V.5.3.

7. LENGTHS OF INTERBALKING TIMES

In this section we will put together all the results relating to {Θn}, the sequence of interbalking times. We have already shown that the points of balking are regeneration points for the balking process. The times between balkings are dependent on the states of the arrival process and the queue size. Although balking intervals are dependent on each other, this dependence is through the states of the other processes {ξn} and {ψn}, so that, given (ξn, ψn) and (ξn+m, ψn+m), the intervals Θn+1 and Θn+m+1 are stochastically independent.

We have derived the expected value of Θn+1 conditional on {ξn, ψn} in the preceding section (formula V.6.1). To find the expected value of Θn+1 with no conditions we need to find Πn η, where Πn is a probability vector whose (jM+i)-th element is Pr{ξn = i, ψn = j}. Thus, in general, the Θn (n = 1,2,...) have different means. But, in the steady state, these means become equal to Πη. In fact, the marginal distributions of {Θn} become identical in the steady state: from

Pr{Θn ≤ x} = Σi Σj G(i,j,x) Πn−1(i,j)

we get

lim_{n→∞} Pr{Θn ≤ x} = Σi Σj G(i,j,x) π(i,j).    (V.7.1)

Let η̄ and μ̄ denote the expected values of Θn and Xn respectively in the steady state. Then, clearly,

-115 -

η̄ = Πη and μ̄ = pμ. But now, from (V.6.1),

η̄ = Πη = Π(I − B + R(I − Q)^{-1})(EN+1 ⊗ μ)
       = (Π − ΠB + (1/pb) pDR(I − Q)^{-1})(EN+1 ⊗ μ)
       = (Π − Π + (1/pb) p)(EN+1 ⊗ μ)
       = (1/pb) p(EN+1 ⊗ μ).

Now,

p(EN+1 ⊗ μ) = Σ_{i=1}^M Σ_{j=0}^N p(i,j) μi = Σ_{i=1}^M a(i) μi = μ̄.

Thus, we obtain η̄ = μ̄/pb. Noticing that pb is the probability that a customer balks in the steady state, we can state this result in words as: "in the steady state, the expected value of a balking interval times the ratio of balking customers is equal to the mean interarrival time."

Next we give a characterization of balking processes with independent interbalking times in the following three theorems.

Theorem V.7.1. The interbalking times Θ1, Θ2, Θ3, ... are independent and identically distributed if and only if

(i) M1 = 1,
(ii) b(1,j) = 0 for j < N1, and
(iii) Z0 = 1, S0 = N1.

-116 - Theorem V.7.2. The interbalking times G1,92,G3,... form a modified renewal process, (i.e., 01,G2,.. are independent and Q2,03...n are indentically distributed,) if and only if (i) M1 = 1 (ii) b(l,j) = 0 for j < N1, and either (iii) Z = 1, S < N1 or (iii') Z = 1, So > N1, and b(l,So) = 1, b(l,j) = for j = N1+l, N1+2,..., So-l. Theorem V.7.3. Assume the process has started at time - * Then, the interbalking times 01G,2,... are independent and identically distributed if and only if (i) M = 1 (ii) b(l,j) = 0 for j < N1 Proof of Theorem V.7o1. Proof of sufficiency. Assume M1 = 1, b(l,j) = 0 for j < N1, and Zo=l, So=N1. Then, from Theorem V.6.1, the only ergodic state in the process ({n, n, 'n} is (1, N1). Since the process is initially in that state, it can never get to any other state; hence ~n=l and tn=N1 for all n > 0. There is only one state of the process {n,tn,rn}, and hence the balking process {fn',n',n} is equivalent to a simple renewal process; i.e., 91,G2,G3,... are independent and identically distributed as G(1,N,x) o Proof of sufficiency. Assume {nn' n'On} is a simple renwal process. Then, there exists only one state. By Theorem V.6.1, the state (M1,N1) is an ergodic state; hence this must be the only

-117 - state. Then by Theorem V.6.1 again, we must have b(i,j) = 0 for i < M1 and j < N1 in order for (M1,N1) to be the only state. But for M1 > 1, by the definition of N1, b(M1-l,N1) = 1. Thus, it is necessary that Mj=l, and, we only need b(l,j) = 0 for j < N1. Furthermore, since (1,N ) is the only state permissible, it is necessary that |0 = So = No and ~o = Zo = M1 Proof of Theorem V.7.1 is complete. Proof of Theorem V.7.2. Proof of sufficiency. Assume (i), (ii), (iii) or (iii'). Then, 1 for h = M1 = 1, k = N1 B(i,j;h,k) = { 0 otherwise Thus (~n'Vn) = (1,N1) = (1,N1) for all n > 1; and hence { *nn, n; n > 1} is equivalent to a renewal process, i.e., @2,G3094,... are independent and indentically distributed as G(l,Ni,x). Since (n, nn) is known for all n > 1, Q1 is independent of all the others. Proof of necessity. Assume G2,Q3,... are independent and identically distributed. Then, by Theorem V.7.1, it is necessary that M1 = 1, b(l,j) = 0 for j < N1, and 1 = 1, 1 = N1 Now then, we need to have 1 if h=l, k=N1 B(i,j;h,k) = { 0 otherwise. This is possible only if i=l and j < N1, or, j could be greater than N1 but then we need condition (iii'). Proof is complete.

-118 - Proof of Theorem V.7.3. Sufficiency. Assume (i) and (ii). Then the only ergodic state is (1,N1), hence if the process has started at -a, then at time zero it is already absorbed into state (1,N1), i.e., O = 1, *o = N1. Then the sufficiency follows from Theorem V.7.1. Necessity. There needs to be only one ergodic state. Since (M1,N1) is one, it must be the only one. This is possible only if M1 = 1 and b(l,j) = 0 for j < N1 by Theorem V.6.1. Proof is complete. 8. NUMBER OF CUSTOMERS JOINING THE QUEUE DURING AN INTERBALKING TIME Let.ij be the number of customers that join the queue during an interbalking time starting with =n=i, n =j; and let 5 be a column vector of random variables whose jM+ith element is ij. Then, Pr {Sij=m} = Z Pr Zm+l=h, Sm+ =k; Y1 = 1, Ym+ I= h k Zo=i, So=j, Y=O } (m) = E Z R(i,j;g,~) Q (g,A;h,k) b(h,k) hkog i or in matrix notation, Pr =mE = RQm Pr {j=mE} = RQ b. (v.8.1) Now let ij (z) be the generating function of,ij, i.e., let ij(z) = Zm=o zm Pr {..=m}; and further let 0(z) be the column vector of these generating functions. Then,

-119 - 00 0(z) = E zm Pr {T=mE} m=o = zm R Qm b m= o From Theorem V.3.1, Qm - 0 as m - m. Then, since zQ << Iz Q << Q, (zQ)m -O as m - m for zl < 1. Thus, by Theorem 1.2.5, the 00 matrix power series ZE=o (zQ)m converges to (I-zQ)1. Putting this above, we obtain 0(z) = R(I-zQ)-1 b, Kzl 1. (V.8.2) Note that 0(l) = R(I-Q) b = RE = E as needed. Mean number of customers joining the queue during a balking interval starting in state (i,j) can now be computed as vij= d.ij(Z) |. Let v be the column vector with these J dz ^ z=l means as entries in the usual arrangement. Then, d 2 v = - 0(z) = RQ (I-zQ) b dz z=l z=l = RQ (I-Q)-2 b = RQ (I-Q)-Q E Since Q and (I-Q)-1 are commutative, v R(I-Q)1QE = R(I-Q) (I-(I-Q))E = R(I-Q)-1E - E The unconditional distribution of the number of customers joining the queue can also be obtained easily, especially in the steady state. Now let ~ be the number of customers joining the queue during an interbalking time. Then,

-120 -

Pr{ζ = m} = Σi Σj π(i,j) Pr{ζij = m} = Π R Q^m b.

The generating function for ζ can be calculated easily as

Φ̄(z) = Σ_{m=0}^∞ z^m Pr{ζ = m} = Π R(I − zQ)^{-1} b.

Then the expected value ν̄ of ζ is

ν̄ = Π R(I − Q)^{-1} E − ΠE = (1/pb) pDR(I − Q)^{-1} E − 1 = 1/pb − 1.

Noting that 100·pb is the percent of customers balking, 1/pb is the expected number of customers arriving per balking period. Then, subtracting one for the one who balks, we have the expected number of customers that do join the queue in a balking period.

9. CONCLUSION

In this chapter we have discussed the properties of the balking stream. We have shown that the balking stream forms a semi-Markov process. We have identified the process, given a classification of its states, and derived some steady state results. Further, we have provided a characterization of all balking streams with independent and identically distributed interbalking times. All this was done under the most general balking rules. Some more special balking processes will be investigated in the next chapter.

-121 -

On the other hand, we have assumed a single server queue with negative exponential service times. We will give some generalizations with respect to the number of servers and the service times in the next chapter.
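The steady-state formulas of this chapter lend themselves to direct numerical evaluation. Below is a minimal sketch under stated assumptions: a hypothetical instance with a renewal arrival stream (M = 1), room for N = 1 customer, balking probability c when the server is idle and 1 when it is busy, and v standing for the Laplace-Stieltjes transform of the interarrival distribution evaluated at the service rate. The matrices R, Q, D below are the R*(0), Q*(0) and balking matrix of that instance; the parameter values are arbitrary.

```python
import numpy as np

# Numerical check of (V.4.2) and Theorem V.5.4 on a tiny hypothetical instance.
c, v = 0.4, 0.35
cp = 1.0 - c
R = np.array([[1.0, 0.0], [1.0 - v, v]])              # R = R*(0), a stochastic matrix
Q = np.array([[cp * (1.0 - v), cp * v], [0.0, 0.0]])  # Q = Q*(0)
b = np.array([c, 1.0])                                # balking probabilities b(i, j)
D = np.diag(b)

B = R @ np.linalg.inv(np.eye(2) - Q) @ D              # B = R (I - Q)^{-1} D,  (V.4.2)
print("row sums of B:", B.sum(axis=1))                # -> [1. 1.]

P = D @ R + Q                                         # embedded chain of {Z_n, S_n}
w, V = np.linalg.eig(P.T)
p = np.real(V[:, np.argmin(np.abs(w - 1))])
p /= p.sum()                                          # stationary vector of P

Pi = (p @ D) / (p @ b)                                # Theorem V.5.4: Pi = pD / (pb)
print("Pi:", Pi, "  Pi B - Pi:", Pi @ B - Pi)         # the second vector is ~ 0
```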

CHAPTER VI

GENERALIZATIONS AND SPECIALIZATIONS

1. INTRODUCTION

In the preceding chapters the properties of a single server queue and of the stream of customers balking from it were considered. The arrival process was taken to be a semi-Markov process and the balking rule was of the most general type: it depended not only on the queue size but also on the type of customer. In this chapter we will remark on some generalizations with respect to the number of servers and the service times, and on some special cases with respect to the arrival process, the queue size allowed, and the balking rules.

2. MANY SERVER QUEUEING PROCESSES

The restriction of the number of servers to one in Chapters IV and V was not necessary at all, but it made the presentation of the problem easier because of its simplicity, as well as helping the intuition. The main theorems of Chapters IV and V, Theorems IV.3.1 and V.4.1, remain unchanged under the assumption of many servers. Hence the results of both chapters can be easily generalized to many server queues.

Let the number of servers in the system be r and let the waiting room be of size N − r ≥ 0, so that the maximum number of customers allowed into the system is N, N ≥ 1. Assume all the servers

-122 -

-123 - are identical, they all have negative exponential holding times with the same mean 1/3 o a Generalizations of the processes of Chapter IV Then, in Section IV.35 the formulas given for y7(j, k x) = Pr {S nl-k i Sn-sj; Y n -, Xn=r} do not hold. Instead we have 'Yokx) - 1(j-lkx) for j=O otherwise; (j+l)e-kx (1-e-Px).k for j, r e -e e-3x)r-k rddy} for j > r, and k < r for j > r, and k > r o e-rx (rpx) J+ e (j+l-k) See Takacs [31] for these formulas.) The matrices ro(x) and rFix) now have these above defined quantities as elements. All the other formulas of Section IV,3 hold with no change excepting the ones for To and fl which need to be changed in accordance with the definitions hereo Sections IV4, IV, IV o6 remain unchanged, all the theorems hold without changes. Section IVo7 on waiting times needs to be revisedo With r servers we see that a customer's wait is zero if there are less than r customers in the system upon his arrival. Assuming that customers are served on a first come, first served basis, the waiting time of the n-th customer is zero if S < r, and is equal to the service time of Sn-r+l customers if Sn > r. Hence, the distribution of the waiting time of the n-th customer is:

r-l N-1 Wn(x). Z Pr {Sn - j} + Z Pr {sn=j} f e-rY (ry)J rJdy. j=o j=r or A limiting distribution exists: as n - o, Wn(x) -> W(x) and is given by M r-l N-l x rpe-rpy (rpy)j-r W(x) { z p(i,j) + z p(i,j)l ro dy}. i=l j=o j=r o (-r0) Section IV08 on the length of a busy period no longer holds. In fact, a new meaning for busy period is necessary. If we define the busy period as the period where at least one server is busy at any time, then we are able to modify the section for the general case of r servers. We let all the functions introduced in IV08 remain as they are definedo We only need to supply new formulas for the functions Fn(i,j,x) o We have now, = 0 for j=0 F (i,j,x) M o b(i,j) yo(j,O,y)(l-EAi(y)) dy + Ax Fo(h,k; x-y) dR_(i,j; h,k,y) for j > 0; and the formula for n > 1 is the same as in page 95 We define R1(x) and Q1(x) in the same way as in Chapter IV, (note that yo(j,k,y) and yl(j,k,y) are different now,) Further, define Fn(x) as the column vector with Fn(i,j,x) as its jM+lth entry as before, Define K(x) so that its jM+ith entry is b(i,j) Yo(j,0,y)(l-Z= l AiS(Y)) dy o D n e y=i

-125 - Then, just as before, we obtain in terms of Laplace-Stieltjes transforms F*(s) = R1(s) F*(s) + K*(s) F*(s) = R*(s) F*(s) + Q(s) F* (s), n > 1 which are the same as (IV.8.5). All the rest of the section remain the same with the new meanings attached to the quantities K*(s), Ri(s), Q((s), etc. This then is the generalization of Chapter IV to many server case. r Chapter V holds without change except that in the derivations of R(x) and Q(x) the matrices ro(x) and rl(x) are now different and should be calculated through formulas given in this section. With this change nade everything looks exactly as in Chapter V. A special case of the many server case is interesting. Let N = r, i.e., no storage is allowed. Then the resulting problem can be solved by the general method outlined in this chapter. If, however, the probability of balking when the system is not full is zero, then this problem can be looked as r queues in parallel, each one being a single-server queue with no storage, and the overflow of from one queue becoming the arrival stream for the next one. 3. SINGLE SERVER QUEUE WITH GENERAL SERVICE DISTRIBUTION Let us return to the single-server queue of Chapters IV and V. We have assumed, in those chapters, that the service times are negative exponentially distributed. Next, retain the assumptions

-126 -

regarding independence of service times, but assume that service times are identically distributed positive random variables with V(x) as their distribution function. Then {Sn, Zn, Xn} is no longer equivalent to a semi-Markov process; neither, therefore, is the process {ξn, ψn, Θn} equivalent to a semi-Markov process. Therefore, the methods of Chapters IV and V fail to be useful. We can, however, remedy this by the addition of a dummy variable Un denoting the remaining service time of the customer being served at the instant of the n-th arrival, Tn. Un is zero if and only if Sn = 0. Further, let τk denote the remaining service time of the customer being served at the instant of the k-th balking T̄k, i.e., τk = Un if T̄k = Tn. For convenience we take U0 = τ0 = 0.

Theorem VI.3.1. The process {Zn, Sn, Un, Xn} is equivalent to a semi-Markov process whose state space is the Cartesian product of the set of non-negative real numbers with the set of integers {1,2,...,M(N+1)}.

Proof. We shall show that

Pr{Zn = j, Sn = k, Un ≤ x, Xn ≤ y | Zm, Sm, Um, Xm; m ≤ n−1}
    = Pr{Zn = j, Sn = k, Un ≤ x, Xn ≤ y | Zn−1, Sn−1, Un−1}.

First note that Sn and Un depend only on Sn−1, Un−1, Yn−1 and Xn. But Yn−1 depends on Zn−1 and Sn−1. Hence,

Pr{Sn = k, Un ≤ x | Zm, Sm, Um, Xm+1; m ≤ n−1} = Pr{Sn = k, Un ≤ x | Zn−1, Sn−1, Un−1, Xn}.

-127 -

Then,

Pr{Zn = j, Sn = k, Un ≤ x, Xn ≤ y | Zm, Sm, Um, Xm; m ≤ n−1}
  = ∫0^y Pr{Sn = k, Un ≤ x | Zn = j, Xn = w; Zm, Sm, Um, Xm, m ≤ n−1} dw Pr{Zn = j, Xn ≤ w | Zm, Sm, Um, Xm, m ≤ n−1}
  = ∫0^y Pr{Sn = k, Un ≤ x | Zn−1, Sn−1, Un−1, Xn = w} dw Pr{Zn = j, Xn ≤ w | Zn−1}
  = Pr{Zn = j, Sn = k, Un ≤ x, Xn ≤ y | Zn−1, Sn−1, Un−1},

as was to be demonstrated.

We next give a few corollaries without proofs.

Corollary VI.3.1.A. The process {Zn, Sn, Un} is a Markov process.

Corollary VI.3.1.B. The process {Zn, Sn, Un, Tn} is a Markov process.

If one defines the balking process as {χn, Θn}, where Θn = T̄n − T̄n−1 as before, and where χn = (ξn, ψn, τn), then with a little alteration of the proof of Theorem V.4.1 we obtain the next theorem. We omit the proof.

Theorem VI.3.2. The balking process {χn, Θn} is equivalent to a semi-Markov process, i.e.,

Pr{χn ∈ v, Θn ∈ (0,x) | χk, Θk, k ≤ n−1} = Pr{χn ∈ v, Θn ∈ (0,x) | χn−1}.
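The construction above can be made concrete with a short simulation. The sketch below is illustrative only: the interarrival distribution, the (general) service distribution, the balking rule and the room size N are arbitrary assumptions, not the text's model. It carries the pair (Sn, Un) from one arrival epoch to the next exactly as the dummy variable is meant to be used.

```python
import numpy as np

# Track (queue size S_n, remaining service time U_n) at arrival epochs for a
# single-server queue with a general service distribution; parameters are illustrative.
rng = np.random.default_rng(1)
N = 4
draw_service = lambda: rng.uniform(0.5, 1.5)        # a general, non-exponential service time
draw_gap = lambda: rng.exponential(1.0)             # interarrival time X_{n+1}
balks = lambda s: s >= N                            # join whenever there is room

def next_state(s, u, x, joins):
    """Advance (S, U) over one interarrival gap of length x."""
    if joins:
        s += 1
        if s == 1:
            u = draw_service()                      # the arriving customer enters service
    while s > 0 and u <= x:                         # services completed inside the gap
        x -= u
        s -= 1
        u = draw_service() if s > 0 else 0.0
    return s, (u - x if s > 0 else 0.0)

s, u = 0, 0.0
history = []
for _ in range(100_000):
    joins = not balks(s)
    s, u = next_state(s, u, draw_gap(), joins)
    history.append((s, round(u, 3)))
print("mean queue size at arrival epochs:", np.mean([h[0] for h in history]))
```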

-128 -

In view of these two theorems, both the queueing properties of the system and the balking process can be investigated by almost the same methods as outlined in Chapters IV and V. However, because of the ugly nature of the state space of the processes of interest, the calculations become much too involved. We do not carry the investigation of this general case any further.

4. SINGLE NEGATIVE EXPONENTIAL SERVER WITH SOME SPECIALIZED BALKING RULES

In this section we take up the problem of Chapters IV and V. In those chapters the balking rules were taken to be most general; namely, the probability of a customer balking from the queue depended not only on the state of the queue but also on the state of the arrival process. This can be specialized in at least two directions.

a) The probability of balking depends only on the state of the queue; i.e.,

Pr{Yn = 0 | Zn = i, Sn = j} = Pr{Yn = 0 | Sn = j}.

This case will be encountered in applications where the customers themselves are homogeneous. For this case, the proofs of the theorems given in Chapters IV and V become much easier. However, we have no new theorems.

b) A further specialization is to set Pr{Yn = 0 | Zn = i, Sn = j} equal to zero if j < N and to unity if j = N, i.e., customers join as long as there is room for them. In this case, Chapter IV gives the queueing properties of a finite queue with a single negative exponential server, no balking allowed. Of course this is of interest in its own right, and Chapter IV retains its originality even in this case.

-129 -

Under this type of balking rule, Chapter V gives the properties of the overflow stream from a single negative exponential server subject to a semi-Markov process input. In this case the steady state solution becomes quite interesting: from Theorem V.6.1 the only ergodic states are (i,N), i = 1,2,...,M1. Thus the overflow stream will have many of the characteristics of the arrival stream, but will differ from it in its interval process. Our research remains original in this special case also. A further specialization of this problem, obtained by letting the interarrival process be a simple renewal process, will be discussed later.

c) Another specialization of the general problem may be made at the other extreme, by letting the balking probabilities depend on the state of the arrival process only; i.e.,

Pr{Yn = 0 | Zn = i, Sn = j} = Pr{Yn = 0 | Zn = i} if j < N, and = 1 if j = N.

This case will be quite useful if the customers differ from each other greatly in their needs, so that it is advantageous to send only certain types of customers to a given queue.

This special problem can also be solved in a different way by the methods of Chapter III and the special case b above of Chapters IV and V. First, by the method of Chapter III, the arrivals can be partitioned into two streams, one of which becomes the arrival stream to the queue of interest. Then, by the use of special case b above, the quantities of interest can be obtained.

-130 -

These then are some of the special cases obtained by restricting the balking rules to certain forms.

5. SINGLE NEGATIVE EXPONENTIAL SERVER SUBJECT TO A RENEWAL INPUT

In this section we take up the interesting simple case where the interarrival times are independent, identically distributed random variables. This is equivalent to a semi-Markov process with only one state, i.e., M = 1. Note that in this case the dependence of the balking rule on the arrival process becomes of a trivial nature.

In this special case, Chapter IV gives the queueing properties of a GI/M/1 queueing system with balking. By Corollary B to Theorem IV.3.1, the process {Sn} (n ≥ 0) is a Markov chain. This was also noted by Finch [9], who investigated the properties of the queue size process for this case. Finch, further, has proven that a steady state distribution exists and has given this limiting distribution through generating functions. In Chapter IV we give the solutions for these by a different treatment, through the use of matrix theory. Because of the different tools used, it is difficult to show that our solutions are the same as those given by Finch. In Chapter IV, in this special case, we are not completely original; however, our treatment of the busy period still remains original.

In Chapter V, through further specializations, we obtain some already investigated problems. One such further specialization is obtained by letting the balking probabilities be zero as long as the system is not full. Then we obtain the overflow problem investigated by Disney [6] for a GI/M/1 system with storage, and by Palm [20] for the same system with no storage.

-131 - 00 Zo "0 - I 1 1 fl = 2 1. _ _ a Palm 's overflow problem For this case N is set ~sk 0 e0 -~ 1 e -D - ak 8J e -DlD k ~=e-D -1_ -D _ - ~k <1 e -D-i+pD to be 1. Then, 0 1 1 -PD 0 D - 0 so that, A*(s) R*(s) = ro Q A-(s+)- A A* (s)-A*(s+p) 0 A*(s+P) Q*(s) = (I-D)(1r 4 A*(s+P)) = A*(s)-A*(s+p) 0 A*(s+p) 0 In this case, by Theorem Vo7(1, the times between overflows are independent and identically distributed. This was shown by Palm. Now to get L(x), the distribution function of the time between overflows, we first obtain B*(s), then L*(s) = tB*(s)E by the formula (Vo7o1)o l-A*(s) + A*(s+p) I-Q*(s) = 0 -A( s+ ) 1

-132 -1 - (l-A*(s)+A*(s+p))1 L 0 (I-Q*(s))-1 A*(s+p) 1-A*(s)+A*(s+p) Then, from (V.4.2), B*(s) = R*(s)(I-Q*(s))-1 D = (1-A*(s)+A*(s+p))-" 0 0 A*(s)A*(s+p) A* (s++) A*(s+() B*(s) -A~s)+A (+ ) l-A*(s)+Ax (s+p) 0 0 A*(s) 1 0 1 Since B = B*(O) = L0 1 rn =- o B [O 1] J Also, c and. lo=o = [ 1], then *learly, II = [0 1] Thus, -+2, [ 00 A* (s)" 11 +P) i ' 1 1 L*(s) = JT B*(s) E = A*(s+A) l-A*(s)+A*(s A* (s+p) L*(s) =l-A*(s)+A)(s+P) i-A* (s )+A* ( s+) In Palm's problem, the overflow stream from the i-th server st constitutes the arrival stream for the i+lt server. Then, denoting the Laplace-Stieltjes transform of the interarrival distribution to the i-th server by Al(s), we obtain A.(') 1 A^) As+A..+)

-133 - These are the well known difference equations of Palm. For a more complete treatment of Palm's overflow problem we refer to [6, 16, 20]. bo Disney's overflow problem Disney [6] has generalized the results of Palm by letting storage in the system. Since M=M=1l, b(l,j)=0 for j < N, and 'o=-So-N, we have, from Theorem Vo7ol, that the times between overflows, 91, Q2,..o, are independent and identically distributed. This result was proved by Disney qualitatively. But he did not give the distribution of the time between overflows. In the following we are going to derive this distributiono T Since.I,=po= O..0 1], b=[O.. O 1], clearly, IT=[Oo..O 1]. Thus, L*(s), the Laplace-Stieltjes transform of the distribution function of the time between overflows can be calculated from L*(s) T-:B*B(s)E rR*(;' )(I-Q*(s) )3DE = — IR3*(s)(I-Q*(s)) b which reduces, by the special nature of IT and b, to the inner product of the last row of R*(s) and the last column of (I-Q(s))l. (See also the figure on page 135) Though this description of the solution should suffice, we next go on to give a formula for G*(s) in terms of A*(s). First, we introduce some special notation: let, 00 -k gk(s) =- 5k A*(s+8) = f e- - Lx dA(x), Re.{s}. 0; 9k ( s ): - _k.... kx,

-134 - (sz) m= F z gk(s) I, |z < 1; k=o g and 00 Q(s,z) = 2 z An(s), Iz < n=o 1 for n = o An(S) = * i det (I-Qn(s)) for n 1, where n > 1, Re {s} > 0 where QA(s) = C* gk(B) 2; gk(B) 3- gk(s) n-2 gk(s) 200 n1 gk(s) n gk(s) go(s) gl(s) g2(s) gn-(s) gn-2(s) gn-l(s) go(s) gl(s) go(s) gn-4(s) gn-3(s) gn_2(s)..gl(8).g2(s).g3(s) go(s) gl(s) g2(s) gl(s) First, (hI 00 k 00 k (s,z) = ko zk f e-sx e x dA(x) O =o e/ X e ex dA(x) 0 ((s,z) = A*(s-p(1-z)), Iz < 1, Re {s} > 0 (VI.5.1) Next, expanding An(s) about the last row we obtain, (we suppress "s" for convenience,) n-1 n+2 n-2 Ln = (_1)n+l (-zn gk)(-go) + (-1) (-gn-1)(-go)n2 A1 + (-1) (-gn-2) (-go)n- +.. +(-l)nn-2 (-g5)(-go)2 n-3 + (-1)n+n (-g2)(-go) An-2 + (-1)nn (l-gl)(-go)~ An-1 -1 n 00 6n-1 - go jzE gj0 go - j - z "i~l ~ k=n+l k- 1 gk go n >1.

-135 -

[Figure 5. Matrices in the Overflow Problem. The figure displays the forms of R*(s) and Q*(s); the argument s is suppressed everywhere. R*(s) has rows

[Σ_{k≥0} gk, 0, 0, ..., 0], [Σ_{k≥1} gk, g0, 0, ..., 0], [Σ_{k≥2} gk, g1, g0, ..., 0], ..., [Σ_{k≥N} gk, gN−1, gN−2, ..., g0],

and Q*(s) has the same band structure shifted down by one row, its first row being [Σ_{k≥1} gk, g0, 0, ..., 0] and its last row being identically zero.]

Thus,

Ω(s,z) = 1 + Σ_{n=1}^∞ Δn z^n
       = 1 + z Σ_{n=1}^∞ Δn−1 z^{n−1} − Σ_{n=1}^∞ z^n g0^{n−1} Σ_{k=n+1}^∞ gk − g0^{-1} Σ_{n=1}^∞ z^n Σ_{j=1}^n gj g0^j Δn−j.    (VI.5.2)

Now,

Σ_{n=1}^∞ Δn−1 z^{n−1} = Ω(s,z);    (VI.5.3)

-136 - 00 n n n=l ijl j g Lon z 00 = = j=l nj gj(zgo)J An-j =o1 g0j ( ~ k = g.(zg0) Az j=l 0 k o k = 2(sz)(4(s,zg0(s)) - g"(S)); (vI. 54) and finally, 00 n=l gn-l n j=n+l go z gj = n-1 n (A* - gk) n-El go z -k=o zA* n n n 1-zg - O noo g k + 1 + zA* -1 00 g (Zgo)k 1 + l-zgo -o nk n-k (zgo) = 1+ zA*(s) 1-zgo(s) 4(szgo(s)) g0(s)(1-zg0(s)) (VI.5.5) Putting (VI.5.5), (VI.5.4), and (VI.5.5) in (VI.5.2), we obtain Q(s,z) = 1 + zn(s,z) - )( zgo(s))-go(s) go(s) n(sz) - 1 - zgo(s)A*(s)-$(s,zgo(s)) go(s)('-zgo(s)) Introducing 0 = c(s) = zgO(s); -go (sc )-Ac*(1s) g0(l1-) Hence, Q2(sz) = $(sW)-wA*(s) 1 —w>c)) s s W)- W where, c = zgo(s) = zA*(s+p), (VI.5.6)

-137 - and C(s,w) = A*(s+p(l-w)) = A*(s+p(l-z A*(s+p))) Q(s,z) has a removable singularity at z=A*(s+)-1, (this corresponds to aol.) Further, $(s,))-w = A*(s+P(l-co))-w has, by Rouche's theorem, only one root within the unit circle; denote that root by co(s). Then, Q(s,z) has at most one pole within the unit circle, since from no(s) = zo(s) A*(s+p) we have zo(s) = |IA -j) > Icu(s)l. Therefore, Q(s,z) is analytic for all z with 0 < Izl < z(s)l, Re {s} > 0. Hence, An(s) can be obtained as the coefficient of zn in a series expansion of Q(s,z) around the origin with a suitably chosen z Next we obtain the last column of (I-Q*(s))1, (by using the equation M-1 = (det M)-1 adj M for a non-singular matrix M.) We then have as the last column (-1) (-go)N gN AO (-l)N+l (-go)N-l A gNo-1 Al (-l)N+2 (_go)N+2 2 -2 A2 (-1)N+a o AN A' N (-1)N+N- (-g0) AN go AN1 (-1)N+N (-go)~ AN go AN I

-138 - The last row of R*(s) on the other hand is [rk=N gk gN-1 gN-2 ~~ g2 gl go] Then, the Laplace-Stieltjes the time between overflows, product of the last row of Doing this we obtain transform of the distribution function of L*(s), can be obtained by taking the R*(s) with the last column of (I-Q*(s))-1 LN() ( gN + gN g -A1 + g+ g -2 + 2 g2 go AN-2 + gl go AN-l + go AN) ~ = AN- (AN- - AN + AN) Hence, since g ) A(s+ Hence, since go(s) = A*(s+p) L*(s) =N-l A*(s+P) ALN( S) (VI.5.7) This formula can be used by calculating the determinants AN_l(s) and AN(s) first. This can be done easily by direct computation if N is small. However, for larger values of N, it would be easier to use the generating function Q(s,z) given by (VI.5.6) to calculate AN and AN 1 As an illustration of the use of these we return to Palm's problem, where N-= o Then, L*(s) ( (S) A*(s+P). Now, al(s) 1(Ksz) + zA*(s+p)(l-A* (s)) -2(sz) l-zA*(s+P) L A*(s+ -PzA*(s+p) )-zA*(s+p)

-139 - A0o() -, (s,o) = 1, and Al(s) d ( (s,z) dz Z-0 dL (s,z) A*((s+) dz " (l-zA*( s+B) ) } + 1 A* ( s+ )(l-A (5) ) (l-zA*(s+p )) A*( s+' —zA*s+))-zA*s+p) zA*(s+1)( 1-A-t s )) [-A* (+P-zA*(s+p))-A*(s+P)] [A* s+p-pzA*s+sp)- zA*(s+.P)] so that, Al(s) d(,z)l A*(s+i) + A*(s+P)(l-A*(s)) dz z=o A*(s+) = 1 - A(s) + A*(s+p). Hence we have, L*(s) = (s)A*(s+P) _ Ai(s) A*(s+P) l-A*(s) + A*(s+P) once againo 60 CONCLUSION In this chapter we have given some generalizations of Chapter IV and V as well as some more important special cases. This completes our treatment of the decomposition of streams under queue dependent decision rules.
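The special case of this section can be exercised numerically. The following is a minimal sketch of Palm's recursion, A_{i+1}*(s) = A_i*(s+β)/(1 − A_i*(s) + A_i*(s+β)), applied link by link to the overflow streams of successive no-storage servers. A Poisson stream into the first server and equal service rates β at every stage are assumptions made only for the illustration; the function names are likewise illustrative.

```python
# Palm's recursion for successive no-storage overflow streams (Section 5a):
#   A_{i+1}*(s) = A_i*(s + beta) / (1 - A_i*(s) + A_i*(s + beta)).
# The Poisson first stage A_1*(s) = lam / (lam + s) is an illustrative choice.
def overflow_transform(i, s, lam=1.0, beta=1.0):
    if i == 1:
        return lam / (lam + s)
    prev = lambda t: overflow_transform(i - 1, t, lam, beta)
    return prev(s + beta) / (1.0 - prev(s) + prev(s + beta))

# mean time between overflows from stage i, via a numerical derivative at s = 0
def mean_interoverflow(i, lam=1.0, beta=1.0, h=1e-6):
    return -(overflow_transform(i, h, lam, beta) - overflow_transform(i, 0.0, lam, beta)) / h

for i in range(1, 5):
    print(i, round(mean_interoverflow(i), 4))
```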

PART III APPLICATIONS

CHAPTER VII

A SIMPLE SYSTEM: AN EXAMPLE

1. INTRODUCTION

In this last chapter we will illustrate the use of the analytical methods developed in the thesis by an example. We choose a simple system as shown below in Figure 6. Customers arriving into the system at D1 form a recurrent process. The decision rule at D1 is of the type discussed in Chapter II, namely, the sequence of outcomes of decisions forms a Markov chain with two states. One of the resulting two streams becomes the arrival stream to a single negative exponential server at Q1. Customers arriving at D2 either join the queue or balk, with probabilities depending upon the number of people already there. Those who do not join Q1 form the arrival stream to D3. Decisions at D3 are made on the basis of customers' experiences, as in Chapter III.

[Figure 6. A Simple System.]

-141 -

We will only be interested in the stream which constitutes the arrival stream at Q2; we will not consider the queueing properties of Q2, but we will give the overflow stream from Q2.

2. DEFINITIONS AND NOTATION

We will denote the distribution function (single or matrix valued) of the interarrival times at the k-th decision point by Ak(x), k = 1,2,3,4,5. We take the arrivals into the system to be a recurrent process with the time between arrivals distributed as A1(x). We assume the decision process at D1, the process {Y1}, is a Markov chain with two states, with the initial distribution vector [1 0] and the transition matrix

Q = [ Q11  Q12 ]
    [ Q21  Q22 ],   Q21 > 0.

We take the server of Q1 to have negative exponentially distributed holding times with mean 1/β. There is no storage allowed in Q1, so that the maximum number allowed in Q1 is N = 1. Customers arriving at D2 balk Q1 with probability c (c < 1) if the server is idle and with probability 1 if the server is busy. Hence the balking vector is b = [c  1]^T, and

D = [ c  0 ]
    [ 0  1 ].

The customers arriving at D3 will be of two types: those who balked Q1 when it was empty, and those who balked when it was full. Assume that only the customers of the second type are allowed in the direction of D4, and that with probability d (d is positive). Hence the vector q is given as q = [0  d]^T.

Since only one type of customer is permitted to arrive at D4, the arrival stream to D4 will be a recurrent process. We assume

the maximum number allowed in Q2 is 2, and only those customers who find the system full are turned away, so that the arrivals to D5 are the overflows from Q2.

3. DECOMPOSITION AT D1: ARRIVALS INTO Q1

The decision process at D1 is a Markov chain defined by [1 0] and Q. The event E1 corresponds to a customer being sent to D2; thus we only care about the occurrences of E1.

The arrivals at D1 form a recurrent process, i.e., a semi-Markov process with only one state. Since the decomposed streams have the same state space as the original stream (cf. Theorem II.3.1), the stream arriving at D2 will have only one state also, i.e., it will be a recurrent process. Hence the distribution of the time between arrivals to D2, A2(x), will be given, by (II.3.1), through its Laplace-Stieltjes transform as

A2*(s) = f11(A1*(s)),

where f11(z) is the generating function of the distribution of the first passage time from state E1 to E1 in the chain {Y1}. From Theorem II.3.3, the generating function f11(z) = 1 − 1/q11(z), where q11(z) is the (1,1) entry of (I − zQ)^{-1}, |z| < 1. Now,

I − zQ = [ 1 − zQ11    −zQ12   ]
         [ −zQ21      1 − zQ22 ],

thus,

-144 -

(I − zQ)^{-1} = 1/[(1 − zQ11)(1 − zQ22) − Q12 Q21 z²] · [ 1 − zQ22    zQ12   ]
                                                        [ zQ21      1 − zQ11 ],

so that,

q11(z) = (1 − zQ22) / [1 − (Q11 + Q22)z + (Q11Q22 − Q12Q21)z²]
       = (1 − zQ22) / [1 − (Q11 + Q22)z − (1 − Q11 − Q22)z²].

Then we obtain

f11(z) = 1 − 1/q11(z) = [Q11 z + (1 − Q11 − Q22)z²] / (1 − zQ22),

and hence,

A2*(s) = [Q11 A1*(s) + (1 − Q11 − Q22)(A1*(s))²] / (1 − Q22 A1*(s)).    (VII.3.1)

Let the expected interarrival times at D1 and D2 be μ1 and μ2 respectively. Then, by (II.5.4), we have μ2 = μ1 · f'11(1). Now,

f'11(1) = [(Q11 + 2(1 − Q11 − Q22))(1 − Q22) + Q22(Q11 + 1 − Q11 − Q22)] / (1 − Q22)²
        = (2 − Q11 − Q22) / (1 − Q22)
        = 1 + Q12/Q21.

Thus,

μ2 = (1 + Q12/Q21) μ1.    (VII.3.2)
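A quick numerical check of (VII.3.1) and (VII.3.2) is possible for any concrete input. The sketch below assumes exponential interarrival times at D1 and an arbitrary two-state decision chain Q; both are illustrative assumptions for the example only.

```python
import numpy as np

# Numerical check of (VII.3.1) and (VII.3.2) for an illustrative input.
lam = 2.0
Q11, Q12 = 0.7, 0.3
Q21, Q22 = 0.4, 0.6

A1 = lambda s: lam / (lam + s)                                              # exponential input at D1
A2 = lambda s: (Q11 * A1(s) + (1 - Q11 - Q22) * A1(s) ** 2) / (1 - Q22 * A1(s))   # (VII.3.1)

mu1 = 1.0 / lam
h = 1e-6
mu2_numeric = -(A2(h) - A2(0.0)) / h          # mean interarrival time at D2 from the transform
mu2_formula = (1 + Q12 / Q21) * mu1           # (VII.3.2)
print(mu2_numeric, mu2_formula)               # the two agree to about 1e-6
```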

4. QUEUEING SYSTEM Q2 Now that we have the arrival process for Q2 completely indentified, we turn to an investigation of its properties. Since N=l, the only states of the system are 0 and 1. Now, from equations in page 83 we have -PD - -r D _ e 0 _ e -1 r = -i-l 1 ' rl = - given b = and hene D = 1 PD and we are Then, P (s) = D(ro $ Aa(s+p)) + (I-D)(rl ~ A(s+ )) = (Dr0 + (I-D)rl) 4(s+P) ce-D + (l-c)(e- -1) e D 1 - - 1- -e - 1 + c e-iD _1 l-c 1 -1-c Af js+P) e2 (s+P) A2(s) - (1-c) A2(S+P) A(S) - A(s+P) (1-c)A^(s+P) A2 (s+P)i Clearly, the process {Sn}, (where Sn is the queue size at the n-th arrival epoch,) is a Markov chain (cf. Corollary IV.3.1.B.) The transition matrix P for {SJ, then, is

P P(1 - (1-c) A(P) p = P (0) =.l - AS(P) (l-c)A4(P)1 a2ffi) Let c' = 1-c, and v = A 2(). Then, 2 1 - c'v p = l1-v cfv] (VII.4.1) a. Distribution of queue size Assuming that Q1 was empty initially, i.e., pO = [1 0], we find the distribution of the queue size at the n-th arrival epoch from P pn as follows Let = f <1. Then, Pn= P P as follows. Let p(z) = 2n=o PnZ for IzI < 1. Then, p(z) = no PO pn zn = Po z (zp)n The matrix zP is dominated by z P whose rows all sum to z < 1; thus the series ZEn (zP)n converges to (I-zp)-. We, thus, have p(z) = (I - z) p(z) = p0 (I - zP) Now, (T-zP<-1 1- z (1- C V) =L- I (l-v)z — 1.c'vzj 1-vz c~vz (l- z ) (l-evz ) -(1-V 1-Z(1-C'V) Hence, p(z) [ 1 0] (I-zP) l p(z) 1- vz [(l-z)(l-cvz) (l-z)(l-cvz)I

-147 - From a power series expansions of the entries of p(z) the probabilities Pn(O) and pn(l) can be obtained as the coefficients of zn. Now, 1-vz expanding (l-z)(l-cvz) we get 1 - vz (l-z) (l-cvz) 1- v 1 + cv 1 l-cv l-z l-cv l-cvz for z < 1, Icvz < 1 also since c < 1 and v < 1; thus, 1-v 1 l-cv l-z c'v 1 l-cv l-cvz 1-v 2 3 = v (1 + z + z + z c v (l+cvz+(cv) z + 1-cv + c +... (cv)3z3+...). Hence, 1 - vz (1-z) (1-cvz) 00 l-v+c'v(cv)n n = - z n=o 1 - cv Thus we have l-v+c'v(cv)n Pn(O) = 1 - cv Pn(l) = 1 - p (0) = 1 l-cv (VII.4.2) The matrix P is positive, hence both of the states are ergodic and form one class as they should be in view of Theorem IV.5.5 since My = M = 1 and N1 = N = 1. The distribution of the queue size in the steady state can be obtained from pP = p and pE = 1 uniquely. Doing this we obtain 1-v P = [1-cv c'v 1- cv" (VII.4.3)

The same could have been obtained by taking limits as n approaches infinity of the Equations (VII.4.2) above: since c < 1, v < 1 we have cv < 1, and thus (cv)n -0 as n -; hence, Pn(0) — v 1 asand Pnk/ -~ " l-)n lcv-cv and p (1) — > 'v ' *"n' / 1-cv b. Length of the busy period Since the arrival process is a recurrent period is the busy period started by a customer of type Laplace-Stieltjes transform of the distribution of the is the first entry of the 2 x 1 column vector F*(s) process, the busy 1. Hence the busy period where F*(s) = (I-R* (s)- 1 K(s) with R*, Q*, K* 1 1 0 R1(s) = 0 Qv = LO — Ql(s) = ~o as defined in Section IV.8. Now; 1(cO A0(s+0) = A(s+)1 -c ')1 0A*+ 0. a,-j L 00 And from, F cPeY (l-A2(y))dy K(x) = 2e epyy(l-A2(y))dy o we get,

-149 - K*(s) 00 LPC f o 0 0 00 R2 d -c f d0 esx e-x (l-A2(x)) dx e- e- x (l-A2(x)) c eux (1-A (x))dx Iu=S+P dx] I - f e (1-A2(x))dx u o u+ K*(s) Pc (l-A (s+P))/(s+P) _2 (-A ( s+ ) + (s+P) d A( s+) )/ ( s+ )2 = 1-c 'A(s+p)F o 0 - A(s+P) Now, (I-R_(s)-Q.(s)) 1 1 0 c'A(s+P)/(1-A2(s+P) ) l/(l-A*(s+P)) so that, the first entry of F*(s) = (I-Rj(s)-Qj(s))-1 K*(s) becomes, (by letting s+P = u and A(s+P) = w, c(-w) w ( -w+ ) = (c(l-w) - c du og ) w Hence we have as the Laplace-Stieltjes transform of the distribution of a busy period; H*(s) H* () j ( c (-A(s+))(s+) l A-(s+ )-1 - Pc (l-A(s+P)) - c'A ^(s+P) d log -s+ s+P 22 d +P

-150 - This completes our treatment of the properties of Q2; if the properties of this queue is desired in terms of the original arrival stream, that can be accomplished by replacing As(s) by its equivalent in terms of A(s) as given by (VII.3.1). 5o BALKING PROCESS FROM Q2: ARRIVALS AT D3 The customers arriving at D3 are those who have balked from the first queue. Thus by the use of the methods of Chapter V, we can obtain the arrival process at D3 as the balking process from Q1 We now have R*(s) = $ A; (s+p) = A;(s) 0 A(s)-A(s+p) A*(s+P) and, Q*(s) = (I-D) A(s+)= c AsA (s+p)) c' A(s+P3 O 0O Let u = A2(s), and w = A(s+P); then I - Q(s) = l-c'(u-w) -c'w LO 1t so that (1-c (u-w))-C W(1- C' (u-W))] LO 1 Hence, from (v.4.2), we have as the matrix of balking process:

-151 - A3 (s) 3 = R*(s)(IQ*(s))-1D "u 0 1 = u-w W 'lC-(u-w) 0O ~1 cu 01 1- c (uw) Lu-c' ( 1 rcu c 'uw 1 1-c' (u-w) c(U-w) w_ C,' (w0 l-c'cu(u-w) w) O 1- c' (U-W) ie., 1 FcAJ (s) ctA*(s) A (s+P)] A (s) = 1 IC- *- S+ — 3 l-c'A(s) + c'AAs+) LA(s)-cA(s+P) A2(s+P) (VII.5.1) Thus, the arrivals to D3 form a semi-Markov process with two states, both of which are ergodic and in the same class. To find the distribution of the type of an arrival in the steady state we solve IIA3 = II, IE = 1 for II. Letting v = A(P), AC = cC 'V ] A 3 = 1 3 c + c 'v c(l-v) v (VII.5.2) hence IIA3 = 1 gives: v I c (c' t(o) + (1)) = 1 c + = and, since 7(O) + t(1) = HE = 1, v + (c + ci(l)) = t(1) c + C'v

-152 - Hence; c(l -v) ) c+(l-2c)v ' 1 1 - cv Note that I = -- pD = cv pb (i-v)c + c'v =[ c(l-v) c + (l-2c)v vcI t(i) = c+(1-2c)v [(l-v)c vc 1-cv 1-c1 vc' 1 c + (1-2c)v as needed in order to satisfy the Theorem V.5.4o Let [3 denote the 2 x 1 vector of times at D3 started by a customer who balked and by a customer who balked when it was busy. (V.6.l) we have expected interarrival Q1 when it was empty, Then, from Equation 3 = (I - A3 + R(I-Q*(o)) -)(E2 e 2) where 42 ' to remind, is the mean interarrival time at D2. Now, 1 0L 1 c:v[ R(I-R)-1 = 1 L l-v Ov L C+C 'v 1 C + C V 1 L1-v cI v v_ thus, (I-A3+R(I-Q)-1) 1 -0 1+c 'v c+c 'v c+c'\ + -- 0 C + C'v 1 1 _' (l-v) ) 1 I

-153 - so that, l+c ' v ~ 2 ~3 = c+c'v c'(l-v) c+c'v p 4 c'2 L+; (VII.5.3) 3 = c+c'v 6, DECOMPOSITION AT D3: ARRIVALS INTO Q2 Arrivals into Q2 constitute one of the streams resulting from the decomposition effected at D3 ' Assume that we start with an instant of arrival at D4 as the origin, i.e., 0 = To = T1. Arrival process at D3 has two states both of which are ergodic, i.e., M1 = M = 2 o And the vector of conditional probabilities of an T assignment is q = [0 d]. From Theorem III.5.2 we immediately have that the first state is empty, (since ql = 0, ) and the second state is ergodic. Hence the arrivals into Q2 will form a semi-Markov process with only one state, i.e., a recurrent process. Now, to find the distribution function of the interarrival times we use the formula (III.3.1). We have cu Ct'UW A*(S) = 1uV 3 1-c'(v-, Lc(u-w) wi where u = A;(s), and w = A*(s+P), and c' = 1-c. Then, further letting d' = d cu c'uw Q(s) = (I-D)A3(s) _ 1c lc' (vw) d'c(u-w) d'w _

-154 - so that, (I-Q*(s))1 = l-c 'U+C 'W (1-u+c'w)(1-c'u+c 'w-d 'w)-cc'd 'uw(u-w) 1-c'u+c w-d 'w cd'(u-w) CfUW 1-1+cT'W hence, letting the denominator of the terms in (I-Q*(s)) be denoted by A, we get B*(s) = A3(s)(l Q(s))-D cu c 'uw. -lc -'u+c'w-d 'w c (u-) wi cd ' u-w) c ' UW1 0 l-u+c 'w_ 0 0" d. 1 n r oo c 'uvwd (cu+l-u+c 'w wd(cc 'u(u-w)+l-u+c 'w) Hence we have xA*(1s-) = dw(l-u+c 'w+cc'u(u-w))_ A4( = (1-u+c'w)(1-c'u+c'w-d'w) - cc'dvw(v-w) (vII.6 1) where u= A*(s), 2 w = A*(s+f), 2 C' = 1-c, and d' = 1-d 7. OVERFLOW STREAM FROM Q2: ARRIVALS AT D5 Above we have given the distribution function, (or rather its Laplace-Stieltjes transform,) of the time between arrivals into Q2 ~ This second queue is a finite queue with a negative exponential server with mean service time i1/, subject to a recurrent input. The queueing

-155 - properties of such a system have been examined extensively in the already existing literature; we refer to Saaty [26] and Takacs [31], and their lists of references. We shall not examine the queueing processes in Q2; we are only interested in the overflow stream from that queue. We have, from Theorem V.7,1, that the overflow stream is a recurrent one, i.eo, the times between overflows form a renewal process. We shall, here, derive the distribution function A5(x) of the time between overflows by using the formulas developed in Chapter VI for this special case. From Formula (VI.5.7), we have for the Laplacee-Stieltjes transform of A5(x) A2 AC(S) = to 4 (S+a) where, - 1 gk(s) -go() A1 = 1 - gk(s), A = det k=l - 2 gk(s) l-gl(s) where gk(s) = A*(s+o). k! Now, go(s) = A(s+) gl(s) = - a d (s+~) Zg0 a(s)= e-D A*(s+o) A(s)

-156 - Thus, A1 = 1 - (o ) - g(s) g()) = 1 - A(s) + A;(s+o), and l-A4(s) + A4(s+c) - A4(s+K) A2 = det -(A4(s) - A(s+) + A(s[E)) 1 + s A(s+a) L ` () - A^ (S+T)ds dss d = (l-A4(s)+AZ(s+a))(l+a d A*(s+a))-A(s+cr)(A(s)-A(s-)+ A(s+)) A = (l-A^(s)+Aj(s+o))Ar(s+a) + (l-A^(s))(l+ ds A4(s+a)) Hence, *^ ^ _________ (l-A4(s)+A4 (s+c) )<A(s+&) A>i) (l-AJ (s)+A*(s+a))Ac(s+a) + (l-A*(s))(l+ dd A *(s+Y)) A(s) [1 + (1+a dA Aj(s+a))(Aj(s+a)" - (l-Aj(s)+A)(s+))]1 (VII.7) ^5 ds (vll~lds If one wants to express (VII.7.1) in terms of the original interarrival distribution, it can be done by first expressing A(s) in terms of As(s) as given in (VII.6.1) and then replacing A;(s) by its equivalent in terms of Al(s) as given in (VII.3.1). 8. CONCLUSION We have taken a very simple system and illustrated the step by step analysis of the system by the methods and tools developed in this thesis. Although the computations involved are tedious (if done by

-157 -

hand, of course), they require basically simple operations. Happily we live in the age of computers, and there exist programming languages that can handle the calculations involved quite efficiently. The fact that we only need the results of the last step to go on to the next step is helpful in reducing the size of the problem. Further, if one is interested only in steady state solutions, all transient states created in the course of decompositions can be discarded, as is done in Section 6 of this chapter, thus reducing the size of the matrices involved.
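As a concrete illustration of the remark above, here is a minimal numerical sketch of the first steps of the chapter's pipeline. Exponential input at D1 and the particular parameter values are assumptions made for the illustration; the quantities evaluated are (VII.3.1), (VII.4.1), (VII.4.2) and (VII.4.3).

```python
import numpy as np

# Step-by-step evaluation: the stream arriving at D2 via (VII.3.1), then the
# queue-size chain of Q1, its transient distribution and its steady state.
lam, beta, c = 2.0, 1.5, 0.4
Q11, Q22 = 0.7, 0.6
A1 = lambda s: lam / (lam + s)
A2 = lambda s: (Q11 * A1(s) + (1 - Q11 - Q22) * A1(s) ** 2) / (1 - Q22 * A1(s))

v, cp = A2(beta), 1.0 - c
P = np.array([[1 - cp * v, cp * v], [1 - v, v]])                  # (VII.4.1)

p_n = np.array([1.0, 0.0])                                        # Q1 starts empty
for n in range(6):
    closed_form = (1 - v + cp * v * (c * v) ** n) / (1 - c * v)   # p_n(0), (VII.4.2)
    print(n, round(p_n[0], 6), round(closed_form, 6))             # matrix power vs closed form
    p_n = p_n @ P

p_steady = np.array([1 - v, cp * v]) / (1 - c * v)                # (VII.4.3)
print("steady state:", p_steady, "check pP - p:", p_steady @ P - p_steady)
```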

BIBLIOGRAPHY

1. Benes, V. E. "Heuristic Remarks and Mathematical Problems Regarding the Theory of Connecting Systems," Bell System Tech. J., vol. 41 (1963), 1701-1748.
2. ______. "Algebraic Properties of Connecting Networks," Bell System Tech. J., vol. 42 (1963), 567-607.
3. Berge, C. Theorie des Graphes et ses Applications. 2nd ed. Paris: Dunod, 1963.
4. Burke, P. J. "The Output of a Queueing System," Opr. Res., vol. 4 (1956), 699-704.
5. Chung, K. L. Markov Chains with Stationary Transition Probabilities. Berlin: Springer-Verlag, 1960.
6. Disney, R. L. "Some Problems in the Theory of Conveyors and Their Analysis by the Method of Decomposition of Queueing Networks." Doctoral dissertation, Johns Hopkins University, 1964.
7. Doob, J. L. Stochastic Processes. New York: Wiley, 1953.
8. Feller, W. An Introduction to Probability Theory and Its Applications. 2nd ed., vol. 1. New York: Wiley, 1957.
9. Finch, P. D. "Balking in the Queueing System GI/M/1," Acta Math. Acad. Sci. Hung., vol. 10 (1959), 241-247.
10. Ghosal, A. "Queues in Series," J. Roy. Stat. Soc. Ser. B, vol. 24 (1962), 359-363.
11. Hunt, G. C. "Sequential Arrays of Waiting Lines," Opr. Res., vol. 4 (1956), 674-683.
12. Jackson, J. R. "Networks of Waiting Lines," Opr. Res., vol. 5 (1957), 518-521.
13. ______. "Jobshop-Like Queueing Systems," Mgt. Sci., vol. 10 (1963), 131-142.
14. Jackson, R. R. P. "Queueing Processes with Phase Type Service," J. Roy. Stat. Soc. Ser. B, vol. 18 (1956), 129-132.
15. Kemperman, J. H. B. The Passage Problem for a Stationary Markov Chain. Chicago: University of Chicago Press, 1961.

-158 -

-159 -

16. Khinchine, A. Y. Mathematical Methods in the Theory of Queueing. London: Griffin, 1960.
17. Kingman, J. F. C. "Two Similar Queues in Parallel," Ann. Math. Stat., vol. 32 (1961), 1314-1323.
18. Levy, P. "Processus Semi-Markoviens," Proc. Int. Cong. Math. (Amsterdam, 1954), vol. 3, 416-426.
19. Mirsky, L. An Introduction to Linear Algebra. Oxford: Oxford University Press, 1963.
20. Palm, C. "Intensitätsschwankungen im Fernsprechverkehr," Ericsson Technics, vol. 44 (1943), 1-189.
21. Patterson, R. L. "Some Analytical Methods in the Study of n-Stage Stochastic Service Systems with Applications to the Optimization Problems." Doctoral dissertation, University of Michigan, 1963.
22. Pyke, R. "Markov Renewal Processes: Definitions and Preliminary Properties," Ann. Math. Stat., vol. 32 (1961), 1231-1242.
23. ______. "Markov Renewal Processes with Finitely Many States," Ann. Math. Stat., vol. 32 (1961), 1243-1259.
24. Reich, E. "Waiting Times When Queues are in Tandem," Ann. Math. Stat., vol. 28 (1957), 768-773.
25. Rosenblatt, M. Random Processes. New York: Oxford University Press, 1962.
26. Saaty, T. L. Elements of Queueing Theory with Applications. New York: McGraw-Hill, 1961.
27. ______. "Stochastic Network Flows." Paper read at the Symposium on Congestion Theory, Chapel Hill, August 1964. (Mimeographed.)
28. Smith, W. L. "Regenerative Stochastic Processes," Proc. Roy. Soc. (London), Ser. A, vol. 232 (1955), 6-31.
29. ______. "Renewal Theory and Its Ramifications," J. Roy. Stat. Soc. Ser. B, vol. 20 (1958), 243-302.
30. Syski, R. "Congestion in Telephone Exchanges," in Information Theory, ed. E. C. Cherry (New York: Butterworth, 1961), 85-98.
31. Takacs, L. Introduction to the Theory of Queues. New York: Oxford University Press, 1962.
32. Thrall, R. M., and Tornheim, L. Vector Spaces and Matrices. New York: Wiley, 1957.
