Past Intensity of a Terminated Poisson Process
S. M. Pollock
The University of Michigan
R. L. Farrell
Vector Research, Incorporated
Technical Report 83-14
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, Michigan 48109
July, 1983

INTRODUCTION
One rule for the termination of a stochastic point process is to
stop the process when the number of events within a window of length w
exceeds a threshold number k. Such behavior was hypothesized in [1] in
an attempt to develop a model that might represent the decision-making of
judges facing a juvenile delinquent with a record of "police contact" events extending into the past.
An interesting question arises when one wishes to look at the past
behavior of a collection of processes terminated in this way. This was
examined in [1] in an applied (criminal justice) setting, but unfortunately the mathematical analysis given there is flawed: it does not take
into account the true conditional nature of the process before it is
terminated, given that it does indeed terminate at some time t.
In this paper we present an analysis that rectifies this flaw.
There are a broad variety of applied situations where an analyst is
faced with a stream of data from a controlled process and wishes to
analyze the parameters of the uncontrolled underlying process. Any such
analyses, whether in the criminal justice problems that originated the
current work or in other quality control problems, require an understanding of the relation between the unobservable parameters of the

uncontrolled process and the observable behavior of the controlled
process. In many instances- apparent facts about the underlying process,
derived from observation of the controlled process, may actually be
related to the control mechanism and not the underlying process.
FUNDAMENTAL PROBLEM
Consider a point process commencing at time 0 -- that is, with no
points in (−∞, 0] -- and an associated stopping rule that terminates the
process at the first time t at which there are k events within the interval
[t−w, t]. Let the stopping time be the random variable T, with density
function f(t). For example, if the process is Poisson with rate λ, then
when k = 1, f(t) is trivially λe^{−λt}. However, if k ≥ 2, even in
this simple case f(t) becomes an extremely complicated function (the case
of k = 2 is discussed in detail below).
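Before turning to the analysis, the stopping rule itself is easy to exercise by simulation. The following Monte Carlo sketch is ours, not from the report (function and parameter names are our own): it draws realizations of T for a Poisson process under the k-in-w rule. For k = 1 the sample mean of T should be near 1/λ, matching the exponential density given above.

```python
import random

def stopping_time(lam, w, k, rng, t_max=1e6):
    """Simulate a rate-lam Poisson process started at time 0 and return the
    first time t at which k events fall within the window [t - w, t]."""
    events = []                          # times of the most recent events
    t = 0.0
    while t < t_max:
        t += rng.expovariate(lam)        # next interarrival time
        events.append(t)
        if len(events) > k:
            events.pop(0)                # keep only the last k event times
        if len(events) == k and t - events[0] <= w:
            return t                     # k events inside [t - w, t]: stop
    return None                          # guard against a (rare) long run

rng = random.Random(42)
# k = 1: T is just the first event time, so E[T] should be close to 1/lam
samples = [stopping_time(2.0, 1.0, 1, rng) for _ in range(20000)]
print(sum(samples) / len(samples))       # roughly 0.5 for lam = 2
```

Such simulated realizations, right-justified at their stopping times, are exactly the picture of Figure 1b.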
We wish to find the rate of a stopped process y time units into the
"past", i.e., prior to the process being stopped, but of course after the process
started. The situation is illustrated in Figure 1a, which schematically
shows a number of realizations of stopped processes, and Figure 1b, which
shows these same realizations "right justified" at the stopping time.
The rate of interest is seen to be proportional to the conditional probability of an event in an interval around the time y in the past, given
that the process was in existence at time y in the past. That is, in
looking back at time y we consider only evidence from realizations which
were "alive" at that time; other realizations are disregarded.
This rate can be computed by first considering a rate conditioned on
a particular value of the stopping time. In particular, we can first
compute this conditional rate, defined to be

Figure 1a. Process realizations. [Plot: realizations of stopped processes; horizontal axis: Time, 0.0 to 100.0.]
Figure 1b. Looking back from termination. [Plot: the same realizations right-justified at termination; horizontal axis: Time into the past, −100.0 to 0.0.]

p(y|t) = prob. dens.{non-stopping event at t−y | process stops at t}   (1)

(Throughout this paper, we assume that the relevant densities exist, that
functions are smooth, that limits exist, etc., without giving
mathematical details or describing discontinuous analogs of the cases
described.)
Once p(y|t) has been computed, it allows us to find the unconditional rate:

r(y) = ∫_y^∞ p(y|t) h(y,t) dt   (2)

where h(y,t) is the p.d.f. for the stopping time T given T > y:

h(y,t) = 0,           0 < t < y
       = f(t)/F(y),   y < t < ∞   (3)

with F(y) = ∫_y^∞ f(t) dt. The conditioning event {T > y} is important here,
since it guarantees that the time y into the past is smaller than the
stopping time T.
One expects this rate r(y) to be high for y < w, reflecting the
stopping condition. One might expect it to be monotone non-increasing:
as will be shown, it is not. One might also expect the "controlled" rate
r(y) for y > w to be less than the "uncontrolled" rate λ and less than
(k−1)/w, due to the stopping condition. The first of these assertions is
true for the cases analyzed; the second is not: in fact, r(y) can exceed
(k−1)/w. Thus, the controlled process, viewed from its termination, has
several non-intuitive properties.

SOLUTION FOR A POISSON PROCESS AND k = 2
In the case where k = 2 and the point process is a renewal process,
equation (1) may be usefully re-written by using Bayes' rule. In particular, by defining g(y) = prob. dens.{process stops at t | non-stopping
event at t−y} and e(t−y) = prob. dens.{non-stopping event at t−y},
equation (1) becomes

p(y|t) = g(y) e(t−y) / f(t)

Combining this with equations (2) and (3) gives, after some manipulation,

r(y) = [g(y)/F(y)] ∫_0^∞ e(x) dx   (4)

(This formula does not hold for k > 2, in which case g is a function of
both y and t−y.)
Evaluation of equation (4) is made extremely difficult by the nature
of the stopping process and its effects on g(·) and f(·). Even in the
case of a stationary Poisson process with rate λ, an analytic solution is
readily obtainable only for k = 1 and k = 2. In the former case, of
course, r(y) = 0 for all y > 0, since the process stops at the first event
and thus the rate of events earlier than this is zero.
We can also analyze the case where k = 2. First, to find g(y), we
note that this is the p.d.f. for the time Y from a (non-stopping) event
to termination of the process. Simple renewal arguments lead to the
generator equations:

g(t) = λe^{−λt},                   0 < t ≤ w
     = ∫_w^t λe^{−λx} g(t−x) dx,   w < t        (5)

The upper expression on the right-hand side represents a termination at t
due to the next event's arrival time being less than w; the lower
expression convolutes the exponential next-event time (larger than w)
with g(·). Successive solution of equation (5) over the intervals (w, 2w],
(2w, 3w], ... leads eventually to

g(y) = λe^{−λy} [ Σ_{n=0}^{N} λ^n (y−nw)^n / n!  −  Σ_{n=0}^{N−1} λ^n (y−(n+1)w)^n / n! ]   (6)

where N = ⌊y/w⌋, the greatest integer part of y/w, and Σ_{n=0}^{−1} ≡ 0.
By similar arguments it can be shown that

F(y) = e^{−λy} Σ_{n=0}^{N+1} λ^n [y − (n−1)w]^n / n!   (7)
By the additivity of expectations, ∫_0^∞ e(x) dx is the expected number of
non-stopping events. The number of such events is distributed geometrically with parameter e^{−λw}. Therefore

∫_0^∞ e(x) dx = 1/(1 − e^{−λw})   (8)
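Equations (6), (7) and (8) are straightforward to evaluate numerically. The following sketch is our own code, not part of the report (function and variable names are ours): it implements g(y), F(y) and the rate r(y) of equation (4) for the Poisson, k = 2 case. For y < w the formulas reduce to g(y) = λe^{−λy} and F(y) = e^{−λy}(1 + λy), which the printed sanity checks confirm.

```python
import math

def g(y, lam, w):
    """Density of the time from a non-stopping event to termination, eq. (6)."""
    N = int(y // w)
    s1 = sum(lam**n * (y - n*w)**n / math.factorial(n) for n in range(N + 1))
    s2 = sum(lam**n * (y - (n+1)*w)**n / math.factorial(n) for n in range(N))
    return lam * math.exp(-lam * y) * (s1 - s2)

def F(y, lam, w):
    """Survival function P(T > y) of the stopping time, eq. (7)."""
    N = int(y // w)
    return math.exp(-lam * y) * sum(
        lam**n * (y - (n-1)*w)**n / math.factorial(n) for n in range(N + 2))

def r(y, lam, w):
    """Retrospective rate, eq. (4): g(y)/F(y) times E[# non-stopping events]."""
    return g(y, lam, w) / F(y, lam, w) / (1.0 - math.exp(-lam * w))

# sanity checks for y < w: g(y) = lam*e^(-lam*y), F(y) = e^(-lam*y)*(1+lam*y)
print(g(0.5, 2.0, 1.0), 2.0 * math.exp(-1.0))       # pair should match
print(F(0.5, 2.0, 1.0), 2.0 * math.exp(-1.0))       # pair should match
```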
Figure 2 shows r(y)/λ, as calculated from equation (4) using the
results of equations (6), (7) and (8). In the figure w = 1, so that time
is in "window units", and r(y)/λ is shown for λ = 0.5, 1, 2 and 5. As can
be seen, there are a number of interesting features of this retrospective
look at the terminated process:
(a) For all λ, the rate of occurrences into the past falls off
until y = 1, where it drops to zero.

Figure 2. Ratio of r(y) to lambda. [Plot: r(y)/λ versus time into the past (window lengths), 0.0 to 3.0, for λ = 0.5, 1.0, 2.0 and 5.0.]

(b) Values of r > 1/w can occur even for y > w.
(c) For times greater than 1 into the past, although the rate
climbs in some time periods, it is never greater than λ, and
even for λ < 1 it can be considerably less than λ.
(d) As y → ∞, r(y)/λ approaches a non-zero value dependent on λ
and w.
To analyze the limiting behavior of r(y) for large y, consider the defective renewal equations

g(y) = b(y) + ∫_0^y g(y−u) dL(u)

1 − F(y) = A(y) + ∫_0^y [1 − F(y−u)] dL(u)

where

A(y) = 1 − e^{−λy}(1 + λy),                              y ≤ w
     = 1 − e^{−λw}(1 + λw) + λw(e^{−λw} − e^{−λy}),      y > w

b(y) = λe^{−λy},   0 < y ≤ w
     = 0,          y > w

L(y) = 0,                    y ≤ w
     = e^{−λw} − e^{−λy},    y > w
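As an independent check (our own code, not part of the report), the closed form (6) for g can be verified against its defective renewal equation by numerical quadrature: for y > w, g(y) should equal the convolution of g with the defective interarrival distribution dL(u) = λe^{−λu} du on u > w.

```python
import math

def g(y, lam, w):
    """Closed form (6) for the density of the time from a non-stopping
    event to termination (Poisson rate lam, window w, k = 2)."""
    N = int(y // w)
    s1 = sum(lam**n * (y - n*w)**n / math.factorial(n) for n in range(N + 1))
    s2 = sum(lam**n * (y - (n+1)*w)**n / math.factorial(n) for n in range(N))
    return lam * math.exp(-lam * y) * (s1 - s2)

def renewal_rhs(y, lam, w, steps=4000):
    """Right-hand side of the defective renewal equation for g:
    b(y) + integral over (w, y) of g(y-u) * lam * e^(-lam*u) du,
    with b(y) = lam*e^(-lam*y) on (0, w] and 0 beyond."""
    if y <= w:
        return lam * math.exp(-lam * y)
    h = (y - w) / steps
    total = 0.0
    for i in range(steps):
        u = w + (i + 0.5) * h                    # midpoint rule
        total += lam * math.exp(-lam * u) * g(y - u, lam, w) * h
    return total

for y in (1.7, 2.3):
    print(g(y, 2.0, 1.0), renewal_rhs(y, 2.0, 1.0))   # pairs should agree
```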
Using methods for the asymptotic analysis of renewal equations with
defective distributions (note that L(∞) = e^{−λw} < 1) given,
for instance, in [2] and [3], one can eventually obtain

lim_{y→∞} r(y) = [1/(1 − e^{−λw})] lim_{y→∞} g(y)/F(y) = λη(1 − η)/(1 − e^{−λw})

where η is the (unique) solution in (0, 1) of η = e^{−λwη}. Figure 3 shows
lim r(y)/λ as a function of λw.
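The fixed-point equation η = e^{−λwη} has no closed-form solution, but η is easy to compute numerically. The sketch below is ours (not from the report): it finds η by bisection and evaluates the limiting rate. For λw = 1 the root is the omega constant, about 0.5671.

```python
import math

def eta(lam_w, tol=1e-12):
    """Unique root in (0, 1) of eta = exp(-lam_w * eta), by bisection:
    x - exp(-lam_w * x) is increasing, negative at 0 and positive at 1."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid < math.exp(-lam_w * mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def limiting_rate(lam, w):
    """lim_{y -> inf} r(y) = lam * eta * (1 - eta) / (1 - exp(-lam*w))."""
    e = eta(lam * w)
    return lam * e * (1.0 - e) / (1.0 - math.exp(-lam * w))

print(eta(1.0))               # about 0.5671, the omega constant
print(limiting_rate(1.0, 1.0))
```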

Figure 3. Graph of lim r/lambda. [Plot: limiting ratio versus λw, 0.0 to 20.0.]

DISCRETE TIME APPROACH FOR RENEWAL PROCESSES WITH k > 2
Analysis of even Poisson processes with k > 2 has thus far proven to
be intractable. A discrete-time approximation for general renewal cases
with arbitrary k however, is possible -- one that yields messy but computable results. A sketch of this method follows.
Re-define the stochastic process to be a discrete-time renewal
process. Similarly re-define the stopping rule to be: the process terminates if there are k successes (events) within the past V trials. Thus
V: w/T. We wish to find the probability r(j) that the jth trial before
termination was a success, given that the process was "alive" then.
Thus, j y/r.
We can now identify an induced Markov Process (IMP) generated by the
terminating process. The state space of the IMP consists of a possibly
countable number of states in three classes:
(1) μ+1 states which can be entered at most once each, representing
initial histories of 0, 1, ..., μ failures (the first of these
is the initial state);
(2) a finite or countable number of states representing histories
of μ+1, μ+2, ... failures since the last success (or process initiation) -- the number of these states is either the
maximum possible length of an interarrival interval, or the
point at which the tail of the interarrival distribution
becomes geometric; and
(3) 2^μ − 1 states representing all possible histories of length μ
with one or more successes.
Some class (3) states are absorbing (and some -- those with more than k
successes -- are impossible to reach from the initial state). These may
be lumped together into a single state, leaving

J = Σ_{i=1}^{k−1} (μ choose i)

non-absorbing states in class (3) and a single absorbing state. The process
after this lumping is called the Reduced IMP (RIMP); it has K+J+1 states,
where K, the number of class (1) and class (2) states, may be infinite.
The transition matrix P for the RIMP consists of all zeroes except
for at most two elements in each row: one representing a success on the
next trial, the other a failure. The transition probabilities are easily
determined from the interarrival distribution of the renewal process.
Assume the absorbing state is numbered 0 and the initial state is
numbered 1. Define the events:

S(n) = {the nth trial is a success}
A_i(n) = {the RIMP is in state i after the nth trial}
T(n) = {the RIMP terminates (first reaches its absorbing state)
        on the nth trial}.

Furthermore, let

Z*(i) = the state of the RIMP that results from state i when a
        success occurs.
We wish to calculate the success probability on the jth trial preceding termination, given the process was alive at that time. In analogy
with equations (2) and (3), this is

r(j) = Σ_{n=j+1}^∞ p(j|n) pr{T(n)}  /  Σ_{n=j+1}^∞ pr{T(n)}   (9)

where

p(j|n) = pr{success on jth trial preceding termination | termination on the nth trial}
       = pr{S(n−j) | T(n)}

which can be written

p(j|n) = Σ_{i=1}^{J+K} pr{S(n−j) ∩ A_i(n−j−1) | T(n)}   (10)
i=1
The terms in the summation in equation (10) can be re-written, by using
Bayes' rule, as

pr{S(n−j) ∩ A_i(n−j−1) | T(n)} =
    pr{T(n) | S(n−j) ∩ A_i(n−j−1)}     (11a)
  × pr{S(n−j) | A_i(n−j−1)}            (11b)
  × pr{A_i(n−j−1)} / pr{T(n)}          (11c)

The conditioning event in term (11a) is equivalent to
A_{Z*(i)}(n−j), since the RIMP is, after the (n−j)th transition, in
the state resulting from a success in state i.
The probability in term (11b) is, by definition, P_{i,Z*(i)}.
Combining equations (9), (10) and (11) then gives

r(j) = Σ_{i=1}^{J+K} P_{i,Z*(i)} Σ_{n=j+1}^∞ pr{T(n) | A_{Z*(i)}(n−j)} pr{A_i(n−j−1)}  /  Σ_{n=j+1}^∞ pr{T(n)}   (12)

The terms can be readily (if laboriously) computed from elements of
powers of the matrix P:

pr{T(n) | A_{Z*(i)}(n−j)} = Σ_{h=1}^{J+K} [P^{j−1}]_{Z*(i),h} P_{h,0}

pr{A_i(n−j−1)} = [P^{n−j−1}]_{1,i}

Σ_{n=j+1}^∞ pr{T(n)} = 1 − [P^j]_{1,0}
These formulae offer a computational approach to terminating processes
with k > 2.
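As an illustration (ours, not from the report), the discrete formulation can be checked by direct simulation for a Bernoulli trial process, which is the renewal process with geometric interarrival times. The estimator below conditions, as in equation (9), on the process being alive j trials before termination; for k = 1 every pre-termination trial must be a failure, so r(j) = 0 for j ≥ 1.

```python
import random

def simulate_history(p, mu, k, rng):
    """Run Bernoulli(p) trials until the first trial at which the last mu
    trials contain k successes; return the full outcome sequence (0/1)."""
    hist = []
    while True:
        hist.append(1 if rng.random() < p else 0)
        if sum(hist[-mu:]) >= k:
            return hist

def r_hat(j, p, mu, k, runs, seed=0):
    """Monte Carlo estimate of r(j): success probability on the jth trial
    before termination, among runs alive at that trial."""
    rng = random.Random(seed)
    alive = hits = 0
    for _ in range(runs):
        h = simulate_history(p, mu, k, rng)
        if len(h) > j:            # process existed j trials before the end
            alive += 1
            hits += h[-1 - j]     # outcome j trials before termination
    return hits / alive

# with k = 1 every pre-termination trial is a failure, so this prints 0.0
print(r_hat(1, 0.3, 3, 1, 5000))
```

For k ≥ 2 this estimator provides a check on values of r(j) computed from equation (12).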
ACKNOWLEDGEMENT
We should point out that L. Tierney brought to our attention the
flaws in the development of r(t) in [1]. Jeff Alden was particularly
helpful in pointing out an elegant transform approach that allows a
verification of equations (6) and (7). M. Maltz has developed a simulation of the process that confirms our results. Part of this research was
supported by National Institute of Justice Grant #81-IJ-CX-0064.

REFERENCES
[1] Maltz, M. D. and Pollock, S. M., "Artificial Inflation of a
    Delinquency Rate by a Selection Artifact", Operations
    Research 28, pp. 547-559 (1980).
[2] Feller, W., An Introduction to Probability Theory and Its
    Applications, Volume II, Wiley (1966).
[3] Karlin, S. and Taylor, H. M., A First Course in Stochastic
    Processes, Second Edition, Academic Press (1975).