THE UNIVERSITY OF MICHIGAN COLLEGE OF ENGINEERING Program in Computer, Information and Control Engineering Technical Report COUNTING PROCESSES AND INTEGRATED CONDITIONAL RATES: A MARTINGALE APPROACH WITH APPLICATION TO DETECTION François-Bernard Dolivo supported by: U.S. AIR FORCE AIR FORCE OFFICE OF SCIENTIFIC RESEARCH GRANT NO. AFOSR-70-1920-C ARLINGTON, VIRGINIA and NATIONAL SCIENCE FOUNDATION GRANT NO. GK-20385 WASHINGTON, D.C. administered through: DIVISION OF RESEARCH DEVELOPMENT AND ADMINISTRATION ANN ARBOR June 1974

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer, Information and Control Engineering) in The University of Michigan 1974

ABSTRACT COUNTING PROCESSES AND INTEGRATED CONDITIONAL RATES: A MARTINGALE APPROACH WITH APPLICATION TO DETECTION Martingale theory, as recently developed by Meyer, Kunita, Watanabe and Doléans-Dade, is used to study Counting Processes (CP) and their likelihood functions. Here a CP is a stochastic process having right-continuous sample paths constant except for randomly located positive jumps of size one. First the problem of modeling and description of CP's is examined. Let (Ft) be an increasing right-continuous family of σ-algebras to which the CP (Nt) is adapted and suppose that the random variable Nt is a.s. finite for each t. The Doob-Meyer decomposition for supermartingales associates to the CP (Nt) a unique natural increasing process (At) which makes the process (Mt = Nt - At) a local martingale with respect to (Ft). This decomposition (Nt = Mt + At) is intuitively a decomposition into the part (Mt) which is not predictable and the part (At) which can be perfectly predicted. The process (At) is called the Integrated Conditional Rate (ICR) of (Nt) with respect to (Ft) for the following reason: when (Nt) satisfies some sufficient conditions the ICR takes on the form (∫₀ᵗ λs ds), where (λt) is

a non-negative process, called the conditional rate, satisfying

    λt = lim(h↓0) E[(Nt+h - Nt)/h | Ft].

Our approach, however, requires only the weak assumption that Nt is a.s. finite for each t; there always exists an ICR, while in general a conditional rate cannot be defined. Sufficient conditions for the existence of a conditional rate are presented. Based on the character (e.g., totally inaccessible) of the stopping times defined by its jumps, any CP is shown to be uniquely decomposable into the sum of a regular CP and an accessible CP. It is also demonstrated that each class is completely characterized by continuity properties of the ICR. CP's with independent increments are uniquely distinguished by a property of their ICR's: they are deterministic and given by the mean of the CP. Expressions for probability generating functions and conditional probabilities P{Nt - Ns = n | Fs} are derived. The technique used (a typical martingale approach) can be specialized to CP's which admit a conditional rate satisfying some kind of conditional independence property and to processes of independent increments. Results in this last case are well known when the mean of the process is continuous, but our derivation extends to the general case. Likelihood ratios for detecting CP's are computed via an extension of the three-step technique (the Likelihood Ratio Representation Theorem, the Girsanov Theorem and the Innovation Theorem) introduced by Duncan and Kailath in their works on detecting a stochastic signal in white noise.

Suppose (Nt) is under the measure P0 (resp. P1) a CP with an ICR with respect to the family of σ-algebras (F0t) (resp. (F1t)) of the form (∫₀ᵗ λ0s dms) (resp. (∫₀ᵗ λ1s dms)), where (λ0t) (resp. (λ1t)) is a positive process and mt a continuous deterministic increasing function. Then the likelihood function Lt for the above detection problem and a time of observation [0,t] is shown to be

    Lt = [ ∏(n: Jn ≤ t) λ̂1(Jn)/λ̂0(Jn) ] exp[ ∫₀ᵗ (λ̂0s - λ̂1s) dms ]

where λ̂it = Ei[λit | σ(Nu, 0 ≤ u ≤ t)], i = 0,1 (Ei(·): expectation operator with respect to Pi) and Jn is the time of the nth jump of (Nt). Stochastic integral equations which allow us to compute the likelihood function Lt recursively are also derived.

ACKNOWLEDGEMENTS The author wishes to express his gratitude to Professor Frederick J. Beutler, Chairman of the Doctoral Committee, for his invaluable guidance, constructive criticisms and encouragement throughout the period of research and preparation of the dissertation. Gratitude is also extended to the other members of the Doctoral Committee, Professors W. L. Root, J. G. Wendel, L. L. Rauch, and R. L. Disney, for reviewing the manuscript. Thanks are due to Mr. Thomas Hadfield for his competence in typing the final copy. The author would also like to acknowledge the financial support of the Air Force Office of Scientific Research, AFSC, USAF, under Grant No. AFOSR-70-1920-C, and the National Science Foundation under Grant No. GK-20385. Early years of study at The University of Michigan were supported by a European Space Research Organization and National Aeronautics and Space Administration exchange fellowship. Their help is gratefully acknowledged. Finally, a special measure of gratitude is due to Viviane, Anne-Catherine and Sylvie for their patience and understanding.

TABLE OF CONTENTS
ABSTRACT ............. ii
ACKNOWLEDGEMENTS ...... v
LIST OF APPENDIXES ............... ix
LIST OF KEY TERMS ................. x
LIST OF SYMBOLS AND ABBREVIATIONS ......... xii
INTRODUCTION ................... 1
CHAPTER 1. MATHEMATICAL REVIEW: MARTINGALES AND RELATED PROCESSES
1.0 Introduction .............. 7
1.1 Stochastic Processes .......... 7
1.2 Stopping Times ............. 11
1.3 Measurable Processes .......... 18
1.4 Class (D) and (DL) Processes .... 20
1.5 Martingales .............. 21
1.6 Potentials and the Riesz Decomposition ............. 25
1.7 Doob-Meyer Decomposition
    - Introduction ............. 26
    - Integration with respect to an Increasing Process ........ 29
    - Uniqueness .............. 30
    - Existence ........ 33
1.8 Square Integrable Martingales
    - Introduction ............. 37
    - Natural Increasing Processes Associated with Square Integrable Martingales and Stochastic Integrals .............. 39

    - Decomposition of the Space ... 42
    - Quadratic Variation Processes and Stochastic Integrals ......... 43
1.9 Generalizations of Martingales
    - Local Martingales and Stochastic Integrals ......... 45
    - Semimartingales and the Change of Variables Formula ......... 51
    - Exponential Formula ......... 53
CHAPTER 2. COUNTING PROCESSES AND INTEGRATED CONDITIONAL RATES
2.0 Introduction .............. 55
2.1 Basic Definitions and Assumptions ... 56
2.2 A Preliminary Result ......... 59
2.3 Integrated Conditional Rate
    - Doob-Meyer Decomposition for Counting Processes ......... 65
    - Integrated Conditional Rate: Definition ..... 67
    - Examples and First Properties .. 70
2.4 Regular and Accessible Counting Processes
    - Definition and Decomposition ... 77
    - Regular Counting Processes ...... 83
    - Accessible Counting Processes ... 87
2.5 Conditional Rate ............ 100
2.6 Counting Processes of Independent Increments .......... 106

2.7 Probability Generating Function
    - Preliminaries ..... 111
    - Application to Counting Processes of Independent Increments ...... 112
    - Application to Counting Processes with a Conditional Rate ... 118
CHAPTER 3. DETECTION
3.0 Introduction ....... 122
3.1 Two Basic Theorems in Detection
    - Absolutely Continuous Change of Measure: the Girsanov Theorem .... 124
    - Innovation Theorem .......... 134
3.2 Martingale Representation ....... 136
3.3 Likelihood Ratio Representation
    - Main Result ............ 151
    - Discussion of Assumptions ...... 170
3.4 Detection Formulas
    - Introduction ............. 175
    - Likelihood Ratio: First Result ... 178
    - Generalization ............ 181
    - Integral Equations for Likelihood Ratios ...... 182
CONCLUSION .......... 187
APPENDIX ............. 190
REFERENCES .......... 198

LIST OF APPENDIXES
                                        Page
Appendix A.1 ..................... 190
Appendix A.2 .................... 191
Appendix A.3 ..................... 194
Appendix A.4 ..................... 194
Appendix A.5 .................... 196

LIST OF KEY TERMS
                                        Page
Accessible -counting process 77 -stopping time 15
Adapted 11
Class (D), (DL) 20
Conditional rate 100 -integrated 67
Counting process 57 -predictable 77
Doob-Meyer decomposition 28 -for counting process 65
Family of σ-algebras 11 -right-continuous 11
Hitting time 15
Increasing -process 28 -process associated with 39
Integrated Conditional Rate 67
Local martingale 46
Locally bounded 49
Martingale 22 -square integrable 38
Measurable 18 -progressively 18
Modification 9
Natural 30
Poisson process -non homogeneous 111 -generalized 113

Potential 25 -generated by 29
Predictable -counting process 77 -process 40 -stopping time 15
Process -adapted 11 -counting 57 -locally bounded 49 -predictable 40 -right-continuous 9 -simple 40 -stochastic 8
Quadratic variation process 49
Regular -counting process 77 -sub-, supermartingale 34
Riesz decomposition 25
Semimartingale 51
Stopping time 12 -accessible 15 -inaccessible 15 -predictable 15 -reducing 46 -totally inaccessible 15
Submartingale 22
Supermartingale 22
Time of discontinuity 16 -free of 16
Time of nth jump 58

LIST OF SYMBOLS AND ABBREVIATIONS The following list contains symbols and abbreviations frequently used in this dissertation, alphabetically ordered.
Symbol      Meaning                                         Defined on Page
a.s.        Almost surely
(At)        Integrated Conditional Rate                     67
A+          Class of integrable, right-continuous, adapted, increasing processes which are zero at the time origin   45
A           The class A+ - A+                               45
B([a,b])    Borel sets of the interval [a,b]                9
ℂ           Set of complex numbers
CP          Counting Process                                57
E           Expectation operator with respect to the measure P
Ei          Expectation operator with respect to the measure Pi
E(·|Ft)     Expectation operator conditioned on the σ-algebra Ft
(Ft)        Increasing family of σ-algebras                 11
FT          σ-algebra associated to the (Ft) stopping time T    14

Symbol      Meaning                                         Defined on Page
(Gt)        Increasing family of σ-algebras                 11
GT          σ-algebra associated to the (Gt) stopping time T    14
H(Ft)       Class of locally bounded predictable processes  49
IA          Indicator function of the set A                 8
ICR         Integrated Conditional Rate                     67
Jn          Time of nth jump of the CP (Nt)                 58
LHS         Left Hand Side
(Lt)        Likelihood ratio                                178
(λt)        Conditional rate                                100
L(X)        Class of predictable processes (Ht) such that E ∫ H²s d<X>s < ∞    40
L(P,Ft)     Space of local martingales which are zero at the time origin    46
(Mt)        The local martingale (Nt - At)                  69
M²(P,Ft)    Space of square integrable martingales which are zero at the time origin    38
M²loc(P,Ft) Space of martingales which are locally square integrable and zero at the time origin    38
(Nt)        Counting process                                57

Symbol      Meaning                                         Defined on Page
Nt          σ-algebra σ(Nu, 0 ≤ u ≤ t) generated by the CP (Nt) up to and at time t    57
ℕ           Set of integers
P, P0, P1   Probability measures on (Ω,F)
PiR         Restriction of the measure Pi to the σ-algebra NR    176
ℚ+          Set of positive rationals
RHS         Right Hand Side
ℝ           Real line
ℝ+          Positive real line
V+          Class of finite valued, right-continuous, adapted, increasing processes which are zero at the time origin    45
V           The class V+ - V+                               45
(X+t)       Positive part of (Xt): Xt ∨ 0
(X-t)       Negative part of (Xt): -(Xt ∧ 0)
(Xct)       Unique local martingale which is the continuous part of the local martingale (Xt)    47
(Xdt)       Unique local martingale given by (Xt - Xct) where (Xt) is a local martingale    47

Symbol      Meaning                                         Defined on Page
<X>t        Natural increasing process associated with the square integrable martingale (Xt)    39
[X]t        Quadratic variation process of the local martingale (Xt)    49
ω           Elementary outcome                              7
Ω           Set of elementary outcomes                      7
(Ω,F)       Underlying measurable space                     7
σ(·)        σ-algebra generated by (·)
∅           Impossible event, empty set
p(z,t,s)    Probability generating function                 111
Δft         ft - ft-: jump of the function f at t           37
a ∧ b       Minimum of a and b
a ∨ b       Maximum of a and b

INTRODUCTION In this thesis we examine the relation between counting processes and martingales and apply the pertinent results to solve the detection problem for a large class of counting processes. By a counting process we mean a process which is a.s. zero at the time origin and has a.s. right-continuous sample paths which are constant except for positive randomly located jumps of size one. Such a counting process (Nt) can be interpreted as one which counts, starting from the time origin, the number of points of a point process falling in the interval (0,t]. We think of a point process as a sequence of points randomly located on the real line. To fix ideas in their most simple form suppose now for a moment that (Nt) is a counting process of independent increments and denote its mean ENt, supposed finite for each t, by mt. Then it is easy to see by a direct computation that the process

    Mt = Nt - mt        (I.1)

is a martingale. If furthermore (Nt) is of Poisson type with rate λt then

    mt = ∫₀ᵗ λs ds.

Recall also that

    λt = lim(h↓0) E[(Nt+h - Nt)/h].

The literature (Rubin [R2], Snyder [S1],[S2],[S3], Clark [C1] and lately Brémaud [B1]) reflects interest in the case of a counting process, sometimes called an extension of the Poisson process, which can be described by a random rate. This random rate, also called intensity function, has the interpretation:

    λt = lim(h↓0) E[(Nt+h - Nt)/h | Ft]        (I.2)

where (Ft) is an increasing family of σ-algebras to which (Nt) is adapted. Usually the σ-algebra Ft is taken to be the minimal σ-algebra generated by the process (Nt) up to and at time t. We denote this last σ-algebra by Nt. The approach usually taken in the literature ([S1],[S2],[S3],[R2]) to describe such a counting process (Nt) is to assume that the limits

    lim(Δt→0+) (1/Δt) [1 - P{Nt+Δt - Nt = 0 | Nt}]
    lim(Δt→0+) (1/Δt) P{Nt+Δt - Nt = 1 | Nt}        (I.3)

exist and are equal. Denote this limit by μt. This process (μt), clearly in the same spirit as the process (λt) defined by (I.2), has the following interpretation:

the probability that the counting process (Nt) has a jump in the interval (t,t+Δt] given the past is equal to μtΔt + o(Δt). The technique to obtain results is then to examine what is happening in small cells of size Δt and take limits. But this limiting procedure is not simple (the terms o(Δt) are random!) and for validity requires numerous purely technical assumptions on the process (μt) (see, for example, in [R2] conditions (2), (3) and (4)). This approach has other drawbacks and difficulties. The existence of counting processes (excluding Poisson processes) for which the above limits (I.3) exist and have all the required properties has never been shown. Also the specification of the process (μt) may not define a unique counting process, if indeed any such counting process exists. The problem of existence of counting processes which admit a random rate as defined by (I.2) has been treated only lately by Brémaud [B1] in his dissertation, where a partial answer to this problem is given: the existence of counting processes which possess a bounded random rate with respect to the family of σ-algebras generated by the counting process itself is demonstrated by the use of absolutely continuous changes of measure. We discuss and extend this technique in Section 3.1, while in Section 2.5 sufficient conditions for the existence of a conditional rate are given. We are interested in the generalization of the above ideas. The basic mathematical tool involved in this is

the theory of martingales and related processes. This material may not be familiar to the reader and is reviewed in Chapter 1, in which also the basic notation used throughout this thesis is introduced. In Chapter 2 we consider a counting process (Nt) with the sole assumption that (i) the random variable Nt is a.s. finite for each t. The Doob-Meyer decomposition for supermartingales then implies that any such counting process adapted to a right-continuous increasing family of σ-algebras (Ft) can be uniquely written as a sum [compare with (I.1) in the case of processes of independent increments]

    Nt = Mt + At        (I.4)

where the process (Mt) is a local martingale with respect to (Ft) and (At) is a natural increasing process. This process (At), when it has a.s. absolutely continuous sample paths, can be expressed as

    At = ∫₀ᵗ λs ds        (I.5)

where furthermore (λt) satisfies relation (I.2). This process (λt) is then reasonably called the "Conditional Rate" of the process (Nt) with respect to the family (Ft). For the process (At), relation (I.5) suggests then the name "Integrated Conditional Rate" of (Nt) with respect to (Ft). This terminology will be used even when, as is usually the case, a conditional rate does not exist. Observe that this approach is much more general and goes in the opposite

direction of the one taken in previous works ([R2],[S1],[S2],[S3]): we begin with a counting process (Nt) satisfying the very weak assumption (i) and arrive at the notion of integrated conditional rate, instead of defining by (I.3) a conditional rate (subject to numerous assumptions) and then assuming the existence of a hopefully unique counting process corresponding to this conditional rate. Chapter 2 is believed to be the first systematic study of the notion of integrated conditional rate. In Chapter 3 the likelihood ratio for detecting counting processes is computed. This is done using the three-step technique introduced by Kailath [K3] and Duncan [D3] in their works on detection of a stochastic signal in white noise. These three steps are the Likelihood Ratio Representation Theorem ([D3],[B1]), the Girsanov Theorem ([G1],[V1]) and the Innovation Theorem ([K3]). By this method likelihood ratios for a large class of counting processes can be found. Stochastic integral equations which allow us to compute the likelihood ratio continuously in time by recursive techniques are also derived. It is also shown how the Girsanov Theorem can be used to prove the existence of counting processes for which the integrated conditional rate is in a special form. Results of this third chapter constitute an extension of Brémaud's work [B1]. It should be noted in this connection that Brémaud's proof of the Likelihood Ratio Representation Theorem is erroneous. We will show that the errors cannot be corrected without supplying a missing assumption.
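The decomposition Nt = Mt + At can be illustrated numerically. The sketch below is ours, not the dissertation's: it simulates a counting process with conditional rate λt = 1 + t by acceptance-rejection (thinning) against a dominating constant rate, and checks by Monte Carlo that Nt minus the integrated conditional rate At = ∫₀ᵗ (1+s) ds has mean near zero, as the (local) martingale property of (Mt) requires. The function names and parameters are illustrative choices.

```python
import random

def nhpp_count(lam, lam_max, t, rng):
    """Count the jumps in (0, t] of a counting process with conditional
    rate lam(s), simulated by thinning: propose jumps at constant rate
    lam_max >= lam(s) and accept each with probability lam(s)/lam_max."""
    n, s = 0, 0.0
    while True:
        s += rng.expovariate(lam_max)   # next proposed jump time
        if s > t:
            return n
        if rng.random() < lam(s) / lam_max:
            n += 1                      # accepted: a jump of the process

rng = random.Random(1)
lam = lambda s: 1.0 + s                 # conditional rate lambda_t = 1 + t
t = 2.0
A_t = t + t * t / 2.0                   # ICR A_t = integral of (1+s), here 4.0
trials = 20000
# Monte-Carlo mean of M_t = N_t - A_t; should be close to zero.
mean_M = sum(nhpp_count(lam, 3.0, t, rng) - A_t for _ in range(trials)) / trials
print(mean_M)
```

The empirical mean of Mt comes out within a few standard errors of zero, which is the elementary manifestation of the Doob-Meyer decomposition discussed above.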

The likelihood ratio formulas presented in Section 3.4 constitute a generalization of the formulas given by Reiffen and Sherman [R4] and Bar-David [B2] in the context of Poisson processes, and Skorokhod [S4] in the context of processes with independent increments.
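As a concrete instance of the likelihood-ratio formulas just mentioned, consider the simplified case of deterministic rates (so the conditional estimates coincide with the rates themselves) and mt = t; the likelihood ratio then reduces to the product of the rate ratios at the observed jump times multiplied by exp[∫₀ᵗ (λ0s - λ1s) ds]. The following sketch is an illustration we add, not the dissertation's code; the function name and the Riemann-sum discretization are our choices.

```python
import math

def likelihood_ratio(jump_times, lam0, lam1, t):
    """Likelihood ratio L_t = dP1/dP0 on [0, t] for two counting
    processes with deterministic rates lam0(s), lam1(s): product of
    lam1/lam0 over the jumps times exp(int_0^t (lam0 - lam1) ds),
    the integral approximated here by a left Riemann sum."""
    prod = 1.0
    for j in jump_times:
        prod *= lam1(j) / lam0(j)
    n = 10000
    h = t / n
    integral = sum((lam0(i * h) - lam1(i * h)) * h for i in range(n))
    return prod * math.exp(integral)

# With constant rates the formula reduces to (lam1/lam0)^N_t * exp((lam0-lam1)t):
jumps = [0.3, 1.1, 1.7]
L = likelihood_ratio(jumps, lambda s: 1.0, lambda s: 2.0, 2.0)
print(L)  # (2/1)^3 * exp((1-2)*2) = 8 e^{-2} ≈ 1.0827
```

For constant rates only the number of jumps matters, which matches the classical Poisson detection formulas of Reiffen-Sherman and Bar-David cited above.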

CHAPTER 1 MATHEMATICAL REVIEW: MARTINGALES AND RELATED PROCESSES 1.0 INTRODUCTION We assume the reader to be familiar with measure theory in general and as it applies to the study of probabilities and stochastic processes. But he may not be acquainted with concepts such as stopping times, martingales, the Doob-Meyer decomposition and stochastic integrals, concepts which are heavily used in this thesis. Therefore, the main purpose of this chapter is to introduce and explain the mathematical notions necessary for a good understanding of this study, and to serve as a reference which will hopefully facilitate the reading. At the same time, the terminology used throughout this thesis will be introduced. The main references for this review chapter are [M1] for Sections 1.1 to 1.6, [M1] and [R3] for Section 1.7, and finally [D1] and [M5] for Sections 1.8 and 1.9. Capital letters are systematically used to denote random variables. 1.1 STOCHASTIC PROCESSES The standard notation (Ω,F,P) is used to denote a probability space. The set Ω is the set of all possible outcomes of a specified experiment and the sets of the σ-algebra F are called events. A measurable map from the measurable space (Ω,F) into a measurable space (E,E), where E

denotes a σ-algebra of subsets of the set E, is called a random variable. The following notation is also standard: Definition 1.1.1: If A is any set we define the indicator function of the set A to be the function given by:

    IA(x) = 1 if x ∈ A, 0 otherwise.

Now the definition of stochastic processes: ([M1], Definition 2-IV) Definition 1.1.2: Let T be an index set. A stochastic process is a system (Ω,F,P,(Xt, t ∈ T)) consisting of (1) a probability space (Ω,F,P) and (2) a family (Xt, t ∈ T) of random variables defined on (Ω,F) with values in a measurable space (E,E). The measurable space (E,E) is called the state space of the process. Whenever it causes no ambiguity we will use the simplified notation (Xt, t ∈ T) or even (Xt). A stochastic process (Xt) is in particular a mapping from T × Ω into E. We denote by Xt(ω) the image by this mapping of the point (t,ω). The random variable Xt(·) (simplified notation: Xt) is called the state of the process at time t and the mappings X.(ω) of T into E are called trajectories or sample paths (or functions) of the process.

Definition 1.1.3: (see [M1], Definition 5-IV) Let (Xt, t ∈ T) and (Yt, t ∈ T) be two stochastic processes defined on the same probability space (Ω,F,P) with values in the same state space (E,E). The process (Yt, t ∈ T) is a modification of the process (Xt, t ∈ T) if Xt = Yt a.s. for each t ∈ T. If two processes (Xt) and (Yt) are modifications of each other then they have the same finite dimensional distributions (i.e.: P{Xt1 ∈ A1,..., Xtn ∈ An} = P{Yt1 ∈ A1,..., Ytn ∈ An} for every finite system of times t1,...,tn and sets A1,...,An of E; in other words (Xt) and (Yt) are equivalent processes ([M1], Definition 3-IV)). This motivates the fact that, as usually done when dealing with stochastic processes, we will not distinguish between modifications of the same process. In this thesis, the state space will always be the real line ℝ equipped with its Borel sets B(ℝ)* and the index set T will be the positive real line ℝ+. From now on we restrict ourselves to this case. We will deal most of the time with stochastic processes having a.s. right-continuous trajectories. We call them right-continuous processes for abbreviation. If two processes (Xt) and (Yt) are two modifications of the same process then we write

    Xt = Yt a.s. for each t ∈ ℝ+        (1.1.1)

*The notation B([a,b]) denotes the Borel sets of an interval [a,b].

By

    Xt = Yt for every t ∈ ℝ+ a.s.        (1.1.2)

we mean P{Xt = Yt, t ∈ ℝ+} = 1. Condition (1.1.2) obviously implies (1.1.1). The following Remark shows that the converse is true if the processes (Xt) and (Yt) are both left or right-continuous. Remark 1.1.4: Suppose (Xt) and (Yt) are two left or right-continuous modifications of the same process. Then clearly the set A = {Xt = Yt, t ∈ ℚ+}, where ℚ+ denotes the set of positive rational numbers, is measurable, and P(A) = 1. For t ∉ ℚ+, let tn ∈ ℚ+ be a decreasing or increasing (according to the right or left continuity property of the two processes (Xt) and (Yt)) sequence converging to t. For ω ∈ A we have

    Xt = lim(n) Xtn = lim(n) Ytn = Yt

which shows that Xt = Yt for every t ∈ ℝ+ a.s.

1.2 STOPPING TIMES The basic reference for this section is [M1], Chapters IV and VII. Let (Ω,F) be a measurable space and let (Ft, t ∈ ℝ+) be a family of σ-subalgebras of F such that one has Fs ⊂ Ft if s ≤ t. We say that (Ft, t ∈ ℝ+) is an increasing family of σ-subalgebras of F and we often use the simplified notation (Ft). For each t ∈ ℝ+ the σ-subalgebra Ft is called the σ-algebra of events prior to t. We denote by F∞ the σ-algebra generated by the union of the σ-algebras Ft and we set:

    Ft+ = ∩(s>t) Fs.

The family (Ft) is said to be right-continuous if Ft = Ft+ for every t ∈ ℝ+. Definition 1.2.1: (see [M1], Definition 31-IV) Let (Xt) be a stochastic process defined on a probability space (Ω,F,P) and let (Ft) be an increasing family of σ-subalgebras of F. The process (Xt) is said to be adapted to the family (Ft) if Xt is Ft-measurable for every t ∈ ℝ+. We often take for the σ-algebra Ft the σ-algebra σ(Xu, 0 ≤ u ≤ t) generated by the process (Xt) up to time t. If a process (Xt) is adapted to a family (Ft) we must obviously have the relation Ft ⊃ σ(Xu, 0 ≤ u ≤ t). It is convenient to think of the events of F as the representation of certain phenomena which can occur in a

certain universe. The σ-algebras Ft consist then of the events that occur prior to the instant t. The Ft-measurable random variables are hence those which depend only on the evolution of the universe prior to t. In problems of detection and filtering, the σ-algebra Ft represents, loosely speaking, the information available up to and at time t on which our detection scheme and estimation are based. The situation where Ft is given by σ(Xu, 0 ≤ u ≤ t) means then that the available information at time t is obtained by observing the process (Xt) up to time t. In the other case, where Ft properly contains σ(Xu, 0 ≤ u ≤ t), we have more information at our disposal than merely that generated by the process (Xt). We will now introduce the very important notion of stopping times. Suppose an observer watches for the appearance of a specified event and notes the first time T(ω) it occurs. The event {T ≤ t} takes place if and only if the event we are watching for is produced at least once before the time t, or at that instant. Therefore the event {T ≤ t} belongs to the σ-algebra of events prior to t. This motivates the following definition: ([M1], Definition 33-IV) Definition 1.2.2: Let (Ω,F) be a measurable space and let (Ft) be an increasing family of σ-subalgebras of F. A positive random variable T defined on (Ω,F) is said to

be a stopping time* of the family (Ft) if T satisfies the following property: the event {T ≤ t} belongs to Ft for every t ∈ ℝ+. Remark 1.2.3: (a) We often allow stopping times to take the value +∞. (b) The notion of stopping time depends on the family (Ft) (but not on a measure). (c) If the condition {T < t} ∈ Ft for every t ∈ ℝ+ is satisfied and if the family (Ft) is right-continuous then T is a stopping time (see [M1], §34-IV). To get some intuitive feeling for stopping times, here are a couple of examples. In a nuclear reactor, the motion of a particle may be described by a random walk; the first time a particle hits an absorbing barrier, e.g., the shield, is a stopping time. Involved in a betting game you might decide to limit the risks by adopting the following strategy: you will stop playing the first time a given gain or loss is achieved. The time at which this occurs is a stopping time. The two above stopping times are called hitting times (see later Example 1.2.6). Stopping times are a basic tool in the study of Markov processes and martingales. To each stopping time T we can associate in the following *The name "stopping time" ("optional time," "Markov time" are also used in the literature) comes from the theory of Markov processes. Generally speaking these were times at which decisions were taken or where the process was stopped. A better name, in our opinion, would be "causal time."

way a σ-algebra which can be interpreted as the σ-algebra of events prior to T (see [M1], Definition 35-IV). Definition 1.2.4: Let T be a stopping time of the family (Ft). We denote by FT the collection of events A ∈ F such that A ∩ {T ≤ t} ∈ Ft for every t. We call FT the σ-algebra of events prior to T. It is easily verified that these events do constitute a σ-algebra and that if the stopping time is equal to the constant t, the σ-algebra Ft is recovered. Theorem 1.2.5: Let S and T be two stopping times such that S ≤ T; then we have FS ⊂ FT. For the above and other properties of stopping times see [M1], Chapter IV. In this thesis all stopping times will be of the type presented in the following example ([M1], §44-IV). Example 1.2.6: Let (Ft) be a right-continuous family and let (Xt) be a right-continuous stochastic process adapted to the family (Ft). Let B be an open subset of ℝ and define:

    DB = inf {s: Xs ∈ B} if this set is non-empty, +∞ otherwise.

We have

    {DB < t} = ∪(r rational, r < t) {Xr ∈ B}

from the right-continuity of the paths. The event on the left thus belongs to Ft and this implies by Remark 1.2.3 (c) that DB is a stopping time. This stopping time is called the first passage time or hitting time of B. We now give a classification of stopping times which will be very useful in the rest of this work, in particular to classify point processes. Definition 1.2.7: (see [M1], Definition 42-VII; [D1]) Let T be a stopping time of the family (Ft). (a) T is said to be totally inaccessible (with respect to the family (Ft)) if T is not a.s. infinite and if for every increasing sequence (Sn) of stopping times majorized by T we have P{lim(n) Sn = T, Sn < T < ∞ for every n} = 0. (b) The stopping time T is said to be inaccessible (with respect to the family (Ft)) if there exists a totally inaccessible stopping time S such that P{T = S < ∞} > 0. (c) A stopping time T is said to be accessible (with respect to the family (Ft)) if it is not inaccessible. (d) A stopping time T is said to be predictable (with respect to the family (Ft)) if there exists an increasing sequence (Sn) of stopping times which converges a.s. to T and such that for every n one has a.s. Sn < T on the set {T > 0}.
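The hitting times of Example 1.2.6, and the earlier nuclear-reactor illustration, can be made concrete with a small simulation of our own (not part of the text): a symmetric random walk, the first time it reaches an absorbing barrier, and a check that this time is determined by the path up to that time alone, which is exactly the stopping-time property.

```python
import random

def hitting_time(path, barrier):
    """First passage time of a discrete-time sample path into the set
    B = {x: |x| >= barrier}; None stands in for +infinity if the path
    never reaches B within the observed horizon."""
    for t, x in enumerate(path):
        if abs(x) >= barrier:
            return t
    return None

rng = random.Random(2)
# A simple symmetric random walk, as in the particle-motion example.
walk, x = [], 0
for _ in range(200):
    x += rng.choice((-1, 1))
    walk.append(x)

T = hitting_time(walk, 5)
if T is not None:
    # {T <= t} is decided by the path up to time t: before T the walk
    # stayed strictly inside the barrier, and at T it reached it.
    assert all(abs(v) < 5 for v in walk[:T]) and abs(walk[T]) >= 5
    print(T, walk[T])
```

The assertion encodes the defining property: whether {T ≤ t} has occurred can be read off from (Xu, u ≤ t), so {T ≤ t} ∈ Ft.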

It should be strongly emphasized that all these definitions depend on the family (Ft) chosen. Clearly predictable (resp. totally inaccessible) stopping times are accessible (resp. inaccessible). But in certain circumstances accessible stopping times are predictable. Before elaborating on this result we need the following concepts ([M1], Definitions 39 and 40, VII). Definition 1.2.8: The family (Ft) is said to be free of times of discontinuity if for every increasing sequence (Sn) of stopping times

    F(lim(n) Sn) = ∨(n) F(Sn).

Definition 1.2.9: Let T be a stopping time of a family (Ft) and let A be an element of FT. By TA we denote the stopping time

    TA(ω) = T(ω) if ω ∈ A, +∞ otherwise.

The fact that TA is indeed a stopping time can be easily verified. Definition 1.2.10: Let T be a stopping time; T is said to be a time of discontinuity for the family (Ft) if there exists an event A ∈ FT and an increasing sequence (Sn) majorized by TA such that the event {lim(n) Sn = TA}

does not belong to the σ-algebra ∨(n) F(Sn). It can be verified that the two Definitions 1.2.8 and 1.2.10 are compatible (see §41-VII of [M1]). Theorem 1.2.11: (Theorem 45-VII of [M1]) Let T be an accessible stopping time of a family (Ft) which is not a time of discontinuity for the family (Ft). Then T is predictable. We now give some illustrations. In Section 2.4, where the above concepts are applied to counting processes, it is shown in particular that the time of nth occurrence of a Poisson process is a totally inaccessible stopping time with respect to the family of σ-algebras generated by the process itself. Note that any stopping time with respect to a family (Ft) is always predictable with respect to the family (Gt) where Gt = F for each t. A stopping time which is a constant is obviously predictable with respect to any family of σ-algebras. Define now, on a set of outcomes Ω = {ω1, ω2}, a family (Ft) by

    Ft = {∅, Ω}                 for 0 ≤ t < 1
    Ft = {∅, {ω1}, {ω2}, Ω}     for 1 ≤ t < ∞.

The stopping time

    T(ω) = 1 if ω = ω1, a if ω = ω2

for any a greater than one is a time of discontinuity for the family (Ft). To see that, define the sequence of stopping times (Sn = 1 - 1/n). Then Sn < T and

    {lim(n) Sn = T} = {ω1} ∉ ∨(n) F(Sn) = {∅, Ω}.

This stopping time T is accessible but not predictable. Finally we give a decomposition result for stopping times (Theorem 44-VII of [M1]). Theorem 1.2.12: Let T be a stopping time. There exists an (essentially unique) partition of the set {T < ∞} into two elements of FT, A and A', such that the stopping time TA is accessible and the stopping time TA' is totally inaccessible. 1.3 MEASURABLE PROCESSES Definition 1.3.1: ([M1], Definition 45-IV) Let (Ω,F,P) be a probability space, and let (Ft, t ∈ ℝ+) be an increasing family of σ-subalgebras of F. Let (Xt, t ∈ ℝ+) be a stochastic process. We say that (Xt) is progressively measurable with respect to the family (Ft) if, for each t ∈ ℝ+, the mapping (u,ω) → Xu(ω) from [0,t] × Ω into (ℝ, B(ℝ)) is measurable with respect to the σ-algebra B([0,t]) × Ft. The process (Xt) is said to be measurable (without reference to a family of σ-algebras) if the mapping (t,ω) → Xt(ω) is measurable over the product σ-algebra B(ℝ+) × F.
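Returning for a moment to the two-point time-of-discontinuity example of Section 1.2 above: because Ω there is finite, the σ-algebras can be enumerated outright and the claim checked mechanically. The following sketch is ours (names like `F_before_1` are illustrative); it represents each σ-algebra as a Python set of frozensets and verifies that {lim Sn = T} = {ω1} lies in Ft for t ≥ 1 but not in ∨(n) F(Sn) = {∅, Ω}.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, each as a frozenset."""
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

omega = frozenset({"w1", "w2"})
F_before_1 = {frozenset(), omega}   # F_t for 0 <= t < 1: trivial
F_after_1 = powerset(omega)         # F_t for 1 <= t: all subsets

# S_n = 1 - 1/n < 1 for every n, so each F_{S_n} is the trivial
# sigma-algebra, and the sigma-algebra they generate is again trivial.
join = F_before_1

event = frozenset({"w1"})           # the event {lim S_n = T}
assert event in F_after_1           # measurable once t >= 1 ...
assert event not in join            # ... but not in V_n F_{S_n}
print("T is a time of discontinuity:", event not in join)
```

The check mirrors Definition 1.2.10 exactly: the limit event is measurable at the limit time but escapes the σ-algebra generated by the approximating stopping times.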

For a measurable process (Xt) adapted to (Ft) there always exists a modification which is progressively measurable (see [M1], Theorem 46-IV). We will always deal with processes with right- (sometimes left-) continuous trajectories. The following theorem is then what we need ([M1], Theorem 47-IV).

Theorem 1.3.2: Let (Xt) be a right-continuous stochastic process adapted to a family (Ft). The process (Xt) is then progressively measurable with respect to the family (Ft). The same conclusion is true for a process with left-continuous paths.

The following notation will be used constantly ([M1], Definition 48-IV).

Definition 1.3.3: Let (Xt) be a measurable stochastic process defined on (Ω,F,P) and let H be a positive random variable defined on Ω. We denote by XH the random variable XH(ω)(ω). Usually H is a stopping time and is allowed to take the value +∞ when (Xt) is a process defined on R+ ∪ {∞}.

The following theorem is a basic tool when using stopping times, in particular when studying martingales ([M1], Theorem 49-IV).

Theorem 1.3.4: Let (Xt) be a progressively measurable process with respect to a family (Ft) and let T be a stopping time with respect to (Ft) (possibly infinite). The random variable XT is then FT-measurable.
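The random variable XT of Theorem 1.3.4 is easy to visualize in discrete time. The following sketch (Python, with illustrative names not taken from the thesis) simulates a simple random walk and evaluates X at the first-passage time T = inf{n : Xn ≥ b}; the event {T ≤ n} is decided by the path up to time n, which is what makes T a stopping time.

```python
import random

def sample_X_at_T(n_steps=200, barrier=5, seed=0):
    """Simulate a simple random walk (X_n) and evaluate the random
    variable X_T at the stopping time T = inf{n : X_n >= barrier}.
    The event {T <= n} depends only on X_0, ..., X_n."""
    rng = random.Random(seed)
    x = 0
    for n in range(1, n_steps + 1):
        x += rng.choice([-1, 1])
        if x >= barrier:          # first passage: decided by the path so far
            return n, x
    return float("inf"), None     # T may be infinite on a finite horizon

T, X_T = sample_X_at_T()
# On {T < infinity}, the +-1 walk is exactly at the barrier when stopped.
assert X_T is None or X_T == 5
```

Since the walk moves by unit steps, on {T < ∞} the stopped value XT equals the barrier exactly; with a finite horizon, T may also be infinite, mirroring the distinction between T and the bounded stopping times of the next section.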

We will often encounter the following situation. Let (Tt, t ∈ R+) be a system of stopping times of a family of σ-algebras (Ft, t ∈ R+) such that the mappings t → Tt(ω) are increasing and right-continuous. Let (Xt, t ∈ R+) be a stochastic process progressively measurable with respect to the family (Ft, t ∈ R+). The process (Yt = XTt) and the family of σ-algebras (Gt = FTt) are respectively called "the transform of (Xt) by the system (Tt)" and "the family of transformed σ-algebras." We have ([M1], Theorem 57-IV):

Theorem 1.3.5: The process (Yt) is progressively measurable with respect to the family (Gt).

1.4 CLASS (D) AND (DL) PROCESSES

The following concepts, which are generalizations of the notion of uniform integrability, will be needed later on when dealing with the Doob-Meyer decomposition of supermartingales.

Definition 1.4.1: ([M1], Definition 17-IV) Let (Xt, t ∈ R+) be a right-continuous stochastic process adapted to a family of σ-algebras (Ft, t ∈ R+). Define T as the collection of all finite stopping times of the family (Ft) (respectively, Ta as the collection of all stopping times bounded by a positive constant a). (Xt) is said to belong to the class (D) (respectively, to belong to the class (D) on the interval [0,a]) if the collection of random variables XT, T ∈ T (respectively, T ∈ Ta) is uniformly integrable.

(Xt) is said to belong to the class (DL) (or locally to the class (D)) if (Xt) belongs to the class (D) on every interval [0,a], (0 < a < ∞).

Remark 1.4.2(a): A constant time t is a particular case of a stopping time. Therefore if a process (Xt) belongs to the class (D), it is a fortiori uniformly integrable. The converse is not true (for a counterexample see [J1]).

(b) Every right-continuous and uniformly integrable martingale belongs to the class (D) ([M1], Theorem 19-VI).

(c) If (Xt) is a process such that |Xt| ≤ Yt a.s and if (Yt) is a process which belongs to the class (D), then it is easy to verify that the process (Xt) also belongs to the class (D).

(d) The notions of class (D) and (DL) arise in the context of the Doob-Meyer decomposition of a supermartingale into the difference of a martingale and an increasing process, but in the continuous parameter case only. While a supermartingale with discrete index set always admits a Doob-Meyer decomposition, such a decomposition exists, in the continuous parameter case, if and only if the supermartingale belongs to the class (DL) (see Section 1.7).

1.5 MARTINGALES

In this section every stochastic process is defined on a fixed probability space (Ω,F,P) and adapted to the same family (Ft, t ∈ R+). We suppose that the probability

space (Ω,F,P) is complete and that the σ-algebra F0 contains all the P-negligible sets.

Definition 1.5.1: ([M1], Definition 1-V) Let (Ft, t ∈ R+) be an increasing family of σ-subalgebras of F and (Xt, t ∈ R+) a real-valued process adapted to the family (Ft). The process (Xt) is said to be a martingale (respectively, a supermartingale, a submartingale) with respect to the family (Ft) if

(a) each random variable Xt is integrable, and

(b) for every pair s,t of R+ such that s ≤ t we have

E(Xt|Fs) = Xs a.s (respectively, ≤ Xs, ≥ Xs)

Remark 1.5.2(a): This definition is not the most general. First, the index set R+ can in fact be any arbitrary set ordered by a relation ≤. Secondly, in certain cases the assumption of integrability of Xt can be weakened (see [N1], Section 5 of Chapter IV).

(b) Here again the above definition is very much dependent on the family (Ft) chosen.

(c) If (Xt) is a supermartingale then the process (-Xt) is a submartingale, and conversely. Thus theorems need only be stated for supermartingales (or submartingales).

We will not give here the basic properties and theorems concerning supermartingales, e.g., the fundamental inequalities, the optional sampling theorem, the convergence theorem, etc. All these results and many others can be found, among other

sources, in [M1], Chapter VI. The simplest example of a martingale is the following ([M1], § 3-V).

Example 1.5.3: Let (Ft, t ∈ R+) be an increasing family of σ-subalgebras of F. For each integrable random variable X set

Xt = E(X|Ft)

The process (Xt) is then a martingale with respect to the family (Ft). By Theorem 19-V of [M1], this martingale is uniformly integrable.

Conversely, if a martingale (Xt) is uniformly integrable it follows from the supermartingale convergence theorem ([M1], Theorem 6-VI) that this martingale can be written in the above form. More precisely we have ([M1], Theorem 18-V):

Theorem 1.5.4: Let (Ft) be an increasing family of σ-subalgebras of F. A process (Xt) is a uniformly integrable martingale with respect to the family (Ft) if and only if it can be written in the form

Xt = E(X∞|Ft)

where X∞ is the limit a.s and in the mean of Xt as t goes to infinity.

Most theorems for supermartingales assume the right-continuity of these supermartingales. The following theorem gives a necessary and sufficient condition for the existence of a right-continuous modification

([M1], Theorem 4-VI).

Theorem 1.5.5: Suppose (Xt) is a supermartingale with respect to a right-continuous increasing family (Ft). The supermartingale (Xt) then admits a right-continuous modification if and only if the mean function E Xt is right-continuous.

Remark 1.5.6: The mean of a martingale being constant, it follows immediately that any martingale always admits a right-continuous modification. In accordance with the fact that we do not distinguish between modifications of the same process, we will adopt the following convention: when we speak of a martingale (Xt), we always mean its right-continuous modification. Consequently all martingales which appear in this thesis are right-continuous.

The following very useful result does not appear in our main reference [M1] but in [M3]. Therefore an original proof of this result will be provided, for easy reference, in Appendix A.1.

Lemma 1.5.7: Let (Fn, n ∈ N) be an increasing family of σ-subalgebras of F and F∞ be the σ-algebra generated by the union of the Fn. Let (Fn, n ∈ N) be a sequence of random variables bounded in absolute value by an integrable random variable G and converging a.s to a random variable F. Then E(Fn|Fn) converges a.s to E(F|F∞).
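Example 1.5.3 and Theorem 1.5.4 can be illustrated numerically. A minimal sketch, under assumptions not in the text: take U uniform on [0,1), Fn the σ-algebra generated by the dyadic intervals of length 2^-n, and X = f(U). Then Xn = E(X|Fn) is the average of f over the dyadic interval containing U, and Xn converges to f(U) = X∞. All names are illustrative.

```python
import random

def dyadic_martingale(u, f, n_levels=20):
    """X_n = E(f(U) | F_n) evaluated on the outcome U = u, where F_n is
    generated by dyadic intervals of length 2**-n.  On the interval
    containing u, the conditional expectation is the average of f there,
    estimated by a midpoint rule."""
    xs = []
    for n in range(n_levels + 1):
        k = int(u * 2**n)                      # index of the dyadic interval
        a, b = k / 2**n, (k + 1) / 2**n
        m = 64                                 # crude numerical average over [a, b)
        avg = sum(f(a + (b - a) * (j + 0.5) / m) for j in range(m)) / m
        xs.append(avg)
    return xs

f = lambda u: u * u
u = random.Random(1).random()
path = dyadic_martingale(u, f)
# X_0 = E[U^2] = 1/3, and X_n -> f(u) = X_infinity as the intervals shrink.
assert abs(path[0] - 1/3) < 1e-3
assert abs(path[-1] - f(u)) < 1e-3
```

The sequence (Xn) is the discrete-index martingale of Example 1.5.3; its a.s convergence to X∞ is exactly the content of Theorem 1.5.4 in this toy setting.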

1.6 POTENTIALS AND THE RIESZ DECOMPOSITION

The hypotheses of the preceding section will be used again in this one.

Definition 1.6.1: ([M1], § 9-VI) Let (Xt) be a right-continuous supermartingale. We say that (Xt) is a potential if the random variables Xt are a.s positive and if lim_{t→∞} E Xt = 0.

We have the following theorem ([M1], Theorem 10-VI).

Theorem 1.6.2: (Riesz Decomposition) Let (Xt) be a right-continuous supermartingale with respect to a right-continuous increasing family (Ft). The following two conditions are equivalent:

(a) There exists a submartingale (Vt) such that Vt ≤ Xt a.s for each t.

(b) There exists a martingale (Yt) and a potential (Zt) such that Xt = Yt + Zt a.s for each t ∈ R+.

These two processes are then unique up to modification.

Remark 1.6.3(a): The right-continuity property of the processes involved implies (see Remark 1.1.4)

Xt = Yt + Zt for every t ∈ R+, a.s

(b) This decomposition is easily obtained when the right-continuous supermartingale is uniformly integrable: by the supermartingale convergence theorem ([M1], Theorem 6-VI), Xt converges a.s and in the mean to an F∞-measurable

random variable X∞. Define the martingale (Yt = E(X∞|Ft)). Then it is easy to verify that (Zt = Xt - E(X∞|Ft)) is a potential. Furthermore we also have, by the convergence theorem again, lim_{t→∞} Zt = 0 a.s (see [M1], § 11-VI).

1.7 DOOB-MEYER DECOMPOSITION

INTRODUCTION

This decomposition of a supermartingale into the difference of a martingale and an increasing process, discovered by Doob in the discrete case and demonstrated by Meyer in the continuous case, will play a very important part in our study of point processes. We will therefore spend some time reviewing the basic concepts and results behind this decomposition. For a complete account of this theory see [M1] and [R3].

To fix some ideas let us first take a look at the discrete case. Let (Ω,F,P) be a probability space and (Fn, n ∈ N) be an increasing family of σ-subalgebras of F. Denote by (Xn, n ∈ N) a supermartingale relative to the family (Fn) and define the random variables Yn and An by induction in the following manner:

Y0 = X0,  A0 = 0
Y1 = Y0 + [X1 - E(X1|F0)],  A1 = X0 - E(X1|F0)
Yn = Yn-1 + [Xn - E(Xn|Fn-1)],  An = An-1 + [Xn-1 - E(Xn|Fn-1)]
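The recursion can be carried out numerically whenever E(Xn|Fn-1) is available in closed form. A hedged sketch, assuming a supermartingale with i.i.d. increments of known negative mean mu, so that E(Xn|Fn-1) = Xn-1 + mu; all names are illustrative.

```python
import random

def doob_decomposition(xs, mu):
    """Run the induction above for a supermartingale (X_n) whose
    increments are i.i.d. with known negative mean mu, so that
    E(X_n | F_{n-1}) = X_{n-1} + mu in closed form."""
    Y = [xs[0]]          # Y_0 = X_0
    A = [0.0]            # A_0 = 0
    for n in range(1, len(xs)):
        cond_exp = xs[n - 1] + mu                 # E(X_n | F_{n-1})
        Y.append(Y[-1] + (xs[n] - cond_exp))      # martingale part
        A.append(A[-1] + (xs[n - 1] - cond_exp))  # predictable increasing part
    return Y, A

rng = random.Random(42)
mu = -0.25
xs = [0.0]
for _ in range(50):
    xs.append(xs[-1] + rng.choice([-1, 1]) + mu)  # supermartingale path

Y, A = doob_decomposition(xs, mu)
# X_n = Y_n - A_n, and here A_n = -n*mu: increasing and F_{n-1}-measurable
# (in fact deterministic, since the increments are independent).
assert all(abs(xs[n] - (Y[n] - A[n])) < 1e-12 for n in range(len(xs)))
assert all(abs(A[n] - (-mu) * n) < 1e-12 for n in range(len(xs)))
```

That (An) comes out deterministic is special to independent increments; in general An is only Fn-1-measurable, which is the property the construction guarantees.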

The following properties are easily verified:

(a) Xn = Yn - An for every n.
(b) The process (Yn) is a martingale.
(c) The paths of the process (An) are increasing functions of n.
(d) A0 = 0 and An is Fn-1-measurable for every n, and integrable.

Any process (Bn) adapted to the family (Fn), having sample paths increasing as functions of n and such that B0 = 0, will be called an increasing process. The preceding construction shows that every discrete supermartingale (Xn) is equal to the difference of a martingale and an increasing process.

Consider now the uniqueness of such a decomposition. Starting from an increasing process (Bn) and a martingale (Zn), form the supermartingale (Xn = Zn - Bn), and construct the processes (Yn) and (An) as above. A simple calculation shows that if Bn is Fn-1-measurable then we have Y0 = Z0 and

Yn = Yn-1 + Zn - Zn-1

which implies that Yn = Zn and consequently An = Bn. Conversely, if An = Bn then Bn is by construction Fn-1-measurable. Hence An = Bn if and only if Bn is Fn-1-measurable. There thus exists only one decomposition of (Xn) by means of an increasing process satisfying property (d).

In the continuous case only supermartingales of class (DL) have such a decomposition, and its uniqueness depends

on a property of the increasing process (increasing processes having this property are called natural) which is analogous to, although much more complex than, the discrete case condition. We will now define precisely what is meant by a Doob-Meyer decomposition.

Every stochastic process in the remainder of this section is defined on a single complete probability space (Ω,F,P) and adapted to an increasing, right-continuous family (Ft, t ∈ R+). We suppose that the σ-algebra F0 contains all the P-negligible sets. Supermartingales and stopping times are always relative to the above family (Ft). For the following definitions see [M1], Definitions 3 and 5, VII.

Definition 1.7.1: Let (At, t ∈ R+) be a real-valued stochastic process, adapted to the family (Ft). We say that (At) is an increasing process if

(a) the sample paths of (At) are a.s zero for t = 0, increasing and right-continuous;
(b) the random variables At are integrable.

We say that the increasing process (At) is integrable if sup_t E At < ∞.

Definition 1.7.2: Let (Xt) be a right-continuous supermartingale. We say that (Xt) admits a Doob-Meyer decomposition if there exists a (right-continuous) martingale (Yt) and an increasing process (At) such that

Xt = Yt - At for every t ∈ R+
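A concrete instance of Definition 1.7.2, and the central one for this thesis: if (Nt) is a Poisson process of rate λ, then (Xt = -Nt) is a right-continuous supermartingale, and Xt = Yt - At with martingale part Yt = -(Nt - λt) and increasing process At = λt (deterministic, hence, as we will see, natural). A Monte Carlo sketch of the martingale property of the compensated process, with illustrative names:

```python
import random

def poisson_count(lam, horizon, rng):
    """Number of jumps of a Poisson process of rate lam on [0, horizon],
    simulated from i.i.d. exponential inter-arrival times."""
    n, t = 0, rng.expovariate(lam)
    while t <= horizon:
        n += 1
        t += rng.expovariate(lam)
    return n

# X_t = -N_t = Y_t - A_t with Y_t = -(N_t - lam*t) and A_t = lam*t:
# a Doob-Meyer decomposition in the sense of Definition 1.7.2.
rng = random.Random(7)
lam, T, n_paths = 2.0, 5.0, 20000
mean_M = sum(poisson_count(lam, T, rng) - lam * T
             for _ in range(n_paths)) / n_paths
# The compensated process has mean zero: E[N_T - lam*T] = 0.
assert abs(mean_M) < 0.1
```

The process λt here is the Integrated Conditional Rate of the Poisson process in the terminology of the Abstract; for counting processes in general the increasing process is random rather than deterministic.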

Suppose that (At) is an integrable increasing process and define the process

(Xt = E(A∞|Ft) - At)

It is easily verified that (Xt) is a potential of the class (D) and that the above expression is a Doob-Meyer decomposition of (Xt) (note that by our convention (E(A∞|Ft)) is a right-continuous martingale; see Remark 1.5.6). This motivates the following definition ([M1], Definition 6-VII):

Definition 1.7.3: Let (At) be an integrable increasing process. The process (E(A∞|Ft) - At) is called the potential generated by (At).

INTEGRATION WITH RESPECT TO AN INCREASING PROCESS

Let (At) be an increasing process and (Xt) be a measurable process. Since by Theorem 14, Chapter II of [M1] (or Proposition III.1.2 of [N1]) the trajectories of (Xt) are B(R+)-measurable, we can consider for each ω ∈ Ω the Lebesgue-Stieltjes integral on R+, if it exists:

∫0∞ Xt(ω) dAt(ω)

From Fubini's theorem this integral is an F-measurable function of ω. Now if (Xt) is progressively measurable with respect to the family (Ft), the process (Yt) defined by

Yt = ∫0t Xs dAs

(where the point t is included in the interval of integration*) is, if it exists, Ft-measurable for every t ∈ R+

*The notation ∫ab is used for the integral over (a,b].

and has right-continuous paths. It is hence progressively measurable (see Theorem 1.3.2). Then if T is a stopping time the random variable

YT = ∫0T Xs dAs

is FT-measurable (Theorem 1.3.4).

UNIQUENESS

As said in the introduction, the uniqueness of the Doob-Meyer decomposition depends on a property of the increasing process, which we now define ([M1], Definition 18-VII).

Definition 1.7.4: Let (At) be an increasing process. We say that (At) is a natural increasing process if

E ∫0t Ys- dAs = E ∫0t Ys dAs

for every t ∈ R+ and every positive, bounded, right-continuous martingale (Yt).

Remark 1.7.5(a): The martingale property of a process is very much dependent on the family (Ft) chosen and therefore so is the above definition. To be more precise we should speak of an increasing process as being natural with respect to a given family (Ft). We will see later on that the same process can be natural with respect to one family but not with respect to another one.

(b) If the process (At) is integrable then the condition is equivalent to ([M1], Theorem 19-VII)

E ∫0∞ Ys- dAs = E ∫0∞ Ys dAs

(c) Deterministic increasing processes are natural with respect to any family (Ft): from Fubini's theorem it follows that

E ∫0t Ys- dAs = ∫0t (E Ys-) dAs  and  E ∫0t Ys dAs = ∫0t (E Ys) dAs

Now for a martingale E Ys- = E Ys (see [M1], Theorem 4-VI) and the result follows.

(d) Continuous increasing processes are obviously natural.

We can now state the uniqueness theorem ([M1], Theorem 21-VII).

Theorem 1.7.6: (Uniqueness) Let (Xt) be a right-continuous supermartingale. There exists at most one natural increasing process (At) such that the process (Xt + At) is a martingale.

We reexamine now for an increasing process the property of being natural. The next theorem gives another characterization of this property. But first ([M1], Definition 48-VII and Theorem 49-VII):

Definition 1.7.7: Let (At) be an increasing process and T be a stopping time. We say that (At) charges T if

P{AT ≠ AT-} > 0

Theorem 1.7.8: Let (At) be an integrable increasing process. Then (At) is natural if and only if the following two properties are satisfied:

(a) For every sequence of stopping times (Sn) which increases to a stopping time S, the random variable AS is measurable with respect to the σ-algebra ∨n FSn.

(b) (At) charges no totally inaccessible stopping times.

Recall that in the discrete parameter case (see the introduction to this section) property (d) for an increasing process (An is Fn-1-measurable) is the condition under which the Doob-Meyer decomposition is unique. Condition (a) in the above theorem is clearly the analogue, in the continuous parameter case, of property (d). But condition (b) above has no equivalent in the discrete case. From Definition 1.7.4 the property of being natural clearly has something to do with the existence of martingales which would jump at the same times as the increasing process. Hence, in view of Theorems 46 and 47, Chapter VII, of [M1], condition (b) is not unexpected.

The next result tells us that stopped natural increasing processes are still natural:

Theorem 1.7.9: Let (At) be a natural increasing process and T be a stopping time. Then the increasing process (At∧T) is natural with respect to both families (Ft) and (Ft∧T).

This theorem appears in [M1] (Theorem 19-VII, (3)); but there, it is not clear with respect to which family, (Ft) or (Ft∧T), the stopped process (At∧T) is natural. This is why we provide a proof of this result in Appendix A.2. The uniqueness theorem and the above result immediately give us:

Lemma 1.7.10: Suppose (Xt) is a right-continuous supermartingale with a unique Doob-Meyer decomposition with respect to a family (Ft) given by

Xt = Yt - At

where (Yt) is a martingale and (At) a natural increasing process. Let T be a stopping time. Then the unique Doob-Meyer decomposition of the supermartingale (Xt∧T) with respect to the family (Ft∧T) is given by

Xt∧T = Yt∧T - At∧T

EXISTENCE

We have seen (Definition 1.7.3) how potentials of the class (D) can be generated by integrable increasing

processes. The next existence theorem states the converse result ([M1], Theorem 29-VII).

Theorem 1.7.11: Let (Xt) be a right-continuous potential of class (D). There then exists an integrable, natural, increasing process (At) which generates (Xt), and this process is unique.

Since the natural increasing process (At) that generates a potential (Xt) is uniquely determined by (Xt), the continuity property of the process (At) follows from a property of (Xt) ([M1], Definition 33-VII).

Definition 1.7.12: Let (Xt) be a right-continuous supermartingale of the class (DL). We say that the supermartingale (Xt) is regular if, for every increasing sequence (Tn) of stopping times which converges to a bounded stopping time T,

lim E XTn = E XT

For example, every right-continuous martingale is regular. A supermartingale which has with some positive probability a jump at a fixed time t cannot be regular.

Theorem 1.7.13: ([M1], Theorem 37-VII) Let (Xt) be a right-continuous potential of the class (D), and let (At) be the natural, integrable, increasing process which generates (Xt). The process (At) is continuous if and only if the potential (Xt) is regular.

We now obtain the existence theorem for supermartingales of the class (DL) from the above results for potentials of the class (D), by using the Riesz decomposition (see Theorem 1.6.2). A limiting argument is also involved here to get the extension from the class (D) to the class (DL) (see [M1], Theorem 31-VII).

Theorem 1.7.14(a): A right-continuous supermartingale (Xt) has a Doob-Meyer decomposition

Xt = Yt - At

where (Yt) denotes a right-continuous martingale and (At) an increasing process, if and only if (Xt) belongs to the class (DL). There then exists such a decomposition for which the process (At) is natural, and this decomposition is unique.

(b) The natural increasing process (At) is continuous if and only if the supermartingale (Xt) is regular.

The following simple remark, a direct consequence of the uniqueness theorem, is often used later on.

Remark 1.7.15: Let (Xt) be a right-continuous supermartingale of the class (D) and denote its Riesz decomposition (see Theorem 1.6.2) by

Xt = Pt + Yt    (1.7.1)

where (Pt) denotes a potential and (Yt) a right-continuous martingale. By Remark 1.6.3(b) the martingale (Yt)

is uniformly integrable (this implies by Remark 1.4.2(b) that it belongs to the class (D)) and is given by

Yt = E(X∞|Ft)    (1.7.2)

The potential (Pt), which is the difference (see (1.7.1)) of two processes of class (D), also belongs to this class, and by the above Theorem 1.7.11 there exists a natural integrable increasing process, say (At), which generates (Pt). That is (see Definition 1.7.3):

Pt = E(A∞|Ft) - At    (1.7.3)

Introducing the two relations (1.7.2) and (1.7.3) into (1.7.1) we get

Xt = E(A∞ + X∞|Ft) - At    (1.7.4)

The first term in the RHS of Eq. (1.7.4) is a right-continuous martingale and the second term, (At), is by definition a natural, increasing, integrable process. Relation (1.7.4) is therefore the unique Doob-Meyer decomposition of (Xt).

The above is only a summary of results concerning the Doob-Meyer decomposition. For other facts (e.g., the natural increasing process (At) can be obtained as the weak limit of a sequence of absolutely continuous natural increasing processes) we refer the reader to the original source, which is [M1], Chapter VII, or [R3].

1.8 SQUARE INTEGRABLE MARTINGALES

INTRODUCTION

Ito integrals are now well known. Doob also defined stochastic integrals, in particular with respect to processes of independent increments. The generalization of these concepts to stochastic integration with respect to local martingales was rendered possible by the Doob-Meyer decomposition. Doléans-Dade and Meyer on the one hand and Kunita and Watanabe on the other did the pioneering work in this area. But their results are not similar and this has created some confusion. By giving here the following summary of results (Sections 1.8 and 1.9) we hope to introduce as well as clarify some of the definitions and results concerning this relatively new and still developing subject. The basic references for this résumé are [D1] and [M5] (see also [K2]).

The basic assumptions of the preceding section (just above Definition 1.7.1) are used again in this section and the next one. If ft is a right-continuous function with left-hand limits we denote the jump of ft at time t by

Δft = ft - ft-

Square integrable martingales play an important part in the theory of stochastic integration and also later on in this thesis:

Definition 1.8.1(a): We say that a right-continuous martingale (Xt) is square integrable if we have

sup_t E Xt² < ∞

We denote by M²(P,Ft) (or simply M² when it does not create confusion) the space of all square integrable martingales (Xt), with respect to the measure P and family (Ft), such that X0 = 0. The subspace of M² consisting of the continuous martingales is denoted by M²c. We equip M² with the scalar product

((Xt),(Yt)) = E(X∞ Y∞)

for (Xt) and (Yt) belonging to M².

(b) M̄²(P,Ft) denotes the space of (P,Ft) martingales (Xt) such that X0 = 0 and E Xt² < ∞ for each t.

Remark 1.8.2(a): By Theorem 22-II of [M1], a square integrable martingale (Xt) is uniformly integrable and hence (Theorem 1.5.4) can be expressed as Xt = E(X∞|Ft) where E X∞² < ∞.

(b) If (Xt) ∈ M̄² then (Xt∧a) ∈ M² for any constant a. This implies that all the following results stated for martingales in M² can be extended to the case of martingales in M̄².

Theorem 1.8.3: ([D1], Theorem 1) M² is a Hilbert space and the subspace M²c is closed in M².

NATURAL INCREASING PROCESSES ASSOCIATED WITH SQUARE INTEGRABLE MARTINGALES AND STOCHASTIC INTEGRALS

The following result is a consequence of the Doob-Meyer decomposition ([M1], § 23-VIII; [D1], Theorem 2).

Theorem 1.8.4: Let (Xt) ∈ M². There exists a unique natural increasing process, denoted (<X>t), such that the process (Xt² - <X>t) is a martingale. We will say that (<X>t) is the increasing process associated with the martingale (Xt). For two stopping times S and T such that S ≤ T we have the basic relation:

E(XT² - XS²|FS) = E[(XT - XS)²|FS] = E(<X>T - <X>S|FS)

More generally, if (Xt) and (Yt) are two square integrable martingales we set (see [D1]):

Definition 1.8.5: <X,Y>t = ½(<X+Y>t - <X>t - <Y>t)

Remark 1.8.6: It is easy to see that

(a) <X>t = <X,X>t.
(b) The process (<X,Y>t) is the difference of two natural increasing processes.
(c) The process (XtYt - <X,Y>t) is a martingale.
(d) The process (<X,Y>t) is the unique process satisfying properties (b) and (c) above.
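For the compensated Poisson martingale Mt = Nt - λt (on a finite horizon, so the relevant moments are finite) the associated increasing process is <M>t = λt, which is continuous and deterministic, hence natural. The basic relation with S = 0 and T = t then reads E[Mt²] = E<M>t = λt. A Monte Carlo sketch of this identity, with illustrative names:

```python
import random

def n_jumps(lam, horizon, rng):
    """Number of Poisson(lam) jumps in [0, horizon]."""
    n, t = 0, rng.expovariate(lam)
    while t <= horizon:
        n += 1
        t += rng.expovariate(lam)
    return n

# For M_t = N_t - lam*t one has <M>_t = lam*t, so the basic relation
# with S = 0, T = t gives  E[M_t^2] = E[<M>_t] = lam*t.
rng = random.Random(11)
lam, t, n_paths = 1.5, 4.0, 20000
second_moment = sum((n_jumps(lam, t, rng) - lam * t) ** 2
                    for _ in range(n_paths)) / n_paths
assert abs(second_moment - lam * t) < 0.3   # lam*t = 6.0
```

This is of course just the statement that the variance of a Poisson count equals its mean, recovered here through the martingale machinery.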

We will now define stochastic integrals with respect to square integrable martingales. Recall that if ft is a right-continuous function of bounded variation on the real line then fb - fa is the integral of df over (a,b], whose indicator function is left-continuous. It is therefore natural to start with the integration of left-continuous simple processes.

Definition 1.8.7: (see [M5]) A process (Ht) defined as follows is called a simple process of the family (Ft). Take some finite subdivision 0 = t0 < t1 < ... < tn of the positive real line R+ and suppose that:

(a) H0 is F0-measurable and bounded;
(b) on (ti-1,ti], i = 1,...,n, Ht = Hi, where Hi is Fti-1-measurable and bounded;
(c) after tn, Ht = 0.

These simple processes give rise to the class of predictable processes ([M5], Definition 1):

Definition 1.8.8: The process (Ht) is predictable (with respect to the family (Ft)) if the function (t,ω) → Ht(ω) is measurable over the σ-algebra on R+ × Ω generated by all simple processes of the family (Ft).

Definition 1.8.9: ([M5], Definition 2) Let (Xt) ∈ M². We say that the process (Ht) belongs to the space L²(X) if (Ht) is predictable and

E ∫0∞ Hs² d<X>s < ∞

We equip the space L²(X) with the seminorm

||(Ht)|| = (E ∫0∞ Hs² d<X>s)^(1/2),  (Ht) ∈ L²(X)

Let (Xt) ∈ M² and (Ht) be a simple process. We define the stochastic integral by

∫0∞ Hs dXs = H1(Xt1 - Xt0) + ... + Hn(Xtn - Xtn-1)

and we denote this process by ((H.X)t). We observe:

(a) ((H.X)t) ∈ M²;
(b) E(∫0∞ Hs dXs)² = E ∫0∞ Hs² d<X>s.

This last property (b) defines a norm preserving operator from the space of simple processes (which is a dense subset of L²(X)) into the space M². By applying the procedure of functional completion we obtain the definition of stochastic integrals with respect to square integrable martingales ([D1], Theorem 3; [M5], Theorem 1):

Theorem 1.8.10(a): The mapping (Ht) → ((H.X)t) from simple processes to martingales can be uniquely extended as a norm preserving operator from L²(X) to M².

(b) This stochastic integral is uniquely characterized by the following property: Let (Ht) ∈ L²(X). For any (Yt) ∈ M² we have

<(H.X),Y>t = ∫0t Hs d<X,Y>s

(c) For almost all ω, we have Δ(H.X)t = Ht ΔXt.

DECOMPOSITION OF THE SPACE M²

Definition 1.8.11: ([M5], Definition 4) A stable subspace S of M² is a closed subspace of M² such that (Xt) ∈ S and (Ht) ∈ L²(X) imply ((H.X)t) ∈ S.

Remark 1.8.12: The stable subspace generated by (Xt) ∈ M² is given by {((H.X)t); (Ht) ∈ L²(X)}.

Definition 1.8.13: ([M1], Definition 26-VIII; [M5], Definition 5) Two martingales (Xt) and (Yt) belonging to M² are said to be orthogonal if <X,Y>t = 0.

Remark 1.8.14: The above definition is equivalent to saying that the process (XtYt) is a martingale. If <X,Y>t = 0 then by Remark 1.8.6(c) (XtYt) is a martingale. Conversely, if (XtYt) is a martingale then (<X,Y>t) is also a martingale; by Remark 1.8.6(b) and Lemma 1.9.4 we must have <X,Y>t = 0.

If S is a stable subspace of M², S⊥ denotes the subspace of all square integrable martingales which are orthogonal to S. Now, similarly to the projection theorem in Hilbert space theory, we have ([M5], Theorem 2):

Theorem 1.8.15: Let S be a stable subspace of M². Every (Xt) ∈ M² can be decomposed uniquely into

Xt = Yt + Zt

with (Yt) ∈ S and (Zt) ∈ S⊥.

As an application we have ([D1], Theorem 4; [M5]):

Theorem 1.8.16: Let (Xt) ∈ M². Then there exists a unique decomposition of (Xt) into a sum of two square integrable martingales (Xct) and (Xdt), where (Xc) ∈ M²c and (Xd) ∈ M²d.

Remark 1.8.17: The martingale (Xdt) is not simply the process having constant sample paths except for jumps which are the same as those of (Xt). Such a process would not necessarily be a martingale. Now (see the remark, on p. 90, following Proposition 3 of [D1]; [M5]) if (Xt) ∈ M² and has a.s sample paths of bounded variation on every finite interval, then (Xc) = 0, i.e., (Xt) ∈ M²d. In fact M²d is the closure in M² of such martingales of bounded variation. If (Xt) ∈ M², Meyer calls the process (Xdt) the compensated sum of the jumps of (Xt).

QUADRATIC VARIATION PROCESSES AND STOCHASTIC INTEGRALS

The above decomposition (Theorem 1.8.16) allows us to associate to any square integrable martingale another increasing process (this one not natural) ([D1], p. 87; [M5]).

Definition 1.8.18(a): Let (Xt) ∈ M². We call the quadratic variation process associated to (Xt) the following

increasing process:

[X]t = <Xc>t + Σ_{s≤t} (ΔXs)²

where (<Xc>t) is the natural increasing process associated with the martingale (Xct) (see Theorem 1.8.4).

(b) If (Xt) and (Yt) are two elements of M² we set:

[X,Y]t = ½([X+Y]t - [X]t - [Y]t)

We have ([D1], Theorem 5):

Theorem 1.8.19: The process (Xt² - [X]t) is a martingale.

Remark 1.8.20: Recall that (Xt² - <X>t) is a martingale. Hence the process ([X]t - <X>t) is also a martingale.

Let (Xt) ∈ M². The fact that the process (Xt² - <X>t) is a martingale allowed us to construct a norm preserving operator from the space L²(X) to M² and to define stochastic integrals. Similarly, we can define a stochastic integral and construct a norm preserving operator starting this time from the martingale property of the process (Xt² - [X]t). It turns out (Theorem 6 of [D1]) that the class of integrable stochastic processes and the stochastic integral associated with the process ([X]t) are the same as those associated with the process (<X>t). As before we also have ([D1], Theorem 6; [M5], Theorem 4):

Theorem 1.8.21: Let (Xt) and (Yt) belong to M² and (Ht) to L²(X). Then

(a) E ∫0∞ |Hs| |d[X,Y]s| < ∞.

(b) The stochastic integral ((H.X)t) is uniquely characterized by the property

[(H.X),Y]t = ∫0t Hs d[X,Y]s

for every (Yt) ∈ M².

The interest of the process ([X]t) is that it allows an extension of stochastic integrals to local martingales, while the process (<X>t) does not. This is the subject of the next section.

1.9 GENERALIZATIONS OF MARTINGALES

LOCAL MARTINGALES AND STOCHASTIC INTEGRALS

Definition 1.9.1(a): V+ is the class of all finite valued, right-continuous, adapted increasing processes (At) such that A0 = 0.

(b) V = V+ - V+. V is in fact the space of all right-continuous, adapted processes having sample paths of bounded variation on every finite interval, and which are zero at the time origin.

(c) A+ is the subspace of V+ consisting of integrable increasing processes, and A = A+ - A+.

Definition 1.9.2(a): ([M6], Definition 4) A right-continuous adapted process (Xt) is a local martingale if there exists a sequence of stopping times (Tn) increasing a.s to ∞ such that for every n the process (Xt∧Tn) on {Tn > 0} is a uniformly integrable martingale.

(b) If the stopped process (Xt∧Tn) is a square integrable martingale then we say that (Xt) is a square integrable local martingale.

(c) We denote by L(P,Ft) (or simply L) the space of all local martingales (Xt) such that X0 = 0.

(d) We say that a sequence of stopping times (Tn) reduces the local martingale (Xt) if the stopped process (Xt∧Tn) is a uniformly integrable martingale.

Remark 1.9.3(a): The restriction to uniformly integrable martingales in the above definition is not important: if (Tn) is a sequence of stopping times such that (Xt∧Tn) is a martingale, then the sequence of stopping times (Rn = n ∧ Tn) makes the process (Xt∧Rn) a uniformly integrable martingale.

(b) In the above definitions of spaces, a subscript c indicates the subclass of continuous processes (e.g., Lc denotes the space of continuous local martingales).

The following result will be most useful later on (compare with Remark 1.8.17):

Lemma 1.9.4: Let (Xt) ∈ L ∩ Vc. Then P{Xt = 0, t ∈ R+} = 1.

Remark 1.9.5: The proof of this result when (Xt) is a martingale is given in [F1], Lemma 3.2.1. The extension to local martingales is trivial. If furthermore the martingale (Xt) belongs to A, the above result follows from the uniqueness of the Doob-Meyer decomposition. In this case (Xt) can be expressed as a difference (At - Bt) where both (At) and (Bt) belong to A+. The process (Yt = Xt - At) is then a supermartingale of the class (D) and thus admits a unique Doob-Meyer decomposition. But (Xt - At) and (0 - Bt) are precisely both such a unique decomposition ((At) and (Bt) are both natural because they are continuous). So we must have (Xt = 0) and (At = Bt).

The extension of Theorem 1.8.16 to local martingales is given by (see Theorem 7 of [D1]; [M6], Theorem 1):

Theorem 1.9.6: Let (Xt) ∈ L. (Xt) can be written in a unique way as

Xt = Xct + Xdt

where (Xc) ∈ Lc and (Xd) is such that for every (Yt) ∈ Lc the process (XdtYt) ∈ L.

By definition of a square integrable local martingale (Xt) (Definition 1.9.2(b)) there exists a sequence of stopping times (Tn) increasing a.s to ∞ such that the stopped process (Xt∧Tn) is a square integrable martingale for each n. Now if (Xt) ∈ Lc, but is not necessarily square

integrable, we can construct such a sequence of stopping times as follows. Let

T_n = inf{t : |X_t| > n}  (= ∞ if the above set is empty).

Because (X_t) is continuous, the stopped process (X^n_t = X_{t∧T_n}) is bounded by n and thus square integrable. Furthermore, because martingales have sample paths which are a.s. bounded on every compact interval (see Theorem 3-VI of [M1]), the above sequence (T_n) increases to ∞. Hence in both of these cases the process (⟨X⟩_t) makes sense, and by the uniqueness of this process we can uniquely define a natural increasing process (⟨X⟩_t) such that (X_t² − ⟨X⟩_t) ∈ L.

Now if (X_t) ∈ L but is neither continuous nor square integrable, the above is no longer possible, because it is not always true that there exists a sequence of stopping times (T_n) which makes the stopped processes (X_{t∧T_n}) square integrable martingales. Hence the definition of the process (⟨X⟩_t) can be extended to continuous or square integrable local martingales only; but this, in turn, allows us to extend the definition of the process ([X]_t) (see Definition 1.8.18) to all local martingales (see [D1], p. 98, [M6]). Stochastic integrals with respect to local martingales can then be defined as in Theorem 1.8.21.
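The localizing sequence T_n = inf{t : |X_t| > n} used above is easy to visualize numerically. The sketch below is an illustration only, not a martingale simulation: a deterministic continuous path stands in for one continuous sample path, and we check that stopping at T_n caps the path at level n, which is exactly what continuity guarantees.

```python
import math

# Illustration of the localizing times T_n = inf{t : |X_t| > n}
# along one continuous path.  The deterministic path X_t = t*sin(t)
# is an arbitrary stand-in for a continuous sample path.
dt = 1e-4
ts = [k * dt for k in range(int(50 / dt))]
path = [t * math.sin(t) for t in ts]

def first_exit(path, n):
    """Grid index of T_n = inf{t : |X_t| > n} (len(path) if never)."""
    for k, x in enumerate(path):
        if abs(x) > n:
            return k
    return len(path)

for n in (1, 5, 20):
    k = first_exit(path, n)
    stopped = path[:k + 1]                      # path stopped at T_n
    sup_stopped = max(abs(x) for x in stopped)
    # continuity => the stopped path is bounded by n (up to grid error)
    print(n, round(ts[min(k, len(ts) - 1)], 3), round(sup_stopped, 3))
```

For a discontinuous path this capping argument fails at a jump across level n, which is precisely why ⟨X⟩ cannot be defined this way for general local martingales.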

Definition 1.9.7(a): Let (X_t) ∈ L. By ([X]_t) we denote the quadratic variation process:

[X]_t = ⟨X^c⟩_t + Σ_{s≤t} (ΔX_s)².

(b) If (X_t) and (Y_t) both belong to L, we set:

[X,Y]_t = ½([X+Y]_t − [X]_t − [Y]_t).

Remark 1.9.8(a): The fact that [X]_t is finite follows from [D1], Theorem 7 (see also [A1]).

(b) The process (X_t Y_t − [X,Y]_t) ∈ L (see [D1], p. 106).

We now give the results on stochastic integrals with respect to local martingales ([D1], Section 4, [M6]).

Definition 1.9.9: ([D1], p. 98) H(F_t) denotes the class of all locally bounded predictable (with respect to the family (F_t)) processes (H_t), locally bounded meaning that there exists a sequence of stopping times (T_n) increasing to ∞ and a sequence of positive numbers (M_n) such that

|H_{t∧T_n}| I_{{T_n>0}} ≤ M_n < ∞ a.s.

Remark 1.9.10(a): (see [D1], remark on p. 100) Let (H_t) be an adapted right-continuous process with finite left-hand limits. Then the left-limit process (H_{t−}) ∈ H.

(b) By Theorem 3-VI of [M1] every right-continuous supermartingale (X_t) has sample paths with finite left-hand limits. Hence the process (X_{t−}) ∈ H. This result extends

to local martingales (and to semimartingales, defined later on).

Theorem 1.9.11: ([D1], Proposition 5; [M6], Theorem 2) Given (X_t) ∈ L and (H_t) ∈ H there is one and only one process ((H·X)_t) such that

[(H·X),Y]_t − ∫_0^t H_s d[X,Y]_s = 0

for every (Y_t) ∈ L. The stochastic integral ((H·X)_t) belongs to L.

The following very important lemma makes the connection between stochastic integrals and Stieltjes integrals when they both exist ([D1], Proposition 3; [M6], Lemma 2):

Lemma 1.9.12: If (X_t) ∈ L ∩ V and (H_t) ∈ H, the integral of (H_t) with respect to (X_t) is the same in its stochastic and its Stieltjes definition.

It might be appropriate now to compare the definitions of Doléans-Dade and Meyer on the one hand with the approach of Kunita and Watanabe on the other. They are not the same. First of all, when Kunita and Watanabe speak of a local martingale (X_t) they mean a square integrable local martingale. This allows them to deal only with the natural increasing process (⟨X⟩_t). The class of integrable processes is also different. Instead of the class of locally bounded predictable processes they use the class K̄, which is the

closure of the class K with respect to the seminorm

(E ∫_0^t K_s² d⟨X⟩_s)^{1/2}

for (K_t) ∈ K, where

K = {(K_t) : bounded right-continuous adapted processes having left-hand limits}.

Kunita and Watanabe do not need the notion of predictability because they assume that the family (F_t) is free of times of discontinuity. In [D1], Doléans-Dade and Meyer do not assume this last condition. But Meyer does in [M7], and that allows him to integrate a larger class of processes.

SEMIMARTINGALES AND THE CHANGE OF VARIABLES FORMULA

Definition 1.9.13(a): A semimartingale is a process (X_t) which can be written as a sum

X_t = X_0 + L_t + A_t

where X_0 is F_0-measurable, (L_t) ∈ L and (A_t) ∈ V.

(b) A process (X_t) with values in R^n is a semimartingale if all its components are real semimartingales.

Examples of semimartingales are sub- and supermartingales, and processes of independent increments.

The above decomposition is not unique. The only intrinsic elements are (1) X_0 and (2) (L^c_t) (see [D1],

Section 5). The natural increasing process (⟨L^c⟩_t) is hence uniquely determined by (X_t). We set:

X^c_t = L^c_t and ⟨X^c⟩_t = ⟨L^c⟩_t.

The stochastic integral ((H·X)_t), where (H_t) ∈ H and (X_t) is a semimartingale with a decomposition X_t = X_0 + L_t + A_t, is defined by:

(H·X)_t = H_0 X_0 + (H·L)_t + (H·A)_t

where ((H·L)_t) is a stochastic integral and ((H·A)_t) is the usual Stieltjes integral ∫_0^t H_s dA_s. The next theorem, a generalization of the Ito differentiation formula, was first obtained for locally square integrable martingales by Kunita and Watanabe (see [K1], Theorems 2.2 and 5.1) and finally for semimartingales by Doléans-Dade and Meyer (see [D1], Theorem 8; for the most general version of this theorem, for martingales taking values in a Hilbert space, see [K2], Theorem 3):

Theorem 1.9.14: (Change of variables formula) Let (X_t) be a vector (R^n) valued semimartingale (we denote by X^i_t the i-th component of X_t) and F a twice continuously differentiable function of R^n into C. Denote by D_i the derivation operator with respect to the i-th coordinate. We then have for each finite t:

F(X_t) = F(X_0) + Σ_{i=1}^n ∫_0^t D_iF(X_{s−}) dX^i_s
  + ½ Σ_{i,j=1}^n ∫_0^t D_iD_jF(X_{s−}) d⟨X^{i,c},X^{j,c}⟩_s
  + Σ_{s≤t} [ΔF(X_s) − Σ_{i=1}^n D_iF(X_{s−}) ΔX^i_s]

where ΔF(X_s) = F(X_s) − F(X_{s−}), ΔX^i_s = X^i_s − X^i_{s−}, and the sum Σ_{s≤t}[...] in the RHS converges a.s. for each finite t. In particular the process (F(X_t)) is a semimartingale.

This formula gives rise to many applications (see [D1], p. 106; [M6], Theorem 4 on integration by parts, etc.). We give only one of them, a very important one, which is the subject of a paper of Doléans-Dade [D2].

EXPONENTIAL FORMULA

Theorem 1.9.15: Let (X_t) be a semimartingale such that X_0 = 0.

(a) There exists one and only one semimartingale (Z_t) satisfying the stochastic integral equation:

Z_t = 1 + ∫_0^t Z_{s−} dX_s.

(b) The solution (Z_t) is given by:

Z_t = exp(X_t − ½⟨X^c⟩_t) ∏_{s≤t} (1 + ΔX_s) e^{−ΔX_s}

where the product in the RHS converges a.s. for each t.

This theorem itself generates numerous applications, in particular to the multiplicative decomposition of martingales (see [D2]).
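As a concrete check of Theorem 1.9.15, one can take the compensated Poisson path X_t = N_t − λt, for which ⟨X^c⟩_t = 0 and every jump has size one. Along a fixed path the equation Z_t = 1 + ∫_0^t Z_{s−} dX_s can be solved exactly (between jumps dZ = −λZ ds, and at a jump Z is multiplied by 1 + ΔX = 2) and compared with the product formula of part (b). The jump times below are arbitrary illustrative values.

```python
import math

# Pathwise check of the exponential formula (Theorem 1.9.15) for
# X_t = N_t - lam*t: pure-jump semimartingale, <X^c> = 0, jumps of size 1.
lam = 2.0
jump_times = [0.3, 0.7, 1.1, 2.4]     # arbitrary illustrative jump times

def N(t):
    return sum(1 for s in jump_times if s <= t)

def X(t):
    return N(t) - lam * t

def Z_from_equation(t):
    """Solve Z_t = 1 + int_0^t Z_{s-} dX_s exactly along this path:
    multiply by exp(-lam*dt) between jumps, by 2 at each jump."""
    z, prev = 1.0, 0.0
    for s in (u for u in jump_times if u <= t):
        z *= math.exp(-lam * (s - prev)) * 2.0
        prev = s
    return z * math.exp(-lam * (t - prev))

def Z_from_formula(t):
    """Closed form of part (b): exp(X_t) * prod (1 + DX) e^{-DX}
    = exp(X_t) * (2/e)^{N_t}."""
    return math.exp(X(t)) * (2.0 / math.e) ** N(t)

for t in (0.5, 1.0, 2.0, 3.0):
    assert abs(Z_from_equation(t) - Z_from_formula(t)) < 1e-12
print("exponential formula verified pathwise")
```

Both expressions reduce analytically to e^{−λt} 2^{N_t}, which is what the assertions confirm numerically.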

CHAPTER 2

COUNTING PROCESSES AND INTEGRATED CONDITIONAL RATES

2.0 INTRODUCTION

In this chapter we use the Doob-Meyer decomposition to uniquely decompose any counting process (N_t), for which the random variable N_t is a.s. finite for each t, into the sum of a square integrable local martingale and a natural increasing process. This last process is then called the Integrated Conditional Rate, as explained in the INTRODUCTION. After defining counting processes and establishing some notation in Section 2.1, we define and study the notion of integrated conditional rate in Section 2.3 (Section 2.2 is concerned with a preliminary result). In Section 2.4 three classes of counting processes are defined: regular, accessible and predictable counting processes, the latter constituting a subclass of accessible counting processes. We show that any counting process can be uniquely decomposed into the sum of two counting processes which are respectively regular and accessible. Regular counting processes have, loosely speaking, totally unexpected times of jump; Poisson processes are of this type. On the contrary, the times of jump of an accessible counting process can be predicted with some chance of success. A counting process which jumps with some positive probability at given fixed times is an example of this kind of process. Properties of the integrated conditional rates of counting processes belonging to these three

classes are derived and examples are presented. In Section 2.5 we give sufficient conditions for the existence of a conditional rate. Counting processes of independent increments play an important part in solving the detection problem. These processes are precisely those which have a deterministic integrated conditional rate, and this is the topic of Section 2.6. Finally in Section 2.7 we obtain, using the change of variables formula originally due to Ito [I2] and extended by Doléans-Dade and Meyer [D1], some results related to probability generating functions.

2.1 BASIC DEFINITIONS AND ASSUMPTIONS

The notation introduced in the previous chapter is used consistently in this one. As before, the state space and the index set of all stochastic processes (see Definition 1.1.2) are respectively given by the real line R and its positive part R_+. By a continuous (right-continuous, etc.) process we mean a process with continuous (right-continuous, etc.) sample paths. We do not distinguish between modifications of the same process (see Section 1.1); this allows us in particular to consider only right-continuous martingales (cf. Remark 1.5.6). If (X_t) and (Y_t) are two right- (or left-) continuous processes which are modifications of each other (i.e., X_t = Y_t a.s. for each t), then we have X_t = Y_t for every t a.s. (see Remark 1.1.4). Recall that the notion of martingale is relative to a probability

measure P and an increasing family of σ-algebras (F_t), while stopping times depend only on the family (F_t) (see Sections 1.2 and 1.5). We emphasize this by speaking of a (P,F_t) martingale (or simply an (F_t) martingale when only one probability measure is involved) and of an (F_t) stopping time. Every stochastic process in this chapter is defined on a single probability space (Ω,F,P).

Definition 2.1.1: A counting process (N_t) (hereafter abbreviated CP) is a stochastic process whose sample paths are right-continuous step functions, zero at the time origin, with positive jumps of size one.

As seen in the INTRODUCTION, CP's are naturally associated with point processes. If (N_t) is a CP associated with a point process, then observing (N_t) up to time t tells us of the points occurring up to and including t. Note that this would not be the case had we chosen the sample paths of (N_t) to be left-continuous (Rubin in [R2] makes this unnatural left-continuity assumption). With every CP (N_t) we associate an increasing family (F_t) of σ-subalgebras of F to which (N_t) is adapted (see Definition 1.2.1). Loosely speaking, the σ-algebra F_t represents the information to which we have access at time t (see Section 1.2). In particular we will denote by N_t = σ(N_u, 0 ≤ u ≤ t) the minimal σ-algebra generated by the CP (N_t) up to and at time t. Numerous results we will be using depend on the right-continuity of the family (F_t) (e.g., the optional sampling theorem, the

existence of right-continuous modifications for supermartingales, etc.). If the family (F_t) to which the CP (N_t) is adapted does not have this property, then we will consider instead, following Meyer [M8], the family (F_{t+}) (see Section 1.2). This family (F_{t+}) is by construction right-continuous; the σ-algebra F_t is contained in F_{t+}, so that the CP (N_t) is adapted to this family. Thus we will always assume in the remainder of this thesis that the family (F_t) is in fact right-continuous. We also suppose that the probability space (Ω,F,P) is complete and that the σ-algebra F contains all the P-negligible sets.

The points in time at which a CP (N_t) jumps are basic to this study:

Definition 2.1.2: The stopping time

J_n = inf{t : N_t ≥ n}  (= ∞ if the above set is empty)

is called the time of the n-th jump of the CP (N_t).

The fact that J_n is a stopping time with respect to any family (F_t) to which the CP (N_t) is adapted is easily verified: the set {J_n ≤ t} = {N_t ≥ n} belongs to F_t for every t (see also Example 1.2.6). If (N_t) is a CP bounded by m, then for n > m, J_n is equal to infinity and is not, properly speaking, a time of jump of (N_t). But the above definition has the practical advantage that, when the random variable N_t is a.s. finite for each t, the sequence (J_n)

increases to infinity. For example, a given property may be shown to hold on the interval [0,J_n] for each n (on this interval the CP (N_t) has the nice behavior of being bounded by n). Then the fact that the sequence (J_n) increases to infinity shows that this property holds for all t. Recall also that the definition of local martingales involves a sequence of stopping times increasing to infinity.

2.2 A PRELIMINARY RESULT

The following result is basic to the establishment of the Likelihood Ratio Representation Theorem (Theorem 3.3.1) given in the next chapter. We state it here because it is also used in this chapter, although not in its full generality. This lemma is basically a generalization to supermartingales of a result on the energy of potentials. The known result is the following:

Lemma 2.2.1: Let (P_t) be a potential of class (D) and denote by (A_t) the unique natural integrable increasing process which generates (P_t) (see Definition 1.7.3 and Theorem 1.7.11). Then we have the following chain of inequalities:

E A_∞² ≤ 4 E(sup_t P_t)² ≤ 16 E A_∞².

For a proof of this lemma see Chapter VII, Section 6 of [M1]. If (X_t) is a supermartingale we denote its positive and negative parts respectively by (X_t⁺ = X_t ∨ 0) and

(X_t⁻ = −(X_t ∧ 0)). If (X_t) is of class (D) (or even just uniformly integrable), then by the martingale convergence theorem ([M1], Theorem 6-VI) there exists an integrable random variable X_∞ such that X_∞ = lim_{t→∞} X_t a.s. and in the mean. Define X_∞⁺ = X_∞ ∨ 0 and X_∞⁻ = −(X_∞ ∧ 0). Then, because the two functions (· ∨ 0) and (· ∧ 0) are continuous, we also have by Theorem 4.6 of [R1]

lim_{t→∞} X_t⁺ = lim_{t→∞}(X_t ∨ 0) = (lim_{t→∞} X_t) ∨ 0 = X_∞ ∨ 0 = X_∞⁺.

Similarly lim_{t→∞} X_t⁻ = X_∞⁻.

Now the result:

Lemma 2.2.2: Let (X_t) be a supermartingale of the class (D) with respect to a family (F_t). Denote its unique Doob-Meyer decomposition by

X_t = Y_t − A_t    (2.2.1)

where (Y_t) is a uniformly integrable martingale and (A_t) an integrable natural increasing process. Then:

(a) E A_∞² ≤ 8[E(sup_t X_t⁺)² + E(X_∞⁻)²]

(b) E(sup_t X_t⁺)² ≤ 8[E A_∞² + E(X_∞⁺)²]

(c) The three following statements are equivalent:

(1) E(sup_t X_t⁺)² < ∞ and E(X_∞⁻)² < ∞

(2) E(sup_t |X_t|)² < ∞ and E X_∞² < ∞

(3) sup_t E Y_t² < ∞, i.e., (Y_t) is a square integrable martingale, and E A_∞² < ∞.

Proof: (a) (X_t) being of the class (D), it has the unique Riesz decomposition (see Remark 1.6.3(b))

X_t = P_t + E(X_∞|F_t)    (2.2.2)

where (P_t) is a potential of the class (D) and X_∞ = lim_{t→∞} X_t a.s. and in the mean. Denote by (B_t) the unique natural integrable increasing process which generates (P_t) (see Theorem 1.7.11). By Remark 1.7.15, the relation

X_t = E(B_∞ + X_∞|F_t) − B_t    (2.2.3)

is also a unique Doob-Meyer decomposition of (X_t). Hence we have (see (2.2.1))

A_t = B_t    (2.2.4)

Y_t = E(A_∞ + X_∞|F_t)    (2.2.5)

Now by Theorem 23-VII of [M1]

E A_∞² = E ∫_0^∞ (P_t + P_{t−}) dA_t    (2.2.6)

Using (2.2.2) we get (see also [M1], Theorem 4-VI)

E A_∞² = E ∫_0^∞ (X_t + X_{t−}) dA_t − E ∫_0^∞ [E(X_∞|F_t) + E(X_∞|F_{t−})] dA_t

Hence

E A_∞² ≤ E[sup_t (X_t⁺ + X_{t−}⁺) A_∞] + E ∫_0^∞ [E(X_∞⁻|F_t) + E(X_∞⁻|F_{t−})] dA_t    (2.2.7)

Now sup_t X_{t−}⁺ ≤ sup_t X_t⁺; the process (A_t) is natural so that (see [M1], Theorem 20-VII) the last term in the RHS of (2.2.7) is equal to 2E ∫_0^∞ E(X_∞⁻|F_t) dA_t, and by Theorem 16-VII of [M1]:

E ∫_0^∞ E(X_∞⁻|F_t) dA_t = E(X_∞⁻ A_∞).

So from (2.2.7) we have

E A_∞² ≤ 2E[(sup_t X_t⁺ + X_∞⁻) A_∞]

and by the Schwarz inequality

(E A_∞²)² ≤ 4 E(sup_t X_t⁺ + X_∞⁻)² E A_∞².

Then we finally obtain

E A_∞² ≤ 4 E(sup_t X_t⁺ + X_∞⁻)² ≤ 8[E(sup_t X_t⁺)² + E(X_∞⁻)²].

(b) By (2.2.2)

X_t = P_t + E(X_∞|F_t) ≤ P_t + E(X_∞⁺|F_t)

So

E(sup_t X_t⁺)² ≤ 2{E(sup_t P_t)² + E[sup_t E(X_∞⁺|F_t)]²}    (2.2.8)

By Lemma 2.2.1

E(sup_t P_t)² ≤ 4 E A_∞²    (2.2.9)

Now (E(X_∞⁺|F_t)) is a positive martingale. Hence by Remark 2-VI of [M1]:

E[sup_t E(X_∞⁺|F_t)]² ≤ 4 sup_t E[E(X_∞⁺|F_t)]²

Furthermore by the Jensen inequality [E(X_∞⁺|F_t)]² ≤ E[(X_∞⁺)²|F_t], so that

E[sup_t E(X_∞⁺|F_t)]² ≤ 4 sup_t E{E[(X_∞⁺)²|F_t]} = 4 E(X_∞⁺)²    (2.2.10)

Using the two above inequalities (2.2.9) and (2.2.10) in (2.2.8) we get the desired

E(sup_t X_t⁺)² ≤ 8[E A_∞² + E(X_∞⁺)²].

(c) First we show that (1) ⇔ (2). For a given sample path of (X_t), if there exists a time t such that X_t > 0, then sup_t |X_t| ≥ sup_t X_t = sup_t X_t⁺; if not, then X_t ≤ 0 for all t, sup_t X_t⁺ = 0 and sup_t |X_t| = −inf_t X_t. Hence we have the relations

sup_t |X_t| = (sup_t X_t⁺) ∨ (−inf_t X_t)    (2.2.11)

(sup_t |X_t|)² ≤ (sup_t X_t⁺)² + (inf_t X_t)²    (2.2.12)

(1) ⇒ (2): Because (X_t) is of class (D) we have X_t ≥ E(X_∞|F_t), hence X_t⁻ ≤ E(X_∞⁻|F_t), and by the maximal inequality applied to the positive martingale (E(X_∞⁻|F_t)) (Remark 2-VI of [M1] again), E(−inf_t X_t)² = E(sup_t X_t⁻)² ≤ 4 E(X_∞⁻)². From (2.2.12)

E(sup_t |X_t|)² ≤ E(sup_t X_t⁺)² + 4 E(X_∞⁻)²

and the RHS of this relation is finite by assumption. Clearly |X_∞| ≤ sup_t |X_t|, hence E X_∞² is also finite.

(2) ⇒ (1): By (2.2.11), sup_t X_t⁺ ≤ sup_t |X_t|, and obviously X_∞⁻ ≤ |X_∞|, so both expectations in (1) are finite by assumption.

Now we show that (2) ⇔ (3).

(2) ⇒ (3): We have (see Eq. (2.2.5))

Y_t = E(A_∞ + X_∞|F_t)

By (a) and the implication (2) ⇒ (1),

E A_∞² < ∞    (2.2.13)

By the Jensen inequality

Y_t² = [E(A_∞ + X_∞|F_t)]² ≤ E[(A_∞ + X_∞)²|F_t]

Thus

sup_t E Y_t² ≤ E(A_∞ + X_∞)² ≤ 2 E A_∞² + 2 E X_∞²

and the RHS of this relation is finite by (2.2.13) and by assumption.

(3) ⇒ (2): If (Y_t) is a square integrable martingale then in particular E Y_∞² < ∞, so that (see (2.2.1)) E X_∞² ≤ 2(E Y_∞² + E A_∞²) < ∞, and by (b) E(sup_t X_t⁺)² < ∞; together with E(sup_t X_t⁻)² ≤ 4 E(X_∞⁻)² ≤ 4 E X_∞² this gives E(sup_t |X_t|)² < ∞. □

2.3 INTEGRATED CONDITIONAL RATE

DOOB-MEYER DECOMPOSITION FOR COUNTING PROCESSES

As a direct application to CP's of the Doob-Meyer decomposition of supermartingales into the sum of a martingale and an increasing process we have (see Section 1.7; [I1]):

Theorem 2.3.1: (Doob-Meyer Decomposition for CP's) Let (N_t) be a CP adapted to an increasing family (F_t).

(a) If for each t ∈ R_+, N_t is a.s. finite, then there exists a unique natural increasing process (A_t) such that the process (M_t = N_t − A_t) is a square integrable (P,F_t) local martingale. The unique decomposition (N_t = M_t + A_t) is called the Doob-Meyer decomposition of the CP (N_t) with respect to the family (F_t).

(b) If furthermore E N_t is finite for each t, then the process (M_t = N_t − A_t) is a (P,F_t) martingale.

Proof: (a) Let J_n be the time of the n-th jump of the CP (N_t) and define (N^n_t = N_{t∧J_n}) and (F^n_t = F_{t∧J_n}). By assumption N_t is a.s. finite for each t. Hence the sequence of stopping times (J_n) increases a.s. to infinity. Also by construction the stopped process (N^n_t) is bounded by n. For t ≥ s we obviously have

E(−N^n_t|F_s) ≤ −N^n_s.

Thus (−N^n_t) is a bounded (F_t) supermartingale and by the Doob-Meyer decomposition we can obtain (see Theorem 1.7.14 and Lemma 2.2.2) the unique decomposition:

N^n_t = M^n_t + A^n_t    (2.3.1)

where (M^n_t) is a square integrable (F_t) martingale and (A^n_t) a natural integrable increasing process. Now for n ≤ m the unique Doob-Meyer decomposition of (N^n_t) with respect to (F_t) is also given by (see Lemma 1.7.10)

N^n_t = M^m_{t∧J_n} + A^m_{t∧J_n}    (2.3.2)

Therefore, comparing (2.3.1) and (2.3.2), we get

M^m_{t∧J_n} = M^n_t,  A^m_{t∧J_n} = A^n_t.

Hence we can uniquely define for all t an increasing natural process (A_t) and a square integrable local martingale (M_t) by

A_t = A^n_t for t ≤ J_n

M_t = M^n_t for t ≤ J_n

and we clearly have

N_t = M_t + A_t for every t.

This proves part (a).

(b) If E N_t is finite for each t, then the process (−N_t) is a right-continuous negative supermartingale. By Theorem 19-VI of [M1], this supermartingale belongs to the class (DL). Then result (b) follows directly from the Doob-Meyer decomposition (Theorem 1.7.14). □

INTEGRATED CONDITIONAL RATE: DEFINITION

For every CP (N_t) with N_t a.s. finite for each t and adapted to a family (F_t), the uniqueness of the Doob-Meyer decomposition for this CP (N_t) allows us to propose:

Definition 2.3.2: We will call Integrated Conditional Rate (hereafter abbreviated ICR) of (N_t) with respect to the family (F_t) the unique natural increasing process which appears in the Doob-Meyer decomposition of (N_t) with respect to the family (F_t).

The terminology "Integrated Conditional Rate" is motivated by the following (see also the INTRODUCTION): when the ICR (A_t) of a CP (N_t) with respect to a family (F_t) is absolutely continuous (sufficient conditions for that are

given in Theorem 2.5.1) it can be expressed as

A_t = ∫_0^t λ_s ds    (2.3.3)

Furthermore

λ_t = lim_{h↓0} E[h^{-1}(N_{t+h} − N_t)|F_t]    (2.3.4)

so that the process (λ_t) is called the Conditional Rate with respect to the family (F_t). Expression (2.3.3) is then a justification for the terminology introduced in Definition 2.3.2, even though a conditional rate does not generally exist. The existence of CP's which admit a bounded conditional rate with respect to the family of σ-algebras generated by the CP itself is shown in Section 3.1. Also, if (N_t) is a Poisson process (see [P1], Chapter 4), then the notion of conditional rate with respect to the family of σ-algebras generated by the process itself reduces to the usual notion of rate.

If the random variable N_t is not a.s. finite for each t, then the sequence (J_n) of times of jump of (N_t) does not converge a.s. to infinity. Define J = lim_n J_n. By Theorem 42-IV of [M1], J is a stopping time. For t ≥ J, N_t = ∞, and the best we can do in this case is to consider what is happening on the stochastic interval [0,J) only. If now a local martingale (X_t) is redefined as being a process such that there exists a sequence of stopping times (R_n) increasing to J a.s. (instead of ∞) which makes the stopped process

(X_{t∧R_n}) a uniformly integrable martingale, then as above we can associate uniquely to the CP (N_t) an ICR on the stochastic interval [0,J). From now on, when speaking of a CP (N_t), we always assume that the random variable N_t is a.s. finite for each t, since this is clearly the weakest condition under which we can define an ICR on the entire positive real line. Note that this assumption is very weak: it is violated only if the times of jump of the CP (N_t) converge with some positive probability to a finite time or, in other words, if the point process associated with the CP (N_t) contains with some positive probability a point of accumulation, an unlikely situation in practice.

Hence if (A_t) is the ICR of a CP (N_t) with respect to a family (F_t), the process (N_t − A_t), which we will systematically denote by (M_t), is in the general case a square integrable (F_t) local martingale, and an (F_t) martingale when the mean E N_t is finite for each t. We will see later on (Corollary 2.4.12) that this Doob-Meyer decomposition (N_t = M_t + A_t) is intuitively a decomposition into the part of the CP (N_t) which is not predictable or expected (this is (M_t)) and the part which can be perfectly predicted or contains no "surprises" (this is the ICR (A_t)). We refer to this as the separating property of the Doob-Meyer decomposition for CP's.
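A minimal Monte Carlo sketch of this decomposition, for the simplest case in which the ICR is deterministic: an inhomogeneous Poisson process with a freely chosen illustrative rate λ(t), simulated by thinning. Here A_t = ∫_0^t λ(s) ds, and the empirical mean of M_t = N_t − A_t should be near zero.

```python
import math, random

# Monte Carlo sketch of N_t = M_t + A_t for a CP whose conditional rate
# lam(t) is deterministic (inhomogeneous Poisson, simulated by thinning).
# The rate below is an arbitrary illustrative choice bounded by LAM_MAX.
rng = random.Random(0)

def lam(t):
    return 2.0 + math.cos(t)          # conditional rate, bounded by 3

LAM_MAX, T = 3.0, 2.0
A_T = 2.0 * T + math.sin(T)           # ICR: int_0^T lam(s) ds

def sample_N_T():
    """One draw of N_T: thin a rate-LAM_MAX homogeneous Poisson stream."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(LAM_MAX)
        if t > T:
            return n
        if rng.random() < lam(t) / LAM_MAX:
            n += 1

n_paths = 20000
mean_N = sum(sample_N_T() for _ in range(n_paths)) / n_paths
mean_M = mean_N - A_T                 # empirical mean of M_T = N_T - A_T
print(round(mean_N, 3), round(A_T, 3), round(mean_M, 3))
```

The empirical mean of N_T matches A_T to within Monte Carlo error, illustrating both the zero-mean martingale part and Proposition 2.3.7 below (E N_t = E A_t).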

EXAMPLES AND FIRST PROPERTIES

Let (N_t) be a CP and denote by (N_t) the family of σ-algebras generated by (N_t). Let J_n be the time of the n-th jump. Clearly for each n the stopped process (N_{t∧J_n}) is a submartingale with respect to any family (F_t) such that F_t ⊃ N_t. Hence we can define an ICR with respect to any such family. By definition an ICR is a natural process. This last property depends on the family (F_t) chosen (see Remark 1.7.5(a)). So we expect the ICR of the CP (N_t) to depend on the family (F_t) considered. That this is actually the case is demonstrated in Example 2.3.5. For emphasis we therefore speak of an "(F_t) ICR." Example 2.3.5 is constructed with the help of the next two propositions, which also constitute our first examples of ICR's.

Let (N_t) be a CP and (A_t) its (F_t) ICR. The first example is an extreme case in the sense that the family (F_t) considered is given for each t by F_t = N_∞; hence at each time t, if we think of the available information as being given by the family (F_t = N_∞), everything is known about the process (N_t). In other words, the CP (N_t) contains no surprises with respect to the family (F_t = N_∞). Thus, in the light of the separating property of the Doob-Meyer decomposition, the following result was to be expected:

Proposition 2.3.3: The (F_t) ICR (A_t) of a CP (N_t), where F_t = N_∞ for each t, is given by A_t = N_t.

Proof: Let (T_n) be a sequence of stopping times reducing the local martingale (M_t = N_t − A_t), i.e., for each n the process (M^n_t = M_{t∧T_n}) is a uniformly integrable (F^n_t = F_{t∧T_n}) martingale, which can be expressed as E(M^n_∞|F^n_t) by Theorem 1.5.4. Now, since F_t = N_∞ for every t, we have for each t and n

F^n_t = F_{t∧T_n} = N_∞ = F^n_∞.

Hence we can write (recall M_0 = N_0 − A_0 = 0):

M^n_t = E(M^n_∞|F^n_t) = M^n_∞ = M^n_0 = M_0 = 0

which clearly implies M_t = 0 for each t, and hence the result.

Another proof consists in showing directly that the increasing process (N_t) is natural with respect to the family (F_t = N_∞). For every sequence of stopping times (S_n) increasing to a stopping time S, the random variable N_S is clearly (∨_n F_{S_n} = N_∞) measurable. Also every (F_t = N_∞) stopping time R is predictable (the sequence of stopping times ((R − 1/n) ∨ 0) increases to R). Therefore totally inaccessible (F_t = N_∞) stopping times simply do not exist. Hence the process (N_t) charges no totally inaccessible stopping times. The two conditions (a) and (b) of Theorem 1.7.8 are satisfied, and this shows that (N_t) is a natural increasing process with respect to the family (F_t = N_∞). Hence (N_t = 0 + N_t) is the unique Doob-Meyer decomposition of (N_t), i.e., (M_t = 0) is a uniformly integrable (F_t = N_∞) martingale and (A_t = N_t). □

The last part of the proof shows that the times of jump of the CP (N_t) are predictable. We will show later on (Corollary 2.4.11) that a CP (N_t) has predictable times of jump with respect to a family (F_t) if and only if its (F_t) ICR is given by (N_t) itself.

The next example of an ICR concerns processes of independent increments:

Proposition 2.3.4: Let (N_t) be a CP of independent increments with a finite mean m_t for each t. Then the (N_t) ICR (A_t) is given by A_t = m_t.

Proof: For t ≥ s we have

E(N_t − m_t|N_s) = E(N_t − N_s|N_s) + N_s − m_t = (m_t − m_s) + N_s − m_t = N_s − m_s

i.e., the process (N_t − m_t) is an (N_t) martingale. Furthermore, the increasing process (m_t) is natural because it is deterministic (see Remark 1.7.5(c)). Now (N_t) has the Doob-Meyer decomposition

N_t = (N_t − m_t) + m_t

and uniqueness requires (m_t) to be the (N_t) ICR. □

We will reexamine CP's of independent increments later on (Section 2.6) and prove in particular a converse

result to the above proposition: namely, that if a CP (N_t) has a deterministic (N_t) ICR, then it is a process of independent increments.

Example 2.3.5: The two above results show that for a CP (N_t) of independent increments with finite mean m_t, the (N_t) ICR is given by m_t and the (F_t = N_∞) ICR by N_t. This example illustrates clearly the dependence of ICR's on the family of conditioning σ-algebras.

Given a CP (N_t) and its ICR's with respect to two distinct families (F_t) and (G_t) such that F_t ⊃ G_t ⊃ N_t, it is natural to ask how these two ICR's are related. This is what we examine now. Assume that the CP (N_t) has a finite mean. We will see that even in this case there is no simple useful answer to this problem. Denote respectively by (A^F_t) and (A^G_t) the ICR's of (N_t) with respect to the families (F_t) and (G_t). We know that the processes

(M^F_t = N_t − A^F_t)    (2.3.5)

and

(M^G_t = N_t − A^G_t)    (2.3.6)

are respectively (F_t) and (G_t) martingales. But it is easy to show (see Appendix A.3) that the process

(X_t = N_t − C_t)    (2.3.7)

where (C_t = E(A^F_t|G_t))

is a (G_t) martingale. The process (C_t) is not necessarily increasing and, moreover, may not be natural with respect to the family (G_t). This last point is shown in the following example: let (N_t) be a CP of independent increments with finite mean m_t. If we choose G_t = N_t and F_t = N_∞, then we have seen (Propositions 2.3.3 and 2.3.4) that

A^G_t = m_t and A^F_t = N_t.

But

C_t = E(A^F_t|G_t) = E(N_t|N_t) = N_t ≠ A^G_t = m_t

so that by the uniqueness Theorem 1.7.6 (C_t) cannot be a natural process. The above shows that the relation A^G_t = C_t = E(A^F_t|G_t), which seems very plausible at first glance, does not hold in general. What is true is that the process (C_t) is a (G_t) submartingale: for t ≥ s,

E(C_t|G_s) = E[E(A^F_t|G_t)|G_s] = E(A^F_t|G_s) ≥ E(A^F_s|G_s) = C_s.

Suppose (C_t) is in fact a right-continuous version of E(A^F_t|G_t) (the mean E C_t = E A^F_t is right-continuous, so that such a right-continuous version exists by Theorem 1.5.5). By Theorem 19-VI of [M1] this positive submartingale belongs to the class (DL), and we denote its unique Doob-Meyer decomposition by

C_t = Y_t + B_t    (2.3.8)

where (Y_t) is a (G_t) martingale and (B_t) a natural (with respect to (G_t)) increasing process. Introducing (2.3.8) in (2.3.7) we get

N_t = (X_t + Y_t) + B_t

which is, like (2.3.6), the unique ((B_t) is natural) Doob-Meyer decomposition of (N_t) with respect to (G_t). Hence the relation between (A^G_t) and (A^F_t) is

A^G_t = B_t = E(A^F_t|G_t) − Y_t.

It is also clear that if (A^F_t) is in fact adapted to the family (G_t), then

A^F_t = A^G_t.

In conclusion, there is no simple way to relate the two ICR's (A^F_t) and (A^G_t) in the general case. But when conditional rates with respect to the two families (F_t) and (G_t) exist, these two conditional rates are simply related (see Proposition 2.5.2).

We finish this section with two simple propositions. The first one shows the intuitive result that a.s. no jump occurs on an interval on which the ICR is a.s. constant as a function of time.

Proposition 2.3.6: Suppose (N_t) is a CP adapted to a family (F_t) whose ICR with respect to this family is a.s. constant as a function of time on the stochastic interval [T,S] (T and S are stopping times, finite

or not, such that T ≤ S a.s.). Then (N_t) is a.s. constant as a function of time for t ∈ [T,S].

Proof: Let (R_n) be a sequence of stopping times reducing the local martingale (M_t = N_t − A_t), where (A_t) is the ICR of (N_t). The stopped process (M^n_t = M_{t∧R_n}) is a uniformly integrable martingale for each n. Define similarly (N^n_t = N_{t∧R_n}) and (A^n_t = A_{t∧R_n}). The process (A^n_t) is clearly also constant as a function of time for t ∈ [T,S]. The process (M^n_t) having zero mean increments, we can write:

E(N^n_S − N^n_T) = E(M^n_S − M^n_T) + E(A^n_S − A^n_T) = 0 + 0 = 0.

But

N^n_S − N^n_T ≥ 0 a.s.

Thus

N^n_S = N^n_T a.s.

Hence

N_S = lim_n N^n_S = lim_n N^n_T = N_T a.s. □

Proposition 2.3.7: Let (A_t) be the (F_t) ICR of a CP (N_t). Then E N_t < ∞ if and only if E A_t < ∞, and then E N_t = E A_t.

Proof: (⇒) If E N_t < ∞, then by Theorem 2.3.1(b) the process (M_t = N_t − A_t) is a zero mean (F_t) martingale, so that E A_t = E N_t < ∞.

(⇐) Let J_n be the time of the n-th jump of (N_t). Then (see the proof of Theorem 2.3.1) the process (M_{t∧J_n} = N_{t∧J_n} − A_{t∧J_n})

is a zero mean martingale. Hence E N_{t∧J_n} = E A_{t∧J_n}. Furthermore, A_{t∧J_n} increases to A_t as n goes to ∞ (similarly for N_{t∧J_n}), so that by the monotone convergence theorem

E N_t = lim_n E N_{t∧J_n} = lim_n E A_{t∧J_n} = E A_t < ∞. □

2.4 REGULAR AND ACCESSIBLE COUNTING PROCESSES

DEFINITION AND DECOMPOSITION

Let (N_t) be a CP adapted to a family (F_t). Denote by J_n the time of the n-th jump. It is natural to classify CP's in terms of the properties of their stopping times J_n.

Definition 2.4.1: A CP (N_t) is called respectively regular, accessible or predictable with respect to the family (F_t) in accordance with the total inaccessibility, accessibility or predictability of its times of jump J_n with respect to this same family (see Definition 1.2.7).

While a CP can be none of these, the next theorem will show that any CP (N_t) can be decomposed uniquely into the sum of a regular CP and an accessible CP. Here again these definitions depend on the particular family (F_t) chosen. We will see later on (below Theorem 2.4.7) that a CP can be regular with respect to one family and predictable with respect to another.

The term regular was previously used (Definition 1.7.12) to characterize a supermartingale (or submartingale) (X_t) such that for any sequence of stopping times

(S_n) increasing to a bounded stopping time S we have

lim_n E X_{S_n} = E X_S.

We show by the next proposition that our terminology is consistent: regular CP's in the sense of Definition 2.4.1 are also regular in the above sense (Definition 1.7.12), and conversely. On the contrary, Rubin [R2] uses the term "regular CP" in a different sense: it denotes (if anything) a CP with a random rate which must possess numerous technical properties.

Proposition 2.4.2: Let (N_t) be a CP. Then the three following statements are equivalent:

(a) The CP (N_t) is regular in the sense of Definition 2.4.1.

(b) For any stopping time S such that E N_S < ∞, the process (N_{t∧S}) is a regular submartingale in the sense of Definition 1.7.12.

(c) lim_n E N_{R_n} = E N_R for any sequence of stopping times (R_n) increasing a.s. to R and such that E N_R < ∞.

Proof: Let S be a stopping time such that E N_S < ∞ and (T_n) any sequence of stopping times increasing to T a.s. If the relation

lim_n N_{T_n∧S} = N_{T∧S} a.s.    (2.4.1)

is verified, then by the monotone convergence theorem we have

    lim_n EN_{T_n∧S} = EN_{T∧S}    (2.4.2)

Conversely if relation (2.4.2) holds we have E(N_{T∧S} − lim_n N_{T_n∧S}) = 0 by the monotone convergence theorem. As the random variable N_{T∧S} − lim_n N_{T_n∧S} is nonnegative, relation (2.4.1) must be verified. Hence conditions (2.4.1) and (2.4.2) are equivalent.

We show now that (a) is equivalent to (b). If (a) is verified then the times of jump of the submartingale (N_{t∧S}) are totally inaccessible (the time of the n-th jump of (N_{t∧S}) is equal to J_n on the set {J_n ≤ S} and to ∞ otherwise). Therefore relation (2.4.1) is verified and, being equivalent to (2.4.2), (b) follows (see Definition 1.7.12). Conversely if (b) is true, relation (2.4.2) is satisfied. Then (2.4.1) holds, which implies that the times of jump of (N_{t∧S}) are totally inaccessible (otherwise we reach a contradiction). By taking S = J_n, the time of the n-th jump of (N_t), we get that J_n is a totally inaccessible stopping time. This is true for each n, so that (a) follows.

We show now that (b) is equivalent to (c). If (b) is true and (R_n) is any sequence of stopping times increasing a.s. to R such that EN_R < ∞, then (N_{t∧R}) is a regular submartingale. In particular (see Definition 1.7.12) lim_n EN_{R_n} = EN_R, so that (c) follows. Conversely if (c) is true and (T_n) is any sequence of stopping times increasing a.s. to T, then lim_n EN_{T_n∧R} = EN_{T∧R}, which shows that (N_{t∧R}) is a regular submartingale. □
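Property (c) can be illustrated numerically. The following Monte Carlo sketch (a homogeneous Poisson process with an assumed rate λ = 1.5, not an example worked in the text) checks it for the deterministic stopping times R_n = R − 1/n increasing to R = 2: the expectations EN_{R_n} increase to EN_R, with no mass of the mean sitting at any fixed time.

```python
import random

# Monte Carlo check of Proposition 2.4.2(c) for a homogeneous Poisson
# process (assumed rate lam = 1.5): for deterministic stopping times
# R_n = R - 1/n increasing to R, the means E N_{R_n} increase to E N_R.
random.seed(0)
lam, R, trials = 1.5, 2.0, 100_000

def count_upto(times, t):
    return sum(1 for s in times if s <= t)

means = {}
for _ in range(trials):
    jumps, s = [], 0.0
    while s < R:
        s += random.expovariate(lam)   # i.i.d. exponential inter-jump gaps
        if s < R:
            jumps.append(s)
    for n in (2, 10, 100):
        key = R - 1 / n
        means[key] = means.get(key, 0) + count_upto(jumps, key)
    means[R] = means.get(R, 0) + len(jumps)

for t in sorted(means):
    print(round(t, 2), round(means[t] / trials, 2))   # empirical mean ≈ lam * t
```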

Now the announced decomposition result:

Theorem 2.4.3: Let (N_t) be a CP adapted to a family (F_t). Then there exist two CP's, (N_t^r) and (N_t^a), which are respectively regular and accessible with respect to the above family and such that

    N_t = N_t^r + N_t^a  for every t.

This decomposition is unique.

Remark 2.4.4: The (F_t) ICR of (N_t) is given by

    A_t = A_t^r + A_t^a

where (A_t^r) and (A_t^a) are respectively the (F_t) ICR's of (N_t^r) and (N_t^a).

Proof: As usual, denote by J_n the time of the n-th jump. By J_n^A we mean the stopping time (Definition 1.2.9)

    J_n^A(ω) = J_n(ω) if ω ∈ A,  = ∞ otherwise,

for A ∈ F_{J_n}. By Theorem 44-VII of [M1] there exists for each n an essentially unique partition of the set {J_n < ∞} into two sets A_n and R_n of F_{J_n} such that the stopping times J_n^{A_n} and J_n^{R_n} are respectively accessible and totally inaccessible. The two CP's

    N_t^a = Σ_n I{t ≥ J_n^{A_n}},  N_t^r = Σ_n I{t ≥ J_n^{R_n}}

clearly satisfy the conditions of the theorem. The uniqueness of this decomposition follows from the essential uniqueness of the partition of each set {J_n < ∞}. □

Example 2.4.5: Take Ω = [0,1] and let J_1 be a random variable uniformly distributed on Ω. Define the random variables J_{n+1} = J_1 + n for n ≥ 1. Let (N_t) be the CP having J_n as time of the n-th jump, i.e.,

    N_t = Σ_n I{t ≥ J_n}.

For each m, n ≥ 1, it is trivial to check that the random variable T_m^n = J_{n+1} − 1/m is a stopping time with respect to any family (F_t) such that F_t ⊃ N_t. But for each n ≥ 1 the sequence (T_m^n) converges to J_{n+1} as m goes to ∞. Hence the times of jump J_n for n ≥ 2 are predictable. The time of jump J_1, being uniformly distributed on Ω, is totally inaccessible by Corollary A.4.2. Thus the decomposition N_t = N_t^r + N_t^a is given by

    N_t^r = I{t ≥ J_1}  and  N_t^a = Σ_{n≥2} I{t ≥ J_n}.

In this very simple example the CP (N_t^a) is in fact predictable. This is not always the case. If we assume in the above example that jumps may be skipped independently of each other with a positive probability, then (N_t^r) and (N_t^a) are still given as above but the CP (N_t^a) is no longer predictable (see Example 2.4.13).
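The decomposition of Example 2.4.5 can be simulated pathwise. The sketch below (an illustrative Python rendering, truncated to the first five jumps) draws J_1 uniformly, builds the jump times J_{n+1} = J_1 + n, and checks N_t = N_t^r + N_t^a on a grid of times: the regular part carries the single totally inaccessible jump at J_1, the accessible part the later (predictable) jumps.

```python
import random

# Simulation sketch of Example 2.4.5: J_1 uniform on [0,1], J_{n+1} = J_1 + n.
# N^r has the single totally inaccessible jump at J_1; N^a collects the
# jumps J_2, J_3, ... (each announced by the stopping times J_{n+1} - 1/m).
random.seed(0)
J1 = random.random()
jumps = [J1 + n for n in range(5)]          # J_1, ..., J_5 (truncated path)

def N(t):   return sum(1 for j in jumps if t >= j)
def N_r(t): return 1 if t >= jumps[0] else 0
def N_a(t): return sum(1 for j in jumps[1:] if t >= j)

for t in [0.0, 0.5, 1.5, 3.5]:
    assert N(t) == N_r(t) + N_a(t)          # pathwise decomposition N = N^r + N^a
print(N(3.5), N_r(3.5), N_a(3.5))
```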

For clarity we outline now some of the results we are going to investigate. First regular, then accessible CP's are studied in detail. In particular we will see that a CP is regular with respect to a family (F_t) if and only if its (F_t) ICR is continuous (Theorem 2.4.7); when the family (F_t) is free of times of discontinuity (Definition 1.2.8), accessible CP's are predictable (Proposition 2.4.9). Predictable CP's are uniquely characterized by the fact that their ICR is given by the CP itself (Corollary 2.4.11); in other words, predictable CP's are natural processes. Combining these facts with the above decomposition for CP's (Theorem 2.4.3) gives, when the family (F_t) is free of times of discontinuity, the separating property of the unique Doob-Meyer decomposition for CP's. Namely, if (N_t = M_t + A_t) is the unique Doob-Meyer decomposition of (N_t), then the local martingale (M_t) contains only jumps of size one which take place at totally inaccessible stopping times, while the (F_t) ICR (A_t) also has jumps of size one but at predictable stopping times (Corollary 2.4.12). In other words (M_t) represents the part of (N_t) which is unexpected and the ICR (A_t) the one which can be perfectly predicted. The case where the family (F_t) does contain times of discontinuity is more complex. Most of these results are obtained by studying the different terms in the equation ΔN_T = ΔM_T + ΔA_T in relation to the appropriate property of the stopping time T (Theorem 2.4.10).

REGULAR COUNTING PROCESSES

Let (N_t) be a regular CP with respect to a family (F_t). By definition the times of jump J_n of (N_t) are totally inaccessible. This has the immediate consequence that the probability that a jump occurs at any fixed time t is zero. For if not, there exist a constant a and an m such that P{J_m = a} > 0. The sequence of stopping times (R_n = J_m ∧ (a − 1/n)) is such that

    P{lim_n R_n = J_m < ∞, R_n < J_m for all n} ≥ P{J_m = a} > 0,

which shows that the time of jump J_m is not totally inaccessible, a contradiction. Also if T is a (F_t) stopping time we cannot make, with a positive probability, a prediction of any time of jump after T, the prediction being based on the information available up to and at time T. More precisely we have:

Proposition 2.4.6: Let (N_t) be a regular CP with respect to a family (F_t) and T a (F_t) stopping time. Assume W is a strictly positive F_T measurable random variable. Then for each n

    P{T + W = J_n} = 0,

where J_n is the time of the n-th jump of (N_t).

Proof: By contradiction. Assume that for n = n_0 there exists W = W_0, a strictly positive F_T measurable random variable, with

    P{T + W_0 = J_{n_0}} = p > 0.

By Theorem 38-IV of [M1] the random variable T_e = T + (1 − 1/e)W_0 is a (F_t) stopping time for each e. The sequence (T_e) is increasing and

    P{lim_e T_e = J_{n_0}} = p > 0,

i.e., the time of the n_0-th jump is not totally inaccessible, a contradiction. □

The next theorem is a direct consequence of Proposition 2.4.2 and a result on the Doob-Meyer decomposition of regular supermartingales (Theorem 1.7.14).

Theorem 2.4.7: Let (N_t) be a CP adapted to a family (F_t). Then the (F_t) ICR (A_t) of (N_t) is continuous if and only if the CP (N_t) is regular with respect to this family.

Proof: Let J_n be the n-th time of jump and define (N_t^n = N_{t∧J_n}), (A_t^n = A_{t∧J_n}), (F_t^n = F_{t∧J_n}). Note that by the uniqueness of the Doob-Meyer decomposition (A_t^n) is the (F_t^n) ICR of (N_t^n) (see Lemma 1.7.10). By Theorem 1.7.14(b) the process (A_t^n) is continuous for each n if and only if the CP (N_t) is regular. The result follows then by taking the limit as the sequence (J_n) increases to ∞. One uses here the fact that on any interval [0,t_0], A_t^n = A_t for sufficiently large n (depending on ω). □
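As a numerical illustration (a homogeneous Poisson process with an assumed rate λ = 2, not an example worked in the text): its (N_t) ICR is the continuous deterministic function A_t = λt, in agreement with Theorem 2.4.7, and the martingale M_t = N_t − λt has mean zero and variance λt, the value of the ICR. This anticipates the identity A_t = ⟨M⟩_t of the next theorem.

```python
import math, random

# Monte Carlo sketch for a homogeneous Poisson process with assumed rate
# lam = 2 on [0, t]: the ICR is A_t = lam*t (continuous, so the CP is
# regular), and M_t = N_t - lam*t has mean 0 and variance <M>_t = A_t.
random.seed(0)
lam, t, trials = 2.0, 3.0, 200_000
limit = math.exp(-lam * t)

def poisson_count():
    # Knuth's method: multiply uniforms until the product drops below e^{-lam*t}
    n, prod = 0, random.random()
    while prod > limit:
        n += 1
        prod *= random.random()
    return n

samples = [poisson_count() - lam * t for _ in range(trials)]   # M_t = N_t - A_t
mean = sum(samples) / trials
var = sum(v * v for v in samples) / trials
print(round(mean, 2), round(var, 1))   # mean ≈ 0, variance ≈ lam*t = 6
```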

Examples of regular CP's with respect to the family (N_t) are, by Proposition 2.3.4 and the above Theorem, all CP's of independent increments with continuous mean, in particular Poisson processes. Note that these processes of independent increments with continuous mean are not regular but predictable if we take the family (F_t = N_∞) (see Proposition 2.3.3).

For a regular CP (N_t) with ICR (A_t) we have just proved that all the jumps are contained in the local martingale (M_t = N_t − A_t). But these jumps completely determine the CP (N_t). This suggests that there is a direct relation between (M_t) and the ICR (A_t). This point is made clear in the following theorem. Recall (see Section 1.9, below Theorem 1.9.6; [K1]) that if (X_t) is a square integrable local martingale then (⟨X⟩_t) denotes the unique natural increasing process which makes the process (X_t² − ⟨X⟩_t) a local martingale. Also, if (N_t = M_t + A_t) is the unique Doob-Meyer decomposition of (N_t), then by Theorem 2.3.1 (M_t) is a square integrable local martingale.

Theorem 2.4.8: Let (N_t) be a regular CP with respect to a family (F_t). Denote by (A_t) its (F_t) ICR and by (M_t) the square integrable local martingale (N_t − A_t). We have
(a) A_t = ⟨M⟩_t;
(b) If EN_t is finite then so is EM_t².

Property (b) will be used later on to prove a result on martingale representation, a result essential in solving

the detection problem for CP's.

Proof: (a) We have

    N_t = M_t + A_t.    (2.4.3)

Let M_t = M_t^c + M_t^d, where (M_t^c) ∈ L^c, (M_t^d) ∈ L^d is the unique decomposition of Theorem 1.9.6. Now relation (2.4.3) shows that (M_t) ∈ L ∩ V. Consequently (this is an easy extension of Remark 1.8.17) (M_t^c ≡ 0), (⟨M^c⟩_t ≡ 0), and the quadratic variation process (Definition 1.9.7) is given by

    [M]_t = Σ_{s≤t} (ΔM_s)².    (2.4.4)

But (N_t) is a regular CP and by Theorem 2.4.7 its ICR (A_t) is continuous, so that ΔM_s = ΔN_s. Now ΔN_s is either 0 or 1; hence (ΔM_s)² = (ΔN_s)² = ΔN_s, which implies by (2.4.4)

    [M]_t = N_t.    (2.4.5)

The two processes (M_t² − ⟨M⟩_t) and (M_t² − [M]_t) are local martingales (see Section 1.9, below Theorem 1.9.6 and Remark 1.9.8(b)). Thus so is their difference (X_t = [M]_t − ⟨M⟩_t), and by (2.4.5) we get

    N_t = X_t + ⟨M⟩_t    (2.4.6)

where (X_t) ∈ L. The increasing process (⟨M⟩_t) is natural by definition, so that relation (2.4.6), like (2.4.3), is the unique Doob-Meyer decomposition of the CP (N_t). Then we must have A_t = ⟨M⟩_t and X_t = M_t.

(b) We have seen above that the process (M_t² − [M]_t) ∈ L, or by (2.4.5) (M_t² − N_t) ∈ L. Let (T_n) be a sequence

of stopping times reducing this local martingale, i.e., the process (M²_{t∧T_n} − N_{t∧T_n}) is a uniformly integrable martingale. In particular

    E(M²_{t∧T_n} − N_{t∧T_n}) = E(M_0² − N_0) = 0.

Hence

    EM²_{t∧T_n} = EN_{t∧T_n}.    (2.4.7)

Since M_{t∧T_n} converges to M_t, Fatou's lemma implies

    EM_t² ≤ lim inf_n EM²_{t∧T_n}

and by (2.4.7) and the monotone convergence theorem (N_{t∧T_n} increases to N_t) we get

    EM_t² ≤ lim inf_n EM²_{t∧T_n} = lim_n EN_{t∧T_n} = EN_t < ∞. □

ACCESSIBLE COUNTING PROCESSES

Theorem 2.4.7, which says that the ICR of a CP is continuous if and only if this CP is regular, implies that the ICR (A_t) of an accessible CP (N_t) is discontinuous. One might conjecture that the times of jump of the ICR (A_t) are the same as those of the accessible CP (N_t). As we will see this would be true, and we would in fact have (A_t ≡ N_t), but for the possible presence of times of discontinuity for the family (F_t) considered (see Definitions 1.2.8 and 1.2.10). Recall that an accessible (F_t) stopping time which is not a time of discontinuity for the family (F_t) is (F_t) predictable (see Theorem 1.2.11). This

immediately gives us:

Proposition 2.4.9: An accessible CP (N_t) with respect to a family (F_t) which is free of times of discontinuity is predictable.

Let (N_t) be any CP with ICR (A_t). We examine now the jump ΔA_T in relation to the properties of the stopping time T. We already know that for a regular CP ΔA_T = 0 for any stopping time T (Theorem 2.4.7). The next result will lead to a unique characterization of predictable CP's (Corollary 2.4.11) and to the separating property of the unique Doob-Meyer decomposition for CP's (Corollary 2.4.12).

Theorem 2.4.10: Suppose (N_t) is any CP adapted to a family (F_t). Denote by (A_t) its (F_t) ICR.
(a) If T is (F_t) predictable then

    ΔA_T = E(ΔN_T | ∨_n F_{T_n}),

where (T_n) is any sequence of stopping times increasing to T. In particular 0 ≤ ΔA_T ≤ 1, and ΔA_T = 1 (or 0) a.s. if and only if ΔN_T = 1 (or 0) a.s.
(b) If T is (F_t) accessible but not a time of discontinuity for (F_t) then

    ΔA_T = ΔN_T.

(c) If T is (F_t) totally inaccessible then ΔA_T = 0.
(d) Let J_n be the time of the n-th jump of (N_t). Then

    ΔA_{J_n} = ΔN_{J_n} = 1

if and only if J_n is a predictable (F_t) stopping time. In particular ΔA_{J_n} = 1 if J_n is accessible but not a time of discontinuity for the family (F_t).

Proof: (a) (see [M1], §51-VII) Let J_n be the time of the n-th jump of (N_t), and define (N_t^n = N_{t∧J_n}), (A_t^n = A_{t∧J_n}). We know (Theorem 2.3.1) that the process (M_t^n = N_t^n − A_t^n) is a square integrable (F_{t∧J_n}) martingale. By Remark 1.8.2(a) this martingale is uniformly integrable and by Lemma A.2.1 it is also a (F_t) martingale. Thus for i > m and any set H ∈ F_{T_m}, where (T_m) is a sequence of stopping times increasing to T, we have

    ∫_H (M_T^n − M_{T_i}^n) dP = ∫_H E(M_T^n − M_{T_i}^n | F_{T_i}) dP = 0,

so that using the relation (M_t^n = N_t^n − A_t^n) one gets

    ∫_H (A_T^n − A_{T_i}^n) dP = ∫_H (N_T^n − N_{T_i}^n) dP.

Letting i increase to ∞ one obtains, by the monotone convergence theorem,

    ∫_H ΔA_T^n dP = ∫_H ΔN_T^n dP  for all H ∈ F_{T_m}.

This implies

    E(ΔA_T^n | F_{T_m}) = E(ΔN_T^n | F_{T_m})  a.s.

and taking the limit with respect to m, by Lemma 1.5.7,

    E(ΔA_T^n | ∨_m F_{T_m}) = E(ΔN_T^n | ∨_m F_{T_m}).

The process (A_t) is natural with respect to the family (F_t) (Theorem 1.7.9). Then by Theorem 1.7.8 the random variable ΔA_T is (∨_m F_{T_m}) measurable. Thus the above relation gives

    ΔA_T^n = E(ΔN_T^n | ∨_m F_{T_m})

and by the bounded convergence theorem we get the desired result letting n go to ∞.

(b) By Theorem 1.2.11, T is predictable so that part (a) is applicable. Furthermore F_T = ∨_m F_{T_m} (T is not a time of discontinuity of (F_t)). Hence

    ΔA_T = E(ΔN_T | ∨_m F_{T_m}) = E(ΔN_T | F_T) = ΔN_T.

Part (c) is just a restatement of condition (b) of Theorem 1.7.8 and is given here for completeness.

(d) (⇒) J_n is predictable and ΔN_{J_n} = 1, so that by part (a) ΔA_{J_n} = 1.

(⇐) Assume ΔA_{J_n} = 1. Let

    C_t = ΔA_{J_n} I{t ≥ J_n} = I{t ≥ J_n}.

The process (C_t) is natural because it satisfies the necessary and sufficient conditions (a) and (b) of Theorem 1.7.8 (if not, then the natural process (A_t) would not

satisfy these two conditions, a contradiction). By Theorem 52-VII of [M1], J_n is then a predictable stopping time. □

Corollary 2.4.11: A CP (N_t) with ICR (A_t) with respect to a family (F_t) is predictable with respect to this family if and only if (A_t = N_t).

Proof: (⇒) (N_t) is predictable, so by (d) of Theorem 2.4.10

    ΔA_{J_n} = 1  for each n,

where J_n is the time of the n-th jump of (N_t). This implies A_t ≥ N_t. In particular for each n

    M_{t∧J_n} = N_{t∧J_n} − A_{t∧J_n} ≤ 0.    (2.4.8)

But (M_{t∧J_n}) is a zero mean martingale, so that (2.4.8) implies

    M_{t∧J_n} = 0  a.s.

or, after taking the limit,

    N_t = A_t  a.s.

(⇐) If (N_t = A_t) then ΔA_{J_n} = 1 for each n and by (d) of Theorem 2.4.10, J_n is a predictable stopping time for each n, i.e., (N_t) is predictable. □

Corollary 2.4.12: Let (N_t) be a CP with (F_t) ICR (A_t) and define (M_t = N_t − A_t). Then if the family (F_t) is free of times of discontinuity:
(a) The local martingale (M_t) has only jumps of size one, taking place at (F_t) totally inaccessible stopping times.
(b) The (F_t) ICR (A_t) has only jumps of size one, at (F_t) predictable stopping times.

Proof: Let

    N_t = N_t^r + N_t^a    (2.4.9)

denote the unique decomposition of Theorem 2.4.3, where (N_t^r) is a regular CP and (N_t^a) an accessible CP. Let respectively (A_t^r) and (A_t^a) be the (F_t) ICR's of (N_t^r) and (N_t^a). By Theorem 2.4.7, (A_t^r) is continuous, so that the local martingale

    M_t^r = N_t^r − A_t^r    (2.4.10)

has only jumps of size one, taking place at totally inaccessible stopping times (namely the times of jump of (N_t^r)). By assumption the family (F_t) is free of times of discontinuity, so that by Proposition 2.4.9 (N_t^a) is a predictable CP and by Corollary 2.4.11

    A_t^a = N_t^a.    (2.4.11)

Introducing (2.4.10) in (2.4.9) one gets

    N_t = M_t^r + (A_t^r + N_t^a),

which is a unique Doob-Meyer decomposition of (N_t) since, by (2.4.11), (A_t^r + N_t^a = A_t^r + A_t^a) is a natural increasing process. But (N_t = M_t + A_t) is also such a unique decomposition, so that one must have

    M_t = M_t^r,  A_t = A_t^r + N_t^a,

and the result follows. □

Let (N_t = M_t + A_t) denote the unique Doob-Meyer decomposition of the CP (N_t) with respect to the family (F_t). When this family (F_t) is free of times of discontinuity the above Corollary 2.4.12 completely describes the discontinuities of the local martingale (M_t) and of the (F_t) ICR (A_t): either (M_t) or (A_t) (but not both) has a discontinuity, which is of size one and can only take place at a time of jump of (N_t). When the family (F_t) does have times of discontinuity the situation is a little more complex. Suppose T is a time of discontinuity for (F_t). Then (see Definitions 1.2.9 and 1.2.10) there exist an event A ∈ F_T and a sequence of stopping times (S_n) increasing to S such that S_n < T_A a.s. and {S = T_A} ∉ ∨_n F_{S_n}. This has the following consequence for a uniformly integrable

(F_t) martingale (X_t = E(X | F_t)): we have (see Theorem 13-VI of [M1] and Lemma 1.5.7)

    ΔX_S = X_S − lim_n X_{S_n} = E(X | F_S) − E(X | ∨_n F_{S_n}).

Since the set {S = T_A} belongs to F_S (Theorem 41-IV of [M1]) but not to ∨_n F_{S_n}, it may be that ΔX_S is different from zero with some positive probability on the set {S = T_A}. Hence it is not surprising for a uniformly integrable martingale to have a jump at a time of discontinuity of (F_t), and similarly for local martingales. In terms of the unique Doob-Meyer decomposition (N_t = M_t + A_t) this suggests that two situations, which do not occur when (F_t) is free of times of discontinuity, may now take place. They will be illustrated in Example 2.4.13.

(a) Let T be a stopping time which is a time of discontinuity for (F_t) and such that ΔN_T = 0 a.s. As explained above it is likely that ΔM_T will be different from zero, so that both ΔM_T and ΔA_T = −ΔM_T are different from zero with some positive probability although ΔN_T = 0 a.s. By Corollary 2.4.12 this does not happen when T is not a time of discontinuity for (F_t). Let T_A and T_{A'} be respectively the accessible and totally inaccessible parts of T (see Theorem 1.2.12). By Theorem 2.4.10(c), ΔA_{T_{A'}} = 0 a.s., so that ΔM_{T_{A'}} = 0 a.s. If now T_A is predictable, by Theorem 2.4.10(a), ΔA_{T_A} = 0 a.s. and ΔM_{T_A} = 0 a.s. Hence ΔA_T and ΔM_T may be both different from zero only if T_A is not predictable (T_A is of course a time of discontinuity for (F_t), as T has this property).

(b) Let J_n be the time of the n-th jump of (N_t) and assume it is a time of discontinuity for (F_t). Suppose also that J_n is an accessible stopping time (if not, we can decompose it into its totally inaccessible and accessible parts (Theorem 1.2.12); on the totally inaccessible part we already know (Theorem 2.4.7) that ΔA_{J_n} = 0, so that ΔM_{J_n} = 1). As before, it is likely that ΔM_{J_n} will be different from zero with some positive probability. Now by Theorem 2.4.7 ΔM_{J_n} cannot be one a.s., so that both (A_t) and (M_t) have a discontinuity at J_n. Recall that this cannot happen when J_n is not a time of discontinuity for (F_t) since, by Theorem 2.4.10(b), ΔA_{J_n} = 1 a.s., so that ΔM_{J_n} = 0 a.s. in this case.

That both situations (a) and (b) actually take place is illustrated in the next example.

Example 2.4.13: Take Ω = {ω_1, ω_2} with P{ω_1} = p, where 0 < p < 1. Define the following CP (N_t):

    N_t(ω_1) = 0 for t < 1,  = 1 for t ≥ 1;
    N_t(ω_2) = 0 for all t.

The family (N_t) is then given by

    N_t = {∅, Ω} for t < 1;  N_t = {∅, {ω_1}, {ω_2}, Ω} for t ≥ 1,

and the unique time of jump of (N_t) by

    J(ω_1) = 1,  J(ω_2) = ∞.

This stopping time J is obviously accessible. Observe also that J is a time of discontinuity for the family (N_t) (see Definition 1.2.10): (S_n = 1 − 1/n) is an increasing sequence of stopping times, S_n < J for each n, and the set

    {ω: lim_n S_n = J} = {ω_1} ∉ ∨_n N_{S_n} = {∅, Ω}.

Denote a (not necessarily unique) Doob-Meyer decomposition of (N_t) by

    N_t = M_t + A_t    (2.4.12)

where (M_t) is a uniformly integrable martingale ((N_t) is bounded) and (A_t) an increasing (not necessarily natural) process. By Theorem 1.5.4 the martingale (M_t) can be expressed as M_t = E(M_∞ | N_t), where M_∞ is a random variable measurable with respect to N_∞ = {∅, {ω_1}, {ω_2}, Ω}. Furthermore we know that M_0 = 0, so that EM_∞ = 0. From that it is easy to see that any martingale (M_t) is given by

    M_t(ω) = 0 for all ω, t < 1;
    M_t(ω_1) = a,  M_t(ω_2) = b,  for t ≥ 1,

where a and b are two constants such that

    pa + (1 − p)b = 0.    (2.4.13)

Then by (2.4.12) we must have

    A_t(ω) = 0 for all ω, t < 1;
    A_t(ω_1) = 1 − a,  A_t(ω_2) = −b,  for t ≥ 1.

By the uniqueness theorem only one set of values a and b makes the increasing process (A_t) natural. These values are a = 1 − p and b = −p (this choice obviously satisfies (2.4.13)), as in this case A_t = p I_{[1,∞)}(t) is a deterministic, hence natural, process (see Remark 1.7.5(c)). Hence the ICR of (N_t) with respect to the family (N_t) is p I_{[1,∞)}(t), and the martingale (M_t = N_t − p I_{[1,∞)}(t)) is given by (see Figure 2.4.14):

    M_t(ω) = 0 for all ω, t < 1;
    M_t(ω_1) = 1 − p,  M_t(ω_2) = −p,  for t ≥ 1.

Therefore both the ICR p I_{[1,∞)}(t) and the above martingale have a discontinuity at the time of jump J of (N_t). This illustrates case (b). As stated above, this is a consequence of the fact that the accessible stopping time J is not predictable and is also a time of discontinuity for (N_t). Also, if we define the stopping time T by

    T(ω_1) = ∞,  T(ω_2) = 1,

then

    ΔA_T(ω_1) = 0,  ΔA_T(ω_2) = p,

even though ΔN_T = 0 for every ω. This illustrates case (a). It is easy to check that T is a time of discontinuity for (N_t) which is accessible but not predictable.

[Figure 2.4.14: sample paths, for the two outcomes ω_1 and ω_2 of Example 2.4.13, of the CP (N_t), of its ICR (A_t = p I_{[1,∞)}(t)), and of the martingale (M_t = N_t − A_t), which jumps by 1 − p on ω_1 and by −p on ω_2 at t = 1.]
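The arithmetic of Example 2.4.13 can be checked directly. The sketch below (Python, with the assumed value p = 0.3; any p in (0,1) works) verifies that with a = 1 − p, b = −p the process M_t = N_t − p I_{[1,∞)}(t) has zero mean at every t, and exhibits the jumps 1 − p and −p at t = 1.

```python
# Numerical check of Example 2.4.13 on the two-point space (assumed p = 0.3):
# with a = 1-p, b = -p, the process M_t = N_t - p*1_{[1,inf)}(t) has
# E[M_t] = 0 for all t, and the ICR A_t = p*1_{[1,inf)}(t) is the same
# deterministic path for both outcomes.
p = 0.3
prob = {"w1": p, "w2": 1 - p}

def N(t, w):            # the counting process: one jump at t = 1 on w1 only
    return 1 if (w == "w1" and t >= 1) else 0

def A(t, w):            # the ICR, deterministic: p * 1_{[1,inf)}(t)
    return p if t >= 1 else 0.0

def M(t, w):            # martingale part
    return N(t, w) - A(t, w)

for t in [0.5, 1.0, 2.0]:
    EM = sum(prob[w] * M(t, w) for w in prob)
    assert abs(EM) < 1e-12            # zero mean at every t
print(M(2.0, "w1"), M(2.0, "w2"))     # jumps of size 1-p and -p at t = 1
```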

2.5 CONDITIONAL RATE

In the previous section we have seen that we can decompose uniquely any CP (N_t) adapted to a family (F_t) into a sum of two CP's which are respectively regular and accessible with respect to this family (F_t) (Theorem 2.4.3). Regular CP's relative to a family (F_t) are precisely those which have a continuous (F_t) ICR (Theorem 2.4.7). But a continuous ICR need not have absolutely continuous sample paths: for example, consider a CP of independent increments with a continuous but not absolutely continuous mean. In the next theorem we give sufficient conditions under which the ICR (A_t) of a CP (N_t) with respect to a family (F_t) is absolutely continuous, or in other words under which there exists a random process (λ_t) adapted to (F_t) such that we can express the ICR (A_t) as

    A_t = ∫_0^t λ_s ds.    (2.5.1)

Under these conditions we also have

    λ_t = lim_{h↓0} E[(N_{t+h} − N_t)/h | F_t],    (2.5.2)

and because of this relation we call the process (λ_t) the "conditional rate" of the CP (N_t) with respect to the family (F_t). Expression (2.5.1) is then a justification for the terminology "Integrated Conditional Rate" (ICR)

introduced in Section 2.3, a terminology used even though a conditional rate does not generally exist. Note also that if the (F_t) ICR of a CP (N_t) with finite mean is given by (∫_0^t λ_s ds), where (λ_t) is a right-continuous bounded process adapted to the family (F_t), then, the process (N_t − ∫_0^t λ_s ds) being a (F_t) martingale (Theorem 2.3.1), we have

    E[(N_{t+h} − N_t)/h | F_t] = E[(1/h) ∫_t^{t+h} λ_s ds | F_t]

and by the dominated convergence theorem ((λ_t) is bounded)

    lim_{h↓0} E[(1/h) ∫_t^{t+h} λ_s ds | F_t] = E[lim_{h↓0} (1/h) ∫_t^{t+h} λ_s ds | F_t] = E(λ_t | F_t) = λ_t,

so that relation (2.5.2) is also verified in this case.

Although there is a lot of emphasis in the literature ([C1],[B1],[R2],[S1],[S2],[S3]) on CP's which admit conditional rates, the problem of existence of these CP's has, as explained in the INTRODUCTION, been only partially treated, and only lately, by Brémaud in his dissertation ([B1]), using a technique of absolutely continuous change of measure. We will examine a generalization of this technique, but only in the next chapter on detection, these two problems being related. This question of existence of CP's with conditional rates is difficult and may in fact not be of great

importance: for example (and this will be demonstrated in the next chapter) the solution to the detection problem does not require the existence of conditional rates for the CP's involved (but they must be regular).

We now give sufficient conditions under which a CP with finite mean does possess a conditional rate. Note that these conditions are a kind of conditional version of the conditions which uniquely define a Poisson counting process (see Chapter 4 of [P1]).

Theorem 2.5.1: Suppose that for a CP (N_t) with finite mean and adapted to a family (F_t):
(i) for each t the following limit exists a.s.:

    λ_m(t,ω) = lim_{h↓0} (1/h) Q_m(t,h,ω),  m = 1,2,...,

where Q_m(t,h,ω) = P{N_{t+h} − N_t ≥ m | F_t};
(ii) for almost all ω there exists h_0(ω) > 0 such that the series Σ_m (1/h) Q_m(t,h,ω) converges uniformly for h ∈ (0, h_0(ω)] and is bounded by a function a(t,ω) such that ∫_0^t a(s,ω) ds < ∞ for each t.

Then:
(a) The series Σ_m λ_m is convergent. Define the process (λ_t = Σ_m λ_m(t)). We have the relation

    λ_t = lim_{h↓0} E[(N_{t+h} − N_t)/h | F_t]  a.s. for every t.

(b) The (F_t) ICR of (N_t) is given by

    A_t = ∫_0^t λ_s ds.

Proof: (a) By (i) and (ii)

    lim_{h↓0} Σ_m (1/h) Q_m(t,h,ω) = Σ_m lim_{h↓0} (1/h) Q_m(t,h,ω) = Σ_m λ_m(t,ω) = λ_t(ω)    (2.5.3)

where the first equality follows by the uniform convergence on (0, h_0(ω)] (see Theorem 7.11 of [R1]). Assumption (ii) also implies, for almost all ω and h ≤ h_0(ω),

    Σ_m Q_m(t,h,ω) ≤ a(t,ω) h_0(ω) < ∞

and this is enough to justify the equality

    Σ_m m(Q_m − Q_{m+1}) = Σ_m Q_m.

But Q_m − Q_{m+1} = P{N_{t+h} − N_t = m | F_t}, so that the above relation gives for h ≤ h_0(ω)

    E(N_{t+h} − N_t | F_t) = Σ_m Q_m(t,h,ω)    (2.5.4)

and by (2.5.3)

    λ_t = lim_{h↓0} Σ_m (1/h) Q_m(t,h,ω) = lim_{h↓0} E[(N_{t+h} − N_t)/h | F_t].    (2.5.5)

(b) The CP (N_t) is right-continuous; by Theorem 1.5.5 there exists a right-continuous modification for the submartingale (E(N_{t+h} | F_t)) (see Definition 27-VII of [M1]) and we denote by (P_h λ_t) a right-continuous modification

of the process (E[(N_{t+h} − N_t)/h | F_t]). We have seen above that

    lim_{h↓0} P_h λ_t = λ_t  a.s.

By (ii) and (2.5.4)

    0 ≤ P_h λ_t = Σ_m (1/h) Q_m(t,h,ω) ≤ a(t,ω)

for h ≤ h_0(ω). Hence the integral

    ∫_0^t P_h λ_s ds

is well defined for almost all ω, and by the dominated convergence theorem

    lim_{h↓0} ∫_0^t P_h λ_s ds = ∫_0^t λ_s ds  a.s.    (2.5.6)

Denote by (A_t) the (F_t) ICR of (N_t) and define as usual the martingale (M_t = N_t − A_t). Let c be any positive constant and define

    P_t^c = E(A_c | F_t) − A_{t∧c}.

It is easy to check that (P_t^c) is a potential (see Definition 1.6.1) and by Theorem 29-VII of [M1] we know that for each t

    A_{t∧c} = σ(L¹,L∞)-lim_{h↓0} (1/h) ∫_0^{t∧c} E(P_s^c − P_{s+h}^c | F_s) ds.    (2.5.7)

Now (A_{t∧c} = N_{t∧c} − M_{t∧c}), so that

    P_t^c = [E(A_c | F_t) + M_{t∧c}] − N_{t∧c},    (2.5.8)

where (M_{t∧c}) is not only a (F_{t∧c}) but also a (F_t) martingale. Hence for s < t, and if we choose c > t + h,

    E(P_s^c − P_{s+h}^c | F_s) = E(N_{s+h} − N_s | F_s).

Thus on the one hand, by (2.5.7),

    A_{t∧c} = σ(L¹,L∞)-lim_{h↓0} (1/h) ∫_0^{t∧c} E(N_{s+h} − N_s | F_s) ds,

and on the other, by (2.5.6),

    lim_{h↓0} (1/h) ∫_0^t E(N_{s+h} − N_s | F_s) ds = ∫_0^t λ_s ds  a.s.,

so that we must have a.s. for each t

    A_t = ∫_0^t λ_s ds.    (2.5.9)

By the right-continuity of the processes involved, relation (2.5.9) is valid for every t a.s. (see Remark 1.1.4) and result (b) follows. □
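The limit (2.5.2) can be seen numerically in the simplest case, a homogeneous Poisson process with an assumed rate λ = 2 (here the conditioning on F_t is vacuous because increments are independent): Q_1(t,h) = 1 − e^{−λh}, so (1/h)Q_1 → λ, while the higher terms (1/h)Q_m for m ≥ 2 vanish in the limit.

```python
import math

# Illustration of (2.5.2) for a homogeneous Poisson process with assumed
# rate lam = 2: Q_1(t,h) = P{N_{t+h} - N_t >= 1} = 1 - exp(-lam*h), so
# (1/h) Q_1 -> lam_1 = lam, and (1/h) Q_2 -> lam_2 = 0 as h -> 0.
lam = 2.0
for h in [1.0, 0.1, 0.01, 0.001]:
    q1 = 1 - math.exp(-lam * h)                      # P{at least one jump}
    q2 = 1 - math.exp(-lam * h) * (1 + lam * h)      # P{at least two jumps}
    print(h, round(q1 / h, 4), round(q2 / h, 4))     # -> lam = 2.0 and -> 0
```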

The next result shows that the two conditional rates of the same CP (N_t), but with respect to two families (F_t) and (G_t) such that F_t ⊃ G_t ⊃ N_t, are related by a simple expression.

Proposition 2.5.2: Let (N_t) be a CP with finite mean. Denote its conditional rate with respect to the family (F_t) by (λ_t). Let (G_t) be another family such that N_t ⊂ G_t ⊂ F_t. Then the conditional rate (λ̂_t) of (N_t) with respect to (G_t) exists and is given by

    λ̂_t = E(λ_t | G_t).

Note that this result makes good intuitive sense, the conditional rate (λ̂_t) being the best mean square estimate of the conditional rate (λ_t).

Proof: Part of this proof is a consequence of the innovation theorem, which is given in the next chapter: the process (N_t − ∫_0^t λ̂_s ds) is a (G_t) martingale. Now the process (∫_0^t λ̂_s ds) is increasing, continuous, hence natural (see Remark 1.7.5(d)), and consequently is the (G_t) ICR of (N_t) by the uniqueness of the Doob-Meyer decomposition. □

2.6 COUNTING PROCESSES OF INDEPENDENT INCREMENTS

Let (N_t) be a CP of independent increments with finite mean m_t for each t. We have already seen (Proposition 2.3.4) that the (N_t) ICR of (N_t) is given by m_t. Hence

this (N_t) ICR is deterministic. Using a technique of proof due to Kunita and Watanabe [K1] and a result of Doléans-Dade [D2], we now show that the converse is also true. CP's of independent increments will play an important part in solving the detection problem.

Theorem 2.6.1: Let (N_t) be a CP with finite mean m_t for each t. Denote its (N_t) ICR by (A_t). Then:
(a) (N_t) is a CP of independent increments if and only if the ICR (A_t) is deterministic.
(b) If the ICR (A_t) is deterministic then A_t = m_t.
(c) A CP of independent increments is regular with respect to the family (N_t) if and only if its mean is continuous.
(d) The characteristic function of a CP with independent increments is given by

    E[e^{iu(N_t−N_s)}] = exp{(e^{iu}−1)(m_t−m_s)} Π_{s<v≤t} {[1+(e^{iu}−1)Δm_v] exp{(1−e^{iu})Δm_v}}    (2.6.1)

Proof: The "only if" part of (a) is simply a restatement of Proposition 2.3.4. Now assume that the (N_t) ICR (A_t) is deterministic. The CP (N_t) is a right-continuous step

process with ΔN_t being either zero or one, so that for t > s

    e^{iuN_t} − e^{iuN_s} = Σ_{s<v≤t} Δe^{iuN_v}
                          = Σ_{s<v≤t} (e^{iuΔN_v} − 1) e^{iuN_{v−}}
                          = Σ_{s<v≤t} (e^{iu} − 1) e^{iuN_{v−}} ΔN_v
                          = (e^{iu} − 1) ∫_s^t e^{iuN_{v−}} dN_v,

where Σ_{s<v≤t} is the sum over the discontinuities of (N_t) in (s,t]. Using the expression (N_t = M_t + A_t) in the above gives

    e^{iuN_t} = e^{iuN_s} + (e^{iu} − 1) [∫_s^t e^{iuN_{v−}} dM_v + ∫_s^t e^{iuN_{v−}} dA_v].    (2.6.2)

The process (M_t = N_t − A_t) ∈ A is a martingale by Theorem 2.3.1 and |e^{iuN_{t−}}| ≤ 1, so that by Proposition 2 of [D1] the process (∫_0^t e^{iuN_{v−}} dM_v) is a martingale. In particular

    E(∫_s^t e^{iuN_{v−}} dM_v | N_s) = 0,

so that by (2.6.2) (multiplying both sides by e^{−iuN_s})

    E[e^{iu(N_t−N_s)} | N_s] = 1 + (e^{iu}−1) E[∫_s^t e^{iu(N_{v−}−N_s)} dA_v | N_s].    (2.6.3)

We examine now the last term in the RHS. For any set H ∈ N_s one can write, by Fubini's Theorem and the definition of conditional expectations (note that we use here the fact that A_t is a deterministic function):

    ∫_H ∫_s^t e^{iu(N_{v−}−N_s)} dA_v dP = ∫_s^t ∫_H e^{iu(N_{v−}−N_s)} dP dA_v = ∫_s^t ∫_H E[e^{iu(N_{v−}−N_s)} | N_s] dP dA_v.    (2.6.4)

We also have

    ∫_H ∫_s^t e^{iu(N_{v−}−N_s)} dA_v dP = ∫_H E[∫_s^t e^{iu(N_{v−}−N_s)} dA_v | N_s] dP,    (2.6.5)

so that for any H ∈ N_s, by (2.6.4) and (2.6.5),

    ∫_H E[∫_s^t e^{iu(N_{v−}−N_s)} dA_v | N_s] dP = ∫_H ∫_s^t E[e^{iu(N_{v−}−N_s)} | N_s] dA_v dP.    (2.6.6)

Introducing the above relation (2.6.6) in (2.6.3) one gets

    E[e^{iu(N_t−N_s)} | N_s] = 1 + (e^{iu}−1) ∫_s^t E[e^{iu(N_{v−}−N_s)} | N_s] dA_v.    (2.6.7)

Now ((e^{iu}−1)A_t) is a semimartingale which belongs to A. Then by Theorem 1.9.15 the unique solution of (2.6.7) is a semimartingale given by

    E[e^{iu(N_t−N_s)} | N_s] = exp[(e^{iu}−1)(A_t−A_s)] Π_{s<v≤t} {[1+(e^{iu}−1)ΔA_v] exp[(1−e^{iu})ΔA_v]}.    (2.6.8)

The RHS of (2.6.8) is a deterministic function and it follows that (N_t) is a process of independent increments. This shows part (a). We have (Proposition 2.3.7) EA_t = m_t and part (b) follows trivially. Part (c) results from (a) and Theorem 2.4.7, and (d) is a restatement of (2.6.8) where we have used the fact that A_t = m_t. □
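Formula (2.6.1) can be checked numerically in an assumed toy case not taken from the text: a mean m_t with continuous part c (a Poisson component) plus a single jump Δm, realized as an independent Bernoulli(Δm) jump at a fixed time. The characteristic function computed directly must agree with the product formula.

```python
import cmath

# Numerical check of formula (2.6.1) in an assumed toy case: a CP of
# independent increments on [0, t] whose mean has continuous part c = 1.5
# plus one jump dm = 0.4 (an independent Bernoulli(dm) jump at a fixed time).
c, dm, u = 1.5, 0.4, 0.7
z = cmath.exp(1j * u)

# direct characteristic function: Poisson(c) times independent Bernoulli(dm)
direct = cmath.exp((z - 1) * c) * (1 - dm + dm * z)

# formula (2.6.1) with m_t = c + dm and a single jump of size dm
formula = cmath.exp((z - 1) * (c + dm)) * (1 + (z - 1) * dm) * cmath.exp((1 - z) * dm)

print(abs(direct - formula))   # ≈ 0 up to floating-point error
assert abs(direct - formula) < 1e-12
```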

If we define a non-homogeneous Poisson process (N_t) as a CP of independent increments with a characteristic function E[e^{iuN_t}] given by exp{(e^{iu}−1) ∫_0^t λ_s ds}, where λ is a nonnegative function called the rate, then we have:

Corollary 2.6.2: A CP (N_t) of independent increments with finite mean m_t for each t is a non-homogeneous Poisson process if and only if the mean m_t is absolutely continuous. The rate λ_t is then given by the Radon-Nikodym derivative dm_t/dt.

Proof: By Theorem 2.6.1(d) it is easy to see that

    E[e^{iuN_t}] = exp{(e^{iu}−1) ∫_0^t λ_s ds}

if and only if

    m_t = ∫_0^t λ_s ds. □

2.7 PROBABILITY GENERATING FUNCTION

PRELIMINARIES

Let (N_t) be a CP with finite mean for each t and adapted to a family (F_t). Denote its (F_t) ICR by (A_t). Recall that the process (M_t = N_t − A_t) is a (F_t) martingale (Theorem 2.3.1). The conditional probability generating function φ(z,t,s) is defined for t ≥ s by:

    φ(z,t,s) = E[z^{N_t−N_s} | F_s] = Σ_n z^n P{N_t−N_s = n | F_s}    (2.7.1)

where z is a complex number with |z| ≤ 1, and we have

    P{N_t − N_s = n | F_s} = (1/n!) (d^n/dz^n) φ(z,t,s) |_{z=0}.    (2.7.2)

We can compute the probability generating function proceeding exactly as in the proof of Theorem 2.6.1 (replace e^{iu} by z) and find (note that (N_t) is not necessarily of independent increments):

    φ(z,t,s) = 1 + (z−1) E[∫_s^t z^{N_{v−}−N_s} dA_v | F_s].    (2.7.3)

The above formula can be generalized to the case where the jumps of the process (N_t) are of random size. The formula would then contain, in place of the term (z−1), a random quantity which is a function of the random jump sizes ΔN_v and of z. This additional randomness makes the formula harder to manipulate and practically useless.

APPLICATION TO COUNTING PROCESSES OF INDEPENDENT INCREMENTS

Suppose now that (N_t) is a process of independent increments with finite mean m_t, and take F_t = N_t. In this case the (N_t) ICR (A_t) is given by m_t (Theorem 2.6.1). We compute now the probability generating function φ(z,t,s) and the probability P{N_t − N_s = n}. The method used to derive these formulas is appealing as it does not require the mean m_t to be continuous (when the mean is continuous the formulas are well known). First, one gets the probability

generating function by letting z = e^{iu} in Theorem 2.6.1(d):

φ(z,t,s) = exp{(z-1)(m_t - m_s)} ∏_{s<v≤t} [1 + (z-1)Δm_v] exp{(1-z)Δm_v}  (2.7.4)

For a process of independent increments we clearly have

P{N_t - N_s = n | N_s} = P{N_t - N_s = n}  (2.7.5)

If the mean m_t is continuous we then immediately get, using (2.7.2),

P{N_t - N_s = n} = (1/n!)(m_t - m_s)^n exp[-(m_t - m_s)]  (2.7.6)

This relation motivates the following

Definition 2.7.1: A CP (N_t) of independent increments with a continuous finite mean is called a Generalized Poisson process.

If furthermore (N_t) is a non-homogeneous Poisson process (see the definition above Corollary 2.6.2) with rate λ_t, then we get the well-known formula

P{N_t - N_s = n} = (1/n!)(∫_s^t λ_v dv)^n exp{-∫_s^t λ_v dv}  (2.7.7)

However, the interesting case is when the mean m_t is discontinuous. We know that an increasing function finite

for each t has at most a countable number of jumps in any finite interval. Denote by (t_i, i = 1,2,...) the times of jump of m_t in the interval (s,t] and define

Δ_s^t ≜ Σ_{s<v≤t} Δm_v = Σ_i Δm_{t_i}  (2.7.8)

and

δ_s^t ≜ m_t - m_s - Δ_s^t  (2.7.9)

Formula (2.7.4) can be rewritten with the above relations in the form

φ(z,t,s) = exp{-δ_s^t} exp{z δ_s^t} ∏_i [1 + (z-1)Δm_{t_i}]  (2.7.10)

We examine now the infinite product

∏_i [1 + (z-1)Δm_{t_i}]  (2.7.11)

Observe that: (a) for each n the partial product

f_n(z) = ∏_{i=1}^n [1 + (z-1)Δm_{t_i}]  (2.7.12)

is analytic on the complex plane, and (b) the series

Σ_i |(z-1)Δm_{t_i}|

is uniformly convergent in the region |z| ≤ 1. This last point follows from the Weierstrass test:

|(z-1)Δm_{t_i}| ≤ 2Δm_{t_i}

and because the mean m_t is finite for each t the series

Δ_s^t = Σ_i Δm_{t_i} ≤ m_t

is convergent. Conditions (a) and (b) above imply that the infinite product (2.7.11) converges uniformly to a function f(z) which is analytic in the region |z| ≤ 1 (see [H1], Corollary to Theorem 8.6.3; or [D3], Theorem 5.4.8). Hence we can get a Taylor series expansion for f(z) = ∏_i [1 + (z-1)Δm_{t_i}] in the region |z| ≤ 1:

f(z) = Σ_e a_e z^e = ∏_i [1 + (z-1)Δm_{t_i}]  (2.7.13)

We compute now

(d^n/dz^n)(exp{z δ_s^t} Σ_e a_e z^e)  (2.7.14)

We have

(d^{n-k}/dz^{n-k}) exp{z δ_s^t} = (δ_s^t)^{n-k} exp{z δ_s^t}  (2.7.15)

The power series (Σ_e a_e z^e) is uniformly convergent and can be differentiated term by term in the region |z| < 1, so that

(d^k/dz^k) Σ_e a_e z^e = Σ_{e≥k} a_e [e!/(e-k)!] z^{e-k}  (2.7.16)

Introducing the two above relations (2.7.15) and (2.7.16) in (2.7.14) we get, by Leibniz's rule,

(d^n/dz^n)(exp{z δ_s^t} Σ_e a_e z^e) = (d^n/dz^n)(exp{z δ_s^t} ∏_i [1 + (z-1)Δm_{t_i}])

= Σ_{k=0}^n C(n,k) (δ_s^t)^{n-k} exp{z δ_s^t} (Σ_{e≥k} a_e [e!/(e-k)!] z^{e-k})  (2.7.17)

Evaluating (2.7.17) at z = 0, we have (see (2.7.10))

(d^n/dz^n) φ(z,t,s) |_{z=0} = exp{-δ_s^t} Σ_{k=0}^n C(n,k) (δ_s^t)^{n-k} a_k k!  (2.7.18)

and finally (see (2.7.2))

P{N_t - N_s = n} = exp{-δ_s^t} Σ_{k=0}^n a_k (δ_s^t)^{n-k}/(n-k)!  (2.7.19)

Now if the mean m_t has only a finite number j of jumps in the interval (s,t], then the coefficients a_k are such that

Σ_{e=0}^j a_e z^e = ∏_{e=1}^j [1 + (z-1)Δm_{t_e}]

and can be computed by

a_0 = ∏_{i=1}^j (1 - Δm_{t_i})  (2.7.20)

For 0 < k < j,

a_k = [1/(k!(j-k)!)] Σ_{all permutations {e_q, q=1,...,j} of {1,...,j}} [∏_{q=1}^k Δm_{t_{e_q}}] [∏_{q=k+1}^j (1 - Δm_{t_{e_q}})]  (2.7.21)

and

a_j = ∏_{i=1}^j Δm_{t_i}  (2.7.22)

and finally, for k > j, a_k = 0. If j = 0 (continuous case) then ∏_i [1 + (z-1)Δm_{t_i}] = 1, so that a_0 = 1 and a_k = 0, k ≥ 1, and result (2.7.19) reduces to (2.7.6) (δ_s^t = m_t - m_s in this case).

We summarize the above results in

Theorem 2.7.2: Let (N_t) be a CP of independent increments with finite mean m_t = EN_t for each t. Let s < t and denote by t_i the (at most countable) times of jump of m_t on the interval (s,t]. Define

δ_s^t ≜ m_t - m_s - Σ_{s<v≤t} Δm_v

(a) If the number of jumps of m_t in (s,t] is infinite, then the infinite product

∏_i [1 + (z-1)Δm_{t_i}]

is uniformly convergent in the region |z| ≤ 1 to an analytic function and we denote by a_k the coefficients

of the Taylor expansion of the above infinite product.

(b) If the number j of jumps of m_t in (s,t] is finite, then the coefficients a_k can be computed by formulas (2.7.20) to (2.7.22). If m_t is continuous (j = 0) then a_0 = 1, a_k = 0 for k ≥ 1, and δ_s^t = m_t - m_s.

(c) The probability generating function φ(z,t,s) is given by

φ(z,t,s) = exp{(z-1)δ_s^t} ∏_i [1 + (z-1)Δm_{t_i}] = exp{(z-1)δ_s^t} Σ_e a_e z^e

(d) We have

P{N_t - N_s = n} = exp{-δ_s^t} Σ_{k=0}^n a_k (δ_s^t)^{n-k}/(n-k)!

APPLICATION TO COUNTING PROCESSES WITH A CONDITIONAL RATE

Assume now that (N_t) is a CP with finite mean for each t and for which a conditional rate (λ_t) with respect to a family (F_t) exists and satisfies the condition

E(z^{N_{v-} - N_s} λ_v | F_s) = E(z^{N_{v-} - N_s} | F_s) E(λ_v | F_s)  (2.7.23)

for all v ≥ s. This condition will be discussed later on. From (2.7.3) we get

φ(z,t,s) = 1 + (z-1) E[∫_s^t z^{N_{v-} - N_s} λ_v dv | F_s]  (2.7.24)

Now the CP has a finite mean, so that (Proposition 2.3.7) E∫_0^t λ_s ds is finite, which implies (|z| ≤ 1)

E∫_s^t |z^{N_{v-} - N_s}| λ_v dv < ∞

Then by Fubini's Theorem (applied as in the proof of Theorem 2.6.1) one gets

E[∫_s^t z^{N_{v-} - N_s} λ_v dv | F_s] = ∫_s^t E[z^{N_{v-} - N_s} λ_v | F_s] dv

Hence by the above relation, (2.7.23) and (2.7.24) one has

φ(z,t,s) = 1 + (z-1) ∫_s^t φ(z,v-,s) λ̂_v^s dv  (2.7.25)

where λ̂_v^s ≜ E(λ_v | F_s), i.e., (λ̂_v^s) is the best prediction of (λ_v) based on the past information up to and at time s. As before, this equation has a unique solution which is a semimartingale (Theorem 1.9.15):

φ(z,t,s) = exp{(z-1) ∫_s^t λ̂_v^s dv}  (2.7.26)

and

P{N_t - N_s = n | F_s} = (1/n!)(∫_s^t λ̂_v^s dv)^n exp{-∫_s^t λ̂_v^s dv}  (2.7.27)

It is interesting to note that both these formulas are identical to the formulas for Poisson processes where we have substituted for the rate λ the best estimate λ̂^s.

All this is very appealing but is true only if condition (2.7.23) is satisfied. This condition, which can be rewritten as

E[z^{N_{v-} - N_s}(λ_v - λ̂_v^s) | F_s] = E[z^{N_{v-} - N_s} | F_s] E[λ_v - λ̂_v^s | F_s]  (2.7.28)

is difficult to interpret. But if we take s = 0 and assume F_0 = {∅,Ω} (this is in particular the case for N_0), then the above condition (2.7.23) becomes

E(z^{N_{v-}} λ_v) = E(z^{N_{v-}}) E(λ_v),  v ≥ 0  (2.7.29)

and is satisfied if for each t the two random variables N_{t-} and λ_t are independent. This seems a reasonable assumption if we think that the value of N_t does not influence the rate at time t. Then under this condition (2.7.29) relation (2.7.27) gives

P{N_t = n} = (1/n!)(∫_0^t (Eλ_v) dv)^n exp{-∫_0^t (Eλ_v) dv}  (2.7.30)

In conclusion, what we have done is to use the formula of change of variables (Theorem 1.9.14) to get an expression (Eq. (2.7.3)) for the conditional probability generating function. This expression contains an integral which can under certain circumstances be manipulated so as to give an integral equation for the conditional probability generating function. Furthermore this integral equation has, by Theorem 1.9.15, a unique solution which

is a semimartingale. This method is particularly suitable for CP's of independent increments and can also be applied to CP's with a random rate which satisfies (2.7.23). But in general we cannot obtain an integral equation for the conditional probability generating function.
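As a numerical aside (not part of the original argument), the PGF in Theorem 2.7.2(c) factors as the PGF of a Poisson variable with mean δ_s^t times the PGF's of independent Bernoulli(Δm_{t_i}) variables, so formula (d) can be checked against a direct convolution. The sketch below does this in Python for assumed values δ_s^t = 0.7 and jumps Δm = 0.3, 0.5, 0.2:

```python
import math

def pmf_theorem(n, delta, jumps):
    """P{N_t - N_s = n} via formula (d) of Theorem 2.7.2."""
    # a_k: coefficients of prod_i [1 + (z-1)dm_i] = prod_i [(1-dm_i) + dm_i z]
    a = [1.0]
    for dm in jumps:
        new = [0.0] * (len(a) + 1)
        for k, coef in enumerate(a):
            new[k] += coef * (1.0 - dm)   # constant part of the factor
            new[k + 1] += coef * dm       # z part of the factor
        a = new
    return math.exp(-delta) * sum(
        a[k] * delta ** (n - k) / math.factorial(n - k)
        for k in range(min(n, len(a) - 1) + 1))

def pmf_convolution(n, delta, jumps):
    """Same pmf from the PGF factorization: a Poisson(delta) increment plus
    independent Bernoulli(dm_i) jumps at the fixed discontinuities."""
    dist = [math.exp(-delta) * delta ** k / math.factorial(k) for k in range(n + 1)]
    for dm in jumps:
        dist = [(1.0 - dm) * dist[k] + (dm * dist[k - 1] if k > 0 else 0.0)
                for k in range(n + 1)]
    return dist[n]

delta, jumps = 0.7, [0.3, 0.5, 0.2]
for n in range(6):
    assert abs(pmf_theorem(n, delta, jumps) - pmf_convolution(n, delta, jumps)) < 1e-12
```

The agreement of the two computations is exactly the content of formulas (2.7.19)-(2.7.22) in the finite-jump case.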

CHAPTER 3

DETECTION

3.0 INTRODUCTION

In this chapter we examine the detection problem for CP's by the likelihood ratio technique. This approach is well known and will not be motivated here. Recall that the (P,F_t) ICR (A_t) of a CP (N_t) is the unique natural increasing process which makes the process (M_t = N_t - A_t) a square integrable (P,F_t) local martingale (Theorem 2.3.1 and Definition 2.3.2), and that this ICR is continuous if and only if the CP is regular with respect to the family (F_t) (Theorem 2.4.7). We first obtain the likelihood ratio in the case where one of the CP's is of independent increments with continuous mean m_t while the other has an (F_t) ICR of the form (∫_0^t λ_s dm_s), where (λ_t) ∈ H(F_t) is a positive process. Note that both these CP's are regular. Then, taking advantage of the chain rule for likelihood ratios, we extend this result to the more general case where both ICR's are of the form (∫_0^t λ_s^i dm_s), i = 0,1. Stochastic integral equations which allow us to compute the likelihood ratio continuously in time are also derived. The results described above are given in the last Section 3.4. The method used to obtain the likelihood ratio consists in the three-step procedure introduced by Duncan ([D3],[D4]) and Kailath [K3]:

Step 1 gives a general description of the likelihood ratio dP_o/dP, where P is a measure under which a given CP is of independent increments and P_o a measure absolutely continuous with respect to P. Step 2 is a Girsanov type theorem and Step 3 is the Innovation Theorem. In very general terms the Girsanov Theorem, respectively the Innovation Theorem, tells us how to transform local martingales of interest into new processes which are also local martingales when a change of measure, respectively a change of family of σ-algebras, is made. These two theorems are presented in Section 3.1. It is also shown there that the Girsanov Theorem can be used to prove, under suitable assumptions, the existence of CP's for which the (F_t) ICR is of the form (∫_0^t λ_s dA_s), where (λ_t) ∈ H(F_t) is a nonnegative process and (A_t) is itself the (F_t) ICR of a CP. Let now (N_t) be a regular CP of independent increments with mean m_t. Recall that the process (M_t = N_t - m_t) belongs to M^2(N_t) (Theorem 2.4.8), where N_t is the minimal σ-algebra generated by (N_u) up to and at time t. In Section 3.2 we show that any martingale in M^2(N_t) can be represented as a stochastic integral with respect to (M_t). This result is basic to Section 3.3 where the likelihood ratio representation theorem is demonstrated (this is Step 1).

In this chapter we deal with a measurable space (Ω,F) on which the probability measures P, P_o and P_1 are defined. We denote respectively by E(·), E_o(·) and E_1(·) the expectation operators with respect to the measures P, P_o and P_1. The standard notation P_o << P_1 means that the measure P_o is absolutely continuous with respect to P_1, while P_o ~ P_1 indicates that the two measures are equivalent (i.e., P_o << P_1 and P_1 << P_o). Every stochastic process is defined on the measurable space (Ω,F) equipped with a given probability measure P, P_o or P_1. The general assumptions of the previous chapter (see Section 2.1) are used again in this one.

3.1 TWO BASIC THEOREMS IN DETECTION

ABSOLUTELY CONTINUOUS CHANGE OF MEASURE: THE GIRSANOV THEOREM

The Girsanov theorem is a basic step in finding likelihood ratios. The version we present here, in the context of CP's, will also enable us to create from a regular CP with (F_t) ICR (A_t) other CP's for which the (F_t) ICR's are of the form (∫_0^t λ_s dA_s), where (λ_t) ∈ H(F_t) (cf. Definition 1.9.9) is a nonnegative process. The original version of the Girsanov Theorem dates back to 1960 [G1] and was concerned with Brownian motion. The extension of this result to the case of local martingales became possible with the new calculus developed for these latter processes. We give now the version of the Girsanov Theorem which is appropriate for CP's.

Theorem 3.1.1: Under the measure P let (N_t) be a regular CP with respect to a family (F_t). Denote its (P,F_t) ICR by (A_t). Suppose (λ_t) ∈ H(F_t) is a nonnegative process and define

L_t = (∏_{J_n ≤ t} λ_{J_n}) exp(∫_0^t (1 - λ_s) dA_s)  (3.1.1)

where J_n denotes the time of the nth jump of (N_t). (By convention the product (∏_{J_n ≤ t} λ_{J_n}), when empty (e.g., for t = 0), is set equal to one.)

(a) The process (L_t) is a (P,F_t) local martingale which is the unique solution of the stochastic integral equation

L_t = 1 + ∫_0^t L_{s-}(λ_s - 1) dM_s  (3.1.2)

where (M_t = N_t - A_t).

(b) If (L_t) is a uniformly integrable (P,F_t) martingale, define a new measure P_o on (Ω,F) by

P_o(Λ) = ∫_Λ L_∞ dP,  Λ ∈ F

where L_∞ is the limit a.s. and in the mean of L_t as t goes to ∞. Then the process (N_t - ∫_0^t λ_s dA_s) is a (P_o,F_t) local martingale, i.e., the process (∫_0^t λ_s dA_s) is the (P_o,F_t) ICR of (N_t).
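A minimal Monte Carlo sketch of part (b), not part of the original text, can be made in the simplest setting: under P, (N_t) is a rate-one Poisson process (A_t = t) and λ_t ≡ c is a constant, so (3.1.1) collapses to L_t = c^{N_t} e^{(1-c)t}. The horizon a = 2 and the constant c = 1.6 below are arbitrary assumed values; the check is that E[L_a] = 1 and that E[L_a N_a] = c·a, i.e., that under P_o the CP has ICR ct:

```python
import math
import random

random.seed(1)
a, c = 2.0, 1.6            # observation horizon and constant rate (assumed values)
n_paths = 200_000

mean_L = 0.0
mean_LN = 0.0
for _ in range(n_paths):
    # simulate N_a for a rate-one Poisson process under P
    N, t = 0, random.expovariate(1.0)
    while t <= a:
        N += 1
        t += random.expovariate(1.0)
    # with A_t = t and lambda_t = c, formula (3.1.1) gives L_a = c^N exp((1-c)a)
    L = c ** N * math.exp((1.0 - c) * a)
    mean_L += L / n_paths
    mean_LN += L * N / n_paths

assert abs(mean_L - 1.0) < 0.03      # E L_a = 1: (L_t) is a true martingale here
assert abs(mean_LN - c * a) < 0.1    # E_o N_a = E[L_a N_a] = c a: new ICR is ct
```

The second assertion is an importance-sampling reading of part (b): weighting rate-one paths by L_a reproduces the moments of a rate-c Poisson process.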

To prove the above theorem we will need:

Lemma 3.1.2: Let P_o be a measure absolutely continuous with respect to P and define the uniformly integrable (P,F_t) martingale (L_t) by

L_t = E(dP_o/dP | F_t)

Then the process (X_t), adapted to (F_t), is a (P_o,F_t) local martingale if and only if the process (L_t X_t) is a (P,F_t) local martingale.

Proof of the Lemma: (⇐) Assume (L_t X_t) is a (P,F_t) local martingale and let (T_n) be a sequence of stopping times reducing (L_t X_t), i.e., (L_t^n X_t^n) is a uniformly integrable (P,F_t) martingale for each n, where (L_t^n = L_{t∧T_n}) and (X_t^n = X_{t∧T_n}). For s ≤ t, with (F_t^n = F_{t∧T_n}),

0 = E(X_t^n L_t^n - X_s^n L_s^n | F_s^n) = E[X_t^n E(dP_o/dP | F_t^n) - X_s^n E(dP_o/dP | F_s^n) | F_s^n]
= E[E(X_t^n (dP_o/dP) | F_t^n) - E(X_s^n (dP_o/dP) | F_s^n) | F_s^n]

i.e.,

0 = E[(X_t^n - X_s^n)(dP_o/dP) | F_s^n]

Hence, by definition of conditional expectations, ∀ Λ ∈ F_s^n,

0 = ∫_Λ E[(X_t^n - X_s^n)(dP_o/dP) | F_s^n] dP = ∫_Λ (X_t^n - X_s^n)(dP_o/dP) dP = ∫_Λ (X_t^n - X_s^n) dP_o

which implies

E_o(X_t^n - X_s^n | F_s^n) = 0

so that (X_t) is a (P_o,F_t) local martingale. The argument can easily be reversed. □

Proof of Theorem 3.1.1: (a) This is a direct consequence of Theorem 1.9.15. The unique solution of (3.1.2), a local martingale because the process (∫_0^t (λ_s - 1) dM_s) is one (see Theorem 1.9.11), is given by

L_t = ∏_{s≤t} [1 + (λ_s-1)ΔM_s] exp[(1-λ_s)ΔM_s] · exp(∫_0^t (λ_s-1) dM_s)

The CP (N_t) is regular by assumption, so that by Theorem 2.4.7 the ICR (A_t) is continuous. Thus ΔM_t = ΔN_t = 0 or 1 and the product term ∏_{s≤t}(·) is equal to

∏_{s≤t} [1 + (λ_s-1)ΔN_s] exp[(1-λ_s)ΔN_s] = ∏_{J_n≤t} λ_{J_n} exp(1-λ_{J_n})
= (∏_{J_n≤t} λ_{J_n}) exp[Σ_{J_n≤t} (1-λ_{J_n})] = (∏_{J_n≤t} λ_{J_n}) exp(∫_0^t (1-λ_s) dN_s)

so that upon introducing the relation M_t = N_t - A_t in the

term exp(∫_0^t (λ_s-1) dM_s) we get the desired result (3.1.1).

(b) Because of Lemma 3.1.2 we only have to show that the process (L_t Y_t) is a (P,F_t) local martingale, where

Y_t ≜ N_t - ∫_0^t λ_s dA_s  (3.1.3)

Define

F_t ≜ ∏_{J_n≤t} λ_{J_n}  (3.1.4)

and

X_t ≜ exp(∫_0^t (1-λ_s) dA_s)  (3.1.5)

i.e., L_t = F_t X_t, and apply the formula of change of variables to the product (F_t X_t Y_t). We get

L_t Y_t = F_t X_t Y_t = ∫_0^t X_{s-} Y_{s-} dF_s + ∫_0^t F_{s-} Y_{s-} dX_s + ∫_0^t F_{s-} X_{s-} dY_s
+ Σ_{s≤t} [X_s Δ(F_s Y_s) - X_s Y_{s-} ΔF_s - F_{s-} X_{s-} ΔY_s]  (3.1.7)

We have

∫_0^t X_{s-} Y_{s-} dF_s = Σ_{s≤t} X_s Y_{s-} ΔF_s  (3.1.8)

because (F_t) is a step process. By (3.1.5), dX_s = -X_s(λ_s - 1) dA_s, so that

∫_0^t F_{s-} Y_{s-} dX_s = -∫_0^t L_{s-} Y_{s-}(λ_s - 1) dA_s  (3.1.9)

Also ΔY_s = ΔN_s and Y_t - N_t = -∫_0^t λ_s dA_s (see (3.1.3)); hence

∫_0^t F_{s-} X_s dY_s - Σ_{s≤t} F_{s-} X_s ΔY_s = ∫_0^t F_{s-} X_s d(Y_s - N_s) = -∫_0^t L_{s-} λ_s dA_s  (3.1.10)

Using the relations Y_s = Y_{s-} + ΔN_s and F_s ΔN_s = F_{s-} λ_s ΔN_s, so that ΔF_s = F_{s-}(λ_s-1)ΔN_s, we obtain

X_s Δ(F_s Y_s) = X_s (F_s Y_{s-} + F_s ΔN_s - F_{s-} Y_{s-}) = X_s Y_{s-} ΔF_s + X_s F_s ΔN_s
= X_s Y_{s-} F_{s-}(λ_s-1)ΔN_s + X_s F_{s-} λ_s ΔN_s

so that

Σ_{s≤t} X_s Δ(F_s Y_s) = ∫_0^t L_{s-} Y_{s-}(λ_s-1) dN_s + ∫_0^t L_{s-} λ_s dN_s  (3.1.11)

Introducing all the above relations (3.1.8)-(3.1.11) into (3.1.7) we finally get

L_t Y_t = -∫_0^t L_{s-} Y_{s-}(λ_s-1) dA_s - ∫_0^t L_{s-} λ_s dA_s + ∫_0^t L_{s-} Y_{s-}(λ_s-1) dN_s + ∫_0^t L_{s-} λ_s dN_s

i.e.,

L_t Y_t = ∫_0^t L_{s-} Y_{s-}(λ_s-1) dM_s + ∫_0^t L_{s-} λ_s dM_s  (3.1.12)

The process (M_t) is a (P,F_t) local martingale and the following processes belong to H(F_t): (L_{t-}) (by (a), (L_t) is a local martingale; see Remark 1.9.10(b)), (λ_t) (by assumption) and (Y_{t-}) (easy to check). Therefore the above relation (3.1.12) shows that (L_t Y_t) is a (P,F_t) local martingale (see Theorem 1.9.11). □

We remark the following. The generalization of the original Girsanov Theorem [G1] was obtained by Brémaud [B1], Gualtierotti [G2] and us (and maybe others we are unaware of). By now the most general version has been

obtained by Van Schuppen and Wong [V1] (1973), who took good advantage of the work of Doléans-Dade [D2]. But their result ([V1], Theorem 3.2; see in particular their comments on p. 10) is incorrect in the sense that they only assume that the process (L_t) is a positive martingale with EL_t = 1. But this is not enough. Although we know by the supermartingale convergence theorem that L_t converges a.s. as t goes to ∞ (denote the limit by L_∞), it is not true, unless the martingale (L_t) is uniformly integrable, that the required condition EL_∞ = 1 (i.e., P_o(Ω) = 1) is satisfied (by Fatou's lemma we have EL_∞ ≤ 1). There is no reason to believe that positive martingales are necessarily uniformly integrable. In Appendix A.5 we exhibit a discrete positive martingale (X_n), EX_n = 1, which is not uniformly integrable: E lim X_n = 0. Brémaud makes the same error in his dissertation (Theorem 2-1-i of [B1]).

Let (N_t) be a regular CP with a (P,F_t) ICR given by (A_t). When the process (L_t) defined by (3.1.1) is a uniformly integrable martingale, the above theorem shows the existence of CP's which have (P_o,F_t) ICR's of the form (∫_0^t λ_s dA_s), where (λ_t) ∈ H(F_t) is a nonnegative process. Unfortunately this uniform integrability requirement for (L_t) seems difficult to meet. By the above theorem

L_t = 1 + ∫_0^t L_{s-}(λ_s - 1) dM_s

and by Proposition 2 of [D1] a sufficient condition for

(L_t) to be a martingale (not necessarily uniformly integrable) is

E∫_0^t L_{s-} |λ_s - 1| d|M|_s < ∞  (3.1.13)

i.e.,

E∫_0^t L_{s-} |λ_s - 1| dA_s < ∞  and  E∫_0^t L_{s-} |λ_s - 1| dN_s < ∞  (3.1.14)

Suppose now that (λ_t) satisfies

λ_t ≤ c  a.s.  (3.1.15)

where c is a constant such that c ≥ 1. By (3.1.1) then

L_t ≤ c^{N_t} exp(A_t)  (3.1.16)

so that

E∫_0^t L_{s-} |λ_s - 1| dA_s ≤ cE[c^{N_t} ∫_0^t exp(A_s) dA_s] = cE[c^{N_t}(exp(A_t) - 1)]  (3.1.17)

and

E∫_0^t L_{s-} |λ_s - 1| dN_s ≤ cE[N_t c^{N_t} exp(A_t)]  (3.1.18)

Hence if relation (3.1.15) is satisfied and if

E[(N_t + 1) c^{N_t} exp(A_t)] < ∞  (3.1.19)

then condition (3.1.13) is satisfied and (L_t) is a martingale. This martingale is not necessarily uniformly integrable, and consequently we have to consider finite intervals [0,a] instead of ℝ_+. In practice this is the usual case, as we deal with finite observation times. For any a we define a new measure P_o on (Ω,F) by dP_o/dP = L_a, and by the above theorem the CP (N_{t∧a}) under this measure P_o has the process (∫_0^{t∧a} λ_s dA_s) for (P_o,F_{t∧a}) ICR. The condition (3.1.19) is in particular satisfied if (a) the constant c is equal to one and (b) the (P,F_t) ICR (A_t) is bounded a.s. by a deterministic function K_t (this last condition implies by Proposition 2.3.7 that EN_t = EA_t ≤ K_t < ∞). All these conditions are pretty strong. There is an important case where condition (3.1.19) is satisfied: for generalized Poisson processes (i.e., regular CP's of independent increments with finite mean, see Definition 2.7.1). In this situation the (P,N_t) ICR of (N_t) is given by m_t and (see (2.7.6))

P{N_t = n} = exp(-m_t)(1/n!)(m_t)^n

so that condition (3.1.19) becomes

E[(N_t + 1) c^{N_t} exp(m_t)] = exp(m_t) exp(-m_t) Σ_n (n+1) c^n (1/n!)(m_t)^n < ∞
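The finiteness of the last series can be seen in closed form: Σ_n (n+1)x^n/n! = e^x(1+x) with x = cm_t, so that E[(N_t+1)c^{N_t}] = exp{(c-1)m_t}(1 + cm_t) < ∞. A short numerical confirmation (our addition, not part of the original argument):

```python
import math

def expectation(c, m, terms=200):
    """E[(N+1) c^N] for N Poisson with mean m: exp(-m) * sum (n+1)(cm)^n/n!"""
    total, term = 0.0, 1.0          # term = (c*m)^n / n!, updated iteratively
    for n in range(terms):
        total += (n + 1) * term
        term *= c * m / (n + 1)
    return math.exp(-m) * total

# closed form: sum_n (n+1) x^n/n! = e^x (1+x), so the expectation equals
# exp((c-1)m)(1+cm), finite for every c and m
for c, m in [(1.0, 2.0), (3.0, 0.5), (2.0, 4.0)]:
    assert abs(expectation(c, m) - math.exp((c - 1.0) * m) * (1.0 + c * m)) < 1e-9
```

The term is updated multiplicatively to avoid the large factorials that a direct evaluation of n! would produce.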

Furthermore, for any increasing deterministic function m_t with m_0 = 0 there exists a measure P such that the CP (N_t) is of independent increments with mean m_t. Thus we can summarize this last result as

Corollary 3.1.3: Let m_t be a deterministic increasing continuous function with m_0 = 0. Then for any positive constant a there exists a measure P_o and a CP (N_{t∧a}) for which the (P_o,N_{t∧a}) ICR is of the form (∫_0^{t∧a} λ_s dm_s), where (λ_t) ∈ H(N_t) is a nonnegative uniformly bounded process (i.e., λ_t ≤ c, c a constant).

This shows in particular the existence of uniformly bounded conditional rates with respect to the family (N_t) (take m_t = t).

INNOVATION THEOREM

In the Girsanov Theorem we make a change of measure while keeping the family of σ-algebras (F_t) the same. The Innovation Theorem is concerned with exactly the reverse problem: only one measure P but two families of σ-algebras (F_t) and (G_t), G_t ⊂ F_t, are considered; how (P,F_t) martingales of interest (e.g., of the type (N_t - ∫_0^t λ_s dm_s)) should be modified to become (P,G_t) martingales is the question answered by this theorem. This result is of much simpler nature than the Girsanov Theorem.

Theorem 3.1.4: Let (X_t) and (Y_t) be two processes respectively adapted to the families of σ-algebras (G_t)

and (F_t), where G_t ⊂ F_t. Suppose (X_t - ∫_0^t Y_s dm_s) is a (P,F_t) martingale, where m_t is a deterministic increasing function with m_0 = 0. Then if

E[∫_0^t |Y_s| dm_s] < ∞

the process (X_t - ∫_0^t Ŷ_s dm_s) is a (P,G_t) martingale, where

Ŷ_t = E(Y_t | G_t)

Remark 3.1.5: Denote by Λ the union of all intervals of ℝ_+ on which the function m_t is constant. Note that the process (Y_t) may be infinite for t ∈ Λ, so that the process (Ŷ_t) is then not well defined. The value of the integral (∫_0^t Ŷ_s dm_s) is not affected if one changes the values of (Ŷ_t) for t ∈ Λ, so that, to avoid the above problem, we adopt the following convention: for t ∈ Λ we set Ŷ_t = 1.

Proof:

E(X_t - X_s | G_s) = E[E(X_t - X_s | F_s) | G_s]  (F_s ⊃ G_s)
= E[E(∫_s^t Y_u dm_u | F_s) | G_s] = E(∫_s^t Y_u dm_u | G_s)
= ∫_s^t E(Y_u | G_s) dm_u  (Fubini's Theorem)
= ∫_s^t E[E(Y_u | G_u) | G_s] dm_u  (G_s ⊂ G_u)
= E[∫_s^t E(Y_u | G_u) dm_u | G_s]  (Fubini's Theorem)

= E(∫_s^t Ŷ_u dm_u | G_s)  □

The above theorem is in fact only a trivial modification of Theorem 1.1 of [B1], which deals with the function t and the family σ(X_u, 0 ≤ u ≤ t) instead of m_t and (G_t), and contains the unnecessary assumption that (X_t - ∫_0^t Y_s dm_s) is square integrable. We give this theorem here for completeness and because of the shortness of the proof. Observe that if (∫_0^t λ_s dm_s) is the (P,F_t) ICR of a CP (N_t) with finite mean, then the above result shows that (∫_0^t λ̂_s dm_s), where λ̂_s = E(λ_s | G_s), is the (P,G_t) ICR of (N_t). Now by Lemma A.3.1 we also know that the process (N_t - E(∫_0^t λ_s dm_s | G_t)) is a martingale. These two processes, (∫_0^t λ̂_s dm_s) and (E(∫_0^t λ_s dm_s | G_t)), are different because the first one is continuous, thus natural, which is not necessarily true for the second.

3.2 MARTINGALE REPRESENTATION

Let (N_t), with the measure P carried on (Ω,F), be a Poisson process with rate one. Brémaud ([B1], Lemma 3) has shown, by applying results of Kunita and Watanabe ([K1], Theorem 4.2) on additive functionals of a Hunt process, that any martingale (X_t) ∈ M^2(P,N_t) can be represented as a stochastic integral with respect to (M_t = N_t - t). An analogous result for Brownian motion is well known (see [W1]). Recall also that if (N_t) is a regular CP with

finite mean and ICR (A_t), then by Theorem 2.4.8 the martingale (M_t = N_t - A_t) ∈ M^2. In this section we extend the above result to the case where the CP (N_t) is not necessarily of Poisson type as above, but more generally is a regular CP of independent increments with finite mean m_t. This is done by making a nonrandom change of time which is motivated by the following lemma. This is essentially Theorem 12-VII of [M1].

Lemma 3.2.1: Let a_t be a positive right-continuous increasing extended real function defined on ℝ_+. Define

t_∞ = inf{t: a_t = a_s, ∀ s ≥ t}, or t_∞ = ∞ if the above set is empty,

i.e., t_∞ is the first time from which the function a_t remains constant. For all t ∈ ℝ_+ let

c_t = inf{s: a_s > t}, or c_t = ∞ if the above set is empty.

Then

(a) For t ≥ a_{t_∞}, c_t = ∞; otherwise the function c_t is finite, right-continuous and increasing.

(b) a_s = inf{t: c_t > s}, or a_s = ∞ if the above set is empty.

(c) If the function a_t is finite and continuous, the function c_t is strictly increasing and

a_{c_t} = t

(d) Suppose a_0 = 0 and let f_t be a Borel function on ℝ_+ such that either ∫_0^∞ f_s^+ da_s or ∫_0^∞ f_s^- da_s is finite. Then

∫_0^∞ f_s da_s = ∫_0^{a_∞} f_{c_s} ds  (3.2.1)

Proof: (a) For t ≥ a_{t_∞} the set {s: a_s > t} is empty and c_t = ∞ by definition. For t < a_{t_∞} this set is non-empty, and it decreases as t increases. Hence the function c_t is finite and increasing. Suppose that c_t is not right-continuous at a point t < a_{t_∞}. Then for any positive ε there exists a number h such that c_t < h < c_{t+ε}. This implies, by the definition of c_t and the increasing property of a_t, that t < a_h and, ∀ ε > 0, a_h ≤ t + ε, i.e., a_h ≤ t, and we have reached a contradiction. Hence the function c_t is right-continuous.

(b) By definition

c_{a_{s+ε}} = inf{u: a_u > a_{s+ε}} ≥ s + ε

Thus c_{a_{s+ε}} > s for any ε > 0. Also c_u > s implies u ≥ inf{t: c_t > s}, so that

a_{s+ε} ≥ inf{t: c_t > s} for any ε > 0

and by the right-continuity of a_t we thus have

a_s ≥ inf{t: c_t > s}  (3.2.2)

Let t be such that c_t > s; then by definition of c_t, a_s ≤ t, so that

a_s ≤ inf{t: c_t > s}  (3.2.3)

Relations (3.2.2) and (3.2.3) prove part (b).

(c) Assume c_t is not strictly increasing; then there exist t_0 < t_1 such that for any t ∈ [t_0,t_1), c_t = c_{t_0} ≜ c_o, a constant. By relation (b),

a_{c_o} = inf{t: c_t > c_o} ≥ t_1

and for any ε > 0,

a_{c_o - ε} = inf{t: c_t > c_o - ε} ≤ t_0,

i.e., the function a is not continuous at c_o, a contradiction. Thus the function c_t is strictly increasing. From that and (b) we get

a_s = inf{t: c_t > s}

so we can write

a_{c_s} = inf{t: c_t > c_s} = s

(d) Let f_t be the indicator function of the interval [0,s], i.e., f_t = I_{[0,s]}(t). We first show that relation (3.2.1) is verified for this type of function. We have

∫_0^∞ f_u da_u = a_s

Now

f_{c_u} = I_{[0,s]}(c_u) = I_{{u: c_u ≤ s}}(u)

Thus

∫_0^{a_∞} f_{c_u} du = ∫_0^{a_∞} I_{{u: c_u ≤ s}}(u) du = length of the interval {u: c_u ≤ s} = inf{t: c_t > s} = a_s

where the last equality follows by (b). Hence relation (3.2.1) is verified in the case where f_t is an indicator function. This implies (see the end of the proof of

Theorem 12-VII of [M1]) that this relation (3.2.1) is verified for all bounded Borel functions. The fact that any positive Borel function is the limit of an increasing sequence of bounded Borel functions, together with the monotone convergence theorem, shows that relation (3.2.1) is true for any positive Borel function. If f is now any Borel function we apply (3.2.1) to f^+ and f^- to get the desired result, as the difference ∫_0^∞ f_t^+ da_t - ∫_0^∞ f_t^- da_t is well defined by assumption. □

We can now prove the desired result on martingale representation. Under the measure P let (N_t) be a regular CP of independent increments with finite mean m_t. Recall that by Theorem 2.4.8 the martingale (M_t = N_t - m_t) ∈ M^2(P,N_t) and <M>_t = m_t.

Theorem 3.2.2: Let (N_t) be the CP described just above. Then any martingale (X_t) ∈ M^2(P,N_t) has the form

X_t = ∫_0^t F_s dM_s  (3.2.4)

where (M_t = N_t - m_t) and (F_t) is a process belonging to H(N_t) and such that

E∫_0^t F_s^2 dm_s < ∞ for each t ∈ ℝ_+  (3.2.5)

Proof: Define t_∞ and c_t as in Lemma 3.2.1, where we now use the continuous mean function m_t in place of a_t (see

Figure 3.2.3). For each t, c_t, being a constant, is trivially a stopping time. Define

N_t* = N_{c_t},  M_t* = M_{c_t},  X_t* = X_{c_t}

and, for the family of σ-algebras, N*_t = N_{c_t}. We obviously have

N*_t = σ(N_u, 0 ≤ u ≤ c_t) = σ(N_{c_v}, 0 ≤ c_v ≤ c_t) = σ(N*_v, 0 ≤ v ≤ t)  (3.2.6)

where the last equality follows because c_t is strictly increasing by Lemma 3.2.1(c). Symmetrically, for each t, m_t is a stopping time and

N_t = N*_{m_t}  (3.2.7)

Consider now the two following cases.

Case 1: t_∞ is infinite (m_{t_∞} may be finite or infinite); let T* = [0, m_{t_∞}).

Case 2: t_∞ is finite (m_{t_∞} is then also finite); let T* = [0, m_{t_∞}].

We show now that in these two cases (M_t*) and (X_t*) are (N*_t) martingales with E(M_t*)^2 < ∞ and E(X_t*)^2 < ∞ for t ∈ T*.

Case 1: By Lemma 3.2.1(a), c_t is finite for t ∈ T*, and by the Optional Sampling Theorem (X_t*) and (M_t*) are (N*_t) martingales. Clearly E(M_t*)^2 and E(X_t*)^2 are finite for t ∈ T*.

Case 2: By Proposition 2.3.6 the CP (N_t) is a.s. constant as a function of time for t ≥ t_∞; this implies N_t = N_{t_∞} for t ≥ t_∞, so that the two martingales (M_t) and (X_t) can respectively be expressed as (M_t = E(M_{t_∞} | N_t)) and (X_t = E(X_{t_∞} | N_t)); by Theorem 1.5.4 they are uniformly integrable, and by the Optional Sampling Theorem we have for t ∈ ℝ_+ (although c_t = ∞ for t ≥ m_{t_∞}) that (M_t* = E(M_{t_∞} | N*_t)) and (X_t* = E(X_{t_∞} | N*_t)) belong to M(N*_t). For t ≥ m_{t_∞}, M_t* = M_∞* = M_{t_∞} and X_t* = X_∞* = X_{t_∞}. Hence we only have to consider the index set T*.

Now by Lemma 3.2.1(c) one gets

N_t* = M_t* + m_{c_t} = M_t* + t  (3.2.8)

i.e., by Corollary 2.6.2, (N_t*) is a Poisson process with rate one. We have seen just above that (X_t*) is an (N*_t) martingale with E(X_t*)^2 < ∞ for t ∈ T*. Furthermore (see (3.2.6)) N*_t = σ(N*_u, 0 ≤ u ≤ t), so that by Lemma 3 of [B1] there exists a process (F_t*) ∈ H(N*_t) such that for t ∈ T*

E∫_0^t (F_s*)^2 ds < ∞  (3.2.9)

and

X_t* = ∫_0^t F_s* dM_s* = ∫_0^t F_s* dN_s* - ∫_0^t F_s* ds  (3.2.10)

Define

F_t = F*_{m_t}

By (3.2.7) the process (F_t) is adapted to (N_t), and in fact (F_t) ∈ H(N_t) since m_t is a continuous function and (F_t*) ∈ H(N*_t). By Lemma 3.2.1(c) and (d) we can write

∫_0^{c_t} F_s dm_s = ∫_0^{c_t} F*_{m_s} dm_s = ∫_0^∞ I_{{s ≤ c_t}}(s) F*_{m_s} dm_s
= ∫_0^{m_∞} I_{{s ≤ c_t}}(c_s) F*_{m_{c_s}} ds = ∫_0^t F_s* ds

For t ∈ T*, t ≤ m_{t_∞}, so

∫_0^{c_t} F_s dm_s = ∫_0^t F_s* ds  (3.2.11)

Similarly we get

∫_0^{c_t} F_s^2 dm_s = ∫_0^t (F_s*)^2 ds

Hence by (3.2.9)

E∫_0^{c_t} F_s^2 dm_s < ∞  (3.2.12)

Also

∫_0^{c_t} F_s dN_s = Σ_{s≤c_t} F*_{m_s} ΔN_s = Σ_{J_n≤c_t} F*_{m_{J_n}} ΔN_{J_n} = ∫_0^t F_u* dN_u*  (3.2.13)

where we have used the change of variables s = c_u. Therefore by (3.2.10), (3.2.11) and (3.2.13) we have obtained

X_t* = X_{c_t} = ∫_0^{c_t} F_s dN_s - ∫_0^{c_t} F_s dm_s = ∫_0^{c_t} F_s dM_s  (3.2.14)

which shows, together with (3.2.12), the desired result for all t in the range of c_t. If t is not in the range of c_t, then t ∈ [t_0,t_1), where t_0 and t_1 = sup{t: m_t = m_{t_0}} delimit a flat of m_t. We include here Case 2, where t_0 = t_∞ and t_1 = ∞. Note that t_1 belongs to the range of c_t (by right-continuity of c_t, see Lemma 3.2.1(a)); hence by (3.2.14) (if t_1 = ∞, X_{t_1} = X_{t_∞} and M_{t_1} = M_{t_∞} are well defined, see Case 2)

X_{t_1} = ∫_0^{t_1} F_s dM_s  (3.2.15)

Let t ∈ [t_0,t_1). By Proposition 2.3.6, N_{t_1} = N_t a.s., so that the σ-algebras N_{t_1} and N_t coincide and therefore

[Figure 3.2.3: graph of the mean function m_t and of the associated change of time c_t (figure not reproduced).]

X_t = E(X_{t_1} | N_t) = E(X_{t_1} | N_{t_1}) = X_{t_1}  (3.2.16)

Similarly

M_t = M_{t_1}  (3.2.17)

Also, for t ∈ [t_0,t_1),

F_t = F*_{m_t} = F*_{m_{t_0}} = F_{t_0}

Hence we also have in this case

X_t = X_{t_1} = ∫_0^{t_1} F_s dM_s = ∫_0^t F_s dM_s

where the first equality follows by (3.2.16), the second by (3.2.15) and the last one by (3.2.17). □

We remark the following. If (N_t) is now any regular CP with (N_t) ICR (A_t), then we can define a stochastic change of time by

c_t = inf{s: A_s > t}, or c_t = ∞ if the above set is empty,

and using the notation of the theorem we also have that (N_t* = M_t* + t) is a Poisson process with rate one. But now N*_t is not necessarily given by σ(N*_u, 0 ≤ u ≤ t) (we have N*_t ⊃ σ(N*_u, 0 ≤ u ≤ t)). The (N*_t) martingale (X_t*) is not

necessarily σ(N*_u, 0 ≤ u ≤ t) measurable, so that we cannot in this case apply Brémaud's result to express the martingale (X_t*) as a stochastic integral with respect to (M_t*), i.e., express (X_t) as a stochastic integral with respect to (M_t).

Later on we will need the apparently more general result:

Corollary 3.2.4: Let (N_t) be a regular CP of independent increments with finite mean m_t. Suppose T is an (N_t) stopping time. Then if (X_t) ∈ M^2(N_{t∧T}) there exists a process (F_t) ∈ H(N_t) such that

E∫_0^t F_s^2 dm_s < ∞ for each t ∈ ℝ_+,

F_t = F_t I_{{t≤T}}

and

X_t = ∫_0^t F_s dM_s

where (M_t = N_t - m_t).

Proof: By assumption (X_t) ∈ M^2(N_{t∧T}), and by Lemma A.2.1 we also have (X_t) ∈ M^2(N_t). Thus by Theorem 3.2.2 there exists a process (F_t) ∈ H(N_t) such that

E∫_0^t F_s^2 dm_s < ∞  and  X_t = ∫_0^t F_s dM_s

For t ≥ T, X_t = X_T, so that one can take F_t = F_t I_{{t≤T}}. □

As a consequence of Theorem 3.2.2 we also have

Corollary 3.2.5: Let (N_t) be a regular CP of independent increments with finite mean m_t. Then the family (N_t) is free of times of discontinuity.

Proof: By assumption the CP (N_t) is regular, so that the times of jump of (N_t) are totally inaccessible (see Definition 2.4.1); the mean m_t is continuous by Theorem 2.4.7, so that the times of jump of the martingale (M_t = N_t - m_t) are also totally inaccessible. Let (S_n) be any increasing sequence of stopping times. We have to show that ∨_n N_{S_n} = N_{lim_n S_n}. Let Λ be any set belonging to N_{lim_n S_n} and define the bounded martingale

X_t = E(I_Λ | N_t)

By Theorem 3.2.2 there exists a process (F_t) ∈ H(N_t) so that

X_t = ∫_0^t F_s dM_s

The above relation implies that the times of jump of (X_t) are totally inaccessible because, as we have seen above, those of (M_t) are. Hence

lim_n X_{S_n} = X_{lim_n S_n}  (3.2.19)

Now by choice of Λ

X_{lim_n S_n} = E(I_Λ | N_{lim_n S_n}) = I_Λ

Also

X_{S_n} = E(I_Λ | N_{S_n})

and by Lemma 1.5.7

lim_n X_{S_n} = E(I_Λ | ∨_n N_{S_n})

so that (3.2.19) is equivalent to

E(I_Λ | ∨_n N_{S_n}) = I_Λ

which implies that any Λ ∈ N_{lim_n S_n} belongs to ∨_n N_{S_n}, i.e., N_{lim_n S_n} ⊂ ∨_n N_{S_n}. But we have, because (S_n) is an increasing sequence of stopping times, ∨_n N_{S_n} ⊂ N_{lim_n S_n}, so that the desired result ∨_n N_{S_n} = N_{lim_n S_n} has been obtained. □
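To make the change of time underlying this section concrete, here is a small numerical sketch (our addition) of formula (3.2.1) of Lemma 3.2.1 for an assumed example: a continuous mean a_t = m_t = t² on [0,2], whose inverse is c_t = √t on [0,4]. Both sides of (3.2.1) are approximated by midpoint sums and compared with the closed-form value ∫_0^2 2t cos t dt = 4 sin 2 + 2 cos 2 - 2 (integration by parts):

```python
import math

m = lambda t: t * t                 # assumed continuous increasing a_t = m_t
c = lambda t: math.sqrt(t)          # its inverse: c_t = inf{s: m_s > t}
f = lambda t: math.cos(t)           # an arbitrary bounded Borel integrand

n = 200_000
h = 2.0 / n                         # grid on [0, 2] for the Stieltjes side
left = sum(f((i + 0.5) * h) * (m((i + 1) * h) - m(i * h)) for i in range(n))

H = 4.0 / n                         # grid on [0, m_2] = [0, 4] for the time side
right = sum(f(c((i + 0.5) * H)) * H for i in range(n))

# exact value of the integral, by parts: ∫ 2t cos t dt = 2t sin t + 2 cos t
exact = 4.0 * math.sin(2.0) + 2.0 * math.cos(2.0) - 2.0
assert abs(left - exact) < 1e-6
assert abs(right - exact) < 1e-6
```

The two Riemann sums approximate ∫_0^∞ f_s dm_s and ∫_0^{m_∞} f_{c_s} ds respectively, and their agreement is the content of (3.2.1).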

3.3 LIKELIHOOD RATIO REPRESENTATION

MAIN RESULT

With respect to the measure P* carried on (Ω,F), let (N_t) be a regular (for (N_t)) CP of independent increments with finite mean m_t. Denote by P the restriction of the measure P* to the σ-algebra N_∞, and suppose P_o is another measure defined on (Ω,N_∞) which is absolutely continuous with respect to P. It is then meaningful to define the process

L_t = E(dP_o/dP | N_t)

which is a uniformly integrable martingale and where dP_o/dP is the limit a.s. and in the mean of L_t as t goes to ∞ (see Theorem 1.5.4). Observe that L_t is the likelihood ratio for the interval of observation [0,t] (see Section 3.4; [D6], Chapter VIII). The following theorem gives a description of this martingale.

Theorem 3.3.1: Let (N_t) and (L_t) be the two processes defined above. Assume that T is a stopping time with

Property (H): There exists an increasing sequence of stopping times (T_n) such that E(ln L_{T_n})^2 < ∞ for each n and T = lim_n T_n a.s.

Then there exists a positive process (F_t) ∈ H(N_t) such that

∫_0^{t∧T} F_s dm_s < ∞

and

L_{t \wedge T} = \Big( \prod_{J_n \le t \wedge T} F_{J_n} \Big) \exp\Big( m_{t \wedge T} - \int_0^{t \wedge T} F_s \, dm_s \Big)   (3.3.1)

where J_n is the time of the nth jump of (N_t), and the product term \prod_{J_n \le t \wedge T} F_{J_n}, when empty (i.e., for t \wedge T < J_1), is set equal to one by convention.

Remark 3.3.2: Let T, T' be two stopping times with Property (H) and (F_t), (F'_t) their corresponding positive processes. The continuous parts of (L_{t \wedge T}) and (L_{t \wedge T'}) are a.s. equal on the set {t < T \wedge T'}. Hence, from (3.3.1), we have

\int_0^{t \wedge T \wedge T'} F_s \, dm_s = \int_0^{t \wedge T \wedge T'} F'_s \, dm_s a.s.

i.e., F = F' a.s. on the set {t < T \wedge T'}. Note that the values of both these integrals are not affected by a change of the values of (F_t) and (F'_t) on the intervals of constancy of m_t. By our convention (Remark 3.1.5), F_t = F'_t = 1 on these intervals.

The stopping time T, which is the first time after which the likelihood ratio (L_t) can behave badly, may take the value +∞. In fact it is desirable for T to be as large as possible. In the next section, where we solve the detection problem, we will identify the process (F_t) when the measure P_0 is generated by another CP. Brémaud

(Theorem 5.2.i of [B1]) states the above result in the case where the CP (N_t) is a Poisson process with rate one, but without assuming Property (H). His proof is clearly wrong, and we will see later on, by an example, that at least an assumption of the type E \ln^- L_{T_n} < ∞ (using the notation of the above theorem) is necessary to prove the result by the technique used. We delay a more extensive discussion and explanation of the above theorem and Property (H), as this can best be achieved by first providing a proof of the result. This proof, which requires two additional lemmas, is long, and for clarity we outline it now:

Step 1: We show that there exists an increasing sequence of stopping times (S_n) converging to T such that (a) the process (Z^n_t = \ln L_{t \wedge S_n}) is a regular supermartingale of class (D), and (b) the martingale (Y^n_t) in the Doob-Meyer decomposition of (Z^n_t) belongs to M^2(P,N_t). This part will be demonstrated with the help of Lemma 3.3.3 given below.

Step 2: By Corollary 3.2.4 we then express (Y^n_t) as a stochastic integral with respect to the martingale (M_t = N_t - m_t) and get

L_{t \wedge S_n} = \exp\Big( \int_0^t G_s I_{\{s \le S_n\}} \, dN_s - B^n_t \Big)   (3.3.2)

where (B^n_t) \in V_c and (G_t) \in H(N_t). Continuity of (B^n_t) follows from the regularity of both the CP (N_t) and the

supermartingale (Z^n_t), and is necessary for the next step.

Step 3: By Lemma 3.3.4 below, the process (B^n_t) is of the form

B^n_t = \int_0^t [\exp(G_s I_{\{s \le S_n\}}) - 1] \, dm_s

Introducing the above relation in (3.3.2) and showing that we can take the limit, we obtain

L_{t \wedge T} = \exp\Big( \int_0^{t \wedge T} G_s \, dN_s + m_{t \wedge T} - \int_0^{t \wedge T} \exp(G_s) \, dm_s \Big)

The final result follows then by letting F_t = \exp(G_t).

Lemma 3.3.3: Let (F_t) be an increasing family of σ-algebras and X_∞ be a nonnegative integrable random variable measurable with respect to F_∞. Define the uniformly integrable martingale (X_t = E(X_∞ | F_t)) and the process (Z_t = \ln X_t). Then

(a) The process ((Z^+_t)^2) is of class (D).

(b) The two following statements are equivalent:
(1) (Z_t) is a supermartingale of class (D);
(2) E \ln^- X_∞ < ∞.

(c) If E(\ln^- X_∞)^2 < ∞ then (Z_t) is a supermartingale of class (D). Furthermore the process (Z^2_t) is also of class (D). In particular, for any stopping time T, E Z^2_T < ∞.

Proof: (a) Recall the relation

0 \le (\ln x)^2 \le 4x for x \ge 1

Now Z^+_t = \ln(X_t \vee 1). Hence

(Z^+_t)^2 = [\ln(X_t \vee 1)]^2 \le 4(X_t \vee 1)   (3.3.3)

On the set {X_t < 1}, Z^+_t = 0, while on the set {X_t \ge 1}, X_t \vee 1 = X_t, so that (3.3.3) implies

0 \le (Z^+_t)^2 \le 4X_t

This shows that the process ((Z^+_t)^2) is of class (D), because the martingale (X_t) is uniformly integrable and hence of class (D) (see Remark 1.4.2(b) and (c)).

(b) (1) implies (2): If (Z_t) is a supermartingale of class (D) then so is (Z_t \wedge 0 = -Z^-_t) (see Theorem 6-V of [M1]; Proposition IV.5.1 of [N1]). Hence (Z^-_t) is a submartingale of class (D), a fortiori uniformly integrable (see Remark 1.4.2(a)). So by the supermartingale convergence theorem Z^-_t converges a.s. and in the mean to an integrable random variable, say Z^-_∞, as t goes to ∞. Now it follows from the continuity and monotonicity of the logarithm that

Z^-_∞ = \ln^- X_∞   (3.3.4)

(even on the set {X_∞ = 0}); hence E \ln^- X_∞ = E Z^-_∞ < ∞.

(2) implies (1): We have

Z^-_t = -\ln(X_t \wedge 1)   (3.3.5)

By Proposition IV.5.1 of [N1], the process (X_t \wedge 1) is a supermartingale (this is in fact a direct consequence of the Jensen inequality). The martingale (X_t) is uniformly integrable, hence so is the supermartingale (X_t \wedge 1), so that

X_t \wedge 1 \ge E(X_∞ \wedge 1 | F_t)   (3.3.6)

By assumption and the Jensen inequality we get

-Z^-_t = \ln(X_t \wedge 1) \ge \ln E(X_∞ \wedge 1 | F_t) \ge E[\ln(X_∞ \wedge 1) | F_t]

i.e.,

0 \le Z^-_t \le E(\ln^- X_∞ | F_t)   (3.3.7)

Now the RHS of (3.3.7) is uniformly integrable (Theorem 1.5.4) and hence of class (D) (Remark 1.4.2(b)), so that the process (Z^-_t) is also of class (D) (Remark 1.4.2(c)), and by Theorem 6-V of [M1] (Z^-_t) is then a positive submartingale of class (D). Now the relation

Z^+_t \le (Z^+_t)^2 + \tfrac{1}{4}

together with part (a) and Remark 1.4.2(c) shows that (Z^+_t) is of class (D), and consequently so is the process (Z_t = Z^+_t - Z^-_t). Then by Theorem 6-V of [M1] the process (Z_t = \ln X_t) is a supermartingale of class (D).

(c) The relation E(\ln^- X_∞)^2 < ∞ obviously implies E \ln^- X_∞ < ∞. We have just shown above that this last condition implies that (Z^-_t) is a submartingale of class (D), hence uniformly integrable (Remark 1.4.2(a)). We can then write (Theorem 13-VI of [M1])

Z^-_t \le E(Z^-_∞ | F_t)   (3.3.8)

Now by the Jensen inequality and relations (3.3.4) and (3.3.8) we have

0 \le (Z^-_t)^2 \le [E(Z^-_∞ | F_t)]^2 \le E[(Z^-_∞)^2 | F_t] = E[(\ln^- X_∞)^2 | F_t]

where the RHS exists by assumption and is a martingale of class (D) (Theorem 1.5.4 and Remark 1.4.2(b)). Thus the above inequality shows that the process ((Z^-_t)^2) is of class (D) (Remark 1.4.2(c)). Then, since Z^2_t = (Z^+_t)^2 + (Z^-_t)^2, (Z^2_t) is of class (D) by virtue of part (a). □

The second lemma is essentially already contained in Lemma 6.1 of [K1]. Our result is more general in the sense that we do not require local martingales to be square integrable, nor do we assume the underlying probability space to be generated by a Hunt process. On the other hand it is more restrictive, as we only consider processes which belong to V.

Lemma 3.3.4: Let (N_t) be a CP regular with respect to a family (F_t) and denote its (F_t) ICR by (A_t). Suppose that

the process (B_t) belongs to V and that (F_t) is a predictable (with respect to (F_t)) process. If the process (X_t) defined by

X_t = \exp\Big( \int_0^t F_s \, dN_s - B_t \Big)   (3.3.9)

is a local martingale, then

B_t = \int_0^t [\exp(F_s) - 1] \, dA_s

Proof: Let

X_t = \exp Z_t   (3.3.10)

where

Z_t = \int_0^t F_s \, dN_s - B_t   (3.3.11)

and apply the change of variables formula (Theorem 1.9.14) to the RHS of (3.3.10). We get

X_t = 1 + \int_0^t X_{s-} \, dZ_s + \sum_{0 < s \le t} [\exp(Z_s) - \exp(Z_{s-}) - \exp(Z_{s-}) \Delta Z_s]   (3.3.12)

By relation (3.3.11): \Delta Z_s = F_s \Delta N_s. Thus:

\sum_{0 < s \le t} [\exp(Z_s) - \exp(Z_{s-}) - \exp(Z_{s-}) \Delta Z_s]

= \sum_{0 < s \le t} \exp(Z_{s-})[\exp(\Delta Z_s) - 1 - \Delta Z_s]
= \int_0^t X_{s-}[\exp(F_s) - 1] \, dN_s - \int_0^t X_{s-} F_s \, dN_s   (3.3.13)

Now:

\int_0^t X_{s-} \, dZ_s = \int_0^t X_{s-} F_s \, dN_s - \int_0^t X_{s-} \, dB_s   (3.3.14)

Introducing (3.3.13) and (3.3.14) into (3.3.12) we obtain

X_t = 1 + \int_0^t X_{s-}[\exp(F_s) - 1] \, dN_s - \int_0^t X_{s-} \, dB_s   (3.3.15)

or, taking into account the relation N_t = M_t + A_t,

X_t - 1 - W_t - Y_t = \int_0^t X_{s-} \exp(F_s) \, dA_s - \int_0^t X_{s-} \, d(A_s + B_s)   (3.3.16)

where

W_t = -\int_0^t X_{s-} \, dM_s

and

Y_t = \int_0^t X_{s-} \exp(F_s) \, dM_s

The process (X_{t-}) \in H (see Remark 1.9.10(b)), so that the process (W_t) is a local martingale (Theorem 1.9.11). As we will show later on, the process (Y_t) is also a local martingale. Thus (3.3.16) implies that the process

\Big( \int_0^t X_{s-} \exp(F_s) \, dA_s - \int_0^t X_{s-} \, d(A_s + B_s) \Big) \in L \cap V_c

and by Lemma 1.9.4 we must then have

B_t = \int_0^t [\exp(F_s) - 1] \, dA_s

since the process (X_{t-}) is different from zero (see (3.3.9)), which is the desired result.

To complete the proof we now show that the process (Y_t) is indeed a local martingale. Let

F^n_s = F_s \wedge n   (3.3.17)

so that

X^n_t = X_t \exp\Big[ -\int_0^t (F_s - F^n_s) \, dN_s \Big]   (3.3.18)

increases to X_t by the monotone convergence theorem. Introducing the expression (3.3.9) of X_t into the above relation, we get

X^n_t = \exp\Big( \int_0^t F^n_s \, dN_s - B_t \Big)   (3.3.19)

Using the change of variables formula as above, we find

X^n_t - 1 - V^n_t = \int_0^t X^n_{s-} \exp(F^n_s) \, dA_s - \int_0^t X^n_{s-} \, d(A_s + B_s)   (3.3.20)

where

V^n_t = \int_0^t X^n_{s-} [\exp(F^n_s) - 1] \, dM_s

By Remark 1.9.10(b) the process (X^n_{t-}) \in H, and so does the process (\exp(F^n_t) - 1), by construction. Hence the process (V^n_t) is a local martingale. Let (U_m) be a sequence of stopping times which makes the process (X_{t-}) locally bounded (see Definition 1.9.9). The process (\exp(F^n_t) - 1) is bounded and 0 \le X^n_t \le X_t, so that, for each n, (U_m) is a sequence of stopping times reducing the local martingale (V^n_t). Let (R_m) be a sequence of stopping times reducing the local martingales (M_t), (X_t) and (W_t), and (S_m) the sequence of stopping times defined by

S_m = \inf\{t : \int_0^t X_{s-} \, d(A_s + B_s) > m\},  S_m = ∞ if the above set is empty.

The sequence (S_m) increases a.s. to +∞ because the process (\int_0^t X_{s-} \, d(A_s + B_s)) \in V, so that the sequence of stopping times given by (T_m = U_m \wedge R_m \wedge S_m) also increases a.s. to +∞. Note

also that, because of the continuity of the processes (A_t) and (B_t),

E \int_0^{t \wedge T_m} X_{s-} \, d(A_s + B_s) \le m

Hence (see (3.3.20), and note that, for each n, E V^n_{t \wedge T_m} = E V^n_0 = 0)

E X^n_{t \wedge T_m} - 1 = E \int_0^{t \wedge T_m} X^n_{s-} \exp(F^n_s) \, dA_s - E \int_0^{t \wedge T_m} X^n_{s-} \, d(A_s + B_s)

and by the monotone convergence theorem (recall: \lim_n F^n_s = F_s)

E \int_0^{t \wedge T_m} X_{s-} \exp(F_s) \, dA_s = E X_{t \wedge T_m} - 1 + E \int_0^{t \wedge T_m} X_{s-} \, d(A_s + B_s) < ∞   (3.3.21)

where the RHS is finite by construction of the sequence (T_m). Then (3.3.16) shows that

E Y_{t \wedge T_m} = E \int_0^{t \wedge T_m} X_{s-} \exp(F_s) \, dM_s < ∞   (3.3.22)

and consequently (recall M_t = N_t - A_t)

E \int_0^{t \wedge T_m} X_{s-} \exp(F_s) \, |dM_s| \le E Y_{t \wedge T_m} + 2E \int_0^{t \wedge T_m} X_{s-} \exp(F_s) \, dA_s < ∞

where the last inequality follows by (3.3.21) and (3.3.22). This and Proposition 2 of [D1] (and the remark following it), which states that if (M_t) \in A is a martingale and if (C_t) is a predictable process such that E \int_0^t |C_s| \, |dM_s| < ∞ then (\int_0^t C_s \, dM_s) is a martingale, finally shows that (Y_t = \int_0^t X_{s-} \exp(F_s) \, dM_s) is a local martingale. □

Now we go back to the

Proof of Theorem 3.3.1:

Step 1: By definition L_t = E(L_∞ | N_t), where L_∞ = dP_0/dP. Define the process Z_t = \ln L_t. Recall that (T_n) is an increasing sequence of stopping times such that E(\ln^- L_{T_n})^2 < ∞ for each n, and that T = \lim_n T_n. Let (S_n) be any sequence of stopping times with S_n \le T_n a.s. Then we also have

E(\ln^- L_{S_n})^2 < ∞ for each n   (3.3.23)

This follows directly by applying Lemma 3.3.3(c) to the stopped process (Z_{t \wedge T_n} = \ln L_{t \wedge T_n}). Define now

R_n = \inf\{t : L_t > e^n\},  R_n = ∞ if the above set is empty.

(R_n) is a sequence of stopping times increasing to ∞

because (L_t) is a right-continuous martingale, and by Theorem 3-VI of [M1] the sample paths of such a martingale are a.s. bounded on every compact interval. Let

S_n = T_n \wedge R_n

Clearly (S_n) is a sequence of stopping times increasing to T, with S_n \le T_n for each n. Define then L^n_t = L_{t \wedge S_n}, Z^n_t = Z_{t \wedge S_n} and N^n_t = N_{t \wedge S_n}. By relation (3.3.23) we have E(\ln^- L_{S_n})^2 < ∞. The optional sampling theorem and Lemma 3.3.3(c) imply that (Z^n_t) is a (N^n_t) supermartingale of class (D) and that for any stopping time Q

E(Z^n_Q)^2 < ∞   (3.3.24)

We prove now that the supermartingale (Z^n_t) is regular. Let (V_m) be any increasing sequence of stopping times converging to a bounded stopping time V. We have (Theorem 13-VI of [M1])

L^n_{V_m} = E(L^n_V | N_{V_m})

By Lemma 1.5.7

\lim_m L^n_{V_m} = E(L^n_V | \vee_m N_{V_m})   (3.3.25)

By Corollary 3.2.5 the family (N_t) is free of times of discontinuity. Then obviously the family (N^n_t) has the same property, so that

\vee_m N^n_{V_m} = N^n_V

and therefore (3.3.25) implies

\lim_m L^n_{V_m} = E(L^n_V | N^n_V) = L^n_V

Consequently (by continuity of the logarithm)

\lim_m Z^n_{V_m} = Z^n_V

and, because (Z^n_t) is of class (D), we can interchange limit and expectation operations and finally get

\lim_m E Z^n_{V_m} = E Z^n_V

which precisely means that (Z^n_t) is a regular supermartingale (Definition 1.7.12). Denote now the unique Doob-Meyer decomposition of (Z^n_t) by

Z^n_t = Y^n_t - A^n_t   (3.3.26)

where (Y^n_t) is a uniformly integrable martingale and (A^n_t) a continuous natural integrable increasing process (see Theorem 1.7.14(a) and (b)). We want to show now that (Y^n_t) \in M^2(P,N_t). By construction of the sequence (R_n) and the fact that S_n \le R_n we have

Z^n_t \le n \vee Z_{S_n}

which clearly implies

0 \le Z^{n+}_t \le n + Z^+_{S_n}   (3.3.27)

Hence

[\sup_t Z^{n+}_t]^2 \le 2[n^2 + (Z^+_{S_n})^2]

and by (3.3.24)

E[\sup_t Z^{n+}_t]^2 < ∞   (3.3.28)

Then relations (3.3.24) and (3.3.28) allow us to apply Lemma 2.2.2(c) to the supermartingale (Z^n_t) and get the desired result (Y^n_t) \in M^2(P,N_t). In résumé, we have obtained an increasing sequence of stopping times (S_n) converging to T such that (Z^n_t = \ln L_{t \wedge S_n}) is a regular supermartingale of class (D) and the martingale (Y^n_t) in the Doob-Meyer decomposition of (Z^n_t) belongs to M^2(P,N_t).

Step 2: By Corollary 3.2.4 there exists a process (G^n_t) \in H(N_t) such that E \int_0^t (G^n_s)^2 \, dm_s < ∞ for each t and

Y^n_t = \int_0^t G^n_s I_{\{s \le S_n\}} \, dM_s   (3.3.29)

By Lemma 1.7.10, for m \le n, Z^m_t = Y^n_{t \wedge S_m} - A^n_{t \wedge S_m} is a Doob-Meyer decomposition of (Z^m_t), so that by the uniqueness of (3.3.26) we must have Y^m_t = Y^n_{t \wedge S_m}.

Hence we can define a process (G_t) by

G_t = G^n_t for t \le S_n

so that

Y^n_t = \int_0^t G_s I_{\{s \le S_n\}} \, dM_s   (3.3.30)

Using the relation M_t = N_t - m_t we get

Y^n_t = \int_0^t G_s I_{\{s \le S_n\}} \, dN_s - \int_0^t G_s I_{\{s \le S_n\}} \, dm_s

and we can rewrite (Z^n_t) as (see (3.3.26))

L_{t \wedge S_n} = \exp\Big( \int_0^t G_s I_{\{s \le S_n\}} \, dN_s - B^n_t \Big)   (3.3.31)

where

B^n_t = \int_0^t G_s I_{\{s \le S_n\}} \, dm_s + A^n_t   (3.3.32)

Now by Corollary 3.2.4, E \int_0^t G^2_s I_{\{s \le S_n\}} \, dm_s < ∞ for each t. Both the processes (m_t) and (A^n_t) are continuous, because respectively the CP (N_t) and the supermartingale (Z^n_t) are regular. Hence (B^n_t) \in V_c.

Step 3: By Lemma 3.3.4 we must have

B^n_t = \int_0^t [\exp(G_s I_{\{s \le S_n\}}) - 1] \, dm_s

or

B^n_t = \int_0^t [\exp(G_s) - 1] I_{\{s \le S_n\}} \, dm_s   (3.3.33)

and ((B^n_t) \in V_c)

\int_0^t \exp(G_s) I_{\{s \le S_n\}} \, dm_s < ∞   (3.3.34)

Introducing (3.3.33) in (3.3.31) we get

L_{t \wedge S_n} = \exp\Big( \int_0^t G_s I_{\{s \le S_n\}} \, dN_s + m_{t \wedge S_n} - \int_0^t \exp(G_s) I_{\{s \le S_n\}} \, dm_s \Big)   (3.3.35)

Now we take the limit.

LHS of (3.3.35): we can write

L_{t \wedge S_n} = E(L_T | N_{t \wedge S_n})

and by Lemma 1.5.7

\lim_n L_{t \wedge S_n} = E(L_T | \vee_n N_{t \wedge S_n})

The family (N_t) is free of times of discontinuity (Corollary 3.2.5), i.e., \vee_n N_{t \wedge S_n} = N_{t \wedge T}; therefore

\lim_n L_{t \wedge S_n} = E(L_T | N_{t \wedge T}) = L_{t \wedge T}   (3.3.36)

RHS of (3.3.35):

\lim_n \int_0^t G_s I_{\{s \le S_n\}} \, dN_s = \lim_n \int_0^t G^+_s I_{\{s \le S_n\}} \, dN_s - \lim_n \int_0^t G^-_s I_{\{s \le S_n\}} \, dN_s
= \int_0^t G^+_s I_{\{s \le T\}} \, dN_s - \int_0^t G^-_s I_{\{s \le T\}} \, dN_s

where the last equality follows by the monotone convergence theorem. Hence

\lim_n \int_0^t G_s I_{\{s \le S_n\}} \, dN_s = \int_0^{t \wedge T} G_s \, dN_s   (3.3.37)

Similarly we get

\lim_n \int_0^t I_{\{s \le S_n\}} \, dm_s = m_{t \wedge T}   (3.3.38)

and

\lim_n \int_0^t \exp(G_s) I_{\{s \le S_n\}} \, dm_s = \int_0^{t \wedge T} \exp(G_s) \, dm_s   (3.3.39)

Hence we finally get

L_{t \wedge T} = \exp\Big( \int_0^{t \wedge T} G_s \, dN_s + m_{t \wedge T} - \int_0^{t \wedge T} \exp(G_s) \, dm_s \Big)

or

L_{t \wedge T} = \Big( \prod_{J_n \le t \wedge T} F_{J_n} \Big) \exp\Big( m_{t \wedge T} - \int_0^{t \wedge T} F_s \, dm_s \Big)

where we have defined F_s = \exp(G_s), and relation (3.3.34) implies

\int_0^{t \wedge T} F_s \, dm_s < ∞   □

DISCUSSION OF ASSUMPTIONS

We use the notation introduced in Theorem 3.3.1 and its proof: L_t = E(dP_0/dP | N_t) and Z_t = \ln L_t. In this theorem we assume the existence of a stopping time T with

Property (H): There exists an increasing sequence of stopping times (T_n) such that E(\ln^- L_{T_n})^2 < ∞ for each n and T = \lim_n T_n a.s.

Consider now the weaker condition

(H'): There exists an increasing sequence of stopping times (T_n) such that E \ln^- L_{T_n} < ∞ for each n and T = \lim_n T_n a.s.

By Lemma 3.3.3(b), (H') is a necessary and sufficient condition for the stopped process (Z_{t \wedge T_n}) to be a supermartingale of class (D) for each n. Thus (H') is the weakest condition which allows us to carry out the first part (part (a)) of Step 1. We give now a concrete example where this assumption (H') is not satisfied.
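The representation just obtained can be made concrete by a numerical illustration of our own (with hypothetical data): take m_t = t, so that under P the CP is a standard Poisson process, and suppose the representation holds with a constant process F_s \equiv \lambda_0 and T = ∞. Then (3.3.1) reduces to the classical likelihood ratio \lambda_0^{N_t} e^{(1-\lambda_0)t} of a rate-\lambda_0 Poisson process against rate one. The sketch below evaluates the product-integral form of (3.3.1) and checks it against this closed form.

```python
import math

def lr_representation(jump_times, t, F):
    """Evaluate (3.3.1) with m_s = s and constant F:
    L_t = (product of F over the jumps J_n <= t) * exp(t - F*t)."""
    prod = 1.0
    for J in jump_times:
        if J <= t:
            prod *= F
    return prod * math.exp(t - F * t)

jumps = [0.3, 1.1, 2.5]            # hypothetical observed jump times of (N_t)
t, lam0 = 3.0, 2.0
L = lr_representation(jumps, t, lam0)
N_t = sum(1 for J in jumps if J <= t)
closed = lam0 ** N_t * math.exp((1.0 - lam0) * t)
print(L, closed)                    # the two expressions agree
```

The agreement simply restates that, between jumps, the exponential factor of (3.3.1) evolves deterministically, while each jump contributes one factor F_{J_n}.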

Example 3.3.5: Let (N_t) be a Poisson process with constant rate λ. Denote its time of first jump by J_1. We have

P\{N_t = k\} = e^{-\lambda t} (\lambda t)^k / k!

and the set equality \{J_1 > t\} = \{N_t = 0\}, so that the probability distribution and the probability density of J_1 are respectively given by

F_{J_1}(t) = P\{J_1 \le t\} = 1 - P\{J_1 > t\} = 1 - P\{N_t = 0\} = 1 - e^{-\lambda t}
f_{J_1}(t) = \lambda e^{-\lambda t}

Define now

L_t = E(dP_0/dP | N_t)   (3.3.40)

where

dP_0/dP = \alpha \exp(-\lambda / J_1)   (3.3.41)

and \alpha is a normalizing constant making E(dP_0/dP) = 1. We want to show that in this case the random variable \ln^- L_R is not integrable for any choice of stopping time R. To do that, it is enough to consider stopping times R \le J_1, since by construction dP_0/dP is a (N_{J_1}) measurable random variable, and hence L_R = L_{J_1} for R \ge J_1. Now by Proposition A.4.1 any stopping time R \le J_1 is of the form R = J_1 \wedge a, where a is any positive constant. We have

L_R = L_{J_1 \wedge a} = dP_0/dP on the set A = \{J_1 \le a\}, and L_R = L_{J_1 \wedge a} \le c\alpha otherwise, for some finite constant c. Hence

\ln L_R \le I_A \ln(dP_0/dP) + I_{A^c} \ln(c\alpha)

so that

E \ln L_R \le E I_A \ln(dP_0/dP) + \ln(c\alpha) P\{A^c\}   (3.3.42)

Now by (3.3.41)

\ln(dP_0/dP) = \ln \alpha - \lambda / J_1

and by introducing the above relation in (3.3.42) we get

E \ln L_R \le \ln \alpha + \ln(c\alpha) - E I_A \lambda / J_1   (3.3.43)

Since

E I_A \lambda / J_1 = \int_0^a (\lambda / t) \lambda e^{-\lambda t} \, dt = +∞,

(3.3.43) shows that E \ln L_R = -∞, which is the desired result.

Brémaud's Theorem 5.2.i [B1], which consists of Theorem 3.3.1 in the case where the CP (N_t) is a Poisson process with rate one, is stated without assumption (H'). The above shows that this is a mistake. In fact the error in his proof is easy to pinpoint. Define the sequence of stopping times

V_n = \inf\{t : L_t < 1/n or L_t > n\},  V_n = ∞ if the above set is empty.

Now Brémaud's proof requires the process (\ln L_{t \wedge V_n}) to be a bounded supermartingale. But this is not necessarily so, as V_n may be a time of jump of (L_t), and hence (\ln L_{t \wedge V_n}) may not be bounded (see above example). At this point Brémaud follows too closely Duncan's proof of Theorem 3 [D3]. There the problem of detecting a stochastic signal in white noise is examined; in this situation the martingale (L_t) is continuous, so that the process (\ln L_{t \wedge V_n}) is now a bounded supermartingale, as required by the rest of the proof. It may be worth noting that when (L_t) is continuous our assumption (H) is automatically satisfied by taking

T_n = \inf\{t : L_t < 1/n\},  T_n = ∞ if the above set is empty,

so that we could recover Duncan's result (Theorem 3, [D3]) by the technique of proof of Theorem 3.3.1.

We now examine assumption (H), which is stronger than (H'). We have just seen above that (H') is the weakest assumption which implies that the process (Z_{t \wedge T_n}) is a supermartingale of class (D). Now (Step 2 of the proof) we would like to express, using Corollary 3.2.4,

the martingale in the Doob-Meyer decomposition of (Z_{t \wedge T_n}) as a stochastic integral with respect to the martingale (M_t = N_t - m_t). Hence this martingale should be square integrable, which is not necessarily the case, as Example 3.3.6 below will show. This is why we need assumption (H): it implies (see the proof of Theorem 3.3.1) the existence of a sequence of stopping times (S_n) increasing to T and such that the martingale (Y^n_t) in the Doob-Meyer decomposition of (Z_{t \wedge S_n}) is square integrable. Now the converse is also true: if there exists a sequence of stopping times (S_n) increasing to T such that the martingale (Y^n_t) in the Doob-Meyer decomposition of (Z_{t \wedge S_n}) is square integrable, then using Lemma 2.2.2(c) it is easy to show that assumption (H) is satisfied. Hence Property (H) is the weakest assumption under which Step 2 of the proof can be undertaken.

Example 3.3.6: This is a case where (H'), but not (H), is satisfied. The CP (N_t) is as in the previous Example 3.3.5. We let now

dP_0/dP = \alpha \exp(-\lambda / \sqrt{J_1})

where \alpha is again a normalizing constant which makes E(dP_0/dP) = 1. As usual, L_t = E(dP_0/dP | N_t) and Z_t = \ln L_t. It is easy to show that in this case the random variable \ln^- L_∞ is integrable, and hence by Lemma 3.3.3(b) the process (Z_t) is a supermartingale of class (D). But for any

positive stopping time R the random variable (\ln^- L_R)^2 is not integrable, i.e., assumption (H) is not satisfied.

In conclusion, if we could extend Theorem 3.2.2 (on martingale representation), and consequently Corollary 3.2.4, to non square integrable martingales, then Theorem 3.3.1 could be proved for the wider class of stopping times T satisfying the weaker assumption (H').

We have tried to generalize the above Theorem 3.3.1 in the following way. Let L^n_∞ = L_∞ + 1/n and define L^n_t = E(L^n_∞ | N_t). Clearly L^n_t = L_t + 1/n, i.e., L^n_t converges uniformly to L_t in (t,ω). Now we can apply Theorem 3.3.1 to (L^n_t), as assumption (H) is obviously satisfied, and obtain an expression of the form (3.3.1) for (L^n_t). The problem here is in taking the limit: we have not been able to show that this expression of (L^n_t) converges to the desired result.

3.4 DETECTION FORMULAS

INTRODUCTION

The problem of detection by the likelihood ratio technique is now considered. Let (Ω,F) be a measurable space on which two measures P_0 and P_1 are defined. Suppose that (N_t) is a CP defined on (Ω,F), and denote as usual by N_t the minimal σ-algebra generated by (N_u) up to and at time t. The notation E_i(·) for i = 0,1 is intended for the expectation operator with respect to the measure P_i.

Definition 3.4.1: For a (N_t) stopping time R (possibly infinite), denote by P^R_i, for i = 0,1, the restriction of the measure P_i to the σ-algebra N_R.

We have the inclusion N_R \subset F, so that if P_0 << P_1 then P^R_0 << P^R_1 and the Radon-Nikodym derivative dP^R_0/dP^R_1 is well defined. We examine now the meaning of this Radon-Nikodym derivative. In the case where the stopping time R is equal to a constant a, then N_R = N_a = σ(N_u, 0 \le u \le a), so that dP^a_0/dP^a_1 is the likelihood ratio for testing the two hypotheses H_i for i = 0,1 (H_i: P_i is the probability measure on (Ω,F)) by observations on the CP (N_t) for t \le a. The detection scheme then consists in selecting H_0 or H_1 according as dP^a_0/dP^a_1 is above or below a given threshold. Now in the case where R is a stopping time which is not a constant, we know that N_R \supset σ(N_{u \wedge R}, 0 \le u) (this follows from the fact that N_{u \wedge R} is (N_R) measurable by Theorem 1.3.4), but the reverse inclusion is not necessarily true. For this reason dP^R_0/dP^R_1 is not the likelihood ratio for our detection problem when the time of observation is the stochastic interval [0,R], as one could have conjectured. But one can interpret dP^R_0/dP^R_1 as a likelihood ratio if we assume that the information accessible to the observer is N_R, and not simply σ(N_{u \wedge R}, 0 \le u).

Let now L_∞ = dP^∞_0/dP^∞_1, i.e., L_∞ is the likelihood ratio when the time of observation of the CP (N_t) is the positive real line R_+, and define

L_t = E_1(L_∞ | N_t)   (3.4.1)

Then it is easy to see

Lemma 3.4.2:

dP^R_0 / dP^R_1 = L_R

Proof: N_∞ \supset N_R, so that for all A \in N_R we can write

P^R_0(A) = P^∞_0(A) = \int_A \frac{dP^∞_0}{dP^∞_1} \, dP_1 = \int_A L_∞ \, dP_1

and by definition of conditional expectations

\int_A L_∞ \, dP_1 = \int_A E_1(L_∞ | N_R) \, dP_1 for all A \in N_R

which implies

dP^R_0 / dP^R_1 = E_1(L_∞ | N_R) = L_R

where the last equality follows by applying the optional sampling theorem to the uniformly integrable martingale (L_t). □
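The threshold scheme described in the Introduction can be sketched concretely (an illustration of ours, with hypothetical data): for constant rates \lambda_0 and \lambda_1, the likelihood ratio of observing a Poisson stream of rate \lambda_0 against rate \lambda_1 over [0,t] is (\lambda_0/\lambda_1)^{N_t} e^{(\lambda_1-\lambda_0)t}, a special case of the formulas derived below. The helper names are our own.

```python
import math

def likelihood_ratio(n_jumps, t, lam0, lam1):
    """dP_0^t / dP_1^t for constant-rate Poisson hypotheses."""
    return (lam0 / lam1) ** n_jumps * math.exp((lam1 - lam0) * t)

def decide(n_jumps, t, lam0, lam1, threshold=1.0):
    """Select H0 if the likelihood ratio exceeds the threshold, else H1."""
    return "H0" if likelihood_ratio(n_jumps, t, lam0, lam1) > threshold else "H1"

# hypothetical observation: 9 jumps in [0, 3]
print(decide(9, 3.0, lam0=3.0, lam1=1.0))  # many jumps favor the larger rate
print(decide(0, 3.0, lam0=3.0, lam1=1.0))  # no jumps favor the smaller rate
```

The threshold would be chosen from the desired error probabilities; here it is fixed at one purely for illustration.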

As a result of the likelihood ratio representation and the Girsanov and Innovation Theorems, an expression for the martingale (L_t) can be found under suitable circumstances.

LIKELIHOOD RATIO: FIRST RESULT

With the measure P_1 carried on (Ω,F), we suppose that (N_t) is a regular CP of independent increments with mean E_1 N_t = m_t. Under the measure P_0, assume that (N_t) is a CP which has an (F_t) ICR of the form (\int_0^t \lambda_s \, dm_s), where (F_t) is a family of σ-algebras such that F_t \supset N_t and (\lambda_t) \in H(F_t) is a nonnegative process.

Theorem 3.4.3: Let (N_t) be the CP described above. Assume

(a) P^∞_0 << P^∞_1. Define the uniformly integrable martingale (see (3.4.1))

L_t = E_1\Big( \frac{dP^∞_0}{dP^∞_1} \Big| N_t \Big)

(b) The stopping time T is such that there exists an increasing sequence of stopping times (T_n) for which T = \lim_n T_n a.s. and E(\ln^- L_{T_n})^2 < ∞ for each n.

(c) E_0 \int_0^∞ \lambda_s \, dm_s < ∞

Then

\frac{dP^{t \wedge T}_0}{dP^{t \wedge T}_1} = L_{t \wedge T} = \Big( \prod_{J_n \le t \wedge T} \hat\lambda_{J_n} \Big) \exp\Big( m_{t \wedge T} - \int_0^{t \wedge T} \hat\lambda_s \, dm_s \Big)   (3.4.2)

where \hat\lambda_t = E_0(\lambda_t | N_t) and J_n is the time of the nth jump of (N_t). By convention the product \prod(\cdot) = 1 for J_1 > t \wedge T.

We adopt here the same convention as in Remark 3.1.5, i.e., we set (\hat\lambda_t = 1) on the intervals of constancy of m_t. Condition (c) then insures that the process (\hat\lambda_t) is well defined. Recall that the meaning of dP^R_0/dP^R_1 has been given in the Introduction to this section. In particular, if t \le T, so that t \wedge T = t, then (3.4.2) is the likelihood ratio for our detection problem when the time of observation of (N_t) is the interval [0,t].

Proof: By assumption (c) and Proposition 2.3.7 we have E_0 N_∞ < ∞, so that by Theorem 2.3.1(b) the process (N_t - \int_0^t \lambda_s \, dm_s) is a (P_0,F_t) martingale, and by the Innovation Theorem 3.1.4 (N_t - \int_0^t \hat\lambda_s \, dm_s) is a (P_0,N_t) martingale, where \hat\lambda_t = E_0(\lambda_t | N_t). Then by the Optional Sampling Theorem the process

\Big( X_{t \wedge T} = N_{t \wedge T} - \int_0^{t \wedge T} \hat\lambda_s \, dm_s \Big)   (3.4.3)

is a (P_0,N_{t \wedge T}) martingale. Now by (a) P^∞_0 << P^∞_1, because N_∞ \subset F. Thus the martingale L_t = E_1(dP^∞_0/dP^∞_1 | N_t) is well defined. Then by assumption (b) and the Likelihood Ratio Representation Theorem 3.3.1 there exists a positive process (F_t) \in H(N_t) such that

\int_0^{t \wedge T} F_s \, dm_s < ∞   (3.4.4)

and

L_{t \wedge T} = \Big( \prod_{J_n \le t \wedge T} F_{J_n} \Big) \exp\Big( m_{t \wedge T} - \int_0^{t \wedge T} F_s \, dm_s \Big)   (3.4.5)

By Girsanov Theorem 3.1.1(b) (applied to (N_{t \wedge T})) the process

\Big( Y_{t \wedge T} = N_{t \wedge T} - \int_0^{t \wedge T} F_s \, dm_s \Big)   (3.4.6)

is a (P_0,N_{t \wedge T}) local martingale. Subtracting (3.4.3) from (3.4.6) one gets that

\Big( Y_{t \wedge T} - X_{t \wedge T} = \int_0^{t \wedge T} (\hat\lambda_s - F_s) \, dm_s \Big)   (3.4.7)

is a (P_0,N_{t \wedge T}) local martingale. By assumption (c) and Fubini's Theorem

E_0 \int_0^{t \wedge T} \hat\lambda_s \, dm_s \le E_0 \int_0^∞ \hat\lambda_s \, dm_s = E_0 \int_0^∞ \lambda_s \, dm_s < ∞   (3.4.8)

Furthermore m_t is continuous (the CP (N_t) is assumed regular, see Theorem 2.4.7), so that by (3.4.4) and (3.4.8) the RHS of (3.4.7) belongs to V_c. Hence (Y_{t \wedge T} - X_{t \wedge T}) \in L \cap V_c, and by Lemma 1.9.4 (Y_{t \wedge T} - X_{t \wedge T}) is identically zero a.s., which implies \hat\lambda = F. The result then follows by introducing this relation in (3.4.5). Finally, by Lemma 3.4.2, L_{t \wedge T} = dP^{t \wedge T}_0 / dP^{t \wedge T}_1. □

GENERALIZATION

We now take advantage of the chain rule for Radon-Nikodym derivatives to extend the previous result. For i = 0,1, with the measure P_i carried on (Ω,F), suppose that the CP (N_t) has the process (\int_0^t \lambda^i_s \, dm_s) for (F^i_t) ICR, where (F^i_t) is a family of σ-algebras with F^i_t \supset N_t, (\lambda^i_t) \in H(F^i_t) is a positive process and m_t is an increasing deterministic function with m_0 = 0. By Theorem 2.6.1 there exists a measure P which makes (N_t) a CP of independent increments with mean E N_t = m_t.

Theorem 3.4.4: For i = 0,1 let (N_t) be, under the measure P_i, the CP described above. Assume

(a) P^∞_0 << P^∞ and P^∞_1 \sim P^∞, and define for i = 0,1 the (P,N_t) martingale

L^i_t = E\Big( \frac{dP^∞_i}{dP^∞} \Big| N_t \Big)

(b) For i = 0,1 the stopping time T^i is such that there exists an increasing sequence of stopping times (T^i_n) for which T^i = \lim_n T^i_n a.s. and E(\ln^- L^i_{T^i_n})^2 < ∞ for each n. Let T = T^0 \wedge T^1.

(c) For i = 0,1

E_i \int_0^∞ \lambda^i_s \, dm_s < ∞

Then

\frac{dP^{t \wedge T}_0}{dP^{t \wedge T}_1} = \Big( \prod_{J_n \le t \wedge T} \frac{\hat\lambda^0_{J_n}}{\hat\lambda^1_{J_n}} \Big) \exp\Big( \int_0^{t \wedge T} (\hat\lambda^1_s - \hat\lambda^0_s) \, dm_s \Big)   (3.4.9)

where \hat\lambda^i_t = E_i(\lambda^i_t | N_t) for i = 0,1 and J_n is the time of the nth jump of (N_t). By convention the product \prod(\cdot) = 1 for J_1 > t \wedge T. Remark that this indeed generalizes the preceding result, with \hat\lambda^1_t \equiv 1.

Proof: By the previous Theorem 3.4.3 we get for i = 0,1

\frac{dP^{t \wedge T}_i}{dP^{t \wedge T}} = \Big( \prod_{J_n \le t \wedge T} \hat\lambda^i_{J_n} \Big) \exp\Big( m_{t \wedge T} - \int_0^{t \wedge T} \hat\lambda^i_s \, dm_s \Big)   (3.4.10)

where \hat\lambda^i_t = E_i(\lambda^i_t | N_t). Now P^∞_1 \sim P^∞, so that P^{t \wedge T}_1 \sim P^{t \wedge T}, since N_{t \wedge T} \subset F; hence

\frac{dP^{t \wedge T}_0}{dP^{t \wedge T}_1} = \frac{dP^{t \wedge T}_0}{dP^{t \wedge T}} \cdot \frac{dP^{t \wedge T}}{dP^{t \wedge T}_1}

and the result follows by a simple computation from (3.4.10). □
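The "simple computation" can be checked numerically in the constant-rate case \hat\lambda^0 \equiv a, \hat\lambda^1 \equiv b, m_t = t (an illustration of ours, with arbitrary values): dividing the two expressions (3.4.10) reproduces (3.4.9), because the factors e^{m_{t\wedge T}} cancel.

```python
import math

def ratio_to_P(n, t, lam):
    """(3.4.10) specialized: dP_i^t/dP^t for constant rate lam and m_t = t."""
    return lam ** n * math.exp(t - lam * t)

def ratio_0_to_1(n, t, a, b):
    """(3.4.9) specialized: dP_0^t/dP_1^t for constant rates a, b."""
    return (a / b) ** n * math.exp((b - a) * t)

n, t, a, b = 4, 2.5, 1.5, 0.5       # hypothetical jump count and rates
chain = ratio_to_P(n, t, a) / ratio_to_P(n, t, b)   # chain rule through P
direct = ratio_0_to_1(n, t, a, b)                    # formula (3.4.9)
print(chain, direct)                                  # the two agree
```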

INTEGRAL EQUATIONS FOR LIKELIHOOD RATIOS

We show here that the likelihood ratio of our detection problem can be obtained as the unique solution of a stochastic integral equation.

Theorem 3.4.5: The likelihood ratio dP^{t \wedge T}_0 / dP^{t \wedge T}_1 of Theorem 3.4.4 is the unique solution of the following stochastic integral equation:

Z_t = 1 + \int_0^t Z_{s-} \, dX_s   (3.4.11)

where

X_t = \int_0^{t \wedge T} \Big( \frac{\hat\lambda^0_s}{\hat\lambda^1_s} - 1 \Big) \, dN_s + \int_0^{t \wedge T} (\hat\lambda^1_s - \hat\lambda^0_s) \, dm_s   (3.4.12)

Proof: By assumption \hat\lambda^i_t, i = 0,1, is positive (see above Theorem 3.4.4) and is a.s. finite for all t (by condition (c) of Theorem 3.4.4). Furthermore the process (N_t) has a finite number of jumps in any finite interval, so that the process (\int_0^{t \wedge T} ((\hat\lambda^0_s / \hat\lambda^1_s) - 1) \, dN_s) \in V. The process (\int_0^{t \wedge T} (\hat\lambda^1_s - \hat\lambda^0_s) \, dm_s) also belongs to this class V, by assumption (c) of Theorem 3.4.4. Therefore (X_{t \wedge T}) \in V is a semimartingale with \langle X^c \rangle_{t \wedge T} = 0 (see Remark 1.8.17). Then by Theorem 1.9.15 the unique solution of (3.4.11) is given by

Z_t = \exp(X_{t \wedge T}) \prod_{s \le t \wedge T} (1 + \Delta X_s) \exp(-\Delta X_s)   (3.4.13)

Now \Delta X_{s \wedge T} = ((\hat\lambda^0_s / \hat\lambda^1_s) - 1) \Delta N_s and

\prod_{s \le t \wedge T} (1 + \Delta X_s) \exp(-\Delta X_s) = \Big( \prod_{J_n \le t \wedge T} \frac{\hat\lambda^0_{J_n}}{\hat\lambda^1_{J_n}} \Big) \exp\Big( -\int_0^{t \wedge T} \Big( \frac{\hat\lambda^0_s}{\hat\lambda^1_s} - 1 \Big) \, dN_s \Big)

Introducing the above relation in (3.4.13) gives the desired result (compare with (3.4.9)):

Z_t = \Big( \prod_{J_n \le t \wedge T} \frac{\hat\lambda^0_{J_n}}{\hat\lambda^1_{J_n}} \Big) \exp\Big( \int_0^{t \wedge T} (\hat\lambda^1_s - \hat\lambda^0_s) \, dm_s \Big) = \frac{dP^{t \wedge T}_0}{dP^{t \wedge T}_1}   □

Observe that if under the measure P_1 the CP (N_t) is a process of independent increments with mean m_t, then P_1 = P, \hat\lambda^1_t \equiv 1, and Eq. (3.4.12) becomes

X_t = \int_0^{t \wedge T} (\hat\lambda^0_s - 1) \, d(N_s - m_s)   (3.4.14)

The process (M_t = N_t - m_t) is a (P_1,N_t) martingale. Hence (3.4.14) shows that the process (X_{t \wedge T}) is a local martingale. This in turn implies by (3.4.11) that the process (Z_t) is a local martingale. This is consistent with what we have seen in the previous section (Theorem 3.4.3), since in this case Z_t = L_{t \wedge T} = E_1(dP^∞_0 / dP^∞_1 | N_{t \wedge T}), i.e., (Z_t) is in fact a uniformly integrable martingale. In the more general case the likelihood ratio (Z_t = dP^{t \wedge T}_0 / dP^{t \wedge T}_1) is

not necessarily a local martingale.

In application, Eqs. (3.4.11) and (3.4.12) give a way of implementing the computation of the likelihood ratio continuously in time. They represent recursive equations if one also obtains the best estimates (\hat\lambda^i_t) in a recursive way. The block diagram of this implementation is given in Figure 3.4.6.
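A minimal sketch of such an implementation (ours; constant conditional rates \hat\lambda^0 \equiv a, \hat\lambda^1 \equiv b and m_s = s are assumed, so the updates below are exact): between jumps (3.4.11) gives dZ = Z (b - a) ds, and at a jump of (N_t) the solution is multiplied by 1 + \Delta X = a/b. Propagating Z event by event reproduces the closed form (3.4.9).

```python
import math

def lr_recursive(jump_times, t, a, b):
    """Solve Z_t = 1 + int Z_{s-} dX_s event by event, for
    X_t = int (a/b - 1) dN_s + int (b - a) ds  (constant rates, m_s = s).
    Between jumps: dZ = Z (b - a) ds  ->  Z *= exp((b - a) * dt).
    At a jump:     Z *= 1 + (a/b - 1) = a/b."""
    Z, last = 1.0, 0.0
    for J in jump_times:
        if J > t:
            break
        Z *= math.exp((b - a) * (J - last))   # continuous part on (last, J]
        Z *= a / b                            # jump of N at J
        last = J
    Z *= math.exp((b - a) * (t - last))       # continuous part up to t
    return Z

jumps = [0.4, 0.9, 2.2]                       # hypothetical observed jump times
a, b, t = 2.0, 1.0, 3.0
Z = lr_recursive(jumps, t, a, b)
closed = (a / b) ** 3 * math.exp((b - a) * t)  # closed form (3.4.9)
print(Z, closed)                               # the two agree
```

In the genuinely recursive setting the constants a and b would be replaced, at each step, by the current filter outputs \hat\lambda^0_t and \hat\lambda^1_t of Figure 3.4.6.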

Figure 3.4.6: Block diagram of the likelihood ratio computation. The observed CP (N_t) is fed to two minimum mean square estimators producing \hat\lambda^0_t and \hat\lambda^1_t; these estimates drive the computation of the likelihood function Z_t = dP^t_0/dP^t_1 through Eqs. (3.4.11) and (3.4.12).

CONCLUSION

We mention here, for future research, some of the problems which have not been solved in this dissertation.

In Chapter 2 (Chapter 1 is a mathematical review) Counting Processes (CP) and their Integrated Conditional Rates (ICR) were examined. It was shown that, given a CP (N_t) adapted to a right-continuous increasing family of σ-algebras (F_t) and for which, as sole assumption, the random variable N_t is a.s. finite for each t, there always exists a (F_t) ICR, and this ICR is unique (Theorem 2.3.1 and Definition 2.3.2). Now given a natural increasing process (A_t) with respect to a family (F_t), there does not always exist a CP (N_t) adapted to (F_t) and for which (A_t) is the (F_t) ICR (e.g., take (A_t = 2 I_{[1,∞)}(t)); see Corollary 2.4.11). Hence the following problem:

(1) Find necessary and sufficient conditions for a natural increasing process (A_t) to be the ICR of a CP (N_t).

Then the question arises:

(2) If (A_t) is a natural increasing process which is the ICR of a CP (N_t), when is this CP unique?

If the process (A_t) is continuous then the two above problems can be reformulated as (see Theorem 2.4.8):

(1') Find necessary and sufficient conditions for (A_t) to be the natural increasing process associated with a square integrable local martingale (M_t) (i.e., A_t = \langle M \rangle_t)

such that the process (M_t + A_t) is a CP.

(2') Do there exist two distinct square integrable local martingales, (M_t) and (M^*_t), to which is associated the same natural increasing process A_t = \langle M \rangle_t = \langle M^* \rangle_t and such that both (M_t + A_t) and (M^*_t + A_t) are CP's?

In Section 2.5 we give sufficient conditions for the existence of conditional rates. It is obviously desirable to find:

(3) Necessary conditions for the existence of conditional rates.

In Chapter 3 the Girsanov Theorem was used to prove the existence of CP's with (F_t) ICR's of the form (\int_0^t \lambda_s \, dA_s), where the processes (\lambda_t) and (A_t) ((\lambda_t) \in H(F_t) is a nonnegative process and (A_t) is itself the ICR of a given CP) were such that the process (L_t) (see Theorem 3.1.1) was a martingale (which had to be uniformly integrable if one wanted to consider the positive real line instead of finite time intervals). Some strong conditions on (\lambda_t) and (A_t) which insured that (L_t) was a martingale were provided (Chapter 3, p. 133). Weaker conditions might be obtained. Hence:

(4) Find necessary and sufficient conditions on the processes (\lambda_t) and (A_t) for which the process (L_t) (see Theorem 3.1.1) is a martingale, or a uniformly integrable martingale.

The detection problem for a large class of CP's was solved. The generality of the method was limited mainly by the scope of the Martingale Representation Theorem (a basic step in proving the Likelihood Ratio Representation Theorem), which was demonstrated in the context of CP's of independent increments. Hence:

(5) Find a larger class of CP's for which the Martingale Representation Theorem is still valid.

Finally, the likelihood ratio was obtained as a function of the best estimates (\hat\lambda^i_t), i = 0,1 (Theorems 3.4.4 and 3.4.5). Thus, to get a complete solution to the detection problem:

(6) Recursive equations to compute (\hat\lambda^i_t) should be obtained.

APPENDIX

APPENDIX A.1

The following lemma does not appear in our main reference [M1] but in [M3]. For ease of reference we provide here an original proof, due to Prof. F. J. Beutler, of this result.

Lemma 1.5.7: Let (ℱ_n, n ∈ N) be an increasing family of σ-subalgebras of ℱ and ℱ_∞ be the σ-algebra generated by the union of the ℱ_n. Let (F_n, n ∈ N) be a sequence of random variables bounded in absolute value by an integrable random variable G and converging a.s. to a random variable F. Then E(F_n|ℱ_n) converges a.s. to E(F|ℱ_∞).

Proof of Lemma 1.5.7: Let U_m = inf_{n≥m} F_n and V_m = sup_{n≥m} F_n. The sequences (U_m) and (V_m) are, respectively, increasing and decreasing, and both converge a.s. to F. We also have, for n ≥ m,

U_m ≤ F_n ≤ V_m

which implies

E(U_m|ℱ_n) ≤ E(F_n|ℱ_n) ≤ E(V_m|ℱ_n)

Fix m and let n tend to infinity. By the supermartingale convergence theorem (Theorem 6-VI of [M1]), the uniformly integrable martingale E(U_m|ℱ_n) tends to E(U_m|ℱ_∞). Similarly for E(V_m|ℱ_n). Thus we get the following chain of

inequalities:

E(U_m|ℱ_∞) ≤ lim inf_n E(F_n|ℱ_n) ≤ lim sup_n E(F_n|ℱ_n) ≤ E(V_m|ℱ_∞)

Letting m go to infinity, we have by the monotone convergence theorem

E(F|ℱ_∞) ≤ lim inf_n E(F_n|ℱ_n) ≤ lim sup_n E(F_n|ℱ_n) ≤ E(F|ℱ_∞)

which implies the result. □

APPENDIX A.2

The proof of Theorem 1.7.9 will be clearer if we show first the following simple result:

Lemma A.2.1: Let (F_t) be a right-continuous increasing family of σ-algebras and T be a (F_t) stopping time. Suppose (X_t) is a (F_{t∧T}) local martingale (resp. martingale). Then (X_t) is also a (F_t) local martingale (resp. martingale).

Proof: First we show that the lemma is true when (X_t) is a uniformly integrable (F_{t∧T}) martingale. By Theorem 1.5.4, (X_t) can be expressed in this case as

X_t = E(X_T|F_{t∧T})

Define now the uniformly integrable (F_t) martingale

Y_t = E(X_T|F_t)

It suffices to show Y_t = X_t. In the first place

Y_{t∧T} = E(X_T|F_{t∧T}) = X_t

so Y_t = X_t on the set {t ≤ T}. Now T ∨ t is a stopping time (see Theorem 36-IV of [M1]) such that T ∨ t ≥ T. Hence

Y_{T∨t} = E(X_T|F_{T∨t}) = X_T

X_{T∨t} = E(X_T|F_{(T∨t)∧T}) = X_T

This shows X_t = Y_t on the set {t ≥ T}, and the lemma is verified for uniformly integrable martingales. Finally, if (X_t) is a (F_{t∧T}) local martingale, let (T_n) be a sequence of stopping times reducing (X_t), i.e., such that (X_{t∧T_n}) is a uniformly integrable (F_{t∧T∧T_n}) martingale. Then the above shows that (X_{t∧T_n}) is also a uniformly integrable (F_t) martingale, i.e., (X_t) is a (F_t) local martingale. □

Note that the above result is a kind of converse to the Optional Sampling Theorem, since the latter implies that if (Y_t) is a (F_t) martingale then (Y_{t∧T}) is a (F_{t∧T}) martingale.

We now go back to Theorem 1.7.9. This theorem appears in [M1] (Theorem 19-VII), but there is a gap in the proof which is filled by the above lemma.

Theorem 1.7.9: Let (A_t) be a natural increasing process and T be a stopping time. Then the increasing process (A_{t∧T}) is natural with respect to the two families

(F_t) and (F_{t∧T}).

Proof: If (Y_t) is a bounded positive (F_t) martingale, then by the optional sampling theorem (Y'_t = Y_{t∧T}) is a (F_{t∧T}) martingale. By Lemma A.2.1, (Y'_t) is also a (F_t) martingale, so that by the definition of a natural process (Definition 1.7.4)

E ∫_0^t Y'_s dA_s = E ∫_0^t Y'_{s−} dA_s

and, proceeding as in [M1] (Theorem 19-VII), we get

E ∫_0^{t∧T} Y_s dA_s = E ∫_0^{t∧T} Y_{s−} dA_s

The LHS of the above expression can be rewritten as

E ∫_0^{t∧T} Y_s dA_s = E ∫_0^t Y_s dA_{s∧T}

and likewise for the RHS. Hence (A_{t∧T}) is natural with respect to (F_t). Finally, since

E ∫_0^t Y_s dA_{s∧T} = E ∫_0^t Y_{s−} dA_{s∧T}

holds for any bounded (F_t) martingale, it holds in particular for bounded (F_{t∧T}) martingales, since the latter are also (F_t) martingales by Lemma A.2.1. This shows that (A_{t∧T}) is natural with respect to (F_{t∧T}). □

APPENDIX A.3

Lemma A.3.1: Let (N_t) be a CP with finite mean and (𝒩_t) the family of σ-algebras generated by (N_t). Suppose (F_t) and (G_t) are two families of σ-algebras such that

F_t ⊃ G_t ⊃ 𝒩_t

Denote by (A_t) the (F_t) ICR of (N_t) and define the process

C_t = E(A_t|G_t)    (A.3.1)

Then the process (N_t − C_t) is a (G_t) martingale.

Proof: We can write, for t ≥ s,

E(N_t − C_t|G_s) = E[N_t − E(A_t|G_t)|G_s]    (by (A.3.1))
                 = E(N_t − A_t|G_s)           (G_t ⊃ G_s)
                 = E[E(N_t − A_t|F_s)|G_s]    (F_s ⊃ G_s)
                 = E(N_s − A_s|G_s)           ((N_t − A_t) is a (F_t) martingale by Theorem 2.3.1)
                 = N_s − C_s                  (N_s is G_s-measurable, and by (A.3.1))

which shows the result. □

APPENDIX A.4

Proposition A.4.1: Let (N_t) be a CP and J_1 its time of first jump. Let R be any (𝒩_t) stopping time such that R ≤ J_1 a.s. Then R is of the form

R = J_1 ∧ c a.s.

where c is some nonnegative constant.

Corollary A.4.2: Let J_1 be as in the above Proposition. Then J_1 is totally inaccessible with respect to the family (𝒩_t) if and only if P{J_1 = a} = 0 for every nonnegative constant a.

Proof of Proposition A.4.1: The σ-algebra 𝒩_0 is given by {∅, Ω}, so that either R = 0 a.s. or R ≠ 0 a.s. In the case where R = 0 a.s. the proposition is trivially verified, so suppose that R ≠ 0 a.s. and define the positive number

c = sup{e : P{R > e} > 0}

For any 0 < b < c, the set {R > b}, which belongs to the σ-algebra 𝒩_b (see Theorem 41-IV of [M1]), is a set of positive measure, and up to sets of measure zero

{R > b} ⊂ {J_1 > b}

since R ≤ J_1 a.s. Now the set {J_1 > b} = {N_b = 0} is an atom (for a definition see [H2]) of 𝒩_b, so that one must have, modulo sets of measure zero, {R > b} = {J_1 > b} for any 0 < b < c. This implies R = J_1 a.s. on the set {R < c}, and proves the result in the case where c is infinite. When c is finite, P{R > d} = 0 for any d > c, so that R = c a.s. on the set {R ≥ c}. □

Proof of Corollary A.4.2: (⇒) By contradiction: if there exists an a such that P{J_1 = a} = p > 0, then let S_n = (a − 1/n) ∧ J_1. We have

P{lim S_n = J_1, S_n < J_1 for all n} ≥ p > 0

which shows that J_1 is not totally inaccessible, a contradiction.

(⇐) By Proposition A.4.1, any increasing sequence of stopping times inferior or equal to J_1 a.s. is of the form S_n = J_1 ∧ a_n a.s., where (a_n) is an increasing sequence of numbers. Let a = lim a_n. Then

P{lim S_n = J_1, S_n < J_1 for all n} ≤ P{J_1 = a} = 0

that is, J_1 is totally inaccessible. □

APPENDIX A.5

We give here an example of a positive discrete martingale which is not uniformly integrable. Let Ω = [0,1) and P be the Lebesgue measure on [0,1). Let F_0 = {∅, Ω} and F_n be the σ-algebra generated by the disjoint sets A^n_m, m = 0, ..., 3^n − 1, where

A^n_m = [m·3^{−n}, (m+1)·3^{−n})

Define X_0 = 1 and

X_n = 3^n · 1_{A^n_m},  with m = (3^n − 1)/2

We have

∫_{A^n_m} E(X_{n+1}|F_n) dP = ∫_{A^n_m} X_{n+1} dP = 1 if m = (3^n − 1)/2, and 0 otherwise

and

∫_{A^n_m} X_n dP = 1 if m = (3^n − 1)/2, and 0 otherwise

(the middle atom A^{n+1}_{m'}, m' = (3^{n+1} − 1)/2, is contained in the middle atom A^n_m, m = (3^n − 1)/2, and each indicator integrates to 3^{n+1}·3^{−(n+1)} = 1 and 3^n·3^{−n} = 1 respectively), which implies E(X_{n+1}|F_n) = X_n, i.e., (X_n) is a positive martingale with EX_n = 1. By the supermartingale convergence theorem the limit X_∞ = lim X_n exists a.s. In this case we clearly have X_∞ = 0 a.s., so that X_n ≠ E(X_∞|F_n) = 0, which shows, by Theorem 1.5.4, that (X_n) is not uniformly integrable.
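Since each X_n is constant on the atoms A^n_k and each atom at level n splits into exactly three atoms at level n+1, E(X_{n+1}|F_n) on A^n_k is the average of X_{n+1} over the three children, and the martingale property of this example can be verified exactly with rational arithmetic. The following check is illustrative code, not part of the original text:

```python
from fractions import Fraction

def X(n, k):
    """Value of X_n on the atom A^n_k = [k*3^(-n), (k+1)*3^(-n))."""
    # X_n equals 3^n on the middle atom, index m = (3^n - 1)/2, and 0 elsewhere.
    return Fraction(3) ** n if k == (3 ** n - 1) // 2 else Fraction(0)

def cond_exp(n, k):
    """E(X_{n+1} | F_n) on A^n_k: average of X_{n+1} over the three child atoms."""
    return sum(X(n + 1, 3 * k + j) for j in range(3)) / 3

# Martingale property E(X_{n+1} | F_n) = X_n, checked atom by atom.
martingale_ok = all(cond_exp(n, k) == X(n, k)
                    for n in range(5) for k in range(3 ** n))

# Constant mean E X_n = 1, even though X_n -> 0 off the shrinking middle atoms.
mean_x4 = sum(X(4, k) for k in range(3 ** 4)) / 3 ** 4
```

Here martingale_ok is True and mean_x4 equals 1: the whole unit mass of X_n sits on an atom of measure 3^{−n}, which is exactly the concentration that defeats uniform integrability.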

REFERENCES

A1. D. Austin, "A Sample Function Property of Martingales," Ann. Math. Stat., Vol. 37, pp. 1396-1397, 1966.

B1. P. M. Brémaud, "A Martingale Approach to Point Processes," Memorandum No. ERL-M345, Electronics Research Laboratory, University of California, Berkeley, August 1972.

B2. I. Bar-David, "Communication under the Poisson Regime," IEEE Trans. on Information Theory, Vol. IT-15, pp. 31-37, January 1969.

C1. J. R. Clark, "Estimation for Poisson Processes with Application in Optical Communication," Ph.D. Thesis, M.I.T., September 1971.

D1. C. Doléans-Dade and P. A. Meyer, "Intégrales stochastiques par rapport aux martingales locales," Séminaire de Probabilités IV, Lecture Notes in Mathematics No. 124, pp. 77-107, Springer-Verlag, Berlin, 1970.

D2. C. Doléans-Dade, "Quelques applications de la formule de changement de variables pour les semimartingales," Z. Wahrscheinlichkeitstheorie verw. Geb., 16, pp. 181-194, 1970.

D3. T. E. Duncan, "On the Absolute Continuity of Measures," Ann. Math. Stat., Vol. 41, pp. 30-38, 1970.

D4. T. E. Duncan, "Likelihood Functions for Stochastic Signals in White Noise," Information and Control, 16, pp. 303-310, 1970.

D5. J. DePree and C. C. Oehring, "Elements of Complex Analysis," Addison-Wesley, 1969.

D6. J. L. Doob, "Stochastic Processes," Wiley, New York, 1953.

F1. D. L. Fisk, "Quasi-Martingales," Trans. Amer. Math. Soc., Vol. 120, pp. 369-389, 1965.

F2. P. Frost and T. Kailath, "An Innovations Approach to Least-Squares Estimation - Part III: Nonlinear Estimation in White Gaussian Noise," IEEE Trans. on Automatic Control, Vol. AC-16, No. 3, June 1971.

G1. I. V. Girsanov, "On Transforming a Certain Class of Stochastic Processes by Absolutely Continuous Substitution of Measures," Theory of Probability and Its Applications, Vol. V, No. 3, pp. 285-301, 1960.

G2. A. F. Gualtierotti, "Some Problems Related to Equivalence of Measures: Extension of Cylinder Set Measures and a Martingale Transformation," Department of Statistics, University of North Carolina at Chapel Hill, Mimeo Series No. 834, July 1972.

H1. E. Hille, "Analytic Function Theory," Blaisdell Pub. Co., New York, 1963.

H2. P. R. Halmos, "Measure Theory," Van Nostrand, New York, 1950.

I1. K. Ito and S. Watanabe, "Transformation of Markov Processes by Multiplicative Functionals," Ann. Inst. Fourier, Grenoble, Vol. 15, No. 1, pp. 13-30, 1965.

I2. K. Ito, "On a Formula Concerning Stochastic Differentials," Nagoya Math. J., 3, pp. 55-65, 1951.

J1. G. Johnson and L. L. Helms, "Class (D) Supermartingales," Bull. Amer. Math. Soc., t. 69, pp. 59-62, 1963.

K1. H. Kunita and S. Watanabe, "On Square Integrable Martingales," Nagoya Math. J., Vol. 30, pp. 209-245, 1967.

K2. H. Kunita, "Stochastic Integrals Based on Martingales Taking Values in Hilbert Space," Nagoya Math. J., Vol. 38, pp. 41-52, 1970.

K3. T. Kailath, "A Further Note on a General Likelihood Formula for Random Signals in Gaussian Noise," IEEE Trans. on Information Theory, Vol. IT-16, pp. 393-396, July 1970.

M1. P. A. Meyer, "Probability and Potentials," Blaisdell, Waltham, Mass., 1966.

M2. P. A. Meyer, "Une majoration du processus croissant naturel associé à une surmartingale," Séminaire de Probabilités II, Lecture Notes in Mathematics No. 51, pp. 166-170, Springer-Verlag, Berlin, 1968.

M3. P. A. Meyer, "Un lemme de théorie des martingales," Séminaire de Probabilités III, Lecture Notes in Mathematics No. 88, pp. 143-144, Springer-Verlag, Berlin, 1969.

M4. P. A. Meyer, "Martingales and Stochastic Integrals I," Lecture Notes in Mathematics No. 284, Springer-Verlag, Berlin, 1972.

M5. P. A. Meyer, "Square Integrable Martingales, a Survey," Lecture Notes in Mathematics No. 190, pp. 32-37, Springer-Verlag, Berlin, 1970.

M6. P. A. Meyer, "Non Square Integrable Martingales," Lecture Notes in Mathematics No. 190, pp. 38-43, Springer-Verlag, Berlin, 1970.

M7. P. A. Meyer, "Intégrales Stochastiques I, II, III and IV," Séminaire de Probabilités I, Lecture Notes in Mathematics No. 39, pp. 77-162, Springer-Verlag, Berlin, 1967.

M8. P. A. Meyer, "A Decomposition Theorem for Supermartingales," Illinois J. of Math., t. 6, pp. 193-205, 1962.

N1. J. Neveu, "Mathematical Foundations of the Calculus of Probability," Holden-Day, San Francisco, 1965.

P1. E. Parzen, "Stochastic Processes," Holden-Day, San Francisco, 1962.

R1. W. Rudin, "Principles of Mathematical Analysis," McGraw-Hill, New York, 1953.

R2. I. Rubin, "Regular Point Processes and their Detection," IEEE Trans. on Information Theory, Vol. IT-18, No. 5, pp. 547-557, September 1972.

R3. K. Murali Rao, "On Decomposition Theorems of Meyer," Math. Scand., 24, pp. 66-78, 1969.

R4. B. Reiffen and H. Sherman, "An Optimum Demodulator for Poisson Processes: Photon Source Detectors," Proc. IEEE, Vol. 51, pp. 1316-1320, October 1963.

S1. D. L. Snyder, "Filtering and Detection for Doubly Stochastic Poisson Processes," IEEE Trans. on Information Theory, Vol. IT-18, No. 1, pp. 91-102, January 1972.

S2. D. L. Snyder, "Smoothing for Doubly Stochastic Poisson Processes," IEEE Trans. on Information Theory, Vol. IT-18, No. 5, pp. 558-562, September 1972.

S3. D. L. Snyder, "Information Processing for Observed Jump Processes," Information and Control, Vol. 22, No. 1, pp. 69-78, 1973.

S4. A. V. Skorokhod, "Studies in the Theory of Random Processes," Addison-Wesley, 1965.

V1. J. H. van Schuppen and E. Wong, "Transformation of Local Martingales under a Change of Law," Memorandum No. ERL-M385, Electronics Research Laboratory, University of California, Berkeley, May 1973.

W1. A. D. Wentzel, "Additive Functionals of Multidimensional Wiener Processes," D.A.N. S.S.S.R. 139, pp. 13-16, 1961; translated in Soviet Math. 2, pp. 848-851.



