Technical Report No. 224
036040-19-T

THE THEORY OF SIGNAL DETECTABILITY: CYCLO-STATIONARY PROCESSES IN ADDITIVE NOISE

by J. R. Lapointe, Jr.

Approved by: Theodore G. Birdsall

COOLEY ELECTRONICS LABORATORY
Department of Electrical and Computer Engineering
The University of Michigan, Ann Arbor, Michigan

Contract No. N00014-67-A-0181-0032
Office of Naval Research, Department of the Navy, Arlington, Virginia 22217

October 1973

Approved for public release; distribution unlimited.

ABSTRACT

Cyclo-Stationary (CS) processes are those nonstationary processes that appear to be stationary when observed at integral multiples of a basic interval. Wide Sense Cyclo-Stationary (WSCS) processes possess autocorrelation functions and autocorrelation matrices with a cyclic structure for continuous and discrete time respectively. Discrete time WSCS processes normally arise from sampling continuous time WSCS processes. Other sampling schemes, such as multiplexing samples from different stationary random processes or multiplexing samples from the sensors of an array, also generate random processes with a cyclic structure in the autocorrelation matrix.

The optimum detector for the fixed time forced choice detection of discrete time WSCS processes in additive noise is designed according to the likelihood ratio. The detector design is constrained to preserve the cyclic structure of the signal autocorrelation matrix; the resulting detector is a noise reduction filter followed by a signal enhancement filter followed by energy detection. The structure of the signal enhancement filter is clearly identifiable with the cyclic structure of the signal autocorrelation matrix. A suboptimum detector is also presented; it is the low input signal-to-noise ratio form of the optimum detector.

The optimum and suboptimum detector performance is evaluated for a discrete time real zero mean CS Gauss-Markov process in the region 0.01 ≤ PD ≤ 0.99 and 0.01 ≤ PFA ≤ 0.9. The optimum and suboptimum Receiver Operating Characteristics (ROC) are binormal in this region. There is little difference between the optimum and suboptimum performance, though the suboptimum ROC is more binormal than the optimum ROC.

FOREWORD

Detection of the existence of some sort of periodic structure in a reception is the motivation for this research. The research is theoretical and deals with the techniques of defining the problem and establishing its mathematical solution. Of primary importance to the theoretician is the realization that the vector form of the problem is the same as that of a point-sensor antenna array. Thus the rich field of array processing can be tapped for formal solutions.

The type of periodicity investigated is a common physical occurrence, but its formal description is obtuse enough that it has not been a subject of detection theory previously. The physical picture involves a periodic mechanism that produces a turbulence or other random process. The periodicity of the generator is then hidden in the process as a cyclo-stationary characteristic: if sampled at the period, the samples are stationary, but if sampled at a multiple of the period, the sample statistics depend on the local position within a period.

This research establishes a foundation for applied research in the detection and analysis of cyclo-stationary processes.

TABLE OF CONTENTS

ABSTRACT
FOREWORD
LIST OF ILLUSTRATIONS
LIST OF APPENDICES
LIST OF SYMBOLS

CHAPTER I: INTRODUCTION
1.1 Basic Problem
1.2 Cyclo-Stationary Processes
1.2.1 Continuous Parameter Cyclo-Stationary Processes
1.2.2 Discrete Parameter Cyclo-Stationary Processes
1.2.3 Cyclic Structure of the Autocorrelation Matrix
1.3 Generation of Cyclo-Stationary Processes
1.3.1 Continuous Parameter Cyclo-Stationary Processes
1.3.2 Discrete Parameter Cyclo-Stationary Processes
1.3.2.1 Sampling Continuous Parameter Cyclo-Stationary Processes
1.3.2.2 Sampling Continuous Parameter Stationary Random Processes
1.4 Procedure
1.5 Historical Background
1.6 Organization of this Study
1.7 Notation

TABLE OF CONTENTS (Cont.)

CHAPTER II: BACKGROUND
2.1 Review of Detection Theory
2.1.1 Detector Design
2.1.2 Detector Evaluation
2.1.3 Normal ROC
2.1.4 Binormal ROC
2.2 Performance Evaluation and Characteristic Functions

CHAPTER III: OPTIMUM DETECTOR FOR WIDE SENSE CYCLO-STATIONARY PROCESSES
3.1 Introduction
3.2 Detector Design
3.2.1 The Sufficient Statistic and Three Common Interpretations
3.2.2 Expansion of the Sufficient Statistic
3.3 Detector Description
3.4 Summary

CHAPTER IV: PERFORMANCE EVALUATION OF THE OPTIMUM DETECTOR
4.1 Introduction
4.2 Model Description
4.2.1 Noise Model
4.2.2 Signal Model
4.3 Evaluation Procedures
4.3.1 ROC Generation
4.3.2 Input Signal-to-Noise Ratio
4.3.3 Detectability Index
4.4 Performance
4.5 Summary

CHAPTER V: A SUBOPTIMUM DETECTOR
5.1 Introduction
5.2 Suboptimum Detector
5.3 Evaluation Procedures
5.4 Performance

TABLE OF CONTENTS (Cont.)

5.4.1 Decision Variable Statistics
5.4.2 ROC Comparison
5.5 Summary

CHAPTER VI: CONCLUSIONS
6.1 Summary and Conclusions
6.2 Contributions
6.3 Suggestions for Future Work

REFERENCES
DISTRIBUTION LIST

LIST OF ILLUSTRATIONS

Figure / Title
2.1 Illustration of the basic signal detection problem
2.2 Normal ROC's with detectability index d'
2.3 Binormal ROC's with detectability index d'_e and slope SLOPE
3.1 Triangularization
3.2 Simultaneous diagonalization
3.3 Estimator-correlator
3.4 Optimum detector for WSCS processes
4.1 ROC curves for optimum detector for K = 4, P = 4, ρs = 0.25
4.2 ROC curves for optimum detector for K = 16, P = 4, ρs = 0.25
4.3 ROC curves for optimum detector for K = 64, P = 4, ρs = 0.25
4.4 ROC curves for optimum detector for K = 1, P = 16, ρs = 0.707
4.5 ROC curves for optimum detector for K = 4, P = 16, ρs = 0.707
4.6 ROC curves for optimum detector for K = 16, P = 16, ρs = 0.707
4.7 Performance summary of the optimum detector as a function of d'_e for ρs = 0.707
4.8 Performance summary of the optimum detector as a function of SNRI for P = 16

LIST OF ILLUSTRATIONS (Cont.)

4.9 P as a function of SNRI for ρs = 0.707
5.1 A suboptimum detector for WSCS processes
5.2 Behavior of the terms in the expressions for the mean and variance of the optimum and suboptimum decision variables under H0 and H1
5.3 Performance summary of the suboptimum detector as a function of d'_es for ρs = 0.707
5.4 Performance summary of the suboptimum detector as a function of SNRI for P = 16
5.5 Comparison of optimum and suboptimum detectability indexes for ρs = 0.707 and P = 16

LIST OF TABLES

Table / Title
4.1 Summary of optimum detector performance
5.1 Summary of suboptimum detector performance

LIST OF APPENDICES

APPENDIX A: Simultaneous Diagonalization
APPENDIX B: Realizability
APPENDIX C: Parameters of the Binormal ROC
APPENDIX D: Eigenvalues and Input Signal-to-Noise Ratios
APPENDIX E: Example of Factorizing the Autocorrelation Matrix of a WSCS Process

LIST OF SYMBOLS

Symbol : Definition
A : PxP modulation autocorrelation matrix
A_nm : PxP crosscorrelation matrices
C : KPxKP lower triangular carrier matrix, T = C*C
CS : cyclo-stationary
d' : detectability index for a normal ROC
d'_e : detectability index of the optimum detector
d'_es : detectability index of the suboptimum detector
E : signal energy
E{ } : expected value operator
f0 : carrier frequency in hertz
f[.|H0] : probability density function under H0
f[.|H1] : probability density function under H1
g(t) : elementary waveform
H : linear filter that produces Ŝ from Y
H0 : noise alone hypothesis
H1 : signal and noise hypothesis
I_P : PxP identity matrix
K : a non-zero integer indicating the number of periods in a CS process
L[.] : likelihood ratio, L[.] = f[.|H1]/f[.|H0]
M : PxP lower triangular modulation matrix, A = M*M

LIST OF SYMBOLS (Cont.)

Symbol : Definition
M_K : KPxKP diagonal matrix with M repeated along the diagonal
m0 : mean of the decision variable under H0
m1 : mean of the decision variable under H1
N : noise random process
N(B, D) : normal multivariate probability distribution with vector mean B and autocorrelation matrix D
N0 : noise power per hertz
P : a non-zero integer called the period, indicating the number of elements in the period of a CS process
PD : probability of detection
PFA : probability of false alarm
Q : matrix of eigenvectors
ROC : receiver operating characteristic
RN : autocorrelation matrix of the noise N
rN(n) : diagonal elements of RN
RS : autocorrelation matrix of the CS process S
rS(n) : diagonal elements of RS
rs(n, m) : elements of RS
Rs(t1, t2) : autocorrelation function of the CS process s(t)
Rx(τ) : autocorrelation function of x(t)
S : discrete parameter cyclo-stationary process
Ŝ : minimum mean-square estimate of S, Ŝ = HY

LIST OF SYMBOLS (Cont.)

Symbol : Definition
SLOPE : slope of the optimum ROC
SLOPE_s : slope of the suboptimum ROC
SNRI : input signal-to-noise ratio
SSCS : strict sense cyclo-stationary
s(t) : continuous parameter cyclo-stationary process
T : KPxKP carrier autocorrelation matrix
TD : observation time
TP : a positive real number called the period of a continuous parameter CS process
Ts : sampling interval
tn : nth sampling time
TR[.] : trace of the matrix in brackets
V : output of the whitening filter
V' : output of the weighting filter
W : whitening filter
WSCS : wide sense cyclo-stationary
x(t) : a real zero mean wide sense stationary Gauss-Markov process
Y : observation random process
yn : elements of Y
Z[.] : log-likelihood ratio
Z[.] : optimum decision variable (modified log-likelihood ratio)

LIST OF SYMBOLS (Cont.)

Symbol : Definition
ZN : sum of the squares of N independent zero mean Gaussian random variables with different variances
Zs[.] : suboptimum decision variable
δ(t) : Dirac delta function
β : starting time
Λ : real diagonal matrix of eigenvalues
φ0[.] : optimum decision rule
Φ(y) : normal probability distribution function, Φ(y) = (1/√(2π)) ∫ from -∞ to y of e^(-t²/2) dt
Φ_ZN(w) : characteristic function of ZN
Π(.) : a monotonic function that maps infinity to infinity
Π^(-1)(.) : inverse of Π(.)
ρs : sample-to-sample correlation coefficient
ρ : period-to-period correlation coefficient, ρ = ρs^P
σ0² : variance of the decision variable under H0
σ1² : variance of the decision variable under H1
λn : eigenvalues
~ : distributed according to
* : complex conjugate of the transpose

CHAPTER I
INTRODUCTION

1.1 Basic Problem

The fixed time forced choice detection of Gaussian random processes in additive Gaussian noise via the likelihood ratio reduces to interpreting a quadratic form when the signal and noise autocorrelation functions or matrices are known. The literature is full of different interpretations for this quadratic form (Refs. 1, 8, 16, 17, 19, 20, 21, 25, 26, 31). The topic of this dissertation is the detector design for the fixed time forced choice detection of Gaussian Cyclo-Stationary (CS) processes in additive Gaussian noise. CS processes are those random processes possessing autocorrelation functions or matrices with a cyclic structure. A technique is presented that permits interpretation of the detector quadratic form in a manner that preserves the cyclic structure of the signal autocorrelation function or matrix.

1.2 Cyclo-Stationary Processes

The feature that distinguishes Cyclo-Stationary (CS) processes from other random processes is a cyclic structure in the autocorrelation function for continuous parameter CS processes and in the autocorrelation matrix for discrete parameter CS processes. This cyclic structure has a different manifestation for continuous and

discrete parameter CS processes.

1.2.1 Continuous Parameter Cyclo-Stationary Processes. A complex nonstationary random process {s(t), t ∈ TD} is a Strict Sense Cyclo-Stationary (SSCS) process if and only if there exists a positive number, TP, called the period, such that the joint probability distribution of {s(t1+qTP), s(t2+qTP), ..., s(tn+qTP)} equals the joint probability distribution of {s(t1), s(t2), ..., s(tn)} for any integer q such that the translated parameter values are also parameter values. A SSCS process appears to be a strict sense stationary process when observed at integral multiples of the period, TP.

It is also possible to define a CS process with the concept of Wide Sense Stationarity. A complex nonstationary random process {s(t), t ∈ TD} with autocorrelation function Rs(t1, t2) is a Wide Sense Cyclo-Stationary (WSCS) process if and only if there exists a positive number, TP, called the period, such that E{|s(t)|²} < ∞ for t ∈ TD, E{s(t+qTP)} = E{s(t)}, and Rs(t1+qTP, t2+qTP) = Rs(t1, t2) for any integer q such that the translated parameter values are also parameter values. E{ } denotes the expected value of the quantity in brackets. A WSCS process appears to be a wide sense stationary process when observed at integral multiples of the period TP.

The autocorrelation function of a WSCS process has a cyclic structure due to the wide sense stationary behavior of a WSCS process. The autocorrelation function of a SSCS process also has a cyclic

structure if E{|s(t)|²} < ∞ for t ∈ TD. It is the cyclic structure of the autocorrelation function of a CS (SSCS or WSCS) process that distinguishes a CS process from all other random processes. Contrast this to the autocorrelation function of a wide sense stationary periodic process, R(τ + TP) = R(τ): the second order statistics are equal when observed at time differences that are integral multiples of the period.

An example of a WSCS process is a clocked waveform (Ref. 24). Consider the clocked waveform s(t) consisting of an elementary waveform g(t) which is clocked with period TP,

    s(t) = Σ_{n=-∞}^{∞} an g(t - nTP)

where an = ±1, each with probability 1/2. Then

    E{s(t)} = Σ_{n=-∞}^{∞} E{an} g(t - nTP) = 0

and

    Rs(t1, t2) = E{s(t1) s(t2)} = Σ_{n,m=-∞}^{∞} R(m-n) g(t1 - nTP) g(t2 - mTP)

for R(m-n) = E{an am}. Translating both arguments by one period,

    Rs(t1+TP, t2+TP) = Σ_{n,m=-∞}^{∞} R(m-n) g[t1 - (n-1)TP] g[t2 - (m-1)TP].

Let n' = n-1 and m' = m-1. Then

    Rs(t1+TP, t2+TP) = Σ_{n',m'=-∞}^{∞} R(m'-n') g(t1 - n'TP) g(t2 - m'TP) = Rs(t1, t2).

From this one can conclude that a clocked waveform consisting of clocked pulses, g(t) = δ(t), is a WSCS process, and any other clocked waveform can be generated by inputting clocked pulses into a filter with impulse response g(t). A short simulation of a clocked waveform follows.
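As an illustration, the simulation below generates realizations of a clocked waveform for an assumed elementary waveform g and period (neither is taken from the report) and checks the cyclic property Rs(t1+TP, t2+TP) = Rs(t1, t2) on an estimated autocorrelation. It is a minimal sketch, not the report's method.

```python
import numpy as np

rng = np.random.default_rng(4)
Tp, n_pulses, n_trials = 8, 64, 4000    # assumed: 8 samples per clock period
g = np.array([1.0, 0.6, 0.3, 0.1])      # assumed elementary waveform, shorter than Tp

# s(t) = sum_n a_n g(t - n Tp) with a_n = +/-1, each with probability 1/2
a = rng.choice([-1.0, 1.0], size=(n_trials, n_pulses))
s = np.zeros((n_trials, n_pulses * Tp))
for n in range(n_pulses):
    s[:, n * Tp : n * Tp + len(g)] += a[:, [n]] * g

# Estimate Rs(t1, t2) on a window away from the edges and check that
# shifting both arguments by one period leaves it unchanged
t0 = 16
R = s[:, t0 : t0 + 2 * Tp].T @ s[:, t0 : t0 + 2 * Tp] / n_trials
print("max |Rs(t1+Tp, t2+Tp) - Rs(t1, t2)| =",
      np.abs(R[Tp:, Tp:] - R[:Tp, :Tp]).max())   # small (estimation error only)
```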

1.2.2 Discrete Parameter Cyclo-Stationary Processes. A complex nonstationary random process {s(n); n = 1, 2, 3, ..., KP} is a Strict Sense Cyclo-Stationary (SSCS) process if and only if there exists a nonzero integer, P, called the period, such that the joint probability distribution of {s(1+qP), s(2+qP), ..., s(n+qP)} equals the joint probability distribution of {s(1), s(2), ..., s(n)} for any integer q such that the translated parameter values are parameter values.

It is again possible to define a CS process with the concept of Wide Sense Stationarity. A complex nonstationary random process {s(n); n = 1, 2, 3, ..., KP} with autocorrelation matrix RS = {rs(n, m)} for n, m = 1, 2, 3, ..., KP is a Wide Sense Cyclo-Stationary (WSCS) process if and only if there exists a nonzero integer, P, called the period, such that E{|s(n)|²} < ∞ for n = 1, 2, 3, ..., KP, E{s(n+qP)} = E{s(n)}, and rs(n+qP, m+qP) = rs(n, m) for any integer q such that the translated parameter

values are parameter values. Just as in the continuous parameter case, a CS process appears to behave as a stationary (strict or wide sense) process when observed at integral multiples of the period P. The autocorrelation matrix of a WSCS process has a cyclic structure, as does the autocorrelation matrix of a SSCS process if E{|s(n)|²} < ∞. It is this cyclic structure that is a unique feature of CS processes.

1.2.3 Cyclic Structure of the Autocorrelation Matrix. CS processes will refer to discrete parameter CS processes for the remainder of this dissertation unless indicated differently. The cyclic structure of the autocorrelation matrix of a CS process is expressible in a manner that permits factoring the autocorrelation matrix in a meaningful form. Consider the CS process

    S = [s(1), s(2), ..., s(P), ..., s(KP)]^T

which is a column vector. S has a period P and there are KP elements in the observation, i.e., there are K periods in the observation. The autocorrelation matrix is

    RS = {rs(u, v)} for u, v = 1, 2, 3, ..., KP.

Subdivide RS into PxP dimensional matrices A_nm for n, m = 1, 2, 3, ..., K.

    RS = [A_nm] for n, m = 1, 2, 3, ..., K.

The A_nm have elements

    A_nm = [rs[(n-1)P+u, (m-1)P+v]] for u, v = 1, 2, 3, ..., P.

A_nm is the correlation matrix between the P elements in the nth period and the P elements in the mth period. The cyclic structure in RS for CS processes is expressible as

    A_{n+q,m+q} = [rs[(n+q-1)P+u, (m+q-1)P+v]] = [rs[(n-1)P+u, (m-1)P+v]] = A_nm

for q any integer such that n+q and m+q = 1, 2, 3, ..., K. It also follows that A_nn = A for n = 1, 2, 3, ..., K. A is called the modulation autocorrelation matrix and is the autocorrelation matrix for any period. RS then has the form

    RS = [ A       A12      A13      ...  A1K     ]
         [ A12*    A        A12      ...  A1,K-1  ]
         [ A13*    A12*     A        ...  A1,K-2  ]
         [ ...                                    ]
         [ A1K*    A1,K-1*  A1,K-2*  ...  A       ]

where * denotes the complex conjugate of the transpose. This is the

cyclic structure that distinguishes CS processes (SSCS and WSCS) from all other random processes and permits factorizing RS in a meaningful manner.

Since RS is a positive definite Hermitian matrix, so is A a positive definite Hermitian matrix. There then exists a lower triangular matrix, M, called the modulation matrix such that

    A = M* M.

M is nonsingular because it is the square root of the positive definite matrix A. In order to show the key effect of M, all the A_nm can be written in terms of it. Let

    T_nm = M*^(-1) A_nm M^(-1)

    T_nn = I_P, the PxP identity matrix, for n = 1, 2, 3, ..., K

so that A_nm = M* T_nm M. RS is then factorable as

    RS = M_K* T M_K

where

    T = [ I_P     T12      T13      ...  T1K     ]
        [ T12*    I_P      T12      ...  T1,K-1  ]
        [ T13*    T12*     I_P      ...  T1,K-2  ]
        [ ...                                    ]
        [ T1K*    T1,K-1*  T1,K-2*  ...  I_P     ]

M_K is a KPxKP dimensional matrix with the modulation matrix M repeated on the diagonal, and T is a KPxKP dimensional matrix of the T_nm's. T is called the carrier autocorrelation matrix and indicates how to combine the information in the modulation autocorrelation matrix to form the A_nm's. It also follows that T is a positive definite Hermitian matrix because RS is positive definite Hermitian. There then exists a lower triangular matrix, C, called the carrier matrix such that

    T = C* C.

RS is then factorable as

    RS = M_K* C* C M_K.

This is the form of the cyclic structure of the autocorrelation matrix of a CS process that is preserved in the detector design. An example of factoring the autocorrelation matrix of a CS process is presented in Appendix E; a numerical check of the factorization is sketched below.
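The factorization can be verified numerically. The sketch below is a minimal illustration with an assumed autocorrelation matrix (a stationary process, which is a special case of WSCS), not the worked example of Appendix E; the helper factor_star produces the lower triangular square roots in the report's A = M*M and T = C*C convention.

```python
import numpy as np

def factor_star(R):
    # Lower triangular F with R = F* F (the report's convention). numpy's
    # cholesky gives R = L L*, so factor the index-reversed matrix instead.
    J = np.eye(len(R))[::-1]
    Lb = np.linalg.cholesky(J @ R @ J)    # J R J = Lb Lb*
    return (J @ Lb @ J).conj().T          # lower triangular, R = F* F

P, K, rho_s = 4, 3, 0.5
idx = np.arange(K * P)
RS = rho_s ** np.abs(idx[:, None] - idx[None, :])   # assumed WSCS autocorrelation

A = RS[:P, :P]                  # modulation autocorrelation matrix (any diagonal block)
M = factor_star(A)              # modulation matrix, A = M* M
MK = np.kron(np.eye(K), M)      # M repeated along the block diagonal
MKi = np.linalg.inv(MK)
T = MKi.conj().T @ RS @ MKi     # carrier autocorrelation matrix; diagonal blocks = I_P
C = factor_star(T)              # carrier matrix, T = C* C

RS_rebuilt = MK.conj().T @ C.conj().T @ C @ MK      # RS = MK* C* C MK
print("max |RS - MK* C* C MK| =", np.abs(RS - RS_rebuilt).max())
```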

1.3 Generation of Cyclo-Stationary Processes

There are many ways of generating continuous and discrete parameter CS processes. It is the intent here to list a few of these ways.

1.3.1 Continuous Parameter Cyclo-Stationary Processes. Four representative cases that produce CS processes are listed below:

1. Random processes describing propeller and reciprocating engine noise.
2. Amplitude-modulated random processes, s(t), of the form s(t) = x(t) p(t), where x(t) is a stationary random process and p(t) is a periodic function.
3. Random processes arising in meteorology.
4. Clocked waveforms, described in Section 1.2.1.

1.3.2 Discrete Parameter CS Processes. Sampling continuous time random processes is the basic technique for generating discrete time CS processes in this dissertation. Discrete time CS processes can arise either from sampling continuous time CS processes or from sampling continuous time stationary random processes in certain ways.

1.3.2.1 Sampling Continuous Parameter Cyclo-Stationary Processes. Sampling continuous time CS processes must be performed in a prescribed manner if the cyclic structure exhibited by a CS process is to be preserved. The CS process must be sampled so that there are exactly an integral number of samples, P, in each period of the process. Let Ts and TP be the sampling interval and the period of the sampled continuous time CS process respectively. Then PTs = TP if the cyclic structure is to be preserved. The time that the sampling started must also be known in order to calculate the autocorrelation matrix.

1.3.2.2 Sampling Continuous Parameter Stationary Random Processes. Multiplexing samples from each of the sensors of a P sensor array which is observing a stationary random process generates a CS process with a period P. The modulation correlation matrix A is the autocorrelation of the samples taken at the same sampling instant from each of the P sensors. The form of A arises from the spatial modulation introduced by the location of the observed stationary random process with respect to the sensors. A_nm is the correlation matrix of the samples taken from the P sensors at the nth and mth sampling instants. The form of A_nm also arises from the spatial modulation introduced by the array geometry when the stationary random process is observed at the nth and mth sampling instants. The array processing problem is well understood and extensively studied (Refs. 3, 6, 21, 28, 29, 30). Studying the array processing problem as a problem in detecting CS processes is expected only to add new insight and not new knowledge.

It is also possible to generate a CS process with period P by multiplexing samples from P different stationary random processes. The A matrix is the autocorrelation matrix of the P samples taken at a sampling instant, and A_nm is the correlation matrix between the P samples taken at the nth and mth sampling instants. A small simulation illustrating this multiplexing construction follows.
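As a sketch (with assumed AR(1) processes and parameters, not taken from the report), the following simulation interleaves samples from P independent stationary random processes and estimates the autocorrelation matrix of the multiplexed sequence; the estimate exhibits the cyclic block structure described in Section 1.2.3.

```python
import numpy as np

rng = np.random.default_rng(0)
P, K, n_trials = 3, 4, 20000
phis = [0.2, 0.5, 0.8]          # assumed AR(1) coefficients, one per process

def ar1(phi, n, size):
    # Exactly stationary unit-variance AR(1) realizations, n samples each
    x = np.empty((size, n))
    x[:, 0] = rng.normal(size=size)
    for t in range(1, n):
        x[:, t] = phi * x[:, t - 1] + np.sqrt(1 - phi**2) * rng.normal(size=size)
    return x

# Interleave: s = [x1(1), ..., xP(1), x1(2), ..., xP(2), ...], KP samples per trial
chans = np.stack([ar1(phi, K, n_trials) for phi in phis], axis=2)  # (trials, K, P)
s = chans.reshape(n_trials, K * P)

RS_hat = s.T @ s / n_trials     # estimated KP x KP autocorrelation matrix
A11 = RS_hat[:P, :P]            # A for the first period
A22 = RS_hat[P:2*P, P:2*P]      # A for the second period
print("max |A11 - A22| =", np.abs(A11 - A22).max())   # small: cyclic structure
```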

1.4 Procedure

The criteria for receiver design and for accepting observations are presented here to prevent confusion in future discussions. The optimum detector presented in this dissertation is designed according to the likelihood ratio. There are many criteria, such as the Bayes, Neyman-Pearson, and Weighted Combination, to name a few, that support the likelihood ratio as the optimum decision rule for a detector. Birdsall (Ref. 2) showed that the detector which bases its decisions on the likelihood ratio yields optimum performance for the class of criteria which considers correct decisions "good" and incorrect decisions "bad." The three criteria listed above fall into this class of criteria.

The observations used in designing and operating the receiver are finite length vectors. There is no interest in this dissertation in considering any continuous time forms of the detector. The vectors are transformations, such as sampling, of the continuous time random

processes actually observed. The transformations must preserve the cyclo-stationary properties of the continuous time random processes.

1.5 Historical Background

The literature on detecting cyclo-stationary processes is sparse, and surprisingly little of it deals with optimum detectors.

Deutch (Ref. 9) studies the demodulation of CS processes resulting from the amplitude modulation of a stationary random process by a periodic function. It is shown that a linear filter may be used to enhance a CS process out of several interfering CS processes. Parzen and Shirer (Ref. 22) generalize Deutch's work by covering the frequency band occupied by the CS process in question with several filters non-overlapping in frequency followed by square law detectors.

Kincaid (Ref. 18) derives the optimum detector for a specific CS process in additive Gaussian noise. The periods of the CS process are statistically related in the first order Markov sense. The detector is specialized for the small signal-to-noise ratio case and a suboptimum approximation to this is evaluated. The suboptimum detector is a circulating delay line preceded by a signal enhancer and noise processor and followed by an energy detector.

Hariharan (Ref. 13) states that all detectors are basically

nonlinear devices. He concentrates on studying the signal-to-noise ratio at the output of a νth law device for Gaussian CS processes and additive Gaussian noise at the input.

The dissertation by Hurd (Ref. 14) is an excellent study of the mathematical properties of CS processes. There is also an extensive treatment of spectral analysis and estimation of CS processes.

1.6 Organization of This Study

Background material is presented in Chapter II. This includes a quick review of detection theory and a presentation of some important evaluation techniques. The optimum detector is designed and discussed in Chapter III. In Chapter IV, a signal and noise model are presented and used to evaluate the optimum detector performance. A suboptimum detector is presented and evaluated in Chapter V. Chapter VI contains the summary and conclusions, contributions of this study, and suggestions for future work.

1.7 Notation

The basic notation for the remainder of the dissertation is defined here. All vectors are column vectors unless otherwise noted and are written as the transpose of a row vector. All vectors and matrices are denoted by capital letters; the exact meaning will be clear from the context. Given a matrix U, U^T is the transpose; Ū is the complex conjugate; U* is the complex conjugate of the transpose; and

|U| is the determinant. All observations are complex valued random vectors with KP elements. The signal and noise vectors are S and N respectively, with elements

    S = [s1, s2, ..., sKP]^T and N = [n1, n2, ..., nKP]^T

where sq and nq are complex valued single observations. The signal and noise correlation matrices are RS and RN respectively. RS and RN are Hermitian positive definite matrices:

    RS = E{[S - E{S}][S - E{S}]*} and RN = E{[N - E{N}][N - E{N}]*}

where E{ } denotes expectation. The symbol ~ denotes "is distributed according to." An example is the normal probability distribution function: X ~ N(M, R) indicates that the vector X is distributed according to the multivariate normal probability distribution function with mean vector M and correlation matrix R.

CHAPTER II
BACKGROUND

2.1 Review of Detection Theory

The pertinent facts of signal detection theory are reviewed in order to present the techniques used in this dissertation. Classical fixed-time forced-choice signal detection theory was aptly formulated by Peterson, Birdsall, and Fox (Ref. 23) in 1954. The theory has since been extensively refined and extended. The basic signal detection situation is presented schematically in Fig. 2.1.

[Fig. 2.1. Illustration of the basic signal detection problem: the signal s(t) and the noise n(t) sum to form the observation y(t), which the detector maps to a decision.]

The noise process is n(t), and s(t) is the signal process. The detector is presented with an observation y(t) during the time interval β to β + TD. The observation either consists of noise alone, hypothesis H0, or signal and noise, hypothesis H1. When the signal is present, it is present for the entire observation interval. The hypotheses are mutually exclusive. At the end of the observation

interval, the detector must decide whether the signal is present or absent, that is, which of the two possible hypotheses is in effect. The signal detection problem can be expressed as a hypothesis testing problem, which is expressible in shorthand as:

    y(t) = n(t) under H0;  y(t) = s(t) + n(t) under H1,  for β ≤ t ≤ β + TD.

The random process y(t) is customarily described by a vector representation in order to use statistical decision theory. According to the Shannon sampling theorem, y(t) can be represented as the vector

    Y = [y1, y2, y3, ..., yN]^T

where N = 2WT and yn = y(β + n/2W) for n = 1, 2, 3, ..., N, if y(t) is timelimited to an interval of length T and Fourier Series bandlimited to an interval of width W.

2.1.1 Detector Design. Birdsall (Ref. 2) showed that the detector which bases its decisions on the likelihood ratio yields

optimum performance for the class of criteria which considers correct decisions "good" and incorrect decisions "bad." Three commonly used criteria that fall into this category are the Bayes, Neyman-Pearson, and Weighted Combination criteria. The optimum decision rule is

    φ0[Y] = 1 if L[Y] > c;  r if L[Y] = c;  0 if L[Y] < c

where

1. L[Y] = f[Y|H1] / f[Y|H0] is the likelihood ratio.
2. f[Y|Hi] is the observation probability density function under hypothesis Hi for i = 0 and 1.
3. c is the pre-assigned threshold.
4. φ0[Y] is the probability of deciding that a signal is present given the observation Y.
5. 0 ≤ r ≤ 1.

The case L[Y] = c describes a randomized decision rule. The threshold c is selected according to the chosen criteria. Birdsall (Ref. 2) has also shown that the detector which bases its decisions on a monotonic function of the likelihood ratio also yields optimum performance. The monotonic function must map infinity to

infinity. This property is referred to as the monotone property of likelihood ratios in this dissertation.

2.1.2 Detector Evaluation. Detector performance is succinctly summarized by the receiver operating characteristic (ROC). The ROC is a plot of the probability of false alarm, PFA, versus the probability of detection, PD. PFA is the probability of deciding hypothesis H1 occurred when hypothesis H0 occurred, and PD is the probability of deciding H1 occurred when indeed it did. Birdsall (Ref. 2) showed that the ROC of all likelihood ratio detectors is convex. For any detector,

    PD = E{φ[Y] | H1}     (2.1)

and

    PFA = E{φ[Y] | H0}     (2.2)

where φ[Y] is the decision rule. PD and PFA become

    PD = ∫_{L[Y]>c} f[Y|H1] dY     (2.3)

and

    PFA = ∫_{L[Y]>c} f[Y|H0] dY     (2.4)

for a likelihood ratio detector if and only if the probability that L[Y] = c is zero. Let Π(·) be a monotonic function that maps infinity to infinity and

    Π'[Y] = Π[L[Y]].     (2.5)

Equations 2.3 and 2.4 then become

    PD = ∫_{c'}^{∞} f[Π'|H1] dΠ'     (2.6)

and

    PFA = ∫_{c'}^{∞} f[Π'|H0] dΠ'     (2.7)

where c' = Π(c). These relations can be further simplified by a theorem proved by Birdsall (Ref. 2) which states that the likelihood ratio of a monotonic function of the likelihood ratio is the likelihood ratio. That is,

    f[Π[L[Y]] | H1] / f[Π[L[Y]] | H0] = L[Y].     (2.8)

Substitute Eqs. 2.5 and 2.8 into Eqs. 2.6 and 2.7:

    PD = ∫_{c'}^{∞} Π^{-1}(Π') f[Π'|H0] dΠ'     (2.9)

and

    PFA = ∫_{c'}^{∞} f[Π'|H0] dΠ'     (2.10)

where Π^{-1}(·) is the inverse of the function Π(·). It is then seen that the ROC for the optimum detector is completely specified once f[Π'|H0] is known.

2.1.3 Normal ROC. The normal ROC is a standard for comparing ROC's because it is parameterized by one parameter, d'. A ROC is called normal if it can be parameterized by the normal distribution as:

    PD = Φ(X + d') and PFA = Φ(X)

where

    Φ(y) = (1/√(2π)) ∫_{-∞}^{y} e^{-x²/2} dx.

The parameter d' is called the detectability index, though sometimes it is referred to as the quality of detection. Normal ROC's are usually plotted on normal-normal graph paper because normal ROC's plot as straight lines. A family of normal ROC's with detectability index d' is plotted in Fig. 2.2.

[Fig. 2.2. Normal ROC's with detectability index d', plotted on normal-normal coordinates for 0.01 ≤ PFA, PD ≤ 0.99.]

Physical significance can be attributed to d', and this accounts for the attractiveness of normal ROC's. The performance of the optimum detector for a signal known exactly in white Gaussian noise is described by a normal ROC with

    d' = √(2E/N0)

where E is the signal energy and N0 is the noise power per hertz.

The normal ROC provides a convenient quantitative measure of performance for the comparison of ROC's. When ROC's are almost normal, an equivalent detectability index, d'_e, as measured on the negative diagonal, PD + PFA = 1, indicates the performance.

2.1.4 Binormal ROC. The binormal ROC appears in many situations as the result of normal ROC's and the use of normal-normal graph paper for plotting ROC's. On normal-normal graph paper, normal ROC curves plot as a straight line with a slope of unity, while binormal ROC curves plot as a straight line with a slope less than unity. Consequently binormal ROC's can be parameterized by a SLOPE and a detectability index d'_e; d'_e is that point where the binormal ROC curve intersects the negative diagonal, PD + PFA = 1. A few binormal ROC's are presented in Fig. 2.3.

The binormal ROC arises in situations where the decision variable is normal under H0 and H1 but with different first and second order moments under H0 and H1. Let Z be the decision variable. A binormal ROC arises when

    Z ~ N(m0, σ0²) under H0;  Z ~ N(m1, σ1²) under H1.

d'_e and SLOPE are (see Appendix C)

[Fig. 2.3. Binormal ROC's with detectability index d'_e and slope SLOPE, plotted on normal-normal coordinates.]

    d'_e = 2(m1 - m0) / (σ1 + σ0)     (2.11)

and

    SLOPE = σ0 / σ1.     (2.12)
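For a Gaussian decision variable the binormal ROC can be traced directly. The short sketch below (with assumed moments, for illustration only) sweeps the threshold, computes PD and PFA, and checks the negative-diagonal crossing against Eq. 2.11.

```python
import numpy as np
from scipy.stats import norm

m0, s0 = 0.0, 1.0      # assumed moments of Z under H0
m1, s1 = 1.2, 1.5      # assumed moments of Z under H1

c = np.linspace(-6.0, 10.0, 2001)       # threshold sweep
PFA = norm.sf(c, loc=m0, scale=s0)      # P(Z > c | H0)
PD = norm.sf(c, loc=m1, scale=s1)       # P(Z > c | H1)

# d'_e is read where the ROC crosses the negative diagonal PD + PFA = 1
i = np.argmin(np.abs(PD + PFA - 1.0))
d_e_curve = 2.0 * norm.ppf(PD[i])               # crossing in normal coordinates
d_e_formula = 2.0 * (m1 - m0) / (s1 + s0)       # Eq. 2.11
print(f"d'_e from the curve: {d_e_curve:.4f}")
print(f"d'_e from Eq. 2.11:  {d_e_formula:.4f}")
print(f"SLOPE = s0/s1:       {s0 / s1:.4f}")    # Eq. 2.12
```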

SLOPE is a measure of the difference in the variance of Z, and d'_e is a measure of the difference in the mean of Z, under H0 and H1.

The binormal ROC is not convex and consequently cannot be the ROC of a likelihood ratio detector (Ref. 2). However, a region of the optimum ROC may behave as a binormal ROC. It then becomes convenient to label and to parameterize the optimum ROC by a d'_e and a SLOPE in this region.

2.2 Performance Evaluation and Characteristic Functions

The statistics of the decision variable, Z, under H0 and H1 must be known to generate the ROC. Many times Z, under H0 and H1, is the sum of the squares of independent zero mean Gaussian random variables with different variances. The probability density function of such a sum is a noncentral chi-square distribution, which is well known in the form of series expansions (Ref. 15). Use of the series expansions for the probability density functions requires approximations in the form of truncating the series. A different technique is presented here for approximating this probability density function numerically. The ROC may then be generated by Eqs. 2.9 and 2.10 or Eqs. 2.1 and 2.2, depending on whether Z is or is not, respectively, the optimum decision variable. Let

    ZN = Σ_{n=1}^{N} xn²

where

1. the xn are independent, and
2. xn ~ N(0, σn²).

Let f[ZN] be the probability density function of ZN; Φ_ZN(w) the characteristic function of ZN; and Φ_xn²(w) the characteristic function of xn². The characteristic function of ZN is defined as

    Φ_ZN(w) = ∫_{-∞}^{∞} f(ZN) e^{jwZN} dZN.     (2.13)

It then follows that

    f(ZN) = (1/2π) ∫_{-∞}^{∞} Φ_ZN(w) e^{-jwZN} dw.     (2.14)

Since the xn are independent,

    Φ_ZN(w) = Π_{n=1}^{N} Φ_xn²(w).     (2.15)

The xn² are chi-square random variables with characteristic function (Ref. 7)

    Φ_xn²(w) = 1 / √(1 - j2σn²w).     (2.16)

Substitute Eq. 2.16 into Eq. 2.15:

    Φ_ZN(w) = Π_{n=1}^{N} 1 / √(1 - j2σn²w).     (2.17)

f[ZN] follows by substituting Eq. 2.17 into Eq. 2.14, and is evaluated numerically from Eqs. 2.17 and 2.14. Equation 2.14 is evaluated by using the Fast Fourier Transform algorithm at a considerable saving in computer time over using the Discrete Fourier Transform (Refs. 4 and 5).
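A minimal numerical sketch of this procedure follows; the grid sizes and variances are assumed (the report does not give its discretization). It evaluates Eq. 2.17 on a frequency grid, inverts it with an FFT as in Eq. 2.14, and compares the result to a Monte Carlo histogram.

```python
import numpy as np

rng = np.random.default_rng(1)
sig2 = np.array([0.5, 1.0, 2.0, 4.0])     # assumed variances sigma_n^2

# Characteristic function of Z_N on a frequency grid (Eq. 2.17)
Nfft, dz = 2**14, 0.01                     # assumed grid: Nfft points, spacing dz
w = 2 * np.pi * np.fft.fftfreq(Nfft, d=dz)
phi = np.prod(1.0 / np.sqrt(1.0 - 2j * sig2[:, None] * w[None, :]), axis=0)

# Invert Eq. 2.14 with the FFT: density samples on the grid z = k*dz
f = np.fft.fft(phi).real / (Nfft * dz)     # imaginary part is ~0
z = np.arange(Nfft) * dz

# Monte Carlo check of the density
samples = (rng.normal(size=(200000, len(sig2))) ** 2 * sig2).sum(axis=1)
hist, edges = np.histogram(samples, bins=100, range=(0.0, 20.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max |FFT density - histogram| =",
      np.abs(np.interp(centers, z, f) - hist).max())   # small
```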

CHAPTER III
THE OPTIMUM DETECTOR FOR WIDE SENSE CYCLO-STATIONARY PROCESSES

3.1 Introduction

The optimum detector for Gaussian WSCS processes in additive Gaussian noise is derived using the likelihood ratio as the optimum decision rule. The detector is derived in a manner that isolates the cyclic structure of the signal correlation matrix. The detector for WSCS processes has the form of noise reduction followed by signal enhancement followed by energy detection.

3.2 Detector Design

The detection problem is a fixed time forced choice detection problem with completely known statistics. Observations are gathered until there are KP observations, at which time a decision must be made as to the absence or presence of a signal. The detector design is based on the optimum decision rule, the likelihood ratio.

The signal is a zero mean complex Gaussian WSCS process, S, with autocorrelation matrix RS. The noise, N, is also a zero mean complex Gaussian process with autocorrelation matrix RN and is independent of the signal. The observation, Y, under the hypotheses H0 and H1 is

    Y = N under H0;  Y = S + N under H1.

The observation statistics under H0 and H1 are

    Y ~ N[0, RN] under H0;  Y ~ N[0, RS + RN] under H1.

The detector is designed in two steps. The first step is deriving the sufficient statistic for making optimum decisions and presenting three common interpretations of the sufficient statistic. The second step is expanding the sufficient statistic in a manner that permits preserving the cyclic structure of RS in the detector.

3.2.1 The Sufficient Statistic and Three Common Interpretations. The observation statistics are required to form the likelihood ratio, L[Y]. The observation statistics are:

    f[Y|H0] = (2π)^{-KP/2} |RN|^{-1/2} exp{-Y* RN^{-1} Y / 2}

and

    f[Y|H1] = (2π)^{-KP/2} |RS + RN|^{-1/2} exp{-Y* [RS + RN]^{-1} Y / 2}.

The likelihood ratio is defined as

    L[Y] = f[Y|H1] / f[Y|H0]

and

    L[Y] = [|RN| / |RS + RN|]^{1/2} exp{Y* [RN^{-1} - [RS + RN]^{-1}] Y / 2}.

The log-likelihood ratio, Z, is defined as

    Z[Y] = ln[L[Y]].

By the monotone property of likelihood ratios discussed in Section 2.1.1, Z is a sufficient statistic for making optimum decisions:

    Z[Y] = (1/2) ln[|RN| / |RS + RN|] + Y* [RN^{-1} - [RS + RN]^{-1}] Y / 2.

Since the matrices RS and RN are known, (1/2) ln[|RN| / |RS + RN|] is a known constant. Consequently, by the monotone property of likelihood ratios, a sufficient statistic for making optimum decisions is the modified log-likelihood ratio, Z[Y], where

    Z[Y] = Y* [RN^{-1} - [RS + RN]^{-1}] Y.     (3.1)

The sufficient statistic, Z[Y], is a quadratic form. Z[Y] must be interpreted in a manner that isolates RS and preserves the cyclic structure of RS. Three frequently mentioned interpretations of Z[Y] are presented below as a contrast to the interpretations presented in this dissertation. A direct numerical evaluation of Eq. 3.1 is sketched first.
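Equation 3.1 is simple to evaluate directly when RS and RN are given. The sketch below uses hypothetical toy matrices (not the report's model) and confirms that the statistic is larger on average under H1 than under H0.

```python
import numpy as np

rng = np.random.default_rng(2)
KP = 8
idx = np.arange(KP)
RS = 0.8 ** np.abs(idx[:, None] - idx[None, :])   # assumed signal autocorrelation
RN = np.eye(KP)                                   # assumed white noise autocorrelation

Q = np.linalg.inv(RN) - np.linalg.inv(RS + RN)    # kernel of the quadratic form, Eq. 3.1

def Z(Y):
    # Y* Q Y for each row of Y
    return np.einsum('ij,jk,ik->i', Y.conj(), Q, Y).real

Y0 = rng.multivariate_normal(np.zeros(KP), RN, size=5000)        # observations under H0
Y1 = rng.multivariate_normal(np.zeros(KP), RS + RN, size=5000)   # observations under H1
print("mean Z under H0:", Z(Y0).mean())
print("mean Z under H1:", Z(Y1).mean())   # larger under H1
```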

1. Triangularization. If RN^{-1} - [RS + RN]^{-1} is a positive Hermitian matrix, there exists (Ref. 11) a lower triangular matrix, B, such that

    RN^{-1} - [RS + RN]^{-1} = B* B.

The detector becomes an energy detector with prefiltering,

    Z[Y] = |BY|²

and is shown in Fig. 3.1.

[Fig. 3.1. Triangularization: Y passes through the filter B, and the squared norm |BY|² is Z[Y].]

2. Simultaneous Diagonalization. Let RN^{-1} be a positive definite Hermitian matrix and [RS + RN]^{-1} be a Hermitian matrix. There then exists (Appendix A) a matrix D and a real diagonal matrix, Γ, such that

    RN^{-1} = D* D

and

    [RS + RN]^{-1} = D* Γ D.

Z[Y] becomes

    Z[Y] = |[I - Γ]^{1/2} D Y|².

Z[Y] is still an energy detector with prefiltering. In this interpretation there is a term identifiable with the input signal-to-noise ratio. The detector is implemented in Fig. 3.2.

[Fig. 3.2. Simultaneous diagonalization: Y passes through D, then through [I - Γ]^{1/2}, and the squared norm is Z[Y].]

3. Estimator-Correlator. Given Y = S + N, Kailath (Ref. 16) shows that the linear filter, H, that generates the minimum mean-square estimate, Ŝ, of S from Y has the form

    H = RS [RS + RN]^{-1}.     (3.2)

Multiply both sides of Eq. 3.2 on the left by RN^{-1} and rearrange:

    RN^{-1} H = RN^{-1} - [RS + RN]^{-1}.     (3.3)

Substitute Eq. 3.3 into Eq. 3.1:

    Z[Y] = Y* RN^{-1} H Y.     (3.4)

If RN^{-1} is a positive definite Hermitian matrix, it can be triangularized (Ref. 12):

    RN^{-1} = F* F

where F is lower triangular. Consequently Eq. 3.4 becomes

    Z[Y] = [Y* F*][F Ŝ]

where Ŝ = HY. Besides providing a sufficient statistic for making optimum decisions, this interpretation of Z[Y] also generates the minimum mean-square estimate of S. The block diagram for Z[Y] is shown in Fig. 3.3.

[Fig. 3.3. Estimator-correlator: Y is filtered by H to form Ŝ; FY and FŜ are formed, and the conjugate transpose of FY multiplies FŜ to give Z[Y].]

3.2.2 Expansion of the Sufficient Statistic. To preserve the cyclic structure of RS, Z[Y] must be expanded to isolate RS. The expression for Z[Y] is repeated below:

    Z[Y] = Y* [RN^{-1} - [RS + RN]^{-1}] Y.     (3.1)

RS must be isolated in RN^{-1} - [RS + RN]^{-1} in order to isolate RS in Z[Y]. This can be partially accomplished by use of the following matrix identity. RS and RN are both positive definite. Then

    [RS + RN]^{-1} = RN^{-1} - RN^{-1} [RS^{-1} + RN^{-1}]^{-1} RN^{-1}.     (3.3)

Substitute Eq. 3.3 into Eq. 3.1. Z[Y] becomes

    Z[Y] = Y* RN^{-1} [RS^{-1} + RN^{-1}]^{-1} RN^{-1} Y.     (3.4)

Now RS must be further isolated in order to preserve the cyclic structure of RS. RS is a positive definite Hermitian matrix and RN is positive definite. There then exists a matrix of eigenvectors, Q, and a real diagonal matrix of eigenvalues, Λ, such that (Appendix A)

    RS = Q* Q     (3.5)

and

    RN = Q* Λ Q.

Since RS is the correlation matrix of a WSCS process, it can also be expressed as

    RS = M_K* C* C M_K

where M_K and C are lower triangular matrices. It is then possible to find (Appendix A) a unitary matrix, W, such that

    Q = W C M_K.     (3.6)

Substitute Eq. 3.6 into Eqs. 3.5 and then substitute into the term [RS^{-1} + RN^{-1}]^{-1} of Eq. 3.4. Z[Y] then becomes

    Z[Y] = Y* RN^{-1} M_K* C* W* [I + Λ^{-1}]^{-1} W C M_K RN^{-1} Y
         = |[I + Λ^{-1}]^{-1/2} W C M_K RN^{-1} Y|².     (3.7)

The cyclic structure of RS is thus isolated in the terms C M_K. The detector is implemented as shown in Fig. 3.4.

[Fig. 3.4. Optimum detector for WSCS processes: Y is filtered by RN^{-1}, M_K, C, W, and the weighting filter [I + Λ^{-1}]^{-1/2} in cascade; the squared norm of the result is Z[Y].]

A numerical sketch of this structure follows.
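The pipeline of Eq. 3.7 can be checked against the direct quadratic form of Eq. 3.1. The sketch below builds the factors for an assumed toy model (the simultaneous diagonalization follows the Appendix A idea; all matrices are illustrative, not the report's example).

```python
import numpy as np

def factor_star(R):
    # Lower triangular F with R = F* F (same helper as the Section 1.2.3 sketch)
    J = np.eye(len(R))[::-1]
    Lb = np.linalg.cholesky(J @ R @ J)
    return (J @ Lb @ J).conj().T

rng = np.random.default_rng(3)
P, K = 2, 3
KP = P * K
idx = np.arange(KP)
RS = 0.7 ** np.abs(idx[:, None] - idx[None, :])   # assumed WSCS signal autocorrelation
RN = 0.5 * np.eye(KP)                             # assumed noise autocorrelation

# Factor RS = MK* C* C MK as in Section 1.2.3
A = RS[:P, :P]
M = factor_star(A)
MK = np.kron(np.eye(K), M)
MKi = np.linalg.inv(MK)
C = factor_star(MKi.conj().T @ RS @ MKi)

# Find unitary W and diagonal Lambda with RN = MK* C* W* Lam W C MK
G = C @ MK
Gi = np.linalg.inv(G)
lam, U = np.linalg.eigh(Gi.conj().T @ RN @ Gi)    # noise-to-signal eigenvalues
W = U.conj().T

def Z_opt(Y):
    # Eq. 3.7: noise reduction, signal enhancement, whitening, weighting, energy
    V = W @ C @ MK @ np.linalg.inv(RN) @ Y
    Vp = V / np.sqrt(1.0 + 1.0 / lam)             # weighting filter [I + Lam^-1]^(-1/2)
    return np.linalg.norm(Vp) ** 2

Y = rng.normal(size=KP)
Z_direct = Y @ (np.linalg.inv(RN) - np.linalg.inv(RS + RN)) @ Y   # Eq. 3.1
print(Z_opt(Y), Z_direct)   # agree to rounding error
```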

3.3 Detector Description

The optimum detector for WSCS processes designed in Section 3.2 is described here. Each of the detector blocks appearing in Fig. 3.4 is called a filter and is explained separately.

Noise Reduction

RN^{-1} is a noise reduction filter. The noise reduction has the character of reducing the input power spectrum to RN^{-1} by the square of the noise power spectrum. If the noise power spectrum is stable with a few large isolated spikes, the noise reduction takes on the character of a notch filter. It should be noted that this filter is unrealizable in the sense that future inputs are required to form the present output.

Signal Enhancement

The filter combination M_K C is a signal enhancement filter. The signal enhancement has the character of enhancing the input power spectrum to M_K C by the signal power spectrum. For WSCS processes the structure of the signal enhancement is identifiable with the cyclic structure of RS. M_K is a modulation emphasis filter: it emphasizes each period (P elements) of its input by the modulation matrix M. C is a combining filter: it combines periods of its input, which have each been emphasized by M, according to the carrier matrix.

Whitening

W is a whitening filter. It whitens in the sense that under either H0 or H1, the output vector has independent components with different variances. Let

    V = W C M_K RN^{-1} Y     (3.8)

with autocorrelation matrix RV:

    RV = E{V V*} = W C M_K RN^{-1} E{Y Y*} RN^{-1} M_K* C* W*.     (3.9)

Now

    E{Y Y*} = M_K* C* W* Λ W C M_K under H0;  M_K* C* W* [I + Λ] W C M_K under H1.     (3.10)

Substitute Eqs. 3.10 into Eq. 3.9. Then

    RV = Λ^{-1} under H0;  Λ^{-1}[I + Λ^{-1}] under H1     (3.11)

and the elements of V are independent for the Gaussian input Y under H0 and H1. It should again be noted that W is an unrealizable filter.

Weighting

[I + Λ^{-1}]^{-1/2} is a weighting filter. Λ is a real diagonal matrix with diagonal elements λn, the noise-to-signal ratio behavior on the nth eigenvector. The eigenvector noise-to-signal ratio is inversely proportional to the input signal-to-noise ratio (Appendix D). If the input signal-to-noise ratio is small, the λn are large, and

    [I + Λ^{-1}]^{-1/2} ≈ I     (3.12)

where ≈ is read "approximately equal to." On the other hand, if the input signal-to-noise ratio is large, the λn are small, and

    [I + Λ^{-1}]^{-1/2} ≈ Λ^{1/2}.     (3.13)

The weighting filter modifies the detector structure for departure from the small input signal-to-noise ratio case. The detector structure in Fig. 3.4 with the weighting filter bypassed is the small input signal-to-noise ratio approximation of the detector. This is an attractive feature in the sense that the small input signal-to-noise ratio form of the detector is directly identifiable without making approximations to all the filters.

3.4 Summary

The optimum detector for Gaussian WSCS processes in additive Gaussian noise is designed so as to preserve the cyclic structure of the signal autocorrelation matrix. The detector is a noise

reduction filter followed by a signal enhancement filter followed by an energy detector. The structure of the signal enhancement filter is clearly identifiable with the cyclic structure of the signal autocorrelation matrix. The low input signal-to-noise ratio form of the detector is directly identifiable by making an approximation in only one of the detector filters.

CHAPTER IV
PERFORMANCE EVALUATION OF THE OPTIMUM DETECTOR

4.1 Introduction

The performance of the optimum detector is evaluated and explained. A signal model is described for use in evaluating the performance, and expressions for characterizing performance are derived.

4.2 Model Description

The signal and noise models are selected to permit meaningful detector evaluation with a minimum number of parameters.

4.2.1 Noise Model. The noise model is selected to permit identification of performance characteristics with the signal model characteristics. The noise model selected is a real zero mean wide sense stationary white Gaussian noise with power spectral density N0.

4.2.2 Signal Model. The signal model selected is a real zero mean WSCS Gauss-Markov process. This model is selected because the behavior of the correlation function is completely characterized by the correlation coefficient. The signal, S, is the sampled version of the time continuous WSCS process

    s(t) = x(t) cos(2πf0t + β).

x(t) is a real zero mean wide sense stationary Gauss-Markov process with autocorrelation function Rx(τ) = Rx(0) e^{-α|τ|} for some α > 0, and β and f0 are known. The autocorrelation function of s(t) is

    Rs(tn, tm) = (Rx(tm - tn)/2) {cos[2πf0(tm + tn) + 2β] + cos[2πf0(tm - tn)]}.

Sample s(t) at the rate Ts = 1/(Pf0) so that there are P samples in a period 1/f0. Sample at the time instants tn = Ts(n-1) until there are KP samples, i.e., n = 1, 2, 3, ..., KP. The autocorrelation matrix of S is RS with elements rs(u, v) for u, v = 1, 2, 3, ..., KP:

    rs(u, v) = (Rx(0)/2) ρs^{|v-u|} {cos[(2π/P)(u+v-2) + 2β] + cos[(2π/P)(v-u)]}

where ρs = Rx(1/Pf0)/Rx(0) is the sample-to-sample correlation coefficient, so that ρs^{n} = Rx(n/Pf0)/Rx(0). Subdivide RS into PxP submatrices A_nm for n, m = 1, 2, 3, ..., K:

    A_nm = [rs[(n-1)P+u, (m-1)P+v]], u, v = 1, 2, 3, ..., P.

Now

    rs[(n-1)P+u, (m-1)P+v]
        = (Rx(0)/2) ρs^{|(m-n)P+v-u|} {cos[(2π/P)[(m+n-2)P+u+v-2] + 2β] + cos[(2π/P)[(m-n)P+v-u]]}
        = (Rx(0)/2) ρs^{|(m-n)P+v-u|} {cos[(2π/P)(u+v-2) + 2β] + cos[(2π/P)(v-u)]}.

Consider the following three cases for rs[(n-1)P+u, (m-1)P+v]:

1. n = m:

    rs[(n-1)P+u, (n-1)P+v] = (Rx(0)/2) ρs^{|v-u|} {cos[(2π/P)(u+v-2) + 2β] + cos[(2π/P)(v-u)]}

2. n = 1, m = 2:

    rs[u, P+v] = (Rx(0)/2) ρs^{|P+v-u|} {cos[(2π/P)(u+v-2) + 2β] + cos[(2π/P)(v-u)]}

3. m > n:

    rs[(n-1)P+u, (m-1)P+v] = (Rx(0)/2) ρs^{(m-n)P+v-u} {cos[(2π/P)(u+v-2) + 2β] + cos[(2π/P)(v-u)]}
                            = ρ^{m-n-1} (Rx(0)/2) ρs^{P+v-u} {cos[(2π/P)(u+v-2) + 2β] + cos[(2π/P)(v-u)]}

where ρ = ρs^P is the period-to-period correlation coefficient. It is possible to conclude from these three cases that for a WSCS Gauss-Markov process

    A_nm = ρ^{|m-n|-1} A12 for m > n;  A for n = m;  ρ^{|m-n|-1} A12* for n > m.

RS is then completely specified once A, A12, and ρ are known, and A and A12 are in turn completely specified once P, K, ρs, and Rx(0) are known. A short construction of RS for this model is sketched below.
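The sketch below builds RS directly from the element formula above (parameters are assumed for illustration) and checks both the cyclic block structure and the Markov relation between the off-diagonal blocks.

```python
import numpy as np

P, K, rho_s, beta, Rx0 = 4, 3, 0.707, 0.0625, 1.0
KP = P * K

def r_s(u, v):   # u, v = 1, ..., KP, as in the text
    return (Rx0 / 2) * rho_s ** abs(v - u) * (
        np.cos(2 * np.pi / P * (u + v - 2) + 2 * beta)
        + np.cos(2 * np.pi / P * (v - u)))

RS = np.array([[r_s(u, v) for v in range(1, KP + 1)] for u in range(1, KP + 1)])

# Cyclic block structure: A_{n+1,m+1} = A_{n,m}
print("cyclic:", np.allclose(RS[:P, :P], RS[P:2*P, P:2*P]))

# Markov structure: A_13 = rho * A_12 with rho = rho_s**P
rho = rho_s ** P
print("Markov blocks:", np.allclose(RS[:P, 2*P:3*P], rho * RS[:P, P:2*P]))
```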

4.3 Evaluation Procedures

The procedures used in evaluating the performance are presented, and expressions for performance characterization are derived. The block diagram of the optimum detector is repeated below as Fig. 3.4. The techniques used in performance evaluation and characterization only require knowledge of the eigenvalues, λn.

[Fig. 3.4. Optimum detector for WSCS processes (repeated): Y is filtered by RN^{-1}, M_K, C, W, and [I + Λ^{-1}]^{-1/2} in cascade; the squared norm of the result is Z[Y].]

4.3.1 ROC Generation. Given the statistics of Z[Y] under H0 and H1, f[Z[Y]|H0] and f[Z[Y]|H1], the probability of detection, PD, and the probability of false alarm, PFA, are (see Section 2.1.2)

    PD = ∫_c^∞ f[Z[Y]|H1] dZ     (4.1)

and

    PFA = ∫_c^∞ f[Z[Y]|H0] dZ.     (4.2)

The ROC's will only be generated for the ROC evaluation region. The ROC evaluation region is either that region of the ROC where 0.01 ≤ PD ≤ 0.99 and 0.01 ≤ PFA ≤ 0.9, or that region of f[Z[Y]|H0] and f[Z[Y]|H1] such that, when f[Z[Y]|H0] and f[Z[Y]|H1] are substituted in Eqs. 4.1 and 4.2, 0.01 ≤ PD ≤ 0.99 and 0.01 ≤ PFA ≤ 0.9. The exact meaning will be clear from the context.

It was shown in Section 3.3 that the KP dimensional vector V,

    V = W C M_K RN^{-1} Y,

is a zero mean Gaussian vector with independent components, Eq. 3.11. It then follows that the correlation matrix of V', where

    V' = [I + Λ^{-1}]^{-1/2} V,

is

    RV' = Λ^{-1}[I + Λ^{-1}]^{-1} under H0;  Λ^{-1} under H1.     (4.3)

Consequently, under H0 and H1, Z[Y] is the sum of the squares of independent Gaussian random variables with different variances, where the variances are the diagonal elements of RV'. f[Z[Y]|H0] and f[Z[Y]|H1] are obtained by taking the inverse Fourier Transform of the characteristic function under H0 and H1 and integrating as in Eqs. 4.1 and 4.2. See Section 2.2 for more details on the procedure.

4.3.2 Input Signal-to-Noise Ratio. The input signal-to-noise ratio, SNRI, is a meaningful measure of the amount of signal relative to the amount of noise. Let the diagonal elements of the signal and noise correlation matrices be respectively rS(n) and rN(n) for n = 1, 2, 3, ..., KP. SNRI is defined as

    SNRI = [(1/KP) Σ_{n=1}^{KP} rS(n)] / [(1/KP) Σ_{n=1}^{KP} rN(n)].     (4.4)

For the noise model under consideration, rN(n) = N0. Therefore

    (1/KP) Σ_{n=1}^{KP} rN(n) = N0.     (4.5)

Substitute Eq. 4.5 into Eq. 4.4. Then

    SNRI = (1/KP) Σ_{n=1}^{KP} rS(n)/N0 = (1/KP) TR[RS RN^{-1}]

where TR[·] is the trace of the matrix in brackets. It is shown in Appendix A that

    TR[RS RN^{-1}] = Σ_{n=1}^{KP} λn^{-1}

where the λn are the eigenvalues. It then follows that

    SNRI = (1/KP) Σ_{n=1}^{KP} λn^{-1}.

4.3.3 Detectability Index. A meaningful measure of the detector performance is the point where the ROC curves cross the negative diagonal, PFA + PD = 1. The detectability index, d', is the measure of that crossing point and is easily calculated for a normal ROC. Assume that the ROC for the optimum detector is normal or can be closely approximated by a normal ROC. d' is then defined

as (Ref. 2)

    d' = (E{Z[Y]|H1} - E{Z[Y]|H0}) / [2(E{Z[Y]|H1} - E{Z[Y]|H0})]^{1/2}.     (4.6)

From Section 4.3.1,

    E{Z[Y]|H0} = Σ_{n=1}^{KP} λn^{-1}(1 + λn^{-1})^{-1}     (4.7)

and

    E{Z[Y]|H1} = Σ_{n=1}^{KP} λn^{-1}.     (4.8)

Substitute Eqs. 4.7 and 4.8 into Eq. 4.6. Then

    d' = [(1/2) Σ_{n=1}^{KP} λn^{-2}(1 + λn^{-1})^{-1}]^{1/2}.     (4.9)

The eigenvalues are initially calculated for a SNRI of unity because any other SNRI can be obtained by multiplying the eigenvalues by a constant. The performance is obtained for the detectability indices of 0.25, 0.5, 1.0, and 1.5. β is set at 0.0625 so as not to sample at the zeroes of cos 2πf0t. The eigenvalue multiplicative modifying constant is found by the Newton-Raphson method, searching for the stated d', and the SNRI follows from the new eigenvalues.
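The eigenvalue scaling procedure can be sketched numerically as follows; it uses the RS constructed in Section 4.2.2, takes RN = I (N0 = 1, an assumption for illustration), and replaces the report's Newton-Raphson search with a simple bisection.

```python
import numpy as np

P, K, rho_s, beta, Rx0 = 4, 4, 0.707, 0.0625, 1.0
KP = P * K
u = np.arange(1, KP + 1, dtype=float)
RS = (Rx0 / 2) * rho_s ** np.abs(u[None] - u[:, None]) * (
    np.cos(2 * np.pi / P * (u[:, None] + u[None] - 2) + 2 * beta)
    + np.cos(2 * np.pi / P * (u[None] - u[:, None])))

# With RN = I, the eigenvalues of RS RN^-1 are the 1/lambda_n of the text.
inv_lam = np.linalg.eigvalsh(RS)
inv_lam = inv_lam / inv_lam.mean()      # normalize so SNRI = (1/KP) sum(1/lam) = 1

def d_prime(k):                          # Eq. 4.9 with eigenvalues scaled by k
    il = k * inv_lam
    return np.sqrt(0.5 * np.sum(il**2 / (1.0 + il)))

target = 1.0                             # desired detectability index
lo, hi = 1e-6, 1e6
for _ in range(200):                     # bisection on the scaling constant
    mid = np.sqrt(lo * hi)
    lo, hi = (lo, mid) if d_prime(mid) > target else (mid, hi)
print(f"SNRI needed for d' = {target}: {mid:.4f}")   # SNRI = mid after normalization
```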

4.4 Performance

The ROC's and performance data are presented and discussed, followed by specific conclusions. It is observed, in the ROC evaluation region, that the ROC's are binormal. This only indicates that the optimum ROC's behave as binormal ROC's in the ROC evaluation region; see Section 2.1.4 for more details on binormal ROC's. Six representative ROC's are presented in Figs. 4.1 to 4.6. Since a binormal ROC curve can be parameterized by two parameters, the slope and detectability index, the optimum ROC's are summarized in Table 4.1. The parameters listed in Table 4.1 are defined below.

K: number of periods in an observation
P: number of samples in a period
ρ: period-to-period correlation coefficient, ρ = ρs^P
ρs: sample-to-sample correlation coefficient
d': desired detectability index
d'_e: actual detectability index
SNRI: input signal-to-noise ratio
SLOPE: slope of the ROC curve

For a binormal ROC (see Appendix C),

[Fig. 4.1. ROC curves for optimum detector for K = 4, P = 4, ρs = 0.25.]

[Fig. 4.2. ROC curves for optimum detector for K = 16, P = 4, ρs = 0.25.]

[Fig. 4.3. ROC curves for optimum detector for K = 64, P = 4, ρs = 0.25.]

[Fig. 4.4. ROC curves for optimum detector for K = 1, P = 16, ρs = 0.707.]

[Fig. 4.5. ROC curves for optimum detector for K = 4, P = 16, ρs = 0.707.]

[Fig. 4.6. ROC curves for optimum detector for K = 16, P = 16, ρs = 0.707.]

    d'_e = 2(m1 - m0) / (σ1 + σ0)     (2.11)

and

    SLOPE = σ0 / σ1     (2.12)

where

1. m0 is the mean of Z[Y] under H0,
2. m1 is the mean of Z[Y] under H1,
3. σ0² is the variance of Z[Y] under H0, and
4. σ1² is the variance of Z[Y] under H1.

It is seen that the slope is a measure of the difference in the variance of, and d'_e is a measure of the difference in the mean of, Z[Y] under H0 and H1. The relationships between the various parameters in Table 4.1 are plotted in Figs. 4.7-4.9 for a representative example as an aid in interpreting Table 4.1.

It is seen from Table 4.1 and Fig. 4.7 that for a given K, P, ρs, the ROC is normal for low d', and d'_e equals d'. However, as d' increases, the ROC deviates from the normal, and d'_e becomes smaller than d' for small K and is approximately d' for large K. This is due to the degree of similarity between f[Z[Y]|H0] and f[Z[Y]|H1] in the ROC evaluation region.

                                   d' = 0.25            d' = 0.5             d' = 1.0             d' = 1.5
 K   P   ρ          ρs          d'e   SNRI  SLOPE    d'e   SNRI  SLOPE    d'e   SNRI  SLOPE    d'e   SNRI  SLOPE
 4   4   0.25       0.707       0.25  0.059 1.000    0.50  0.128 0.980    0.97  0.300 0.860    1.42  0.520 0.854
16   4   0.25       0.707       0.25  0.028 1.000    0.51  0.059 1.000    1.03  0.130 0.921    1.50  0.210 0.880
64   4   0.25       0.707       0.25  0.014 1.000    0.50  0.028 1.000    1.00  0.058 0.990    1.50  0.091 0.954
 4   4   0.00391    0.250       0.25  0.068 1.000    0.50  0.146 0.980    0.98  0.331 0.890    1.41  0.559 0.860
16   4   0.00391    0.250       0.25  0.033 1.000    0.50  0.069 1.000    0.98  0.146 0.940    1.50  0.234 0.905
64   4   0.00391    0.250       0.25  0.016 1.000    0.55  0.033 1.000    1.00  0.069 0.990    1.50  0.106 0.991
 1  16   0.00390    0.707       0.25  0.055 1.000    0.50  0.121 0.980    0.91  0.290 0.851    1.39  0.512 0.784
 4  16   0.00390    0.707       0.26  0.026 1.000    0.50  0.055 1.000    1.00  0.121 0.910    1.41  0.199 0.905
16  16   0.00390    0.707       0.25  0.013 1.000    0.51  0.026 1.000    1.00  0.055 0.931    1.50  0.086 0.924
 1  16   2.33x10^-10 0.250      0.25  0.074 1.000    0.50  0.157 1.000    0.97  0.356 0.860    1.45  0.599 0.829
 4  16   2.33x10^-10 0.250      0.25  0.036 1.000    0.52  0.074 1.000    1.00  0.157 0.990    1.55  0.251 0.954
16  16   2.33x10^-10 0.250      0.25  0.018 1.000    0.51  0.036 1.000    1.00  0.074 0.990    1.49  0.114 0.965
 1  16   0.250      0.917       0.25  0.038 1.000    0.48  0.087 0.890    0.88  0.225 0.815    1.13  0.420 0.730
 4   4   0.707      0.917       0.25  0.039 1.000    0.48  0.090 0.927    0.85  0.232 0.806    1.14  0.430 0.826
 1  16   0.707      0.979       0.25  0.030 1.000    0.48  0.070 0.930    0.85  0.189 0.800    1.10  0.366 0.666

Table 4.1. Summary of optimum detector performance

[Fig. 4.7. Performance summary of the optimum detector as a function of d'_e for ρs = 0.707: (a) d' vs. d'_e for P = 16; (b) d' vs. d'_e for P = 4; (c) SLOPE vs. d'_e for P = 16; (d) SLOPE vs. d'_e for P = 4. Curves are shown for several values of K.]

[Fig. 4.8. Performance summary of the optimum detector as a function of SNRI for P = 16: (a) d'_e vs. SNRI for ρs = 0.707; (b) SLOPE vs. SNRI for ρs = 0.707; (c) K vs. SNRI for ρs = 0.707; (d) ρ vs. SNRI for K = 1.]

[Fig. 4.9. P as a function of SNRI for ρs = 0.707, for d' = 0.25 and d' = 1.5: (a) P vs. SNRI for KP = 16; (b) P vs. SNRI for KP = 256.]

The variances of Z[Y] under H0 and H1 are approximately equal, and the ROC is approximately normal, for low SNRI (Figs. 4.8a and 4.8b). However, as SNRI increases, the variances of Z[Y] under H0 and H1 begin to differ more, and the ROC becomes less normal. The fact that the ROC is binormal indicates that f[Z[Y]|H0] and f[Z[Y]|H1] behave approximately as Gaussian probability density functions in the ROC evaluation region.

The degree of similarity between f[Z[Y]|H0] and f[Z[Y]|H1] decreases as ρ or ρs increases for a given P and any K (Figs. 4.8b and 4.8d). This is due to an increasing difference between the detector input statistics under H0 and H1. Some of this effect can be reduced by increasing K, but the value of ρ or

ρs is the controlling factor.

The SNRI required for any d' decreases as the observation time (KP) increases for constant ρ or ρs and constant period (P). This is expected and is a result of the integrating filter (Fig. 4.8c).

The SNRI required for any d' decreases as ρ or ρs increase for a constant period (P) and a constant observation time (KP). This is a result of the increasing statistical difference between H0 and H1 as ρ or ρs increase (Fig. 4.8d).

The SNRI required for any d' decreases as the period (P) is increased for a constant observation time (KP) and a constant ρ or ρs (Fig. 4.9). This decrease in required SNRI is slight but does indicate a trend.

The performance results indicate that the only ways to reduce the required SNRI for a given d' are to increase ρ or ρs, P, K, or any combination thereof. One usually has no control over the parameters ρ, ρs, and P. Consequently the only way to improve performance is to increase KP by increasing K.

It should be observed that the signal model selected indicates that increasing the sampling rate, to increase ρs (and hence ρ) and P, improves performance. The signal model further allows for a never ending increase in sampling rate. This is a result of the infinite bandwidth of the signal s(t). In actuality signals have finite

bandwidth. Consequently increasing the sampling rate beyond the Nyquist rate may improve the performance only slightly. However increasing K will improve the performance much more than increasing the sampling rate. This is a consequence of the fact that once a CS process is specified, P and ρ_s are usually fixed.

4.5 Summary

Performance results are derived for a real zero mean WSCS Gauss-Markov process with period P. The autocorrelation matrix for this process is completely specified once the autocorrelation matrix for any period (A), the correlation matrix between adjacent periods (A_12), and the period-to-period correlation coefficient (ρ) are known. It is found that the required SNRI for a given d' can be reduced by increasing the sample-to-sample correlation coefficient (ρ_s) or the period-to-period correlation coefficient (ρ), the size of the period (P), or the number of periods observed (K). It is noted that in actuality the Nyquist sampling rate imposes a severe limitation on the control one has over ρ, ρ_s, and P by varying the sampling rate. Once a WSCS process is specified, P and ρ or ρ_s are specified, and improving performance is limited to increasing the observation time (KP).
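As a present-day illustration of this structure (a minimal Python/NumPy sketch, not part of the original analysis; the block rule for non-adjacent periods, block (i, j) = ρ^(|i-j|-1) A_12 for j > i, is an assumption extrapolated from the A_13 = ρ_s^P A_12 relation worked out in Appendix E):

    import numpy as np

    def wscs_autocorrelation(A, A12, rho, K):
        # Assemble the KP x KP autocorrelation matrix of a WSCS process
        # from the within-period block A, the adjacent-period block A12,
        # and the period-to-period correlation coefficient rho.
        # Assumed block rule: block (i, j), j > i, equals rho**(j-i-1) * A12.
        P = A.shape[0]
        R = np.zeros((K * P, K * P))
        for i in range(K):
            for j in range(K):
                if i == j:
                    block = A
                elif j > i:
                    block = rho ** (j - i - 1) * A12
                else:
                    block = (rho ** (i - j - 1) * A12).T
                R[i * P:(i + 1) * P, j * P:(j + 1) * P] = block
        return R

For the P = 2, K = 3 example of Appendix E, A = [[1, -ρ_s], [-ρ_s, 1]] and A_12 = [[ρ_s², -ρ_s³], [-ρ_s, ρ_s²]] reproduce the 6 x 6 matrix shown there.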

CHAPTER V

A SUBOPTIMUM DETECTOR

5.1 Introduction

An optimum detector was derived in Chapter III, and its performance was evaluated in Chapter IV. A suboptimum detector that performs almost as well as, and is easier to implement than, the optimum detector is highly desirable; it is usually a suboptimum detector that is implemented in practice. In this chapter the suboptimum detector is derived, and its approximate performance for the signal model presented in Chapter IV is evaluated and compared to the optimum performance.

5.2 Suboptimum Detector

The block diagram of the optimum detector is repeated below in Fig. 3.4.

[Fig. 3.4. Optimum detector for WSCS processes.]

It was shown in Section 3.4.1 that the weighting filter,

[I + Λ^{-1}]^{-1}, approaches the identity filter, I, for small SNRI. The weighting filter modifies the optimum structure for departures from the small SNRI case. The small SNRI form of the optimum detector is the suboptimum detector studied here and is shown in Fig. 5.1. Z_s[Y] is the suboptimum decision variable.

The suboptimum detector is expected to perform as well as the optimum detector for small SNRI. It is also expected that the suboptimum performance is close to the optimum performance for large SNRI: for large SNRI the signal is easily detected, and the exact detector structure should not be critical.

It was shown in Section 3.3 that V is a Gaussian vector with independent components under H_0 and H_1, Eq. 3.11, where V = WCMR_N^{-1}Y.

[Fig. 5.1. A suboptimum detector for WSCS processes: the observation Y is filtered to form V, and the suboptimum decision variable is Z_s[Y].]
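Schematically, and only as a sketch (Python; the filtered vector V is assumed to have already been formed by the cascade of filters in Fig. 5.1, and the threshold is assumed set by the likelihood-ratio criterion), the suboptimum detector reduces to an energy detector on V:

    import numpy as np

    def suboptimum_statistic(V):
        # Z_s[Y]: energy of the filtered observation vector V
        return np.sum(np.abs(V) ** 2)

    def decide(V, threshold):
        # Compare Z_s[Y] to the threshold; True means choose H1
        return suboptimum_statistic(V) >= threshold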

Consequently Z_s[Y] is the sum of squares of independent Gaussian random variables under H_0 and H_1. It follows that the mean and variance of Z_s[Y] under H_0 and H_1 are

\[
\text{mean} =
\begin{cases}
\displaystyle\sum_{n=1}^{KP} \lambda_n^{-1}, & H_0 \\[2ex]
\displaystyle\sum_{n=1}^{KP} \lambda_n^{-1}\bigl[1 + \lambda_n^{-1}\bigr], & H_1
\end{cases}
\tag{5.1}
\]

and

\[
\text{variance} =
\begin{cases}
\displaystyle 2\sum_{n=1}^{KP} \lambda_n^{-2}, & H_0 \\[2ex]
\displaystyle 2\sum_{n=1}^{KP} \lambda_n^{-2}\bigl[1 + \lambda_n^{-1}\bigr]^{2}, & H_1 .
\end{cases}
\tag{5.2}
\]

5.3 Evaluation Procedures

The ability to parameterize the suboptimum ROC in the ROC evaluation region by one or two parameters is highly desirable. This would facilitate the comparison between the optimum and suboptimum performance. According to the Central Limit Theorem, f[Z_s[Y]|H_0] and f[Z_s[Y]|H_1] may be approximated by normal density functions with means and variances as in Eqs. 5.1 and 5.2. Consequently the approximation to the suboptimum ROC in the ROC evaluation region

will be a binormal ROC. This approximation is further supported by the fact that the optimum ROC is binormal in the ROC evaluation region. The approximate suboptimum ROC can then be parameterized by a d'_es and a SLOPE_s.

The d'_es and SLOPE_s for a binormal ROC were derived in Appendix C, explained in Chapter II, and are repeated below:

\[
d'_{es} = \frac{2(m_1 - m_0)}{\sigma_1 + \sigma_0}
\tag{2.11}
\]

and

\[
\text{SLOPE}_s = \sigma_0 / \sigma_1
\tag{2.12}
\]

where

1. m_0 is the mean of Z_s[Y] under H_0.
2. m_1 is the mean of Z_s[Y] under H_1.
3. σ_0² is the variance of Z_s[Y] under H_0.
4. σ_1² is the variance of Z_s[Y] under H_1.

The approximate suboptimum performance is then easily calculated given m_0, m_1, σ_0, and σ_1 for the SNRI's and detectability indices used in the optimum performance calculations.
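The calculation is mechanical once the eigenvalues λ_n are available. A minimal sketch (Python with NumPy; the array lam of eigenvalues is assumed given) that evaluates Eqs. 5.1, 5.2, 2.11, and 2.12:

    import numpy as np

    def suboptimum_roc_parameters(lam):
        # lam: eigenvalues lambda_n from the simultaneous diagonalization
        x = 1.0 / lam                         # eigenvector signal-to-noise ratios
        m0 = x.sum()                          # Eq. 5.1, H0
        m1 = (x * (1.0 + x)).sum()            # Eq. 5.1, H1
        v0 = 2.0 * (x ** 2).sum()             # Eq. 5.2, H0
        v1 = 2.0 * (x ** 2 * (1.0 + x) ** 2).sum()   # Eq. 5.2, H1
        s0, s1 = np.sqrt(v0), np.sqrt(v1)
        d_es = 2.0 * (m1 - m0) / (s1 + s0)    # Eq. 2.11
        slope_s = s0 / s1                     # Eq. 2.12
        return d_es, slope_s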

5.4 Performance

The suboptimum performance will be compared with the optimum performance in two ways. The first comparison will be the statistics of the optimum and suboptimum decision variables under H_0 and H_1. This comparison will indicate the stability of the optimum and suboptimum statistics and has a strong bearing on the second comparison. The second comparison will be the optimum and suboptimum ROC's.

5.4.1 Decision Variable Statistics. The means and variances of Z_s[Y] are listed in Eqs. 5.1 and 5.2. The means and variances of Z[Y] are easily calculated, Eq. 4.3, and are listed below:

\[
\text{mean} =
\begin{cases}
\displaystyle\sum_{n=1}^{KP} \lambda_n^{-1}\bigl[1 + \lambda_n^{-1}\bigr]^{-1}, & H_0 \\[2ex]
\displaystyle\sum_{n=1}^{KP} \lambda_n^{-1}, & H_1
\end{cases}
\tag{5.3}
\]

and

\[
\text{variance} =
\begin{cases}
\displaystyle 2\sum_{n=1}^{KP} \lambda_n^{-2}\bigl[1 + \lambda_n^{-1}\bigr]^{-2}, & H_0 \\[2ex]
\displaystyle 2\sum_{n=1}^{KP} \lambda_n^{-2}, & H_1 .
\end{cases}
\tag{5.4}
\]

It is seen that the terms in the expressions for the statistics of Z[Y] and Z_s[Y] vary differently with the eigenvector

signal-to-noise ratio. The different manners in which the terms vary with the eigenvector signal-to-noise ratio are listed below:

\[
\text{optimum mean} \;\propto\;
\begin{cases}
x(1+x)^{-1}, & H_0 \\
x, & H_1
\end{cases}
\tag{5.5}
\]

\[
\text{optimum variance} \;\propto\;
\begin{cases}
x^2(1+x)^{-2}, & H_0 \\
x^2, & H_1
\end{cases}
\tag{5.6}
\]

\[
\text{suboptimum mean} \;\propto\;
\begin{cases}
x, & H_0 \\
x(1+x), & H_1
\end{cases}
\tag{5.7}
\]

and

\[
\text{suboptimum variance} \;\propto\;
\begin{cases}
x^2, & H_0 \\
x^2(1+x)^2, & H_1
\end{cases}
\tag{5.8}
\]

where

1. ∝ is read "proportional to."
2. x is the eigenvector signal-to-noise ratio, λ_n^{-1}.

The expressions in Eqs. 5.5 to 5.8 are plotted in Fig. 5.2. It is seen that the optimum statistics are unbounded under H_1 and bounded under H_0. This is a desirable feature because the statistics under H_0 do not increase unboundedly as the signal power is increased.
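The boundedness under H_0 is immediate from Eqs. 5.5 and 5.6. As a one-line check,

\[
\lim_{x\to\infty} \frac{x}{1+x} = 1
\qquad\text{and}\qquad
\lim_{x\to\infty} \frac{x^2}{(1+x)^2} = 1,
\]

so each H_0 term of the optimum mean and variance saturates at unity no matter how large the eigenvector signal-to-noise ratio becomes, while the corresponding suboptimum terms x and x² grow without bound.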

[Fig. 5.2. Behavior of the terms in the expressions for the mean and variance of the optimum and suboptimum decision variables under H_0 and H_1, plotted against eigenvector SNR on log-log axes: (a) means under H_0; (b) means under H_1; (c) variances under H_0; (d) variances under H_1.]

For low to moderate SNRI the optimum statistics under H_0 and H_1 increase linearly with SNRI. This accounts for the fact that f[Z[Y]|H_0] and f[Z[Y]|H_1] are similar; consequently the SLOPE is unity, and d'_e = d' for low to moderate SNRI. However as SNRI increases, f[Z[Y]|H_0] doesn't change but f[Z[Y]|H_1] continuously changes. This explains the decreasing SLOPE and d'_e for increasingly large SNRI.

The suboptimum statistics are unbounded under H_0 and H_1. This is undesirable because the statistics under H_0 increase unboundedly as signal power increases. It is seen that under H_1 the suboptimum statistics vary nonlinearly with SNRI, while under H_0 they vary linearly with SNRI. This indicates that for low SNRI, f[Z_s[Y]|H_0] and f[Z_s[Y]|H_1] are similar. However for moderate and large SNRI, the difference between f[Z_s[Y]|H_0] and f[Z_s[Y]|H_1] increases nonlinearly with SNRI.

On the basis of statistic stability, the optimum detector is more desirable than the suboptimum detector. This is unexpected. The usual case is for the suboptimum detector to have more stable statistics than the optimum detector, which is tuned to the exact statistics of the input.

5.4.2 ROC Comparison. The suboptimum ROC data is summarized in Table 5.1 for the binormal approximation in the ROC evaluation region. The suboptimum ROC data is calculated with the eigenvalues obtained for evaluating the optimum performance. The

parameters listed in Table 5.1 are defined below.

K: number of periods in an observation
P: number of samples in a period
ρ: period-to-period correlation coefficient, ρ = ρ_s^P (a numerical check is given below)
ρ_s: sample-to-sample correlation coefficient
d': desired detectability index
d'_es: actual suboptimum detectability index
SNRI: input signal-to-noise ratio
SLOPE_s: slope of the suboptimum ROC curve

The relationships between the various parameters in Table 5.1 are plotted in Figs. 5.3-5.5 for the representative example used in Figs. 4.7-4.9 as an aid in interpreting Table 5.1.

It is seen, by comparing Tables 4.1 and 5.1, that the relationships between SLOPE, K, P, d', d'_e, and SNRI are the same for the optimum and suboptimum detectors (compare Figs. 4.7 and 5.3 and Figs. 4.8 and 5.4). For small SNRI, d'_es = d', which is expected because the suboptimum detector is the small SNRI form of the optimum detector (compare Figs. 4.8a and 5.4b). However the divergence of d'_es from d' for moderate to large SNRI is larger than the divergence of d'_e from d'. Simultaneously SLOPE_s is always less than SLOPE. The larger divergence of d'_es and SLOPE_s with increasing SNRI is due to the instability of the suboptimum statistics.
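The relation ρ = ρ_s^P ties the parameter columns of Tables 4.1 and 5.1 together and is easy to check against the tabulated values (Python; values taken from the tables):

    # period-to-period correlation from the sample-to-sample correlation
    print(0.707 ** 4)    # ~0.250    (P = 4,  rho_s = 0.707)
    print(0.250 ** 4)    # ~0.00391  (P = 4,  rho_s = 0.250)
    print(0.917 ** 16)   # ~0.250    (P = 16, rho_s = 0.917)
    print(0.250 ** 16)   # ~2.33e-10 (P = 16, rho_s = 0.250)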

                                      d' = 0.25             d' = 0.5              d' = 1.0              d' = 1.5
 K    P    ρ           ρ_s      d'_es  SNRI   SLOPE_s d'_es  SNRI   SLOPE_s d'_es  SNRI   SLOPE_s d'_es  SNRI   SLOPE_s
 4    4    0.250       0.707    0.25   0.059  0.836   0.50   0.128  0.700   0.92   0.300  0.496   1.27   0.520  0.361
 16   4    0.250       0.707    0.25   0.028  0.914   0.50   0.059  0.830   0.97   0.130  0.690   1.42   0.210  0.576
 64   4    0.250       0.707    0.25   0.014  0.954   0.50   0.028  0.911   0.99   0.058  0.830   1.48   0.091  0.758
 4    4    0.00391     0.250    0.25   0.068  0.882   0.50   0.146  0.778   0.97   0.331  0.608   1.40   0.559  0.479
 16   4    0.00391     0.250    0.25   0.033  0.939   0.50   0.069  0.882   0.99   0.146  0.778   1.47   0.234  0.687
 64   4    0.00391     0.250    0.25   0.016  0.969   0.50   0.033  0.939   1.00   0.069  0.882   1.49   0.106  0.829
 1    16   0.00390     0.707    0.25   0.055  0.806   0.50   0.121  0.652   0.89   0.290  0.437   1.21   0.512  0.304
 4    16   0.00390     0.707    0.25   0.026  0.897   0.50   0.055  0.807   0.97   0.121  0.652   1.40   0.199  0.525
 16   16   0.00390     0.707    0.25   0.013  0.948   0.50   0.026  0.898   0.99   0.055  0.807   1.47   0.086  0.725
 1    16   2.33x10^-10 0.250    0.25   0.074  0.875   0.50   0.157  0.764   0.95   0.356  0.585   1.36   0.599  0.454
 4    16   2.33x10^-10 0.250    0.25   0.036  0.940   0.50   0.074  0.879   0.99   0.157  0.761   1.46   0.251  0.666
 16   16   2.33x10^-10 0.250    0.25   0.018  0.967   0.50   0.036  0.935   1.00   0.074  0.873   1.50   0.114  0.816
 1    16   0.250       0.917    0.25   0.038  0.726   0.47   0.087  0.533   0.82   0.225  0.304   1.05   0.420  0.190
 4    4    0.707       0.917    0.25   0.039  0.729   0.47   0.090  0.538   0.82   0.232  0.310   1.05   0.430  0.195
 1    16   0.707       0.979    0.25   0.030  0.704   0.47   0.070  0.502   0.82   0.189  0.271   1.03   0.366  0.161

Table 5.1. Summary of suboptimum detector performance. For each desired detectability index d', the table lists the actual detectability index d'_es, the required SNRI, and the SLOPE_s of the binormal ROC.

[Fig. 5.3. Performance summary of the suboptimum detector as a function of d'_es for ρ_s = 0.707, curves for several K: (a) d'_es vs. d' for P = 16; (b) d'_es vs. d' for P = 4; (c) SLOPE_s vs. d'_es for P = 16; (d) SLOPE_s vs. d'_es for P = 4.]

[Fig. 5.4. Performance summary of the suboptimum detector as a function of SNRI for P = 16: (a) SLOPE_s vs. SNRI for ρ_s = 0.707; (b) d'_es vs. SNRI for ρ_s = 0.707.]

[Fig. 5.5. Comparison of optimum and suboptimum detectability indexes for ρ_s = 0.707 and P = 16.]

It is finally seen that, for the d' considered, the suboptimum performance, d'_es, is sufficiently close to the optimum performance, d'_e, in the ROC evaluation region to say that the suboptimum detector performs almost as well as the optimum detector (Fig. 5.5). Consequently the optimum detector can be replaced by the suboptimum detector without any appreciable degradation in performance.

5.5 Summary

The suboptimum detector selected is the small SNRI form of the optimum detector. The suboptimum performance is approximated in the ROC evaluation region by a binormal ROC on the basis of the Central Limit Theorem and the optimum ROC. The suboptimum detector performs almost as well as the optimum detector, and the suboptimum ROC is more binormal than the optimum ROC. The relationships between the detector parameters (K, P, ρ_s, ρ) are the same for the optimum and suboptimum detectors. However the suboptimum statistics are unbounded with increasing SNRI under H_0 and H_1, while the optimum statistics are bounded under H_0 and unbounded under H_1.

CHAPTER VI

CONCLUSIONS

6.1 Summary and Conclusions

The problem studied in this dissertation is the fixed time forced choice detection of Cyclo-Stationary (CS) processes in additive noise. CS processes are defined in Chapter I as those nonstationary random processes possessing autocorrelation matrices with a cyclic structure. CS processes normally arise from sampling continuous time CS processes. However other sampling schemes, such as multiplexing samples from different stationary random processes or multiplexing samples from the sensors of an array observing a stationary random process, also generate CS processes.

It is highly desirable to preserve the cyclic structure of the CS autocorrelation matrix in optimum detector expansions. Preserving the cyclic structure permits identification of the detector performance with specific properties of the autocorrelation matrix, as well as introducing a new interpretation for detector expansions. The optimum detector becomes, when the cyclic structure of the CS autocorrelation matrix is preserved, a noise reduction filter followed by a signal enhancement filter followed by energy detection. The structure of the signal enhancement filter is clearly identifiable with the cyclic structure of the CS autocorrelation matrix.

The performance of the optimum detector is evaluated in Chapter IV for a WSCS Gauss-Markov random process. The performance evaluation is limited to a region called the ROC evaluation region, where 0.01 < P_D < 0.99 and 0.01 < P_FA < 0.9. The optimum ROC behaves as a binormal ROC in the ROC evaluation region. This does not imply that the complete optimum ROC is binormal. The optimum ROC is normal for low SNRI and deviates from that as SNRI increases. Simultaneously the optimum performance is the same as the desired performance for low SNRI and is less than the desired performance for large SNRI, where the desired performance is based on a normal ROC.

The optimum performance can be improved, less SNRI for a given d', by increasing the period-to-period correlation coefficient (ρ), the sample-to-sample correlation coefficient (ρ_s), the number of samples in a period (P), the number of periods (K) observed, or any combination thereof. The CS signal is usually specified in detection problems. Consequently ρ, ρ_s, and P are fixed, and the only way to improve the performance is to increase the observation time by increasing K.

A suboptimum detector is derived in Chapter V. The suboptimum detector is presented in the belief that suboptimum detectors are generally easier to implement than optimum detectors. The suboptimum detector studied is the small SNRI form of the optimum detector.

The statistics, mean and variance, of the optimum and suboptimum decision variables, Z[Y] and Z_s[Y] respectively, under H_0 and H_1 are studied. The statistics of Z[Y] under H_0 are bounded with increasing SNRI, but the statistics of Z_s[Y] become unbounded under H_0. The statistics of Z[Y] and Z_s[Y] under H_1 are unbounded with increasing SNRI. Consequently the statistics of Z_s[Y] are unbounded under H_0 and H_1.

The performance of the suboptimum detector is approximated in the ROC evaluation region by a binormal ROC on the basis of the Central Limit Theorem and the optimum performance. The suboptimum performance, like the optimum performance, can only be improved by increasing ρ, ρ_s, P, K, or any combination thereof. The suboptimum performance is the same as the optimum performance for small SNRI but is less for larger SNRI. Simultaneously the slope of the suboptimum ROC is always less than the slope of the optimum ROC, though for small SNRI the suboptimum ROC is almost normal. The divergence in performance and slope between the optimum and suboptimum detectors is a result of the instability of the statistics of Z_s[Y] with increasing SNRI.

The suboptimum detector studied is an attractive alternative to the optimum detector. The suboptimum detector performs almost as well as the optimum detector and is easier to implement. However the instability of the statistics of Z_s[Y] tempers the attractiveness of the suboptimum detector.

6.2 Contributions

The sufficient statistic for making optimum detections via the likelihood ratio for a Gaussian signal in additive Gaussian noise is a quadratic form when the signal and noise autocorrelation matrices, R_S and R_N respectively, are known. A new interpretation for the quadratic form is presented which permits preserving the cyclic structure of the CS signal autocorrelation matrix. The quadratic form is

\[
Z[Y] = Y^{*}\Bigl[R_N^{-1} - \bigl[R_S + R_N\bigr]^{-1}\Bigr]Y .
\tag{3.1}
\]

The interpretation involves isolating R_S in two steps. The first step is applying the matrix identity

\[
R_N^{-1} - \bigl[R_S + R_N\bigr]^{-1} = R_N^{-1}\,R_S\,\bigl[R_S + R_N\bigr]^{-1}
\]

to partially isolate R_S in R_N^{-1} - [R_S + R_N]^{-1}. The second step is to complete the isolation of R_S by simultaneously diagonalizing R_S and R_N. This is a new approach to simultaneous diagonalization because the usual approach is to simultaneously diagonalize the observation autocorrelation matrices under H_0 and H_1, R_N and R_S + R_N respectively.
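The identity itself is elementary to confirm, algebraically or numerically; a minimal numerical check (Python with NumPy; the matrices are random positive definite stand-ins, not data from this study):

    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.standard_normal((5, 5))
    RS = G @ G.T                     # stand-in signal autocorrelation
    H = rng.standard_normal((5, 5))
    RN = H @ H.T + 5 * np.eye(5)     # stand-in noise autocorrelation

    lhs = np.linalg.inv(RN) - np.linalg.inv(RS + RN)
    rhs = np.linalg.inv(RN) @ RS @ np.linalg.inv(RS + RN)
    assert np.allclose(lhs, rhs)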

Detecting a stationary random process with an array of sensors is equivalent to detecting a CS process. The carrier matrix comes from the correlations between the time samples at any sensor. However the cyclic structure and the modulation matrix come from the spatial sampling introduced by the sensors. The array provides spatial information through the modulation matrix, which is entirely different from classical beamforming. The modulation matrix provides spatial information through the correlation properties between sensors, while classical beamforming provides spatial information by introducing time delays in the sensor outputs to form beams.

6.3 Suggestions for Future Work

The whitening filter, W, is a nonrealizable filter in that future inputs are required for present outputs. The detector expansion would be more attractive if the whitening filter were realizable. If W is diagonal, the whitening filter is realizable; and if W is almost diagonal, the whitening filter is almost realizable. Under what conditions does the whitening filter become realizable or almost realizable?

The suboptimum detector studied is the small SNRI form of the optimum detector. The suboptimum decision variable consists of the sum of the squares of independent Gaussian random variables. Since the whitening filter, W, does not add any energy to the output, the suboptimum detector would be more attractive if the whitening filter were absent, because then the simultaneous diagonalization problem need not be solved. However if the whitening filter is removed, the suboptimum decision variable becomes the sum of the squares of

dependent Gaussian random variables, and the performance then becomes harder to evaluate. Studying the suboptimum detector without the whitening filter is an interesting problem for future work.

More work is needed to truly understand the relationship between the modulation matrix and spatial information for an array. How is the space around an array observed through the modulation matrix? Is the concept of steering an array meaningless when applied to a modulation matrix?

APPENDIX A

SIMULTANEOUS DIAGONALIZATION

A theorem on simultaneous diagonalization will be stated, its proof (Refs. 11 and 27) paraphrased, and some properties resulting from the theorem derived. Before stating the theorem, some basic notation is defined.

a. A matrix, A, is Hermitian if

\[
A = A^{*}
\tag{A.1}
\]

where * denotes the complex conjugate of the transpose.

b. TR[A] is the trace of the matrix A.

c. The inner product of two column vectors, X and Y, is denoted (X, Y) and is defined as

\[
(X, Y) = X^{*}Y .
\tag{A.2}
\]

d. A Hermitian matrix, A, is positive definite if

\[
(X, AX) > 0
\tag{A.3}
\]

for all nonzero complex X. Since A is Hermitian, it should be noted that

\[
(X, AY) = (AX, Y) .
\tag{A.4}
\]

Theorem: If A and B are Hermitian matrices, and if A is positive definite, there exists a complex matrix, U, of eigenvectors and a real diagonal matrix, Λ, of eigenvalues such that

\[
U^{*}AU = I
\tag{A.5}
\]

and

\[
U^{*}BU = \Lambda .
\tag{A.6}
\]

Proof: Solve the following eigenvalue problem, where A and B are N x N dimensional matrices. Find the eigenvalues, λ_k, for which the equation

\[
\bigl[B - \lambda_k A\bigr]X_k = 0
\tag{A.7}
\]

has a nontrivial solution. It follows that λ_k is an eigenvalue if and only if the following determinant is zero:

\[
\bigl|B - \lambda_k A\bigr| = 0 .
\tag{A.8}
\]

Eq. A.8 is in general of the Nth degree, and there will be N values for λ. For every distinct value of λ_k there exists an eigenvector, X_k, an N dimensional column vector, satisfying Eq. A.7. It is assumed that there are N distinct eigenvalues, λ_k, and corresponding eigenvectors, X_k. Now from Eq. A.7

\[
(X_k, BX_k) = \lambda_k (X_k, AX_k)
\tag{A.9}
\]

and

\[
(BX_k, X_k) = \bar{\lambda}_k (AX_k, X_k) .
\tag{A.10}
\]

Since A and B are Hermitian, Eq. A.4,

\[
(X_k, BX_k) = (BX_k, X_k)
\tag{A.11}
\]

and

\[
(X_k, AX_k) = (AX_k, X_k) .
\tag{A.12}
\]

Therefore by Eqs. A.9 - A.12,

\[
\lambda_k = \bar{\lambda}_k ,
\tag{A.13}
\]

and the eigenvalues are real. Since A and B are Hermitian, and the eigenvalues are real, Eqs. A.4, A.7, and A.13,

\[
0 = (X_k, BX_m) - (BX_k, X_m)
  = \lambda_m (X_k, AX_m) - \lambda_k (AX_k, X_m)
  = (\lambda_m - \lambda_k)(X_k, AX_m) .
\tag{A.14}
\]

It then follows that for λ_m ≠ λ_k the corresponding eigenvectors, X_m and X_k, are orthogonal with respect to A:

\[
(X_k, AX_m) = 0 \quad \text{for } k \neq m .
\tag{A.15}
\]

Since A is positive definite, Eq. A.3,

\[
(X_k, AX_k) > 0 .
\tag{A.16}
\]

It then follows that the X_k can be normalized so that

\[
(X_k, AX_m) = \delta_{km}
\tag{A.17}
\]

and

\[
(X_k, BX_m) = \lambda_m (X_k, AX_m) = \lambda_m \delta_{km}
\tag{A.18}
\]

where δ_km is the Kronecker delta,

\[
\delta_{km} =
\begin{cases}
1, & k = m \\
0, & k \neq m .
\end{cases}
\tag{A.19}
\]

Now let U be the matrix whose columns are the normalized eigenvectors X_k, and Λ be the real diagonal matrix whose diagonal elements are the eigenvalues λ_k. It follows that the theorem is proved, Eqs. A.17 and A.18.
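In present-day numerical terms this is the generalized Hermitian eigenproblem, and the normalization of Eqs. A.17 and A.18 is exactly the convention of standard solvers. A minimal sketch (Python with NumPy and SciPy; the test matrices are arbitrary stand-ins):

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    G = rng.standard_normal((4, 4))
    A = G @ G.T + 4 * np.eye(4)      # Hermitian positive definite
    H = rng.standard_normal((4, 4))
    B = 0.5 * (H + H.T)              # Hermitian

    # eigh(B, A) solves [B - lambda_k A] X_k = 0 (Eq. A.7)
    lam, U = eigh(B, A)

    assert np.allclose(U.T @ A @ U, np.eye(4))      # Eq. A.5
    assert np.allclose(U.T @ B @ U, np.diag(lam))   # Eq. A.6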

Corollary 1 (C1): If B is positive definite,

\[
TR\bigl[\Lambda^{-1}\bigr] = TR\bigl[AB^{-1}\bigr] .
\tag{A.20}
\]

Proof: Since A is nonsingular, U has an inverse. Let

\[
U^{-1} = Q .
\tag{A.21}
\]

Then

\[
B = Q^{*}\Lambda Q ;
\tag{A.22}
\]

\[
B^{-1} = Q^{-1}\Lambda^{-1}Q^{*-1} = U\Lambda^{-1}U^{*} ;
\tag{A.23}
\]

and

\[
A = Q^{*}Q .
\tag{A.24}
\]

Then

\[
AB^{-1} = Q^{*}Q\,U\Lambda^{-1}U^{*} = Q^{*}\Lambda^{-1}U^{*} .
\tag{A.25}
\]

AB^{-1} has the elements

\[
\bigl[AB^{-1}\bigr]_{km} = \sum_{p=1}^{N} \bar{q}_{pk}\,\lambda_p^{-1}\,\bar{u}_{mp}
\qquad \text{for } k, m = 1, 2, \ldots, N
\tag{A.26}
\]

where q_{kp} and u_{pm} are the elements of Q and U respectively. Hence

\[
TR\bigl[AB^{-1}\bigr] = \sum_{k=1}^{N}\sum_{p=1}^{N} \bar{q}_{pk}\,\lambda_p^{-1}\,\bar{u}_{kp}
= \sum_{p=1}^{N} \lambda_p^{-1} \sum_{k=1}^{N} \bar{q}_{pk}\,\bar{u}_{kp} .
\tag{A.27}
\]

Now for any p = 1, 2, ..., N, Eq. A.21 implies

\[
\sum_{k=1}^{N} \bar{q}_{pk}\,\bar{u}_{kp} = 1 .
\tag{A.28}
\]

Therefore Eq. A.20 holds.

Corollary 2 (C2): There exists a unitary matrix W and a lower triangular matrix L such that

\[
A = L^{*}L
\tag{A.29}
\]

and

\[
WL = Q .
\tag{A.30}
\]

The existence of the lower triangular matrix L is proved by Guillemin (Ref. 12). The unitary matrix W has the following property:

\[
W^{*}W = WW^{*} = I .
\tag{A.31}
\]

Since A has an inverse, so does L. From Eq. A.30,

\[
W = QL^{-1} .
\tag{A.32}
\]

Multiply Eq. A.32 on the right by W^{*}:

\[
WW^{*} = QL^{-1}L^{-1*}Q^{*} = QA^{-1}Q^{*} = I
\tag{A.33}
\]

by Eq. A.29. Multiply Eq. A.32 on the left by W^{*}:

\[
W^{*}W = L^{-1*}Q^{*}QL^{-1} = L^{-1*}AL^{-1} = I
\tag{A.34}
\]

by Eq. A.29. Q.E.D.

APPENDIX B

REALIZABILITY

Realizability is a nebulous concept when applied to digital computers. Digital computers can easily delay inputs by using storage. However in many cases the storage is limited, and the concept of providing increasingly large delay breaks down. The concept of realizability really implies that future events do not affect the present response.

Let the matrix H be a filter with input X and output Y. Let H, Y, and X have the elements h_{lk}, y_l, and x_k respectively. Then

\[
y_l = \sum_{k=1}^{N} h_{lk}\,x_k .
\tag{B.1}
\]

H is said to be realizable if

\[
h_{lk} = 0 \quad \text{for } k > l .
\tag{B.2}
\]

Therefore, from Eq. B.2, H is a realizable matrix if H is a lower triangular matrix.
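Equivalently, a causal FIR filter written as a matrix multiplication is lower triangular. A small illustration (Python with NumPy; the impulse response is an arbitrary stand-in):

    import numpy as np

    h = np.array([1.0, 0.5, 0.25])   # arbitrary causal impulse response
    N = 6
    H = np.zeros((N, N))
    for l in range(N):
        for k in range(l + 1):       # only k <= l contributes (Eq. B.2)
            if l - k < len(h):
                H[l, k] = h[l - k]

    assert np.allclose(H, np.tril(H))   # realizable <=> lower triangular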

APPENDIX C

PARAMETERS OF THE BINORMAL ROC

The detectability index, d'_e, and SLOPE for a binormal ROC will be derived in terms of the first two moments of the decision variable, Z, under H_0 and H_1. Let

\[
Z \sim N(m_0, \sigma_0^2) \text{ under } H_0 , \qquad
Z \sim N(m_1, \sigma_1^2) \text{ under } H_1 .
\]

It then follows that, for a threshold β,

\[
P_D = \int_{\beta}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_1}
\exp\!\left[-\frac{(z - m_1)^2}{2\sigma_1^2}\right] dz
\tag{C.1}
\]

and

\[
P_{FA} = \int_{\beta}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_0}
\exp\!\left[-\frac{(z - m_0)^2}{2\sigma_0^2}\right] dz .
\tag{C.2}
\]

Equivalently,

\[
P_D = \Phi(X_1)
\tag{C.3}
\]

and

\[
P_{FA} = \Phi(X_0)
\tag{C.4}
\]

where

1. Φ(x) = (1/√(2π)) ∫_{-∞}^{x} e^{-t²/2} dt,
2. X_1 = (m_1 - β)/σ_1,
3. X_0 = (m_0 - β)/σ_0.

Since the binormal ROC is a straight line when plotted on normal-normal graph paper, it is of interest to solve for SLOPE and the intercept b in the equation

\[
X_1 = \text{SLOPE}\cdot X_0 + b .
\tag{C.5}
\]

The two ROC points used to solve for b and SLOPE in Eq. C.5 are the P_D = 0.5 and P_FA = 0.5 points. When P_D = 0.5,

\[
X_1 = 0 \quad\text{and}\quad X_0 = \frac{m_0 - m_1}{\sigma_0} .
\tag{C.6}
\]

When P_FA = 0.5,

\[
X_1 = \frac{m_1 - m_0}{\sigma_1} \quad\text{and}\quad X_0 = 0 .
\tag{C.7}
\]

Substitute Eqs. C.6 and C.7 into Eq. C.5:

\[
X_1 = \frac{\sigma_0}{\sigma_1}\,X_0 + \frac{m_1 - m_0}{\sigma_1} .
\tag{C.8}
\]

The detectability index d_0 at a given P_D and P_FA for a normal ROC is the intersection of the straight line of unity slope passing through the point (P_D, P_FA) and the negative diagonal. It is a measure of the distance between P_D and P_FA. That is,

\[
P_D = \Phi(X_0 + d_0) , \qquad P_{FA} = \Phi(X_0) ,
\]

and

\[
d_0 = X_1 - X_0 .
\tag{C.9}
\]

Substitute Eq. C.8 into Eq. C.9:

\[
d_0 = \frac{\sigma_0 - \sigma_1}{\sigma_1}\,X_0 + \frac{m_1 - m_0}{\sigma_1} .
\tag{C.10}
\]

d'_e is the detectability index at the intersection of the binormal ROC and the negative diagonal, where

\[
X_1 = -X_0 .
\tag{C.11}
\]

Substitute Eq. C.11 into Eq. C.10. It follows that

\[
d'_e = \frac{2(m_1 - m_0)}{\sigma_1 + \sigma_0} .
\tag{C.12}
\]

It also follows from Eqs. C.5 and C.8 that

\[
\text{SLOPE} = \sigma_0 / \sigma_1 .
\tag{C.13}
\]
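These relations are easily verified numerically by sweeping the threshold β. A minimal sketch (Python with NumPy; the moments are arbitrary example values):

    import numpy as np

    m0, s0, m1, s1 = 0.0, 1.0, 1.2, 1.5      # arbitrary example moments
    d_e = 2 * (m1 - m0) / (s1 + s0)          # Eq. C.12
    slope = s0 / s1                           # Eq. C.13

    beta = np.linspace(-3.0, 5.0, 9)          # threshold sweep
    X0 = (m0 - beta) / s0                     # normal deviate of P_FA
    X1 = (m1 - beta) / s1                     # normal deviate of P_D

    # the binormal ROC is the line X1 = SLOPE * X0 + (m1 - m0)/s1 (Eq. C.8)
    assert np.allclose(X1, slope * X0 + (m1 - m0) / s1)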

APPENDIX D

EIGENVALUES AND INPUT SIGNAL-TO-NOISE RATIOS

Let R_S and R_N be the signal and noise correlation matrices respectively. Simultaneously diagonalize R_S and R_N as (Appendix A)

\[
R_S = Q^{*}Q
\tag{D.1}
\]

and

\[
R_N = Q^{*}\Lambda Q
\tag{D.2}
\]

where

1. Q = {q_{nm}}, n, m = 1, 2, ..., N,
2. the diagonal elements of Λ are the eigenvalues λ_n, n = 1, 2, ..., N.

The input signal-to-noise ratio, SNRI, is

\[
\text{SNRI} = \frac{\tfrac{1}{KP}\,TR[R_S]}{\tfrac{1}{KP}\,TR[R_N]} = \frac{TR[R_S]}{TR[R_N]}
\tag{D.3}
\]

where TR[ ] is the trace of the matrix in the brackets.

It then follows from Eqs. D.1 and D.2 that

\[
TR[R_S] = \sum_{n,m=1}^{N} \bar{q}_{nm}\,q_{nm}
\tag{D.4}
\]

and

\[
TR[R_N] = \sum_{n,m=1}^{N} \bar{q}_{nm}\,\lambda_n\,q_{nm}
\tag{D.5}
\]

where the bar denotes the complex conjugate. Substitute Eqs. D.4 and D.5 into Eq. D.3:

\[
\text{SNRI} = \frac{\displaystyle\sum_{n,m=1}^{N} \bar{q}_{nm}\,q_{nm}}
                   {\displaystyle\sum_{n,m=1}^{N} \bar{q}_{nm}\,\lambda_n\,q_{nm}} .
\tag{D.6}
\]

The size of the eigenvector noise-to-signal ratio, λ_n, is inversely proportional to the size of SNRI. Consequently the size of the eigenvector signal-to-noise ratio, λ_n^{-1}, is directly proportional to the size of SNRI.
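A minimal numerical sketch of these quantities (Python with NumPy and SciPy; R_S and R_N are random positive definite stand-ins). The call eigh(R_N, R_S) solves [R_N - λR_S]X = 0, i.e., Appendix A with A = R_S and B = R_N, so the returned eigenvalues are the noise-to-signal ratios λ_n:

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(2)
    G = rng.standard_normal((6, 6))
    RS = G @ G.T + np.eye(6)         # stand-in signal correlation matrix
    F = rng.standard_normal((6, 6))
    RN = F @ F.T + 6 * np.eye(6)     # stand-in noise correlation matrix

    snri = np.trace(RS) / np.trace(RN)          # Eq. D.3
    lam = eigh(RN, RS, eigvals_only=True)        # noise-to-signal ratios
    eigen_snr = 1.0 / lam                        # eigenvector SNRs

    # scaling the signal by c scales SNRI and every eigenvector SNR by c
    c = 4.0
    assert np.isclose(np.trace(c * RS) / np.trace(RN), c * snri)
    lam_scaled = eigh(RN, c * RS, eigvals_only=True)
    assert np.allclose(np.sort(1.0 / lam_scaled), np.sort(c * eigen_snr))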

APPENDIX E

EXAMPLE OF FACTORIZING THE AUTOCORRELATION MATRIX OF A WSCS PROCESS

Consider sampling the WSCS process

\[
s(t) = x(t)\cos\omega_0 t
\]

where x(t) is a real zero-mean Wide Sense Stationary Gauss-Markov process with correlation function

\[
R_x(\tau) = R_x(0)\,e^{-\alpha|\tau|} .
\]

The frequency, ω_0 = 2πf_0, is assumed known. There are exactly P samples in a period. Let the sampling interval, T_s, be

\[
T_s = \frac{1}{P f_0} .
\]

Then

\[
s\bigl[(n-1)T_s\bigr] = x\bigl[(n-1)T_s\bigr]\cos\!\left[\frac{2\pi(n-1)}{P}\right]
\qquad \text{for } n = 1, 2, 3, \ldots
\]

Consider the correlation between s[(n-1)T_s] and s[(m-1)T_s]:

\[
r_s(n,m) = E\bigl\{ s[(n-1)T_s]\, s[(m-1)T_s] \bigr\}
= R_x\bigl[(m-n)T_s\bigr]\,\frac{1}{2}\left[\cos\frac{2\pi(m+n-2)}{P} + \cos\frac{2\pi(m-n)}{P}\right] .
\]

ρ_s is the normalized correlation coefficient between adjacent samples, ρ_s = e^{-αT_s}. Then

\[
r_s(n,m) = \frac{R_x(0)\,\rho_s^{|m-n|}}{2}\left[\cos\frac{2\pi(m+n-2)}{P} + \cos\frac{2\pi(m-n)}{P}\right] .
\]

Assume that P = 2 and K = 3, i.e., there are three periods in the observation and two samples in a period. Also assume R_x(0) = 1. The autocorrelation matrix is

\[
R_S = \{ r_s(n,m) \} \qquad \text{for } n, m = 1, 2, 3, 4, 5, 6 .
\]

For this case, R_S has the structure of the autocorrelation matrix of a Wide Sense Stationary random process.

\[
R_S =
\begin{bmatrix}
1 & -\rho_s & \rho_s^2 & -\rho_s^3 & \rho_s^4 & -\rho_s^5 \\
-\rho_s & 1 & -\rho_s & \rho_s^2 & -\rho_s^3 & \rho_s^4 \\
\rho_s^2 & -\rho_s & 1 & -\rho_s & \rho_s^2 & -\rho_s^3 \\
-\rho_s^3 & \rho_s^2 & -\rho_s & 1 & -\rho_s & \rho_s^2 \\
\rho_s^4 & -\rho_s^3 & \rho_s^2 & -\rho_s & 1 & -\rho_s \\
-\rho_s^5 & \rho_s^4 & -\rho_s^3 & \rho_s^2 & -\rho_s & 1
\end{bmatrix}
\]

From this we get the submatrices

\[
A =
\begin{bmatrix}
1 & -\rho_s \\
-\rho_s & 1
\end{bmatrix},
\qquad
A_{12} =
\begin{bmatrix}
\rho_s^2 & -\rho_s^3 \\
-\rho_s & \rho_s^2
\end{bmatrix},
\qquad
A_{13} =
\begin{bmatrix}
\rho_s^4 & -\rho_s^5 \\
-\rho_s^3 & \rho_s^4
\end{bmatrix}.
\]

In this case it is evident that

\[
A_{13} = \rho_s^2\,A_{12} .
\]

Hence

\[
R_S =
\begin{bmatrix}
A & A_{12} & \rho_s^2 A_{12} \\
A_{21} & A & A_{12} \\
\rho_s^2 A_{21} & A_{21} & A
\end{bmatrix}
\qquad\text{with } A_{21} = A_{12}^{*} .
\]

The modulation matrix, M, becomes

\[
M =
\begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}
\qquad\text{and}\qquad
M^{-1} =
\begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}.
\]

The T_nm are defined as

\[
T_{nm} = M^{-1*}\,A_{nm}\,M^{-1} .
\]

Then

\[
T_{11} = M^{-1*} A\, M^{-1} =
\begin{bmatrix}
1 & \rho_s \\
\rho_s & 1
\end{bmatrix}.
\]

\[
T_{12} = M^{-1*} A_{12}\, M^{-1} =
\begin{bmatrix}
\rho_s^2 & \rho_s^3 \\
\rho_s & \rho_s^2
\end{bmatrix},
\qquad
T_{13} = \rho_s^2\, T_{12} .
\]

Similarly

\[
T_{21} =
\begin{bmatrix}
\rho_s^2 & \rho_s \\
\rho_s^3 & \rho_s^2
\end{bmatrix}
\qquad\text{and}\qquad
T_{31} = \rho_s^2\, T_{21} .
\]

Consequently the full modulation matrix is

\[
M_3 =
\begin{bmatrix}
M & 0 & 0 \\
0 & M & 0 \\
0 & 0 & M
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & -1
\end{bmatrix},
\qquad\text{so that } R_S = M_3^{*}\,T\,M_3 .
\]

and

\[
T =
\begin{bmatrix}
T_{11} & T_{12} & T_{13} \\
T_{21} & T_{11} & T_{12} \\
T_{31} & T_{21} & T_{11}
\end{bmatrix}
=
\begin{bmatrix}
1 & \rho_s & \rho_s^2 & \rho_s^3 & \rho_s^4 & \rho_s^5 \\
\rho_s & 1 & \rho_s & \rho_s^2 & \rho_s^3 & \rho_s^4 \\
\rho_s^2 & \rho_s & 1 & \rho_s & \rho_s^2 & \rho_s^3 \\
\rho_s^3 & \rho_s^2 & \rho_s & 1 & \rho_s & \rho_s^2 \\
\rho_s^4 & \rho_s^3 & \rho_s^2 & \rho_s & 1 & \rho_s \\
\rho_s^5 & \rho_s^4 & \rho_s^3 & \rho_s^2 & \rho_s & 1
\end{bmatrix},
\]

which is the autocorrelation matrix of the samples of the stationary process x(t). From the factorization T = C^{*}C we get the carrier matrix C, a block triangular factor of T whose nonzero entries are built from ρ_s and √(1-ρ_s²).

Now

\[
C_{11} =
\begin{bmatrix}
1 & 0 \\
\rho_s & \sqrt{1-\rho_s^2}
\end{bmatrix},
\qquad
C_{12} = C_{13} = C_{23} = 0 ,
\]

and the remaining blocks C_21, C_22, C_31, C_32, and C_33 are determined from the corresponding T_nm through T = C^{*}C.
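The P = 2 example is easy to check numerically. The sketch below (Python with NumPy; ρ_s = 0.6 is an arbitrary example value) builds R_S from the general expression for r_s(n,m) and confirms both the (-1)^{n+m} ρ_s^{|m-n|} form and the factorization R_S = M_3^{*} T M_3:

    import numpy as np

    P, K, rho_s = 2, 3, 0.6          # rho_s is an arbitrary example value
    N = K * P
    idx = np.arange(1, N + 1)

    # r_s(n, m) with R_x(0) = 1
    RS = np.array([[rho_s ** abs(m - n) / 2.0
                    * (np.cos(2 * np.pi * (m + n - 2) / P)
                       + np.cos(2 * np.pi * (m - n) / P))
                    for m in idx] for n in idx])

    # for P = 2 this reduces to (-1)^{n+m} rho_s^{|m-n|}
    RS_alt = np.array([[(-1.0) ** (m + n) * rho_s ** abs(m - n)
                        for m in idx] for n in idx])
    assert np.allclose(RS, RS_alt)

    # factorization R_S = M3* T M3 with T the stationary x correlation
    T = np.array([[rho_s ** abs(m - n) for m in idx] for n in idx])
    M3 = np.diag([(-1.0) ** (n - 1) for n in idx])
    assert np.allclose(RS, M3.T @ T @ M3)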

REFERENCES

1. Bello, P., "Some Results on the Problem of Discriminating between Two Gaussian Processes," IRE Trans. Information Theory, 1961, Vol. IT-7, No. 4, pp. 224-233.
2. Birdsall, T. G., The Theory of Signal Detectability: ROC Curves and Their Character, Ph.D. dissertation at The University of Michigan, Ann Arbor, Michigan, 1966.
3. Bryn, F., "Optimum Signal Processing of Three-Dimensional Arrays Operating on Gaussian Signals and Noise," JASA, 1962, Vol. 34, No. 3, pp. 289-297.
4. Cooley, J. W., Lewis, P. A. W., and Welch, P. D., "Application of the Fast Fourier Transform to Computation of Fourier Integrals, Fourier Series, and Convolution Integrals," IEEE Trans. Audio and Electroacoustics, 1967, Vol. AU-15, No. 2, pp. 79-84.
5. Cooley, J. W., Lewis, P. A. W., and Welch, P. D., "The Fast Fourier Transform Algorithm: Programming Considerations in the Calculation of Sine, Cosine, and Laplace Transforms," J. Sound Vib., 1970, Vol. 12, No. 3, pp. 315-337.
6. Chang, J. H. and Tuteur, F. B., "A New Class of Adaptive Processors," JASA, 1971, Vol. 49, No. 3, pp. 639-649.
7. Cramér, H., Mathematical Methods of Statistics, Princeton, Princeton University Press, 1951, Chap. 24.
8. Davis, R. C., "The Detectability of Random Signals in the Presence of Noise," IRE Trans. Information Theory, 1957, Vol. IT-3, pp. 52-62.
9. Deutch, R., "Detection of Modulated Noise-Like Signals," Trans. IRE Professional Group on Information Theory, 1954, Vol. PGIT-3, pp. 106-122.
10. Doob, J. L., Stochastic Processes, New York, John Wiley and Sons, Inc., 1953, Chap. 10, Art. 3.
11. Friedman, B., Principles and Techniques of Applied Mathematics, New York, John Wiley and Sons, Inc., 1956, pp. 107-109.

REFERENCES (Cont.)

12. Guillemin, E. A., The Mathematics of Circuit Analysis, New York, John Wiley and Sons, Inc., 1944, Chap. 4, Art. 9.
13. Hariharan, P. R., "Detection of Modulated Gaussian Signals in Noise," IEEE Trans. Commun. Technol., 1972, Vol. COM-20, No. 1, pp. 28-37.
14. Hurd, H. L., An Investigation of Periodically Correlated Stochastic Processes, Ph.D. dissertation at Duke University, Durham, N.C., 1970.
15. Johnson, N. L., and Kotz, S., Distributions in Statistics, New York, Houghton Mifflin, 1969.
16. Kailath, T., "A General Likelihood-Ratio Formula for Random Signals in Noise," IEEE Trans. Information Theory, 1969, Vol. IT-15, No. 3, pp. 350-361.
17. Kailath, T., "Correlation Detection of Signals Perturbed by a Random Channel," IRE Trans. Information Theory, 1960, Vol. IT-6, No. 3, pp. 361-366.
18. Kincaid, T. G., The Adaptive Detection and Estimation of Nearby Periodic Signals, General Electric Co., Research and Development Center, Schenectady, N.Y., Technical Report S-69-1139, 1969.
19. Middleton, D., "On the Detection of Stochastic Signals in Additive Normal Noise, I," IRE Trans. Information Theory, 1957, Vol. IT-3, pp. 86-121.
20. Middleton, D., "On the Detection of Stochastic Signals in Additive Normal Noise, II," IRE Trans. Information Theory, 1960, Vol. IT-6, pp. 349-360.
21. Middleton, D. and Groginsky, H. L., "Detection of Random Acoustic Signals by Receivers with Distributed Elements: Optimum Receiver Structures for Normal Signal and Noise Fields," JASA, 1965, Vol. 38, No. 5, pp. 727-737.

REFERENCES (Cont.)

22. Parzen, E. and Shirer, N., "Analysis of a General System for the Detection of Amplitude-Modulated Noise," JASA, 1965, Vol. 35, pp. 278-288.
23. Peterson, W. W., Birdsall, T. G., and Fox, W. C., "The Theory of Signal Detectability," Trans. IRE Professional Group on Information Theory, 1954, PGIT-4, pp. 171-211. Also Peterson, W. W., Birdsall, T. G., The Theory of Signal Detectability, Electronic Defense Group Technical Report No. 13, Electronic Defense Group, The University of Michigan, Ann Arbor, Michigan, 1953.
24. Ristenbatt, M. P., Investigations of Narrowband Waveforms Generated by Clocked Pulses, Cooley Electronics Laboratory Technical Report No. 112, Cooley Electronics Laboratory, The University of Michigan, Ann Arbor, Michigan, 1960.
25. Schwartz, S. C., "A Series Technique for the Optimum Detection of Stochastic Signals in Noise," IEEE Trans. Information Theory, 1969, Vol. IT-15, No. 3, pp. 362-370.
26. Slepian, D., "Some Comments on the Detection of Gaussian Signals in Gaussian Noise," IRE Trans. Information Theory, 1958, Vol. IT-4, pp. 65-68.
27. Stoll, R. R., Linear Algebra and Matrix Theory, New York, McGraw-Hill Book Company, Inc., 1952, pp. 259-260.
28. Thomas, J. B. and Williams, T. R., "On the Detection of Signals in Nonstationary Noise by Product Arrays," JASA, 1959, Vol. 31, No. 4, pp. 453-462.
29. Usher, Jr., T., "Signal Detection by Arrays in Noise Fields with Local Variations," JASA, 1964, Vol. 36, No. 8, pp. 1444-1449.
30. Usher, Jr., T., "Signal Detection by Arrays with Arbitrary Processes and Detectors," JASA, 1966, Vol. 39, No. 1, pp. 79-86.
31. Van Trees, H. L., Detection, Estimation, and Modulation Theory, Part III, New York, John Wiley and Sons, Inc., 1971.

