Technical Report No. 160
03674-7-T

COMPUTER TECHNIQUES FOR THE EVALUATION OF DETECTOR PERFORMANCE

by
Joseph N. Gittelman

Approved by: B. F. Barton

for

COOLEY ELECTRONICS LABORATORY
Department of Electrical Engineering
The University of Michigan
Ann Arbor, Michigan

Contract No. Nonr-1224(36)
Office of Naval Research
Department of the Navy
Washington 25, D. C.

May 1965

ACKNOWLEDGMENT

The author wishes to thank Mr. T. G. Birdsall, program leader of the project under which this study was made, for his patience and guidance. The author is also indebted to Mr. K. Metzger who translated the author's ideas on evaluation techniques into practical, working computer algorithms.

TABLE OF CONTENTS

ACKNOWLEDGMENT
LIST OF ILLUSTRATIONS
ABSTRACT

CHAPTER I: INTRODUCTION TO THE EVALUATION PROBLEM
1.1 Introduction
1.2 Observation Space Y
1.3 Noise Space N and Signal Space X
1.4 Observation, Detection, and Evaluation Procedure
1.5 Evaluation

CHAPTER II: A CLASSICAL APPROACH
2.1 Introduction
2.2 Moments
2.3 Characteristic Function
2.4 Cumulants or Semi-Invariants
2.5 Central Limit Theorem
2.6 Classic Series
2.6.1 Gram-Charlier A Series
2.6.2 Edgeworth Series
2.7 Application of Hermite Polynomial Expansion
2.8 Application of Edgeworth Series

CHAPTER III: THE PEARSON SYSTEM
3.1 Introduction
3.2 Approach I
3.3 Approach II
3.4 The Pearson System
3.5 Errors and Rationale

CHAPTER IV: RECEIVER EVALUATION
4.1 Introduction
4.2 General Receiver
4.3 Examples of the General Receiver
4.3.1 Correlator
4.3.2 Square-Law Detector
4.3.3 Clipper Crosscorrelator
4.4 Log-Likelihood Ratio Receiver
4.5 Postintegration Optimum Processing
4.6 Receiver Evaluation from Raw Data
4.7 Computational Aids
4.8 Computer Evaluation

CHAPTER V: RECEIVER-EVALUATION COMPUTER LIBRARY
5.1 Moment-Conversion Subroutine (ALTOMU)
5.2 Edgeworth Subroutine (EDGE)
5.3 Correlator Program
5.4 Gaussian Quadrature Subroutine (GAUINT)
5.5 CAL-CHAN-FUN Subroutine
5.6 Type A Subroutine (TYPEA)
5.7 Type B Subroutine (TYPEB)
5.8 SNINT Subroutine (SNINT)
5.9 NINT Subroutine (NINT)
5.10 Logarithm of Gamma Function Subroutine (LNGAM)
5.11 Pearson Subroutines (PEARSN, PEARS2)

SUMMARY

APPENDIX A: PEARSON'S EQUATION COEFFICIENTS AND RANDOM VARIABLE MOMENTS
APPENDIX B: PEARSON'S CRITERION FOR FREQUENCY CURVES
APPENDIX C: PEARSON'S MAIN TYPES OF FREQUENCY CURVES
APPENDIX D: PEARSON'S SUBTYPES OF FREQUENCY CURVES
APPENDIX E: SOME PROPERTIES OF THE LIKELIHOOD-RATIO TRANSFORMATION

REFERENCES
DISTRIBUTION LIST

LIST OF ILLUSTRATIONS

Figure 1. Transmitter-receiver complex
Figure 2. Three normal ROC curves on linear paper with parameter d
Figure 3. Approximation to a log-likelihood ratio receiver
Figure 4. Symmetric point binomial polygon
Figure 5. A unimodal density function
Figure 6. General receiver block diagram
Figure 7. Clipper crosscorrelator nonlinearity
Figure 8. Optimum post-integration processor
Figure 9. Log-likelihood ratio receiver nonlinearity
Figure 10. Moment-conversion subroutine (ALTOMU)
Figure 11. Edgeworth subroutine (EDGE)
Figure 12. Correlator receiver program
Figure 13. Gaussian quadrature subroutine (GAUINT)
Figure 14. CAL-CHAN-FUN subroutine
Figure 15. TYPEA subroutine
Figure 16. TYPEB subroutine
Figure 17. Signal-plus-noise integration subroutine (SNINT)
Figure 18. Noise integration subroutine (NINT)
Figure 19. Log gamma subroutine (LNGAM)
Figure 20. Pearson subroutines (PEARSN, PEARS2)


ABSTRACT

In many situations, the theory of signal detectability lacks a connection between the theoretical aspects and the practical implementation of a detector. This condition exists because there are no general techniques for evaluating the detector performance. This report contains a collection of available techniques which have been adapted for the computer evaluation of a large class of detectors. In addition to the classical approximations, the Pearson system of frequency curves is integrated into the computer programs. The Pearson system yields a closed-form approximation to the detector performance, based on the moments of the signal and noise distribution functions. Such closed-form approximations enable the user to evaluate the effects of signal-to-noise ratio, detector nonlinearities, and filter bandwidth.

CHAPTER I

INTRODUCTION TO THE EVALUATION PROBLEM

1.1 Introduction

This report is concerned with several techniques feasible for the computer evaluation of receivers which are to detect signals in the presence of additive noise. The general problem studied is the evaluation of a fixed-time detection receiver which tests a sample function of a stochastic process and decides whether the sample function is noise alone or contains a signal component as well as noise. A block diagram of the receiver complex is shown in Fig. 1. The symbol N represents the null hypothesis that the received sample function y(t) is noise alone, while SN represents the hypothesis that y(t) is a sample function of signal plus noise:

    N:  y(t) = n(t),
    SN: y(t) = n(t) + x(t).        (1.1)

The symbol x(t) denotes a sample function of a stationary stochastic process, X, which describes the ensemble of transmitted signals, while n(t) denotes a sample function from a stationary random process, N, describing the noise ensemble. Moreover, it will always be assumed that the random processes X and N are statistically independent processes.

Although X is the ensemble of transmitted signals, the set of signals may contain only one known element. This one-element set gives rise to "signal-known-exactly" (SKE) detection problems. Another class of detection problems arises when the signal is known exactly, but the characteristics of the signal are randomly disturbed by the channel. The channel's influence on the signal can be considered as some mapping from the signal space to a channel space which has a stochastic description. In this event, the space X will represent the ensemble of channel-disturbed signals instead of the transmitted signals. This provision for a dual role of the transmission space, X, allows a unified treatment of random or non-random signal detection and evaluation problems.

[Fig. 1. Transmitter-receiver complex. Block diagram showing the Transmitter, the Transmission Space (Signal Space), the Observation Space, the Receiver, the Processed Space, and the decision output.]

The function of the receiver is to perform a statistical test on the received sample function y(t) and to make a decision (presence or absence of signal) based on this test in such a manner that some index of performance is optimized. The selected index determines the statistical test the receiver must perform, and thus, the general nature of the receiver.

1.2 Observation Space Y

It is assumed, in this report, that Y is the set of integrable functions defined over the interval of time, [0, T], with the property that any sample function y(t) has a Fourier series expansion with vanishingly small coefficients outside a frequency band of width W. One can then choose 2WT time samples of y(t) such that

    y(t) = Σ_{k=1}^{2WT} y(t_k) φ_k(t),        (1.2)

where the functions φ_k(t), k = 1, 2, ..., 2WT, form a complete orthonormal set on the interval [0, T] with respect to the elements of Y (Ref. 1). Moreover, the coefficients of the expansion are assumed to be statistically independent. Any element of Y can be considered as a point in a 2WT-dimensional Euclidean space, with coordinates equal to the sampled values of y(t). Hereafter, the vector of sampled values will be denoted by

    y = (y_1, y_2, ..., y_{2WT})^T.        (1.3)

1.3 Noise Space N and Signal Space X

The spaces N and X are subspaces of the observation space Y. Hence, every element in N will be denoted by a vector

    n = (n_1, n_2, ..., n_{2WT})^T,        (1.4)

and associated with N is a probability density function (p.d.f.) denoted by f_N(n). Similarly, the elements of X are denoted by

    x = (x_1, x_2, ..., x_{2WT})^T,        (1.5)

with associated p.d.f., f_X(x). Moreover, it follows that the space Y is the union of the space of the vector sum X + N and the space N, and Y has associated with it the conditional density functions f_Y(y|SN) or f_Y(y|N), depending on the hypothesis under consideration.

1.4 Observation, Detection, and Evaluation Procedure

An idealized model will be assumed for the detection problem. The observer knows a priori that either noise or signal plus noise is present for the entire time interval [0, T]. At the end of the time interval, the observer decides which condition he thinks actually existed. Decision theory (Refs. 1, 2) is largely concerned with the choice of a receiver and a decision device which will process the observation, y, to optimize the performance index of the decision. A large class of performance criteria leads the observer to use a non-random decision device which partitions the observation space by using the likelihood ratio of the observation y.¹ For simple-hypothesis testing, the likelihood ratio satisfies performance criteria such as:

(1) Minimum average risk (Bayes' class),
(2) Maximum a posteriori probability,
(3) Neyman-Pearson criteria,
(4) Ideal observer.

The likelihood ratio is defined as²

    ℓ(y) = f_Y(y|SN) / f_Y(y|N),        (1.6)

where y is the 2WT observation vector,

    y = (y_1, y_2, ..., y_{2WT})^T,        (1.7)

¹Birdsall (Ref. 3) has shown that a likelihood ratio receiver should be used under the very general condition where the observer prefers correct decisions over incorrect decisions.

and f_Y(y|·) is the conditional probability density of the event y given the conditioning event. The likelihood ratio is, then, a mapping of the 2WT observation space Y into a one-dimensional space where a randomized decision function is applied. This decision function partitions, according to costs and the a priori probabilities, the one-dimensional space into regions corresponding to either the decision N or SN.

One might then drop the problem with the comment, "Well, that's the best one can do." However, the question remains, "How good is that?" Indeed, Birdsall and Nolte (Ref. 4) have shown that the design of the likelihood receiver (one which performs the likelihood transformation) is not easy. The required hardware is expensive, and the receiver performance may well be sensitive to slight changes introduced by hardware aging or partial ignorance of the channel or signal space statistics. Another receiver, nonoptimum in the sense of the listed performance criteria, may be cheaper to build and, in the long run, more reliable to operate. Does one gain that much by using the likelihood receiver? The problem is complicated further when the observer insists on meeting certain detection requirements. A properly interpreted evaluation of the receiver may tell the interested party working in the assumed environment that he is unable to keep his losses below a certain point unless, for example, the transmitted signal energy is increased.

The evaluation of detection receivers is portrayed graphically by curves of the receiver operating characteristic (ROC). These curves display the detection versus false-alarm probability with the assumed environment as a parameter. Figure 2 illustrates the ROC curves for the signal-known-exactly detection problems. Peterson, et al. (Ref. 1) and Birdsall and Ristenbatt (Ref. 5) have demonstrated the significance and use of ROC curves³ as criteria for receiver performance in certain well-known detection problems.
²Note that this definition of the likelihood ratio differs from the definition of the generalized likelihood ratio given by Middleton (Ref. 2). The generalized likelihood ratio of Middleton scales the likelihood ratio by the ratio of the a priori probabilities of the hypotheses SN and N.

³There are several other graphical representations of the receiver evaluation, such as "betting curves" or the "operating characteristics," but these portrayals can be obtained from the ROC curves.
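As a small numeric sketch (ours, not part of the original report) of the normal ROC curves with parameter d discussed above: under the usual signal-known-exactly model the receiver output is taken to be N(0, 1) under hypothesis N and N(√d, 1) under SN, so that sweeping a threshold traces out one ROC curve per value of d. The function names below are our own.

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def roc_point(eta, d):
    """(false-alarm, detection) probabilities at threshold eta, assuming the
    output is N(0,1) under N and N(sqrt(d),1) under SN."""
    return 1 - Phi(eta), 1 - Phi(eta - sqrt(d))

# At a fixed threshold the false-alarm rate is the same for every d, while
# the detection probability grows with d: the curve with d = 4 lies above
# the curve with d = 1, i.e. nearer the upper-left corner of the ROC square.
pf1, pd1 = roc_point(1.0, 1.0)
pf4, pd4 = roc_point(1.0, 4.0)
assert abs(pf1 - pf4) < 1e-12 and pd4 > pd1
```

Sweeping eta from large positive to large negative values traces the full curve from (0, 0) to (1, 1).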

[Fig. 2. Three normal ROC curves on linear paper with parameter d: curves for d = 0, 1, and 4, plotted as detection probability versus false-alarm probability on linear axes from 0 to 1.]

The evaluation seems to be straightforward. For the general receiver of Fig. 1, one seeks the distribution functions of the receiver output under the two hypotheses,

    f_z(z|N),        (1.8a)
    f_z(z|SN),       (1.8b)

where z = ℓ(y) for the likelihood receiver.

1.5 Evaluation

Two major difficulties are encountered when an analytic theory is applied to the problem of detection performance in a practical case.

The first problem is the description of the receiver in a form useful for performance evaluation. That is, a form of likelihood-ratio mapping must be deduced which is useful and suitable for further computation. Since the likelihood ratio is defined

to be

    ℓ(y) = f_Y(y|SN) / f_Y(y|N)        (1.9)

and the noise is considered to be additive, then

    ℓ(y) = [∫_X f_N(y − x) f_X(x) dx] / f_Y(y|N).        (1.10)

Except for a few singular cases, the integral of Eq. 1.10 cannot be evaluated in terms of elementary functions.

Second, a given receiver's performance, both in the environment for which it was designed and in some other environment, must be evaluated; that is, the distribution functions of the receiver output must be determined under the two hypotheses. An examination of the extensive body of literature concerning the distributions of the outputs of nonlinear devices should easily convince the reader of the potential problems faced in determining the distributions of the receiver output.

These two problems become increasingly difficult when one realizes that, in the practical situation, one cannot start out with a "neat" closed-form expression for the Signal Space or Noise Space distributions. Instead, provisions should be made in characterizing these distributions to permit successful performance using raw data obtained from channel experiments.

As has already been discussed in Section 1.4, evaluation is an integral part of the practical application of detection theory. In fact, evaluation is the link between theory and practice. This report will introduce several techniques in an attempt to forge this link.

A CLASSICAL APPROACH

2.1 Introduction

Before proceeding, we introduce the notation used throughout this report. A subsequent part of this chapter deals with some elementary statistical ideas, while the latter part of the chapter introduces some new views on the design of a likelihood-ratio receiver.

2.2 Moments

(a) Given a function g(x) of a random variable x, where x has a p.d.f. denoted by f_x(s), the k-th moment of g is defined by:

    α_k(g) = ∫_X [g(s)]^k f_x(s) ds.        (2.1)

In particular, for g(x) = x, the first moment of g,

    α_1(g(x)) = α_1(x),        (2.2)

is the mean of the random variable x.

(b) Given a function g(x) of a random variable x, where x has a p.d.f., f_x(s), the k-th central moment of g is defined by

    μ_k(g) = ∫_X [g(s) − α_1(g)]^k f_x(s) ds.        (2.3)

It follows from the linearity property of integrals that

    μ_1(g) = 0.        (2.4)

The second central moment of g is called the variance of g and is denoted by the symbol σ²(g), where

    σ²(g) ≜ μ_2(g) = α_2(g) − α_1²(g).        (2.5)
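As a concrete sketch (ours, not part of the original report) of Eqs. 2.1-2.5 with g(x) = x: for a discrete random variable the defining integrals reduce to sums over the probability mass function. A fair die serves as the example distribution; the function names are our own.

```python
def moment(pmf, k):
    """k-th moment alpha_k (Eq. 2.1 with g(x) = x), as a sum over a p.m.f."""
    return sum(p * s**k for s, p in pmf.items())

def central_moment(pmf, k):
    """k-th central moment mu_k (Eq. 2.3)."""
    m = moment(pmf, 1)
    return sum(p * (s - m)**k for s, p in pmf.items())

die = {s: 1 / 6 for s in range(1, 7)}   # fair six-sided die

# Eq. 2.4: the first central moment vanishes.
assert abs(central_moment(die, 1)) < 1e-12

# Eq. 2.5: sigma^2 = alpha_2 - alpha_1^2.
var = central_moment(die, 2)
assert abs(var - (moment(die, 2) - moment(die, 1)**2)) < 1e-12
```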

2.3 Characteristic Function

The characteristic function of a random variable x is defined by

    M_x(it) ≜ ∫_X e^{its} f_x(s) ds.        (2.6)

M_x(it) is the Fourier transform of the p.d.f. of the random variable x. Since, by definition,

    f_x(s) ≥ 0 for s ∈ X,        (2.7)

    ∫_X f_x(s) ds = 1,        (2.8)

and since the Fourier transform of an absolutely integrable function exists (Ref. 6), the characteristic function always exists. The inverse transform of the characteristic function is then the p.d.f. of the random variable x; that is,

    f_x(s) = (1/2π) ∫_{−∞}^{∞} M_x(it) e^{−its} dt.        (2.9)

Given a function g(x) of a random variable x, the characteristic function of g is

    M_g(it) = ∫_X e^{itg(s)} f_x(s) ds,        (2.10)

so that the p.d.f. of g can be obtained from the characteristic function,

    f_g(s) = (1/2π) ∫_{−∞}^{∞} e^{−its} M_g(it) dt.        (2.11)

If the integrand of Eq. 2.10 is expanded in a Maclaurin series, and the integration is performed term-by-term, then

    M_g(it) = 1 + α_1(g)(it) + ... + α_n(g)(it)^n/n! + ...        (2.12)

Whenever the p.d.f. of g has moments of all orders less than n, the characteristic function can be expanded in a Maclaurin series with a remainder of order n. Moreover, if a density function has moments of all orders as n approaches infinity, and if Eq. 2.11 converges for some t > 0, then the sequence of moments {α_k(g)} defines a unique class of density functions

which can differ at most by a set of points of measure zero (see Ref. 7).

2.4 Cumulants or Semi-Invariants

The cumulant function of a random variable x is defined by

    C_x(it) = ln M_x(it).        (2.13)

Using the Maclaurin expansion for ln(1 + Z), where

    Z = Σ_{k=1}^{∞} α_k(x)(it)^k/k! = M_x(it) − 1,        (2.14)

then

    C_x(it) = Σ_{v=1}^{k} K_v(x)(it)^v/v! + O(t^k),        (2.15)

where K_v(x) is defined as the v-th cumulant of x. Formally, one can obtain the relation between K_v and α_1, ..., α_v by equating the coefficients of Eq. 2.12 to the coefficients in the expansion

    exp C_x(it) = exp [Σ_v K_v(x)(it)^v/v!].        (2.16)

For example:

    K_1(x) = α_1(x),
    K_2(x) = μ_2(x),
    K_3(x) = μ_3(x),
    K_4(x) = μ_4(x) − 3μ_2²(x).        (2.17)

The use of the characteristic and cumulant functions facilitates the calculation of moments for the sum of N independent random variables. For example, letting

    y = Σ_{j=1}^{N} x_j,  x_j independent,        (2.18)

then

    M_y(it) = ∫_{X_1} ... ∫_{X_N} e^{it(s_1 + ... + s_N)} f_{x_1 ... x_N}(s_1, s_2, ..., s_N) ds_1 ds_2 ... ds_N.        (2.19)

Since the x_j are independent,

    f_{x_1 ... x_N}(s_1, ..., s_N) = Π_{j=1}^{N} f_{x_j}(s_j),        (2.20)

the characteristic function of y is given by

    M_y(it) = Π_{j=1}^{N} M_{x_j}(it).        (2.21)

The cumulant function of y is given by

    C_y(it) = ln M_y(it) = Σ_{j=1}^{N} C_{x_j}(it).        (2.22)

Therefore, the v-th cumulant of y equals the sum of the v-th cumulants of the x_j (j = 1, ..., N),

    K_v(y) = Σ_{j=1}^{N} K_v(x_j).        (2.23)

In particular, if the x_j are drawn from the same distribution,

    K_v(y) = N K_v(x_1).        (2.24)

Thus, if the moments of x_1 are known, one can obtain the moments of y using the addition property of the cumulant function as the intermediate step in the calculation.
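The additive machinery of this section can be checked numerically (our sketch, not part of the original report): N independent Bernoulli(p) trials sum to a binomial variable, so Eq. 2.21 (characteristic functions multiply) and Eq. 2.24 (cumulants add) can both be verified directly. The function names and the Bernoulli example are our own.

```python
from cmath import exp as cexp
from math import comb, isclose

def char_fn(pmf, t):
    """Characteristic function M_x(it) of a discrete variable (Eq. 2.6 as a sum)."""
    return sum(p * cexp(1j * t * s) for s, p in pmf.items())

def cumulants(pmf):
    """First four cumulants obtained from the moments via Eq. 2.17."""
    m1 = sum(p * s for s, p in pmf.items())
    mu = lambda k: sum(p * (s - m1)**k for s, p in pmf.items())
    return (m1, mu(2), mu(3), mu(4) - 3 * mu(2)**2)

p, N = 0.3, 10
bern = {0: 1 - p, 1: p}
binom = {k: comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)}

# Eq. 2.21: the characteristic function of the sum is the N-th power.
for t in (0.3, 1.0, 2.5):
    assert abs(char_fn(binom, t) - char_fn(bern, t)**N) < 1e-12

# Eq. 2.24: each cumulant of the sum is N times that of a single trial.
for ky, kx in zip(cumulants(binom), cumulants(bern)):
    assert isclose(ky, N * kx, abs_tol=1e-10)
```

Note that no such additive property holds for the central moments themselves beyond the third; this is exactly why the cumulants serve as the intermediate step.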

Since the x. are independent, N'fX ( ~s S21.SN) j =. fxj(sj), (2.20) sj= the characteristic function of y is given by N M (it) = n M (it). (2. 21) j=1 I The cumulant function of y is given by N Cy(it) = En My(it) = Z Cx (it). (2.22) j=l j Therefore, the v-th cumulant of y equals the sum of the v-th cumulants of xj(j = 1,..., N), N K (y) = K (X). (2.23) J1j=1 l In particular, if the x. are drawn from the same distribution, Kv(y) = NK (xi). (2. 24) Thus, if the moments of xi are known, one can obtain the moments of y using the addition property of the cumulant function as the intermediate step in the calculation. 2. 5 Central Limit Theorem Define a normal probability density function with zero mean and unit variance [N(O, 1)] as 1 2/ 0(x) (2r)- eX /2 (2. 25) and a normal distribution function (x) (X f S (u) du. (2. 26) -CO If a probability density function f(x) depends on a parameter n and if f(Xm) converges to O(x) as n approaches infinity, then x is said to be asymptotically normal N(m, a), which means only that (Ref. 7): lim P(m + a < x < m+ ba) = (b) - (a) (2.27) 11

Rather general conditions, based on Lindeberg's Theorem, for the sum of a sequence of independent variables { xn} to be asymptotically normal are given by Feller (Ref. 8). However, for this report, the more restricted conditions of Liapunoff (Ref. 7) are adequate. Liapunoff's conditions for a sum of independent random variables, y = Xk, to be asymptotically normal N(m, a) require that lim (2~) = o, (2. 28) n — oo a where n 3~3 1/ 3 Y = E(Ixk- mkl) (2.29) k= 1 L nx 1/2 (k k ) 1(2.30) and mk and ak are the mean and variance of xk, respectively. In the more restricted case, where the xk are independent samples of the same distribution, it is sufficient for the xk to have finite second moments (Ref. 9) for y = Z Xk to be asymptotically normal. As a warning, it should be pointed out that even for very large n, the n 4 approximation to Z xk by a normal density function is poor on the tails of the distribution. k= 1 2. 6 Classic Series In this section, two series useful for the approximation of probability density functions in terms of the normal function and its derivatives are given. 2. 6. 1 Gram-Charlier A Series Define n(x) - d- 0(x), (2.31) dxn where 1 2 0(x) = (21tr)- exp { -. (2, 32) 4For an interesting discussion of this point with respect to binomial and Poisson approximation by the normal function, see Ref. 10. 12

Then,

    φ^(1)(x) = −x φ(x),
    φ^(2)(x) = (x² − 1) φ(x),
    ...
    φ^(n)(x) = (−1)^n H_n(x) φ(x),        (2.33)

where H_n(x) is the n-th Hermite polynomial (Refs. 11, 12). The sequence of functions {H_0, H_1, ...} forms a bi-orthogonal set in (−∞, ∞) with the sequence {φ^(0), φ^(1), ...}; that is, φ^(n)(x) and H_n(x) are:

1. real and continuous on the entire line,
2. no one of them identically zero on the real line,
3. ∫_{−∞}^{∞} φ^(n)(x) H_m(x) dx = (−1)^m m! δ_mn.⁵

That is, the set of functions {H_n/√(n!)} forms an orthonormal set with respect to a weighting function φ(x) in (−∞, ∞). It is then possible⁶ to expand an arbitrary p.d.f. into a series of the form

    f_x(x) = Σ_{k=0}^{∞} c_k φ^(k)(x),        (2.34)

where the c_k are the coefficients of a generalized Fourier series (Ref. 13), that is,

    c_k = ((−1)^k/k!) ∫ f_x(x) H_k(x) dx.        (2.35)

Instead of expanding f_x(x), consider the expansion of the density function for a standard random variable

    z = (x − α_1(x)) / σ(x),        (2.36)

so that α_1(z) = 0 and σ²(z) = 1 (see (2.1) and (2.5)). Then,

    f_z(z) = Σ_{k=0}^{∞} d_k φ^(k)(z),        (2.37)

⁵δ_mn = 1 for m = n, and 0 otherwise.

⁶Sufficient conditions for the convergence of the series to f_x(x) are given by Fisher (Ref. 11) and Cramer (Ref. 7).

where

    d_k = ((−1)^k/k!) ∫ f_z(z) H_k(z) dz.        (2.38)

Substitution of the Hermite polynomials into (2.38) reveals that

    d_0 = 1,  d_1 = d_2 = 0.        (2.39)

Thus,

    f_z(z) = φ(z) + Σ_{k=3}^{∞} d_k φ^(k)(z),        (2.40)

where, from (2.38), it is apparent that the d_k are sums of the central moments of f_z(z) (central moments since α_1(z) = 0). Using the transformation theory of random variables (Refs. 9, 14), one obtains

    f_x(x) = (1/σ(x)) [φ(z) + Σ_{k=3}^{∞} d_k φ^(k)(z)],        (2.41)

where z is now given by Eq. 2.36. Moreover, since

    μ_k(z) = μ_k(x)/σ^k(x),        (2.42)

the d_k coefficients can be expressed in terms of the central moments of f_x(x). In particular,

    d_3 = −μ_3(z)/3! = −μ_3(x)/(3! σ³(x)),
    d_4 = (μ_4(z) − 3)/4! = (μ_4(x)/σ⁴(x) − 3)/4!,
    d_5 = (−μ_5(z) + 10μ_3(z))/5! = (−μ_5(x)/σ⁵(x) + 10μ_3(x)/σ³(x))/5!.        (2.43)

The expansion of (2.34) is primarily of theoretical interest since, in practice, one uses a truncated series. In this case, the c_k are the coefficients which give a least-mean-square fit (Ref. 13) for the series approximation. It has been found, in practice, that for moderately skewed distributions, the first few terms in the expansion yield a satisfactory fit to the distribution function (Ref. 10).
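The central-moment forms of Eq. 2.43 can be cross-checked against the generalized-Fourier form of Eq. 2.38 (our sketch, not part of the original report): for a discrete standardized variable both routes must give identical coefficients. The test variable, a standardized Bernoulli(0.2), and the function names are our own.

```python
from math import factorial

def hermite(n, z):
    """Probabilists' Hermite polynomial H_n(z) via the recurrence
    H_{n+1}(z) = z * H_n(z) - n * H_{n-1}(z)."""
    a, b = 1.0, z
    if n == 0:
        return a
    for k in range(1, n):
        a, b = b, z * b - k * a
    return b

# z takes -0.5 with probability 0.8 and 2.0 with probability 0.2,
# so alpha_1(z) = 0 and sigma^2(z) = 1 as Eq. 2.36 requires.
pmf = {-0.5: 0.8, 2.0: 0.2}
mu = lambda k: sum(p * z**k for z, p in pmf.items())   # central, since mean = 0

# Eq. 2.38 as a sum: d_k = ((-1)^k / k!) * E[H_k(z)].
d = lambda k: (-1)**k / factorial(k) * sum(p * hermite(k, z) for z, p in pmf.items())

# The moment formulas of Eq. 2.43 give the same numbers.
assert abs(d(3) - (-mu(3) / factorial(3))) < 1e-12
assert abs(d(4) - (mu(4) - 3) / factorial(4)) < 1e-12
assert abs(d(5) - (-mu(5) + 10 * mu(3)) / factorial(5)) < 1e-12
```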

Let y be a sum of n independent variates from the same distribution,

    y = Σ_{i=1}^{n} x_i.        (2.44)

Since y is a sum of variates depending on a parameter n, one expects y to be asymptotically normal by the Central Limit Theorem. The Gram-Charlier expansion for the standard variable

    z = (y − n α_1(x)) / (√n σ(x))        (2.45)

is

    f_z(z) = Σ_{k=0}^{∞} c_k φ^(k)(z).        (2.46)

The coefficients c_k can now be related to the central moments of a single variate, x_i, through the additive property of the cumulants. Defining a standard cumulant as

    λ_k ≜ K_k(x)/σ^k(x),        (2.47)

then the first seven coefficients of (2.46) are

    c_0 = 1,
    c_1 = c_2 = 0,
    c_3 = −λ_3 n^{−1/2}/3!,
    c_4 = λ_4 n^{−1}/4!,
    c_5 = −λ_5 n^{−3/2}/5!,
    c_6 = (λ_6 n^{−2} + 10λ_3² n^{−1})/6!.        (2.48)

An examination of Eq. 2.48 shows that the natural sequence of ordering is not the most efficient in terms of the parameter n (c_6 has a term of order n^{−1} while c_5 is of order n^{−3/2}).

2.6.2 Edgeworth Series. An alternative series, proposed by Edgeworth (Ref. 15), groups the terms of the Gram-Charlier series according to the power of the parameter n; the successive groups contain the derivative orders

    (3), (4, 6), (5, 7, 9), ...        (2.49)

Hence, a three-term approximation for f_z is

    f_z(z) = φ(z) − (λ_3/(3! √n)) φ^(3)(z) + (1/n)[(λ_4/4!) φ^(4)(z) + (10λ_3²/6!) φ^(6)(z)].        (2.50)

Since the λ_k are constants for any distribution function f_x(x), Eq. 2.50 is an asymptotic expansion demonstrating the convergence of f_z(z) to the normal function in terms of the parameter n. If the λ_k are replaced by the standard central moments of z, Eq. 2.50 can be written in the compact notation of Eq. 2.37:

    f_z(z) = φ(z) − (μ_3(z)/3!) φ^(3)(z) + ((μ_4(z) − 3)/4!) φ^(4)(z) + (10μ_3²(z)/6!) φ^(6)(z),        (2.51)

where

    μ_k(z) = μ_k(y)/σ^k(y)        (2.52)

and

    f_y(y) = (1/σ(y)) f_z((y − α_1(y))/σ(y)).        (2.53)

2.7 Application of Hermite Polynomial Expansion

One useful application of the Hermite polynomial expansion is in the determination of a series expansion for the likelihood ratio. If independent sampling of the received waveform and independent signal space samples are assumed, the likelihood ratio of Eq. 1.9 can be factored into the product of 2WT integrals,

    ℓ(y) = [Π_{i=1}^{2WT} ∫_{X_i} f_Y(y_i|x_i) f_x(x_i) dx_i] / [Π_{i=1}^{2WT} f_Y(y_i|N)].        (2.54)

Expanding f_Y(y_i|x_i) in a Taylor series, and using the fact that the noise is additive, one obtains

    f_Y(y_i|x_i) = Σ_{k=0}^{∞} a_k x_i^k/k!,        (2.55)

where

    a_k = (−1)^k (d^k f_Y(u)/du^k)|_{u=y_i}.        (2.56)

Thus, for a general noise density function,

    f_Y(y_i|SN) = Σ_{k=0}^{∞} a_k(y_i) α_k(x_i)/k!,        (2.57)

where α_k(x_i) is the k-th moment of the i-th component of the signal space. If f_Y(u) is the normal density function, then

    a_k = (−1)^k φ^(k)(y_i) = H_k(y_i) φ(y_i).        (2.58)

Substituting (2.58) into (2.57) and dividing through by f_Y(y_i|N), which equals φ(y_i), yields

    f_Y(y_i|SN)/f_Y(y_i|N) = [φ(y_i) Σ_{k=0}^{∞} α_k(x_i) H_k(y_i)/k!] / φ(y_i).        (2.59)

Thus, it follows that

    ℓ(y_i) = Σ_{k=0}^{∞} α_k(x_i) H_k(y_i)/k!.        (2.60)

The expression in Eq. 2.60 is a series expansion for the nonlinear transformation which the receiver must perform on the input in order to process the reception in an optimum manner. This expansion is given in terms of the Hermite polynomials and the moments of the signal sample space.

A different expansion of f_Y(y_i|x_i) can be made. If this density function is expanded about the expected value of the i-th signal component, α_1(x_i) ≜ m_i, then

    f_Y(y_i|x_i) = Σ_{k=0}^{∞} a_k (x_i − m_i)^k/k!,        (2.61)

where, again assuming normal additive noise,

    a_k = (−1)^k (d^k φ(u)/du^k)|_{u=y_i−m_i} = H_k(y_i − m_i) φ(y_i − m_i).        (2.62)

This leads to a likelihood ratio

    ℓ(y_i) = e^{y_i m_i − m_i²/2} Σ_{k=0}^{∞} μ_k(x_i) H_k(y_i − m_i)/k!,        (2.63)

where μ_k(x_i) is the k-th central moment of the i-th component of the signal. Taking the natural logarithm of both sides to obtain the log-likelihood ratio,

    L(y_i) = y_i m_i − m_i²/2 + ln{1 + (μ_2(x_i)/2!) H_2(y_i − m_i) + ...}.        (2.64)

Equation 2.64 can be interpreted as follows: the first two terms correspond to the log-likelihood receiver for signal-known-exactly in white Gaussian noise. The ln{·} term corresponds to a perturbation of the SKE case. An approximate receiver design for this expansion is shown in Fig. 3.

[Fig. 3. Approximation to a log-likelihood ratio receiver: an SKE receiver in parallel with a dashed correction section of (k − 1) filters of the form μ_k(x_i) H_k(y_i − m_i), whose outputs are combined to produce L(y_i).]
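The series of Eq. 2.60 can be checked in the one case where the answer is classical (our sketch, not part of the original report): for signal known exactly, x_i = m with probability one, so α_k(x_i) = m^k and the series must sum to the familiar SKE likelihood ratio exp(ym − m²/2) for N(0, 1) noise. The truncation depth and function names are our own.

```python
from math import exp, factorial

def hermite(k, z):
    """Probabilists' Hermite polynomial H_k(z)."""
    a, b = 1.0, z
    if k == 0:
        return a
    for j in range(1, k):
        a, b = b, z * b - j * a
    return b

def lr_series(y, m_sig, terms=40):
    """Truncation of Eq. 2.60 with alpha_k(x_i) = m_sig**k (the SKE case)."""
    return sum(m_sig**k * hermite(k, y) / factorial(k) for k in range(terms))

# The series reproduces exp(y*m - m^2/2), the SKE likelihood ratio in
# unit-variance Gaussian noise, to high accuracy.
for y in (-1.0, 0.0, 0.5, 2.0):
    assert abs(lr_series(y, 0.8) - exp(0.8 * y - 0.8**2 / 2)) < 1e-9
```

This identity is the generating function of the Hermite polynomials, which is why the SKE case collapses to a single exponential while a random signal leaves a genuine series.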

The receiver enclosed by the dashed lines is the k-th order correction to the "signal-known-exactly" case. If such a receiver were implemented, several simplifications could be made. First, a time-varying bias equal to the expected signal amplitude m_i could be inserted at the input to the correction section. The (k − 1) filters are then fixed nonlinearities with time-varying scale factors⁷ adjusted according to the central moments of the signal distribution. The nonlinearity, which is now fixed, may be described by a polynomial whose degree equals the index on the H functions. Thus, to use this receiver in a signal environment other than the original one would require merely the adjustment of the bias and (k − 1) scale factors while the nonlinearities remain the same.

The advantage of this type of receiver over a likelihood receiver with a single nonlinearity is its adaptability to other environments. In particular, the design of the likelihood receiver (single nonlinearity) is based on the hypothesis of a priori distributions, while the receiver of Fig. 3 allows the operator to adjust the parameters (bias and scaling) to maximize signal detectability. The obvious disadvantage is the requirement of (k − 1) nonlinearities instead of only one. However, the nonlinearities are polynomial functions, which facilitates their implementation or on-line processing by a computer.

2.8 Application of Edgeworth Series

Consider the correlator as a receiver for signal detection. The output z* is given by

    z* = Σ_{i=1}^{n} y_i s_i.        (2.65)

If the output is biased and then scaled, the new output z is given by

    z = (z* − m)/σ,        (2.67)

where m and σ denote the mean and standard deviation of z* under the hypothesis considered; but (2.67) is the exact form to be used in an Edgeworth series. In order to use the Edgeworth series, one need calculate only the central moments of z* under hypotheses N and SN.
⁷The scale factors are constant if the signal space statistics are stationary.

Since it is assumed that the distributions of the samples x_i and n_i are known, and hence their moments, the computer is programmed to

use the moments α_k(x_i) and α_k(n_i) as input data. These moments are then converted to cumulants and added. The central moments of z* are computed from the cumulants of the sum Σ y_i s_i and are used in the expression for the coefficients of two Edgeworth expansions for f(z|N) and f(z|SN). In order to evaluate the receiver, the integral of the Edgeworth expansion is needed:

    F(η|N) = ∫_{−∞}^{η} f(z|N) dz,        (2.68)

    F(η|SN) = ∫_{−∞}^{η} f(z|SN) dz,        (2.69)

where the probability of false alarm is given by 1 − F(η|N) and the probability of detection is given by 1 − F(η|SN). Either (2.68) or (2.69), which are conditional distribution functions of z, may be expressed as

    F(η|·) = Σ_k c_k Φ^(k)(η),   k unordered,        (2.70)

where

    Φ^(k)(η) = ∫_{−∞}^{η} φ^(k)(x) dx        (2.71)

or

    Φ^(k)(η) = φ^(k−1)(η) − φ^(k−1)(−∞).        (2.72)

Since φ^(k−1)(−∞) = 0,

    Φ^(k)(η) = φ^(k−1)(η) = (−1)^{k−1} H_{k−1}(η) φ(η).        (2.73)

Therefore, the only integration involved in the computation is of Φ^(0)(η) = Φ(η), which is a standard computer routine. The integral of the remaining terms is replaced by the Hermite polynomials and the φ(x) function.
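Equations 2.70-2.73 can be exercised numerically (our sketch, not part of the original report). As a stand-in for the receiver output under one hypothesis we take the standardized sum of n = 10 unit-exponential variates, whose exact exceedance probability has a closed Poisson-sum form; the standard cumulants of the exponential, λ_3 = 2 and λ_4 = 6, are assumed known. All names below are our own.

```python
from math import erf, exp, factorial, pi, sqrt

def hermite(k, z):
    a, b = 1.0, z
    if k == 0:
        return a
    for j in range(1, k):
        a, b = b, z * b - j * a
    return b

phi = lambda z: exp(-z * z / 2) / sqrt(2 * pi)
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
# Eq. 2.73: the integral of phi^(k) is phi^(k-1) = (-1)^(k-1) H_{k-1} phi.
iphi = lambda k, eta: (-1)**(k - 1) * hermite(k - 1, eta) * phi(eta)

n, lam3, lam4 = 10, 2.0, 6.0

def F_edgeworth(eta):
    """Three-term Edgeworth distribution function (Eqs. 2.50 and 2.70)."""
    c3 = -lam3 / (6 * sqrt(n))
    c4 = lam4 / (24 * n)
    c6 = 10 * lam3**2 / (720 * n)
    return Phi(eta) + c3 * iphi(3, eta) + c4 * iphi(4, eta) + c6 * iphi(6, eta)

def F_exact(eta):
    """Exact gamma(n, 1) distribution function at x = n + sqrt(n)*eta."""
    x = n + sqrt(n) * eta
    return 1 - sum(x**j * exp(-x) / factorial(j) for j in range(n))

# The Edgeworth form tracks the exact distribution function closely and
# always beats the plain normal term Phi(eta) here.
for eta in (-1.0, 0.0, 0.5, 1.5):
    assert abs(F_edgeworth(eta) - F_exact(eta)) < 5e-3
    assert abs(F_edgeworth(eta) - F_exact(eta)) <= abs(Phi(eta) - F_exact(eta))
```

In the receiver-evaluation setting, 1 − F_edgeworth(η) evaluated with the moments under N gives the false-alarm probability, and with the moments under SN the detection probability.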

In practice, one really wants to approximate the distribution functions as closely as possible, since these distribution functions yield the receiver evaluation. Fortunately, every intuitive and mathematical idea on the convergence of the Edgeworth series to f(z) likewise applies to the convergence of the Edgeworth series in Eq. 2.70 to the distribution function F(η|·).

The Edgeworth series provides a tool for the evaluation of a detection receiver when the receiver output moments are known. Thus, the Edgeworth series answers the second problem posed in Section 1.5. Although the Hermite polynomial expansion, given in Section 2.7, yields a novel physical realization of the log-likelihood receiver, Eq. 2.64 is not a convenient analytic representation of the log-likelihood ratio mapping. That is, the form of Eq. 2.64 is unsuitable for the computation of the receiver output moments which are needed in the Edgeworth series. Thus, the first problem posed in Section 1.5 remains unanswered.

THE PEARSON SYSTEM

3.1 Introduction

The material in this chapter presents the background for the approximation of the likelihood ratio mapping by a closed-form expression. The closed-form expression can be used to approximate the moments of the likelihood ratio receiver output. The likelihood ratio is the ratio of the signal-plus-noise p.d.f. to the noise p.d.f. The signal-plus-noise p.d.f. cannot usually be expressed in terms of elementary functions. The approximation to the likelihood ratio replaces the signal-plus-noise p.d.f. by a Pearson frequency curve.

The Pearson System of Frequency Curves, developed by Karl Pearson (Refs. 16-18), approximates raw statistical data with a closed-form expression. The Pearson curve which does the approximation has the property that its first four moments (α_1, ..., α_4) are the same as the moments of the raw data. In the discussion of the Gram-Charlier and Edgeworth series, credibility was given to the approximation by some "acceptable" applied mathematics and by appealing to one's intuitive belief in the Central Limit Theorem. In the case of the Pearson System, no such mathematical justification can be given. However, having made such a strong statement, a rationale is now presented for using Pearson's work in the evaluation of detection receivers.

3.2 Approach I

There are two approaches to the Pearson System. Each one gives some insight into the final results. Pearson was concerned with fitting raw statistical data with curves. He sought a system of curves which would allow him freedom in choosing the skewness (a measure of the nonsymmetric deviation about the mean) and the range⁸ of the resulting curve. He found, in many cases, a fundamental relationship between the slope and amplitude of the density function. For example, consider the symmetric (p = q) point binomial with a polygon

⁸Here, range implies the range of the random variable or, alternatively, the domain of the probability density function.

drawn through its ordinates, as shown in Fig. 4.

[Fig. 4. Symmetric point binomial polygon.]

The normalized first difference of this density function is given by

    [P(r+1) - P(r)] / {(1/2)[P(r+1) + P(r)]} = -(abscissa - mean abscissa)/variance,     (3.1)

which may be interpreted as

    (slope of polygon)/(mean ordinate) = -2(abscissa - mean abscissa)/(2σ^2).     (3.2)

Similarly, consider the normal density function,

    f(x) = (2πσ^2)^(-1/2) e^(-x^2/2σ^2).     (3.3)

Then

    f'(x)/f(x) = -2x/(2σ^2) = -x/σ^2.     (3.4)
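The slope relation in Eq. 3.2 is easy to check numerically. The short Python sketch below (the choice n = 40 and all variable names are illustrative, not from the report) compares the normalized first difference of a symmetric point binomial with -(abscissa - mean)/variance.

```python
from math import comb

n = 40                          # symmetric point binomial, p = q = 1/2
mean, var = n/2, n/4
P = [comb(n, k)/2**n for k in range(n + 1)]

for r in range(18, 23):
    # normalized first difference of the polygon between r and r+1
    lhs = (P[r+1] - P[r]) / (0.5*(P[r+1] + P[r]))
    # slope relation of Eq. 3.2, taken at the mid-abscissa r + 1/2
    rhs = -((r + 0.5) - mean)/var
    print(r, round(lhs, 4), round(rhs, 4))
```

The two columns agree to within a few parts in a thousand, and the agreement improves as n grows — the geometric relation is independent of the size of the binomial, as the text asserts.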

Comparing Eq. 3.4 with Eq. 3.2, the two density functions are seen to have a close geometric relation which is independent of the size of the binomial distribution. As a second example of the relation between a density function and its derivative, Pearson found that the skew point binomial (p ≠ q) had the same slope relation as the curve

    f(x) = f0 (1 + x/a)^(γa) e^(-γx),     (3.5)

where γ and a are given by

    γ = 2/(p-q),     a = 2pq(n+1)/(p-q).     (3.6)

The result of Pearson's work was a differential equation representing the slope-to-amplitude relation of a large class of symmetric and nonsymmetric density functions, having finite and infinite range. The Pearson differential equation is

    df(x)/dx = [(x + a)/(b0 + b1 x + b2 x^2)] f(x).     (3.7)

The solutions of Eq. 3.7 form the Pearson class. This class includes the Normal, Chi-Squared, Poisson, Rayleigh, and Maxwell-Boltzmann probability density functions, etc. Pearson's curves are divided into three main groups or types, according to their use:

(1) Type I - Skewed density functions of limited range.
(2) Type IV - Skewed density functions of unlimited range.
(3) Type VI - Skewed density functions of semilimited range.

Subtypes of these main types cover the symmetric cases as transition curves. There are nine subtypes, giving a total of twelve Pearson type curves.

3.3 Approach II

Before proceeding, it is well worth the time to examine the second approach to Pearson's differential equation.9 Consider some of the more obvious characteristics of the probability density functions found in practice.

9See Elderton (Ref. 19) for further discussion of the Pearson System.

(a) If x0 and x1 denote the end points of the range, then f(x0) and f(x1) equal zero.
(b) The p.d.f. is unimodal. Thus, the derivative of the p.d.f. equals zero at some point within the range [x0, x1].
(c) The p.d.f. has k-th order smoothness at the end points of the range. That is, the first k derivatives of the p.d.f. at x0 and x1 equal zero.

Figure 5 shows a typical unimodal probability density function with the above three properties.

[Fig. 5. A unimodal density function.]

A density function which satisfies the differential equation

    df(x)/dx = [(x + a)/g(x)] f(x)     (3.8)

certainly satisfies the conditions (a), (b), and (c) above. In fact, by leaving g(x) general, one can vary the conditions on f(x) (e.g., g(x0) = 0 so that f'(x0) ≠ 0). If g(x) is expanded in a Maclaurin series, retaining only the first three terms, Pearson's differential equation results:

    df(x)/dx = [(x + a)/(b0 + b1 x + b2 x^2)] f(x).     (3.9)

Pearson must have realized the advantages of this approach, for he suggests in a later paper (Ref. 18) the differential equation

    df(x)/dx = [(a0 + a1 x + a2 x^2)/(b0 + b1 x + b2 x^2 + b3 x^3)] f(x).     (3.10)

This differential equation allows data from higher-order hypergeometric distributions to be fitted, in particular, bimodal density functions. An example of a p.d.f. satisfying Eq. 3.10 is the Halsted density function (Ref. 20),

    f(x) = x^k e^(bx) e^(cx^2),     x > 0.     (3.11)

Nevertheless, little has been published on the types arising from the more general differential equation (3.10), because the available data do not effectively illustrate the solutions of the equation.

3.4 The Pearson System

Pearson's differential equation,

    df(x)/dx = [(x + a)/(b0 + b1 x + b2 x^2)] f(x),     (3.12)

is integrated to yield

    ln f(x) = ∫ (x + a)/(b0 + b1 x + b2 x^2) dx.     (3.13)

Although the actual form of the density curve depends on the integral of Eq. 3.13, the three main types of Pearson's system depend on the pole locations of the integrand. The three main types are classified according to the location of the zeros of the quadratic in the integrand denominator. If the roots are real and of opposite sign, the density function is Type I. For complex roots, the density function is Type IV, while for real roots of the same sign, the density function is Type VI. The coefficients a, b0, b1, b2 can be related to the moments of the random variable10 so that a criterion to determine the Pearson type can be based on the moments of the distribution. The criterion, K,11 determines the location of the quadratic roots and thus implies the choice of the Pearson type. The three main Pearson types12 and the criterion K follow.

10See Appendix A. 11See Appendix B. 12See Appendix C.

Type I - Limited range, skew, roots real and of opposite sign:

    f(x) = f0 (1 + x/a1)^(m1) (1 - x/a2)^(m2),     -a1 < x < a2,     K < 0.     (3.14)

Type IV - Unlimited range, skew, complex conjugate roots:

    f(x) = f0 (1 + x^2/a^2)^(-m) e^(-ν tan^-1(x/a)),     -∞ < x < ∞,     0 < K < 1.     (3.15)

Type VI - Semilimited range, skew, roots real and of the same sign:

    f(x) = f0 (x - a)^(m2) x^(-m1),     a < x < ∞,     1 < K.     (3.16)

In Types IV and VI, the "a" has a different meaning from the "a" in the differential equation. The criterion is

    K = β1 (β2 + 3)^2 / [4 (4β2 - 3β1)(2β2 - 3β1 - 6)],     (3.17)

where

    β1 = μ3^2 / μ2^3,     β2 = μ4 / μ2^2.     (3.18)
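The criterion is mechanical to apply once the central moments are in hand. The following Python sketch (the function name and example values are mine, not part of the report's library) computes β1, β2, and K from μ2, μ3, μ4 of Eqs. 3.17-3.18 and reports the main type; the boundary cases, where the denominator vanishes or K equals 0 or 1, correspond to the transition subtypes of Appendix D.

```python
def pearson_type(mu2, mu3, mu4):
    """Classify a distribution by Pearson's criterion K (Eqs. 3.17-3.18)."""
    beta1 = mu3**2 / mu2**3         # squared skewness measure
    beta2 = mu4 / mu2**2            # kurtosis measure
    denom = 4*(4*beta2 - 3*beta1)*(2*beta2 - 3*beta1 - 6)
    if denom == 0:
        return None, "transition subtype"   # e.g., the Type III boundary
    K = beta1*(beta2 + 3)**2 / denom
    if K < 0:
        return K, "Type I"
    if 0 < K < 1:
        return K, "Type IV"
    if K > 1:
        return K, "Type VI"
    return K, "transition subtype"          # K = 0 or K = 1
```

For Gaussian moments (μ2 = 1, μ3 = 0, μ4 = 3), both β1 = 0 and 2β2 - 3β1 - 6 = 0, so the normal curve sits on a transition boundary; every gamma density (Type III) satisfies 2β2 - 3β1 - 6 = 0 as well. A beta density, a limited-range skew curve, yields K < 0 and is classified Type I, as the criterion requires.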

Some of the more interesting subtypes occur for the transition points K = 0 and K = 1, and also for large magnitudes of K.

3.5 Errors and Rationale

Since the Pearson approximation to a probability density function is based only on the first four moments, there is no limit to the possible error involved. Clearly, an infinite number of different density functions yields the same first four moments. Under the Pearson system, all of these density functions would have the same classification. When Pearson was fitting data, he was able to measure the mean-square error of his approximation and compare this error with the errors from other approximations. But, in the evaluation problem, the moments are used to fit unknown distributions (if the distributions were known, it would not be necessary to use the approximation techniques). Thus, no error estimate can be found directly. However, one could re-evaluate the receiver using an approximating differential equation of the form

    df(x)/dx = [(x + a)/(b0 + b1 x + b2 x^2 + b3 x^3)] f(x).     (3.19)

Since the f of Eq. 3.19 is based on the first five moments, the evaluation using this class of approximating functions, when compared with the evaluations from the Pearson class, would determine an estimate of the error due to the omission of the fifth moment.

In spite of the inherent problems in estimating the errors incurred when using the Pearson system for receiver evaluations, several strong practical reasons are in its favor. Evaluating the receiver, if not exactly, then at least in the Pearson sense, allows insight into the environmental effects on signal detectability. When the Pearson system is used, the results both include those of the usual normal approximation (first two moments) and allow for the qualities of skewness and kurtosis. Moreover, in practice, the probability density functions should be based on the statistical data obtained from channel measurements.
The Pearson system is ideal for fitting these data with a closed-form expression. To expect the reliability of the data to be high enough to estimate the first four moments of the distributions with a small probable error is asking quite a bit of the channel experiments. To ask more seems impractical, at the present time, from an equipment point of view. Indeed, the Pearson system can be criticized for its reliance on the accuracy

of the fourth moment as well as the first moment while, in practice, the estimate of the first moment is far more reliable than that of the fourth moment.

RECEIVER EVALUATION

4.1 Introduction

The techniques for the approximation of probability density functions introduced in Chapters II and III will now be applied to a broad class of receivers. A unified treatment of this class will be presented through the introduction of the general receiver. In Sections 4.2-4.5, it will be assumed that the density functions of the observation space are known; Section 4.6 will discuss the evaluation of detection receivers when this is not the case.

4.2 General Receiver

A fixed-time receiver maps the observation vector, y, into the real line. The general receiver is a detection receiver which maps the components of y, via a zero-memory nonlinearity, g, into the random variables, zi, and then sums the random variables, zi, to give the output Z (see Fig. 6). The output Z, at time t = T, is compared against a threshold, η. If Z > η, one asserts that signal-plus-noise was present, while if Z ≤ η, one asserts that noise alone was present.

The evaluation of this type of receiver is straightforward. Since

    Z = Σi zi,     (4.1)

where

    zi = gi(yi),     (4.2)

and the yi are independent samples, Z is the sum of 2WT independent samples from the same population (either N or SN). The moments of zi follow from their definition,

    αk(zi) = ∫Y gi^k(yi) f(yi) dyi,     (4.3)

where

    f(yi) = f(yi|N)     (4.4)

if the moments of zi under the noise hypothesis are desired, while

    f(yi) = f(yi|SN)     (4.5)

when the moments of zi under the SN hypothesis are sought.

[Fig. 6. General receiver block diagram: zero-memory nonlinearity followed by a summation.]

Using the additive property of the cumulants of independent random variables, one can calculate the general receiver output moment sets,

    κk(Z|N) = 2WT κk(zi|N),     k = 1, 2, ..., ℓ,     (4.6)

    κk(Z|SN) = 2WT κk(zi|SN),     k = 1, 2, ..., ℓ.     (4.7)

At this point, one can use either a Pearson fit or an Edgeworth series to approximate the receiver output probability density functions. If a Pearson fit is to be made, it is sufficient to use the first four moments (ℓ = 4), while the Edgeworth series can use more than four moments (ℓ > 4). Once the density functions are obtained, the false-alarm and detection probabilities follow from

    P(A|N) = 1 - ∫ from -∞ to η of f(Z|N) dZ,     (4.8)

    P(A|SN) = 1 - ∫ from -∞ to η of f(Z|SN) dZ.     (4.9)

As explained in Section 2.8, the Edgeworth series evaluation can be reduced to the evaluation

of one integral and a number of Hermite polynomials. The Pearson evaluation requires only one integration procedure which, in general, must be carried out by some quadrature method, since no standard routines are known for the integration of the Pearson curves over subintervals of the random variable range.

4.3 Examples of the General Receiver

Some examples of the general receiver are:

4.3.1 Correlator.

    gi(yi) = xi yi,     (4.10)

    αk(zi) = (xi)^k αk(yi),     (4.11)

    αk(zi|N) = (xi)^k αk(ni),     (4.12)

    αk(zi|SN) = Σ from j=0 to k of C(k,j) (xi)^(2k-j) αj(ni),     (4.13)

where C(k,j) denotes the binomial coefficient. A fuller development of the correlator and its evaluation is given in Section 2.8.

4.3.2 Square-Law Detector.

    gi(yi) = yi^2,     (4.14)

    αk(zi|N) = α2k(ni),     (4.15)

    αk(zi|SN) = α2k(yi|SN).     (4.16)

The square-law detector is used for incoherent detection, where the epoch of the signal is not known. The detection is based on the increase of observed energy when signal is present as compared to the noise-alone energy.

4.3.3 Clipper Crosscorrelator. (Figure 7)

    g(yi) = +1 for yi > K,
          = -1 for yi < K.     (4.17)

[Fig. 7. Clipper crosscorrelator nonlinearity: g(yi) = +1 for yi > K, -1 for yi < K.]

    αk(zi|N) = 1 for k even,
             = 1 - 2 P(K|N) for k odd,     (4.18)

    αk(zi|SN) = 1 for k even,
              = 1 - 2 P(K|SN) for k odd,     (4.19)

where

    P(K|...) = ∫ from -∞ to K of f(yi|...) dyi.     (4.20)

If, instead, the nonlinear function

    g(yi) = 1 for yi > K,
          = 0 for yi < K     (4.21)

is used, then

    αk(zi) = 1 - P(K|...) for all k.     (4.22)

Thus, the zi are samples of a two-point, skew binomial distribution. The output Z is then

the number of successes in 2WT Bernoulli trials with probability

    p = 1 - P(K|...)     (4.23)

of success. It is well known (Ref. 8) that Z has a binomial distribution with expectation (2WT)p and variance (2WT)p(1-p). When such a receiver is evaluated by the Pearson technique, the resultant distributions are continuous-curve approximations to the skew point binomial distribution (see Section 3.2).

4.4 Log-Likelihood Ratio Receiver

Instead of the general receiver of the preceding section, consider the optimum Bayes detector, the likelihood-ratio receiver given by

    ℓ(y) = f(y|SN)/f(y|N).     (4.24)

Independence of the observations in N and SN permits one to write the likelihood ratio of the total observation as

    ℓ(y) = Π from i=1 to 2WT of ℓ(yi) = Π from i=1 to 2WT of f(yi|SN)/f(yi|N).     (4.25)

Since a decision based on ℓ(y) is optimum in a Bayes sense, a decision based on a monotone function of ℓ(y) is also optimum in a Bayes sense. In particular, a decision based on the logarithm of the likelihood ratio is optimum. Denoting the log-likelihood ratio by L,

    L(y) = ln ℓ(y);     (4.26)

then, from Eq. 4.25,

    L(y) = Σ from i=1 to 2WT of L(yi) = Σ from i=1 to 2WT of ln [f(yi|SN)/f(yi|N)].     (4.27)

Now, if the zero-memory nonlinear function of the general receiver is given by

    g(yi) = L(yi) = ln [f(yi|SN)/f(yi|N)],     (4.28)

it is evident that the optimum Bayes detector can be represented by the general receiver with the nonlinearity set equal to the log-likelihood ratio of the observation. The evaluation of the Bayes detector can thus be performed in the same manner as described in Section 4.2, with the following exception. In Section 4.3, the moments of zi were directly related to the moments or the distribution function of the observation component yi. When a log-likelihood ratio nonlinearity is used, such a relation is not generally available. Hence, the formal expectation of Eq. 4.3 must be performed with a quadrature method.

At the same time, the Bayes detector imposes a structure on the processing such that the output p.d.f. under the signal-plus-noise hypothesis is related to the output p.d.f. under the noise-alone hypothesis by the likelihood ratio;13 that is,

    f(L|SN) = e^L f(L|N).     (4.29)

Hence, in theory, one need only evaluate the p.d.f. under the noise hypothesis without concern for the signal-plus-noise moments. Although this is theoretically feasible, Pearson fits, in practice, have the potential of diverging when multiplied by a term e^L. To expand on this statement, consider Eq. 4.29 and the conditions under which f(L|SN) remains a member of the Pearson class. Differentiating,

    df(L|SN)/dL = f(L|SN) [1 + Q(L)],     (4.30)

where

    Q(L) = [df(L|N)/dL] [1/f(L|N)].     (4.31)

It is evident from the form of Eq. 4.30 that the maximum degree of the denominator of Q(L) is one and, thus, the class of allowable p.d.f.'s under noise is restricted to the form

    df(L|N)/dL = [(L + a)/(b0 + b1 L)] f(L|N).     (4.32)

13See Appendix E.

Therefore, if f(L|N) is not a member of the restricted class defined by Eq. 4.32, f(L|SN) will not be in the Pearson class.

4.5 Postintegration Optimum Processing

It is worthwhile to point out that threshold detection of the general receiver output is not, in general, the optimum Bayes procedure. Instead of threshold detection, one should form the likelihood ratio of Z and base a decision on that likelihood ratio; i.e., to optimize the Bayes performance of a receiver with preprocessing, the observer should treat Z as the observation instead of y(t). The generalized receiver can then be considered as a part of the transmission channel which maps the 2WT-dimensional space Y into the real line. The optimum Bayes detector for preprocessing by the generalized receiver is shown in Fig. 8. If the generalized receiver is a likelihood ratio receiver, then further processing of Z cannot improve the observer's decision. Intuitively, it follows that the likelihood ratio of the likelihood ratio must equal the likelihood ratio.14

14See Appendix E.

[Fig. 8. Optimum post-integration processor: general receiver, likelihood ratio nonlinearity, and threshold comparison (N vs. SN).]

When the generalized receiver is not a monotonic function of the likelihood ratio, one must compute the likelihood ratio of Z in a form suitable for further computation. If the Edgeworth series is used to approximate f(Z|...), an unwieldy number of coefficients and polynomials must be carried along. However, when a Pearson fit to f(Z|...) is used, the two p.d.f.'s are given in closed form. These two density functions are all that is needed for the remaining part of the evaluation. At this point, the same procedure which is given in

Section 4.4 is carried out using the p.d.f.'s f(Z|SN) and f(Z|N), with the modification of setting 2WT equal to one. The ROC curve obtained is optimum for the class of receivers restricted to preprocessing by a generalized receiver. The performance in this situation can then be compared to the performance of the generalized receiver with threshold detection (no post-integration processing) and to the optimum receiver of Section 4.4, where the likelihood ratio operates directly on the observation space Y.

4.6 Receiver Evaluation from Raw Data

This section will discuss the problem of design and evaluation of the receiver when the density functions on the observation space Y are not given, but instead either the channel measurements or the moments of the density functions are available. If the data available are the channel measurements, these can be converted into estimates of the moments of the density functions. Using these moments, a Pearson fit is made to the two density functions and their ratio is taken. This ratio is the Pearson approximation to the likelihood ratio nonlinearity for a single component of the observation vector y. It is evident that the post-integration detector of Section 4.5 is an example of the Pearson receiver (approximation to the likelihood ratio). Once the approximation is obtained, the procedure of Section 4.4 is followed using the Pearson nonlinearity.

4.7 Computational Aids

As mentioned in Section 4.4, the calculation of the output moments of a likelihood ratio receiver requires a quadrature routine. This can be partially circumvented by use of either of the following theorems.

Theorem 1. The k-th moment of the likelihood ratio under the signal-plus-noise hypothesis equals the (k+1)-st moment of the likelihood ratio under the noise-alone hypothesis,

    αk[ℓ(y)|SN] = αk+1[ℓ(y)|N].     (4.33)

The proof of this theorem is given in Ref. 1.
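Theorem 1 is a direct consequence of ℓ(y) = f(y|SN)/f(y|N), and it can be illustrated with a toy discrete channel; the probabilities below are arbitrary illustrative numbers of mine, not measurements from the report.

```python
# Hypothetical two-hypothesis discrete channel (illustrative numbers only)
p_n  = [0.5, 0.3, 0.2]                    # f(y|N)
p_sn = [0.2, 0.3, 0.5]                    # f(y|SN)
lr   = [s/n for s, n in zip(p_sn, p_n)]   # likelihood ratio l(y)

def moment(probs, k):
    # k-th moment of the likelihood ratio under the given hypothesis
    return sum(p * l**k for p, l in zip(probs, lr))

for k in range(1, 4):
    lhs = moment(p_sn, k)                 # alpha_k(l | SN)
    rhs = moment(p_n, k + 1)              # alpha_{k+1}(l | N)
    print(k, round(lhs, 6), round(rhs, 6))
```

Each pair agrees to rounding error, since Σ f(y|SN) ℓ^k = Σ f(y|N) ℓ^(k+1) term by term; in a library such as the one of Chapter V, this identity lets all the required signal-plus-noise moments come from one extra noise-moment quadrature.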
Its application to the post-integration processing allows computation of all of the required signal-plus-noise moments by the computation of one extra noise moment. Use of the theorem is limited, since the log-likelihood receiver of Section 4.4 requires the moments of the log-likelihood ratio. The following theorem can be used in place of Theorem 1.

Theorem 2. The k-th moment of the log-likelihood ratio under the signal-plus-noise hypothesis is related to all of the moments of order greater than k - 1 of the log-likelihood ratio under the noise-alone hypothesis. This relation is given by

    αk(L|SN) = Σ from j=k to ∞ of αj(L|N)/(j-k)!.     (4.34)

Proof: By definition,

    αk(L|SN) = ∫ L^k(y) f(y|SN) dy,     (4.35)

but, since

    f(y|SN) = ℓ(y) f(y|N) = e^L f(y|N),     (4.36)

then

    αk(L|SN) = ∫ L^k(y) e^L f(y|N) dy.     (4.37)

Expanding e^L in a Maclaurin series and integrating term by term, it follows that

    αk(L|SN) = Σ from j=k to ∞ of αj(L|N)/(j-k)!.     (4.38)

When the noise distribution is such that the moments αj(L|N) are both easy to compute and slowly growing, the use of Theorem 2 can save computing time.

4.8 Computer Evaluation

A library of receiver-evaluation programs is available. These programs have been tested in several well-known detection problems where analytic results were available for comparison. In nearly all of the test programs, the evaluation results completely agreed with the analytic results. The conclusion drawn from the test results is that the programs are operational; that is, the quadrature methods and program logic yield evaluations with negligible error. As yet, no statement can be made about the quality of the approximation techniques when the Pearson system is used. Whenever the test results deviated negligibly from the analytic results, one noted that the probability density functions in the test problem were, indeed, members of the Pearson class, and thus the application of Pearson's techniques yields exact results. When the density functions in the test problem were not members of the Pearson class, the approximation to the receiver nonlinearity was often excellent over a large range of the observation variable. An example of the receiver approximation (see Ref. 21) is Fig. 9, where the log-likelihood ratio nonlinearity curves computed by both the convolution and Pearson techniques, the former yielding the theoretical result, are shown.

[Fig. 9. Log-likelihood ratio receiver nonlinearity: convolution and Pearson curves plotted over -4 ≤ yi ≤ 12.]

Although the Pearson approximation is good over the range -4 < yi < 12, the receiver moments calculated from this nonlinearity may require as good a fit as that shown in Fig. 9 over an even larger range. The question of the range of the approximation is academic in the practical problem. In the test problems, the random variables have the range -∞ < xi < ∞, but with measured data, the range is well bounded. Thus, the question of the quality of the approximations must be deferred until channel data are available for testing purposes.
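Theorem 2 (Eq. 4.34) can likewise be checked numerically. The sketch below uses the classical known-signal-in-Gaussian-noise case, for which L under the noise hypothesis is normal with mean -d^2/2 and variance d^2; the particular value d = 0.5, the moment recursion, and the truncation at j = 40 are all illustrative choices of mine, not prescriptions from the report.

```python
import math

d = 0.5                               # signal-to-noise parameter (illustrative)
mN, s2 = -d*d/2, d*d                  # L|N ~ Normal(-d^2/2, d^2)

def normal_moments(m, var, jmax):
    """Raw moments of a normal variate via M_j = m M_{j-1} + (j-1) var M_{j-2}."""
    M = [1.0, m]
    for j in range(2, jmax + 1):
        M.append(m*M[-1] + (j - 1)*var*M[-2])
    return M

M = normal_moments(mN, s2, 40)
k = 1
# Truncated series of Eq. 4.34 for alpha_1(L|SN)
series = sum(M[j] / math.factorial(j - k) for j in range(k, 41))
exact = d*d/2                         # L|SN ~ Normal(+d^2/2, d^2), so alpha_1 = d^2/2
print(series, exact)
```

Because the noise moments of L grow slowly here (the standard deviation of L is only d), the series converges well before the truncation point — exactly the situation in which the text says Theorem 2 saves computing time.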

RECEIVER-EVALUATION COMPUTER LIBRARY

This section describes the programs available in the receiver-evaluation library. The programs are written in the MAD (Michigan Algorithm Decoder) language, a computer language developed by personnel at the University of Michigan Computing Center. However, comprehension of any of the common computer languages, such as FORTRAN, should enable the reader to follow the program descriptions. Each program listed will contain the following information:

(1) Name and general description of program
(2) Required subroutines (external functions)
(3) Program entry - calling routine and explanation of required routine arguments
(4) Output
(5) Program purpose and method

Any routine called by a program falls into one of three classes:

(a) Internal Function - the called routine is defined and listed as a part of the program.
(b) External Function - the called routine is a subroutine which belongs to the library of receiver-evaluation programs. As such, the routine is listed in the number (2) category of program information, and a description of the called routine can be found in this chapter.
(c) Computer Subroutine - the called routine belongs to the computer library system, called MESS (Michigan Executive Subroutine System). Typical routines in this category include the transcendental functions, subroutines for dating the program, etc.

5.1 Moment-Conversion Subroutine (ALTOMU) (Figure 10)

This program computes the central moments of a sum of random variables

[Fig. 10. Moment conversion subroutine (ALTOMU). From the noncentral moments AN(1)-AN(4) of a single variate, the flowchart computes the central moments of a sum of M such variates:

    MUN(1) = M * AN(1),
    MUN(2) = M * (AN(2) - AN(1)^2),
    MUN(3) = M * (AN(3) - 3 AN(1) AN(2) + 2 AN(1)^3),
    MUN(4) = M * (AN(4) - 3 AN(2)^2 - 4 AN(1) AN(3) + 12 AN(2) AN(1)^2 - 6 AN(1)^4) + 3 MUN(2)^2,

and prints AN(1)...AN(4) and MUN(1)...MUN(4).]

drawn from the same population. The program can be used to compute the output moments of linear filters.

(2) Required Subroutines - none.

(3) EXECUTE ALTOMU. (AN, MUN, M)
AN - the location of a vector of the first four noncentral moments of a single variate in the sum.
MUN - the storage location of the computed central moments of the sum in the main program.
M - the number of random variables in the sum.

(4) Output - the function returns MUN to the main program and prints out the vectors AN and MUN.

(5) Method - the program computes MUN from AN using the additive property of the cumulants to express the relation between MUN and AN for any given M. Note that, for M equal to one, the program converts the alpha (noncentral) moments to mu (central) moments. This program is limited to linear filters with unity weighting functions. A generalization of ALTOMU would allow the output moments of arbitrary linear filters to be evaluated.

5.2 Edgeworth Subroutine (EDGE) (Figure 11)

This program takes the central moments of a noise and a signal-plus-noise distribution and computes the ROC curve. The program is used to evaluate receivers whose output central moments are known.

(2) Required Subroutines - none.

(3) EXECUTE EDGE. (MS, MN)
MS - location of the signal-plus-noise central moments.
MN - location of the noise central moments.

(4) Output - the function prints out the following data:
Y - standardized threshold axis in terms of the signal-plus-noise parameters.
X - the actual threshold level.
PDET - the detection probability at the threshold X.
PFA - the false-alarm probability at the threshold X.

[Fig. 11. Edgeworth subroutine (EDGE). The flowchart standardizes the input moments (A(I) = third moment over sigma cubed, B(I) = fourth moment over sigma to the fourth), evaluates the Hermite polynomials

    H(2) = X^2 - 1,  H(3) = X^3 - 3X,  H(4) = X^4 - 6X^2 + 3,
    H(5) = X^5 - 10X^3 + 15X,  H(6) = X^6 - 15X^4 + 45X^2 - 15,

and the normal ordinate PHI = 0.3989423 EXP(-X^2/2), and forms the Edgeworth sums

    F(I)  = PHI * (1 + A(I)*H(3)/6 + (B(I)-3)*H(4)/24 + 10*A(I)^2*H(6)/720),
    ED(I) = FREQ.(X) - PHI * (A(I)*H(2)/6 + (B(I)-3)*H(3)/24 + 10*A(I)^2*H(5)/720),

printing Y, X, PDET = 1 - ED(1), PFA = 1 - ED(2), F(1), and F(2) as Y is stepped from -3 to +3.]

F(1) - the signal-plus-noise density function.
F(2) - the noise density function.
The program also prints out MS, MN, and the standard deviations of the noise, SN, and the signal-plus-noise, SSN.

(5) Method - the program computes a six-term Edgeworth series approximation to the noise (N) and signal-plus-noise (SN) probability density functions. The program then evaluates the probability of a random variable, conditional on the N or the SN hypothesis, exceeding a threshold X. The threshold, X, is varied through six standard deviations (SSN) about the mean of the signal-plus-noise distribution. The output table contains a listing of points of the ROC with a tabulated threshold parameter, X.

5.3 Correlator Program (Figure 12)

This program evaluates the correlator receiver under the assumption of independent noise and signal-plus-noise sampling.

(2) Required Subroutines
(a) ALTOMU
(b) EDGE

(3) The program requires the following input data:
(a) Two data cards (columns 1-72) which give the user's title of the program (e.g., "Receiver-Evaluation Program for a Clipper Crosscorrelator").
(b) Data cards for ASN, the signal-plus-noise moments, AN, the noise moments, and M, the number of observation samples.

(4) Output - the program print-out includes the probability of detection, the probability of false alarm, and the corresponding threshold. For details of the print-out, see the Edgeworth subroutine.

(5) Method - the program takes the input moments, ASN and AN, and, using ALTOMU, computes the output central moments of the correlator. The output central moments are fed into the Edgeworth subroutine, where the ROC curve is evaluated. This program is written for stationary noise and signal-plus-noise processes.

[Fig. 12. Correlator receiver program. The program reads the title and the data ASN, AN, M; calls ALTOMU.(ASN, MSN, M) and ALTOMU.(AN, MN, M); and then calls EDGE.(MSN, MN).]

The program is not easily generalized to nonstationary processes.

5.4 Gaussian Quadrature Subroutine (GAUINT) (Figure 13)

This program is used for numerical integration on the computer.

(2) Required Subroutines - none.

(3) AREA = GAUINT. (M, N, A, B, F.)
AREA - the storage location of the value of the integral.
M - the order of integration (number of points per subinterval).
N - the number of subintervals.
A - the lower limit of the integral.
B - the upper limit of the integral.
F - the integrand.

(4) Output - the function returns to AREA the numerical approximation of the integral of F.(X), A ≤ X ≤ B.

(5) Method - Gaussian quadrature is a technique which chooses the minimum number of points needed to integrate, with no error, a polynomial of degree M. For a detailed description of the method, see Kopal (Ref. 22). The program affords the user a choice of the degree of the approximating polynomial; that is, the values of M range from 2 through 16. The larger the value of M used, the more accurate the result, but the longer the computation time.

5.5 CAL-CHAN-FUN Subroutine (Figure 14)

This subroutine calculates the appropriate Pearson coordinate from a given real-axis value. The program also evaluates the Pearson density function along the Pearson axis, and the program changes a given Pearson coordinate into the real-axis value.

(2) Required Subroutines - none.

(3) The subroutine has three entries:
(a) X = CAL. (A)
A - location of real-axis coordinate.
X - storage location of calculated Pearson coordinate.

[Fig. 13. Gaussian quadrature subroutine (GAUINT). The flowchart divides (A, B) into N subintervals of width H = (B - A)/N with half-width HP = H/2 and accumulates, over each subinterval, the weighted sum SP = SP + R(J) * F.(HP * T(J) + S), where T(J) and R(J) are the M-point Gauss abscissas and weights.]

(b) X = CHAN. (A)
A - location of Pearson-axis coordinate.
X - storage location of real-axis value.
(c) X = FUN. (A)
A - location of Pearson-axis coordinate.
X - storage location of the value of the Pearson density function, evaluated at A.

(4) Output - the subroutine returns to the calling program the computed datum of CAL-CHAN-FUN.

(5) This subroutine is needed when one uses the Pearson frequency curves. In the Pearson System, the origin of the coordinate is related to the constants of the frequency curve. The CAL routine permits the translation of a physical threshold into the equivalent Pearson threshold. The CHAN routine is the inverse translation of the CAL routine. The FUN routine is used along with the output of the CAL routine. Certain error routines have been inserted into the FUN program. These error routines return control to the main calling program whenever logarithmic or exponentiation commands lead to a computer overflow or underflow. The main program then proceeds to process the next set of data.

5.6 Type A Subroutine (TYPEA) (Figure 15)

This program, using the random variable central moments, computes the Pearson constants for one of the Pearson Types 1, 2, 4, or 7. The program also computes the information used in the coordinate shifting of the CAL and CHAN routines.

(2) Required Subroutines
(a) LNGAM

(3) EXECUTE TYPEA. (RUN, LOC)
RUN - a two-element statement label array, RUN, RUN(1). RUN is the location to which the program will transfer after the Pearson curve constants are computed. RUN(1) designates the location to transfer to in the event of computer overflow when computing the constants.
LOC - a statement label to which the program will transfer in the event of an exponentiation or logarithm error return.

Note that, in the Pearson subroutine, both RUN(1) and LOC give rise to the same action.

(4) Output - the function returns to program common the computed values of the constants for a specified Pearson type. The values of the constants are printed out. The program also stores in program common the values required for the translation of coordinates.

(5) Method - the program runs through a set of logic statements which select the proper set of constants needed for the given Pearson type of curve. The constants are evaluated, if possible. Otherwise, the program prints out an error statement and returns control to either RUN(1) or LOC.

5.7 Type B Subroutine (TYPEB) (Figure 16)

This program is identical with TYPEA except that the constants are evaluated for either a Pearson Type 3 or 6. See TYPEA.

(2) Required Subroutines - none.

(3) EXECUTE TYPEB. (RUN, LOC) See TYPEA.

(4) Output - see TYPEA.

(5) Method - see TYPEA.

5.8 SNINT Subroutine (SNINT) (Figure 17)

This program computes the distribution function associated with a given Pearson density function.

(2) Required Subroutines
(a) CHAN
(b) FUN

(3) MAX = SNINT. (D, B, X, S)
MAX - the number of elements used in the X and S arrays. Note, MAX is an integer.
D - the lower bound of the integration.
B - the upper bound of the integration.
X - array for storing the real-axis values of the coordinates used in the integration.
S - array for storing the values of the distribution function.

Fig. 16 TYPEB subroutine.

Fig. 17 Signal plus noise integration subroutine (SNINT).

(4) Output - the function returns, to program common, the distribution function in S, the associated coordinates in X, and the integer MAX which gives the length of the X or S array. The program also prints out the integer MAX and the integer ROUND which counts the number of integration steps. (5) Method - the SNINT routine begins by computing the mode of the Pearson density function to be integrated. The integration proceeds from the mode to the lower and upper limits of the integral. The quadrature method is Simpson's rule, with the length of the integration step controlled to maintain an incremental area for the step between 0.001 and 0.01. This control of the integration step permits a fast integration of regions where the distribution function is slowly changing. The interval length is changed by powers of two until the incremental-area criterion is satisfied. The total number of such changes is printed out as the integer ROUND. The integration stops when the distribution (total accumulated area) equals 0.9. If the upper limit, B, is reached before the area is 0.9, an error return is made. SNINT is used to integrate the density function of the signal-plus-noise hypothesis. The X array is found from the Pearson coordinates via the CHAN routine. This X table is then used in the NINT program. 5.9 NINT Subroutine (NINT) (Figure 18) The NINT program integrates the Pearson noise density function to give the distribution function of the noise at the coordinate values, X, determined by SNINT. (2) Required Subroutines (a) FUN (b) CAL (c) GAUINT (3) EXECUTE NINT. (A, B, X, N, MAX) A - lower limit of integration. B - upper limit of integration. X - array of coordinate values used in SNINT. 55

Fig. 18 Noise integration subroutine (NINT).

N - array where the complement of the noise distribution (1 - P(x|N)) will be stored. MAX - number of elements in X. (4) Output - returns to the calling program the computed values of the array N. (5) Method - the program evaluates the noise distribution at the values of X. The elements of X are converted, using CAL, to Pearson coordinates, where the integration is performed using a three-point Gaussian quadrature (integration on the tails of the density function). The values of the distribution function are stored in the array N. After execution of the SNINT and NINT routines, the following table is stored in the computer: the signal-plus-noise distribution P(x|SN), running from 0 to 0.9; the corresponding threshold values x; and the complement of the noise distribution, 1 - P(x|N). Since the probability of detection and the probability of false alarm are given by the complements (1 - P(x|·)) of the conditional distributions, the ROC curve for the receiver, along with threshold values, can be read out of computer storage. Note that the ROC curve is limited by the program to regions where the detection probability is greater than 10%. The 10% limitation is easily changed if a more general program is required. The table uses the detection probability as its base for determining X. If one is interested in false-alarm rates, the roles of SNINT and NINT are reversed. This reversal permits the detection probability, for a fixed false-alarm range, to be evaluated. 5.10 Logarithm of Gamma Function Subroutine (LNGAM) (Figure 19) This program computes the logarithm of the Gamma function of a given argument. (2) Required Subroutines - none. (3) X = LNGAM. (Z, LOC) 57

Fig. 19 Log gamma subroutine (LNGAM).

X - the storage location of the computed answer. Z - the argument of the ln-Gamma function. LOC - error-return location if Z < -34. (4) Output - the function returns the value of ln Γ(Z) to the location X or, when Z < -34, the function prints out Z. (5) Method - the program uses a MESS subroutine, GAMMA, for |Z| < 34 and then computes the logarithm of the MESS output. For Z > 34, an asymptotic expansion of the ln-Gamma function is used (Ref. 23). 5.11 Pearson Subroutines (PEARSN, PEARS2) (Figure 20) The PEARSN subroutine takes two sets of central moments and evaluates a Pearson fit to the corresponding probability density functions. These probability density functions are integrated to give the ROC curve. The PEARS2 program evaluates a Pearson fit to a central-moment set and stores the Pearson fit parameters in the calling program. (2) Subroutines Needed (a) ALTOMU (b) CAL-CHAN-FUN (c) GAUINT (d) LNGAM (e) NINT (f) SNINT (g) TYPEA (h) TYPEB (3) EXECUTE PEARSN. (MS, MN, BP) EXECUTE PEARS2. (MS, MN, BP) MS - a four-element array of the signal-plus-noise central moments (Note: MS(1) = μ1(SN)). MN - a four-element array of the noise central moments. BP - a statement label used as the return in the event of an error. 59

(4-a) PEARSN Output - the function returns the following list to storage in program common. TY - the noise-alone Pearson type (integer). S(0) - the signal-plus-noise Pearson type. MAX - the number of elements in the ROC table (see NINT). S - an array containing the signal-plus-noise distribution function. N - the complement of the noise distribution, that is, 1 - P(x|N). X - an array of values, x, for which S and N are tabulated. (4-b) PEARS2 Output - the function returns the Pearson type which fits the signal-plus-noise moment set, MS, and stores in common the parameters associated with the fit. (5) Method - the Pearson program uses the moment set, MS, to evaluate Pearson's criterion, K, which is then used to select the proper Pearson type. The constants needed for this type are evaluated by the TYPEA or TYPEB program. If the called routine is PEARS2, control is returned to the main program for additional instructions. If the called routine is PEARSN, the program, using SNINT, evaluates the distribution function of the signal-plus-noise and stores the results in common. The program next takes the noise moment set, MN, finds the Pearson fit, and, using NINT, evaluates the complement of the noise distribution function. The function, after storing the distribution results in common, returns control to the main program. The main program uses the PEARS2 routine to determine the approximation to the log-likelihood-ratio nonlinearity. This approximation is used for the general-receiver evaluation. The PEARSN routine is used in lieu of the Edgeworth program for the evaluation of the ROC curves. 61
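The step-size control that SNINT (Section 5.8) applies to Simpson's rule can be illustrated in modern terms. The sketch below is an illustration, not a transcription of the report's program: it integrates a density outward from its mode, one Simpson step at a time, changing the step by powers of two so that each incremental area lies between 0.001 and 0.01, and stopping when the accumulated area reaches a target (SNINT accumulates 0.9 over both tails). The test density and constants are chosen only for the example.

```python
import math

def tail_area(f, mode, target, sign=+1, inc=0.01):
    """Accumulate area under f from the mode outward (direction `sign`),
    adjusting the Simpson step by powers of two so that each incremental
    area stays between 0.001 and 0.01, in the manner of SNINT."""
    x, area, changes = mode, 0.0, 0
    while area < target:
        h = sign * inc
        # three-point Simpson rule over one step [x, x + h]
        da = abs(h) * (f(x) + 4.0 * f(x + h / 2.0) + f(x + h)) / 6.0
        if da > 0.01:                       # step too coarse: halve it
            inc *= 0.5
            changes += 1
        elif da < 0.001 and changes < 60:   # step too fine: double it
            inc *= 2.0
            changes += 1
        else:                               # acceptable step: accept the area
            area += da
            x += h
    return x, area, changes

# Example: standard normal density, upper tail from the mode (0)
# until the one-sided area reaches 0.45.
phi = lambda t: math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)
x_stop, area, changes = tail_area(phi, 0.0, 0.45)
```

Because each accepted step adds at most 0.01 of area, the stopping point overshoots the target by less than one step's area, which is the behavior the report describes for the 0.9 cutoff.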

SUMMARY This report has introduced the evaluation problem for signal detection receivers and some computer techniques useful in the solution of the evaluation problem. A systematic method for approximating the receiver nonlinearity, such as the log-likelihood ratio, is outlined in Section 3. This method, based on the Pearson System of Frequency Curves, permits a broad class of detection problems to be considered, in contrast to the truncated Taylor series approximation (Ref. 2) which requires small input signal-to-noise ratios. The techniques of Section 3 are also applicable to raw data obtained from channel measurements and, thus, the method can be used for the receiver design and evaluation in practical detection problems. The techniques of receiver evaluation given in this report have been applied to several theoretical models for amplitude-fading channels (Ref. 21). The results of this study, to be published in a future report, suggest the use of bandwidth spreading to increase the signal detectability for the same transmitted signal energy. Noise and signal measurements are being collected from a practical receiver site. The receiver-evaluation methods will be applied to these data to determine the theoretical detectability of the receiver site and a comparison will be made with the actual performance. This study should provide further insight into the acoustic signal design and detection problems. 62

APPENDIX A

PEARSON'S EQUATION COEFFICIENTS AND RANDOM VARIABLE MOMENTS

This appendix derives the relations between the coefficients of Pearson's differential equation and the moments of the random variable. Pearson's differential equation is

\frac{df(x)}{dx} = \frac{(x + a)\, f(x)}{b_0 + b_1 x + b_2 x^2}, \qquad x \in [L, u]. \tag{A.1}

If the real number L or u is unbounded, the closed interval in (A.1) is replaced by a semiclosed or open interval. An additional constraint is the set of boundary conditions

f(L) = f(u) = 0. \tag{A.2}

Multiplying both sides of (A.1) by x^k\,[b_0 + b_1 x + b_2 x^2] and integrating the result, one obtains

\int_L^u \left[ b_0 x^k + b_1 x^{k+1} + b_2 x^{k+2} \right] \frac{df(x)}{dx}\, dx = \int_L^u x^k (x + a)\, f(x)\, dx. \tag{A.3}

Integrating the left side of (A.3) by parts and using (A.2) yields the following result, in terms of the random-variable moments \alpha_k:

a\,\alpha_k + k\,\alpha_{k-1}\, b_0 + (k+1)\,\alpha_k\, b_1 + (k+2)\,\alpha_{k+1}\, b_2 = -\alpha_{k+1}. \tag{A.4}

Setting k equal to 0, 1, 2, 3, one obtains the system of equations

\begin{bmatrix} 1 & 0 & 1 & 2\alpha_1 \\ \alpha_1 & 1 & 2\alpha_1 & 3\alpha_2 \\ \alpha_2 & 2\alpha_1 & 3\alpha_2 & 4\alpha_3 \\ \alpha_3 & 3\alpha_2 & 4\alpha_3 & 5\alpha_4 \end{bmatrix} \begin{bmatrix} a \\ b_0 \\ b_1 \\ b_2 \end{bmatrix} = -\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \\ \alpha_4 \end{bmatrix} \tag{A.5}

If \alpha_1 is set equal to zero, which is equivalent to taking moments about the mean, the system 63

of (A.5) becomes

\begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 3\mu_2 \\ \mu_2 & 0 & 3\mu_2 & 4\mu_3 \\ \mu_3 & 3\mu_2 & 4\mu_3 & 5\mu_4 \end{bmatrix} \begin{bmatrix} a \\ b_0 \\ b_1 \\ b_2 \end{bmatrix} = -\begin{bmatrix} 0 \\ \mu_2 \\ \mu_3 \\ \mu_4 \end{bmatrix} \tag{A.6}

Solving (A.6), one finds

a = \frac{\mu_3 (\mu_4 + 3\mu_2^2)}{10\mu_2\mu_4 - 18\mu_2^3 - 12\mu_3^2},

b_0 = -\frac{\mu_2 (4\mu_2\mu_4 - 3\mu_3^2)}{10\mu_2\mu_4 - 18\mu_2^3 - 12\mu_3^2},

b_1 = -a,

b_2 = -\frac{2\mu_2\mu_4 - 3\mu_3^2 - 6\mu_2^3}{10\mu_2\mu_4 - 18\mu_2^3 - 12\mu_3^2}. \tag{A.7}

64
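The moment relations (A.7) can be checked numerically. The sketch below is an illustration, not part of the report's library (the ALTOMU routine of Section 5.1 plays the analogous role there); the sign conventions follow the form df/dx = (x + a) f / (b0 + b1 x + b2 x^2) of (A.1). The test case is a gamma density with shape k, whose textbook central moments are μ2 = k, μ3 = 2k, μ4 = 3k² + 6k; with the origin at the mean, its Pearson equation reduces to f'/f = (x + 1)/(-x - k).

```python
def pearson_coefficients(mu2, mu3, mu4):
    """Coefficients of df/dx = (x + a) f / (b0 + b1 x + b2 x^2)
    from the central moments, per (A.7)."""
    d = 10.0 * mu2 * mu4 - 18.0 * mu2 ** 3 - 12.0 * mu3 ** 2
    a = mu3 * (mu4 + 3.0 * mu2 ** 2) / d
    b0 = -mu2 * (4.0 * mu2 * mu4 - 3.0 * mu3 ** 2) / d
    b1 = -a
    b2 = -(2.0 * mu2 * mu4 - 3.0 * mu3 ** 2 - 6.0 * mu2 ** 3) / d
    return a, b0, b1, b2

# Gamma density with shape k = 4 (origin at the mean):
# mu2 = 4, mu3 = 8, mu4 = 3*16 + 6*4 = 72.
a, b0, b1, b2 = pearson_coefficients(4.0, 8.0, 72.0)
# expected: a = 1, b0 = -4, b1 = -1, b2 = 0
```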

APPENDIX B

PEARSON'S CRITERION FOR FREQUENCY CURVES

This appendix introduces the Pearson criterion, K, and relates the criterion to the three main Pearson types. Pearson's differential equation is

\frac{df(x)}{dx} = \frac{(x + a)\, f(x)}{b_0 + b_1 x + b_2 x^2}. \tag{B.1}

Dividing (B.1) by f(x) and integrating, one obtains

\ln f(x) = \int \frac{x + a}{b_0 + b_1 x + b_2 x^2}\, dx + \text{constant}. \tag{B.2}

Setting the denominator of the integrand equal to zero and solving for the integrand pole locations, one obtains

x_{1,2} = \frac{-b_1 \pm \sqrt{b_1^2 - 4 b_0 b_2}}{2 b_2}. \tag{B.3}

Pearson defined the following parameters:

\beta_1 = \frac{\mu_3^2}{\mu_2^3}, \tag{B.4a}

\beta_2 = \frac{\mu_4}{\mu_2^2}, \tag{B.4b}

K = \frac{b_1^2}{4 b_0 b_2}. \tag{B.4c}

The parameter \beta_1 is a measure of the skewness of the probability density function (\beta_1 = 0 implies symmetry). The parameter \beta_2 is a measure of the density function's kurtosis, or flatness, relative to a Gaussian density function (\beta_2 = 3). The parameter K is the Pearson 65

criterion. Using K, (B.3) is rewritten as

x_{1,2} = \frac{-b_1}{2 b_2} \left( 1 \pm \sqrt{1 - \frac{1}{K}} \right). \tag{B.5}

Note that, based on the results of Appendix A, K can be written in terms of the parameters \beta_1 and \beta_2, i.e.,

K = \frac{\beta_1 (\beta_2 + 3)^2}{4 (4\beta_2 - 3\beta_1)(2\beta_2 - 3\beta_1 - 6)}. \tag{B.6}

An examination of (B.5) yields the following information. If K < 0, the roots x_{1,2} are real and of opposite sign, which gives Type I. If 0 < K < 1, the roots are complex conjugates, which gives Type IV, while for K > 1 the roots are real and of the same sign, which gives Type VI. 66
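The criterion (B.6) and the resulting selection among the three main types can be sketched as follows. This is an illustration of the classification rule, not the PEARSN program itself, and the sample β values below are invented for the check.

```python
def pearson_criterion(beta1, beta2):
    """Pearson's criterion K from beta1 = mu3^2/mu2^3 and
    beta2 = mu4/mu2^2, per (B.6)."""
    return (beta1 * (beta2 + 3.0) ** 2
            / (4.0 * (4.0 * beta2 - 3.0 * beta1)
                   * (2.0 * beta2 - 3.0 * beta1 - 6.0)))

def main_type(k):
    """Main-type selection from the sign and size of K."""
    if k < 0.0:
        return "I"       # real roots of opposite sign
    if 0.0 < k < 1.0:
        return "IV"      # complex-conjugate roots
    if k > 1.0:
        return "VI"      # real roots of the same sign
    return "transition"  # K = 0, 1, or infinite: a subtype (Appendix D)

# illustrative moment-ratio values
k1 = pearson_criterion(0.5, 2.5)   # negative: Type I
k2 = pearson_criterion(0.5, 4.0)   # between 0 and 1: Type IV
k3 = pearson_criterion(4.0, 10.0)  # greater than 1: Type VI
```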

APPENDIX C

PEARSON'S MAIN TYPES OF FREQUENCY CURVES

This appendix derives the main Pearson types from the differential equation and the criterion K.

Type I, K < 0, -a_1 \le x \le a_2

\ln f(x) = \int \frac{x + a}{b_2 (x - x_1)(x - x_2)}\, dx + \text{constant}, \tag{C.1}

where x_1 and x_2 are given by (B.5). Using a partial-fraction expansion on the integrand of (C.1) and integrating yields

f(x) = c\,(x - x_1)^{A_1} (x - x_2)^{A_2}. \tag{C.2}

When the Pearson origin is translated to the mode of the density function,

f(x) = f_0 \left(1 + \frac{x}{a_1}\right)^{m_1} \left(1 - \frac{x}{a_2}\right)^{m_2}. \tag{C.3}

If a_1 equals a_2, f(x) is symmetric and limited in range. This corresponds to the Pearson subtype II. The criterion for this case is given by K = 0, \beta_1 = 0, \beta_2 < 3.

Type IV, 0 < K < 1, -\infty < x < \infty

Since the roots, x_1 and x_2, are complex conjugates, (B.2) can be written in the form

\ln f(x) = \int \frac{y + c}{b_2 (y^2 + A^2)}\, dy + \text{constant}, \tag{C.4}

where

y = x + \frac{b_1}{2 b_2}, \qquad c = a - \frac{b_1}{2 b_2}, 67

A^2 = \frac{b_0}{b_2} - \frac{b_1^2}{4 b_2^2}. \tag{C.5}

Performing the integration indicated in (C.4) yields

f(x) = c\,(y^2 + A^2)^{1/(2 b_2)}\, e^{\frac{c}{b_2 A} \tan^{-1}(y/A)}, \tag{C.6}

which, after an origin translation, gives

f(x) = f_0 \left(1 + \frac{x^2}{A^2}\right)^{-m} e^{-v \tan^{-1}(x/A)}. \tag{C.7}

Type VI, K > 1, 0 < a < x or x < a < 0

The derivation is the same as for Type I except that both roots have the same sign. Therefore,

f(x) = c\,(x - x_1)^{A_1} (x - x_2)^{A_2}. \tag{C.8}

A shift in the origin yields the standard Pearson form for Type VI,

f(x) = f_0 (x - a)^{m_1} x^{-m_2}. \tag{C.9}

Note that the symbol "a" which occurs in (C.7) and (C.9) must be evaluated from the terms v, A, m, or v, m_1, m_2. This symbol "a" should not be confused with the "a" which appears in the Pearson differential equation. 68
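The passage from (C.1) to (C.2) rests on the partial-fraction identity (x + a)/(b2 (x − x1)(x − x2)) = A1/(x − x1) + A2/(x − x2), with A1 = (x1 + a)/(b2 (x1 − x2)) and A2 = (x2 + a)/(b2 (x2 − x1)). This can be checked numerically; the constants below are invented for the check (roots of opposite sign, as a Type I case requires).

```python
# Invented Type I case: roots of opposite sign, x interior to (x1, x2).
b2, x1, x2, a = -0.5, -2.0, 3.0, 0.5
A1 = (x1 + a) / (b2 * (x1 - x2))   # exponent attached to (x - x1)
A2 = (x2 + a) / (b2 * (x2 - x1))   # exponent attached to (x - x2)

def dlogf(x):
    """Derivative of log f for the integrated form (C.2),
    f = c |x - x1|^A1 |x - x2|^A2."""
    return A1 / (x - x1) + A2 / (x - x2)

def pearson_rhs(x):
    """Right side of the Pearson equation, the integrand of (C.1)."""
    return (x + a) / (b2 * (x - x1) * (x - x2))

# the two expressions agree at interior points of (x1, x2)
residual = max(abs(dlogf(x) - pearson_rhs(x))
               for x in (-1.0, 0.0, 1.0, 2.0))
```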

APPENDIX D

PEARSON'S SUBTYPES OF FREQUENCY CURVES

This appendix discusses several subtypes of the Pearson system. In Appendix B, an equation for the roots of the quadratic is given in terms of the Pearson criterion, K; namely,

x_{1,2} = \frac{-b_1}{2 b_2} \left(1 \pm \sqrt{1 - \frac{1}{K}}\right). \tag{D.1}

In addition to the three main types, a number of interesting cases arise as the transition from one main type to another occurs.

(i) K = \pm\infty (b_2 = 0)

This criterion leads to a Type III fit, given by the density function

f(x) = f_0\, e^{-\gamma x} \left(1 + \frac{x}{a}\right)^{\gamma a}. \tag{D.2}

(ii) K = 0 (b_1 = b_2 = 0)

These conditions are representative of the Gaussian density function (Type VII). In Appendix C, subtype II was introduced with the criteria K = 0, \beta_1 = 0, \beta_2 < 3. Type VII has the criterion K = 0, \beta_1 = 0, \beta_2 = 3. While Type II is symmetric and finite in range, Type VII is symmetric but infinite in range. The density function is

f(x) = f_0\, e^{x^2 / (2 b_0)}. \tag{D.3}

(iii) K = 1 (real and equal roots)

This criterion gives the Type V curve, whose density function is

f(x) = f_0\, x^{-p}\, e^{-\gamma / x}. \tag{D.4}

The following table lists the twelve Pearson types along with their range and skewness properties. 69

Table I. Pearson Curves

Type   Curve                                               Range, Skewness
I      f_0 (1 + x/a_1)^{m_1} (1 - x/a_2)^{m_2}             Finite range, skewed
II     f_0 (1 - x^2/a^2)^{m}                               Finite range, symmetric
III    f_0 (1 + x/a)^{\gamma a} e^{-\gamma x}              Semilimited range, skewed
IV     f_0 (1 + x^2/A^2)^{-m} e^{-v \tan^{-1}(x/A)}        Unlimited range, skewed
V      f_0 x^{-p} e^{-\gamma/x}                            Semilimited range, skewed
VI     f_0 (x - a)^{m_1} x^{-m_2}                          Semilimited range, skewed
VII    f_0 e^{-x^2/(2\sigma^2)}                            Unlimited range, symmetric
VIII   f_0 (1 + x/a)^{-m}                                  Limited range, skewed
IX     f_0 (1 + x/a)^{m}                                   Limited range, skewed
X      f_0 e^{-x/\sigma}                                   Semilimited range, skewed
XI     f_0 x^{-m}                                          Semilimited range, skewed
XII    f_0 [(1 + x/a_1)/(1 - x/a_2)]^{m}                   Limited range, skewed

70
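The subtype criteria of this appendix connect directly to the moment formulas of Appendix A: b2 = 0 signals Type III (K = ±∞), while b1 = b2 = 0 signals the Gaussian Type VII (K = 0). A small illustrative check, restating the coefficient formulas of (A.7) and using textbook central moments (Gaussian: μ3 = 0, μ4 = 3μ2²; gamma with shape k: μ2 = k, μ3 = 2k, μ4 = 3k² + 6k):

```python
def pearson_coefficients(mu2, mu3, mu4):
    # per (A.7), for df/dx = (x + a) f / (b0 + b1 x + b2 x^2)
    d = 10.0 * mu2 * mu4 - 18.0 * mu2 ** 3 - 12.0 * mu3 ** 2
    a = mu3 * (mu4 + 3.0 * mu2 ** 2) / d
    b0 = -mu2 * (4.0 * mu2 * mu4 - 3.0 * mu3 ** 2) / d
    b2 = -(2.0 * mu2 * mu4 - 3.0 * mu3 ** 2 - 6.0 * mu2 ** 3) / d
    return a, b0, -a, b2

# Gaussian with variance 2: b1 = b2 = 0, so Type VII (K = 0),
# and the density is f0 exp(x^2 / (2 b0)) with b0 = -2.
a_g, b0_g, b1_g, b2_g = pearson_coefficients(2.0, 0.0, 12.0)

# Gamma with shape 4: b2 = 0 only, so Type III (K infinite).
a_t, b0_t, b1_t, b2_t = pearson_coefficients(4.0, 8.0, 72.0)
```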

APPENDIX E

SOME PROPERTIES OF THE LIKELIHOOD-RATIO TRANSFORMATION

This appendix will derive certain properties which the likelihood-ratio transformation induces on the conditional probability density functions. The likelihood ratio of an observation (y_1, y_2, \ldots, y_n) is

\ell(y_1, y_2, \ldots, y_n) = \frac{f_Y(y_1, \ldots, y_n \mid SN)}{f_Y(y_1, \ldots, y_n \mid N)}. \tag{E.1}

Associated with every observation (y_1, \ldots, y_n) is the image of the transformation given in (E.1). If this image is denoted by the symbol \ell, then \ell is a random variable with conditional probability density functions f_\ell(\ell \mid SN) and f_\ell(\ell \mid N). Using the characteristic-function approach (see Section 2.3), one obtains

f_\ell(\ell \mid SN) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i v \ell} \int_{-\infty}^{\infty} \!\cdots\! \int_{-\infty}^{\infty} e^{i v\, \ell(y_1, \ldots, y_n)} f_Y(y_1, \ldots, y_n \mid SN)\, dy_1 \cdots dy_n\, dv. \tag{E.2}

Substituting (E.1) for the density function in (E.2), one obtains

f_\ell(\ell \mid SN) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i v \ell} \int_{-\infty}^{\infty} \!\cdots\! \int_{-\infty}^{\infty} e^{i v\, \ell(y_1, \ldots, y_n)}\, \ell(y_1, \ldots, y_n)\, f_Y(y_1, \ldots, y_n \mid N)\, dy_1 \cdots dy_n\, dv. \tag{E.3}

Thus,

f_\ell(\ell \mid SN) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i v \ell}\, \frac{1}{i} \frac{\partial}{\partial v} \int_{-\infty}^{\infty} \!\cdots\! \int_{-\infty}^{\infty} e^{i v\, \ell(y_1, \ldots, y_n)} f_Y(y_1, \ldots, y_n \mid N)\, dy_1 \cdots dy_n\, dv. \tag{E.4}

By definition, the inner integral of Eq. (E.4) is the characteristic function of the likelihood transformation under the noise hypothesis. It follows from Eq. (E.4) that

M_\ell(i v \mid SN) = \frac{1}{i} \frac{d}{dv}\, M_\ell(i v \mid N). \tag{E.5}

A well-known property of Fourier transformation theory (Ref. 6) is the transformation of 71

differentiation into multiplication. Therefore,

f_\ell(\ell \mid SN) = \ell\, f_\ell(\ell \mid N). \tag{E.6}

If \ell is considered as a random variable, then, from (E.6),

\ell(\ell) = \frac{f_\ell(\ell \mid SN)}{f_\ell(\ell \mid N)} = \ell; \tag{E.7}

that is, the likelihood ratio of the likelihood ratio equals the likelihood ratio.

Let g be any mapping on the random variable \ell with the property that the inverse of g exists and is continuously differentiable, and let Z denote the image of g. Then

Z = g(\ell), \qquad \ell = g^{-1}(Z). \tag{E.8}

Using the theory of transformations to find the probability density function of the random variable Z yields

f_Z(z \mid SN) = f_\ell(g^{-1}(z) \mid SN) \left| \frac{\partial g^{-1}(z)}{\partial z} \right|, \qquad f_Z(z \mid N) = f_\ell(g^{-1}(z) \mid N) \left| \frac{\partial g^{-1}(z)}{\partial z} \right|. \tag{E.9}

Thus, the likelihood ratio of the random variable Z is

\ell_Z(z) = \frac{f_Z(z \mid SN)}{f_Z(z \mid N)} = \frac{f_\ell(g^{-1}(z) \mid SN)}{f_\ell(g^{-1}(z) \mid N)} = \ell(g^{-1}(z)). \tag{E.10}

Since, from Eq. (E.8),

g^{-1}(z) = \ell, \tag{E.11}

the likelihood ratio of any strictly monotone function of the likelihood ratio equals the likelihood ratio. The existence of the inverse function is guaranteed by the strict monotonicity of the function. Although the inverse function may not be differentiable everywhere, the derivative of the inverse occurs in both the numerator and denominator of (E.10), and these nondifferentiable points are removable in the limit. 72
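The identity (E.6)-(E.7) can be illustrated on the simplest detection problem: Y is normal with unit variance, mean 0 under N and mean d under SN, so that ℓ(y) = exp(dy − d²/2). Changing variables to L = ℓ(Y) by the rule of (E.9) and forming the ratio of the two densities of L recovers ℓ(ℓ) = ℓ. The sketch below is an illustrative check, not part of the report's library; the value of d is arbitrary.

```python
import math

d = 1.0  # illustrative signal amplitude

def f_y(y, mean):
    """Unit-variance normal density."""
    return math.exp(-(y - mean) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)

def f_l(l, mean):
    """Density of L = exp(d*Y - d*d/2) when Y ~ normal(mean, 1),
    by the change-of-variable rule (E.9): y = ln(l)/d + d/2 and
    |dL/dy| = d*L."""
    y = math.log(l) / d + d / 2.0
    return f_y(y, mean) / (d * l)

# likelihood ratio of the likelihood ratio, checked at several points;
# per (E.7), each ratio equals l itself
points = (0.5, 1.0, 2.0, 5.0)
ratios = [f_l(l, d) / f_l(l, 0.0) for l in points]
```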

REFERENCES

1. W. W. Peterson and T. G. Birdsall, "The Theory of Signal Detectability", University of Michigan Electronic Defense Group Technical Report No. 13, Ann Arbor, Mich., 1953.
2. D. Middleton, Introduction to Statistical Communication Theory, McGraw-Hill Book Co., Inc., New York, 1960.
3. T. G. Birdsall, "Detection Theory", University of Michigan Summer Conference on Random Processes, Ann Arbor, Mich., 1962.
4. T. G. Birdsall and L. W. Nolte, "A Computer-Augmented Technique for the Design of Likelihood Ratio Receivers", University of Michigan Cooley Electronics Laboratory Technical Report No. 130, Ann Arbor, Mich., 1962.
5. T. G. Birdsall and M. P. Ristenbatt, "The ROC: Receiver Operating Characteristic", University of Michigan Electronic Defense Group Memo No. 24, Ann Arbor, Mich., 1957.
6. A. Papoulis, The Fourier Integral and Its Applications, McGraw-Hill Book Co., Inc., New York, 1960.
7. H. Cramer, Mathematical Methods of Statistics, Princeton University Press, Princeton, New Jersey, 1957.
8. W. Feller, An Introduction to Probability Theory and Its Applications, John Wiley and Sons, Inc., New York, 1960.
9. W. B. Davenport and W. L. Root, An Introduction to the Theory of Random Signals and Noise, McGraw-Hill Book Co., Inc., New York, 1958.
10. T. C. Fry, Probability and Its Engineering Uses, D. Van Nostrand Co., Inc., New York, 1928.
11. A. Fisher, The Mathematical Theory of Probabilities, Macmillan Co., New York, 1936.
12. W. Kaplan, Advanced Calculus, Addison-Wesley Publishing Co., Inc., Reading, Mass., 1956.
13. R. V. Churchill, Fourier Series and Boundary Value Problems, McGraw-Hill Book Co., Inc., New York, 1941.
14. D. A. S. Fraser, Statistics: An Introduction, John Wiley and Sons, Inc., New York, 1958.
15. F. Y. Edgeworth, "The Law of Error", Proceedings of the Cambridge Philosophical Society, Vol. 20, 1905.
16. K. Pearson, "Contributions to the Mathematical Theory of Evolution, II. Skew Variation in Homogeneous Material", Philosophical Transactions of the Royal Society of London, Series A, Vol. 186, 1895. 73

17. K. Pearson, "Contributions to the Mathematical Theory of Evolution, X. Supplement to a Memoir on Skew Variation", Philosophical Transactions of the Royal Society of London, Series A, Vol. 197, 1901.
18. K. Pearson, "Contributions to the Mathematical Theory of Evolution, XIX. Second Supplement to a Memoir on Skew Variation", Philosophical Transactions of the Royal Society of London, Series A, Vol. 216, 1916.
19. W. P. Elderton, Frequency Curves and Correlation, Charles and Edwin Layton, London, 1917.
20. L. Halsted, T. G. Birdsall and L. W. Nolte, "On the Detection of a Randomly-Distorted Signal in Gaussian Noise", University of Michigan Cooley Electronics Laboratory Technical Report No. 129, Ann Arbor, Mich., 1962.
21. J. N. Gittelman, "Performance of Optimum Detection Receivers in Amplitude Fading Channels", presented at the Acoustical Society of America Meeting, Oct. 1964.
22. Z. Kopal, Numerical Analysis, John Wiley and Sons, Inc., New York, 1961.
23. M. Abramowitz and I. A. Stegun (Editors), Handbook of Mathematical Functions, National Bureau of Standards, AMS 55, Washington, D. C., 1964. 74

DISTRIBUTION LIST No. of Copies 4 Office of Naval Research (Code 468), Department of the Navy, Washington 25, D. C. 1 Office of Naval Research (Code 436), Department of the Navy, Washington 25, D. C. 1 Office of Naval Research (Code 437), Department of the Navy, Washington 25, D. C. 6 Director, U. S. Naval Research Laboratory, Technical Information Division, Washington 25, D. C. 1 Director, U. S. Naval Research Laboratory, Sound Division, Washington 25, D. C. 1 Commanding Officer, Office of Naval Research Branch Office, 230 N. Michigan Avenue, Chicago 1, Illinois 10 Commanding Officer, Office of Naval Research Branch Office, Box 39, Navy No. 100, FPO, New York, New York 20 Defense Documentation Center, Cameron Station, Building No. 5, 5010 Duke St., Alexandria 4, Virginia 2 Commander, U. S. Naval Ordnance Laboratory, Acoustics Division, White Oak, Silver Spring, Maryland 1 Commanding Officer and Director, U. S. Navy Electronics Laboratory, San Diego 52, California 1 Director, U. S. Navy Underwater Sound Reference Laboratory, Office of Naval Research, P. O. Box 8337, Orlando, Florida 2 Commanding Officer and Director, U. S. Navy Underwater Sound Laboratory, Fort Trumbull, New London, Connecticut (Attn: Mr. W. R. Schumacher, Mr. L. T. Einstein) 1 Commander, U. S. Naval Air Development Center, Johnsville, Pennsylvania 1 Commanding Officer and Director, David Taylor Model Basin, Washington 7, D. C. 1 Office of Chief Signal Officer, Department of the Army, Pentagon, Washington 25, D. C. 2 Superintendent, U. S. Navy Postgraduate School, Monterey, California (Attn: Prof. L. E. Kinsler, Prof. H. Medwin) 75

DISTRIBUTION LIST (Cont.) No. of Copies 1 U. S. Naval Academy, Annapolis, Maryland (Attn: Library) 1 Harvard University, Acoustics Laboratory, Division of Applied Science, Cambridge 38, Massachusetts 1 Brown University, Department of Physics, Providence 12, R. I. 1 Western Reserve University, Department of Chemistry, Cleveland, Ohio (Attn: Dr. E. Yeager) 1 University of California, Department of Physics, Los Angeles, California 2 University of California, Marine Physical Laboratory of the Scripps Institution of Oceanography, San Diego 52, California (Attn: Dr. V. C. Anderson, Dr. Philip Rudnick) 1 Dr. M. J. Jacobson, Department of Mathematics, Rensselaer Polytechnic Institute, Troy, New York 1 Director, Columbia University, Hudson Laboratories, 145 Palisade Street, Dobbs Ferry, N. Y. 1 Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 1 Johns Hopkins University, Department of Electrical Engineering, Baltimore 18, Maryland (Attn: Dr. W. H. Huggins) 1 Director, University of Miami, The Marine Laboratory, #1 Rickenbacker Causeway, Miami 49, Florida (Attn: Dr. J. C. Steinberg) 1 Litton Industries, Advanced Development Laboratories, 221 Crescent St., Waltham, Massachusetts (Attn: Dr. Albert H. Nuttall) 1 Institute for Defense Analysis, Communications Research Division, von Neumann Hall, Princeton, New Jersey 1 Commander, U. S. Naval Ordnance Test Station, Pasadena Annex, 3202 E. Foothill Blvd., Pasadena 8, California 1 Chief, Bureau of Ships (Code 688), Department of the Navy, Washington 25, D. C. 1 Chief, Bureau of Naval Weapons (Code RU-222), Department of the Navy, Washington 25, D. C. 1 Cornell Aeronautical Laboratory, Inc., P. O. Box 235, Buffalo 21, New York (Attn: Dr. J. G. Lawton) 1 Autonetics, A Division of North American Aviation, Inc., 3370 East Anaheim Road, Anaheim, California (Attn: Dr. N. Schalk) 1 Mr. Charles J. Loda, Institute for Defense Analyses, 400 Army-Navy Drive, Arlington, Virginia 22202 76

DISTRIBUTION LIST (Cont.) No. of Copies 1 Catholic University of America, Department of Physics, Washington, D. C. 20017, Attn: Dr. Frank Andrews 1 Dr. B. F. Barton, Director, Cooley Electronics Laboratory, The University of Michigan, Ann Arbor, Michigan 23 Cooley Electronics Laboratory, The University of Michigan, Ann Arbor, Michigan 77

Unclassified - Security Classification

DOCUMENT CONTROL DATA - R&D

1. Originating activity: Cooley Electronics Laboratory, The University of Michigan, Ann Arbor, Michigan
2a. Report security classification: Unclassified
3. Report title: Computer Techniques for the Evaluation of Detector Performance
4. Descriptive notes: Technical Report
5. Author: Gittelman, Joseph N.
6. Report date: May, 1965
7a. Total no. of pages: 78   7b. No. of refs: 23
8a. Contract or grant no.: Nonr-1224(36)   Task 187-200
9a. Originator's report number: Tech. Report 160, Cooley Elec. Lab.
12. Sponsoring military activity: Office of Naval Research, Acoustics Branch, Code 468, Washington 25, D. C.
13. Abstract: In many situations, the theory of signal detectability lacks a connection between the theoretical aspects and the practical implementation of a detector. This condition exists because there are no general techniques for evaluating detector performance. This report contains a collection of available techniques which have been adapted for the computer evaluation of a large class of detectors. In addition to the classical approximations, the Pearson system of frequency curves is integrated into the computer programs. The Pearson system yields a closed-form approximation to the detector performance, based on the moments of the signal and noise distribution functions. Such closed-form approximations enable the user to evaluate the effects of signal-to-noise ratio, detector nonlinearities, and filter bandwidth.
14. Key Words: Detector Performance; Receiver Evaluation; Pearson Curves; Moment Techniques; Theory of Signal Detectability; Likelihood Ratio Receivers; Suboptimum Detectors; Generalized Detection Receivers

ERRATA

Cooley Electronics Laboratory Technical Report No. 160, "Computer Techniques for the Evaluation of Detector Performance," by J. N. Gittelman, May 1965.

p. 72: In the fifth line from the bottom, insert the word "strictly" between "any" and "monotone."