Technical Report No. 129
3674-2-T

ON THE DETECTION OF A RANDOMLY DISTORTED SIGNAL IN GAUSSIAN NOISE

by
L. Halsted
T. G. Birdsall
L. W. Nolte

Approved by: B. F. Barton
for
COOLEY ELECTRONICS LABORATORY
The University of Michigan
Department of Electrical Engineering
Ann Arbor

Contract No. Nonr-1224(36)
Office of Naval Research
Department of the Navy
Washington 25, D. C.

October 1962

TABLE OF CONTENTS

Page

LIST OF ILLUSTRATIONS iv
FOREWORD v
ABSTRACT vi
1. INTRODUCTION 1
2. THE MATHEMATICAL MODEL 3
3. OPTIMUM RECEIVER DESIGNS 6
4. RECEIVER EVALUATION, SMALL SIGNAL ANALYSIS 14
5. RECEIVER EVALUATION USING THE IBM 704 19
5.1 Method for Obtaining Discrete Probability-Density Functions 19
5.1.1 Representation of Additive Noise 20
5.1.2 Representation of Multiplicative Noise 20
5.2 Determination of Lk Functions 21
5.3 Computer Results 24
5.3.1 Distribution of Signal Energy 24
5.3.2 Comparison of Detectability With and Without Multiplicative Noise 25
5.3.3 Crosscorrelator 28
5.3.4 Clipper Crosscorrelator 30
6. CONCLUSIONS 34
APPENDIX: Qk AND THE PARABOLIC CYLINDER FUNCTIONS D 35
REFERENCES 36
DISTRIBUTION LIST 37

LIST OF ILLUSTRATIONS

Figure Title Page

1 Sketch of the problem. 1
2 Sketch of vectors in two dimensions. 5
3 Pearson III density function for k = 1, 3, 8, and normal. 10
4 Receiver block diagram for H type multiplicative noise and Gaussian additive noise. 11
5 Plot of L1, L3, L7, computed at 0.5 intervals. 23
6 Effect on ROC of distribution of signal energy for two-sample case (Pearson III index of k = 1). 25
7 Effect on ROC of distribution of signal energy for two-sample case (Pearson III index of k = 3). 26
8 Effect on ROC of distribution of signal energy for two-sample case (Pearson III index of k = 7). 26
9 Effect of multiplicative noise on detectability for two-sample case with energy distributed equally between samples. 27
10 Effect of Pearson Type III multiplicative noise on performance of crosscorrelator. 27
11 Clipper crosscorrelator. 30
12 Effect of Pearson Type III multiplicative noise on performance of clipper crosscorrelator. 32
13 Efficiency, η_mn, due to multiplicative noise, for clipper crosscorrelator for various variances of multiplicative noise. 32

FOREWORD

The detection problem considered here is within the framework of the general class of fixed-observation-interval detection problems in which the input to a receiver is either signal-plus-noise or noise alone. The receiver is to decide whether or not the signal is present. The primary aim of many detection studies is both to design and to evaluate the receiver which makes this decision in an optimum manner. It has been shown that the likelihood ratio is the quantity which the receiver should compute (Ref. 4). The receiver design, as well as its performance, will differ depending upon the particular lack of knowledge that exists in regard to the signal waveform.

Many detection problems assume that the amplitude of the signal is constant and known. In many transmission media, however, the path, path attenuation, or other factors may vary rapidly. Such physical situations were the motivation for this report. The specific situation considered was a polarity-preserving rapid fluctuation. One conclusion reached is that when signal amplitude is known and constant, maximum detection efficiency occurs when the signal energy available is concentrated in one component. However, when signal amplitude is unknown and not constant, maximum detection efficiency occurs when the available signal energy is spread equally among all of its components.

ABSTRACT

The problem considered is that of designing a likelihood ratio receiver for detection of a signal that has been distorted by a randomly varying transmission loss and subsequently masked by an additive Gaussian noise. A general likelihood ratio receiver design was found whose form was the same for a wide class of random multipliers which includes the Pearson Type III, Rayleigh, and truncated Gaussian distributions. The optimum receiver was evaluated for the two-input-sample case using a digital computer. The results reinforced theoretical work which indicated that the effect of the multiplicative noise can be minimized by spreading the signal energy equally among the samples. Various facets of the performance of two nonoptimum receivers, the crosscorrelator and the clipper crosscorrelator, were also evaluated using the computer.

1. INTRODUCTION

The design of a receiver for detection of a signal subject to random interference or distortion, where the only purpose of the receiver is to indicate the presence or absence of this particular signal, can appropriately be regarded as the design of a test for a statistical hypothesis. The corresponding theory can then be applied to obtain an analytical expression which describes the functional behavior of an "optimum" receiver. A receiver design arrived at using this approach may be considered optimum only in a well-qualified sense, since in applying the theory it is usually necessary to make many simplifying assumptions, some of which very likely will not hold under the actual conditions in which the receiver is to operate. Because of the necessity of making these assumptions, it is desirable to vary them a little and note how the optimum receiver design varies. In this way it is possible to identify critical assumptions and to see how the assumptions enter into or affect the receiver design.

The problem considered here is a basic one. A receiver is to be designed to detect the presence of a signal which is distorted by a non-phase-reversal type of transmission loss, rapidly varying in a random manner, and subsequently masked by added noise. Kailath (Ref. 6) has considered this problem and has aptly termed the medium with randomly varying transmission loss a random channel. This investigation differs from Kailath's in that he considered random channels with a memory and Gaussian coefficients or multipliers, whereas here the random channel is considered to be a nonnegative random multiplier, with various assumptions being made regarding its distribution. Figure 1 is a diagram illustrating the problem.

Fig. 1. Sketch of the problem: a source feeds a randomly time-varying amplifier whose output goes to a receiver that makes a yes-no decision.

The intent here has been to represent the receiver inputs, under both signal-plus-noise and noise-alone conditions, as points in a finite-dimensional vector space and to make various assumptions regarding the distributions of the random multiplier and added noise. By this method optimum receiver designs may be determined and compared with the crosscorrelator. The crosscorrelator is used as a reference because it is basically the type of receiver design to be used to detect a signal in added noise when the signal is completely specified (except perhaps in total energy) and is not distorted by a random multiplier. The comparisons serve to give the receiver designer some feeling for the problem and to suggest ways in which a crosscorrelator might be modified where necessary to cope with this "multiplicative noise" as well as with the additive kind.
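In the sampled representation developed below, the crosscorrelator reference receiver reduces to comparing the correlation sum Σ z_i s_i with a threshold. A minimal sketch (the signal vector and threshold here are illustrative, not taken from the report):

```python
# Sketch of the crosscorrelator reference receiver: correlate the
# received sample vector with the known signal, compare to a threshold.
def crosscorrelate(z, s):
    """Correlation statistic sum_i z_i * s_i."""
    return sum(zi * si for zi, si in zip(z, s))

def decide(z, s, threshold):
    """Report 'signal present' when the statistic exceeds the threshold."""
    return crosscorrelate(z, s) > threshold

s = [1.0, 1.0]                                # hypothetical two-sample signal
print(decide([1.2, 0.9], s, threshold=1.0))   # signal-like input -> True
print(decide([0.1, -0.2], s, threshold=1.0))  # noise-like input -> False
```

The rest of the report asks how this simple statistic should be modified when the signal is also multiplied by a random, nonnegative gain.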

2. THE MATHEMATICAL MODEL

A receiver used to detect the presence of a signal in noise can be viewed as a computer. It is designed to operate upon certain input data, called the receiver input, so as to compute the most meaningful statistics possible upon which a decision can be based as to whether or not the signal is present. The receiver input here is assumed to be a scalar-valued function of time of the form

    z(t) = m(t)\,\xi(t) + a(t)    (1)

where t lies in a fixed time interval (0, T), a(t) is Gaussian noise, and m(t) is a nonnegative-valued random function. Under the noise-alone condition \xi(t) \equiv 0, while under the signal-plus-noise condition \xi(t) = s(t), a signal known exactly. The input z(t) could be a voltage, a force, a displacement, or any other measurable entity. It is assumed that the random functions m(t) and a(t) are stationary and that a(t) has a mean value of zero.

To simplify the analysis we assume that we can choose a set of n orthogonal functions \{\psi_i(t)\} defined over the interval (0, T), and a set of n sampling times t_1, t_2, \ldots, t_n in this interval, so that almost certainly

    \int_0^T \Big[ z(t) - \sum_{i=1}^n z(t_i)\,\psi_i(t) \Big]^2 dt < \epsilon    (2)

for a given \epsilon > 0. Then we can represent the receiver input as the point

    z = (z_1, z_2, \ldots, z_n)    (3)

in an n-dimensional vector space, where

    z_i = z(t_i)    (4)

This representation is good in the sense that (2) holds. The actual analysis and determination of optimum receiver design is then based on taking the receiver input to be a random vector such as (3), and on being able to relate the results obtained to the processing of functions such as (1). What we have in mind in the way of a set of orthogonal functions \{\psi_i(t)\} is either the set of step functions

    \psi_i(t) = 1 for (i-1)T/n < t \le iT/n, \psi_i(t) = 0 otherwise,  i = 1, 2, \ldots, n    (5)

or the continuous band-limited functions having no frequency components outside the band (0, W),

    \psi_i(t) = \frac{\sin[(2WT)\,\pi(t - t_i)/T]}{(2WT)\,\sin[\pi(t - t_i)/T]},  i = 1, 2, \ldots, n = 2WT    (6)

With either of these sets of orthogonal functions we take the sampling times as

    t_i = \frac{iT}{n}    (7)

and it is fairly easy to translate the results of analysis into receiver design. We shall therefore speak of both (1) and (3) as the receiver input, leaving it to the reader to remember that the vector (3) is part of an approximation of (1). Similarly, we shall not distinguish between the functions s(t), m(t), and a(t) and their vector representations. The equation n = 2WT defines an "equivalent" bandwidth W, and

    E = \frac{T}{n} \sum_{i=1}^n s_i^2 = \frac{1}{2W} \sum_{i=1}^n s_i^2    (8)

is referred to as the energy of the signal. These identifications are made to relate the present investigation and results to a previous study of the detectability of a signal specified exactly in white Gaussian noise (Ref. 1). Then the problem is to observe the vector z having coordinates

    z_i = m_i \xi_i + a_i,  i = 1, 2, \ldots, n    (9)

where the m_i are identically distributed nonnegative random variables and the a_i are Gaussian random variables with a mean value of zero and a common variance, and to decide whether \xi = 0 or \xi = (\xi_1, \ldots, \xi_n) = (s_1, s_2, \ldots, s_n), the signal known exactly. Sketches of the variables in (9) for two-dimensional vectors are shown in Fig. 2.

Fig. 2. Sketch of vectors in two dimensions: (a) signal alone; (b) multiplicatively perturbed signal; (c) additive noise alone; (d) signal-plus-added noise; (e) multiplicatively perturbed signal-plus-added noise.
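The adequacy of the step-function representation (5), in the sense of (2), is easy to check numerically: the integrated squared error between z(t) and its sampled reconstruction falls as n grows. A sketch with an arbitrary smooth test input (the waveform and values are illustrative only):

```python
import math

def sq_error(z, T, n, pts=4000):
    """Integrated squared error of eq. (2) for the step-function set (5):
    z(t) is replaced on ((i-1)T/n, iT/n] by the sample z(iT/n)."""
    err, dt = 0.0, T / pts
    for j in range(pts):
        t = (j + 0.5) * dt
        i = min(int(t * n / T) + 1, n)   # index of the subinterval holding t
        err += (z(t) - z(i * T / n)) ** 2 * dt
    return err

z = lambda t: math.sin(2.0 * math.pi * t)    # hypothetical input waveform
e8, e64 = sq_error(z, 1.0, 8), sq_error(z, 1.0, 64)
print(e8 > e64 > 0.0)   # finer sampling gives smaller error -> True
```

For a fixed epsilon in (2), one would increase n until the computed error falls below it.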

3. OPTIMUM RECEIVER DESIGNS

The procedure in determining an optimum receiver design is a simple one. It consists of assuming a pair of probability distributions for the n-dimensional receiver input and then forming the likelihood ratio. The likelihood ratio is simply the ratio of the two corresponding probability densities, and the optimum receiver is one that computes this likelihood ratio or a monotonic function of it. The realization of such a receiver may, however, be very difficult. In the present case the receiver input under the signal-plus-noise condition is a random vector which we have assumed to be the sum of two random vectors, the first consisting of the receiver input under the noise-alone condition, the other due to the signal being present. The probability density under the signal-plus-noise condition is then the convolution of the densities of the random vector due to the signal and that due to the added noise. In general, it will not be possible to write this in other than integral form. An optimum receiver must therefore perform an integration over n-space or compute some monotonic function of this integral, if one can be found. It is understandable then that the most easily realized receiver designs are obtained when it is assumed that the receiver input has independent coordinates. The integral over n-space becomes the product of n integrals over 1-space, and the receiver can instead compute the logarithm of the likelihood ratio, the difficulty of computing (a good approximation of) the logarithm of an integral over 1-space being a relatively mild one. We consider here some cases where the receiver input has independent coordinates and then, briefly, the more general case where it does not.

Suppose that the a_i and m_i are independent random variables, so that the receiver input has independent coordinates. Let the a_i have common variance \sigma^2 and the m_i have the density f(m). Then the probability densities for the ith coordinate z_i
of the input are

    h_N(z_i) = \frac{1}{\sigma\sqrt{2\pi}} \exp\Big(-\frac{z_i^2}{2\sigma^2}\Big), when z_i = a_i    (10)

and

    h_{SN}(z_i) = \int_0^\infty f(m)\,\frac{1}{\sigma\sqrt{2\pi}} \exp\Big[-\frac{(z_i - m s_i)^2}{2\sigma^2}\Big] dm    (11a)

or

    h_{SN}(z_i) = h_N(z_i) \int_0^\infty f(m) \exp\Big[-\frac{m^2 s_i^2 - 2m z_i s_i}{2\sigma^2}\Big] dm    (11b)

when z_i = m_i s_i + a_i. The function

    \ell(z_i) = \int_0^\infty f(m) \exp\Big[-\frac{m^2 s_i^2 - 2m z_i s_i}{2\sigma^2}\Big] dm    (12)

is the general expression for the likelihood ratio of the ith coordinate of the receiver input.

In assuming various distributions for the multiplier m, the log-normal distribution received first consideration. The reason for this was that such a multiplier in practice might arise as the product of many independent identically-distributed multipliers. For example, m might be the overall gain of a group of identical amplifiers in cascade where the gain of each varied randomly about some nominal value. Then the log of m would be the sum of a number of independent identically-distributed random variables and we might expect the Central Limit Theorem to apply.

Choosing \log_{10} m to be Gaussian with mean -cb^2/2 and variance b^2, where c is the natural logarithm of 10, the expected value of m is 1 and the density of m is

    f(m) = \frac{1}{cbm\sqrt{2\pi}} \exp\Big[-\frac{(\log_{10} m + cb^2/2)^2}{2b^2}\Big]    (13)

Substituting this expression in (12) we obtain

    \ell(z_i) = \frac{1}{cb\sqrt{2\pi}} \int_0^\infty \frac{1}{m} \exp\Big[-\frac{(\log_{10} m + cb^2/2)^2}{2b^2}\Big] \exp\Big[-\frac{m^2 s_i^2 - 2m z_i s_i}{2\sigma^2}\Big] dm    (14)

The principal difficulty in realizing a receiver to compute the log of the complete likelihood ratio

    z = \sum_{i=1}^n \log \ell(z_i)    (15)

for this case is that the integral in (14) is a function of the two variables s_i and z_i s_i.

The above difficulty does not appear, however, if we assume the multiplier m to have the Pearson Type III density

    f(m) = \frac{(k+1)^{k+1}}{\Gamma(k+1)}\, m^k e^{-(k+1)m},  k > -1    (16)

\Gamma denoting the Gamma function. Here the mean value of m is 1 and the variance is 1/(k+1). The likelihood ratio \ell(z_i) then becomes

    \ell(z_i) = \frac{(k+1)^{k+1}}{\Gamma(k+1)} \int_0^\infty m^k \exp\Big[-\frac{m^2 s_i^2 - 2m z_i s_i}{2\sigma^2} - (k+1)m\Big] dm    (17)

which, by means of a change of the variable of integration, can be written as

    \ell(z_i) = \frac{(k+1)^{k+1}}{\Gamma(k+1)} \Big(\frac{\sigma}{|s_i|}\Big)^{k+1} \int_0^\infty \beta^k e^{x_i \beta} e^{-\beta^2/2}\, d\beta    (18)

where

    x_i = \frac{z_i s_i}{\sigma |s_i|} - (k+1)\frac{\sigma}{|s_i|}    (19)

In this case the integral

    Q_k(x_i) = \int_0^\infty \beta^k e^{x_i \beta} e^{-\beta^2/2}\, d\beta    (20)

in the likelihood ratio is a function only of the random variable x_i. This is a function that is monotonically increasing (in the variable x_i) and has a derivative of every order that is also monotonically increasing. It may be seen that for this distribution of the multiplier m, a receiver that computes

    \sum_{i=1}^n L_k(x_i), where L_k(x_i) = \ln Q_k(x_i)    (21)

(See the Appendix for a discussion of the Q-functions.)

would be an optimum receiver.

Studying expressions (19), (20), and (21) we note that the receiver input is first normalized relative to the standard deviation of the added Gaussian noise and multiplied by the infinitely clipped version of the signal² to obtain a quantity that is positive when input and signal agree in sign and negative when they do not. A variable negative bias, which is large in magnitude when signal amplitude is small and small when signal amplitude is large, is then added to form the random variable x_i. Next L_k(x_i), which weights the more positive x_i more heavily than those less positive, is computed and fed into an adder to form (21).

The optimum receiver described by (21) is of particular interest since the Pearson Type III distribution approaches the log-normal as the parameter k becomes large. Therefore (21) will approach a monotonic function of the log of the complete likelihood ratio [expression (15)] for large k when the multiplier has a log-normal distribution. Hence it might be taken as a reasonable approximation of the optimum receiver for that case. Figures 3(a) and 3(b), respectively, indicate the shape of the Pearson Type III distribution for k = 1, 3, and 8, and show how well the Pearson Type III distribution for k = 1 and k = 8 approximates the log-normal. The latter is illustrated by showing that when m has a Pearson Type III distribution, y = \log_{10} m has essentially a skewed Gaussian distribution, and the skewness diminishes as k increases.

It can also easily be seen that y is very nearly Gaussian for large k by considering the density function for y,

    g(y) = \frac{c\,(k+1)^{k+1}}{\Gamma(k+1)} \exp\big[(k+1)(cy - e^{cy})\big]    (22)

Using Stirling's approximation for \Gamma(k+1),

    \Gamma(k+1) \approx \sqrt{2\pi k}\; k^k e^{-k}    (23)

which is good when k is large, and the approximation

    e^{cy} \approx 1 + cy + \frac{c^2 y^2}{2}    (24)

²The "infinitely clipped" signal is s_i/|s_i| = +1 when s_i > 0, and -1 when s_i < 0, i = 1, 2, \ldots, n.

Fig. 3. Pearson III density function for k = 1, 3, 8, and normal: (a) the densities f(m); (b) the density of log m compared with the normal.

which is good when |cy| is small, it is evident that for large k, and at least for values of y near zero, g(y) has the form of a Gaussian density function where the mean is zero and the variance is 1/[c^2(k+1)]. We note, then, that for large values of k the variance of y is small; thus y will tend to be close to its mean value and therefore will be essentially Gaussian.
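The receiver of (19)-(21) can be sketched numerically: Q_k is evaluated by direct quadrature of (20), and the statistic Σ L_k(x_i) is formed from the input samples. The signal, noise, and parameter values below are illustrative only:

```python
import math

def Qk(k, x, steps=6000, beta_max=15.0):
    """Midpoint quadrature of (20): Q_k(x) = integral of b^k exp(xb - b^2/2)."""
    h = beta_max / steps
    total = 0.0
    for j in range(steps):
        b = (j + 0.5) * h
        total += b ** k * math.exp(x * b - b * b / 2.0) * h
    return total

def receiver_output(z, s, sigma, k):
    """Optimum-receiver statistic (21): sum of L_k(x_i), x_i from (19)."""
    out = 0.0
    for zi, si in zip(z, s):
        xi = zi * si / (sigma * abs(si)) - (k + 1) * sigma / abs(si)
        out += math.log(Qk(k, xi))   # L_k = ln Q_k
    return out

print(round(Qk(0, 0.0), 4))   # Q_0(0) = Phi(0)/phi(0) = sqrt(pi/2), -> 1.2533
```

Because L_k is monotonically increasing, inputs that agree with the signal in sign and magnitude drive the statistic up, as the block diagram description above indicates.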

If the multiplier m has a distribution of the form

    f(m) = A\, m^k e^{-pm - v^2 m^2/2},  m > 0,  k > -1    (25)

where A is a constant and k, p, and v are known parameters, then (21) will again describe an optimum receiver. For this more general case, however, we have

    x_i = \Big(\frac{z_i s_i}{\sigma^2} - p\Big) \Big/ \sqrt{\frac{s_i^2}{\sigma^2} + v^2}    (26)

The fact that three parameters enter into (25) means that not only the mean and variance of m, but also, to a certain extent, the shape of the distribution, can be varied, and this can be done without changing the general form of the receiver. It may be seen that this family of distributions contains the truncated Gaussian and the Rayleigh distributions, as well as the Pearson Type III. We have called the distribution an H-type distribution. To compare it with the Pearson system we note that

    \frac{f'(m)}{f(m)} = \frac{k - pm - v^2 m^2}{m}    (27a)

if the distribution of m is of the H-type, and

    \frac{f'(m)}{f(m)} = -\frac{a + m}{b_0 + b_1 m + b_2 m^2}    (27b)

if m is from the Pearson system (Ref. 7).

Figure 4 shows a block diagram of the optimum receiver, or rather, of an analog of the optimum receiver, for the above general case.

Fig. 4. Receiver block diagram for type H multiplicative noise and Gaussian additive noise: the input z is normalized, multiplied by the clipped signal s/|s|, passed through a variable-gain amplifier and the nonlinear function L_k, integrated over (0, T), and compared with a threshold to produce the decision.

The functional behavior of the receiver is essentially as described for the case where m has a Pearson Type III distribution, differing mainly in the addition of the variable-gain amplifier having gain

    \Big(1 + \frac{v^2 \sigma^2}{s_i^2}\Big)^{-1/2}    (28)

The nonlinear function L_k is the natural log of Q_k.

When the receiver input does not have independent coordinates it is difficult to do more than write the general expression for the likelihood ratio which, using vector and matrix notation, is

    \ell(z) = \int f(m) \exp\Big[-\tfrac{1}{2}(ms)'\,\Sigma^{-1}(ms) + z'\,\Sigma^{-1}(ms)\Big] dm    (29)

Here \Sigma^{-1} is the inverse of the covariance matrix, \Sigma, of the added Gaussian noise, z is the row vector (z_1, z_2, \ldots, z_n), (ms) is the transpose of the row vector (m_1 s_1, m_2 s_2, \ldots, m_n s_n), etc., and the integral is over the n-dimensional space of the multiplier m. The principal difficulty in considering specific cases lies in determining suitable joint density functions

    f(m) = f(m_1, m_2, \ldots, m_n)    (30)

However, we do not have to consider specific cases in order to interpret (29) in terms of a receiver design. Since \Sigma is symmetric and positive definite, (29) can be rewritten as

    \ell(z) = \int \exp\Big[\log f(m) - \tfrac{1}{2}(ms)'\,W'W(ms) + z'\,W'W(ms)\Big] dm    (31)

where W is a triangular matrix such that

    \Sigma^{-1} = W'W    (32)

The matrix W is a "whitening filter." Then it may be seen that a receiver computing (31) would operate as follows: the receiver input is first whitened, and the whitened input is crosscorrelated with each of the infinitely many whitened perturbed versions of the signal. The variable bias

    \log f(m) - \tfrac{1}{2}(ms)'\,W'W(ms)    (33)

is then added to the corresponding crosscorrelator output, and the biased output of each of the infinitely many crosscorrelators is then weighted by computing

    \exp\Big[\log f(m) - \tfrac{1}{2}(ms)'\,W'W(ms) + z'\,W'W(ms)\Big]    (34)

The weighted biased crosscorrelator outputs are then "summed" to form the receiver output. This interpretation of (29) serves to point out the basic features of an optimum receiver for detection of a randomly distorted signal in Gaussian noise, and to emphasize the greater complexity of such a receiver when the receiver input does not have independent coordinates.
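The whitening step in (31)-(32) can be realized with a standard Cholesky factorization: choosing W so that W'W = Σ⁻¹ makes the whitened noise covariance W Σ W' the identity. A sketch with an arbitrary 2×2 covariance (values illustrative):

```python
import numpy as np

sigma_mat = np.array([[2.0, 0.6],
                      [0.6, 1.0]])      # illustrative noise covariance matrix

sigma_inv = np.linalg.inv(sigma_mat)
# numpy returns the lower factor L with sigma_inv = L L'; take W = L'.
W = np.linalg.cholesky(sigma_inv).T     # then W'W = sigma_inv, W triangular

# Whitened noise covariance W sigma W' is (numerically) the identity.
print(np.round(W @ sigma_mat @ W.T, 6))
```

Applying W to the input before crosscorrelation is exactly the "whitening filter" operation described above.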

4. RECEIVER EVALUATION, SMALL SIGNAL ANALYSIS

The proper way of evaluating a likelihood ratio receiver is to determine its ROC curve.³ This is the graph of the probability of detection as a function of the false-alarm probability. In order to determine the ROC curve it is necessary to know the probability distribution of the receiver output under both the noise-alone condition and the signal-plus-noise condition, and these must be derived from the distributions of the random variables entering into the receiver input.

In the problem considered here, we have been able to obtain specific likelihood ratios only where the receiver input is assumed to have independent coordinates. The determination of these likelihood ratios was a relatively simple task. The derivation of the probability distributions of these likelihood ratios, however, is not so simple. Consider the case where the multiplier has a Pearson Type III distribution and the receiver input has independent coordinates. A receiver that computed

    \sum_{i=1}^n L_k\Big(\frac{z_i s_i}{\sigma |s_i|} - (k+1)\frac{\sigma}{|s_i|}\Big)    (35)

was found to be optimum. To find the ROC, the probability distributions of

    L_k\Big(\frac{z_i s_i}{\sigma |s_i|} - (k+1)\frac{\sigma}{|s_i|}\Big)    (36)

must be found, and then those of the sum (35). The difficulties in doing this are obvious. The reasonable thing to do in order to evaluate the small-signal performance of the receiver seems to be to compute (good approximations of) the mean and variance of the receiver output under the noise-alone condition and under the signal-plus-noise condition, and to use these as measures of the discrimination that the receiver provides between noise and signal-plus-noise. Where it is known that the sum in (35) is very nearly Gaussian in both noise and

³Receiver Operating Characteristic curve.

signal-plus-noise, these calculated means and variances are also sufficient to enable plotting an approximate ROC.

By suitably restricting the values of the |s_i| and the value of k in (35) we can approximate the expression and its first and second moments. We consider the case where k is large and where |s_i| is uniformly small with respect to the standard deviation of the added noise, e.g., k \ge 8 and |s_i| \le 0.2\sigma for i = 1, 2, \ldots, n. Observe that

    Var(z_i) = Var(m s_i) + Var(a) = \frac{s_i^2}{k+1} + \sigma^2    (37)

since

    \frac{s_i^2}{k+1} \le \frac{\sigma^2}{25(k+1)} \le \frac{\sigma^2}{225}    (38)

so that we could consider the variance of m s_i (and thus the effect of the multiplicative noise) to be negligible. This case is interesting, however, in that it enables us to see how the functional behavior of the optimum receiver differs from that of the crosscorrelator when a small amount of multiplicative noise is introduced. We see also that the distribution of the optimum receiver output under the signal-plus-noise condition depends on the form of the signal as well as its energy, and that therefore the detectability of the signal in this case is dependent upon both signal form and signal energy.

Note that under the noise-alone condition

    x_i = \frac{z_i s_i}{\sigma |s_i|} - (k+1)\frac{\sigma}{|s_i|}    (19)

is Gaussian with mean value -(k+1)\sigma/|s_i| and unit variance, while in signal-plus-noise x_i has mean value |s_i|/\sigma - (k+1)\sigma/|s_i| and variance 1 + s_i^2/[(k+1)\sigma^2], and is very nearly Gaussian. Thus in either case the mean value of x_i is about -(k+1)\sigma/|s_i| standard deviations from the origin. Using the approximation

    Q_k(x_i) \approx \frac{k!}{(-x_i)^{k+1}},  x_i \ll 0    (39)

we obtain

    L_k(x_i) = \log Q_k(x_i) \approx \log k! - (k+1)\log(-x_i)    (40)

But

    \log(-x_i) = \log\Big[(k+1)\frac{\sigma}{|s_i|} - \frac{z_i s_i}{\sigma|s_i|}\Big] = \log\Big[(k+1)\frac{\sigma}{|s_i|}\Big] + \log\Big[1 - \frac{z_i s_i}{(k+1)\sigma^2}\Big]    (41)

and

    \log\Big[1 - \frac{z_i s_i}{(k+1)\sigma^2}\Big] \approx -\frac{z_i s_i}{(k+1)\sigma^2} - \frac{1}{2}\Big[\frac{z_i s_i}{(k+1)\sigma^2}\Big]^2 - \frac{1}{3}\Big[\frac{z_i s_i}{(k+1)\sigma^2}\Big]^3    (42)

for

    \Big|\frac{z_i s_i}{(k+1)\sigma^2}\Big| < \frac{1}{2}    (43)

and the probability of the latter is very nearly one. Then

    L_k(x_i) \approx C_i + \frac{z_i s_i}{\sigma^2} + \frac{1}{2(k+1)}\Big(\frac{z_i s_i}{\sigma^2}\Big)^2 + \frac{1}{3(k+1)^2}\Big(\frac{z_i s_i}{\sigma^2}\Big)^3    (44)

where C_i depends only on the index i. It is easy to see, for this particular case, not only that the optimum receiver is essentially a crosscorrelator but also how it differs from a crosscorrelator.

To approximate the first and second moments of the receiver output (35) for this case, we expand the right side of (40) about the mean value \mu_i of x_i and consider the first five terms:

    \log k! - (k+1)\log(-\mu_i) - \frac{(k+1)}{\mu_i}(x_i - \mu_i) + \frac{(k+1)}{2\mu_i^2}(x_i - \mu_i)^2 - \frac{(k+1)}{3\mu_i^3}(x_i - \mu_i)^3    (45)

The moments \overline{(x_i - \mu_i)^p}, p = 1, 2, \ldots, 6, are easily computed, and the most significant terms of

the mean and variance of (45) are taken as approximations of the mean and variance, respectively, of (35). These approximations serve to indicate roughly the effect of signal form upon detectability. The detectability is inferred from the difference of the means, the variance in noise [of (35)], and the variance in signal-plus-noise. For the case being considered, we find the difference of the means is (approximately)

    \frac{2E}{N_0}\Big[1 + \rho_2\,\frac{k+4}{2(k+1)^2}\,\frac{2E}{N_0} + \rho_3\,\frac{2k^2 + k + 3}{6(k+1)^4}\Big(\frac{2E}{N_0}\Big)^2\Big]    (46)

The variance in noise alone is

    \frac{2E}{N_0} + \rho_2\,\frac{5}{2(k+1)^2}\Big(\frac{2E}{N_0}\Big)^2 + \rho_3\,\frac{5}{3(k+1)^4}\Big(\frac{2E}{N_0}\Big)^3    (47)

and the variance in signal-plus-noise is

    \frac{2E}{N_0} + \rho_2\,\frac{6k + 11}{2(k+1)^2}\Big(\frac{2E}{N_0}\Big)^2 + \rho_3\,\frac{198k^2 + 359k + 263}{18(k+1)^4}\Big(\frac{2E}{N_0}\Big)^3    (48)

where:⁴

    E = \frac{1}{2W}\sum_{i=1}^n s_i^2 is the signal energy,
    N = \sigma^2(a) is the additive noise power,
    N_0 = N/W is the additive noise power density,
    W = n/2T is the equivalent bandwidth,
    \rho_2 = \sum_i s_i^4 \big/ \big(\sum_i s_i^2\big)^2 and \rho_3 = \sum_i s_i^6 \big/ \big(\sum_i s_i^2\big)^3 are signal form factors, with 0 < \rho_2, \rho_3 \le 1.

These expressions indicate that the detectability is essentially that of a signal specified exactly in white Gaussian noise when 2E/N_0 is small, k is large (the variance of the multiplicative noise is small), and the signal form factors \rho_2 and \rho_3 are small. The form factors are at most one, so they apparently do not affect the detectability much unless k is small and 2E/N_0 is large.

⁴This notation is employed to facilitate comparison with the detectability of a signal specified exactly in white Gaussian noise (see Ref. 1).
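The role of the signal form factors can be made concrete. Writing ρ₂ = Σ s_i⁴/(Σ s_i²)² and ρ₃ = Σ s_i⁶/(Σ s_i²)³ (the reconstruction assumed here), spreading a fixed total energy equally over n samples gives ρ₂ = 1/n, while concentrating it in one sample gives ρ₂ = 1:

```python
def form_factors(s):
    """rho2 = sum s_i^4/(sum s_i^2)^2 and rho3 = sum s_i^6/(sum s_i^2)^3
    (assumed forms of the signal form factors)."""
    e = sum(si * si for si in s)
    return (sum(si ** 4 for si in s) / e ** 2,
            sum(si ** 6 for si in s) / e ** 3)

equal = [1.0, 1.0]                 # energy E = 2 spread equally
concentrated = [2.0 ** 0.5, 0.0]   # same energy E = 2 in one sample

print(form_factors(equal))         # -> (0.5, 0.25): the minimum for n = 2
print(form_factors(concentrated))  # -> (1.0, 1.0): the maximum
```

This is the analytic counterpart of the computer result reported later: equal spreading of energy minimizes the penalty terms introduced by the multiplicative noise.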

Since the above analysis is based on k being large and the signal amplitude being uniformly small with respect to the variance of the added noise, it does not really indicate how much the detectability of a signal depends upon its form or energy distribution. To obtain a better understanding of this dependency, a 2-dimensional case has been studied using a digital computer to compute approximations to the ROC. This study is described in Section 5.
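The leading approximation used in this section, L_k(x) = ln Q_k(x) ≈ ln k! − (k+1) ln(−x) for x large and negative [eqs. (39)-(40)], can be checked against direct quadrature of the defining integral (20). A sketch with illustrative values (k = 3):

```python
import math

def Qk(k, x, steps=20000, beta_max=10.0):
    """Midpoint quadrature of the defining integral (20)."""
    h = beta_max / steps
    total = 0.0
    for j in range(steps):
        b = (j + 0.5) * h
        total += b ** k * math.exp(x * b - b * b / 2.0) * h
    return total

def Lk_approx(k, x):
    """Two-term small-signal form log k! - (k+1) log(-x), per (39)-(40)."""
    return math.log(math.factorial(k)) - (k + 1) * math.log(-x)

for x in (-10.0, -30.0):
    print(x, round(math.log(Qk(3, x)), 4), round(Lk_approx(3, x), 4))
```

The discrepancy shrinks as x moves further from zero, consistent with the asymptotic character of (39).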

5. RECEIVER EVALUATION USING THE IBM 704

This section describes several short computer programs used to evaluate various facets of the performance of the optimum and two nonoptimum receivers. The program for evaluating the optimum receiver was restricted to the 2-dimensional (2 input samples) case. In order to evaluate the receiver whose design was described in Section 3, it is necessary to find the distribution of the logarithm of the likelihood ratio under conditions of noise and of signal-plus-noise. This involves the mathematical manipulation of continuous probability-density functions. To solve this problem, the computer program uses discrete approximations of the multiplicative and additive noise. The program simulated the optimum receiver (Fig. 4) for the 2-dimensional problem, and the output of the computer program provided an ROC curve.

Several techniques involved in this method of solving the problem are worth pointing out. One is the way in which the discrete approximations were made. The other is the computation of the nonlinear L_k functions.

5.1 Method for Obtaining Discrete Probability-Density Functions

The general method that was used in approximating a continuous density function by a discrete density was as follows. Given a probability density function p(x), the integral \int_{-\infty}^{\infty} p(x)\,dx is partitioned into N equal integrals such that

    \int_{-\infty}^{y_1} p(x)\,dx = \int_{y_1}^{y_2} p(x)\,dx = \cdots = \int_{y_{N-1}}^{\infty} p(x)\,dx = \frac{1}{N}    (49)

Each integral represents a probability element of value 1/N. Each of the N probability elements must now be associated with a point x_n. One correspondence would be to define x_n implicitly by the integral

    \int_{-\infty}^{x_n} p(x)\,dx = \frac{2n-1}{2N},  n = 1, 2, \ldots, N    (50)
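The mid-quantile rule (50) is straightforward to reproduce for the additive Gaussian noise: each of N = 50 points is the inverse normal distribution function evaluated at 0.01, 0.03, ..., 0.99 (a standard-library routine here stands in for the tables used in the original study):

```python
from statistics import NormalDist

N = 50
nd = NormalDist(0.0, 1.0)
# Eq. (50): point n sits at the mid-probability (2n - 1)/(2N) of its cell.
a_pts = [nd.inv_cdf((2 * n - 1) / (2 * N)) for n in range(1, N + 1)]

mean = sum(a_pts) / N
var = sum(x * x for x in a_pts) / N - mean * mean
print(round(mean, 9), round(var, 3))   # mean is 0; variance a little under 1
```

The discrete variance falls slightly short of 1 because each tail cell is represented by its mid-quantile point rather than its conditional mean; with 50 points the shortfall is a few percent.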

In this particular problem, an N of 50 was chosen. The two continuous density functions that were replaced by discrete density functions were the density function for the added white Gaussian noise, and the density function for the Pearson Type III multiplicative noise, f(m).

5.1.1 Representation of Additive Noise. The 50 points of the discrete density function for the additive noise were denoted by a_i, i = 1, 2, \ldots, 50, where each a_i has a probability of .02. In accordance with the general method outlined above, the a_i were read from Table 1 of Ref. 2. These tables give the distribution, for various values of skewness, of the Pearson Type III density function. For a skewness of 0, the Pearson Type III function reduces to the normal distribution. Therefore, the 50 a_i values were obtained by entering Table 1 at values of the integral of 0.01, 0.03, 0.05, \ldots, 0.99, and reading the values of t for a skewness of 0.

5.1.2 Representation of Multiplicative Noise. The representation of the multiplicative noise used was a 50-point discrete density function, denoted by m_i, i = 1, 2, \ldots, 50. These values were obtained from the Pearson Type III density function. The functional form used for this distribution was

    f(m) = \frac{(k+1)^{k+1}}{\Gamma(k+1)}\, m^k e^{-(k+1)m},  k > -1    (16)

where \Gamma denotes the Gamma function. This function is nonzero for m > 0; it has a mean value of 1 and a variance of 1/(k+1). In the tables of Ref. 2, the range of the tabulated function is nonzero for t > -2/\alpha_3, where \alpha_3 is the skewness; that standardized function has a mean of 0 and a standard deviation of 1. Therefore, to use these tables, it is necessary to make a linear transformation on t:

    m_i = \frac{\alpha_3 t_i}{2} + 1, where \alpha_3 = \frac{2}{\sqrt{k+1}}    (51)

In order to obtain the t_i values, the tables are entered for the desired skewness, \alpha_3, at values of the integral of 0.01, 0.03, \ldots, 0.99.
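When tabulated quantiles are unavailable, as for the k = 1 case treated next, the distribution function F(m) = 1 − e^{−2m} − 2m e^{−2m} [eq. (53)] can be inverted numerically at the 50 mid-quantile levels, e.g., by bisection. A sketch:

```python
import math

def F(m):
    """Distribution function (53) for the k = 1 Pearson Type III density."""
    return 1.0 - math.exp(-2.0 * m) - 2.0 * m * math.exp(-2.0 * m)

def invert(p, lo=0.0, hi=20.0, iters=80):
    """Solve F(m) = p by bisection (F is increasing for m > 0)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

m_pts = [invert((2 * n - 1) / 100.0) for n in range(1, 51)]
print(round(sum(m_pts) / 50.0, 3))   # close to the distribution mean of 1
```

Eighty bisection steps give far better than the four-place accuracy quoted for the original hand computation.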
For k = 1, i.e., \alpha_3 = \sqrt{2}, the method of computation of the m_i is modified, since Carver's tables do not include an entry for a skewness this large. Substituting k = 1 into (16)

for f(m), one gets

    f(m) = 4m\, e^{-2m},  k = 1    (52)

Integrating with respect to m gives the distribution function

    F(m) = 1 - e^{-2m} - 2m\, e^{-2m}    (53)

The 50 m_i values for k = 1 were then obtained numerically, to four-place accuracy.

5.2 Determination of L_k Functions

Referring to the block diagram of the receiver, Fig. 4, one can see that it is necessary for the signal and noise to be weighted by the nonlinear function L_k, where L_k = \ln Q_k(x). Q_k(x) was defined in integral form in Section 3:

    Q_k(x) = \int_0^\infty \beta^k e^{x\beta} e^{-\beta^2/2}\, d\beta    (20)

Q_k(x) is also related to the parabolic cylinder or Weber-Hermite functions D_r(x) (see Appendix) with negative index, in the manner

    Q_k(x) = k!\; e^{x^2/4}\, D_{-(k+1)}(-x)    (54)

Neither of these forms appears satisfactory, however, for the calculation of values of the Q_k(x) function with the computer. It would be desirable to have Q_k(x) expressed in terms of elementary or tabulated functions, preferably involving a finite number of terms. One such expression is

    Q_k(x) = P_k(x) + H_k(x)\, Q_0(x)    (55)

where P_k(x) and H_k(x) are polynomials in x of degree k-1 and k, respectively, and Q_0(x) = \Phi(x)/\phi(x) is the ratio of the Gaussian distribution function to the Gaussian density function. In particular, the L_k functions for the three integer values of k run on the computer are

    L_1 = \ln Q_1 = \ln\Big[1 + x\,\frac{\Phi(x)}{\phi(x)}\Big]    (56)

    L_3 = \ln Q_3 = \ln\Big[x^2 + 2 + (x^3 + 3x)\,\frac{\Phi(x)}{\phi(x)}\Big]    (57)

    L_7 = \ln Q_7 = \ln\Big[x^6 + 20x^4 + 87x^2 + 48 + (x^7 + 21x^5 + 105x^3 + 105x)\,\frac{\Phi(x)}{\phi(x)}\Big]    (58)

Several computational difficulties arise with these expressions. One is the difficulty of obtaining \Phi(x)/\phi(x) accurately from tables for the entire range of interest. For -4 \le x \le +4, these expressions were calculated using values of \Phi(x)/\phi(x) obtained from the 15-place tables of the National Bureau of Standards, Table of Normal Probability Functions (Ref. 8). For 4 < x < 10, \Phi(x)/\phi(x) was calculated from an asymptotic expansion similar to the one on p. VIII of the NBS Table 23,

    \frac{\Phi(x)}{\phi(x)} \approx \sqrt{2\pi}\, e^{x^2/2} - \frac{1}{x}\Big[1 - \frac{1}{x^2} + \frac{1\cdot 3}{x^4} - \frac{1\cdot 3\cdot 5}{x^6} + \frac{1\cdot 3\cdot 5\cdot 7}{x^8} - \cdots\Big]    (59)

This is an alternating semi-convergent series. In regard to straightforward use of this series for computations, this means that there is a finite number of terms, depending upon x, that gives optimum accuracy. This optimum number, for an x of 4, is nine. Since this is an alternating series, the ninth term was modified by dividing it by two to further improve the accuracy of the computation.

For the range -20 < x < -4, the use of the closed-form equations (56), (57), and (58) gives difficulty in that the small difference of two large factors is calculated. In order to circumvent this problem, an asymptotic series was obtained for large negative x. It is

    L_k = \ln Q_k \approx \ln k! - (k+1)\ln(-x) + \ln\Big[\sum_{n=0} \frac{(-1)^n (k+2n)!}{2^n\, n!\, k!\, x^{2n}}\Big]    (60)

This series also turns out to be an alternating semi-convergent series, requiring a finite number of terms for optimum computational accuracy. For k = 1, k = 3, and x = -4, the number of terms is seven. For k = 7 and -35 < x \le -10, the number of terms included was five. Again, the last term in each of these cases was divided by 2 to further improve the accuracy. For

-10 < x < -4, k=7, neither the asymptotic expansion nor the closed form expression gives sufficient accuracy, so a parabolic approximation was made to the three points, x = -15, -10, -4. The approximation used for this region was L7(x) =.0372x2 + 1. 543x + 1. 486 (61) A plot ofL L, L 7, computed at 0. 5 intervals, is shown in Fig. 5. 60 L7 (X.J -30 20 / — 15 Fig. 5. Plot of L1, L3, L7, computed at 0. 5 intervals. 23
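As a check, the closed forms (56)-(58) can be compared against direct numerical quadrature of the defining integral (20). The sketch below is a modern reconstruction; the trapezoidal step and the integration limit are our choices, not the original program's.

```python
import math

def Q0(x):
    # Q0(x) = Phi(x)/phi(x): normal distribution function over density
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return Phi / phi

def Qk_closed(k, x):
    # Closed forms (56)-(58): Qk(x) = Pk(x) + Hk(x) Q0(x)
    q0 = Q0(x)
    if k == 1:
        return 1.0 + x * q0
    if k == 3:
        return x**2 + 2.0 + (x**3 + 3.0 * x) * q0
    if k == 7:
        return (x**6 + 20.0 * x**4 + 87.0 * x**2 + 48.0
                + (x**7 + 21.0 * x**5 + 105.0 * x**3 + 105.0 * x) * q0)
    raise ValueError("only k = 1, 3, 7 tabulated in Eqs. (56)-(58)")

def Qk_quad(k, x, n=20000, upper=40.0):
    # Trapezoidal evaluation of Eq. (20): integral_0^inf t^k e^(xt - t^2/2) dt
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * t**k * math.exp(x * t - t * t / 2.0)
    return total * h

def Lk(k, x):
    # The receiver weighting function Lk = ln Qk(x)
    return math.log(Qk_closed(k, x))
```

At x = 0 the closed forms give Q1 = 1, Q3 = 2, and Q7 = 48, so L1(0) = 0, L3(0) = ln 2, and L7(0) = ln 48; the quadrature reproduces the same values.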

5. 3 Computer Results

It is of interest not only to consider the performance of the optimum receiver described in Section 3, but also to consider the performance of several nonoptimum receivers. A nonoptimum receiver whose performance is "close" to that of the optimum receiver may be more practical to use, either because of greater simplicity or because it is already available. This section presents the results of computer studies aimed at evaluating the optimum receiver under several conditions, as well as evaluating two nonoptimum receivers, the crosscorrelator and the clipper crosscorrelator. The relative performance of these receivers was investigated for a number of situations.

There are four points of interest for which computer results were obtained. First, what is the effect on detectability of the distribution of signal energy? Second, if there is a best way of distributing the signal energy from an optimum detectability viewpoint, how does the detectability for this case compare with the detectability of a signal known exactly in additive white Gaussian noise with no multiplicative noise? Third, how well does a crosscorrelator perform as compared to the optimum receiver? Finally, how does the performance of a clipper crosscorrelator compare to the performance of the optimum receiver?

5. 3. 1 Distribution of Signal Energy. If one has the freedom of distributing the signal energy, it is of interest to know in what manner this should be done in order to get maximum detectability of the signal. Consider the signal to be composed of N independent samples, s_1, s_2, ..., s_N. In addition, a meaningful restriction is to consider the problem under the condition of constant total signal energy, i.e., Σ_{i=1}^N s_i^2 = 2E/N_0. Now what distribution of the s_i's gives optimum detection of the N samples? A specific case worked on the computer was an N of 2, and values of total energy, 2E/N_0, of 1, 2, and 4. Four combinations of energy distribution were worked. They were:

    1. s_1 = √(2E/N_0),          s_2 = 0
    2. s_1 = √(2E/N_0) cos 15°,  s_2 = √(2E/N_0) sin 15°
    3. s_1 = √(2E/N_0) cos 30°,  s_2 = √(2E/N_0) sin 30°
    4. s_1 = √(2E/N_0) cos 45°,  s_2 = √(2E/N_0) sin 45°
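The superiority of the equal split can be made plausible with a small side calculation: for a crosscorrelator statistic T = Σ s_i x_i with x_i = m_i s_i + a_i, unit additive-noise variance, and multiplier variance σ_m², the output variance is Var(T) = σ_m² Σ s_i⁴ + Σ s_i², which for fixed total energy is minimized by the even split. The calculation below is ours, not one of the report's computer runs.

```python
import math

def crosscorrelator_output_variance(angles_deg, total_energy, var_m):
    # Split s1 = sqrt(E) cos(theta), s2 = sqrt(E) sin(theta);
    # output variance = var_m * (s1^4 + s2^4) + (s1^2 + s2^2)
    out = {}
    for theta in angles_deg:
        t = math.radians(theta)
        s1sq = total_energy * math.cos(t) ** 2
        s2sq = total_energy * math.sin(t) ** 2
        out[theta] = var_m * (s1sq ** 2 + s2sq ** 2) + (s1sq + s2sq)
    return out

# Pearson III index k = 1 gives multiplier variance 1/(k+1) = 0.5;
# total energy 4 matches the largest case run on the computer.
v = crosscorrelator_output_variance([0, 15, 30, 45], total_energy=4.0, var_m=0.5)
```

For the four cases this gives output variances 12, 11, 9, and 8, so case 4 (energy equal in both components) has the smallest variance, consistent with the ordering of the ROC curves in Figs. 6-8.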

In the first case all the energy is concentrated in one of the two samples. The fourth case is the situation where the energy is spread evenly between the two samples. The second and third cases represent intermediate situations. The performance of the optimum receiver was evaluated for each of these four cases at each of the three total energy levels, giving a total of twelve situations. The distribution of log likelihood ratio was obtained under signal-plus-noise and noise-alone conditions in order to make an ROC plot. These results are presented in Fig. 6 for the Pearson Type III index of k = 1. Similar plots for k = 3 and k = 7 are shown in Fig. 7 and Fig. 8. From these figures, one can see that if one has the freedom of distributing the energy between the two samples, one should distribute the energy equally for optimum detectability.

5. 3. 2 Comparison of Detectability With and Without Multiplicative Noise. If the energy is distributed equally between the two samples, it is of interest to compare the detectability of the signal with various amounts of multiplicative noise. In Fig. 9 the ROC is plotted for the two-sample problem, for s_1 = s_2, total energy, 2E/N_0, of 1, 2, and 4, and for k = 1, 3, and 7. Also

Fig. 6. Effect on ROC of distribution of signal energy for two-sample case (Pearson III index of k = 1).

Fig. 7. Effect on ROC of distribution of signal energy for two-sample case (Pearson III index of k = 3).

Fig. 8. Effect on ROC of distribution of signal energy for two-sample case (Pearson III index of k = 7).

Fig. 9. Effect of multiplicative noise on detectability for two-sample case with energy distributed equally between samples.

Fig. 10. Effect of Pearson Type III multiplicative noise on performance of crosscorrelator.

shown in this figure are the ROC curves for a d' of 1, 2, and 4 that would result if there had been no multiplicative noise. The comparison of these curves gives one some measure of the loss in ability to detect a signal due to the multiplicative noise factor. In general, the detectability is greater with no multiplicative noise. This is reversed at low detection probabilities, where the probability of detection is actually larger than it would have been with no multiplicative noise. This is a general rule with amplitude fluctuations, because the signal ensemble (including the multiplier) contains some signals with very large energies. When very low detection probabilities are examined, one observes that a detection normally involves some of these larger-than-average signals. This does not occur at detection probabilities greater than 0.5.

5. 3. 3 Crosscorrelator.5 Since a crosscorrelator is a relatively simple as well as common receiver, it is of interest to compare its performance with that of the receiver that is optimum for the condition of Pearson Type III multiplicative noise. The evaluation of the crosscorrelator follows immediately from data already presented in the previous section for certain sample sizes, M, and values of the Pearson Type III index, k, if one considers the case where the signal samples are independent and have their energy equally distributed. Table I lists the correspondence between the crosscorrelator problem and the data already computed. The relevant curves are replotted in Fig. 10.

    To Evaluate Crosscorrelator for:    Read ROC (Figs. 6, 7, and 8) for
                                        Equal-Energy, Independent Samples:
    k = 1, M = 1                        k = 1, M = 1
    k = 1, M = 2                        k = 3, M = 1
    k = 1, M = 4                        k = 7, M = 1
    k = 3, M = 1                        k = 3, M = 1
    k = 3, M = 2                        k = 7, M = 1
    k = 7, M = 1                        k = 7, M = 1

Table I. Correspondence between the crosscorrelator problem and the equal-energy, independent-sample data.

Two cases of special interest are the two 2-sample cases (M = 2), for k = 1 and k = 3.
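Table I can be regenerated from the index relation k' = M(k+1) - 1, which is derived in the text that follows, and the underlying summation property of the multipliers can be spot-checked by Monte Carlo. Here a unit-mean Pearson Type III multiplier of index k is modeled as a gamma variate of shape k+1 and scale 1/(k+1); that gamma identification is our assumption.

```python
import random

def equivalent_index(k, M):
    # Index relation behind Table I: k' = M(k+1) - 1
    return M * (k + 1) - 1

def normalized_multiplier_mean_var(k, M, trials=200_000, seed=1):
    # Monte Carlo: (1/M) * sum of M unit-mean Pearson III multipliers,
    # each modeled as Gamma(k+1, scale 1/(k+1)); returns sample mean, variance.
    rng = random.Random(seed)
    vals = [sum(rng.gammavariate(k + 1, 1.0 / (k + 1)) for _ in range(M)) / M
            for _ in range(trials)]
    mean = sum(vals) / trials
    var = sum((v - mean) ** 2 for v in vals) / trials
    return mean, var

mean, var = normalized_multiplier_mean_var(k=1, M=2)
# Theory: mean = 1 and variance = 1/(M(k+1)) = 1/4, so the new index is
# one less than the reciprocal of the variance: k' = 3, as in Table I.
```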
For these cases the performance of the optimum receiver and the crosscorrelator are, within computational error, practically the same. Therefore, the crosscorrelator is quite a satisfactory receiver under these conditions.

5 The type of crosscorrelator used here crosscorrelates the input with a locally generated (stored) signal.

The relationships illustrated in Table I will now be derived. Consider the case of M equal signal components. Then for a crosscorrelator

    M s^2 = 2E/N_0,   or   s = √(2E/(M N_0))                           (62)

Let x_i be an input to the receiver. Then in signal-plus-noise

    Σ_{i=1}^M x_i = Σ_{i=1}^M (m_i s + a_i) = √(2E/(M N_0)) Σ_{i=1}^M m_i + Σ_{i=1}^M a_i   (63)

and in noise alone

    Σ_{i=1}^M x_i = Σ_{i=1}^M a_i                                      (64)

Now, the sum for the noise-alone input is normally distributed with mean 0 and variance M,

    Σ_{i=1}^M a_i ~ N(μ = 0, σ^2 = M)                                  (65)

Or, in terms of a variance of one, let A be a normalized variable,

    A = (1/√M) Σ_{i=1}^M a_i ~ N(0, 1)                                 (66)

Since a characteristic feature of Pearson Type III density functions is that the sum of independent Pearson Type III variables is again Pearson Type III with a different index, k',

    Σ_{i=1}^M m_i ~ T_III(μ = M, σ^2 = M/(k+1))                        (67)

or

    (1/M) Σ_{i=1}^M m_i ~ T_III(μ = 1, σ^2 = 1/(M(k+1)))               (68)

where T_III denotes the Pearson Type III distribution. Now the new index, k', is one less than the reciprocal of the variance. In signal-plus-noise

    (1/√M) Σ_{i=1}^M x_i = √(2E/N_0) (1/M) Σ_{i=1}^M m_i + A           (69)

And in noise alone

    (1/√M) Σ_{i=1}^M x_i = A                                           (70)

So the performance of a crosscorrelator with energy equally spread among M input samples, and with multiplicative noise of index k, is equivalent to the performance with one sample containing the same total energy and a new Pearson Type III index k' of M(k+1) - 1.

5. 3. 4 Clipper Crosscorrelator. A clipper crosscorrelator is a practical receiving device which is often used when amplitude fluctuations are a serious problem. Although one of the prime reasons for its use is amplitude fluctuation in the noise level, in this report the amplitude of the noise is assumed stable. A simple diagram of a clipper crosscorrelator is shown in Fig. 11.

Fig. 11. Clipper crosscorrelator.

Although the clipping level need not be at zero, we have assumed in this report that the level is at zero, as it commonly is in most practical clippers used for detection purposes. The clipping circuit output is the polarity of the input signal. This operation is sometimes referred to as "hard clipping" or "infinite clipping" to differentiate it from the saturation type of peak clipping that occurs in many amplifying devices. The polarity of the input is then compared with the polarity of the expected signal. If the polarities of the two are the same, the input to the adder is +1; if the polarities are different, the input to the adder is not +1 (either 0 or -1 is commonly used). The analysis contained here follows the assumptions of the rest of the report: that the inputs are statistically independent samples. If the integrator is operated for a fixed number of samples, that is, over a fixed time, then the output has a binomial amplitude distribution which is a function only of the number of samples considered and the probability of having a +1 at the integrator input on

each of the identical independent samples. Because the input signal clipper is at the median of the noise, this probability is exactly 0.5 when noise alone is present at the input. When the input is due to signal-plus-noise, the probability of a one has been determined as a function of the 2E/N_0 ratio for the individual samples and the variance of the multiplicative noise. Although ROC curves could have been determined for this specific case, a much briefer analysis, which isolates the effect of the multiplicative noise from the total receiver performance, was used. A more thorough analysis of the general clipper crosscorrelator performance, the ROC curves, and the efficiency is treated in Ref. 4, and a brief analysis of the efficiency in the absence of multiplicative noise is given in Ref. 5.

The analysis here is based on the observation that the performance of the clipper crosscorrelator can be broken into two parts: first, the effect of the physical parameters (signal strength, additive noise strength, and multiplicative noise variance) on the probability of having a positive input to the adder, and second, the general effect of using a clipper crosscorrelator. The latter is a function solely of the probability p, the binomial distribution, and the operating point desired on the ROC. The first effect, that of obtaining a certain probability at the input to the adder, can easily be studied by numerical methods. Let us denote the energy of a single sample as E_c. When there is no multiplicative noise, but purely additive white Gaussian noise of power density N_0 watts per cycle per second, then the probability of the input polarity corresponding to the expected signal polarity, that is, the probability of a +1 to the adder, is given by the normal probability distribution function

    p = Φ(√(2E_c/N_0)) = ∫_{-∞}^{√(2E_c/N_0)} (1/√(2π)) e^{-t^2/2} dt   (71)

This is the straight line, the uppermost curve, in Fig. 12. Using the digital computer, p was also determined for several Pearson Type III multiplicative noise variance values. These are shown in Fig. 12.
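The two parts of this analysis can be sketched numerically: Eq. (71) gives p, and the binomial distribution then yields a (false-alarm, detection) operating point for any threshold on the count of polarity agreements. The function names and the particular sample count, threshold, and SNR below are illustrative choices of ours.

```python
import math

def p_agreement(snr):
    # Eq. (71): probability of a +1 to the adder, with snr = 2*Ec/N0
    return 0.5 * (1.0 + math.erf(math.sqrt(snr) / math.sqrt(2.0)))

def binom_tail(n, p, thresh):
    # P{number of +1's >= thresh} for n independent samples
    return sum(math.comb(n, j) * p ** j * (1.0 - p) ** (n - j)
               for j in range(thresh, n + 1))

def roc_point(n, snr, thresh):
    # (false-alarm, detection) pair; under noise alone p is exactly 0.5
    # because the clipping level sits at the median of the noise
    return binom_tail(n, 0.5, thresh), binom_tail(n, p_agreement(snr), thresh)

pfa, pd = roc_point(n=100, snr=0.25, thresh=60)
```

Sweeping the threshold from 0 to n traces out the full binomial ROC for a given p, which is the "general effect" part of the analysis above.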
They indicate that there is very little effect for small input signal-to-noise ratios (that is, 2E_c/N_0), and that there is also very little effect unless the variance of the multiplicative noise is more than about -10 db (of course each reader must use his own criterion for "very little"). A somewhat more meaningful interpretation of these results is shown in Fig. 13,

Fig. 12. Effect of Pearson Type III multiplicative noise on performance of clipper crosscorrelator.

Fig. 13. Efficiency, η_mn, due to multiplicative noise, for clipper crosscorrelator for various variances of multiplicative noise.

where the efficiency loss is considered as a function of the amount of multiplicative noise and of the individual sample signal-to-noise ratios. The efficiency η_mn is defined as the ratio of signal energy required when there is multiplicative noise to the amount required to give equivalent performance without multiplicative noise, using the clipper crosscorrelator. This efficiency was determined by using the plots of Fig. 12 in the following manner: for a given 2E_c/N_0 ratio the graph is entered and the ordinate read for the appropriate amount of multiplicative noise. At this ordinate value one also reads the 2E_c/N_0 ratio had there been no multiplicative noise; that is, one determines the input strength necessary to get the same p value. The efficiency is simply a direct comparison of these two energy ratios.

One concludes from this efficiency plot that the multiplicative noise causes a progressively larger loss in efficiency as the individual component signal strength increases. Therefore, as was shown for both the clipper crosscorrelator and the optimum receiver, if one can spread signal energy over many components of the signal, there will be better performance than if the energy is concentrated in a few components. This is the same effect as the clipper crosscorrelator efficiency with no multiplicative noise displays; hence, when both are taken into account to get the over-all efficiency of such a device, one has a serious decrease in total efficiency unless the individual component strengths are very low.
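This graphical procedure can be imitated without Fig. 12 by computing p directly from the model: with a unit-mean multiplier m, the per-sample agreement probability is p = E_m[Φ(m √(2E_c/N_0))]; inverting Φ recovers the no-noise SNR giving the same p, and the efficiency compares the two. The gamma form assumed for the index-k Pearson Type III density, and the particular ratio convention, are our assumptions, not the report's tabulated values.

```python
import math

def Phi(x):
    # Standard normal distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):
    # Bisection inverse of Phi on [-10, 10]
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def p_with_mult_noise(snr, k, n=4000, upper=8.0):
    # p = E_m[Phi(m * sqrt(snr))], with the unit-mean Pearson III multiplier
    # modeled as Gamma(k+1, scale 1/(k+1)); trapezoidal quadrature over m
    lam = k + 1.0
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        m = i * h
        w = 0.5 if i in (0, n) else 1.0
        dens = lam * (lam * m) ** k * math.exp(-lam * m) / math.factorial(k)
        total += w * Phi(m * math.sqrt(snr)) * dens
    return h * total

def efficiency_mn(snr, k):
    # Equivalent no-noise SNR with the same p, divided by the actual SNR
    # (at most 1; its reciprocal is the extra energy the noise costs)
    p = p_with_mult_noise(snr, k)
    return Phi_inv(p) ** 2 / snr
```

As the text concludes, the loss grows with per-sample signal strength: efficiency_mn(4.0, 1) is well below efficiency_mn(0.1, 1), while a nearly deterministic multiplier (large k) gives an efficiency near one.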

6. CONCLUSIONS

We have considered the problem of designing a likelihood ratio receiver for detection of a signal that has been distorted by a randomly varying transmission loss and subsequently masked by an additive Gaussian noise. To simplify the analysis it was assumed that we could approximate the receiver input by a function from a finite-dimensional vector space of functions, and that this approximation was almost certainly a good approximation in the mean. A general likelihood ratio receiver design was found where the receiver input was assumed to have independent coordinates. The form of the receiver is the same whether the random multiplier, or multiplicative noise, has a Pearson Type III, a Rayleigh, or a truncated Gaussian distribution. When the variance of the multiplicative noise approaches zero, this receiver becomes essentially a crosscorrelator. The likelihood ratio receiver where the receiver input does not have independent coordinates may also be seen to be a generalization of the crosscorrelator.

The detectability of a signal distorted by multiplicative noise depends somewhat on its form. Since that part of the variance of each coordinate of the receiver input that is due to the multiplicative noise is proportional to the square of the corresponding signal coordinate, the effect of the multiplicative noise can be made negligible by choosing a signal of low amplitude and long duration (for a fixed 2E/N_0). Where a crosscorrelator receiver is used, and where only the form of the signal is variable (all the signal and noise parameters are fixed), it was seen that the variance of the receiver output was a minimum when the signal energy was uniformly distributed among its coordinates [or over the interval (0, T)].

APPENDIX

Q_k AND THE PARABOLIC CYLINDER FUNCTIONS D_ν

The authors wish to thank Prof. James A. MacFadden of Purdue University for pointing out that the Q_k functions are directly related to the parabolic cylinder function. From Bateman (Ref. 3), page 119, Section 8.3, Eq. 3,

    D_ν(z) = (e^{-z^2/4}/Γ(-ν)) ∫_0^∞ t^{-ν-1} e^{-zt - t^2/2} dt,   Re ν < 0   (72)

and therefore

    D_ν(z) = (e^{-z^2/4}/Γ(-ν)) Q_{-(ν+1)}(-z)                         (73)

Thus, for Q_k,

    Q_k(x) = Γ(k+1) e^{x^2/4} D_{-(k+1)}(-x)                           (74)

Bateman points out that D_ν can be related to the error function for negative integer values of ν (thus positive integer values of k) and to Bessel functions for ν = -1/2 (thus k = -1/2). No relation to available tables is mentioned for other parameter values.
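Equation (74) can be spot-checked for k = 1: the recurrence D_{ν+1}(z) - z D_ν(z) + ν D_{ν-1}(z) = 0, together with D_0(z) = e^{-z²/4} and D_{-1}(z) = e^{z²/4} √(π/2) erfc(z/√2), expresses D_{-2} through the error function, and (74) then reproduces the closed form Q_1(x) = 1 + x Φ(x)/φ(x) of Eq. (56). A numerical sketch:

```python
import math

def D(nu, z):
    # Parabolic cylinder function for nu = 0, -1, -2 via known closed forms
    if nu == 0:
        return math.exp(-z * z / 4.0)
    if nu == -1:
        return (math.exp(z * z / 4.0) * math.sqrt(math.pi / 2.0)
                * math.erfc(z / math.sqrt(2.0)))
    if nu == -2:
        # Recurrence D_{nu+1} - z D_nu + nu D_{nu-1} = 0, taken at nu = -1
        return D(0, z) - z * D(-1, z)
    raise ValueError("only nu = 0, -1, -2 implemented")

def Q1_via_D(x):
    # Eq. (74) with k = 1: Q1(x) = 1! * exp(x^2/4) * D_{-2}(-x)
    return math.exp(x * x / 4.0) * D(-2, -x)

def Q1_closed(x):
    # Eq. (56): Q1(x) = 1 + x * Phi(x)/phi(x)
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return 1.0 + x * Phi / phi
```

The two expressions agree to machine precision over the range of interest.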

REFERENCES

1. W. W. Peterson and T. G. Birdsall, The Theory of Signal Detectability, Cooley Electronics Laboratory Technical Report No. 13, The University of Michigan, Ann Arbor, Michigan, June 1953.

2. H. C. Carver, Mathematical Statistical Tables, Edwards Brothers, Inc., 1950.

3. Bateman Manuscript Project, Higher Transcendental Functions, Vols. 1-4, McGraw-Hill, 1953.

4. G. P. Patil and P. Cota, On Certain Strategies of Signal Detection Using the Clipper Crosscorrelator, Cooley Electronics Laboratory Technical Report No. 128, The University of Michigan, Ann Arbor, Michigan, October 1962.

5. T. G. Birdsall, "On the Extension of the Theory of Signal Detectability," U. S. Navy Journal of Underwater Acoustics, April 1961.

6. T. Kailath, "Correlation Detection of Signals Perturbed by a Random Channel," IRE Transactions on Information Theory, June 1960.

7. H. Cramer, Mathematical Methods of Statistics, Princeton University Press, 1957.

8. National Bureau of Standards, Table of Normal Probability Functions, U. S. Government Printing Office, 1949.

DISTRIBUTION LIST

Office of Naval Research (Code 468), Department of the Navy, Washington 25, D. C. (2 copies)
Office of Naval Research (Code 436), Department of the Navy, Washington 25, D. C. (1 copy)
Office of Naval Research (Code 437), Department of the Navy, Washington 25, D. C. (1 copy)
Director, U. S. Naval Research Laboratory, Technical Information Division, Washington 25, D. C. (6 copies)
Director, U. S. Naval Research Laboratory, Sound Division, Washington 25, D. C., Attn: Mr. W. J. Finney (1 copy)
Commanding Officer, Office of Naval Research Branch Office, The John Crerar Library Building, 86 East Randolph Street, Chicago 1, Illinois (1 copy)
Commanding Officer, Office of Naval Research Branch Office, Box 39, Navy No. 100, FPO, New York (8 copies)
Armed Services Technical Information Agency, Arlington Hall Station, Arlington 12, Virginia (10 copies)
Commander, U. S. Naval Ordnance Laboratory, Acoustics Division, White Oak, Silver Spring, Maryland, Attn: Mr. Derrill J. Bordelon, Dr. Ellingson (2 copies)
Commanding Officer and Director, U. S. Navy Electronics Laboratory, San Diego 52, California (1 copy)
Director, National Bureau of Standards, Connecticut Avenue and Van Ness St. N. W., Washington 25, D. C., Attn: Mrs. Edith Corliss (1 copy)
Office of Chief Signal Officer, Department of the Army, Pentagon Building, Washington 25, D. C. (1 copy)
Commanding Officer and Director, David Taylor Model Basin, Washington 7, D. C. (1 copy)
Superintendent, U. S. Navy Postgraduate School, Monterey, California, Attn: Prof. L. E. Kinsler (1 copy)
Commanding Officer, Air Force Cambridge Research Center, 230 Albany Street, Cambridge 39, Massachusetts (1 copy)
Chief, Office of Ordnance Research, Box C. M., Duke Station, Durham, N. C. (1 copy)
National Science Foundation, 1520 H Street N. W., Washington, D. C. (1 copy)
Commanding General, Wright-Patterson AFB, Dayton, Ohio (1 copy)
Commanding Officer, U. S. Navy Mine Defense Laboratory, Panama City, Florida (1 copy)
U. S. Naval Academy, Annapolis, Maryland, Attn: Library (1 copy)
Chief, Physics Division, Office of Scientific Research, HQ Air Research and Development Command, Andrews AFB, Washington 25, D. C. (1 copy)

DISTRIBUTION LIST (Cont.)

University of California, Marine Physical Laboratory of the Scripps Institution of Oceanography, San Diego 52, California, Attn: Dr. V. C. Anderson, Dr. Philip Rudnick (2 copies)
Harvard University, Acoustics Laboratory, Division of Applied Science, Cambridge 38, Massachusetts (1 copy)
Brown University, Department of Physics, Providence 12, R. I. (1 copy)
Western Reserve University, Department of Chemistry, Cleveland, Ohio, Attn: Dr. E. Yeager (1 copy)
University of California, Department of Physics, Los Angeles, California (1 copy)
Institute for Defense Analysis, Communications Research Division, von Neumann Hall, Princeton, New Jersey (1 copy)
Director, Columbia University, Hudson Laboratories, 145 Palisades Street, Dobbs Ferry, N. Y. (1 copy)
Woods Hole Oceanographic Institution, Woods Hole, Massachusetts, Attn: A. C. Vine (1 copy)
Johns Hopkins University, Institute for Cooperative Research, 34th and Charles Street, Baltimore 18, Maryland, Attn: Dr. W. H. Huggins (1 copy)
Edo Corporation, College Point, Long Island, N. Y., Attn: Mr. Charles J. Loda (1 copy)
Melpar, Inc., Applied Sciences Division, 11 Galen Street, Watertown, Mass., Attn: Dr. David Van Meter (1 copy)
Director, University of Miami Marine Laboratory, Miami, Florida, Attn: Dr. J. C. Steinberg (1 copy)
Director, U. S. Navy Underwater Sound Reference Laboratory, Office of Naval Research, P. O. Box 8337, Orlando, Florida (1 copy)
Commanding Officer and Director, U. S. Navy Underwater Sound Laboratory, Fort Trumbull, New London, Connecticut, Attn: Mr. W. R. Schumacher, Mr. L. T. Einstein (2 copies)
Commander, U. S. Naval Air Development Center, Johnsville, Pennsylvania (1 copy)
Dr. M. J. Jacobson, Department of Mathematics, Rensselaer Polytechnic Institute, Troy, New York (1 copy)
Dr. B. F. Barton, Director, Cooley Electronics Laboratory, The University of Michigan, Ann Arbor, Michigan (1 copy)
Project File, Cooley Electronics Laboratory, The University of Michigan, Ann Arbor, Michigan (34 copies)
Project File, The University of Michigan Office of Research Administration, Ann Arbor, Michigan (1 copy)
Director, Cooley Electronics Laboratory, The University of Michigan Office of Research Administration, Ann Arbor, Michigan, Attn: Mr. T. G. Birdsall (1 copy)
Office of Naval Research (Code 468), Department of the Navy, Washington 25, D. C., Attn: Dr. Skocklin, ONRL (1 copy)