Technical Report No. 130
3674-3-T

A COMPUTER-AUGMENTED TECHNIQUE FOR THE DESIGN OF LIKELIHOOD-RATIO RECEIVERS

by T. G. Birdsall and L. W. Nolte

Approved by: B. F. Barton

for COOLEY ELECTRONICS LABORATORY
The University of Michigan, Department of Electrical Engineering

Contract No. Nonr-1224(36)
Office of Naval Research, Department of the Navy
Washington 25, D. C.

October 1962


TABLE OF CONTENTS

LIST OF ILLUSTRATIONS
ABSTRACT
1. INTRODUCTION
2. REVIEW OF POWER SERIES APPROXIMATION METHODS
   2.1 Small-Input Method
   2.2 Small-Signal Method
3. A MULTIPLICATIVE NOISE PROBLEM INVOLVING RAPID FADING
   3.1 Description of the Problem
   3.2 Review of H-Type Multiplicative Noise Receiver
   3.3 Log-Normal Multiplicative Noise
   3.4 Review of Competing Designs in Approximations
4. A COMPUTER-AUGMENTED TECHNIQUE
   4.1 Rationale
   4.2 The Use of the Computer
   4.3 The "Five-Operational Freedom" Receiver Design
   4.4 Program Results
5. BYPRODUCTS
   5.1 Clipper Crosscorrelator
   5.2 Maximum-Likelihood Receivers
6. SUMMARY
REFERENCES
APPENDIX: COMPUTER DETAILS
DISTRIBUTION LIST

LIST OF ILLUSTRATIONS

Fig. 1. Small-input-method receiver.
Fig. 2. Small-signal-method receiver.
Fig. 3. Schematic of multiplicative noise problem.
Fig. 4. Receiver block diagram, H-type multiplicative noise.
Fig. 5. Serial processor for sampled inputs.
Fig. 6. Parallel processor for sampled inputs.
Fig. 7. Logarithm of likelihood ratio for signal known exactly in added white Gaussian noise.
Fig. 8(a). Logarithm of likelihood ratio for Pearson type-3 multiplicative noise, k = 1.
Fig. 8(b). Derivative of logarithm of likelihood ratio for Pearson type-3 multiplicative noise, k = 1.
Fig. 9. Block diagram, "five-operational freedom" receiver.
Fig. 10. Distribution functions, k = 1, b = 0.3466, b = 1.0986.
Fig. 11. ROC for one-sample case.
Fig. 12. Logarithm of likelihood ratio for log-normal multiplicative noise, b = 0.3466.
Fig. 13. Logarithm of likelihood ratio for log-normal multiplicative noise, b = 0.6289.
Fig. 14. Logarithm of likelihood ratio for log-normal multiplicative noise, b = 1.0986.
Fig. 15. Derivative of logarithm of likelihood ratio, ∂L/∂x, for log-normal multiplicative noise, b = 0.3466.
Fig. 16. Comparison of L and the output of a "two-operational freedom" receiver.
Fig. 17. Comparison of L and the output of a "two-operational freedom" receiver.
Fig. 18. Plots of the receiver design function C_s for b = 0.3466 and b = 1.0986.
Fig. 19. Plots of the receiver design function D_s for b = 0.3466 and b = 1.0986.

LIST OF ILLUSTRATIONS (Cont.)

Fig. 20. Plots of the receiver design function A_s for b = 0.3466 and b = 1.0986.
Fig. 21(a). Logarithm of likelihood ratio for log-normal multiplicative noise, b = 1.0986.
Fig. 21(b). Error of five-operational-freedom receiver: error in log-likelihood ratio (true minus approximate).
Fig. 22. Block diagram, clipper crosscorrelator.
Fig. 23. P(+1) for clipper crosscorrelator, multiplicative and added noise.
Fig. 24. Efficiency, due to multiplicative noise, for clipper crosscorrelator.
Fig. 25. Maximum-likelihood receiver.
Fig. 26. Maximum value of log-likelihood ratio for b = 0.3466, 1.0986 and Pearson type-3, k = 1.

ABSTRACT

Standard approximation methods for the design of detection receivers are reviewed, and a method based on fitting curves calculated on a high-speed computer is presented. This computer-augmented technique is applied to the problem of detecting a signal distorted by a sign-preserving, rapidly varying transmission gain.

1. INTRODUCTION

This report deals with the problem of designing special-purpose receivers for the detection of the presence of signals in a noisy environment. When there is sufficient information about the signal type and the noise type, the optimum design for a detection receiver is specified by statistical decision theory to be a likelihood-ratio receiver. When there is only one possible signal to be looked for, s(t), and the receiver input is x(t), then the proper receiver output is the likelihood ratio ℓ[x(t) | s(t)]. When the signal might be any one of an ensemble of signals, then the proper receiver output is

    ℓ(x|S) = ∫_S ℓ(x|s) dP(s)     (1)

It is assumed that the reader has previously been exposed to statistical decision theory and likelihood ratio. All that is really used here is (1) the basic definition of likelihood ratio as the ratio of the likelihood of occurrence of a specific receiver input when a signal is present to the likelihood of its occurrence when no signal is present, and (2) the motivation to design the receiver to calculate this likelihood ratio.

When one attempts to carry out the design of the receiver he finds that there are certain difficulties common to most problems. The problem of interest may be, as is often the case, one in which the basic design data are not in terms of analytic models of probability distribution functions, but are given by measured data. A second cause of design difficulties is the lack of adequate analytic tools for following the basic design procedure directly. Equation 1 states that the likelihood ratio, given an observation x, is the average likelihood ratio (the average of the likelihood ratios that would apply if the signals had been specified exactly), averaged with respect to the probability of occurrence of the individual signals that constitute the signal ensemble (Refs. 5, 6, 7, and 9). In a few simple cases this integral, with respect to the possible signals in a signal ensemble, has been determined in closed analytic form.
When this has been possible, the receiver design is based upon a direct implementation of that analytic form. When no closed form has been determined, there are two avenues open. The first is to forget the entire likelihood-ratio approach and to build what the designer considers good and adequate receivers in terms of his experience and knowledge of the state of the art in receiver design. The second is to approximate the likelihood ratio in one form or another, and to build receivers which implement this approximation.

The research reported here was undertaken with the purpose of studying the general problem of approximating likelihood ratio and the design of receivers specified by these approximations. A high-speed digital computer can often be used to evaluate a great many functions numerically at a relatively low cost in time and money. Our attempt has been to develop methods for using the computer to aid in the design of receivers.

Section 2 of this report is a short review of some approximation methods in general use. Section 3 introduces a specific problem which was used as a basis for developing one particular computer-augmented technique for receiver design. The previous work on this particular problem is reviewed, and in Section 4 the general method and the specific results of the computer technique developed in this research are reported. This work and some discussion of work remaining to be done in this area are summarized in Section 6.

The specific problem chosen to be solved was picked because of its practical importance: the case where signal strength in decibels has a Gaussian distribution. The objectives of this work were twofold: 1) to obtain a solution to this specific problem, and 2) to develop a method of solution that others could apply to their specific problems.
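For a discrete signal ensemble, the averaging in Eq. 1 is simply a probability-weighted sum of per-signal likelihood ratios. A minimal sketch in modern notation (Python; the unit-variance Gaussian added-noise model and all names are our own illustrative choices, not from the report):

```python
import math

def lr_known_signal(x, s):
    # Per-sample likelihood ratio for a signal known exactly in
    # unit-variance Gaussian noise: f(x|s)/f(x|0) = exp(x*s - s^2/2).
    return math.exp(x * s - 0.5 * s * s)

def lr_ensemble(x, signals, probs):
    # Eq. 1 for a discrete ensemble: average the per-signal
    # likelihood ratios with respect to P(s).
    return sum(p * lr_known_signal(x, s) for s, p in zip(signals, probs))

# Two equally likely signal amplitudes.
lr = lr_ensemble(0.5, [0.5, 1.5], [0.5, 0.5])
```

The same weighted average extends directly to any finite approximation of dP(s).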

2. REVIEW OF POWER-SERIES APPROXIMATION METHODS

Let us assume throughout this report that we are interested in determining an optimum receiver whose basic input data consist of samples. For simplicity of discussion we assume a finite number of such samples, which are statistically independent when the signal is not present. We shall denote the input by a vector X consisting of the components x_1, x_2, ..., x_n. When this is the case, the likelihood ratio based upon any one specific signal is equal to the product of the likelihood ratios for the individual components x_i:

    ℓ(X|s) = ∏_{i=1}^{n} ℓ(x_i|s)     (2)

A more useful version of Eq. 2 is obtained by taking logarithms of both sides, as is done in Eq. 3:

    ln ℓ(X|s) = Σ_{i=1}^{n} ln ℓ(x_i|s)     (3)

The usefulness of this equation in terms of equipment is that the right-hand side consists of a summation or integration in time, when the sampled inputs x_i occur in time. Hence this last form has the possibility of treating a waveform in real time. Let us concentrate our attention on determining the logarithm of the likelihood ratio for a single component, x_i.

2.1 Small-Input Method

The small-input method is based upon the expansion of the expression for the logarithm of likelihood ratio in a power series in the input variable x_i (Refs. 6 and 7); specifically,

    ln ℓ(x_i|s) = Σ_{n=0}^{∞} (x_i^n / n!) [∂^n ln ℓ(x|s) / ∂x^n]_{x=0}     (4)

where it is assumed that the power series is convergent. If such a representation is convergent it may be a good representation for small values of the input x when truncated after a few terms. If it is assumed that only the first n terms are retained, the resultant receiver consists of n parallel branches, the block diagram of which is shown in Fig. 1. The top branch, L(0, s), is a bias term which depends only on the value of the signal expected, s. The next branch weights the input with a function of s. Each succeeding branch takes an increasing power of the input, weights it with a function of s, and then sums the output. The number, n, of branches used will depend on how close an approximation to L(x, s) is required.

[Fig. 1. Small-input-method receiver.]

2.2 Small-Signal Method

An alternative power-series expansion method is to expand the log-likelihood ratio in a power series in s:

    ln ℓ(x_i|s) = Σ_{n=0}^{∞} (s^n / n!) [∂^n ln ℓ(x_i|s) / ∂s^n]_{s=0}     (5)

By truncating the power series after a finite number of terms, one obtains the block diagram for a parallel-branch receiver of n branches, as shown in Fig. 2. This receiver processes the input by distorting it in a given fashion in each branch and then crosscorrelating this distortion with a power of s. There are n branches, each having a different distorting function and a different power of s with which to crosscorrelate.

When using this method one may emphasize the probable usefulness for signals of a specific size by choosing a specific value about which to expand the power series. That is, if one is interested in designing a receiver to be primarily effective at received inputs in the neighborhood of zero-db signal-to-noise ratio, he may choose to use a power series expanded about the value s = 1 instead of s = 0. This may reduce the number of terms necessary in the expansion to obtain a sufficiently good approximation. In Fig. 2, the uppermost branch has been shown dotted, since the logarithm of the likelihood ratio for any input for a zero-strength signal is identically zero. However, had the power-series expansion been around any value of signal other than zero, this first branch might be the most significant part of the receiver.

[Fig. 2. Small-signal-method receiver.]
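The truncation in Eqs. 4 and 5 can be illustrated numerically. The sketch below is our own illustration, not from the report: it builds the small-signal series of Eq. 5 term by term, estimating the derivatives at s = 0 by central finite differences; for the signal-known-exactly case the log-likelihood ratio is quadratic in s, so three terms reproduce the exact curve.

```python
import math

def log_lr_ske(x, s):
    # Log-likelihood ratio, signal known exactly in unit Gaussian noise.
    return s * x - 0.5 * s * s

def small_signal_series(f, x, s, n_terms, h=1e-3):
    # Truncated power series of f(x, s) in s about s = 0 (Eq. 5);
    # n-th derivatives at s = 0 estimated by binomial central
    # finite differences with step h.
    total = 0.0
    for n in range(n_terms):
        d = sum((-1) ** (n - k) * math.comb(n, k) * f(x, (k - n / 2) * h)
                for k in range(n + 1)) / h ** n
        total += d * s ** n / math.factorial(n)
    return total

approx = small_signal_series(log_lr_ske, x=1.2, s=0.4, n_terms=3)
exact = log_lr_ske(1.2, 0.4)
```

For log-likelihood ratios that are not polynomial in s, the truncation error grows with s, which is the limitation discussed above.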

3. A MULTIPLICATIVE NOISE PROBLEM INVOLVING RAPID FADING

3.1 Description of the Problem

Figure 3 shows schematically the problem considered in this section. In Fig. 3(a) is depicted a receiver input caused by the addition of white Gaussian noise and either silence or a rapidly varying signal. To be specific, it is assumed that the average value of the signal at any moment of time is known to be s(t) and that the variation of the signal about its average value is proportional to this average value at every moment in time. Thus we can schematically show the situation as in Fig. 3(b): a signal known exactly, s(t), is multiplied by a noise, m(t), the result is added to white Gaussian noise, a(t), and this sum is presented to the receiver input.

The term "rapid fading" means that if samples of the receiver input are taken at a rate commensurate with the bandwidth of the added noise, so that the noise-alone samples are statistically independent, then the samples of the multiplicative noise alone are also statistically independent. This is the problem to which prime attention is paid in this report. Section 4 discusses the problem of a multiplicative noise, m(t), which varies so slowly that it is essentially constant over the entire observation period. We shall do no more than wave our hands at the intermediate problems of a slowly varying multiplicative noise.

[Fig. 3. Schematic of multiplicative noise problem.]
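The rapid-fading model of Fig. 3(b) is straightforward to simulate. The sketch below (an illustration of ours, not from the report) draws samples x_i = m_i s_i + n_i with unit-variance Gaussian added noise; the log-normal form for m anticipates Section 3.3, and the parameter values are arbitrary:

```python
import math
import random

def lognormal_m(rng, b):
    # Multiplicative noise sample: log m is Gaussian with mean -b and
    # variance 2b, so that E(m) = exp(-b + b) = 1.
    return math.exp(rng.gauss(-b, math.sqrt(2.0 * b)))

def received_sample(rng, s, b):
    # Fig. 3(b): x = m*s + n, with the added noise normalized to
    # unit variance as in the report.
    return lognormal_m(rng, b) * s + rng.gauss(0.0, 1.0)

rng = random.Random(1962)
xs = [received_sample(rng, s=1.0, b=0.3466) for _ in range(20000)]
avg = sum(xs) / len(xs)
```

Since E(m) = 1, the sample mean of x under signal-plus-noise approaches s, while the variance is inflated by the multiplicative noise.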

When the multiplicative noise is varying so that samples taken at the independence rate of the added noise are also independent, the receiver can be viewed as making a sequence of observations such that the distribution of possible signals at one sampling instant is the same as at every other sampling instant, except for the relation due to the known parameter s(t). Hence the likelihood ratio for a complete observation can be determined as the product of the likelihood ratios of the individual samples, as in Eqs. 2 and 3. Thus all receivers that we shall consider can be viewed as operating on each input sample independently, producing the log-likelihood ratio for that sample, and adding or "integrating" these over the total observation time. Most of our attention, then, will be on the determination of the logarithm of the likelihood ratio for the individual discrete samples x_i.

When the added noise is white Gaussian, we can write the likelihood ratio in a specific form corresponding to Eq. 1:

    ℓ(x_i|s_i) = ∫ ℓ(x_i|m s_i) dP(m s_i)
               = ∫_0^∞ exp(x_i m s_i − m² s_i²/2) f(m) dm     (6)

where the added noise a(t) has μ(a) = 0 and σ²(a) = 1. Since the only variation in the signal ensemble is the variation due to the implied multiplicative noise, m(t), the likelihood ratio for a specific signal is given in the familiar form of a likelihood ratio for a signal known exactly in added white Gaussian noise. The units used for measuring all values of receiver input are relative to the rms voltage of the added noise. That is, it is convenient to assume that an attenuator has been placed at the input to the receiver, so that the measured rms value of the added-noise voltage is 1 volt, and the signal strength is measured relative to this noise variance.
The units of m(t) are already in normalized form, since the real variable is the quantity m times s, m being dimensionless and s being normalized with respect to the added noise standard deviation. The term f(m) in Eq. 6 is the probability-density function for the multiplicative noise. Various multiplicative noise problems in added white Gaussian noise are named by giving the name for this density function.
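Equation 6 can be evaluated by direct numerical quadrature over f(m). A sketch (our own illustration: midpoint rule, truncated upper limit, and the Pearson type-3 density with k = 1 normalized so E(m) = 1 chosen purely as an example density):

```python
import math

def pearson3_k1(m, c=2.0):
    # Pearson type-3 density with k = 1: f(m) = c^2 m exp(-c m),
    # normalized so that E(m) = 2/c = 1.
    return c * c * m * math.exp(-c * m)

def log_lr_sample(x, s, f, m_max=12.0, steps=4000):
    # Eq. 6 by midpoint-rule quadrature on [0, m_max]: the per-sample
    # log-likelihood ratio for multiplicative-noise density f(m).
    dm = m_max / steps
    total = 0.0
    for i in range(steps):
        m = (i + 0.5) * dm
        total += math.exp(x * s * m - 0.5 * (s * m) ** 2) * f(m) * dm
    return math.log(total)

L = log_lr_sample(1.0, 1.0, pearson3_k1)
```

At s = 0 the integral collapses to the total probability, so the log-likelihood ratio is zero, a convenient sanity check on the quadrature.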

3.2 Review of H-Type Multiplicative Noise Receiver

A previous report (Ref. 4) considered the multiplicative noise problem where the distribution f(m) was Pearson type-3, and a generalization of this called the H-type distribution. The general H-type density function is

    f(m) = A m^k e^{−cm} e^{−0.5 b m²},  m ≥ 0     (7)

The likelihood ratio for an individual input sample x_i for a known value of the mean signal s is

    ℓ(x_i, s) = A ∫_0^∞ m^k exp[(x_i s − c)m − (s² + b) m²/2] dm     (8)

That report showed that the function L_k(z),

    L_k(z) = ln ∫_0^∞ ρ^k e^{ρz} e^{−ρ²/2} dρ     (9)

a function directly related to logarithms of parabolic cylinder functions (Ref. 1), is of special importance. For H-type multiplicative noise, Eq. 8 reduces to

    ln ℓ(x_i, s) = L_k[(x_i s − c)/√(s² + b)] − ((k+1)/2) ln(s² + b) + ln A     (10)

When the value of the parameter b is zero, the multiplicative noise is Pearson type-3. The resultant block diagram is shown in Fig. 4; the order of the first blocks is somewhat arbitrary.

[Fig. 4. Receiver block diagram, H-type multiplicative noise.]

As shown in Fig. 4, the input sample is first selectively rectified, that is, multiplied by the sign of the expected signal; this is added to a bias term which is inversely proportional to the magnitude of the expected signal. The resultant sum is multiplied by |s|/√(s² + b) and passed

through the amplitude nonlinearity L_k. The nonlinearity is specified solely by the multiplicative-noise-type parameter k. The output of this nonlinear-gain amplifier is integrated with a bias term to get the logarithm of the likelihood ratio. This last bias term could be omitted; appropriate cut levels for operating detectors can be established for either condition.

The reason for reviewing the work on this specific multiplicative noise problem was to point out the relatively simple form of the receiver, consisting of selective rectification, the addition of bias terms and gain before a nonlinear amplifier, and integration after this amplifier. This form may be familiar to many as the "second detector" pre-emphasis or peaking function before a "video integration" in many radar receivers and other types of multiple-pulse detection devices.

3.3 Log-Normal Multiplicative Noise

A problem which seems quite similar to the H-type and Pearson type-3 multiplicative noise considered in Section 3.2 is that where the distribution of the logarithm of the multiplying noise is normal, or Gaussian. If we choose such a multiplying noise so that its expected value is unity, and if we denote the expected value of the logarithm of the multiplying noise as −b, where b > 0, then the density function for the logarithm is the normal density function

    f(log m) = [1/(2√(πb))] e^{−(log m + b)²/(4b)}     (11)

The probability-density function for m itself is simply the Jacobian of the transformation, 1/m, times this:

    f(m) = [1/(2√(πb) m)] e^{−(log m + b)²/(4b)}
    E(m) = 1,  E(log m) = −b,  b > 0     (12)

When f(m) (Eq. 12) is inserted in the general form for the likelihood ratio of a single sample (Eq. 6), the analyst discovers he has an integral with very nice convergence properties about which very little is known. The authors have been unable to determine any simplifying method, such as the grouping and change of variables which was so useful with the Pearson type-3 and H-type multiplicative noise, and hence are forced to produce a receiver design by the use of approximation.

3.4 Review of Competing Designs in Approximations

In Section 2 we considered two types of approximations, both power-series methods, which would result in parallel-branch receivers. In some applications it might be possible to realize the computation by fairly direct means. For instance, consider a situation where the samples are actually taken at times relatively far apart with respect to the speed of electronic equipment. In such a case there might be time to perform signal processing between input samples to the receiver; it might be feasible to perform the integration over the variable m between these input samples. A block diagram of such a receiver is shown in Fig. 5. The envisioned configuration makes use of a waveform generator which generates an analog version of the inverse of the probability distribution function for the multiplicative noise. In this receiver an input sample, x_i, is held for M seconds while the analog-computer-like circuitry approximates the function L(x_i, s) in real time. For a fixed observation interval T = Mn, containing n samples, all of which are due either to noise alone or to signal-plus-noise, an optimum output is the sum of the log-likelihood ratios for the individual samples. The final integrator in the block diagram performs this last summation over all x_i.

[Fig. 5. Serial processor for sampled inputs. (a) Time waveform of "m" generator for serial processor. (b) Block diagram.]
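The reduction of Eq. 8 to the L_k form of Eq. 10 can be checked numerically. The sketch below is our own illustration; the midpoint quadrature, the truncated upper limits, and the parameter values are all arbitrary choices made for the check:

```python
import math

def quad(f, a, b, n=20000):
    # Midpoint-rule integral of f over [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def L_k(z, k, upper=30.0):
    # Eq. 9: L_k(z) = ln of the integral of r^k exp(r z - r^2/2), r >= 0.
    return math.log(quad(lambda r: r ** k * math.exp(r * z - 0.5 * r * r),
                         0.0, upper))

def log_lr_direct(x, s, k, c, b, A, upper=30.0):
    # Eq. 8 integrated numerically for H-type multiplicative noise.
    g = s * s + b
    return math.log(A * quad(
        lambda m: m ** k * math.exp((x * s - c) * m - 0.5 * g * m * m),
        0.0, upper))

def log_lr_via_Lk(x, s, k, c, b, A):
    # Eq. 10: the same quantity expressed through L_k by the
    # substitution r = m * sqrt(s^2 + b).
    g = s * s + b
    return (L_k((x * s - c) / math.sqrt(g), k)
            - 0.5 * (k + 1) * math.log(g) + math.log(A))

direct = log_lr_direct(1.0, 1.0, 1, 2.0, 0.5, 4.0)
reduced = log_lr_via_Lk(1.0, 1.0, 1, 2.0, 0.5, 4.0)
```

The two evaluations agree to quadrature accuracy, which is the content of the change of variables behind Eq. 10.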

[Fig. 6. Parallel processor for sampled inputs.]

If the detection situation does not allow real-time processing between input samples, a possible alternative is a parallel-branch receiver such as is shown in Fig. 6. This receiver utilizes a digital-computer-type configuration to yield an approximate integration of the integral with respect to m; that is, a discrete number of values are used to approximate the distribution. These four possible receiver configurations should be considered only when the problem can be solved by the use of a very few branches, or when a special computing technique is available and is warranted by the importance of the detection problem.

4. A COMPUTER-AUGMENTED TECHNIQUE

4.1 Rationale

The authors have been impressed by the complexity of solution suggested by either the standard power-series approximations or the direct implementation, by analog or digital methods, of the calculation of the likelihood ratio. They have also been impressed by the simplicity of the receiver design resulting from the study of Pearson type-3 multiplicative noise. It seemed, intuitively, that one could cut down on the number of parallel operations, and reduce the complexity of operation in each branch, if the operations chosen to be performed by each branch were tailored to the detection problem. Specifically, the role of the nonlinear amplifier as an important element in the detection device should be analyzed. In the power-series methods the choice of a finite expansion in terms of powers of either the input or the signal means that the nonlinearities involved in the receiver must necessarily incorporate pure power terms of either the input or the local expected signal. In many cases the logarithm of the likelihood ratio does not behave like a pure power of either the input or the signal.

The authors also wished, whenever possible, to incorporate a second feature into any method developed: the freedom to design the receiver around any specific signal value that is desired. This can be especially important in detection receiver design when one argues as follows: when the individual component signals are large compared to the added noise, almost any good receiver is almost as good as the optimum, since receiver efficiency usually goes toward one as the signal strength increases. On the other hand, when the signal is very small compared to the added noise (that is, as the signal strength approaches zero), not even the optimum receiver can perform a suitable detection job, simply because there is so little difference between the signal and the noise.
Somewhere in between lies the proper range of emphasis for detection receiver design. This philosophy is not new, and this medium range has long been called the "threshold zone," or "signals near the minimum detectable signal strength," but much of the theoretical literature has not followed this philosophy. One must hasten to mention that in some cases, for signals at and below the threshold region, the receiver design is relatively insensitive to the actual signal size, and hence the receiver can be designed on a very-small-signal basis and may be expected to behave satisfactorily, in application, in the region of interest. However, this insensitivity to signal strength is most easily observed in hindsight, once the optimum receiver has been determined. Such hindsight is not available in a great many problems.

4.2 The Use of the Computer

The procedure suggested in this report consists of using a high-speed digital computer to compute a family of curves of the logarithm of likelihood ratio, and possibly a family of curves of the derivative of log-likelihood ratio with respect to the receiver input, for various values of signal strength s. These plots or values are then examined for "simple" relationships between the curves, so that the simplest receiver design becomes apparent. Thus the suggested method is half science and half art (as is most receiver design involving approximation, where the methods are formal but the choice of the forms is left to the designer's discretion).

As an example, consider the standard first case of signal detection in added white Gaussian noise. In this case one has to make a decision in some fixed observation interval T as to whether the input was noise alone or the sum of the noise and a single specific waveform, a "signal known exactly." The well-known result for the log-likelihood ratio in this example is

    L(x_i, s_i) = ln ℓ(x_i, s_i) = x_i s_i − 0.5 s_i² = s_i(x_i − 0.5 s_i)     (13)

Since the inputs are statistically independent in both noise and signal-plus-noise, the logarithm of the likelihood ratio for the entire reception is the integrated value of the logarithm of the likelihood ratio for the individual receptions. This leads to the familiar crosscorrelator, which integrates the instantaneous product of the input and the expected signal. If one follows the theory to the letter, a bias term consisting of half the signal strength is added either before or after the integration, as one chooses.
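In discrete form, the crosscorrelator of Eq. 13 summed over independent samples is one line. A sketch (the sample values are illustrative only):

```python
def log_lr_total(xs, ss):
    # Eq. 13 summed over independent samples: the crosscorrelator
    # with a bias of half the signal energy,
    # ln LR = sum_i s_i * (x_i - 0.5 * s_i).
    return sum(x * s - 0.5 * s * s for x, s in zip(xs, ss))

out = log_lr_total([0.9, -0.2, 1.1], [1.0, -1.0, 1.0])
```

The product term is the crosscorrelation with the expected signal; the 0.5 s_i² terms are the bias that may be lumped before or after the integration.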
If the analytic form had been unavailable, and the computer output of the log-likelihood ratio and its derivative with respect to the input had been available, one would have been able to recognize the receiver form from these plots, as is shown in Fig. 7.

[Fig. 7. Logarithm of likelihood ratio for signal known exactly in added white Gaussian noise.]

Let us return to the preceding example of Pearson type-3 multiplicative noise and choose the specific parameter value k = 1. In Fig. 8 are families of curves of the log-likelihood ratio and its derivative with respect to x, obtained from a rather simple program on a digital computer. One observes in Fig. 8(a) that a simple translation of the horizontal and vertical axes matches the s = 1 curve to all the others. It is also true that in Fig. 8(b) a simple translation of the horizontal axis matches the s = 1 curve to all of the others. The amount of these translations depends on the value of s. This agrees with the receiver design previously obtained (Eq. 10 and Fig. 4).

4.3 The "Five-Operational Freedom" Receiver Design

We shall now consider a specific method of designing a likelihood-ratio receiver based upon curves of the log-likelihood ratio and its derivative, supplied by a computer. There are perhaps numerous approaches for determining the relationship between the curves produced by the computer; one hopes to find "simple" relationships. We shall consider a receiver design in which five "natural and simple" relationships are permitted. Although several alternative descriptions of the method are available, the one we shall choose here is a geometric description, which seems apt for describing the relationship between curves. We shall assume that the signal range of interest is near 0 db, s = 1, and that the following five curve-matching operations are allowed: 1) a horizontal shift, 2) a vertical shift, 3) a horizontal scaling, 4) a vertical scaling, and 5) rotation.

[Fig. 8(a). Logarithm of likelihood ratio for Pearson type-3 multiplicative noise, k = 1.]

[Fig. 8(b). Derivative of logarithm of likelihood ratio for Pearson type-3 multiplicative noise, k = 1.]

Using these operations on L(x, s = 1), it might be possible to express the likelihood ratio for any s, L(x, s), in the following manner:

    L(x, s) = c_s[(a_s x + b_s) sin θ_s + L(a_s x + b_s, s=1) cos θ_s] + d_s     (14)

Collecting terms, one simplifies this to

    L(x, s) = A_s x + B_s L(C_s x + D_s, s=1) + E_s     (15)

The block diagram which will realize this is shown in Fig. 9. In general this is a two-branch receiver containing one nonlinear function, here based on a signal strength of 0 db, that is, s = 1. Had there been no multiplicative noise, only the lower branch would have been active and the function A_s would have been s itself. For the type-3 multiplicative noise only the upper branch would have been active; in fact, the functions C_s and B_s are identically 1, so that the multipliers would not be important. In general one hopes to be able to get by with as few of these basic operations as possible in relating the family of curves to that for s = 1.

[Fig. 9. Block diagram, "five-operational freedom" receiver.]

If an exact relationship between the family of curves is not obtained using all five operations, it might still be possible to obtain a very close approximation with this method. What we have described is obviously an arbitrary method for designing a receiver. The authors feel that most other designs based upon approximation methods are equally arbitrary, although possibly the arbitrariness is not so obvious.

The computer program for obtaining curves of the function L and its derivative with respect to the input is based upon a form of Stieltjes integration. Equation 6 is repeated

here to remind the reader of the form of the log of the likelihood ratio for a given receiver input sample x, when a given mean rms signal value s is expected if the signal is present:

    L(x, s) = ln ∫ e^{xsm} e^{−m²s²/2} dF(m)     (6)

The distribution of the multiplicative noise, F(m), was approximated by M discrete steps, each of the same probability magnitude. This yielded the computer approximation indicated in Eq. 16:

    L(x, s) ≈ ln [(1/M) Σ_{i=1}^{M} e^{xsm_i} e^{−m_i²s²/2}]     (16)

The equation used by the computer for the approximation to the derivative with respect to the input is

    ∂L(x, s)/∂x ≈ s [Σ_{i=1}^{M} m_i e^{xsm_i} e^{−m_i²s²/2}] / [Σ_{i=1}^{M} e^{xsm_i} e^{−m_i²s²/2}]     (17)

The exact manner in which the M "representative" samples were chosen is described in the Appendix.

4.4 Program Results

Table I lists the values of the parameter b (the negative of the average value of the logarithm of the multiplicative noise) together with the variance of the multiplicative noise expressed in db, the more common way of thinking of the amount of noise. For comparison, the values of the index k of the Pearson type-3 multiplicative noise that were treated in the previous study are also shown.

Table I. Parameter values for log-normal and Pearson-3.

    Variance of              Log-Normal    Pearson-3
    Multiplicative Noise     Parameter     Parameter
    σ²(m)                    b             k
    0.125  (-9 db)           0.059         7
    0.250  (-6 db)           0.112         3
    0.500  (-3 db)           0.225         1
    1      ( 0 db)           0.3466
    2      ( 3 db)           0.5493
    4      ( 6 db)           0.8047
    8      ( 9 db)           1.0986

Table II. RMS error in receiver output.

    s      Single-branch    Two-branch
           Receiver         Receiver
    0.4    0.026            0.024
    1.0    0                0
    3.0    0.120            0.056

The probability distribution functions for three cases are plotted in Fig. 10: the distribution of the multiplicative noise for a Pearson type-3 with parameter value k = 1, which has a variance of -3 db; the log-normal multiplicative noise with parameter value b = 0.3466, which is a multiplicative-noise variance of 0 db; and the value b = 1.0986, corresponding to 9-db multiplicative-noise variance.

[Fig. 10. Distribution functions, k = 1, b = 0.3466, b = 1.0986.]

When only one sample of input is to be integrated, no receiver design problem exists, since the input itself is monotone with likelihood ratio. However, one can grasp the effect of the multiplicative noise by considering the performance of a receiver based upon one sample of input for various amounts of multiplicative noise. Figure 11 shows the one-sample ROC curve for the Pearson type-3 case of -9, -6, and -3 db multiplicative noise and for a variety of amounts of multiplicative noise for the log-normal case. For this ROC the signal size was +6 db with respect to the added noise. On this graph is also plotted the case b = 0, which corresponds to no multiplicative noise. The obvious effect of more and more variance in the multiplicative noise is a greater and greater loss in detectability. These figures suggest that the performance with multiplicative noise larger than 0 db can be smoothly extended from the Pearson type-3 noise into the log-normal noise, and the two cases can be considered virtually together.

There is another interpretation of these same curves, namely the extremely-slow-fading case. If the multiplicative noise is relatively constant over an entire reception and decision, no matter how many samples this contains, the receiver again is a simple selective rectifier, that is, a device which multiplies the input by the sign of the expected signal and integrates. Hence the performance is based upon one sample of the multiplicative noise and an arbitrary number of samples of added noise and signal. The performance of the receiver is a function only of the type and amount of multiplicative noise and the 2E/N ratio of signal to added noise.

The plots in Figs. 12 through 14 show the computed curves of L(x, s) for 0 db, 4.5 db, and 9 db multiplicative noise. In Fig. 15 is displayed the derivative of L with

Fig. 11. ROC for one-sample case.

respect to the input for 0 db multiplicative noise. Note that for very small s the curves are quite linear, indicating that for very small signal size per sample the ordinary crosscorrelator is a good receiver. The actual magnitudes of the signal-to-noise ratio are: s=.2 means -14 db, s=.4 means -8 db, etc. This suggests how small the input signal must be for the ordinary crosscorrelator to remain a good receiver.
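The slow-fading receiver described above, a selective rectifier that multiplies each input sample by the sign of the corresponding expected-signal sample and integrates, can be sketched as follows (the function name is hypothetical):

```python
import math

def selective_rectifier(x, s):
    """Slow-fading receiver: multiply each input sample by the polarity
    of the corresponding expected-signal sample, then integrate (sum)."""
    return sum(xi * math.copysign(1.0, si) for xi, si in zip(x, s))
```

The output grows with agreement between input and expected-signal polarities; a threshold on the sum then gives the detection decision.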

Fig. 12. Logarithm of likelihood ratio for log-normal multiplicative noise, b=.3466.

Fig. 13. Logarithm of likelihood ratio for log-normal multiplicative noise, b=.6289.

Fig. 14. Logarithm of likelihood ratio for log-normal multiplicative noise, b=1.0986.

Fig. 15. Derivative of logarithm of likelihood ratio, ∂L/∂x, for log-normal multiplicative noise, b=.3466.

Fig. 16. Comparison of L and the output of a "two-operational freedom" receiver.

Fig. 17. Comparison of L and the output of a "two-operational freedom" receiver.

The technique used in designing the receiver was the one outlined previously, using five basic operations. At first an attempt was made to use just horizontal and vertical translations; this was based upon the results of the Pearson type-3 problem. It did not work well over the entire range of receiver inputs of interest. Figures 16 and 17 show the degree of success when one uses only these two operations for b=1.0986. L(x, s) is matched in Fig. 16 for large positive x, and in Fig. 17 for negative x. The corresponding receiver is the one shown in Fig. 4, involving the selective rectification, an additive bias depending upon the signal strength, and a single nonlinearity. As successively more complexity was added to the receiver, and more freedom to the fitting process, it was found necessary to use all five of the previously mentioned operations. The receiver design is thus the two-branch receiver of Fig. 9. The functions of the signal strength are presented in Figs. 18 through 20 for 0 db and 9 db multiplicative noise. Although it is not plotted in these figures, the same basic design and the same nature of the functions held for the intermediate value of 4-1/2 db multiplicative noise variance. The value of the final biasing term Cs was not determined, since this function is one of convenience in setting the final decision bias and is relatively unimportant in the receiver design. These curves are presented here to show the general nature of the result. No thorough investigation of the form, meaning, or importance of these receiver design functions was carried out, since the intent of this investigation was to study the possibility of receiver design for a somewhat hypothetical problem. The authors feel that the process for receiver design has been shown to be valid.
A more thorough investigation would involve an improved computer technique for determining smooth curves for these functions, and an investigation of the best way of determining them, since they are interrelated and since the present methods of fitting were somewhat arbitrary as to the relative importance placed upon each of the functions. It is of interest to see how closely the receiver shown here, using these parameter values, approximates the logarithm of the likelihood ratio. Fig. 21(a) therefore shows the value of the log likelihood ratio as computed by the IBM 704, and Fig. 21(b) shows the error in the log likelihood ratio of the output of the five-operational freedom receiver. The s=1 curve is matched exactly by the design procedure. Table II summarizes the rms error in the receiver output for two receivers. One receiver is the two-branch receiver of Fig. 9; the other is that of Fig. 9 with A = 0, i.e., a single-branch receiver.

Fig. 18. Plots of the receiver design function Cs for b=.3466 and b=1.0986.

Fig. 19. Plots of the receiver design function Ds for b=.3466 and b=1.0986.

Fig. 20. Plots of the receiver design function As for b=.3466 and b=1.0986.

Fig. 21(a). Logarithm of likelihood ratio for log-normal multiplicative noise, b=1.0986.

Fig. 21(b). Error of five-operational freedom receiver. Error in log-likelihood ratio (true minus approximate).

5. BYPRODUCTS

Several computations that were closely allied with the main study are included in this report. These relate to the design of certain nonoptimum (non-likelihood-ratio) receivers for the same problem of log-normal multiplicative noise with added Gaussian noise. In Section 5.1 the detection efficiency of a clipper crosscorrelator is derived, and in Section 5.2 the maximum-likelihood functions derived from previous curves are plotted.

5.1 Clipper Crosscorrelator

A clipper crosscorrelator is a practical receiving device which is often used when amplitude fluctuations are a serious problem. Although one of the prime reasons for its use is amplitude fluctuation in the noise level, in this report the amplitude of the noise is assumed stable. A simple diagram of a clipper crosscorrelator is shown in Fig. 22. Although the clipping level need not be at zero, we have assumed in this report that the level is at zero, as it is in most practical clippers used for detection purposes. The clipping circuit output is the polarity of the input signal. This operation is sometimes referred to as "hard clipping" or "infinite clipping" to differentiate it from the saturation type of peak clipping that occurs in many amplifying devices. The polarity of the input is then compared with the polarity of the expected signal. If the polarities of the two are the same, the input to the adder is +1; if the polarities are different, the input to the adder is not +1 (either 0 or -1 is commonly used). The analysis contained here follows the assumption of the rest of the report: that the inputs are statistically independent samples. If the integrator is operated for a fixed number of samples, that is, over a fixed time, then the output has a binomial amplitude distribution which is a function only of the number of samples considered and the probability of having a +1 at the integrator input on each of the identical independent samples.
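The clipper crosscorrelator just described can be sketched directly: hard-clip the input and the locally stored signal to their polarities, then accumulate +1 for each polarity agreement and -1 for each disagreement (using the -1 convention mentioned above; the function names are hypothetical):

```python
def clipper_crosscorrelator(x, s):
    """Hard-clip input and expected signal to +/-1 polarities, then
    integrate: +1 into the adder on polarity agreement, -1 otherwise."""
    def clip(v):                      # "infinite" clipping at zero level
        return 1 if v >= 0 else -1
    return sum(1 if clip(xi) == clip(si) else -1 for xi, si in zip(x, s))
```

Over n independent samples, the count of agreements is binomial with parameters n and p, as stated in the text.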
Because the input signal clipper is at the median of the noise, this probability is exactly .5 when noise alone is present at the input. When the input is due to signal-plus-noise, the probability of a +1 has been determined as a function of the 2E/N0 ratio for the individual samples and the variance of the multiplicative noise.

Although ROC curves could have been determined for this specific case, a much briefer analysis, which isolates the effect of the multiplicative noise from the total receiver performance, was used. A more thorough analysis of the general clipper-crosscorrelator performance, the ROC curves, and the efficiency is given in Ref. 8, and a brief analysis of the efficiency in the absence of multiplicative noise is given in Ref. 2. The analysis here is based on the observation that the performance of the clipper crosscorrelator can be broken into two parts: first, the effect of the physical parameters (signal strength, additive noise strength, and multiplicative noise variance) on the probability of having a positive input to the adder, and second, the general effect of using a clipper crosscorrelator. The latter is a function solely of the probability p, the binomial distribution, and the operating point desired on the ROC. The first effect, that of obtaining a certain probability at the input to the adder, can easily be studied by numerical methods.

Fig. 22. Block diagram, clipper crosscorrelator.

Let us denote the energy of a single sample as Ec. When there is no multiplicative noise, but only added white Gaussian noise of power density N0 watts per cycle per second, then the probability that the input polarity corresponds to the expected signal polarity, that is, the probability of a +1 to the adder, is given by the normal probability distribution function

    p = Φ(√(2Ec/N0)) = ∫ from -∞ to √(2Ec/N0) of (1/√(2π)) e^(-t²/2) dt        (18)

This is the straight line, the uppermost curve, in Fig. 23. Using the digital computer, graphs of the probability of a correct polarity were determined for a wide range of log-normal multiplicative-noise variance values. These are shown in Fig. 23. They indicate that there is very

Fig. 23. P(+1) for clipper crosscorrelator, multiplicative and added noise.

little effect for small input signal-to-noise ratios, that is, small 2Ec/N0, and that there is also very little effect unless the variance of the multiplicative noise is more than about -10 db (of course each reader must use his own criterion for "very little"). A somewhat more meaningful interpretation of these results is shown in Fig. 24, where the efficiency loss is considered as a function of the amount of multiplicative noise and of the individual-sample signal-to-noise ratios. The efficiency η_mn is defined as the ratio of the signal energy required to give a certain performance without multiplicative noise to the signal energy required to give equivalent performance with multiplicative noise, using the clipper crosscorrelator. This efficiency was determined by using the plots of Fig. 23 in the following manner: for a given 2Ec/N0 ratio the graph is entered and the ordinate read for the appropriate amount of multiplicative noise. At this ordinate value one also reads the 2Ec/N0 ratio for the case with no multiplicative noise; that is, one determines the input strength necessary to get the same p value. The efficiency is simply the direct comparison of these two energy ratios.
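Equation (18) and the graph-reading procedure for the efficiency can be reproduced numerically. The sketch below (function names hypothetical) computes p = Φ(√(2Ec/N0)) and then inverts the no-multiplicative-noise curve by bisection to find the equivalent energy, exactly as described for reading Fig. 23:

```python
import math

def phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_plus_one(snr):
    """Eq. (18): probability of a +1 to the adder with added white
    Gaussian noise only, p = Phi(sqrt(2Ec/N0)); snr is the 2Ec/N0 ratio."""
    return phi(math.sqrt(snr))

def efficiency(snr_with_mn, p_observed):
    """Given the 2Ec/N0 used with multiplicative noise and the resulting
    probability p of a correct polarity (as read from Fig. 23), bisect
    p = Phi(sqrt(snr)) for the no-multiplicative-noise 2Ec/N0 giving the
    same p, and return the ratio of the two energy requirements."""
    lo, hi = 0.0, 100.0
    for _ in range(200):                 # bisection on the monotone curve
        mid = 0.5 * (lo + hi)
        if p_plus_one(mid) < p_observed:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / snr_with_mn
```

With no multiplicative noise the observed p equals p_plus_one(snr) and the efficiency is 1; a degraded p yields an efficiency below 1, matching Fig. 24.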

Fig. 24. Efficiency, due to multiplicative noise, for clipper crosscorrelator.

One concludes from this efficiency plot that the multiplicative noise causes a progressively larger loss in efficiency as the individual component signal strength increases. Therefore, as was shown for both the clipper crosscorrelator and the optimum receiver, if one can spread the signal energy over many components of the signal, there will be better performance than if the energy is concentrated in a few components. This is the same effect as the clipper crosscorrelator displays in the absence of multiplicative noise; hence, when both are taken into account to obtain the over-all efficiency of such a device, there is a serious decrease in total efficiency unless the individual component strengths are very low.

5.2 Maximum-Likelihood Receivers

A procedure that is sometimes recommended (Ref. 5) for the design of receivers is to have the receiver output be the maximum value of the likelihood ratio for the input over all possible signal levels. Let

    M(x) = max_s L(x, s)

Then the receiver, following the assumption of independent sample values, is as shown in Fig. 25. The functions M(x) for several values of b are plotted in Fig. 26. When the input is of the opposite sign from s, then M is zero.

Fig. 25. Maximum-likelihood receiver.

Fig. 26. Maximum value of log-likelihood ratio for b=.3466, b=1.0986, and Pearson-3, k=1.
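The maximum-likelihood output M(x) is simply a pointwise maximum of L(x, s) over candidate signal levels. A minimal sketch on a finite grid follows; the toy L used in the example below is the signal-known-exactly form L(x, s) = xs - s²/2 for unit-variance added Gaussian noise, not the report's computed log-normal curves:

```python
def max_likelihood_output(L, x, s_grid):
    """M(x) = max over candidate signal levels s of the log-likelihood
    ratio L(x, s), evaluated on a finite grid of s values.  Including
    s = 0 (for which L = 0) reproduces the observation that M is zero
    when the input has the opposite sign from the signal."""
    return max(L(x, s) for s in s_grid)
```

Per the report's Fig. 25, this nonlinear function of each sample would be followed by a summation over the independent samples.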

6. SUMMARY

In this report we have considered a method of design for detection receivers based on the log-likelihood ratio and a computer technique. The method is applicable to descriptions of the noise based either upon data or upon approximations to analytic forms. It is hoped that the computer-augmented technique which has been described suggests new avenues for the design of likelihood-ratio receivers when purely analytic methods fail. This report has not contained a description of the performance of such receivers, nor a comparison of this performance with that of simpler and more standard competitors for detection-receiver design. Such work will be reported in the future. More complex problems, involving a variety of signal uncertainties, should also be investigated to see whether this computer technique is valid for such problems.

REFERENCES

1. Bateman Manuscript Project, Higher Transcendental Functions, Vol. 2, p. 119, McGraw-Hill, 1953.

2. T. G. Birdsall, "On the Extension of the Theory of Signal Detectability," U. S. Navy Journal of Underwater Acoustics, April 1961.

3. H. C. Carver, Mathematical Statistical Tables, Edwards Brothers, Inc., 1950.

4. L. Halsted, T. G. Birdsall, and L. W. Nolte, "On the Detection of a Randomly Distorted Signal in Gaussian Noise," Cooley Electronics Laboratory Technical Report No. 129, The University of Michigan, Ann Arbor, Michigan, October 1962.

5. C. W. Helstrom, Statistical Theory of Signal Detection, Pergamon Press, 1960.

6. D. Van Meter and D. Middleton, "Modern Statistical Approaches to Reception in Communication Theory," Transactions of the 1954 Symposium on Information Theory, September 1954.

7. D. Middleton, Introduction to Statistical Communication Theory, McGraw-Hill, 1960.

8. G. P. Patil and P. Cota, "On Certain Strategies of Signal Detection Using the Clipper Crosscorrelator," Cooley Electronics Laboratory Technical Report No. 128, The University of Michigan, Ann Arbor, Michigan, October 1962.

9. W. W. Peterson, T. G. Birdsall, and W. C. Fox, "The Theory of Signal Detectability," Transactions of the 1954 Symposium on Information Theory, September 1954; W. W. Peterson and T. G. Birdsall, "The Theory of Signal Detectability," Cooley Electronics Laboratory Technical Report No. 13, The University of Michigan, Ann Arbor, Michigan, June 1953.

APPENDIX

COMPUTER DETAILS

The general computational method used throughout this study was to replace each continuous random variable with a discrete random variable having a matching distribution function. It was decided that a 50-point discrete distribution would be adequate for this work, each point being assigned 2 percent probability. The staircase distribution function of the discrete distribution was matched to the continuous distribution function at the midpoint-in-probability for each step; that is, the two distributions were matched at probability values .01, .03, ..., .99. This method gives a rather good representation of the random variables within the middle 96 percent of the range, and a crude representation of the smallest 2 percent and largest 2 percent of the range. The values of the normalized additive noise, a_i, were obtained from tables of the normal probability distribution in Ref. 3. These, in turn, were used in the computer to obtain the values of the multiplicative-noise variables, m_i, by the formula

    m_i = e^(-b) e^(√(2b) a_i)

since the distribution of log m is normal with mean -b and variance 2b.
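The appendix procedure can be reproduced with a modern quantile function in place of the printed tables of Ref. 3 (the function name below is hypothetical):

```python
import math
from statistics import NormalDist

def discrete_noise_points(b, n=50):
    """Represent the noise variables by n equally probable points,
    matching the continuous distribution function at the mid-step
    probabilities .01, .03, ..., .99 (for n = 50).  Returns the
    normalized additive-noise points a_i and the multiplicative-noise
    points m_i = exp(-b) * exp(sqrt(2b) * a_i), so that log m is
    normal with mean -b and variance 2b."""
    probs = [(2 * i + 1) / (2 * n) for i in range(n)]   # .01, .03, ..., .99
    a = [NormalDist().inv_cdf(p) for p in probs]        # normal quantiles
    m = [math.exp(-b) * math.exp(math.sqrt(2.0 * b) * ai) for ai in a]
    return a, m
```

With b = .3466 the sample mean of log m_i comes out to -b, confirming the stated distribution of log m.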

