THE UNIVERSITY OF MICHIGAN RESEARCH INSTITUTE
ANN ARBOR

APPROXIMATIONS TO THE NONCENTRAL CHI-SQUARE DISTRIBUTIONS WITH APPLICATIONS TO SIGNAL DETECTION MODELS

Technical Report No. 101
Electronic Defense Group
Department of Electrical Engineering

By: D. E. Lamphiear

Approved by: T. G. Birdsall, A. B. Macnee

Project 2899
TASK ORDER NO. EDG-3
CONTRACT NO. DA-36-039 sc-78283
SIGNAL CORPS, DEPARTMENT OF THE ARMY
DEPARTMENT OF ARMY PROJECT NO. 3-99-04-106

May 1960

Because of the interest expressed in the subject of this report additional copies have been printed for distribution through the University of Michigan Research Institute Project 2803 under Contract No. AF 49(638)-369, Department of the Air Force Project No. 9778C, Task No. 37710, AF Office of Scientific Research of the Air Research and Development Command, Washington 25, D. C.

TABLE OF CONTENTS

PREFACE
ABSTRACT
1. INTRODUCTION
2. OBSERVATIONS FROM POPULATIONS HAVING EQUAL VARIANCES
3. OBSERVATIONS FROM POPULATIONS HAVING UNEQUAL VARIANCES
4. CONDITIONAL DISTRIBUTIONS UNDER LINEAR RESTRAINTS
5. APPLICATION OF NONCENTRAL CHI-SQUARE TO DECISION MODELS OF SIGNAL DETECTION AND DIFFERENTIAL DISCRIMINATION
   5.1 Application in Detection Model
   5.2 Application in Discrimination Model
6. CONCLUSION
REFERENCES
DISTRIBUTION LIST

PREFACE

The objective of this report is to study power- or energy-measuring devices operating with a steady input of bandlimited white Gaussian noise, and possibly a signal added to the noise input. The intent of this report is to evaluate the performance of such devices in detecting signals or in discriminating between signals with slightly different energies. Common examples of energy-measuring devices are (1) radar receivers with square-law second detectors that average over a number of pulses from the same range, and (2) broadband superheterodyne and crystal-video receivers with square-law detectors, used for intercepting radar pulses or for receiving ordinary AM-modulated communications signals.

The relevant statistical distributions, the central chi-square and the noncentral chi-square, have been calculated and approximated by a number of authors for various purposes. The interest in this report is in approximations that will allow a comparison of the central versus the noncentral distributions to be used in the signal-and-noise versus noise-alone detection problem, and in comparing two noncentral distributions for the increase-in-signal-power problem.

The authors' original interest in this problem stemmed from observing the "suppression effect" in square-law detectors in communication and radar receivers. This is an effective loss of detection efficiency as the signal-to-noise ratio into the detector decreases. Although this effect has been extensively studied for special narrowband and broadband cases, the authors feel that the present treatment of the loss of efficiency in energy-measuring devices forms a broad general basis for understanding the suppression effect.

ABSTRACT

Closed-form and tabular approximations for the central and noncentral chi-square distributions are reviewed and compared, and an approximation suitable for application to signal-detection problems is chosen. This approximation is used to evaluate the efficiency of energy-measuring devices in detecting signals masked by white Gaussian noise, and in discriminating between signals with slightly different energies.

1. INTRODUCTION

The central χ² distribution has been widely investigated and, because of its use in statistical applications, has been tabulated in more or less detail in a variety of places; see, for example, Pearson and Hartley (Ref. 1). Such a table can give complete coverage of the nonlinear part of the function, since it depends on a single parameter. The noncentral χ² distribution, which has general utility in many applications, depends on two parameters and, for this reason, would require much more space for tabulation. Such tables are not generally available. The usual procedure is either to reduce the problem to one that requires the central chi-square, or to compute the required percentage point or probability level from other tabulated functions. An approximation to the noncentral χ² distribution proposed by Patnaik (Ref. 2) is recommended by Pearson and Hartley (Ref. 1). However, the error in the approximation is not stated. The purpose of this report is to examine various approximations to the noncentral χ² and to arrive at some conclusion as to their utility.

2. OBSERVATIONS FROM POPULATIONS HAVING EQUAL VARIANCES

If x_i is a randomly selected variate from a normally distributed population with zero mean and unit variance [x_i ~ N(0, 1)], the probability

distribution of x_i is given by

P\{x_i \le X\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{X} e^{-x^2/2}\,dx \qquad (1)

The sum of the squares of n randomly selected variates from the population follows the \chi^2 distribution with n degrees of freedom:

P\left\{\sum_{i=1}^{n} x_i^2 \le X_0^2\right\} = \frac{1}{2^{n/2}\,\Gamma(n/2)} \int_0^{X_0^2} t^{(n-2)/2}\, e^{-t/2}\,dt \qquad (2)

Suppose now that x_i is drawn from a normal population with unit variance but with an arbitrary mean: x_i ~ N(a_i, 1). The distribution of x_i is given by

P\{x_i \le X\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{X} e^{-(x - a_i)^2/2}\,dx \qquad (3)

For n randomly-selected variates from the same or from different populations, the distribution function of the sum of squares is

P\left\{\sum_{i=1}^{n} x_i^2 \le X'^2\right\} = \int_0^{X'^2} \frac{1}{2}\, e^{-(x + c^2)/2} \left(\frac{x}{c^2}\right)^{(n-2)/4} I_{(n-2)/2}\!\left(c\sqrt{x}\right) dx \qquad (4)

where

c^2 = \sum_{i=1}^{n} a_i^2

and I is the Bessel function of the first kind with imaginary argument. The distribution function of X'² is called the noncentral chi-square distribution with n degrees of freedom and parameter c².

The noncentral χ² distribution function cannot be evaluated directly, nor are tables available which are adequate for most applications. Fisher (Ref. 3) has given expressions for the exact computation. The complexity of the computations increases rapidly with increasing degrees of freedom; for general use, the computational work is excessively long. An alternative is to approximate the function by expanding it in an Edgeworth series¹, using enough terms to reduce the maximum error to some specified size. Coefficients of the series depend on the

¹ The Edgeworth series is an expansion in terms of the normal distribution and its derivatives. See Cramer (Ref. 17), pp. 227-231.
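The distribution function of Eq. (4) can equivalently be written as a Poisson(c²/2) mixture of central chi-squares, which makes it straightforward to evaluate numerically. The sketch below (Python; the function names are illustrative, not from this report) computes P{χ'² ≤ X'²} that way, using only a power-series routine for the regularized incomplete gamma function:

```python
import math

def gammainc_lower(a, x, tol=1e-12, max_iter=10000):
    """Regularized lower incomplete gamma function P(a, x), by its power series."""
    if x <= 0.0:
        return 0.0
    # P(a, x) = x^a e^{-x} / Gamma(a) * sum_{k>=0} x^k / (a (a+1) ... (a+k))
    term = 1.0 / a
    total = term
    for k in range(1, max_iter):
        term *= x / (a + k)
        total += term
        if term < tol * total:
            break
    return math.exp(a * math.log(x) - x - math.lgamma(a)) * total

def chi2_cdf(x, df):
    """Central chi-square distribution function, Eq. (2)."""
    return gammainc_lower(df / 2.0, x / 2.0)

def ncx2_cdf(x, n, c2, terms=500):
    """Noncentral chi-square distribution function, Eq. (4), evaluated
    as a Poisson(c^2/2) mixture of central chi-squares."""
    half = c2 / 2.0
    log_w = -half                      # log Poisson weight at k = 0
    total = 0.0
    for k in range(terms):
        total += math.exp(log_w) * chi2_cdf(x, n + 2 * k)
        log_w += math.log(half) - math.log(k + 1) if half > 0 else -math.inf
    return total
```

For the first entry of Table I below (n = 2, c² = 1, X'² = .17) this returns approximately .05, in agreement with the tabulated exact value.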

cumulants¹ of the distribution, which can be determined from the characteristic function of Eq. (4). Marcum (Ref. 4) and Patnaik (Ref. 2) use this method. For small values of n and c² the convergence is slow, but it becomes more rapid as either n or c² increases.

Patnaik has made use of the Edgeworth expansion to produce a rapidly converging series approximation. He obtains the Edgeworth series expansion of the best-fitting central χ² distribution and subtracts it from the Edgeworth series expansion of the noncentral distribution. The first approximation for the noncentral case is thus a central χ², and the second and following terms are correction terms. This method produces high accuracy but requires the use of tables of derivatives of the normal distribution function or tables of the Hermite polynomials. In general, interpolation in these tables is required. The computations are still laborious, but convergence to three significant figures is usually obtained with seven terms, the greatest errors occurring with small n and c².

To investigate the possibility of finding simple approximations we consider the limiting distributions of χ'². As c² approaches zero it is clear that the distribution of χ'² approaches that of χ². For constant values of c² and increasing values of n, the distribution of χ'² approaches normality, a consequence of the central limit theorem. It will appear later that, as the effective number of degrees of freedom of the best-fitting central χ² distribution increases with increasing c², the distribution also approaches normality. To find the best-fitting χ² distribution, we consider the

¹ Cumulants, often called semi-invariants, are coefficients in the series expansion of the logarithm of the characteristic function; hence the rth cumulant of a random variable which is the sum of several independent variables is the sum of the rth cumulants of those variables. See Cramer (Ref. 17), pp. 187-192.

characteristic function of Eq. (4),

\varphi(t) = (1 - 2it)^{-n/2} \exp\!\left(\frac{c^2\, it}{1 - 2it}\right) \qquad (5)

The formal power series expansion of the logarithm of this characteristic function is

\log \varphi(t) = \frac{c^2\, it}{1 - 2it} - \frac{n}{2}\log(1 - 2it)
             = c^2 \sum_{r=1}^{\infty} 2^{r-1} (it)^r + n \sum_{r=1}^{\infty} \frac{2^{r-1}}{r} (it)^r
             = \sum_{r=1}^{\infty} 2^{r-1}\left(\frac{n}{r} + c^2\right)(it)^r

The definition of the cumulants k_r in terms of this series is

\log \varphi(t) = \sum_{r=1}^{\infty} \frac{k_r}{r!}\,(it)^r

From this we obtain the cumulants

k_1 = n + c^2, \qquad k_2 = 2(n + 2c^2), \qquad k_r = 2^{r-1}(r-1)!\,(n + rc^2) \qquad (6)

If only the first two cumulants are used to determine an approximating distribution, and we restrict ourselves to Pearson Type III distributions¹,

¹ Cramer (Ref. 17), pp. 248-249; Type III distributions are a generalization of the χ² distribution.
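Equation (6) is easy to check numerically. The sketch below (helper names are illustrative) computes k_r from Eq. (6) and compares k₁ and k₂ with the sample mean and variance of simulated sums of squares of unit-variance normal variates; the particular choice of means, equal a_i with Σa_i² = c², is arbitrary:

```python
import math
import random

def ncx2_cumulant(r, n, c2):
    """r-th cumulant of the noncentral chi-square, Eq. (6):
    k_r = 2^(r-1) (r-1)! (n + r c^2)."""
    return 2 ** (r - 1) * math.factorial(r - 1) * (n + r * c2)

# Monte Carlo check of k1 (mean) and k2 (variance) for n = 5, c^2 = 4.
rng = random.Random(1960)
n, c2 = 5, 4.0
means = [math.sqrt(c2 / n)] * n      # any means with sum of squares = c^2
samples = [sum(rng.gauss(a, 1.0) ** 2 for a in means) for _ in range(100000)]
m = sum(samples) / len(samples)
v = sum((s - m) ** 2 for s in samples) / len(samples)
```

With these parameters k₁ = 9 and k₂ = 26, and the simulated mean and variance agree to within sampling error.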

we obtain the density function

f(y) = \frac{e^{-y/2}\, y^{(v-2)/2}}{2^{v/2}\,\Gamma(v/2)} \qquad (7)

where

y = X'^2\,\frac{n + c^2}{n + 2c^2}, \qquad v = \frac{(n + c^2)^2}{n + 2c^2} \qquad (8)

Thus we are approximating the distribution of X'² by the central χ² distribution with v degrees of freedom, v being in general a fraction. The normal approximation can be developed independently, but the same formulas are obtained by taking the limiting normal distribution to the χ² approximation. It is clear that in the limit the X'² distribution and the χ² distribution tend to the same normal distribution, since they have the same first two moments, and the limiting normal distribution is completely determined by the first two moments.

As Patnaik has shown, the distribution of χ' approaches normality faster than that of χ'², a property analogous to that enjoyed by the χ² distribution. We may expect, therefore, that

\sqrt{2y} - \sqrt{2v - 1} \qquad (9)

or

\sqrt{\frac{2X'^2 (n + c^2)}{n + 2c^2}} - \sqrt{\frac{2(n + c^2)^2}{n + 2c^2} - 1} \qquad (10)

is approximately normally distributed with zero mean and unit variance for sufficiently large values of n and c². This is based on Fisher's

well-known approximation for the χ² distribution, that \sqrt{2\chi^2} tends to N(\sqrt{2n-1},\, 1) as n increases. A faster-converging normal approximation due to Wilson and Hilferty (see Ref. 5) is that

\frac{(\chi^2/n)^{1/3} - 1 + \frac{2}{9n}}{\sqrt{2/(9n)}} \qquad (11)

tends to N(0, 1) with increasing n. Accordingly, we may take

\frac{(y/v)^{1/3} - 1 + \frac{2}{9v}}{\sqrt{2/(9v)}} \qquad (12)

to be N(0, 1) for v sufficiently large.

To get some idea as to the accuracy of simple approximations, the probability of exceeding X'² was calculated by various methods. In each case the values of y and v were computed from the parameters n and c² and the observed X'², using formulas (8). The first two approximations are based on the central χ² distribution, the first being obtained by linear interpolation in central χ² tables and the second by exact interpolation. The remaining two approximations are based on the normal approximation, using Fisher's approximation in one case and Wilson and Hilferty's in the other. These are shown in Table I. The exact value shown was taken from Patnaik (Ref. 2). The values of X'² are shown to the number of significant figures used in the computation. These values were taken mostly from Patnaik's paper, although some which Patnaik had taken from Fisher (Ref. 3) were obtained from Fisher to one more decimal place. Some accuracy in the exact value of the probability was lost by Patnaik in the rounding off of X'².
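Both normal approximations reduce to a few lines of code. The sketch below (function names are illustrative) computes Patnaik's y and v from Eq. (8) and then the Fisher and Wilson-Hilferty probabilities of Eqs. (10) and (12); for n = 2, c² = 1, X'² = .17 it reproduces the Table I entries .0860 (Method III) and .0500 (Method IV):

```python
import math

def norm_cdf(z):
    """Standard normal distribution function."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def patnaik_y_v(x, n, c2):
    """Scaled variable y and effective degrees of freedom v, Eq. (8)."""
    y = x * (n + c2) / (n + 2.0 * c2)
    v = (n + c2) ** 2 / (n + 2.0 * c2)
    return y, v

def fisher_approx(x, n, c2):
    """P{chi'^2 <= x} via Fisher's normal approximation, Eqs. (9)-(10)."""
    y, v = patnaik_y_v(x, n, c2)
    return norm_cdf(math.sqrt(2.0 * y) - math.sqrt(2.0 * v - 1.0))

def wilson_hilferty_approx(x, n, c2):
    """P{chi'^2 <= x} via the Wilson-Hilferty cube-root rule, Eq. (12)."""
    y, v = patnaik_y_v(x, n, c2)
    z = ((y / v) ** (1.0 / 3.0) - 1.0 + 2.0 / (9.0 * v)) / math.sqrt(2.0 / (9.0 * v))
    return norm_cdf(z)
```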

TABLE I

Approximations to the Probability of Exceeding an Observed Sum of Squares for Various Noncentral Parameters and Degrees of Freedom, Using Patnaik's Transformation

  n   c²      v        X'²     Exact    I(1)    II(2)   III(3)   IV(4)
  2    1     2.25       .17    .05     .0492     --     .0860   .0500
  2    4     3.6        .646   .05     .0329    .0233   .0537   .0313
  4    1     4.1667     .91    .05     .0502    .0465   .0700   .0500
  4    4     5.3333    1.765   .05     .0427    .0387   .0576   .0414
  7    1     7.1111    2.49    .05     .0500    .0489   .0628   .0499
  7    4     8.0667    3.664   .05     .0462    .0454   .0580   .0462
  2   16     9.5294    6.322   .05     .0389    .0369   .0483   .0379
  4   16    11.1111    7.884   .05     .0406    .0400   .0498   .0405
  7   16    13.5641   10.257   .05     .0439    .0426   .0512   .0431
  2   25    14.0192   12.08    .05     .0406    .0404   .0487   .0408
  4   25    15.5741   13.73    .05     .0427    .0417   .0494   .0421
  7   25    17.9649   16.23    .05     .0437    .0434   .0504   .0436
 16   32    28.8      30.000   .0609   .0594    .0590   .0639   .0590
 24   24    32        36.000   .1567   .1556    .1556   .1565   .1553
  7    1     7.1111    4.000   .1628   .1635    .1621   .1661   .1610
 12   18    18.75     24.000   .2901   .2926    .2920   .2863   .2913
  4   10     8.1667   10.000   .3148   .3190    .3179   .3085   .3163
 16    8    18        20.000   .3369   .3380    .3380   .3304   .3374
 24   24    32        48.000   .5296   .5333    .5332   .5290   .5332
  7   16    13.5641   24.000   .5898   .5943    .5949   .5827   .5947
  4    4     5.3333   10.000   .7118   .7180    .7197   .7062   .7199
 16    8    18        30.000   .7880   .7887    .7902   .7858   .7880
 12    6    13.5      24.000   .8174   .8178    .8188   .8162   .8193
 16   32    28.8      60.000   .8316   .8326    .8329   .8320   .8332
  2    1     2.25      8.64    .95     .9480     --     .9581   .9515
  2    4     3.6      14.64    .95     .9470    .9488   .9555   .9497
  4    1     4.1667   11.71    .95     .9490    .9500   .9564   .9506
  4    4     5.3333   17.309   .95     .9478    .9491   .9550   .9496
  7    1     7.1111   16.004   .95     .9288    .9298   .9341   .9302
  7    4     8.0667   21.23    .95     .9491    .9494   .9545   .9497
  2   16     9.5294   33.06    .95     .9467    .9474   .9522   .9478
  4   16    11.1111   35.43    .95     .9474    .9479   .9523   .9480
  7   16    13.5641   38.970   .95     .9476    .9482   .9523   .9483
  2   25    14.0192   45.31    .95     .9469    .9478   .9515   .9476
  4   25    15.5741   47.61    .95     .9467    .9478   .9515   .9478
  7   25    17.9649   51.06    .95     .9476    .9481   .9517   .9481
 16    8    18        40.000   .9632   .9626    .9626   .9664   .9626
  4    4     5.3333   24.000   .9925   .9909    .9912   .9946   .9911

1 Linear interpolation in Table 7 of Pearson and Hartley.
2 Exact interpolation using Pearson and Hartley's formulas.
3 Normal approximation using Fisher's normal approximation to the chi-square distribution.
4 Normal approximation using Wilson and Hilferty's approximation.

The conclusion to be drawn from the table is that no one approximation is superior to the others over the whole table. In particular, the more exact approximations (Methods II and IV) are not significantly better than their more easily computed counterparts (Methods I and III). For moderately large v (say, v > 5) the approximation based on Fisher's normal approximation gives sufficient accuracy for a large number of practical applications. For small values of v, the Wilson-Hilferty approximation is better at lower probabilities, since the Wilson-Hilferty approximation is more symmetrical.

3. OBSERVATIONS FROM POPULATIONS HAVING UNEQUAL VARIANCES

So far we have assumed that the x_i were selected from populations having variances equal to one. When both the means and the variances vary, we write x_i ~ N(b_i, v_i). Under these conditions the distribution function of r² = Σ x_i² no longer satisfies the conditions of the central limit theorem. However, a sufficient condition for the distribution of r² to be asymptotically normal is that the set {v_i} be bounded. While this condition is always satisfied in physical experiments, the upper bound on the v_i may be so large that convergence to the limiting distributions is extremely slow. The limiting distributions have the same form as before, when fitted by the first two moments. We have

f(y) = \frac{e^{-y/2}\, y^{(v-2)/2}}{2^{v/2}\,\Gamma(v/2)} \qquad (7)

where

y = \frac{r^2}{\rho}, \qquad \rho = \frac{\sum v_i^2 + 2\sum b_i^2 v_i}{\sum v_i + \sum b_i^2}, \qquad v = \frac{\left(\sum v_i + \sum b_i^2\right)^2}{\sum v_i^2 + 2\sum b_i^2 v_i} \qquad (13)

4. CONDITIONAL DISTRIBUTIONS UNDER LINEAR RESTRAINTS

It is well known that for the central χ² distribution, if the x_i are subject to s linear restraints, then \sum_{i=1}^{n} x_i^2 follows the χ² distribution with n-s degrees of freedom. The distribution of X'² has a similar property. Bateman (Ref. 6) gives the general proof, from which the result given by Patnaik follows as a special case. Suppose the x_i are subject to s orthogonal linear restraints

\sum_{i=1}^{n} c_{ji}\, x_i = p_j, \qquad j = 1, \ldots, s \qquad (14)

with

\sum_{i=1}^{n} c_{ji}\, c_{mi} = \delta_{jm} \qquad (15)

where the c_{ji}, p_j are constants. Let E(x_i) = a_i. Then

\sum_{i=1}^{n} x_i^2 - \sum_{j=1}^{s} p_j^2 \qquad (16)

is distributed as X'² with n-s degrees of freedom and parameter

c'^2 = \sum_{i=1}^{n} a_i^2 - \sum_{j=1}^{s} \left(\sum_{i=1}^{n} c_{ji}\, a_i\right)^2 \qquad (17)

For the conditional distribution of r² (defined in the preceding
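The moment matching of Eq. (13) can be sketched as follows (function and variable names are illustrative); with all v_i = 1 it reduces to the equal-variance formulas (8):

```python
def fitted_chi2_params(b, v):
    """Scale rho and effective degrees of freedom for the sum of x_i^2 with
    x_i ~ N(b_i, v_i), by matching the first two moments as in Eq. (13)."""
    mean = sum(vi + bi * bi for bi, vi in zip(b, v))
    var = sum(2.0 * vi * vi + 4.0 * bi * bi * vi for bi, vi in zip(b, v))
    rho = var / (2.0 * mean)        # y = r^2 / rho is treated as chi-square
    dof = 2.0 * mean * mean / var   # with dof degrees of freedom
    return rho, dof
```

For b = (1, 1, 0, 0) and all unit variances (n = 4, c² = 2) this gives rho = (n + 2c²)/(n + c²) = 4/3 and dof = (n + c²)²/(n + 2c²) = 4.5, as required.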

section) under s linear restraints, the moments can be determined from the conditional characteristic function, and the best-fitting χ² distribution can be determined by fitting a Type III curve, using the first two moments.

5. APPLICATION OF THE NONCENTRAL CHI-SQUARE TO DECISION MODELS OF SIGNAL DETECTION AND DIFFERENTIAL DISCRIMINATION

A class of decision-theory models which has been applied with some success to both electronic and psychophysical problems (Refs. 7-12, 15, 16) is the following. A point in a given finite-dimensional space is considered to be an "observation." This point is distributed, if viewed repeatedly, according to one of several different possible probability distributions, called "hypotheses," H_0, H_1, .... The decision task is to optimize the procedure for deciding which hypothesis holds, on the basis of one "observation."

The problems arising in detection always consider two possible alternatives. Whenever all of the parameters of these two distributions are known, the ratio of the two probability density functions is of particular importance. It is called the likelihood ratio, and it has been shown (Ref. 7) that it is the relevant statistic in this decision. That is to say, optimum decisions assume some critical value of likelihood ratio and decide for one hypothesis whenever the likelihood ratio of the observation is greater than this critical value, and for the other hypothesis whenever the likelihood ratio of the observation is not greater than the critical value.

Two types of problem lead to a consideration of the noncentral

chi-square distribution.

Case I: The null hypothesis is that the observations are distributed according to a completely symmetric normal (multivariate) distribution. This is called "white Gaussian noise" in engineering problems. The mean of the distribution is the origin of the space, and in engineering problems the variance per coordinate is N_0/2, i.e., half the noise power per cps. The "signal" hypothesis is a composite hypothesis: each simple hypothesis is a simple translation of the null hypothesis, with mean displaced c\sqrt{N_0/2} from the origin, and these means are uniformly distributed over an (n-1)-dimensional sphere about the origin. It should be obvious that the only relevant coordinates of the observation are those in the n-dimensional space containing this sphere. Because of symmetry, the radius in this n-dimensional subspace is monotone with likelihood ratio. If the space is normalized to have unit variance on each coordinate axis, the sphere will have radius c. The null hypothesis is now the normalized central chi-square distribution, and the signal hypothesis is the noncentral chi-square distribution with parameter c².

Case II: The null hypothesis is, as in Case I, simple white Gaussian noise. The signal hypothesis is just a translation of the mean to some point c\sqrt{N_0/2} from the origin. A non-optimum decision can be based on the radius of the observation in some n-dimensional subspace which contains the translation vector. Although this is a non-optimum procedure, it does arise when the actual axis on which the signal mean lies is unknown but can logically be bounded to some subspace. Such is the case when the signals are sine-wave-like signals with uncertain phase and starting times. The distribution of the measured statistic, the radius, is the

"chi" distribution, central for the null hypothesis, and noncentral for the signal hypothesis. The parameter c is \sqrt{2E/N_0}, where E is the signal energy and N_0 the noise power per cps.

In this latter case it is customary to compute the efficiency (Ref. 13) of the decision device relative to the optimum device. This efficiency is the ratio of the energy E_{11} necessary for the optimum decision device to reach the same performance as that achieved by the non-optimum device which used energy E_{12}:

\eta = \frac{E_{11}}{E_{12}} \qquad (18)

In this, Case II, when the signal is specified exactly, E_{11} can be determined as

E_{11} = \frac{N_0}{2}\left[\frac{\Delta\mu(\chi')}{\sigma(\chi')}\right]^2 \qquad (19)

where \Delta\mu(\chi') is the difference in the means of the two "chi" distributions, and \sigma(\chi') is the standard deviation.

So far, the use of the chi-square distribution in two specific detection cases has been discussed. "Detection" in white Gaussian noise usually carries the connotation that one of the hypotheses has its mean at the origin. When the two possible hypotheses are both "signal" hypotheses with different values of the parameter c, the label "differential discrimination" is often used. In computing the efficiency for such a situation, for signals specified exactly and differing only in amplitude, the "energy" E_{11} or E_{12} referred to is the energy of the difference signal, which is proportional to the square of the difference of the rms voltages of the signals.

5.1 Application in Detection Model

The problem in detection is the comparison of the observation χ' (or χ'²) under the two alternative hypotheses: H_0, that c = 0; and H_c, that c is some specific non-zero value. Clarke, Birdsall, and Tanner (Ref. 14) have suggested that in the comparison of two normal but unequal-variance hypotheses the measure d_e be used, where

d_e^2 = \frac{(\mu_1 - \mu_0)^2}{\sigma_0^2 + \sigma_1^2} \qquad (20)

The corresponding efficiency η is then

\eta = \frac{d_e^2}{c^2} \qquad (21)

The means and variances are obtained from equation (10):

H_0: \quad \mu_0 = \sqrt{n - .5}, \qquad \sigma_0^2 = .5
H_c: \quad \mu_c = \sqrt{n + c^2 - \sigma_c^2}, \qquad \sigma_c^2 = \frac{n + 2c^2}{2n + 2c^2} \qquad (22)

These yield an efficiency of

\eta = \frac{\left(\sqrt{\,n + c^2 - \dfrac{n + 2c^2}{2n + 2c^2}\,} - \sqrt{n - .5}\right)^{2} (2n + 2c^2)}{c^2\,(2n + 3c^2)} \qquad (23)
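The detection-efficiency computation of Eqs. (22)-(23) can be sketched as follows (function names are illustrative); the second function is a large-n form that depends only on the ratio c²/n:

```python
import math

def detection_efficiency(n, c2):
    """Detection efficiency eta = d_e^2 / c^2, from Eqs. (22)-(23)."""
    sigma0_sq = 0.5
    sigmac_sq = (n + 2.0 * c2) / (2.0 * n + 2.0 * c2)
    mu0 = math.sqrt(n - 0.5)
    muc = math.sqrt(n + c2 - sigmac_sq)
    de_sq = (muc - mu0) ** 2 / (sigma0_sq + sigmac_sq)
    return de_sq / c2

def detection_efficiency_large_n(w):
    """Large-n form as a function of w = c^2 / n alone; it rises from
    roughly w/4 at small w toward a maximum of 2/3 as w grows."""
    return (math.sqrt(1.0 + w) - 1.0) ** 2 * (2.0 + 2.0 * w) / (w * (2.0 + 3.0 * w))
```

For n = 1000 and c² = 100 the exact and large-n forms agree to several decimal places.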

For large n (n > 10) the second term in each radical is small compared to the first term. If these second terms are ignored, the expression for the efficiency becomes dependent only on the ratio of c² to n, as follows:

\eta = \frac{\left(\sqrt{n + c^2} - \sqrt{n}\right)^2 (2n + 2c^2)}{c^2\,(2n + 3c^2)} \qquad (24)

or

\eta = \frac{\left(\sqrt{1 + c^2/n} - 1\right)^2 \left(2 + 2c^2/n\right)}{(c^2/n)\left(2 + 3c^2/n\right)} \qquad (25)

Equation (25) is plotted in Fig. 1.

5.2 Application in Differential Discrimination Model

The model of differential discrimination is that two hypotheses, H_{c_1} and H_{c_2}, are compared, where the parameters c_1 and c_2 are both large but approximately equal. Specifically, the c_i should be large enough so that simple detection is nearly perfect. The following analysis will assume that the difference between the c values is considerably less than the smaller c value. The equations for the mean and variance of χ' can be rewritten as

\sigma^2(\chi') = \frac{n + 2c^2}{2n + 2c^2}, \qquad \mu_c(\chi') = \sqrt{n + c^2 - \sigma^2} \qquad (26)

If a small change of c to c + ε is made, the variance remains relatively unchanged, and the mean increases to

\mu_{c+\epsilon}(\chi') = \sqrt{n + c^2 - \sigma^2 + 2c\epsilon + \epsilon^2} = \sqrt{n + c^2 - \sigma^2}\,\sqrt{1 + \frac{2c\epsilon + \epsilon^2}{n + c^2 - \sigma^2}} \qquad (27)

[Fig. 1 is a log-log plot of efficiency versus c²/n, showing a maximum detection efficiency of 2/3.]

FIG. 1. NONCENTRAL CHI-SQUARE ANALYSIS OF DETECTION AND DIFFERENTIAL DISCRIMINATION EFFICIENCY, n > 10.

Under the foregoing assumption that the difference between the c values is much less than the smaller, a good approximation to the second radical is the first two terms of its power series expansion, yielding

\mu_{c+\epsilon}(\chi') = \sqrt{n + c^2 - \sigma^2} + \frac{c\epsilon}{\sqrt{n + c^2 - \sigma^2}} \qquad (28)

The discriminability of two such hypotheses (d') is measured by the ratio of the difference of the means to the standard deviation:

d' = \frac{c\epsilon}{\sigma\sqrt{n + c^2 - \sigma^2}} \qquad (29)

The efficiency of differential discrimination is the ratio of d'² to ε²:

\eta_{D.D.} = \frac{c^2}{\sigma^2\left(n + c^2 - \sigma^2\right)} \qquad (30)

This can be further simplified with very little change by noting that σ² ranges only from .5 to 1, which is very small compared to n + c². Dropping σ² in the parenthesis, and expressing σ in terms of c and n, we obtain

\eta_{D.D.} = \frac{c^2}{c^2 + .5n} \qquad (31)

This is also plotted in Fig. 1.

6. CONCLUSION

The various approximations to the noncentral chi-square distribution (the distribution of X'²) available in the literature have been reviewed and compared. It is concluded that for n > 10 the Fisher approximation

\sqrt{\frac{2X'^2 (n + c^2)}{n + 2c^2}} - \sqrt{\frac{2(n + c^2)^2}{n + 2c^2} - 1} \sim N(0, 1) \qquad (32)

is the simplest and quite adequate for use in models of detection and

differential discrimination. Based on this approximation, the efficiency of a specific decision device has been determined for detection and discrimination in additive white Gaussian noise.
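The differential-discrimination efficiency of Eqs. (30) and (31) can be sketched as follows (function names are illustrative); the exact and simplified forms agree closely once n + c² is large:

```python
def discrimination_efficiency(n, c2):
    """Differential-discrimination efficiency, Eq. (30)."""
    sigma_sq = (n + 2.0 * c2) / (2.0 * n + 2.0 * c2)
    return c2 / (sigma_sq * (n + c2 - sigma_sq))

def discrimination_efficiency_simple(n, c2):
    """Simplified form, Eq. (31): eta = c^2 / (c^2 + n/2)."""
    return c2 / (c2 + 0.5 * n)
```

For n = c² = 100 the simplified form gives 2/3, and the exact form differs from it by less than one percent.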

REFERENCES

1. Biometrika Tables for Statisticians, E. S. Pearson and H. O. Hartley, Vol. I, Cambridge: University Press (1954).

2. "The Noncentral χ²- and F-Distributions and Their Applications," P. B. Patnaik, Biometrika, 36, 202-232 (1949).

3. "The General Sampling Distribution of the Multiple Correlation Coefficient," R. A. Fisher, Proc. Roy. Soc., Series A, 121, 654-673 (1928).

4. "A Statistical Theory of Target Detection by Pulsed Radar: Mathematical Appendix," J. I. Marcum, Project Rand, ASTIA Doc. No. AD 101882 (1948).

5. The Advanced Theory of Statistics, M. G. Kendall, Vol. I, London: Chas. Griffin and Company (1948).

6. "The Characteristic Function of a Weighted Sum of Non-central Squares of Normal Variates Subject to s Linear Restraints," G. I. Bateman, Biometrika, 36, 460-462 (1949).

7. "The Theory of Signal Detectability," W. W. Peterson and T. G. Birdsall, Electronic Defense Group Technical Report No. 13, The University of Michigan Research Institute, June 1953. See also "The Theory of Signal Detectability," W. W. Peterson, T. G. Birdsall, and W. C. Fox, Trans. Prof. Group Inf. Theory, Inst. of Radio Engineers, PGIT-4, pp. 171-212, September 1954.

8. "Some General Properties of the Hearing Mechanism," W. P. Tanner, Jr., J. A. Swets, and D. M. Green, Electronic Defense Group Technical Report No. 30, The University of Michigan Research Institute, March 1956.

9. "The Evidence for a Decision-Making Theory of Visual Detection," J. A. Swets, W. P. Tanner, Jr., and T. G. Birdsall, Electronic Defense Group Technical Report No. 40, The University of Michigan Research Institute, March 1955. See also "A Decision-Making Theory of Visual Detection," J. A. Swets and W. P. Tanner, Jr., Psych. Review, Vol. 63, pp. 401-409, 1954.

10. "A Re-Evaluation of Weber's Law as Applied to Pure Tones," W. P. Tanner, Jr., Electronic Defense Group Technical Report No. 47, The University of Michigan Research Institute, August 1958.
See also "A Re-Evaluation of Weber's Law as Applied to Pure Tones," W. P. Tanner, Jr., J. Acoust. Soc. Amer. (to be published).

11. "A Theory of Recognition," W. P. Tanner, Jr., Electronic Defense Group Technical Report No. 50, The University of Michigan Research Institute, May 1955. See also "Theory of Recognition," W. P. Tanner, Jr., J. Acoust. Soc. Amer., Vol. 28, No. 5, pp. 882-889, 1956.

12. "Effect of Vocabulary Size on Articulation Score," D. M. Green, Electronic Defense Group Technical Report No. 81, The University of Michigan Research Institute, February 1958.

13. "Definitions of d' and η as Psychophysical Measures," W. P. Tanner, Jr., Electronic Defense Group Technical Report No. 80, The University of Michigan Research Institute, March 1958. See also "Definitions of d' and η as Psychophysical Measures," W. P. Tanner, Jr., and T. G. Birdsall, J. Acoust. Soc. Amer., Vol. 30, pp. 922-928, 1958.

14. "Two Types of ROC Curves and Definitions of Parameters," F. R. Clarke, T. G. Birdsall, and W. P. Tanner, Jr., J. Acoust. Soc. Amer., Vol. 31, p. 629, 1959.

15. "Statistical Criteria for the Detection of Pulsed Carriers in Noise," D. Middleton, J. of Applied Physics, Vol. 24, No. 4, pp. 379-391, April 1953.

16. "Signal Detection as a Function of Signal Intensity and Duration," D. M. Green, T. G. Birdsall, and W. P. Tanner, Jr., J. Acoust. Soc. Amer., Vol. 29, No. 4, pp. 523-531, April 1957.

17. Mathematical Methods of Statistics, H. Cramer, Princeton University Press, 1957 printing.

DISTRIBUTION LIST

Copy No.

1-2    Commanding Officer, U. S. Army Signal Research and Development Laboratory, Fort Monmouth, New Jersey, ATTN: Senior Scientist, Countermeasures Division
3      Commanding General, U. S. Army Electronic Proving Ground, Fort Huachuca, Arizona, ATTN: Director, Electronic Warfare Department
4      Chief, Research and Development Division, Office of the Chief Signal Officer, Department of the Army, Washington 25, D. C., ATTN: SIGEB
5      Chief, Plans and Operations Division, Office of the Chief Signal Officer, Washington 25, D. C., ATTN: SIGEW
6      Commanding Officer, Signal Corps Electronics Research Unit, 9560th USASRU, P. O. Box 205, Mountain View, California
7      U. S. Atomic Energy Commission, 1901 Constitution Avenue, N.W., Washington 25, D. C., ATTN: Chief Librarian
8      Director, Central Intelligence Agency, 2430 E Street, N.W., Washington 25, D. C., ATTN: OCD
9      Signal Corps Liaison Officer, Lincoln Laboratory, Box 73, Lexington 73, Massachusetts, ATTN: Col. Clinton W. Janes
10-19  Commander, Armed Services Technical Information Agency, Arlington Hall Station, Arlington 12, Virginia
20     Commander, Air Research and Development Command, Andrews Air Force Base, Washington 25, D. C., ATTN: RDTC
21     Directorate of Research and Development, USAF, Washington 25, D. C., ATTN: Chief, Electronic Division
22-23  Commander, Wright Air Development Center, Wright-Patterson Air Force Base, Ohio, ATTN: WCOSI-3
24     Commander, Wright Air Development Center, Wright-Patterson Air Force Base, Ohio, ATTN: WCLGL-7
25     Commander, Air Force Cambridge Research Center, L. G. Hanscom Field, Bedford, Massachusetts, ATTN: CROTLR-2
26     Commander, Rome Air Development Center, Griffiss Air Force Base, New York, ATTN: RCSSLD
27     Commander, Air Proving Ground Center, ATTN: Adj/Technical Report Branch, Eglin Air Force Base, Florida
28     Commander, Special Weapons Center, Kirtland Air Force Base, Albuquerque, New Mexico
29     Chief, Bureau of Ordnance, Code ReO-1, Department of the Navy, Washington 25, D. C.
30     Chief of Naval Operations, EW Systems Branch, OP-347, Department of the Navy, Washington 25, D. C.
31     Chief, Bureau of Ships, Code 840, Department of the Navy, Washington 25, D. C.
32     Chief, Bureau of Ships, Code 843, Department of the Navy, Washington 25, D. C.
33     Chief, Bureau of Aeronautics, Code EL-8, Department of the Navy, Washington 25, D. C.
34     Commander, Naval Ordnance Test Station, Inyokern, China Lake, California, ATTN: Test Director-Code 30
35     Commander, Naval Air Missile Test Center, Point Mugu, California, ATTN: Code 366
36     Director, Naval Research Laboratory, Countermeasures Branch, Code 5430, Washington 25, D. C.
37     Director, Naval Research Laboratory, Washington 25, D. C., ATTN: Code 2021
38     Director, Air University Library, Maxwell Air Force Base, Alabama, ATTN: CR-4987
39     Commanding Officer-Director, U. S. Naval Electronic Laboratory, San Diego 52, California
40     Office of the Chief of Ordnance, Department of the Army, Washington 25, D. C., ATTN: ORDTU
41     Chief, West Coast Office, U. S. Army Signal Research and Development Laboratory, Bldg. 6, 75 S. Grand Avenue, Pasadena 2, California
42     Commanding Officer, U. S. Naval Ordnance Laboratory, Silver Springs 19, Maryland
43-44  Chief, U. S. Army Security Agency, Arlington Hall Station, Arlington 12, Virginia, ATTN: GAS-24L

45     President, U. S. Army Defense Board, Headquarters, Fort Bliss, Texas
46     President, U. S. Army Airborne and Electronics Board, Fort Bragg, North Carolina
47     U. S. Army Antiaircraft Artillery and Guided Missile School, Fort Bliss, Texas, ATTN: E & E Department
48     Commander, USAF Security Service, San Antonio, Texas, ATTN: CLR
49     Chief of Naval Research, Department of the Navy, Washington 25, D. C.
50     Commanding Officer, U. S. Army Security Agency, Operations Center, Fort Huachuca, Arizona
51     President, U. S. Army Security Agency Board, Arlington Hall Station, Arlington 12, Virginia
52     Operations Research Office, Johns Hopkins University, 6935 Arlington Road, Bethesda 14, Maryland, ATTN: U. S. Army Liaison Officer
53     The Johns Hopkins University, Radiation Laboratory, 1315 St. Paul Street, Baltimore 2, Maryland, ATTN: Librarian
54     Stanford Electronics Laboratories, Stanford University, Stanford, California, ATTN: Applied Electronics Laboratory Document Library
55     HRB-Singer, Inc., Science Park, State College, Penna., ATTN: R. A. Evans, Manager, Technical Information Center
56     ITT Laboratories, 500 Washington Avenue, Nutley 10, New Jersey, ATTN: Mr. L. A. DeRosa, Div. R-15 Lab.
57     The Rand Corporation, 1700 Main Street, Santa Monica, California, ATTN: Dr. J. L. Hult
58     Stanford Electronics Laboratories, Stanford University, Stanford, California, ATTN: Dr. R. C. Cumming
59     Willow Run Laboratories, The University of Michigan, P. O. Box 2008, Ann Arbor, Michigan, ATTN: Dr. Boyd
60     Stanford Research Institute, Menlo Park, California, ATTN: Dr. Cohn
61-62  Commanding Officer, U. S. Army Signal Missile Support Agency, White Sands Missile Range, New Mexico, ATTN: SIGWS-EW and SIGWS-FC
63     Commanding Officer, U. S. Naval Air Development Center, Johnsville, Pennsylvania, ATTN: Naval Air Development Center Library
64     Commanding Officer, U. S. Army Signal Research and Development Laboratory, Fort Monmouth, New Jersey, ATTN: U. S. Marine Corps Liaison Office, Code AO-4C
65     President, U. S. Army Signal Board, Fort Monmouth, New Jersey
66-76  Commanding Officer, U. S. Army Signal Research and Development Laboratory, Fort Monmouth, New Jersey
         1 Copy - Technical Documents Center ADT/E
         1 Copy - Chief, Ctms Systems Branch, Countermeasures Division
         1 Copy - Chief, Detection & Location Branch, Countermeasures Division
         1 Copy - Chief, Jamming & Deception Branch, Countermeasures Division
         1 Copy - File Unit No. 4, Mail & Records, Countermeasures Division
         1 Copy - Chief, Vulnerability Br., Electromagnetic Environment Division
         1 Copy - Reports Distribution Unit, Countermeasures Division
         3 Copies - Chief, Security Division (for retransmittal to BJSM)
77     Director, National Security Agency, Ft. George G. Meade, Maryland, ATTN: TEC
78     Dr. H. W. Farris, Director, Electronic Defense Group, University of Michigan Research Institute, Ann Arbor, Michigan
79-99  Electronic Defense Group Project File, University of Michigan Research Institute, Ann Arbor, Michigan
100    Project File, University of Michigan Research Institute, Ann Arbor, Michigan

Above distribution is effected by Countermeasures Division, Surveillance Dept., USASRDL, Evans Area, Belmar, New Jersey. For further information contact Mr. I. O. Myers, Senior Scientist, telephone PRospect 5-3000, Ext. 61252.