Technical Report No. 194

ERRATA

The following corrections are applicable to Technical Report No. 194, Adaptive Detection Receivers and Reproducing Densities, July 1968.

Page 8: Change Eq. 6 to read

ℓ[X(T1, T2) | X(0, T1)] = ℓ[X(0, T2)] / ℓ[X(0, T1)]   (6)

Page 11: Change line 1 in Table I to read

H0: f[X(T1, T2) | θ]   the conditional objective density of the observation

Page 12: Change Eq. 11 to read

f[X(T1, T2), θ | X(0, T1)] = f[X(T1, T2) | X(0, T1)] f[θ | X(0, T2)]   (11)

Page 15: Change Eq. 19 to read

f[X(T1, T2) | X(0, T1), N] = { g[η | α(T1)] / g[η | α(T2)] } f[X(T1, T2) | η, N]   (19)

Page 25: Change Eq. 16 to read

f[X(T1, T2) | η, N]    f[X(T1, T2) | ψ, SN]   (16)

COOLEY ELECTRONICS LABORATORY
Department of Electrical Engineering
The University of Michigan
Ann Arbor, Michigan

Technical Report No. 194
3674-17-T

ADAPTIVE DETECTION RECEIVERS AND REPRODUCING DENSITIES

by

Theodore G. Birdsall

COOLEY ELECTRONICS LABORATORY
Department of Electrical Engineering
The University of Michigan
Ann Arbor, Michigan

for
Contract No. Nonr-1224(36)
Office of Naval Research
Department of the Navy
Washington, D. C. 20360

July 1968

Reproduction in whole or in part is permitted for any purpose of the U. S. Government.


Abstract

The design of detection equipment that can "learn for itself" the values of necessary constants (parameters) unknown to the designer or user has been the recent goal of theoretical research. The theory developed to date required the designer to specify the a priori distribution of the uncertain parameters. This paper points out a physically important property: the likelihood ratio of the observation may be obtained in a two-step process. The observed input is first processed through a receiver section designed on the basis of a simple, mathematically tractable a priori distribution. This is the major bulk of the processing. The output of this first section is a small handful of numbers (sufficient statistics) which are then used in the calculation of the likelihood ratio for the user's chosen a priori function. The output of this second computation section is then compared to a threshold to obtain a terminal yes-no decision. Thus the major design work can proceed without knowledge of the actual statistical distribution to be used.

Table of Contents

Abstract  iii
List of Illustrations  v
1. Introduction  1
2. Review of Classical Forced Choice Detection Theory  4
   2.1 Goal Separation  4
   2.2 Knowledge and Uncertainty  5
3. Sequential Receiver Design and Open-Ended Observation  8
4. Reproducing Densities and Finite Capacity Requirements  10
   4.1 Single Hypothesis Situation  10
   4.2 Two Hypotheses and Their Likelihood Ratio  14
   4.3 Receiver Models and Adaptation  16
5. Families of Reproducing Density Functions  21
   5.1 Single Hypothesis Situation  21
   5.2 Two Hypothesis Situations and Likelihood Ratio  24
   5.3 Receiver Model  26
6. Summary Discussion  28
Appendix A  30
Appendix B  34
References  38
Distribution List  39

List of Illustrations

Figure  Title  Page
1  Separation of roles of processing, termination, and threshold level  5
2  Block diagram based on Eq. 22  17
3  Block diagram based on Eq. 23  18
4  Three-part processor  26

1. Introduction This paper deals with the theory of optimum forced choice detectors. This means that the situation is the two-by-two situation with two possible causes, and two possible alternative decisions. The goal of the equipment is to deliver the best possible decision quality. The observations shall be of the open-ended type; that is, the initial time of observation is specified (denoted by time equals zero), but the terminal time is not specified until its moment of occurrence. Lastly we shall restrict ourselves to situations for which only finite computation and memory capacity is required of the optimum receiver. No restriction is placed on either the size or cost of the receiving equipment except that they be finite. The paper briefly reviews some of the salient points of classical forced choice detection theory in Sec. 2. The basis for sequential receiver design as obtained from likelihood ratio is the subject of Sec. 3. The relationship of finite capacity requirements and reproducing density functions is considered in Sec. 4. This latter subject is viewed in extended generality by the new work presented in Sec. 5. Section 6 is a very brief summary of the consequences of the theoretical work developed in Sec. 5. Likelihood ratio is a demanding taskmaster. It demands that the receiver designer and operator fully specify the situation that governs the observation. It demands that they fully specify all distributions of random variables to be included and with respect to which they wish to maximize performance. It demands that they fully specify the goal of the decision device with respect to the balance of errors that is to be achieved.

The purposes of this paper are to identify the knowledge demands placed on the equipment designer (who may be ignorant of the exact goals and exact situations in which the receiver might be used) and to outline the design principles that the designer may use to forestall specifying those quantities that he is not likely to know during the design and construction of the equipment. This must be realistic, so that those specifications which are initially postponed may be taken up at a more appropriate time and incorporated in the use of the receiver without detrimental effects on performance, and without placing undue requirements on the operator of the equipment.

All quantities are either considered as known or as random variables with specified distributions. Although some quantities considered as random variables may themselves be parameters in the distributions of other random variables, the total problem description consists of the complete specification of all distributions and all known values. "Unknown constants" do not appear in any of this work, although parameters usually considered as "unknowns" are treated extensively.

The work as reported is still in the research stage, and at this time all adaptation or learning occurs with respect to quantities that are constant throughout the entire observation. This is a reflection of the present state of development of the theory and is not viewed as essential.

This paper's principal item of interest is to point out that most of the equipment may be designed based on knowledge which is not as

stringent as that of the complete distribution of uncertainty. Indeed, the equipment may be designed in part on mathematical tractability, and if sufficient flexibility is put into some minor computational parts of the hardware, the same equipment may function for a whole family of a priori distributions of uncertain parameters. The primary description that the receiver designer needs to know is the set of objective conditional probability density functions of the observation.

2. Review of Classical Forced Choice Detection Theory

2.1. Goal Separation

One of the very first contributions of detection theory was the separation of the processing objective from the overall objective of the equipment. Specifically, detection theory indicates that the total observation should be processed to develop one real number at the time of decision, and that this real number should be compared with a threshold. In order to be optimum, the threshold value w should be appropriately determined from the goal of the user of the decision. The observation should be processed to yield either the likelihood ratio or any monotone function of the likelihood ratio of the observation. The advantage of this separation is that it allows the equipment designer to concentrate on designing hardware which processes the total observation into one real number; he may ignore the description of the problem in terms of values, costs, risk functions, and a priori probabilities of hypothesis occurrence.

A second separation occurs because the detection theory result is basically a forced-choice decision result. This means that the command to terminate the observations and to respond with the optimum decision is an external command. For example, Wald's sequential analysis [Ref. 1] and studies of deferred decision theory [Ref. 2] concentrate on obtaining this terminate-and-decide command from other values and costs along with the observation. Such theories preserve the basic likelihood ratio nature

of the processing between the observation and the comparator. The separation of the roles of processing, termination, and threshold level is summarized in Fig. 1.

[Fig. 1. Separation of roles of processing, termination, and threshold level. Block diagram: the observable x(t) enters a likelihood ratio processor; the processor output passes through a gate, controlled by the "terminate and decide" command, to a comparator with threshold w, which issues the decision.]

2.2. Knowledge and Uncertainty

One of the primary concerns of the classical theory of signal detectability [Refs. 3, 4, 5] is the effect of uncertainty as to the signal parameters, interacting with the interfering noise, to limit the performance of the optimum receiver. Uncertainty and the noise dictate the design of the optimum receiver. The "first case" considered in detection theory is that of the sure signal in a known-noise process. For exposition purposes this paper will assume that all probabilities are described by probability density functions. The overall receiver design is, of course, based on the likelihood ratio. The likelihood ratio for the observation x (where x is an element in an appropriately described space) is

ℓ(x) = f(x | SN) / f(x | N)   (1)

The likelihood ratio formula for the sure signal constitutes the basis for a number of more complicated situations usually described as the composite signal hypothesis situation (with a known-noise process). In each situation a number of parameters used in describing the signal are considered imprecisely known, although they are constant throughout the observation period if the signal is present. They are therefore treated as random variables with known distributions. These parameters are considered the elements of a parameter vector ψ and their distribution as Λ1(ψ). Therefore the probability density of the observation under the condition SN is determined from the conditional statistics of the observation, conditional to fixed values of the parameters ψ, and then appropriately averaged.

f(x | SN) = ∫ f(x | ψ, SN) dΛ1(ψ)   (2)

In this detection problem where the noise process is known, the denominator of the likelihood ratio is independent of the parameters of the signal, and both sides of Eq. 2 may be divided by the noise density function to yield

ℓ(x) = ∫ ℓ(x | ψ) dΛ1(ψ)   (3)

This is the well-known result for a known-noise process that the likelihood ratio is the average of the sure-signal likelihood ratios. More recently, detection theory work [Ref. 6] has considered the "doubly composite hypotheses" case where, in addition to unknown

parameters ψ in the signal process, there are unknown parameters η in the noise process. This means that the noise density function must be similarly developed. In general form

f(x | N) = ∫ f(x | η, N) dΛ0(η)   (4)

If Eq. 2 is divided by Eq. 4, the resulting likelihood ratio is the ratio of integrals. It is the ratio of averages, and not the average of ratios. Development of doubly composite hypothesis detection theory proceeds with a "two-branch" type of receiver, with averaging in each branch and the ratio taken after the averages.

It is apparent that in classical detection theory the distributions of the parameters of the signal and of the noise must be specified before the likelihood ratio of the observation may be determined. Therefore, it looks as if the designer of the hardware, which is to absorb the input observation and produce the value of the likelihood ratio, must have complete knowledge of these distributions before he can begin. It is also apparent that at the present level of abstraction the likelihood ratio design principle does not suggest to the designer any specific way of proceeding. In fact, it appears that the most obvious design is to take the complete observation x, store it, and then compute the relevant statistics. This is not the way that most equipment works, and hence a large part of the designer's work is to make theory and practice come together.
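The two-branch, ratio-of-averages structure is easy to see numerically. Below is a minimal sketch; the Gaussian model, the single-sample observation, and the uniform grid priors are all illustrative assumptions, not taken from the report.

```python
import numpy as np

def normal_pdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Hypothetical model (illustrative only): under SN the sample is
# Normal(psi, 1) with psi uniform on a grid; under N it is
# Normal(0, eta^2) with eta uniform on a grid.
x = 1.2                                  # one observed sample
psi_grid = np.linspace(0.5, 2.0, 200)    # signal-parameter prior support
eta_grid = np.linspace(0.8, 1.5, 200)    # noise-parameter prior support

# Eq. 2 and Eq. 4: average each conditional density over its own prior...
f_sn = np.mean(normal_pdf(x, psi_grid, 1.0))
f_n = np.mean(normal_pdf(x, 0.0, eta_grid))

# ...and only then divide: the ratio of the averages, not the average of ratios.
lik_ratio = f_sn / f_n
print(lik_ratio)
```

Each branch performs its own averaging; the division happens once, after both averages are complete, exactly as in the two-branch receiver described above.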

3. Sequential Receiver Design and Open-Ended Observation

In an attempt to manipulate the theory of signal detectability to yield more obvious results in terms of sequential-in-time operation, and in an attempt to understand adaptive receiver processing, a development such as the following has been used [Refs. 7 and 8]. First, the notation for the observation is changed to indicate its development in time. One possible notation is

X(T1, T2) means x(t) on T1 < t < T2   (5)
X(0, 0) denotes the a priori state at t = 0

Next, an incremental likelihood ratio (the likelihood ratio of an observation increment conditioned by the a priori conditions and all of the observation which has occurred up to the particular increment in question) is developed so that

ℓ[X(T1, T2) | X(0, T1)] = ℓ[X(0, T2)] / ℓ[X(0, T1)]   (6)

Obviously such a development is used so that the likelihood ratio can be written in the form

ℓ[X(0, T2)] = ℓ[X(0, T1)] ℓ[X(T1, T2) | X(0, T1)]   (7)

The logarithm of the likelihood ratio is the logarithm of the likelihood ratio developed up to the beginning of the increment of interest, plus the conditional log likelihood ratio for that particular increment. This is a

useful form if the random processes are such that they have statistically independent increments when all the uncertain parameters are specified, because then the incremental conditional likelihood ratios are determined from incremental conditional probability density functions with respect to each of the hypotheses. When such is the case, the theory proceeds to use equations of the type

f[X(T1, T2) | X(0, T1), H] = ∫ f[X(T1, T2) | θ, H] dμ[θ | X(0, T1), H]   (8)

The distribution of the uncertain parameters conditioned on past observation can be determined from the conditional "objective" observation statistics and the a priori distributions. This is sketched as

f[X(0, T1) | θ, H], μ[θ | X(0, 0), H] → μ[θ | X(0, T1), H]   (9)

meaning that knowledge of the probabilities indicated on the left is sufficient to yield the "updated" distribution of the parameters.

The development sketched above indicates the method by which one of the objections mentioned in Sec. 2 can be eliminated; that is, the theory can be developed to indicate how equipment should operate "in real time." The objection that has not been eliminated is that specification of the a priori distribution of uncertain parameters must be known at the time of the design of the equipment.
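The updating sketched in Eq. 9 can be illustrated on a discretized grid. The Gaussian likelihood and standard-normal prior below are assumed for illustration only; the sketch also checks that two incremental updates agree with one batch update, which is exactly the independent-increments property the text relies on.

```python
import numpy as np

# Grid sketch of Eq. 9: a priori density of theta plus the conditional
# observation statistics yield the updated (a posteriori) density.
theta = np.linspace(-3.0, 3.0, 601)       # discretized parameter axis
d_theta = theta[1] - theta[0]
prior = np.exp(-0.5 * theta ** 2)         # mu[theta | X(0,0)], unnormalized
prior /= prior.sum() * d_theta

def normal_pdf(x, mean, sd=1.0):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def update(density, increment):
    """One step of Eq. 9: multiply by the incremental conditional
    density f[X(T1,T2) | theta, H] and renormalize."""
    post = density * normal_pdf(increment, theta)
    return post / (post.sum() * d_theta)

x1, x2 = 0.7, -0.2                        # two observation increments

# Sequential updating, increment by increment...
seq = update(update(prior, x1), x2)

# ...agrees with a single batch update on the whole record, because the
# increments are conditionally independent given theta.
batch = prior * normal_pdf(x1, theta) * normal_pdf(x2, theta)
batch /= batch.sum() * d_theta

print(np.max(np.abs(seq - batch)))        # essentially zero
```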

4. Reproducing Densities and Finite Capacity Requirements

In this section we shall consider that under each hypothesis the a priori density function and the conditional observation statistics combine to yield what are called reproducing density functions. Each hypothesis is first considered separately, and then the effect on likelihood ratio for the two-hypothesis situation is treated, together with receiver models and adaptive receiver design.

4.1. Single Hypothesis Situation

One possible definition for a reproducing density is given, and the relationship between reproducibility and finite memory requirements is shown. The finite set of numbers which may change as observation progresses, that is, the quantities which must be retained in "soft memory," are denoted by a vector y. The receiving equipment, the hardware or computer-program software, which is influenced by the specific mode of observation (observation statistics and the a priori densities), will be thought of as "hard memory," i.e., fixed throughout the observation. We will concentrate on those parts of the receiver (in other words, those parameters of the equations) that change under the influence of the observation.

We shall assume that there are some constants θ relating to the probability density of observation which are "unknown." If the specific value of these parameters θ were known, the random process being observed would have statistically independent increments (at least

independent for those increments bounded by possible observation termination times Tk). It is assumed that the a priori distribution of this "uncertain" parameter θ is known; that is, θ is a random variable. The receiver may wish to know the a posteriori distribution of θ at the possible termination times of the observation.

The concept of reproducing usually relates to some "fixed functional form" q(θ | y), which has all of the properties of a probability density function. (It is a nonnegative function of θ and its integral over θ for each value of y is unity.) We shall interpret the function q as a real single-valued function of θ and suppress its probability density function character. The parameter y of this function is going to play a central role in the design of the receiving equipment. In order to keep these quantities before us they are summarized in Table I.

H0: f[X(T1, T2) | θ]   the conditional objective density of the observation
f[θ | X(0, 0)]   the a priori density of θ
f[θ | X(0, T)]   the a posteriori density of θ
q(θ | y)   a nonnegative function of θ, with parameter y, such that ∫ q(θ | y) dθ = 1

Table I

A Definition of Reproducing

If f[θ | X(0, T1)] = q[θ | y(T1)], then there exists a value y(T2), which is dependent on y(T1), X(T1, T2) and H0, such that

f[θ | X(0, T2)] = q[θ | y(T2)]

This means that when we know what the parameter value y is at time T1, and know the definition of the problem as given in Table I, then, if q is reproducing, we can determine from the prior conditions and from observation a new value y(T2) from which we can read off the a posteriori density function of θ. Furthermore, it will be shown in the following that we can determine the unconditional density function of the observation by knowing the prior and post values of the parameter y.

Let us assume a situation such that the a priori density function and the conditional observation statistics match up to make a reproducing situation. We then wish to develop the technique for determining the updating of the parameter y and for determining the unconditional observation statistics. The joint probability density of the observation and the parameter θ can be written in two forms.

f[X(T1, T2), θ | X(0, T1)] = f[θ | X(0, T1)] f[X(T1, T2) | θ]   (10)

f[X(T1, T2), θ | X(0, T1)] = f[X(T1, T2) | X(0, T1)] f[θ | X(0, T2)]   (11)

The unconditional probability of the observation can be determined by integrating over the entire θ region, that is, by equating the right-hand sides of Eqs. 10 and 11 and integrating over the entire θ region.

∫ f[θ' | X(0, T1)] f[X(T1, T2) | θ'] dθ' = f[X(T1, T2) | X(0, T1)]   (12)

Now assume that the situation yields a reproducing density function for

some particular form q. Substitute q for the θ probability density functions and equate the right-hand sides of Eqs. 10 and 11.

q[θ | y(T1)] f[X(T1, T2) | θ] = f[X(T1, T2) | X(0, T1)] q[θ | y(T2)]   (13)

Divide by the unconditional probability density of the observation, making use of Eq. 12, to obtain

q[θ | y(T2)] = q[θ | y(T1)] f[X(T1, T2) | θ] / ∫ q[θ' | y(T1)] f[X(T1, T2) | θ'] dθ'   (14)

This is half of the answer sought, and this equation will be used to determine the equations for updating the y parameter. Divide both sides of Eq. 13 by the q function evaluated at time T2 to obtain

f[X(T1, T2) | X(0, T1)] = { q[θ | y(T1)] / q[θ | y(T2)] } f[X(T1, T2) | θ]   (15)

The value of the y parameter at the beginning of the observation T1 and at the end of the observation T2 determines the unconditional probability density of the observation on the interval T1 to T2. Equation 15 has hidden in it some implicit cancellation: the left-hand side is not a function of the uncertain parameter θ, so the right-hand side cannot truly be a function of θ either. In actual practice one finds that indeed the cancellation always occurs and that knowledge of the y parameter at time T1 and time T2 is all that is necessary; one need not estimate or assume any specific value of θ in order to utilize Eq. 15.
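A concrete check of Eqs. 14 and 15 can be made with the classic conjugate pair. The numbers and the model below (θ an unknown mean, one Gaussian sample per increment, q(θ | y) Gaussian with y = (m, v)) are an assumed illustration in the spirit of Appendix A, not the report's actual example.

```python
import numpy as np

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Reproducing example: theta is an unknown mean, the observation increment
# is one sample x ~ Normal(theta, 1), and q(theta | y) is Normal with
# y = (m, v). All numbers are illustrative.
m1, v1 = 0.0, 2.0          # y(T1): prior mean and variance of theta
x = 1.3                    # observation on (T1, T2)

# Eq. 14 reduces to the familiar conjugate update y(T1) -> y(T2):
v2 = 1.0 / (1.0 / v1 + 1.0)
m2 = v2 * (m1 / v1 + x)

# Eq. 15: q[theta|y(T1)] / q[theta|y(T2)] * f[x|theta] is the unconditional
# density of the observation; the theta-dependence cancels, as the text says.
def eq15(theta):
    return normal_pdf(theta, m1, v1) / normal_pdf(theta, m2, v2) * normal_pdf(x, theta, 1.0)

print(eq15(-0.8), eq15(1.9))            # same value for any theta
print(normal_pdf(x, m1, v1 + 1.0))      # the marginal N(x; m1, v1 + 1)
```

The soft memory here is just the pair (m, v): once it is updated, no value of θ ever needs to be estimated or assumed.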

A specific situation is given in Appendix A, and the reader may refer to this example to obtain a more concrete idea of how reproducing may come about, how the y parameter may be updated, and finally how the updating may be utilized to determine the probability density function of the observation.

4.2. Two Hypotheses and Their Likelihood Ratio

The manipulation in this section is basically formal and follows directly from that of the previous section; however, to proceed, additional notation must be introduced. One of the two hypotheses, conventionally labelled SN and N in detection theory, will condition all probabilities. Two conditional objective density functions for the observation are used. Under the noise-alone condition N, the "uncertain" parameter is called η; under the SN condition, the "uncertain" parameter is denoted by ψ.

f[X(T1, T2) | η, N]    f[X(T1, T2) | ψ, SN]   (16)

It will be assumed that the observation statistics and the a priori distribution of the uncertain parameters are matched so as to produce reproducing density functions. The functional forms of these reproducing density functions are labelled g(η) for the N condition and h(ψ) for the SN condition.

f[η | X(0, T), N] = g[η | α(T)]   (17)

f[ψ | X(0, T), SN] = h[ψ | β(T)]   (18)

We shall omit the equations similar to Eq. 14 which are used to obtain the parameter updating equations and shall pass on to the expressions derived from Eq. 15 for the unconditional probability density functions for the

observation. These are Eq. 19 for the N condition

f[X(T1, T2) | X(0, T1), N] = { g[η | α(T1)] / g[η | α(T2)] } f[X(T1, T2) | η, N]   (19)

and Eq. 20 for the signal and noise condition

f[X(T1, T2) | X(0, T1), SN] = { h[ψ | β(T1)] / h[ψ | β(T2)] } f[X(T1, T2) | ψ, SN]   (20)

The likelihood ratio for the observation is the ratio of Eq. 20 to Eq. 19 and is

ℓ[X(T1, T2) | X(0, T1)] = { h[ψ | β(T1)] g[η | α(T2)] } / { h[ψ | β(T2)] g[η | α(T1)] } ℓ[X(T1, T2) | ψ, η]   (21)

Once the initial values for the a priori distribution of the "uncertain" parameters are specified [α(0), β(0)], the observation is then processed to obtain the terminal values α(T) and β(T). The total observation must also be processed to determine the likelihood ratio of this observation for any specific choice of the uncertain parameters η and ψ. The formal expressions for the ratios of the reproducing density functions at these chosen values of η and ψ are then evaluated, and the product as indicated in Eq. 21 is the likelihood ratio of the observation. It should be emphasized that the choice of the values η and ψ used to evaluate Eq. 21 does not affect the value obtained from the right-hand side.

In Appendix B a specific case of a doubly composite likelihood ratio test is developed based upon these equations. The reader may use Appendix B to follow in more detail what is happening in this type of development. The specific cases in both appendixes are of no particular interest by themselves, but are used merely as examples of the type of development we are considering.
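The claim that the right-hand side of Eq. 21 is insensitive to the chosen values of ψ and η can be verified numerically. The doubly composite Gaussian model below is an assumed illustration (Appendix B develops the report's own case).

```python
import numpy as np

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def conj_update(m, v, x, noise_var=1.0):
    """Conjugate (reproducing) update of a Normal(m, v) density for an
    unknown mean after observing x with the given noise variance."""
    v2 = 1.0 / (1.0 / v + 1.0 / noise_var)
    return v2 * (m / v + x / noise_var), v2

# Illustrative setup: under N the sample is Normal(eta, 1), eta ~ g = Normal(0, 0.5);
# under SN it is Normal(psi, 1), psi ~ h = Normal(1, 0.5).
x = 0.9
a1 = (0.0, 0.5)                        # alpha(T1): parameters of g
b1 = (1.0, 0.5)                        # beta(T1): parameters of h
a2 = conj_update(*a1, x)               # alpha(T2)
b2 = conj_update(*b1, x)               # beta(T2)

def eq21(psi, eta):
    """Eq. 21: the value is the same for ANY choice of psi and eta."""
    cond_lr = normal_pdf(x, psi, 1.0) / normal_pdf(x, eta, 1.0)
    return (normal_pdf(psi, *b1) * normal_pdf(eta, *a2)) \
         / (normal_pdf(psi, *b2) * normal_pdf(eta, *a1)) * cond_lr

print(eq21(0.2, -0.5), eq21(1.4, 0.3))                 # identical values
print(normal_pdf(x, 1.0, 1.5) / normal_pdf(x, 0.0, 1.5))  # the marginal ratio
```

Whatever pair (ψ, η) is plugged in, the reproducing-density ratios absorb the difference, and the result equals the ratio of the two marginal densities.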

4.3. Receiver Models and Adaptation

In Sec. 3 it was pointed out that the likelihood ratio for a complete observation can be broken into the product of the individual conditional likelihood ratios such as those developed in Eq. 21. Such expressions for sequentially conditional likelihood ratios may be used to develop both adaptive and nonadaptive receiver designs. Receiver design and realization are part of the art and not of the formal mathematical development of decision theory. In this section the author would like to sketch how such an equation might be used in receiver design.

If the total likelihood ratio for an observation is to be realized by a nonadaptive form of receiver, we can base the design on Eq. 21 by letting the time T1 equal 0. A resulting form of Eq. 21 is then shown below.

ln ℓ[X(0, T) | X(0, 0)] = ln ℓ[X(0, T) | ψ1, η1] - { ln h[ψ1 | β(T)] - ln h[ψ1 | β(0)] } + { ln g[η1 | α(T)] - ln g[η1 | α(0)] }   (22)

Using this type of equation the receiver designer may choose (without loss) any two particular values ψ1 and η1. For example, these may be chosen so that the first term in the expression is particularly simple. Figure 2 shows one block diagram based on Eq. 22. Box 1 in the diagram represents the hardware or computation necessary to evaluate the logarithm of the likelihood ratio of the total observation based on some preselected values for ψ1 and η1. Boxes 2 and 3 develop the sufficient statistics necessary for the probability updating, the vector-valued parameters α(T) and β(T). As indicated in Eq. 22 these parameter values need not be used to evaluate

probability density functions directly but may be used to evaluate the difference in the logarithms of these density functions at the specific preselected values ψ1 and η1.

[Fig. 2. Block diagram based on Eq. 22. The observable x(t) drives box 1, which computes ln ℓ(X(0, T) | ψ1, η1), and boxes 2 and 3, which compute the sufficient statistics β(T) and α(T); boxes 4 and 5 evaluate the ln h(ψ1) and ln g(η1) differences; box 6 combines the outputs into ln ℓ(X(0, T)).]

The three outputs from boxes 1, 4, and 5 are then combined to yield the logarithm of the likelihood ratio of the total observation based on the a priori state. Such a receiver design does not yield estimates or probability distributions of the "uncertain" parameters, nor does it appear to be particularly adaptive. It does perform an optimum job of detecting the presence or absence of the signal, optimum with respect to the designated a priori distributions of the uncertain parameters.

The sequentially developed likelihood ratio formula of Eq. 21 can be utilized in a more adaptive-appearing form. To emphasize the sequential development let us rewrite the total likelihood ratio as shown below.

ln ℓ[X(0, Tn) | X(0, 0)] = Σ (k = 1 to n) ln ℓ[X(Tk-1, Tk) | X(0, Tk-1)]   (23)

[Fig. 3. Block diagram based on Eq. 23.]

The design here (Fig. 3) is as follows: the observation is absorbed in increments and temporarily stored in a delay circuit. Boxes 2 and 3 are the same as in Fig. 2 and calculate the sufficient statistics α and β. These statistics are then utilized in box 7 to determine a value ψ* such that the two h(ψ) functions evaluated at β(T1) and β(T2) are identical. Similarly, box 8 is used to determine a value η* so that the two g(η) functions of Eq. 21 are equal. If the incremental likelihood ratio is evaluated in box 9 for the now-incoming delayed signal at these particular values ψ* and η*, it will be equal to the desired incremental likelihood ratio. Box 10 is simply an adder which adds up the incremental log-likelihood ratio values.
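The adder of box 10 implements the sum in Eq. 23. That accumulation can be sketched under an assumed Gaussian model (illustrative, not from the report): Normal(ψ, 1) samples under SN with ψ ~ Normal(1, 0.5), Normal(η, 1) samples under N with η ~ Normal(0, 0.5).

```python
import numpy as np

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def increment_log_lr(b, a, x):
    """ln l[X(T_{k-1}, T_k) | X(0, T_{k-1})]: the ratio of the two incremental
    unconditional densities, each a Normal with variance inflated by 1."""
    mb, vb = b
    ma, va = a
    return np.log(normal_pdf(x, mb, vb + 1.0) / normal_pdf(x, ma, va + 1.0))

def conj_update(m, v, x):
    """Conjugate update of the reproducing Normal(m, v) density."""
    v2 = 1.0 / (1.0 / v + 1.0)
    return v2 * (m / v + x), v2

samples = [0.4, 1.1, -0.3, 0.8]        # the observation, one increment at a time
b, a = (1.0, 0.5), (0.0, 0.5)          # beta(0), alpha(0)
total = 0.0
for x in samples:
    total += increment_log_lr(b, a, x)                  # box 10: accumulate
    b, a = conj_update(*b, x), conj_update(*a, x)       # boxes 2 and 3: update

# By the telescoping product of Eq. 7 / Eq. 23, 'total' equals the one-shot
# log likelihood ratio of the entire record.
print(total)
```

Only the soft-memory pairs α and β carry information between increments; no raw samples need to be stored once their increment has been absorbed.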

The particular values ψ* and η* chosen on each interval may be viewed as some sort of estimator, since they are used in much the same way as estimators might be used in a matched-filter type of receiver. However, the memory built into the receiver for all past observation exists in boxes 2 and 3, which calculate the sufficient statistics α(T) and β(T), and the rationale for the "estimates" is based upon an evaluation of Eq. 21. This means that one cannot replace these estimators by estimators based on another statistical principle and expect improvement; indeed, one cannot improve upon the performance of a likelihood-ratio receiver when one wishes to maximize performance under the given a priori statistics.

Many other forms of sequential, adaptive receivers can be determined by manipulating Eqs. 23 and 21. Specific detection cases may suggest very apt forms for such receivers. However, we shall leave this and turn to some criticisms which might be made about this type of development. One of the major criticisms is that the development demands that the a priori distribution of the uncertain parameters be known before the designer can begin work. Some workers take the viewpoint that a priori distributions do not even exist; others, that the a priori distributions are not known. This author takes the viewpoint that the a priori distributions are a personal description of the variation of the uncertain parameters with which one wishes to maximize performance. However, the distributions may very well be completely unspecified to the designer

of the hardware, which is designed a long time prior to the actual use of the equipment. Thus it appears that while in theory we can design optimum receivers, we must do so with lightning speed in order to have them available and correctly parameterized for use in the field.

It will be the purpose of the next section on families of reproducing densities to establish that this demand for knowledge of the actual distribution of the uncertain parameters is in part an unnecessary requirement. The principal item of interest in this present paper is to point out that the major part of the equipment may be designed based on knowledge which is not as stringent as that of the complete distribution of uncertainty; indeed, the equipment may be designed in part on mathematical tractability, and with some flexibility in minor computational parts of the hardware the same equipment may function for a whole family of a priori distributions of uncertain parameters. The primary description that the receiver designer needs to know is the set of objective conditional probability density functions of the observation.

5. Families of Reproducing Density Functions

The purpose of the work in this section is to show that the major receiver design may be based on knowledge of the conditional objective observation statistics and on mathematical tractability, when the uncertain variable and the conditional observation statistics match sufficiently well to yield a reproducing density function.

5.1. Single Hypothesis Situation

We shall assume that the conditional observation statistics do match with the "uncertain" parameter θ to yield one form of reproducing density. Let us use the subscript 1 to denote this particular density function.

f1[θ | X(0, T)] = q[θ | y(T)]   (24)

However, we shall consider that this description of the distribution of the uncertain parameter may not be the description that the user will wish to employ. We shall denote the user's description with a subscript 2. If the desired density f2(θ) is absolutely continuous with respect to q(θ) (i.e., does not give positive probability to some set to which the q(θ) measure gives zero probability), then f2(θ) is related to f1(θ) by a Radon-Nikodym derivative (a likelihood ratio, a real function modifier).

f2[θ | X(0, 0)] = r(θ) q[θ | y(0)]   (25)

The following shows that the distribution of θ after observation, based upon the f2(θ) prior distribution, can be obtained from the updated version of the f1(θ) probability and memory of this Radon-Nikodym derivative r(θ). The result will be

f2[θ | X(0, T)] = r(θ) q[θ | y(T)] / ∫ r(θ') q[θ' | y(T)] dθ'   (26)

The probabilities and likelihood ratio will be developed for the total observation from zero to T, and not for the incremental observations as was done in Sec. 4. The work in Sec. 4 was developed in the incremental form to emphasize the possibility of adaptive receiver design. Although the same could be done here, our objective is to show the universality of the design of the receiver, more or less independent of the a priori distribution. For this reason we shall use equations derived in Sec. 4 with the initial time of the observation T1 set to zero and with the terminal time of the observation T2 written as T. The only additional change is that we must denote in each density function whether we are assuming the f1(θ) or the f2(θ) a priori density function for the uncertain parameter θ. Equations 10-12 with these changes are

fk[X(0, T), θ | X(0, 0)] = fk[θ | X(0, 0)] f[X(0, T) | θ]   (27)

fk[X(0, T), θ | X(0, 0)] = fk[X(0, T) | X(0, 0)] fk[θ | X(0, T)]   (28)

fk[X(0, T) | X(0, 0)] = ∫ fk[θ' | X(0, 0)] f[X(0, T) | θ'] dθ'   (29)

The f1(θ) density is reproducing and has the form q(θ). The derivation of Sec. 4 yielded Eq. 14, which, after modification, appears as Eq. 30 below.

q[θ | y(T)] = q[θ | y(0)] f[X(0, T) | θ] / ∫ q[θ' | y(0)] f[X(0, T) | θ'] dθ'   (30)
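Equations 26 and 30 together say that the designer's q-branch can serve any absolutely continuous user prior: update q as usual, then reweight by the stored r(θ) and renormalize. A grid sketch with assumed illustrative densities (not the report's case) checks this against direct Bayes updating under the f2 prior.

```python
import numpy as np

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Grid sketch of Eq. 26; all densities and numbers are illustrative.
theta = np.linspace(-4.0, 4.0, 801)
d = theta[1] - theta[0]

q0 = normal_pdf(theta, 0.0, 1.0)          # q[theta | y(0)]: designer's prior
r = 1.0 + 0.5 * np.tanh(theta)            # stored Radon-Nikodym derivative r(theta)
f2_prior = r * q0 / np.sum(r * q0 * d)    # user's prior, Eq. 25 (normalized here,
                                          # since this illustrative r*q is not unit mass)

x = 0.6                                   # one observation, x ~ Normal(theta, 1)
lik = normal_pdf(x, theta, 1.0)

# Designer's branch (Eq. 30): update q to the a posteriori q[theta | y(T)].
qT = q0 * lik / np.sum(q0 * lik * d)

# Eq. 26: reweight the UPDATED q by the same stored r and renormalize.
f2_post_eq26 = r * qT / np.sum(r * qT * d)

# Direct Bayes with the f2 prior gives the identical posterior.
f2_post_direct = f2_prior * lik / np.sum(f2_prior * lik * d)
print(np.max(np.abs(f2_post_eq26 - f2_post_direct)))
```

The sufficient statistic y (here, the updated grid density standing in for it) serves both priors; only r(θ) and one normalizing integral are added.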

With the same reasoning one can derive a similar equation for the f_2(\theta) a priori density function.

    f_2[\theta \mid X(0,T)] = \frac{f_2[\theta \mid X(0,0)]\, f[X(0,T) \mid \theta]}{\int f_2[\theta' \mid X(0,0)]\, f[X(0,T) \mid \theta']\, d\theta'}    (31)

Substituting the rq product for f_2 in the right-hand side of Eq. 31, that is, using Eq. 25 in Eq. 31, yields

    f_2[\theta \mid X(0,T)] = \frac{r(\theta)\, q[\theta \mid y(0)]\, f[X(0,T) \mid \theta]}{\int r(\theta')\, q[\theta' \mid y(0)]\, f[X(0,T) \mid \theta']\, d\theta'}    (32)

Equation 30 is used to change the qf product in the numerator, yielding

    f_2[\theta \mid X(0,T)] = r(\theta)\, q[\theta \mid y(T)]\, \frac{\int q[\theta' \mid y(0)]\, f[X(0,T) \mid \theta']\, d\theta'}{\int r(\theta')\, q[\theta' \mid y(0)]\, f[X(0,T) \mid \theta']\, d\theta'}    (33)

Since the ratio of the two integrals in Eq. 33 is not a function of \theta, and because the left-hand side is a true density function, the ratio of integrals must constitute the normalizing constant for the density function. It may therefore be replaced by the reciprocal of the integral of the rq product.

    f_2[\theta \mid X(0,T)] = \frac{r(\theta)\, q[\theta \mid y(T)]}{\int r(\theta')\, q[\theta' \mid y(T)]\, d\theta'}    (26)

The above constitutes the derivation of the important fact that any probability density function for the uncertain variable which is absolutely continuous with respect to a known reproducing density function is itself similarly reproducing. That is, the sufficient statistics y are sufficient for both probability density functions. The only additional item that need be retained in the memory of the receiver is the Radon-Nikodym derivative r(\theta), which

relates the densities; the only additional computation necessary is the integration providing the normalizing constant of Eq. 26.

The remaining step is to determine the unconditional probability density of the total observation using the f_2 a priori density for \theta. This is obtained as follows. Equate the right-hand sides of Eqs. 27 and 28. The equation resulting from the f_2 density is divided by the equation resulting from the f_1 density, yielding Eq. 34 below.

    \frac{f_2[X(0,T) \mid X(0,0)]\, f_2[\theta \mid X(0,T)]}{f_1[X(0,T) \mid X(0,0)]\, f_1[\theta \mid X(0,T)]} = \frac{f_2[\theta \mid X(0,0)]\, f[X(0,T) \mid \theta]}{f_1[\theta \mid X(0,0)]\, f[X(0,T) \mid \theta]}    (34)

Then Eqs. 25 and 26 are used to simplify some of the ratios, yielding

    f_2[X(0,T) \mid X(0,0)] = f_1[X(0,T) \mid X(0,0)] \int r(\theta')\, q[\theta' \mid y(T)]\, d\theta'    (35)

This is the desired result, showing that the unconditional probability density of the observation based on the f_2 a priori density can be found from the unconditional probability density of the observation based on the f_1 density, the stored Radon-Nikodym derivative, and the updated value of the y parameter, y(T). If one desires to go one step further and follow the derivation of Sec. 4, Eq. 15 may be used to develop a similar equation for the f_2 density.

    f_2[X(0,T) \mid X(0,0)] = \frac{q[\theta \mid y(0)]}{q[\theta \mid y(T)]}\, f[X(0,T) \mid \theta] \int r(\theta')\, q[\theta' \mid y(T)]\, d\theta'    (36)

5.2. Two Hypothesis Situations and Likelihood Ratio. As in the previous development, for the noise-alone hypothesis there is an uncertain parameter \eta, and for the signal-and-noise hypothesis

there is an uncertain parameter \psi. It is further assumed that there are natural density functions (not necessarily those the operator will wish to use) which are reproducing when used with the given objective conditional observation statistics. As before, these are

    f[X(T_1,T_2) \mid \eta, N], \quad f[X(T_1,T_2) \mid \psi, SN]    (16)

    f_1[\eta \mid X(0,T), N] = g[\eta \mid \alpha(T)]    (17)

    f_2[\psi \mid X(0,T), SN] = h[\psi \mid \beta(T)]    (18)

The situation to be solved is one in which the a priori densities of the uncertain parameters are not those given by Eqs. 17 and 18, but can be expressed in terms of them by use of Radon-Nikodym derivatives.

    f_3[\eta \mid X(0,0)] = r_3(\eta)\, g[\eta \mid \alpha(0)]    (37)

    f_4[\psi \mid X(0,0)] = r_4(\psi)\, h[\psi \mid \beta(0)]    (38)

It follows directly from Eq. 35 that the likelihood ratio for the total observation, based on the a priori state and the desired descriptions of the uncertainties f_3 and f_4, can be written as the product of the likelihood ratio of the total observation based on the "natural" a priori density functions g and h and a modifier, which requires memory of the Radon-Nikodym derivatives and the updated parameter values \alpha(T) and \beta(T) at the end of the observation.

    \ell[X(0,T) \mid X(0,0), f_3, f_4] = \ell[X(0,T) \mid X(0,0), g, h]\, \frac{\int r_4(\psi')\, h[\psi' \mid \beta(T)]\, d\psi'}{\int r_3(\eta')\, g[\eta' \mid \alpha(T)]\, d\eta'}    (39)
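The single-hypothesis machinery underlying Eq. 39 (Eqs. 26 and 35) lends itself to a direct numerical check. The sketch below is illustrative only: it assumes the gamma form of q and the exponential observation statistics worked out in Appendix A, and an arbitrary bounded reweighting r_0, normalized so that r becomes a true Radon-Nikodym derivative (r q integrates to one).

```python
import math

# Illustrative grid check of Eqs. 26 and 35. Assumptions: q is a gamma
# density (the Appendix A family), the observation statistics are
# independent exponentials (Eq. A.2), and r0 is an arbitrary bounded
# function chosen purely for illustration.

dth = 0.001
grid = [dth * k for k in range(1, 30001)]          # theta in (0, 30]

def q(t, y1, y2):
    # q(theta | y) = y1^y2 / Gamma(y2) * theta^(y2-1) * exp(-y1*theta)
    return y1 ** y2 / math.gamma(y2) * t ** (y2 - 1) * math.exp(-y1 * t)

def lik(t, xs):
    # f[X(0,T) | theta] for independent exponential observables
    p = 1.0
    for x in xs:
        p *= t * math.exp(-t * x)
    return p

y1_0, y2_0 = 2.0, 3.0                              # y(0), illustrative
xs = [0.8, 1.3, 0.4, 2.1]                          # the observation, T = 4
y1_T, y2_T = y1_0 + sum(xs), y2_0 + len(xs)        # conjugate update y(T)

r0 = lambda t: 1.0 + math.sin(t) ** 2
c = sum(r0(t) * q(t, y1_0, y2_0) for t in grid) * dth
r = lambda t: r0(t) / c                            # now integral of r*q is one

# Eq. 26: Bayes rule with the f2 = r*q prior ...
direct = [r(t) * q(t, y1_0, y2_0) * lik(t, xs) for t in grid]
z = sum(direct) * dth
direct = [p / z for p in direct]
# ... equals the *updated* q, reweighted by the stored r and renormalized.
reweighted = [r(t) * q(t, y1_T, y2_T) for t in grid]
z = sum(reweighted) * dth
reweighted = [p / z for p in reweighted]
err26 = max(abs(a - b) for a, b in zip(direct, reweighted))

# Eq. 35: f2 marginal = f1 marginal times the integral of r against q[.|y(T)].
f2_marg = sum(r(t) * q(t, y1_0, y2_0) * lik(t, xs) for t in grid) * dth
f1_marg = sum(q(t, y1_0, y2_0) * lik(t, xs) for t in grid) * dth
modifier = sum(r(t) * q(t, y1_T, y2_T) for t in grid) * dth

print("Eq. 26 max pointwise error:", err26)
print("Eq. 35 two sides:", f2_marg, f1_marg * modifier)
```

Because the conjugate update is exact, the Eq. 26 comparison agrees to floating-point precision, while the Eq. 35 comparison agrees to the accuracy of the grid integration.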

5.3. Receiver Model. The receiver suggested by Eq. 39, a three-part processor, is sketched in Fig. 4.

[Fig. 4. Three-part processor: the input x(t) enters the primary processor; its outputs \alpha(T), \beta(T), and \ell feed the secondary processor, which also accepts r_3(\eta) and r_4(\psi); the secondary processor output feeds the gated decision comparator.]

The primary processor is basically the receiver discussed in Sec. 4. It computes the likelihood ratio of the total observation dependent on the a priori state, and utilizes natural reproducing density functions for convenience. Its design principles may be specified completely by knowing the conditional objective statistics of the observation. The output of the primary processor consists of a finite fixed number of lines carrying the information contained in the continually updated parameters \alpha and \beta, and of one quantity which is either equal to, or sufficient for, the calculation of the likelihood ratio of the observation conditional to a specific choice of the uncertain parameters.

The secondary processor has two types of input. The first type is that due to the observation: the output of the primary processor. The second type is the a priori distribution of the uncertain parameters,

with respect to which the operator wishes to maximize performance. The design of the secondary processor can be completed without knowledge of what these distributions are, so long as provision is made for sufficiently accurate descriptions of them. The exact form of entry of this knowledge is necessarily a matter to be considered in each specific application. The important feature is that both the primary processor and the secondary processor can be designed without knowledge of the exact distributions of the uncertain parameters.

The output of the secondary processor (the logarithm of the likelihood ratio, or any sufficient statistic monotone with the likelihood ratio) is fed into the comparator. In the comparator the threshold level is determined with respect to the user's objectives, based on his values and costs, his desired constant false-alarm rate, or any of his other specific objectives. The ultimate output is the "present" or "absent" decision.
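The division of labor just described can be sketched in code. The sketch below assumes the exponential-observation example worked out in the appendices and, for simplicity, a user who accepts the natural conjugate priors (r_3 = r_4 = 1); the function names, initial parameter values, and threshold are illustrative assumptions, not part of the original design.

```python
import math

def primary(xs, a0, b0):
    """Primary processor: sequentially update the sufficient statistics.
    alpha = (a1, a2) for noise alone and beta = (b1, b2) for signal+noise,
    per the Appendix B updates: a1 += x, a2 += 1; b1 += x, b2 += 2."""
    a1, a2 = a0
    b1, b2 = b0
    sum_log_x = 0.0
    for x in xs:
        a1 += x; a2 += 1
        b1 += x; b2 += 2
        sum_log_x += math.log(x)      # conditional likelihood-ratio statistic
    return (a1, a2), (b1, b2), sum_log_x

def log_norm(c1, c2):
    """log of the gamma normalizer c3 = c1^c2 / Gamma(c2)."""
    return c2 * math.log(c1) - math.lgamma(c2)

def secondary(a0, b0, aT, bT, sum_log_x):
    """Secondary processor: log likelihood ratio for the total observation,
    here with r3 = r4 = 1 (user accepts the natural conjugate priors)."""
    return (sum_log_x
            + log_norm(*aT) - log_norm(*a0)
            + log_norm(*b0) - log_norm(*bT))

def comparator(log_lr, log_threshold=0.0):
    """Gated comparator: terminal 'present'/'absent' decision."""
    return "present" if log_lr > log_threshold else "absent"

xs = [0.9, 1.7, 0.5, 2.2, 1.1]        # illustrative observation
a0 = b0 = (1.0, 1.0)                  # illustrative initial parameters
aT, bT, s = primary(xs, a0, b0)
decision = comparator(secondary(a0, b0, aT, bT, s))
print(decision)
```

Only the secondary processor and comparator touch the user's prior knowledge and decision objectives; the primary processor depends solely on the conditional objective statistics, which is the point of the partition.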

6. Summary Discussion

This paper has considered the problem of forced-choice optimum detection receiver design. The receiver design principle is the likelihood ratio computation for the observation. Such a principle demands that the operator or designer specify the best possible description of the objective observation statistics, that the a priori distribution of all random variables or uncertain parameters be specified, and that sufficient information be available to compute the correct threshold to yield the desired operating point of the receiver. Likelihood ratio design requires a complete specification of the decision problem.

This means that the analyst seriously contemplating the actual construction of equipment must separate those quantities which have to be specified before the hardware can be designed from those quantities for which provision need be made but which need not be specified until the moment of operation of the equipment. In classical detection theory a key point is that the quantities necessary to compute the threshold level (the values, the costs, the a priori probabilities of the hypotheses to be tested) need not be specified until the moment of operation. These quantities are taken care of by providing a variable threshold which need not be set until the moment of decision.

The result of the present work is that the a priori distribution of uncertain constant parameters need not be specified until the moment of operation of the receiver. The primary processor may be designed on the basis of the conditional

objective statistics of observation for those situations in which a reproducing density can be found (a natural conjugate a priori density); the primary processor is designed as if these natural a priori densities were the true densities. This partitioning of information is necessary for the further development and application of the theory of signal detectability.

Appendix A
An Example of a Reproducing Density Considering One Hypothesis Only

The purpose of this appendix is to display one example of the general treatment of Sec. 4.1. The method exemplified demonstrates that retention of the sufficient statistics for updating the uncertain parameter is also sufficient for calculation of the probability density of the total observation. The example chosen is a point process of positive real numbers.

    X(0,T) = \{x_1, x_2, \ldots, x_T\}, \quad x_i > 0, \ \text{integer } T    (A.1)

The objective statistics of observation which have been chosen are simple exponentials, with the individual observables statistically independent. The uncertain parameter of observation \theta is the reciprocal of the mean value of the observation.

    f[X(0,T) \mid \theta] = \prod_{i=1}^{T} \theta e^{-\theta x_i}, \quad \theta > 0    (A.2)

We digress momentarily to consider a specific function with three parameters. This set of three parameters is denoted simply y.

    y = (y_1, y_2, y_3), \quad y_1 > 0, \ y_2 > 0, \ y_3 = y_1^{y_2} / \Gamma(y_2)    (A.3)

This parameter is used in a function of \theta which is defined for all positive \theta and whose integral over \theta is unity. Let

    q(\theta \mid y) = y_3\, \theta^{y_2 - 1} e^{-y_1 \theta}, \quad \theta > 0    (A.4)
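Since y_3 is defined precisely so that q integrates to unity, a quick numerical check is possible; the values of y below are illustrative.

```python
import math

# Check that q(theta | y) = y3 * theta^(y2-1) * exp(-y1*theta) integrates
# to one when y3 = y1^y2 / Gamma(y2). The y values are illustrative.
y1, y2 = 2.0, 3.0
y3 = y1 ** y2 / math.gamma(y2)

dth = 0.001
total = sum(y3 * (k * dth) ** (y2 - 1) * math.exp(-y1 * k * dth) * dth
            for k in range(1, 30001))     # theta in (0, 30]
print(total)
```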

If the prior density function for the parameter \theta in Eq. A.2 is of the form of Eq. A.4, we have a situation which will yield a reproducing density function. We now proceed to verify this. Let the prior density function for the parameter \theta be given by

    f[\theta \mid X(0,0)] = q(\theta \mid y(0))    (A.5)

where the condition X(0,0) indicates the a priori or initial state. The joint probability density function for the observation and the parameter \theta based on this initial state of knowledge can be developed by taking the product of the a priori density on \theta and the objective probability density of observation conditional to \theta. When this is done and manipulated slightly, we have

    f[X(0,T), \theta \mid X(0,0)] = f[\theta \mid X(0,0)]\, f[X(0,T) \mid \theta]
                                  = y_3(0)\, \theta^{y_2(0)-1} e^{-\theta y_1(0)} \cdot \theta^{T} e^{-\theta \sum x_i}
                                  = y_3(0)\, \theta^{[y_2(0)+T]-1} e^{-\theta [y_1(0)+\sum x_i]}    (A.6)

It is convenient to single out the two bracketed terms and denote them by symbols which we will call y_1(T) and y_2(T). Specifically, let

    y_1(T) = y_1(0) + \sum_{i=1}^{T} x_i; \quad y_2(T) = y_2(0) + T; \quad y_3(T) = y_1(T)^{y_2(T)} / \Gamma[y_2(T)]    (A.7)

Equation A.6 may thus be written more compactly as

    f[X(0,T), \theta \mid X(0,0)] = y_3(0)\, \theta^{y_2(T)-1} e^{-\theta y_1(T)}    (A.8)

The probability density of the observation alone is obtained from Eq. A.8 by integrating over the entire range of the parameter \theta. When this is done, we obtain

    f[X(0,T) \mid X(0,0)] = y_3(0) / y_3(T)    (A.9)

because the parameter y_3 is the "normalizer" of the q function. We are now in a position to determine the a posteriori density function for the parameter \theta based on the a priori information and the observation X(0,T). This is the joint density divided by the probability density of the condition:

    f[\theta \mid X(0,T)] = f[X(0,T), \theta \mid X(0,0)] / f[X(0,T) \mid X(0,0)]
                          = y_3(T)\, \theta^{y_2(T)-1} e^{-\theta y_1(T)}    (A.10)

We recognize immediately that this probability density function has the form of the q function, and therefore we may write

    f[\theta \mid X(0,T)] = q[\theta \mid y(T)]    (A.11)

Thus we have reached the conclusion that the form q yields a reproducing density function with respect to the specific objective probability function of observation. Although we have already obtained an expression for the probability density function of the observation in terms of the y parameter, we shall proceed to show that Eq. 15 yields the same answer. That is, we wish to evaluate the quantity

    \frac{q[\theta \mid y(0)]\, f[X(0,T) \mid \theta]}{q[\theta \mid y(T)]}    (A.12)

We formally use the definition of the q function, Eq. A.4, the "updating" expressions of Eq. A.7, and the given objective observation statistics, Eq. A.2. Inserting these in expression A.12, we obtain

    \frac{y_3(0)\, \theta^{y_2(0)-1} e^{-\theta y_1(0)} \cdot \theta^{T} e^{-\theta \sum x_i}}{y_3(T)\, \theta^{y_2(T)-1} e^{-\theta y_1(T)}} = \frac{y_3(0)}{y_3(T)}    (A.13)

Comparing this answer to Eq. A.9, we conclude that the specific expression does indeed yield the unconditional probability of a specific observation.

    \frac{q[\theta \mid y(0)]\, f[X(0,T) \mid \theta]}{q[\theta \mid y(T)]} = f[X(0,T) \mid X(0,0)]    (A.14)

We have thus verified that, for this specific case, the detailed manipulations correspond to the general results derived in Sec. 4.1.
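The closed-form marginal of Eq. A.9 can also be checked by brute force. The sketch below uses illustrative values for y(0) and the observation, and compares y_3(0)/y_3(T) with direct numerical integration of q[\theta | y(0)] f[X(0,T) | \theta].

```python
import math

# Numerical check of Eq. A.9: the probability density of the whole
# observation equals y3(0)/y3(T). Parameter values are illustrative.

def y3(y1, y2):
    # the gamma normalizer y3 = y1^y2 / Gamma(y2)
    return y1 ** y2 / math.gamma(y2)

y1_0, y2_0 = 2.0, 3.0
xs = [0.7, 1.9, 0.3]

# Updating of Eq. A.7.
y1_T = y1_0 + sum(xs)
y2_T = y2_0 + len(xs)

closed_form = y3(y1_0, y2_0) / y3(y1_T, y2_T)

# Brute force: integrate q(theta | y(0)) * f[X(0,T) | theta] over theta.
dth = 0.001
total = 0.0
for k in range(1, 30001):
    th = k * dth
    q0 = y3(y1_0, y2_0) * th ** (y2_0 - 1) * math.exp(-y1_0 * th)
    f = 1.0
    for x in xs:
        f *= th * math.exp(-th * x)   # exponential observables, Eq. A.2
    total += q0 * f * dth

print(closed_form, total)
```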

Appendix B
An Example for Section 4.2

In this appendix we consider a specific case of reproducing density functions under two hypotheses and the resultant likelihood ratio. As in Appendix A, it will be assumed that the observation is a point process of positive numbers indexed by integer time values.

    X(0,T) = \{x_1, x_2, \ldots, x_T\}, \quad x_i > 0, \ \text{integer } T    (B.1)

The probability density function of the observations under noise alone, if the noise parameter \eta were known, is the same as in Appendix A.

    f[X(0,T) \mid \eta, N] = \prod_{i=1}^{T} \eta e^{-\eta x_i}, \quad \eta > 0    (B.2)

Following directly from the work in Appendix A, we know that the density function of the form

    f[\eta \mid X(0,0)] = \alpha_3(0)\, \eta^{\alpha_2(0)-1} e^{-\alpha_1(0)\eta}, \quad \alpha_3 = \alpha_1^{\alpha_2} / \Gamma(\alpha_2)    (B.3)

with the parameter updating

    \alpha_1(T) = \alpha_1(0) + \sum_{i=1}^{T} x_i; \quad \alpha_2(T) = \alpha_2(0) + T; \quad \alpha_3(T) = \alpha_1(T)^{\alpha_2(T)} / \Gamma[\alpha_2(T)]    (B.4)

gives rise to a reproducing density function. The unconditional probability density of the observation under noise alone is therefore

    f[X(0,T) \mid N] = \alpha_3(0) / \alpha_3(T)    (B.5)

For this example the objective observation statistics under the signal-and-noise hypothesis, with an uncertain parameter \psi, are

    f[X(0,T) \mid \psi, SN] = \prod_{i=1}^{T} \psi^2 x_i e^{-\psi x_i} = \psi^{2T} \Big[\prod_{i=1}^{T} x_i\Big] e^{-\psi \sum x_i}    (B.6)

Let us see if the same general form of parameter uncertainty is reproducing for this type of observation statistic. That is, let us try the functional form

    h(\psi \mid \beta) = \beta_3\, \psi^{\beta_2 - 1} e^{-\beta_1 \psi}, \quad \psi > 0, \ \beta_1 > 0, \ \beta_3 = \beta_1^{\beta_2} / \Gamma(\beta_2)    (B.7)

and assume that the prior density function of \psi is from this class.

    f[\psi \mid X(0,0)] = h[\psi \mid \beta(0)]    (B.8)

Direct application of the basic equation, Eq. 14, yields

    h[\psi \mid \beta(T)] = \frac{h[\psi \mid \beta(0)]\, f[X(0,T) \mid \psi]}{\int h[\psi' \mid \beta(0)]\, f[X(0,T) \mid \psi']\, d\psi'}
                          = \frac{\beta_3(0)\, \psi^{\beta_2(0)-1} e^{-\beta_1(0)\psi} \cdot \psi^{2T} \big[\prod x_i\big] e^{-\psi \sum x_i}}{\beta_3(0) \big[\prod x_i\big] \int_0^{\infty} (\psi')^{2T+\beta_2(0)-1} e^{-[\beta_1(0)+\sum x_i]\psi'}\, d\psi'}    (B.9)

from which it follows directly that

    \beta_1(T) = \beta_1(0) + \sum_{i=1}^{T} x_i; \quad \beta_2(T) = \beta_2(0) + 2T; \quad \beta_3(T) = \beta_1(T)^{\beta_2(T)} / \Gamma[\beta_2(T)]    (B.10)

From Eq. 20 for this particular situation we have

    f[X(0,T) \mid SN] = \frac{h[\psi \mid \beta(0)]\, f[X(0,T) \mid \psi, SN]}{h[\psi \mid \beta(T)]}
                      = \frac{\beta_3(0)\, \psi^{\beta_2(0)-1} e^{-\beta_1(0)\psi} \cdot \psi^{2T} e^{-\psi \sum x_i} \big[\prod x_i\big]}{\beta_3(T)\, \psi^{\beta_2(T)-1} e^{-\beta_1(T)\psi}}
                      = \frac{\beta_3(0)}{\beta_3(T)} \prod_{i=1}^{T} x_i    (B.11)

Unlike the noise-alone case, the sufficient statistics for this distribution consist not only of the three parameters of the \beta vector but contain a fourth item, the product of the point observations taken up to time T. Although this is implicit in Eq. 20, it was not explicit in the form for the noise density function (Eq. B.5), and is brought out here for the sake of contrast. This completes the work on this example, showing that the form chosen for the prior density function in each case does indeed reproduce and that the memory requirement is finite.

Equations B.11 and B.5 can now be used to determine the likelihood ratio of the observation.

    \ell[X(0,T)] = \frac{\alpha_3(T)\, \beta_3(0)}{\alpha_3(0)\, \beta_3(T)} \prod_{i=1}^{T} x_i    (B.12)

This is indeed the form of Eq. 21; one notes that the product of the individual observations is the likelihood ratio conditional to the values \psi = 1, \eta = 1.

    \frac{f[X(0,T) \mid \psi = 1, SN]}{f[X(0,T) \mid \eta = 1, N]} = \frac{\big[\prod x_i\big] e^{-\sum x_i}}{e^{-\sum x_i}} = \prod_{i=1}^{T} x_i    (B.13)

so \ell[X(0,T) \mid 1, 1] = \prod x_i. Finally, we use the specific form of the updating of the \alpha and \beta parameter vectors to write out the equation for the likelihood ratio in terms of the initial constants and the actual observation.

    \ln \ell[X(0,T)] = \sum_{i=1}^{T} \ln x_i
                     + [T + \alpha_2(0)] \ln\big[\alpha_1(0) + \textstyle\sum x_i\big] - [2T + \beta_2(0)] \ln\big[\beta_1(0) + \textstyle\sum x_i\big]
                     + \ln \frac{\Gamma[2T + \beta_2(0)]\, \Gamma[\alpha_2(0)]}{\Gamma[T + \alpha_2(0)]\, \Gamma[\beta_2(0)]}
                     + \beta_2(0) \ln \beta_1(0) - \alpha_2(0) \ln \alpha_1(0)    (B.14)

Equation B.14 has been written in four lines to emphasize four different types of functions in the likelihood ratio equation. The first term, the sum of the logarithms of the observations, comes directly from the likelihood ratio conditional to \psi = 1, \eta = 1. The second line is a type of logarithm of the sum (compared to the first line, which is the sum of the logarithms). The third line is deterministic, that is, a function of the observation time T. The fourth line is a constant bias due to the initial conditions. Basically this problem is a test between two functional forms for the observation statistics: the test of whether the observation has \chi^2 statistics with four degrees of freedom or with two degrees of freedom.
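The four-line form of Eq. B.14 as reconstructed here can be cross-checked against brute-force integration of the two marginals, f[X(0,T) | SN] and f[X(0,T) | N]. The initial parameter values and the observation below are illustrative.

```python
import math

# Cross-check of Eq. B.14: closed-form log likelihood ratio versus direct
# numerical integration of the two marginal densities.

a1_0, a2_0 = 1.5, 2.0      # noise-alone prior parameters alpha(0)
b1_0, b2_0 = 1.5, 2.0      # signal+noise prior parameters beta(0)
xs = [0.6, 1.4, 2.3]
T, S = len(xs), sum(xs)

def lnorm(c1, c2):
    # ln of the gamma normalizer c1^c2 / Gamma(c2)
    return c2 * math.log(c1) - math.lgamma(c2)

# Eq. B.14, line by line.
ln_lr = (sum(math.log(x) for x in xs)
         + (T + a2_0) * math.log(a1_0 + S) - (2 * T + b2_0) * math.log(b1_0 + S)
         + math.lgamma(2 * T + b2_0) + math.lgamma(a2_0)
         - math.lgamma(T + a2_0) - math.lgamma(b2_0)
         + b2_0 * math.log(b1_0) - a2_0 * math.log(a1_0))

# Brute force: marginal density of the observation under each hypothesis.
def marginal(c1, c2, density):
    dth, total = 0.001, 0.0
    for k in range(1, 30001):
        th = k * dth
        prior = math.exp(lnorm(c1, c2)) * th ** (c2 - 1) * math.exp(-c1 * th)
        f = 1.0
        for x in xs:
            f *= density(th, x)
        total += prior * f * dth
    return total

fN = marginal(a1_0, a2_0, lambda e, x: e * math.exp(-e * x))             # Eq. B.2
fSN = marginal(b1_0, b2_0, lambda p, x: p * p * x * math.exp(-p * x))    # Eq. B.6
print(ln_lr, math.log(fSN / fN))
```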
