THE UNIVERSITY OF MICHIGAN RESEARCH INSTITUTE, ANN ARBOR

THE THEORY OF SIGNAL DETECTABILITY AS AN INTERPRETIVE TOOL FOR PSYCHOPHYSICAL DATA

Technical Memorandum No. 78
Electronic Defense Group
Department of Electrical Engineering

By: Wilson P. Tanner, Jr.
Approved by: A. B. Macnee

AFCCDD TN 60-13
Contract No. AF19(604)-2277
Operational Applications Laboratory
Air Force Cambridge Research Center
Air Research and Development Command

May 1960

TABLE OF CONTENTS

PREFACE
ABSTRACT
1. INTRODUCTION
2. STATEMENT OF THE PROBLEM
3. THE ASSUMPTION OF EXPECTED VALUE MAXIMIZATION
4. THE FOURIER SERIES BAND-LIMITED ASSUMPTION
5. THE FOURIER TRANSFORM BAND-LIMITED ASSUMPTION
6. COMPARISON OF THE SAMPLING THEOREMS
7. THE PHYSICAL APPROXIMATION
8. SUMMARY AND CONCLUSIONS
REFERENCES

PREFACE

I have written this paper to make explicit and clear the philosophy underlying the application of the theory of signal detectability to the study of psychophysics. I have undertaken this task because criticisms of the applications are frequently brought to my attention. Usually it appears to me that the criticisms stem from the idea that we are trying to do something different from what we actually are, and consequently the criticisms are not relevant. By stating our position explicitly, I hope to make it possible for these criticisms to be aimed more directly at relevant points. Two points in particular have attracted criticism. One is our use of the expected value criterion in our model; the other is the use of the efficiency variable, η, which depends upon calculations derived from a particular finite sampling plan. In the text I have tried to make it clear why we are using these two concepts as we are. For our purpose it is unnecessary to think or believe that the expected value criterion is the one which people really use in everyday life. In fact we are aware that there are many situations in which this is an unreasonable criterion, and also of the large quantity of data which support the facts upon which the criticism is predicated. We contend that, while the criticism as made is supported by strong experimental evidence, it is not applicable. The criticism of the use of the efficiency variable, η, has been more difficult to handle: partly because those making the argument have been exceedingly persistent, and partly because their arguments have been presented in more elegant form. It has not always been easy to identify

exactly how the problem they have solved differs from the problem we pose, nor is it easy to determine exactly what the issue is. I sometimes suspect that the criticism is predicated on the notion that we intend to produce efficiency statements which can be used in engineering applications. This is a purpose which I have considered only superficially. Our purpose is to use the efficiency statements to tell us how to interpret data in order to develop a model which will describe the sensory behavior of a human observer. Of course, the model could be presented as it develops without any explanation of the mental processes involved in its development. This would make it more difficult to evaluate and more difficult to criticize. It seems wiser to make the underlying philosophy explicit. The argument over sampling theorems has produced its profits. It has forced those involved to turn their attention to underlying philosophies: a subject matter which scientists frequently try to avoid. Differences in philosophy frequently lead to apparently unresolvable arguments, since it is not always recognized that the points of view differ because they are predicated on different basic assumptions. The arguments should be concerned with the assumptions underlying the conclusions drawn from them. The discussions of the sampling theorems have increased my confidence that, for our purposes, the use of η is sound. In particular, I am impressed with some recent ideas presented by Dr. Claude Shannon which I interpret as forming the basis for an argument which supports our use of the finite sampling plan. I would like to express my appreciation to Dr. Shannon for permitting me to read a rough draft of his paper and for discussing this matter in detail with me.

T. G. Birdsall of the Electronic Defense Group and Julian H. Bigelow of the Institute for Advanced Study have spent hours discussing this subject with me, and to them I am deeply grateful for helping me formulate the problem. In addition I acknowledge my debt to the many persons who have attended informal meetings with me for the purpose of thorough reviews and rehashings of the problems. These include M. V. Matthews and E. E. David of Bell Telephone Laboratories, John A. Swets and David Green of the Massachusetts Institute of Technology, J. C. R. Licklider of Bolt, Beranek and Newman, and Frank R. Clarke and Allan Macnee of the Electronic Defense Group. In writing this paper I have leaned heavily on the discussions we have had; and while I must acknowledge that much of the discussion in this paper stems from their ideas, I must also take full responsibility for the statements which follow. I suspect that some of these people will not agree with all of the statements.

ABSTRACT

The theory of signal detectability is examined from the standpoint of determining a set of satisfactory assumptions for the purpose of developing an interpretive tool for use in psychophysical experiments. It is concluded that the assumption that the observer attempts to maximize the expected value of the outcome of the experiment is satisfactory for this purpose, and that a set of physical conditions can be established which justify a computation of the detectability of a signal in noise based on a finite sampling plan involving 2WT amplitude values over the open interval 0 to T.

THE THEORY OF SIGNAL DETECTABILITY AS AN INTERPRETIVE TOOL FOR PSYCHOPHYSICAL DATA

1. INTRODUCTION

In order to use a mathematical model in a scientific investigation, it is necessary to show that there is a satisfactory agreement between the conditions of the phenomena under investigation and either the assumptions of the model or an equivalent set of assumptions. This can be done in at least two ways: by altering the conditions of the investigation to agree with the assumptions of the model, or by adding to, or modifying, the assumptions of the model so that it more nearly agrees with the conditions of the investigation. Usually both methods are required to establish an adequate agreement. It is the purpose of this paper to examine the techniques which have been employed in establishing agreement between psychoacoustic experiments and the theory of signal detectability. The theory of signal detectability is based, as all mathematical theories are, on precisely stated assumptions. While it is likely that the theory was developed to increase understanding of some interesting and urgent practical problems, the assumptions were chosen not only because of the application but also to permit mathematical manipulation. It is a rare event that both purposes can be satisfied simultaneously, and it is the mathematicians' tendency to prefer

compromise in favor of permitting manipulation. In almost every case, it is necessary to make some compromise if one is going to apply a mathematical model to either an experimental or "real-life" problem. In applying the model of signal detectability to psychoacoustics the problem is made even more difficult, since psychoacoustics was not among the interesting and urgent problems the mathematicians had in mind when they developed the theory. The initial studies appeared during World War II as a result of the need for detecting radar signals embedded in noise. In this context Siegert (Ref. 1) presented his concept of the ideal observer. After the war interest continued, and in 1954 Peterson, Birdsall, and Fox (Ref. 2) and Van Meter and Middleton (Ref. 3) presented independent developments which described a far more sophisticated ideal observer than Siegert's. The discussion in this paper will be based on the paper of Peterson, Birdsall, and Fox, since it is presented in a more useful form. It is presented in two parts: the first part presents the general theory, and the second considers some special cases. The general theory demonstrates that the theory of signal detectability is a special case of the theory of testing statistical hypotheses and of decision theory. The second part consists of a series of studies in which some specializing assumptions are made to tailor the theory to handle certain hypothetical cases in a quantitative way. This part brings into the context of the theory of signal detectability the relation between such parameters as the signal energy and the noise energy, and the separation between the statistical hypotheses

conditional upon the existence of noise alone and those conditional upon the existence of signal plus noise. The general theory appears directly applicable to the study of psychoacoustics. However, because of its generality, it is not a very powerful tool. It does not contain the mechanism for quantitative prediction and, without quantitative prediction, it does not lend itself to experimental use. Specializing assumptions are required which tailor the theory to agree acceptably with the conditions encountered in psychoacoustic experiments. The agreement between these assumptions and the experimental conditions must be carefully evaluated, for the success of the application depends on this agreement. The evaluation is based partly on experimental evidence and partly on faith. The faith in turn is based on convincing, although not conclusive, arguments. This paper presents certain logical considerations supporting the convincing arguments for the adequacy of the agreement between the specializing assumptions employed and the conditions encountered in the psychoacoustic laboratory.

2. STATEMENT OF THE PROBLEM

The concept of the ideal observer can be illustrated by a block diagram (Figure 1). The task of this observer is to accomplish an optimum mapping from an input space onto an output space. According to the theory of signal detectability, this is accomplished by computing the likelihood ratio associated with the input and comparing this ratio with a number.

[Figure 1 is a block diagram of the ideal observer: an input space feeds a transducer, a distribution function computer, a likelihood ratio computer, and a decision computer leading to the output space; a criterion computer is supplied with the probabilities of the alternatives, the utilities of the decisions, the parameters of the noise distribution, and the signal parameters.]
FIG. 1 BLOCK DIAGRAM OF IDEAL OBSERVER

If the likelihood ratio turns out to be greater than this number, then the input is said to include both signal and noise. If it is less than this number, then the input is said to include only noise. This is a general statement applying to all detection problems within the framework of the theory. The statement in its general form does not furnish the framework for quantitative predictions of the performance of the ideal observer, i.e., what percentage of detections and what percentage of false alarms might be expected in any given detection task. It is necessary to examine the statement to see how it must be expanded in order to permit such predictions. It is also necessary to consider the problem of why these predictions are desired. The likelihood ratio is defined as

\[ \ell(x) = \frac{f_{SN}(x)}{f_N(x)} \]

where \( f_{SN}(x) \) is the likelihood, or probability density, of the hypothesis signal plus noise leading to the input x, and \( f_N(x) \) is the likelihood of the hypothesis noise alone leading to the input x. In order to compute the likelihood ratios it is necessary to assign numbers to the likelihoods \( f_{SN}(x) \) and \( f_N(x) \) for all possible values of x. In other words, it is necessary to know the distributions of x given that the hypothesis signal plus noise is true and given that the hypothesis noise alone is true. In order to compute these distributions, the ideal observer includes a distribution computer. These distributions depend upon the parameters of the signal, the parameters of the noise, and the way in which the signal and noise are combined. Only if one is willing to specify these conditions in sufficient detail to permit calculation can predictions of false-alarm rate and detection rate be made.

It is, therefore, necessary to make specializing assumptions about the particular environment in which the ideal observer is going to operate. In the general theory, the rule for decision is to state that the signal exists whenever \( \ell(x) \ge W \), and otherwise to state that noise alone exists. W is specified only to the extent that it is a number, an arbitrary number. Before predictions can be made of detection rates and false-alarm rates it is necessary to specify W as a particular number. There are a number of ways in which W might be specified, and each depends on the particular purpose for which the ideal observer is designed. Six ways of choosing a number for W are reviewed by Birdsall (Ref. 4). These include such methods as maximizing correct decisions, maximizing expected value, maximizing detection rate for a given, fixed false-alarm rate, and maximizing the preservation of information. To make predictions of false-alarm rate and detection rate, it is necessary to assume a method for computing W. For this computation the ideal observer has a criterion computer, and this, in turn, needs to know the method for computing W and the values of the variables necessary to make the computation. Thus, in order to make quantitative statements based on the model, it is necessary to make specializing assumptions to permit the distribution computer and the criterion computer to perform their respective functions. Any one of a number of assumptions will permit the operation in either case. The choice is not completely arbitrary; some conditions are imposed on the selection by the purpose one has in mind for making the assumptions.
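To make the rule concrete, the following sketch implements the decision rule under the simplest specializing assumptions: a signal known exactly, added to independent Gaussian noise samples of known variance. It is a minimal illustration; the numerical values and the variable names are assumptions of this sketch, not of the theory.

import numpy as np

rng = np.random.default_rng(0)

sigma = 1.0                        # noise standard deviation per sample (assumed)
s = np.array([0.6, 0.4, 0.5])      # hypothetical signal, known exactly

def likelihood_ratio(x):
    # l(x) = f_SN(x) / f_N(x); the Gaussian normalizing constants cancel.
    f_sn = np.exp(-np.sum((x - s) ** 2) / (2 * sigma ** 2))
    f_n = np.exp(-np.sum(x ** 2) / (2 * sigma ** 2))
    return f_sn / f_n

W = 1.0                            # an arbitrary criterion number
x = s + sigma * rng.standard_normal(s.shape)   # one input: signal plus noise
print("SN" if likelihood_ratio(x) >= W else "N")

Under these assumptions the logarithm of the likelihood ratio reduces to a weighted sum of the input samples, which is why the analysis in Section 4 can work with sums of independent random variables.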

Before attempting to select the particular assumptions, it is necessary to consider exactly why it is desirable to have a model which leads to these quantitative predictions. In this paper two reasons, leading to different criteria for selecting the specializing assumptions, will be considered. One is that one may want a model which is descriptive or explanatory. A model of this sort plays the role of a scientific theory describing the relations between classes of observable phenomena. The assumptions for a model of this sort must lead to predictions which permit experimental verification. The other is that one may want a model which he can use as an interpretive tool in considering the implications of experimental data with reference to particular questions he is asking. In this case, he may find it reasonable to make assumptions which can be fairly well satisfied only after appropriate experimental manipulation. The two cases might lead to quite different assumptions. If a psychoacoustician is interested in a descriptive model which explains how people interpret acoustic stimuli in the course of their everyday experience, he must concern himself with the types of signals and the types of noise encountered in what he hopes are typical environments. He must also conduct experiments to find out which of the methods for the selection of W appear to be consistent with behavior. Based on the evidence, one might make a cautious statement which says: "People behave in a way which can be described as selecting W by a particular method." That they actually choose a number may not be determinable. One might also find experimentally that there are lower bounds which appear to be placed on W (that a threshold exists), but this, of course, is an experimental question.

On the other hand, the psychoacoustician may be interested in studying the auditory equipment available to observers in listening to acoustic signals. He may find that the analyses he wishes to perform can best be carried out if he uses signals and noises which are atypical of those of everyday environments. This may permit greater agreement between the physical conditions of the experiment and the assumptions made to permit the computations. The fact that the physical conditions are atypical with regard to everyday environments does not necessarily degrade the quality or the usefulness of the answers to the questions he is asking. He may also select a particular method as his assumption for computing W, and again there is no reason why this method has to be typical of those encountered in everyday environments. It need only be a reasonable assumption for the class of experiments being performed. If it is necessary to train observers to behave in a particular way in order to get at the answers to particular experimental questions, then this training is permissible. That this behavior is atypical of everyday environments is not a valid criticism of the assumption made for this purpose. The block diagram of Figure 2 illustrates the experimental schema of the second purpose. The diagram, from Tanner and Birdsall (Ref. 5), describes a procedure for using optimum and non-optimum models to draw conclusions about the human observer's hearing mechanism. The non-optimum models are degraded from the optimum by introducing statistical uncertainty into the form of the signal as it is transmitted.

[Figure 2 shows parallel channels in band-limited white Gaussian noise: one in which the signal is known exactly and the receiver is ideal, one in which the signal is known only statistically, and one in which the receiver under study is the human observer.]
FIG. 2 COMPOSITE BLOCK DIAGRAM OF CHANNELS FOR PSYCHOPHYSICAL EXPERIMENT.

Under matched conditions (using the same energy and yielding the same performance), the human observer is said to introduce the same degree of uncertainty into a channel using an ideal transmitter as the non-optimum transmitter does in the channel which has an ideal receiver. Conclusions involving this type of reasoning require measures of efficiency and, in order to compute efficiencies, it is necessary to have a standard of reference. Assumptions are required to permit this computation. These assumptions should be of such a nature that either they, or an equivalent set, can be reasonably approximated by physical conditions in the laboratory. The purpose of this discussion is to select and justify the specializing assumptions necessary to bring the theory of signal detectability into agreement with the laboratory situation, so that the theory can be used as a tool to aid in interpreting data to answer questions about the performance of the hearing mechanism. It is hoped that the answers to these questions will form a basis for developing a descriptive model of the hearing mechanism. This description may then be incorporated into a more comprehensive model describing or explaining behavior in everyday environments. For the present, however, attention will be restricted to the use of the model as an interpretive tool.

3. THE ASSUMPTION OF EXPECTED VALUE MAXIMIZATION

One of the first problems encountered in developing techniques for using the theory of signal detectability as a tool for psychoacoustic studies was the determination of the observer's control of the cut-off value (the number W).

As his false-alarm rate varied, did it look as if he were guessing, or behaving as if he were varying a cut-off value? To study this problem it was desirable to trace out ROC curves. The most logical procedure appeared to be to give the observer a basis for computing W in his experiments. An examination of the various criteria reviewed by Birdsall (Ref. 4) suggested that expected value maximization would be one of the easiest to explain to an observer. The point which must be kept in mind is that the purpose of introducing a function which is to be maximized is to permit the manipulation necessary for collecting the data from which an ROC curve can be traced. The number W is referred to as β for this special case, to distinguish it from the general weighting function. In order to maximize the expected value, the input is said to consist of signal plus noise whenever the following condition is satisfied:

\[ \ell(x) \ge \beta = \frac{P(N)}{P(SN)} \cdot \frac{V_{N \cdot CA} + K_{N \cdot A}}{V_{SN \cdot A} + K_{SN \cdot CA}} \]

where P(N) and P(SN) are the a priori probabilities of noise alone and signal plus noise respectively; V is a gain and K a cost or loss. The subscripts N and SN refer to the truth of the hypotheses noise alone and signal plus noise; and the subscripts A and CA to the event falling into the criterion A, or into the criterion CA, the complement of A, leading to the acceptance of the hypothesis noise alone. By changing either the a priori probabilities or the values and costs, it is possible to shift the observer's position on his ROC curve, provided that the observer is attempting to maximize the expected value of the outcome of his performance.

The first problem is to establish for an observer an environment in which it is desirable to maximize expected value. Then, if it can be established through experiment that the observer is reacting to the changes in a consistent way, even if he is not actually maximizing the expected value of the outcome, the purpose is achieved. The first step, then, is to determine those conditions which make the expected value of the outcome a desirable function to maximize. First of all, the expected value of a game is realized when both the player and the opponent have infinite resources. If the resources of the player are finite but sufficiently large to permit a great enough number of plays that the law of large numbers is applicable, and if the expected value of the game is greater than zero for the observer, then the expected value is a desirable function to maximize. These conditions can be established in the following way. An observer is paid a basic rate, say one dollar for an hour during which he makes 400 observations. For each correct decision he receives one tenth of a cent; for each incorrect decision he loses one tenth of a cent. Thus, his maximum loss in a one-hour period is 40 cents, and his basic pay rate is sufficient to keep him in the game indefinitely. Since the task always involves detecting a signal with some energy, the game can always be made favorable to the observer. Since it is possible to satisfy the conditions, it remains only to test experimentally whether or not the observer, either through training or through an already developed ability, can react to changes in a priori probabilities or to changes in the pay-off matrix.
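As a concrete illustration of these conditions, the following sketch computes β for the expected-value criterion and the expected hourly earnings under the pay scheme just described. The payoff entries shown are illustrative (a symmetric one-tenth-cent matrix consistent with the text), not a record of the original experimental values.

p_n, p_sn = 0.5, 0.5       # a priori probabilities (assumed equal here)
V_N_CA = 0.1               # gain in cents for a correct report of noise alone
K_N_A = 0.1                # cost in cents of a false alarm
V_SN_A = 0.1               # gain in cents for a correct detection
K_SN_CA = 0.1              # cost in cents of a miss

# Respond "signal plus noise" whenever l(x) >= beta:
beta = (p_n / p_sn) * (V_N_CA + K_N_A) / (V_SN_A + K_SN_CA)
print(beta)                # 1.0 for this symmetric payoff matrix

def expected_cents_per_hour(p_hit, p_fa, n_obs=400, base=100.0):
    # Base pay of one dollar plus the expected win or loss on n_obs trials.
    per_trial = (p_sn * (p_hit * V_SN_A - (1 - p_hit) * K_SN_CA)
                 + p_n * ((1 - p_fa) * V_N_CA - p_fa * K_N_A))
    return base + n_obs * per_trial

print(expected_cents_per_hour(0.9, 0.2))   # 128.0: a 28-cent gain over base

Raising P(SN), or the value attached to a correct detection, lowers β; an observer attempting to maximize expected value should respond by accepting a higher false-alarm rate, and this is the manipulation used to trace out an ROC curve.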

Some of the early experiments support the contention that a human observer can do this. Data reported by Smith and Wilson (Ref. 6), Munson and Karlin (Ref. 7), Tanner and Swets (Ref. 8), and Tanner, Swets, and Green (Ref. 9) all support the contention that the behavior of the human observer can be manipulated by changing the expected value of the game. The assumption that the expected value is maximized in a carefully designed laboratory experiment is used because it appears to accomplish the purpose for which it is intended. It is not important whether people seem to attempt to maximize expected value in everyday environments. The linearity of the utility of money is not involved. The fact that the pay per correct decision is an apparently negligible amount is also unimportant. There are thousands of observations involved, and the observer can increase his pay rate by as much as twenty cents an hour. The assumption is adopted because it appears to work in the laboratory experiment.

4. THE FOURIER SERIES BAND-LIMITED ASSUMPTION

The reason for wanting to make an assumption permitting computation of the distributions of the likelihood ratio, or of some monotonic function of the likelihood ratio, is to be able to state the performance of an ideal receiver under conditions which are at least very closely related to those of the laboratory experiment. This performance depends on the separation between the two hypotheses: that for signal plus noise and that for noise alone.

In undertaking this problem, Peterson, Birdsall, and Fox assumed a sampling theorem which permitted them to apply discrete statistics to the analysis. Slepian (Ref. 10) argued that their results depended on the particular form of sampling theorem they employed, and showed that a different assumption would lead to drastically different results. David and Matthews (Ref. 11) computed the separation between two distributions as a function of the number of sampling points, using an assumption essentially the same as Slepian's, showing that, as the number of sampling points increases beyond that of Peterson, Birdsall, and Fox, the separation increases slightly. It is the purpose of this section to analyze and evaluate these assumptions with reference to their accuracy and usefulness in psychoacoustic experiments in which the model is employed as a tool to assist in the interpretation of the data. Peterson, Birdsall, and Fox analyzed the case of a signal specified exactly, embedded in a noise which was assumed to be additive, Fourier series band-limited, white Gaussian noise. In their analysis, both the noise waveforms and the signal plus noise waveforms were assumed to be band-limited in the same way. In this analysis both the signal and the noise are defined as voltage waveforms over an open interval of time, 0 to T. According to the sampling theorem, an input waveform consisting either of noise alone, n(t), or of signal plus noise, s(t) + n(t), can be described precisely by 2WT amplitude values, where W is the bandwidth of the signal plus noise waveforms or of the noise waveforms. It is to be noted that the description of the waveform applies only to the interval 0 to T. Nothing is said about its form outside of the interval or about its method of generation.

The 2WT values can be taken in a number of ways. However, Peterson, Birdsall, and Fox work with 2WT equally spaced independent values, permitting the application of the following statistical theorems: 1) the likelihood ratio of a sequence of independent measures, taken under the same conditions, is the product of the likelihood ratios of the individual measures in the sequence; and 2) since they work with logarithms of likelihood ratios, the theorems applying to the means and variances of the distributions of sums of random variables. The analysis shows that the distributions of the natural logarithm of the likelihood ratio under the condition noise alone and under the condition signal plus noise are both normal, having equal variances. The separation between the means, divided by the standard deviation, is \( (2E/N_0)^{1/2} \), where E is the signal energy and \( N_0 \) is the noise power per unit bandwidth. Fox (Ref. 12) points out that "under the restrictions: 1) the populations N and SN are finite dimensional; and 2) that the functions of time in these populations be (real) analytic, it is possible to prove that sampling plans utilizing arbitrarily small sample intervals can be found, all of which yield the same error probabilities or ROC curves." He further points out that the proof depends on the assumption of errorless measurements. He says, "It is not at all uncommon that the assumption that errorless measurements are possible should lead to physically ridiculous conclusions."

Pushed to its limit, a proof of this nature permits one to incorporate information based on energy which exists outside of the observation interval. Consider, for example, the following conditions. An observation interval, 0 to T, is defined. The populations N and SN are defined over an interval 0' to T' which completely includes the interval 0 to T. According to the proof, a sampling plan on the interval 0 to T can be devised which will define precisely the waveform on the interval 0' to T', and presumably the results would be the same as if the observation had been over the interval 0' to T'. If the sampling theorem employed by Peterson, Birdsall, and Fox applies to an observation over the interval 0' to T', then it would be possible to increase information by using more than 2WT measures over the interval 0 to T. In fact, one would expect that at least 2WT' measures would be required. The separation between the hypotheses would be greater than \( (2E/N_0)^{1/2} \) if the signal energy is calculated as that energy contained within the interval 0 to T. If it is calculated on the interval 0' to T', then the separation would again be exactly \( (2E/N_0)^{1/2} \). Whether or not the detectability of a signal can be expressed by the number \( (2E/N_0)^{1/2} \) depends on what assumptions are made with reference to the way in which one can use knowledge of events happening outside of the observation interval. If the assumptions apply only to the observation interval, then \( (2E/N_0)^{1/2} \) appears to be an acceptable quantity. Results which predict greater detectability depend upon using the observations within the interval to describe the waveform outside of the interval. In the study of Peterson, Birdsall, and Fox, there was no basis for using information outside of the interval. The populations N and SN are defined only over the open interval 0 to T.
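A small simulation makes the finite sampling plan concrete. The sketch below, a hedged illustration with arbitrary parameter values, draws 2WT independent noise samples, uses the correlation between input and signal (a monotonic function of the likelihood ratio under these assumptions), and compares the empirical separation of the two distributions with \( (2E/N_0)^{1/2} \).

import numpy as np

rng = np.random.default_rng(1)
W, T = 1000.0, 0.1                  # bandwidth in cps and interval length (assumed)
n = int(2 * W * T)                  # 2WT = 200 sample points on the open interval
dt = 1.0 / (2 * W)
t = (np.arange(n) + 0.5) * dt

N0 = 1.0                            # noise power per unit bandwidth
sigma = np.sqrt(N0 * W)             # each independent sample has variance N0*W
s = 10.0 * np.sin(2 * np.pi * 500.0 * t)   # a gated sinusoidal signal
E = np.sum(s ** 2) * dt             # signal energy on the interval 0 to T

noise = rng.normal(0.0, sigma, (20000, n))
stat_n = noise @ s                  # correlation statistic, noise alone
stat_sn = (noise + s) @ s           # correlation statistic, signal plus noise

d_predicted = np.sqrt(2 * E / N0)
d_empirical = (stat_sn.mean() - stat_n.mean()) / stat_n.std()
print(d_predicted, d_empirical)     # the two agree within sampling error

In an experiment, the separation observed for the human observer is compared against this ideal value; the efficiency variable η of Ref. 5 is based on that comparison.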

While this analysis can be extended, the applicability of the extension to conditions outside of the interval can be seriously questioned.

5. THE FOURIER TRANSFORM BAND-LIMITED ASSUMPTION

Slepian (Ref. 10) showed that, for the case of a noise signal in a noise background, the results of an analysis based on sampling theorems depend critically on the particular form of the assumptions employed. Particularly important are the relations between the way the "signal-noise" and the "noise-noise" fall off at the skirts of the band. Taking one's assumptions literally, and assuming perfect measurement, it is possible to establish a set of conditions which lead to perfect detection. He argues that attention should be redirected to the more interesting cases, where perfect measurement is not assumed and only finite detectability is possible. David and Matthews (Ref. 11) performed some computations to show how detection increases as the number of measures is increased. In making these computations, they took literally the assumptions which extend the observation beyond the interval 0 to T. In the limit, the equations upon which David and Matthews' computations were based can be shown to predict perfect detectability. Even though this is the case, their results indicate that detectability increases only slightly as the number of sampling points is increased beyond 2WT. From their graphs, one gets the impression that the separation between the two distributions is approaching a finite asymptote, only slightly greater than that predicted by the finite sampling plan employed by Peterson, Birdsall, and Fox. Certainly, if the separation is to become infinite, it is approaching infinity very slowly.

The fact that these results do not appear to be consistent with the predictions based on carrying their assumptions to the limit might be attributed to the fact that the computer (an IBM 704) which they employed has only a finite capacity to carry out errorless computations. The fact that error, no matter how small, was introduced into the computations may be equivalent to introducing error, no matter how small, into the measurements. If so, then David and Matthews have succeeded in constructing an excellent demonstration of Slepian's point. The Fourier transform analysis describes the waveform as if it has always been in existence and always will continue to exist. The waveform is completely deterministic. Given a complete description of the waveform over any small interval, the description can be extended from \( -\infty \) to \( +\infty \). If the signal is present, one description applies; if noise alone is present, another applies. The noise is completely predictable. Pushing the assumption to the limit, the signal becomes detectable with certainty. Extending the assumptions, and given absolutely precise measurements, it should then be possible to design an experiment in which, as the measures are increased in number, a description will evolve which converges on one of two descriptions. To be convincing, however, it will have to be a real experiment; it must be different from a set of computations determining the consequences of a carefully stated set of assumptions. Using the assumptions to construct an analytic tool, or as the basis for the development of a model, is one thing; to claim that in the limit they apply to a set of real conditions (either laboratory or everyday life) is quite different. This claim must stand or fall with the outcome of carefully conducted experiments.

6. COMPARISON OF THE SAMPLING THEOREMS

In comparing the sampling theorems, it is necessary to keep carefully in mind that Fourier analysis is a mathematical tool used to describe voltage waveforms. The mathematics incorporates precisely stated assumptions involving band limitations and processes extending over infinitely long times. To use Fourier analyses, it is not necessary to assert that the generating process agrees in detail with the assumptions; it is necessary to argue only that the analysis leads to results which are consistent with the purposes for which the analysis is being performed. In order to compare the two sampling theorems, let it be accepted that neither the Fourier series nor the Fourier transform band-limitation assumption can be precisely satisfied in the physical world. Even if a precise match could be achieved, it would be impossible to demonstrate the match with certainty based on experimental data. Thus, the first problem in the evaluation is to determine what happens to the analysis in the two cases if the assumptions are only approximated. Both the analysis based on the Fourier transform assumption and that based on the Fourier series assumption assume that the measures are exactly precise and that they are taken at exactly precise points in time. Suppose that arbitrarily small error is permitted in the location of the points in time: what happens to the results based on the two analyses?
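The question can be examined numerically. The sketch below is offered only as an illustration under stated assumptions: the noise autocorrelation is taken as N0·W·sinc(2Wτ) (ideal band-limitation), the signal is known exactly, and the optimum squared separation for n sample points is computed as sᵀC⁻¹s, where C is the covariance matrix of the sampled noise plus a measurement-error variance eps on the diagonal. With eps nearly zero the separation should creep upward as n grows past 2WT, while with a modest eps it should level off near 2E/N0, in the manner of the David and Matthews curves; the exact numbers depend on the conditioning of C.

import numpy as np

W, T, N0 = 1000.0, 0.01, 1.0        # 2WT = 20; all values are illustrative

def d_squared(n, eps):
    t = (np.arange(n) + 0.5) * (T / n)
    s = 30.0 * np.sin(2 * np.pi * 500.0 * t)     # hypothetical signal
    tau = t[:, None] - t[None, :]
    C = N0 * W * np.sinc(2 * W * tau)            # np.sinc(x) = sin(pi x)/(pi x)
    C = C + eps * np.eye(n)                      # measurement-error variance
    return float(s @ np.linalg.solve(C, s))

for n in (20, 40, 80, 160):
    print(n, d_squared(n, eps=1e-8), d_squared(n, eps=1e-2 * N0 * W))

At n = 20 the sample spacing is 1/2W, the covariance matrix is diagonal, and both columns reproduce 2E/N0 = 9; finite computing precision plays the same role here that it apparently played in the David and Matthews computations.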

In the case where the Fourier transform band-limited assumption is incorporated, it is argued that the separation between the two hypotheses no longer goes to infinity in the limit. It appears to approach a finite asymptote, perhaps that suggested by the curves of David and Matthews. It is also possible that this asymptote applies to any finite set of measures, and that certain detection can be achieved only with an infinite number of measures. The analysis based on the Fourier series assumption describes the separation of the two distributions based on a finite number (2WT) of exactly precise measures. If arbitrarily small error is permitted in these measures, the result differs only slightly. As the error becomes smaller, the separation between the two distributions approaches asymptotically the finite value predicted by the analysis based on the exactly precise measurements. It appears that the necessity of introducing approximations, in attempting to match the assumptions of the analyses physically, does not do particular violence to the Fourier series analysis, while it does to the Fourier transform analysis. Because the Fourier series assumption appears to be less sensitive to approximation, it seems to be the more desirable of the two to work with, particularly for the purpose of developing an analytic tool to be used as an aid in interpreting experimental data. In this case, one does not have to accept the conditions as they exist; one can choose the conditions to suit one's purposes. How closely can one hope to create a set of conditions agreeing with the assumptions of the Fourier series analysis?

7. THE PHYSICAL APPROXIMATION

The problem is to create a set of physical conditions to which an analysis based on a precisely stated set of assumptions applies. If this can be achieved, it is not necessary to have the physical conditions agree closely with the assumptions underlying the analysis. The success in achieving a laboratory condition for which the theory is useful depends on the extent to which the analysis applies. In order to evaluate the extent to which the analysis applies to the physical conditions, it is worth recalling that the purpose of the application is to determine performance upper bounds which can be used as references against which the subjects' performance can be compared. These comparisons are valuable in interpreting the experimental results. It is also worth noting that the physical conditions are intended to be such that a particular analysis applies. If one chooses a different set of physical conditions, then this analysis might not apply. If one chooses a different type of analysis, then the physical conditions described below may not be satisfactory. In this case, the task is to find a set of physical conditions to which an analysis with a known solution applies. One might instead attempt to study a set of physical conditions to determine what types of assumptions would be required of an analysis applying to that particular set of physical conditions. As long as satisfactory agreement can be established, it makes little difference from which method of approach it evolved.

However, it might be better to start with a problem which has been solved and attempt to manipulate the physical conditions, than to start at the other end and try to manipulate the mathematics. A suitable mathematics may not exist for the set of physical conditions selected. The analysis to be applied is that based on the Fourier series band-limited assumption. The assumptions of the analysis are:

1) That the waveforms arising from signal plus noise and the waveforms arising from noise alone are band-limited in the same way.
2) That over the open interval, 0 to T, both the signal plus noise waveforms and the noise alone waveforms conform to the conditions of Fourier series band-limitation.
3) That the noise is white Gaussian noise over the frequency band.

Both the second and the third of the assumptions appear impossible to satisfy physically. The first probably can be satisfied. A physical arrangement which conforms to the conditions necessary for the analysis to apply is illustrated in Figure 3. The noise and the signal are generated by a General Radio noise generator and a Hewlett-Packard audio oscillator respectively. The signal is gated, yielding a pulse which is a close approximation to a segment of a sine wave. The spectrum of the noise is approximately white from 50 cycles to 30 or 40 thousand cycles, and, except for the absence of rare infinite peaks, its amplitude distribution is nearly Gaussian. Since the signal is gated, almost all of its energy is concentrated in the interval 0 to T.

[Figure 3 shows a noise generator (approximately white from 50 to 40,000 cycles) and a gated audio oscillator feeding an adder; the sum passes through a Fourier series band-limiter to the earphones (flat from about 200 to 6000 cycles).]
FIG. 3 BLOCK DIAGRAM ILLUSTRATING A LABORATORY ARRANGEMENT CONSISTENT WITH THE USE OF THE 2WT SAMPLING PLAN

A major concentration of its energy is found at its center frequency. As one goes to either side, the energy density drops off as \( \left( \frac{\sin \pi \Delta f T}{\pi \Delta f T} \right)^2 \), where \( \Delta f \) is the separation from the center frequency. Thus, there can be only a negligible portion of the energy outside of that portion of the noise spectrum which is white. Any signal energy outside of this band could be only infinitesimally detectable because of the noise of the error of measurement. The gated signal is then added to the noise, at which point the spectrum of the signal is well contained within the spectrum of the noise. Thus, both signal plus noise and noise alone appear to be band-limited in the same way. Assume now that both are passed through a Fourier series band-limiter, an unrealizable piece of equipment. This band-limiter is one whose output actually satisfies the assumptions of Peterson, Birdsall, and Fox during the open interval, 0 to T, and whose output is unanalyzable outside of the interval. An analysis of the detectability at the output of that filter is an analysis which considers all of the information at the input, with the exception only of that small portion which lies outside the filter transmission band. This is nearly zero and can be ignored. If the output of the band-limiter is now fed to the earphones, the output of the earphones is essentially the same as if the band-limiter did not exist. The earphones are known to have a flat response from approximately 200 cycles to 6000 cycles. Again, a Fourier series band-limiter creates the conditions for which the analysis applies, and again removes almost no information from the waveform at the output of the adder. At each point the analysis leads to the same result, barring only minute differences. The earlier the analysis is performed, the more likely it is to be an over-estimate of the performance.
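As a check on the energy argument, the following sketch computes numerically the fraction of a gated tone's energy lying outside an assumed 50 to 40,000 cycle noise band; the tone frequency, duration, and band edges are illustrative stand-ins for the equipment described above.

import numpy as np

fc, T = 1000.0, 0.1               # a 1000-cycle tone gated for 0.1 second (assumed)
fs = 200_000.0                    # analysis rate well above the upper band edge
t = np.arange(int(fs * T)) / fs
x = np.sin(2 * np.pi * fc * t)    # the gated segment; zero outside 0 to T

# Zero-pad so the spectrum is sampled between the nulls of the sinc-squared
# envelope, then compare the energy inside and outside the noise band.
x = np.concatenate([x, np.zeros(7 * x.size)])
energy = np.abs(np.fft.rfft(x)) ** 2
f = np.fft.rfftfreq(x.size, 1.0 / fs)
outside = (f < 50.0) | (f > 40_000.0)
print(energy[outside].sum() / energy.sum())   # a very small fraction

The printed fraction is governed by the sinc-squared fall-off noted above, and its smallness is precisely the condition the argument requires.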

It may be possible to describe the waveform within the interval, 0 to T, with other analyses. For example, it may be possible to describe a set of generators which could generate this waveform if they had been operating forever. But one should not forget that this waveform was more likely to have been generated by equipment which is turned on in the morning and turned off at night. The fact that waveforms can be analyzed as if they are made up of a set of sine wave components leads to a useful mathematical tool. It does not limit the method of generating waveforms to the process of combining sine waves. If one wishes to attribute to the physical waveforms the properties he has assumed for the purpose of analysis, then, as Slepian shows, he is likely to arrive at conclusions which are obviously ridiculous from a realistic standpoint.

8. SUMMARY AND CONCLUSIONS

The theory of signal detectability has been examined from the standpoint of determining a set of satisfactory assumptions for the purpose of developing an interpretive tool for use in psychophysical experiments. It was concluded that the assumption that the observer attempts to maximize the expected value of the outcome of the experiment is satisfactory for this purpose, and that a set of physical conditions can be established which justify a computation of the detectability of a signal in noise based on a finite sampling plan involving 2WT amplitude values over the open observation interval 0 to T.

REFERENCES

1. A. J. F. Siegert, Chapter 7 in J. L. Lawson and G. E. Uhlenbeck, Threshold Signals. New York: McGraw-Hill, 1950.

2. W. W. Peterson, T. G. Birdsall, and W. C. Fox, "The Theory of Signal Detectability," Trans. Professional Group on Information Theory, Inst. Radio Engrs., 1954, PGIT-4, 171-212.

3. D. Van Meter and D. Middleton, "Modern Statistical Approaches to Reception in Communication Theory," Trans. Professional Group on Information Theory, Inst. Radio Engrs., 1954, PGIT-4, 119-145.

4. T. G. Birdsall, "The Theory of Signal Detectability," in H. Quastler (ed.), Information Theory in Psychology. Glencoe, Ill.: Free Press, 1954.

5. W. P. Tanner, Jr. and T. G. Birdsall, "Definitions of d' and η as Psychophysical Measures," J. Acoust. Soc. Amer., 1958, 30, 922-928.

6. M. Smith and Edna A. Wilson, "A Model for the Auditory Threshold and its Application to the Multiple Observer," Psychol. Monogr., 1953, 57, No. 9.

7. W. A. Munson and J. E. Karlin, "The Measurement of the Human Channel Transmission Characteristics," J. Acoust. Soc. Amer., 1956, 26, 542-553.

8. W. P. Tanner, Jr. and J. A. Swets, "A Decision-Making Theory of Visual Detection," Psychol. Rev., 1954, 61, 401-409.

9. W. P. Tanner, Jr., J. A. Swets, and D. M. Green, "Some General Properties of the Hearing Mechanism," Tech. Rept. No. 30, Electronic Defense Group, The University of Michigan, March 1956.

10. D. Slepian, "Some Comments on the Detection of Gaussian Signals in Gaussian Noise," Trans. Information Theory, Inst. Radio Engrs., 1958, IT-4, 65-68.

11. E. E. David and M. V. Matthews (unpublished memorandum).

12. W. C. Fox, "Signal Detectability: A Unified Description of Statistical Methods Employing Fixed and Sequential Decision Processes," Tech. Rept. No. 19, Electronic Defense Group, The University of Michigan, 1953.