THE UNIVERSITY OF MICHIGAN
OFFICE OF RESEARCH ADMINISTRATION
ANN ARBOR

THE THEORY OF ELECTRONIC RANDOM SELECTORS

Technical Report No. 115
2899-37-T

Cooley Electronics Laboratory
Department of Electrical Engineering

By: G. A. Roberts
Approved by: A. B. Macnee

Project 2899
TASK ORDER NO. EDG-3
CONTRACT NO. DA-36-039 sc-78283
SIGNAL CORPS, DEPARTMENT OF THE ARMY
DEPARTMENT OF THE ARMY PROJECT NO. 3A99-06-001-01

December 1960

TABLE OF CONTENTS
                                                                        Page
LIST OF ILLUSTRATIONS                                                     iv
ABSTRACT                                                                   v
1. INTRODUCTION                                                            1
2. THEORY OF ELECTRONIC RANDOM SELECTORS                                   1
   2.1 Preliminary Discussion                                              1
   2.2 Randomness Requirements                                             2
   2.3 Probability-Control Circuit                                         3
       2.3.1 Probability Control for Equal Probabilities                   8
       2.3.2 Probability Controls for Unequal Probabilities                8
   2.4 Randomness Generation                                              11
       2.4.1 Satisfaction of Randomness Requirement 3                     11
       2.4.2 ε and Q for Several Distributions                            13
       2.4.3 The Satisfaction of Randomness Requirements 1 and 2          15
       2.4.4 Determination of Minimum Allowable Time Between Questions    16
       2.4.5 Random Selector With Random Source as an Integral Part
             of the Probability Control                                   17
   2.5 Predictable Binary-Sequence Generators                             19
3. SUMMARY                                                                21
APPENDIX A                                                                23
   A.1 Derivation of ε and Q for Uniform Distribution                     23
   A.2 Derivation of ε and Q for Triangular Distribution                  23
   A.3 Derivation of ε and Q for Exponential Distribution                 25
   A.4 Derivation of ε and Q for Exponential Distribution Alternated      26
   A.5 Derivation of ε and Q for Normal Distribution                      28
   A.6 Derivation of ε and Q for χ² Distribution with 2N Degrees
       of Freedom                                                         29
REFERENCES                                                                30

LIST OF ILLUSTRATIONS
Figure                                                                  Page
 1   Random selector                                                       2
 2   Transformation of the infinite time-domain into a finite space
     by the spinning disk                                                  5
 3   Probability-density function, g(x), over the finite space vs. x       6
 4   Probability-density function, f(t), in the infinite time domain       7
 5   The probability-density function, g(t), in the finite time domain
     obtained from Fig. 4 by rotating the disk at different speeds         7
 6   Equally-likely random selector with N possibilities                   8
 7   Binary random selector with adjustable probability                    9
 8   Binary random selector with continuously adjustable probability
     control                                                               9
 9   Waveform for voltage-adjustable random selector for small
     probability range                                                    10
10   Probability-density curve wrapped around a cylinder                  13
11   Peak-to-peak fractional error of the wrap-around probability-
     density as a function of the wrap-around factor K                    14
12   Probability-density function                                         16
13   Full-wave rectified Gaussian noise                                   17
14   Density functions for rectified and unrectified Gaussian noise       18
A.1  Uniform distribution                                                 23
A.2  Triangular distribution                                              24
A.3  Exponential distribution                                             25
A.4  Exponential distribution alternated                                  26
A.5  Normal distribution                                                  28
A.6  χ² distribution with 2N degrees of freedom                           29

ABSTRACT

The basic theory of electronic random selectors is presented. An electronic random selector is a completely electronic device that makes a truly random selection of one item from a set of items. Randomness requirements, probability-control circuits, and randomness generation are discussed. A brief comment on predictable binary-sequence generators is included.

THE THEORY OF ELECTRONIC RANDOM SELECTORS

1. INTRODUCTION

In 1953, when the development of N. P. Psytar (Noise Programmed Psychophysical Tester and Recorder) was begun at The University of Michigan, there were no electronic machines which could produce truly random selections in response to a given request. Thus, as a part of the overall N. P. Psytar program, electronic random selectors were developed. An electronic random selector can usually be separated into two parts: a probability-control circuit and a randomness generator. In line with this breakdown, the organization of this report consists of a discussion of the randomness requirements, probability control, randomness generation, and a brief comment on predictable binary-sequence generators.

2. THEORY OF ELECTRONIC RANDOM SELECTORS

2.1 Preliminary Discussion

The input to a random selector is a request that a selection be made. This request is called a "question," and the time at which it is made is called the "question time." The output of a random selector is a selection from a set of possibilities. The probability of selection for each of the members of a set of possibilities is established prior to the "question time." Following this, the random selection is made in accordance with the established probabilities. For the purpose of analysis,

then, any random selector or random-number generator can be thought of as having two main parts: a probability control and a randomness generator (see Fig. 1).

[Fig. 1. Random selector. Block diagram showing a randomness generator and a probability control.]

A set of possibilities for a random selector might be, for example, the letters of the English alphabet; and the probability of any particular letter being selected could be equal to the frequency of its occurrence in the language.

The balance of this section is devoted to the expansion and clarification of the foregoing material in the following order: first, the randomness requirements are set forth; second, the theory of the probability control is described with the assumption that the input is a random function of time; third, the theory of generating this random input is described; and fourth, predictable binary-sequence generators are mentioned.

2.2 Randomness Requirements

In general, three requirements must be met if selections are to be random:

1. The selections must be independent of one another. This means that the a priori probability of any selection must be the same as the conditional probability for all selections.

2. The selections must be independent of their applications. For this to be meaningful a universe of discourse must be

specified. Within this universe there is an experiment or test with events from which random selections are required. That the selections must be independent of their applications means that the events within this universe and the selections must be independent. Thus, the probability of any selection from the random selector must remain unchanged by any knowledge concerning the experimental or test events.

3. For each "question" one and only one selection can be made from N possibilities. The probability of each of the N possibilities must depend on only the probability control. In effect, this requirement means that probabilities should not change unless the probability control is changed.

These requirements have been set forth to give sequences of selections that do not have predictable patterns and that have stable probabilities.

2.3 Probability-Control Circuit

The probability-control circuit is a device which, upon receiving a "question" input, makes one selection from a set of N possibilities. (For other types of outputs, logic circuits can be employed to obtain the desired characteristics.) The probability of each selection is determined by the setting of each probability control. In some circuits the probabilities will be predetermined when the circuit is designed; manually operated controls will be provided in others so that an operator may adjust the probabilities.

It is meaningless to talk of a probability-control circuit if nothing is said of randomness. For instance, by improperly designing a random selector it would be possible to always make the same selection independently of the probability setting for that selection. A proper design

requires that the following conditions be met: (1) If the probability-control circuit does not have a built-in random source, then the "question times" must be randomly distributed in time. (In effect, the probability-control circuit transforms the random "question times" into random spatial selections.) (2) If the probability-control circuit has a built-in random source, then the "question times" need not be randomly distributed in time. But, depending upon the nature of the random source, certain restrictions on the input may be necessary.

Any random selector and probability control is dependent on time-varying factors. For this reason it is possible to describe all probability controls as spinning disks. This means that an analogy can be made between any electronic probability-control circuit and an appropriate mechanical spinning disk (Refs. 1, 2). (Mechanical spinning-disk random selectors with which many people are familiar are roulette wheels and carnival wheels.) The typical operation of a spinning-disk probability control is as follows: The disk has N segments, one for each selection, and is normally spinning. A selection is obtained by stopping the disk at a random time. The segment indicated by a fixed arrow indicates the selection. A set of selections is obtained by repeating this procedure for each additional member of the set.

Greater attention will now be given to the general theory of the probability of a selection. The above ideas should be kept in mind as background material, but they should not be allowed to confuse the new ones. The spinning disk may be considered as a device which transforms a part or all of the infinite time-domain into some finite space. A

graphical interpretation is shown in Fig. 2. If the disk spins at a uniform rate it transforms the infinite time domain into a finite time domain of duration T.

[Fig. 2. Transformation of the infinite time-domain into a finite space by the spinning disk.]

The time for the disk to make one revolution is called the modulus period and is symbolized by T. There is associated with each of the possible random selections a segment in the finite space. These segments are nonoverlapping because they correspond to the segments on the disk, which are nonoverlapping. A "question time" occurs in the infinite time domain, and the disk is stopped at this "question time." Corresponding to this time in the infinite time domain there is a point in the finite space which is the transform of the "question time." The segment of the finite space containing this point is the selection.

Over the finite space there is a probability-density distribution, g(x), of the stopping point. An arbitrary function, g(x), is shown in Fig. 3. The shape of the probability-density distribution is, of course, a function of the design of the equipment. More will be said about the actual formation of this probability-density distribution later in Section 2.3. The segments shown in Fig. 3 identify the parts of the finite space corresponding to the possible selections. The probability of selecting

the ith segment, p_i, is given by

    p_i = ∫_{x_i}^{x_{i+1}} g(x) dx    (2.1)

where

    ∫_{x_0}^{x_{N+1}} g(x) dx = 1

and x_i is the abscissa of the left-hand side of the ith segment, x_{i+1} is the abscissa of the right-hand side of the ith segment, g(x) is the probability-density function of the stopping point in the finite space, and p_i is the probability of the ith segment being selected.

[Fig. 3. Probability-density function, g(x), over the finite space vs. x, with segment boundaries x_0, x_1, x_2, ..., x_N, x_{N+1}.]
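Equation 2.1 can be evaluated numerically for any density over the finite space. The following sketch in Python is an illustration only; the density g(x), the segment boundaries, and the use of a simple trapezoidal rule are assumptions chosen for the example rather than values taken from the report.

    # Sketch: selection probabilities p_i of Eq. 2.1 for an assumed density g(x).
    # The density and the segment boundaries below are illustrative only.
    import math

    def segment_probabilities(g, boundaries, steps=2000):
        """Return p_i = integral of g(x) over [x_i, x_{i+1}] by the trapezoidal rule."""
        probs = []
        for lo, hi in zip(boundaries[:-1], boundaries[1:]):
            h = (hi - lo) / steps
            area = 0.5 * (g(lo) + g(hi)) * h
            area += sum(g(lo + k * h) for k in range(1, steps)) * h
            probs.append(area)
        return probs

    if __name__ == "__main__":
        # Assumed example: a mildly non-uniform, normalized density on [0, 1].
        g = lambda x: 1.0 + 0.2 * math.sin(2.0 * math.pi * x)
        boundaries = [0.0, 0.25, 0.5, 0.75, 1.0]   # four segments (N = 4)
        p = segment_probabilities(g, boundaries)
        print([round(pi, 4) for pi in p], "sum =", round(sum(p), 4))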

If the disk is spinning at a uniform rate, the finite space is a finite time domain and x may be replaced by t.

The probability-density function, g(x) or g(t), results from the transformation of some other probability-density function, f(t), in the infinite time domain into the finite space. Suppose that in the infinite time domain f(t) is described by the function in Fig. 4. Further suppose that a spinning disk has a constant speed and is phased such that the leading edge of the "0" segment coincides with t_1.

[Fig. 4. Probability-density function, f(t), in the infinite time domain.]

The effect on g(t) of the disk rotating at three different speeds is shown in Fig. 5, where T is the time for one rotation. It should be obvious from these curves that the probability of a selection is influenced by the speed of rotation of the disk relative to the probability-density function f(t) for the particular conditions of the example.

[Fig. 5. The probability-density function, g(t), in the finite time domain obtained from Fig. 4 by rotating the disk at different speeds (shown for three rotation periods T).]

If it is possible to obtain a probability-density function g(t) in the

finite space that is essentially uniform and not influenced by small changes in the speed of the disk, then a satisfactory probability control which will have stable probability settings can be designed. The method of obtaining such a uniform distribution is described in the section on randomness generation. For the present discussion it is assumed that a uniform density function is available in the finite space.

2.3.1 Probability Control for Equal Probabilities. The spinning-disk probability control can be realized electronically in a number of ways. If N equally-likely selections are needed, then a modulus-N counter (i.e., a counter with N states, 1, 2, 3, ..., N) driven by a cyclic pulse generator is one of the best realizations (see Fig. 6).

[Fig. 6. Equally-likely random selector with N possibilities. Block diagram: recurrent pulse generator, question switch, modulus-N counter, and selection readout; the question input opens the switch at a random time to read out the selection. Note: an alternative circuit places the question switch between the counter and the readout circuit.]

2.3.2 Probability Controls for Unequal Probabilities. If a binary selector with an adjustable probability is required, a modulus-N counter driven by a recurrent pulse generator with a flip-flop readout can be used. If the flip-flop is placed in state A at the end of count N (i.e., just preceding count 0) and in state B at the end of count M, then the probability of selection A, P(A), is

    P(A) = M/N    (2.2)

and the probability of selection B, P(B), is

    P(B) = (N − M)/N = 1 − P(A)    (2.3)

assuming, of course, that f(t) is uniform. Note that the smallest increment of probability adjustment is 1/N. See Fig. 7 for a block diagram.

[Fig. 7. Binary random selector with adjustable probability. Block diagram: recurrent pulse generator, question switch, modulus-N counter, probability selector, and flip-flop with outputs A and B.]
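The behavior summarized by Eqs. 2.2 and 2.3 is easy to check by simulation. The sketch below is a hedged illustration: the pulse period, the modulus N, the count M, and the exponential model for the spacing of question times are assumptions made for the example, not specifications from the report. It freezes a modulus-N counter at randomly distributed question times and compares the observed frequency of selection A with M/N.

    # Sketch: binary random selector of Fig. 7 interrogated at random question times.
    # The pulse period, N, M, and the question-time statistics are assumed values.
    import random

    PULSE_PERIOD = 1.0e-3      # seconds between counter pulses (assumed)
    N = 100                    # counter modulus (assumed)
    M = 37                     # flip-flop assumed in state A for counts 0..M-1

    def selection(question_time):
        """State of the flip-flop when the counter is frozen at question_time."""
        count = int(question_time / PULSE_PERIOD) % N
        return "A" if count < M else "B"

    def simulate(trials=200_000, mean_gap=0.35):
        """Ask questions separated by exponentially distributed gaps (assumed model)."""
        t, a_count = 0.0, 0
        for _ in range(trials):
            t += random.expovariate(1.0 / mean_gap)
            if selection(t) == "A":
                a_count += 1
        return a_count / trials

    if __name__ == "__main__":
        print("observed P(A):", round(simulate(), 4), "  predicted M/N:", M / N)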

Continuous probability control may be desired in some applications. A realization for a spinning disk of this type may consist of a cyclic sawtooth generator, a Schmitt-trigger circuit, and a flip-flop (see Fig. 8).

[Fig. 8. Binary random selector with continuously adjustable probability control. Block diagram: recurrent sawtooth generator, Schmitt-trigger circuit with a voltage input to adjust the probability, question input, and flip-flop with outputs A and B.]

The probability of a selection A, P(A), is given by Eq. 2.4, provided that the sawtooth waveform is linear and has a negligible retrace time:

    P(A) = (e_s − E_1)/(E_2 − E_1)    (2.4)

where e_s is the trigger level of the Schmitt circuit (e_s is the parameter used to adjust the probability), E_2 is the peak voltage of the sawtooth, and E_1 is the minimum voltage of the sawtooth.
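Equation 2.4 also gives the control setting needed for a desired probability directly. The brief sketch below (the example voltages are assumed values, not ones given in the report) evaluates Eq. 2.4 for a given trigger level and inverts it to find the trigger level e_s that yields a requested P(A).

    # Sketch: Eq. 2.4 for the linear-sawtooth control of Fig. 8.
    # The example voltages are assumed for illustration.
    def p_of_a(e_s, e1, e2):
        """Probability of selection A for trigger level e_s on a linear sawtooth."""
        if not e1 <= e_s <= e2:
            raise ValueError("trigger level must lie between E1 and E2")
        return (e_s - e1) / (e2 - e1)

    def trigger_for(p, e1, e2):
        """Trigger level e_s that gives the requested probability P(A)."""
        return e1 + p * (e2 - e1)

    if __name__ == "__main__":
        E1, E2 = 0.0, 10.0               # assumed sawtooth minimum and peak (volts)
        print(p_of_a(2.5, E1, E2))       # 0.25
        print(trigger_for(0.8, E1, E2))  # 8.0 volts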

If the desired relationship between probability-control voltage and probability is nonlinear, then the desired result can generally be achieved by modifying the waveform of the sawtooth generator. If the probability range to be controlled by the voltage is small, then the waveform of the sawtooth generator should be modified as shown in Fig. 9.

[Fig. 9. Waveform for voltage-adjustable random selector for small probability range. Labels: E_2, e_s, t_0, t_1, t_2 = T + t_0; the intervals during which state A and state B are on are marked along the time axis.]

The probability of selection A, P(A), for the waveform of Fig. 9 is

    P(A) = (t_1/T)(e_s − E_1)/(E_2 − E_1) + (t_2 − t_1)/T    (2.5)

In general the binary adjustable-probability random selectors can be extended to have more possible selections by means of simple logical circuits. It is possible to combine two or more binary random selectors in logical switching circuits to obtain probabilities smaller or larger than those readily achieved with the above random selectors. When this is done it is necessary that independent random sources be used for the individual binary random selectors.

2.4 Randomness Generation

The randomness generator is responsible for the randomness of a random selector. Three requirements must be met.

1. The randomness source, in conjunction with the question input, must produce an input to the probability control in such a manner that the selections are independent of one another.

2. Similarly, the probability-control input must be such that the selections are independent of their applications.

3. If the probability control does not contain a random source as an integral part, then the probability-control settings should not be influenced by the choice of a randomness generator.

2.4.1 Satisfaction of Randomness Requirement 3. If g(t) is uniform, then requirement 3 is satisfied.

The combination of the question operation and the randomness source will produce some distribution f(t) in the infinite time domain. It is necessary to know whether f(t) produces a uniform distribution for g(t) in the finite time domain. When f(t) is segmented by the probability control, it is transformed to g(t). This technique of segmenting f(t), or for illustration wrapping it around a cylinder, is called the wrap-around effect. (T. G. Birdsall originated this analysis.)

As an illustration of the wrap-around effect consider a cylinder of circumference T with a plot of f(t) to the same time scale around the cylinder as shown in Fig. 10. The sum of all the ordinates around the cylinder is g(t). If f(t) is of the proper character, then g(t) will approach a uniform ordinate. A measure is required to represent the closeness to a uniform distribution. The fractional peak-to-peak error ε is such a measure and is defined as

    ε = [g(t)_max − g(t)_min] / g(t)_avg    (2.6)

Since the area under g(t) = 1, it is readily seen that g(t)_avg = 1/T. For convenience the period T will not be used. Instead, the period or circumference of the cylinder will be described as Kσ, where K is a constant and σ is the standard deviation of f(t). As a further simplification K will be used to represent the period; then comparisons can readily be made between different values of f(t). Thus the defining equation for ε is

    ε = K[g(t)_max − g(t)_min]    (2.7)

The quality factor Q is defined as

    Q = 1/ε    (2.8)

Note that Q approaches infinity as g(t) approaches a uniform distribution.

[Fig. 10. Probability-density curve wrapped around a cylinder. (a) Segmented normal delay distribution function. (b) Cylindrical wrap-around representation. (c) Wrap-around effect for a normal delay distribution function.]

2.4.2 ε and Q for Several Distributions. The maximum value of the fractional peak-to-peak error, ε, is derived for the following distributions:

1. Uniform distribution.
2. Triangular distribution.
3. Exponential distribution.
4. Exponential distribution alternated.
5. Normal distribution.
6. χ² distribution with 2N degrees of freedom.

See Appendix A for the derivations. The results of these derivations are plotted in Fig. 11, with ε as a function of K for σ = 1.
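The quantities of Eqs. 2.6 through 2.8 can also be computed by summing the wrapped density numerically. The sketch below is an illustration under stated assumptions: the choice of an exponential f(t) with unit standard deviation, the sampling grid, and the truncation of the sum are all chosen here for the example. It wraps f(t) around a cylinder of circumference T = K and reports ε and Q; for the exponential the result should approach the closed form ε = K obtained in Appendix A.3.

    # Sketch: wrap-around error epsilon and quality factor Q (Eqs. 2.6-2.8).
    # f(t) is taken to be exponential with sigma = 1 (an assumed test case).
    import math

    def wrapped_density(f, period, t_prime, terms=200):
        """g(t') = sum of f(t' + i*period) over the wrap-around copies."""
        return sum(f(t_prime + i * period) for i in range(terms))

    def epsilon_and_q(f, period, grid=1000, terms=200):
        """epsilon = period * (g_max - g_min), since g_avg = 1/period."""
        samples = [wrapped_density(f, period, period * k / grid, terms)
                   for k in range(grid)]
        eps = period * (max(samples) - min(samples))
        return eps, 1.0 / eps

    if __name__ == "__main__":
        f_exp = lambda t: math.exp(-t) if t >= 0.0 else 0.0   # mu = sigma = 1
        for K in (0.5, 1.0, 2.0):
            eps, q = epsilon_and_q(f_exp, K)
            print(f"K = {K}:  epsilon = {eps:.4f}  (closed form {K}),  Q = {q:.4f}")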

[Fig. 11. Peak-to-peak fractional error of the wrap-around probability-density as a function of the wrap-around factor K. Log-scale plot of ε versus K for 0 ≤ K ≤ 5. Legend: curve 1, uniform distribution; curve 2, triangular distribution; curve 3, exponential distribution; curve 4, exponential distribution alternated; curve 5, normal distribution; curve 6, χ² distribution, f_N(t) = t^{N−1} e^{−t} / (N − 1)!, for N = 10.]

These curves may be somewhat deceptive; therefore particular attention should be given to the following values of σ. For curves 1, 2, 3, 5, and 6, σ is the calculated standard deviation of f(t). For the case of curve 4 it is not easy to calculate σ of f(t). Therefore, in this case the σ calculated is for the exponential before alternation. Actually this is more informative because the improvement resulting from alternating the exponential is seen by comparing curves 3 and 4.

Curve 6 has only one point shown. A great deal of work is required to calculate Δg(t) for this distribution. However, it was deemed desirable to give the reader some idea of the position of this curve since it is a realizable practical distribution. The significance of curve 6 for N = 10 is that it is close to the normal distribution and is readily generated. Note that if N is varied curve 6 will have upper and lower bounds given by curves 3 and 5 when N = 1 and N = ∞. Curve 1 is drawn as a continuous curve. Actually ε goes to zero when λ is equal to an integral multiple of T.

Several general statements can be made regarding the magnitude of ε with respect to the shape of f(t). For a symmetrical distribution ε will be smaller than for a skewed distribution. As the wrap-around duration T is made smaller, ε will decrease.

2.4.3 The Satisfaction of Randomness Requirements 1 and 2. To determine that conditions 1 and 2 are satisfied it is necessary to analyze the randomness source, the probability control, and the universe of discourse relative to one another. Now it will be assumed that the wrap-around probability-density function g(t) is uniform. The method of analysis will be described by means of a typical example. For other applications the reader will have to carefully form his own analysis procedure.

Example: this example is concerned with the design of equipment

for a psychophysical experiment. The universe of discourse consists of (a) individuals that are subjects and called observers, (b) a set of signals, and (c) random selection equipment to select signals to present to the observers. The random selector has as a randomness generator a noise source that produces output pulses with the density function of curve 6, Fig. 11. The original source of these pulses is a counter of pulses generated by the decay of a radioactive element. Every tenth pulse is used as an output pulse. The probability control is driven by a stable oscillator, and K = T/σ is selected so that g(t) is essentially uniform. There is no known causal relationship between the radiation from the radioactive element and the oscillator, and therefore the selections are independent of one another. There is no causal relationship between the noise source and the observer, and therefore the selections are independent of their application. Thus, in this application all three requirements are satisfied.

2.4.4 Determination of Minimum Allowable Time Between Questions. An important practical design factor is the maximum rate of making selections. This is of course related to f(t). If f(t) has a positive tail that extends to infinity, then an allowable probability of failure to make a selection (Fig. 12, crosshatched region), P_F(T_1), must be chosen. Consider the f(t) shown in Fig. 12. Then

    P_F(T_1) = ∫_{T_1}^{∞} f(t) dt    (2.9)

where

    P_F(T_1) = probability of failure to make a selection,
    T_1 = minimum time allowed between question pulses.

[Fig. 12. Probability-density function f(t); the crosshatched region beyond T_1 is the probability of failure to make a selection.]
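If, for example, f(t) has an exponential tail, Eq. 2.9 has the closed form P_F(T_1) = e^{−T_1/μ}, and the minimum spacing follows directly. The sketch below uses that assumed exponential-tail model with illustrative numerical values; neither the model nor the numbers come from the report.

    # Sketch: minimum time between questions (Eq. 2.9) for an assumed exponential tail.
    # For f(t) = (1/mu) * exp(-t/mu), P_F(T1) = exp(-T1/mu), so T1 = -mu * ln(P_F).
    import math

    def min_question_spacing(mu, p_fail):
        """Smallest T1 such that the probability of failing to select by T1 is p_fail."""
        return -mu * math.log(p_fail)

    if __name__ == "__main__":
        mu = 2.0e-3                      # assumed mean of f(t), in seconds
        for p_fail in (1e-3, 1e-6, 1e-9):
            t1 = min_question_spacing(mu, p_fail)
            print(f"P_F = {p_fail:.0e}  ->  T1 = {t1 * 1e3:.2f} ms")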

Generally P_F(T_1) will be chosen to be a very small probability. However, if f(t) has a positive tail that terminates at time T_2, then it is possible to make P_F(T_1) = 0, provided T_1 > T_2. The selection of a value for P_F(T_1) will depend upon the application of the random selector.

2.4.5 Random Selector With Random Source as an Integral Part of the Probability Control. When the circuit design is such that the random source and the probability control cannot be physically separated, then the random selector falls in this category. Randomness conditions 1 and 2 must be satisfied, and condition 3 does not apply.

As an illustration of this type of random selector consider the following example. A Gaussian noise source is full-wave rectified, and this voltage waveform is then used in the circuit of Fig. 8 in place of the recurrent sawtooth generator. Photographs of typical samples of full-wave rectified Gaussian noise are shown in Fig. 13.

[Fig. 13. Full-wave rectified Gaussian noise (photographs of typical samples; time scales of 10 and 500 milliseconds).]

The amplitude density function of this full-wave rectified noise voltage is

    D_e = [2/(√(2π) σ)] e^{−e²/(2σ²)},    0 ≤ e < ∞    (2.10)

where

    D_e = probability density of the full-wave rectified Gaussian noise voltage,
    σ = standard deviation, or RMS noise voltage,
    e = voltage measured from the base line of the noise voltage.

This function and the original unrectified density function are plotted in Fig. 14. The probability of selection B, P(B), is given by Eq. 2.11,

    P(B) = ∫_{e_s}^{∞} [2/(√(2π) σ)] e^{−e²/(2σ²)} de    (2.11)

where

    P(B) = probability of selecting B,
    e_s = Schmitt-circuit trigger level.

[Fig. 14. Density functions for rectified and unrectified Gaussian noise.]

It is important to note two problems with this type of random selector. The probability P(B) is sensitive to (1) changes in the shape of the density function, and (2) changes in the ratio of e_s to σ.
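Equation 2.11 reduces to a complementary error function, P(B) = erfc[e_s/(σ√2)], which makes the sensitivity to the ratio e_s/σ easy to examine. The sketch below is illustrative; the trigger levels and the one-percent drift in σ are assumed values used only to show the size of the effect.

    # Sketch: P(B) for the rectified-Gaussian selector, Eq. 2.11.
    # P(B) = erfc(e_s / (sigma * sqrt(2))); the trigger levels are assumed examples.
    import math

    def p_of_b(e_s, sigma):
        """Probability that the rectified noise exceeds the trigger level e_s."""
        return math.erfc(e_s / (sigma * math.sqrt(2.0)))

    if __name__ == "__main__":
        sigma = 1.0
        for e_s in (0.5, 1.0, 2.0):
            nominal = p_of_b(e_s, sigma)
            drifted = p_of_b(e_s, 1.01 * sigma)    # assumed 1 percent drift in RMS noise
            print(f"e_s/sigma = {e_s:.1f}:  P(B) = {nominal:.4f},"
                  f"  after 1% sigma drift = {drifted:.4f}")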

To obtain the required independence of selections it is necessary that the noise source be independent of the questioning time and independent of the application. To obtain independence among selections the time between question pulses must exceed a minimum time T_3. T_3 is determined by the maximum allowable correlation between selections. The correlation function for the noise is then determined by the impulse-response function of the network which determines the spectral distribution of the Gaussian noise.

2.5 Predictable Binary-Sequence Generators

A predictable binary-sequence generator is a device which produces a sequence of M binary digits in a deterministic manner. If the design and the stored contents or initial conditions of the device are known, then the output is predictable for a person with this information, but may be unpredictable for a person without this information. Examples of possible predictable sequence generators are:

1. Printed random-number tables.
2. Random-number routines for digital computers.
3. Punched paper tape.
4. n-stage shift register with feedback loops.
5. 2^n-modulus counters with a suitable output matrix switch.

If a shift-register generator or similar sequence generator produces a periodic sequence of period 2^n = M, then by proper design it is possible to match the statistics up to the nth order. This means that the probability of a particular subsequence of K digits, for 0 < K ≤ n, is the same as the probability of any other subsequence of K digits. An example is illustrated below:

    Sequence: 011100010111000101110001011100010
    Period 2^n = 8, n = 3
    Probability of a K-tuple = P(K) = 1/2^K for 0 < K ≤ n

By actual count:

    Subsequence    Number of Subsequences    Probability
    0              4                         0.5
    1              4                         0.5
    00             2                         0.25
    01             2                         0.25
    10             2                         0.25
    11             2                         0.25
    000            1                         0.125
    001            1                         0.125
    010            1                         0.125
    011            1                         0.125
    100            1                         0.125
    101            1                         0.125
    110            1                         0.125
    111            1                         0.125
    0000           0                         0.000
    0001           1                         0.125
    0010           1                         0.125
    0011           0                         0.000
    0100           0                         0.000
    0101           1                         0.125
    0110           0                         0.000
    0111           1                         0.125
    1000           1                         0.125
    1001           0                         0.000
    1010           0                         0.000
    1011           1                         0.125
    1100           1                         0.125
    1101           0                         0.000
    1110           1                         0.125
    1111           0                         0.000
    etc.

For n = 3 there are 2^(2^3) = 256 different binary sequences, but there are only 16 ways in which the above sequence can be written and still retain the desired properties. The importance of the sequence generator is that by proper design the statistical properties are like those of the truly random binomial distribution with p = 0.5.
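The count table above can be reproduced mechanically. The following sketch simply tallies overlapping K-tuples of the cyclic sequence quoted in the text and confirms that every K-tuple occurs with probability 1/2^K for K up to n = 3, while the 4-tuples do not.

    # Sketch: K-tuple statistics of the periodic sequence 01110001 (n = 3, period 8).
    from collections import Counter

    PERIOD = "01110001"        # one period of the sequence quoted in the text

    def ktuple_probabilities(period, k):
        """Probability of each overlapping K-tuple in the cyclic sequence."""
        cyclic = period + period[:k - 1]         # wrap across the period boundary
        counts = Counter(cyclic[i:i + k] for i in range(len(period)))
        return {tup: c / len(period) for tup, c in sorted(counts.items())}

    if __name__ == "__main__":
        for k in (1, 2, 3, 4):
            probs = ktuple_probabilities(PERIOD, k)
            uniform = all(abs(p - 0.5 ** k) < 1e-12 for p in probs.values())
            print(f"K = {k}: every K-tuple has P = 1/2^{k}?  {uniform}")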

It should be noted that this statement is true for 0 < K ≤ n but does not hold for K > n. Thus, for applications in which it is desirable to have repeatable experiments or where the statistics must have precise values in the experiment, this type of device may be used as a random selector. As an example, psychologists often want random sequences in which the statistics up through the fourth order are precisely matched. They do not want the fluctuations that result from sample variance, and of course sample variance is present in a truly random sequence.

It is extremely important that the use of the sequence generator for a random selector satisfy the randomness requirements given in Section 2.4. For instance, consider the psychophysical experiment described in Section 2.4.3. If a binary sequence generator is used for the random source and if the sequence complexity is such that the observer cannot remember the sequence, then the output will meet the randomness test. Now, if instead, the observer has an identical sequence generator, then he can predict the sequence, and randomness requirement 2 fails. To ensure independence it is possible to randomly change the code on the sequence generator after every 2^n/2 digits for a sequence of length 2^n.

3. SUMMARY

The basic requirements for randomness have been presented. Techniques for building random selectors have been described. However, no circuit details are given. The actual circuits used in N. P. Psytar are to be presented in a subsequent report (Ref. 3).

APPENDIX A

THE DERIVATION OF ε AND Q FOR SEVERAL TYPES OF DISTRIBUTION

A.1 Derivation of ε and Q for Uniform Distribution

[Fig. A.1. Uniform distribution, f(t) = 1/λ for 0 < t < λ, with the wrap-around period T marked.]

    Let λ = NT + a,    0 < a < T

    μ = λ/2, the mean time of the distribution

    σ = λ/√12

    g(t)_max − g(t)_min = Δg(t) = (N + 1)/λ − N/λ = 1/λ

    For σ = 1:  K = T,  λ = 2√3

    ε = K/λ = K/(2√3)

    Q = 2√3/K

A.2 Derivation of ε and Q for Triangular Distribution

    f(t) = (2/λ)(1 − t/λ),    0 < t < λ

[Fig. A.2. Triangular distribution, with wrap-around period T and total duration NT + a marked.]

    μ = E(t) = ∫_{−∞}^{∞} t f(t) dt

    σ² = E[(t − μ)²] = ∫_{−∞}^{∞} t² f(t) dt − μ²

    Let λ = NT + a. Then

    μ = λ/3

    σ² = λ²/18

    Assume a = 0:

    f(t) = (2/NT)(1 − t/NT),    0 < t < NT

         = (2/NT)[1 − (iT + t')/NT],    0 < t' < T,  i = 0, 1, 2, ..., N−1

    g(t) = Σ_{i=0}^{N−1} f(t)

    g(t) = (2/NT)[(N + 1)/2 − t'/T],    0 < t' < T

    g(t)_max − g(t)_min = Δg(t) = 2/NT = 2/λ

    For σ = 1:  λ = 3√2,  K = T

    ε = 2K/λ = √2 K/3

    Q = 3/(√2 K) = 3√2/(2K)

Using this same procedure, it is possible to evaluate ε or Q for the symmetrical triangular distribution.

A.3 Derivation of ε and Q for Exponential Distribution

[Fig. A.3. Exponential distribution, with wrap-around period T marked.]

    f(t) = (1/μ) e^{−t/μ}

    Note that σ = μ.

    g(t) = Σ_{i=0}^{∞} (1/μ) e^{−(iT + t')/μ} = (1/μ) e^{−t'/μ} / (1 − e^{−T/μ}),    0 < t' < T

    g(t)_max − g(t)_min = Δg(t) = (1/μ)(1 − e^{−T/μ}) / (1 − e^{−T/μ}) = 1/μ

    For σ = μ = 1 and K = T:

    ε = K

    Q = 1/K

A.4 Derivation of ε and Q for Exponential Distribution Alternated

[Fig. A.4. Exponential distribution alternated, with wrap-around period T marked.]

    f(t) = (1/μ) e^{−(2iT + t')/μ},    0 < t' < T,  t = 2iT + t',  i = 0, 1, 2, ...

    f(t) = (1/μ) e^{−[(i + 1)2T − t']/μ},    0 < t' < T,  t = (i + 1)2T − t',  i = 0, 1, 2, ...

where μ is the mean and σ² is the variance of the nonalternated exponential distribution.

    g(t) = (1/μ) Σ_{i=0}^{∞} e^{−2iT/μ} [e^{−t'/μ} + e^{−(2T − t')/μ}],    0 < t' < T

         = (1/μ) [e^{−t'/μ} + e^{−(2T − t')/μ}] / (1 − e^{−2T/μ})

         = (1/μ) cosh[(T − t')/μ] / sinh(T/μ)

    g(t)_max − g(t)_min = Δg(t) = (1/μ) [1/tanh(T/μ) − 1/sinh(T/μ)]

    Let μ = 1, σ = 1, K = T:

    ε = K [1/tanh K − 1/sinh K] = K (cosh K − 1)/sinh K

or

    Q = sinh K / [K (cosh K − 1)]

For small K (neglecting third- and higher-order factors) we obtain

    ε ≈ K²/2

A.5 Derivation of ε and Q for Normal Distribution

[Fig. A.5. Normal distribution, with the wrap-around segment boundaries at −T/2 and T/2 marked.]

    f(t) = [1/(√(2π) σ)] e^{−t²/(2σ²)}

    Δg(t) = [1/(√(2π) σ)] {1 + 2 Σ_{i=1}^{∞} [e^{−(iT)²/(2σ²)} − e^{−[(i − 1/2)T]²/(2σ²)}]}

    For σ = 1 and K = T:

    ε = [K/√(2π)] {1 + 2 Σ_{i=1}^{∞} [e^{−(iK)²/2} − e^{−[(i − 1/2)K]²/2}]}

This numerical evaluation is most easily obtained by means of tables.
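Where the text points to tables, the series above can also be summed directly. The sketch below truncates the series after a fixed number of terms (an assumption; a handful of terms suffices because the Gaussian terms decay rapidly) and evaluates ε and Q for the wrapped normal distribution with σ = 1, corresponding to curve 5 of Fig. 11.

    # Sketch: epsilon for the wrapped normal distribution (Appendix A.5), sigma = 1.
    # epsilon = (K/sqrt(2*pi)) * (1 + 2*sum_i [exp(-(i*K)^2/2) - exp(-((i - 1/2)*K)^2/2)])
    import math

    def epsilon_normal(K, terms=50):
        """Series of Appendix A.5 truncated after `terms` terms (assumed sufficient)."""
        s = sum(math.exp(-(i * K) ** 2 / 2.0) - math.exp(-((i - 0.5) * K) ** 2 / 2.0)
                for i in range(1, terms + 1))
        return K / math.sqrt(2.0 * math.pi) * (1.0 + 2.0 * s)

    if __name__ == "__main__":
        for K in (1.0, 2.0, 3.0, 4.0):
            eps = epsilon_normal(K)
            print(f"K = {K}:  epsilon = {eps:.3e},  Q = {1.0 / eps:.3e}")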

A.6 Derivation of ε and Q for χ² Distribution with 2N Degrees of Freedom

[Fig. A.6. χ² distribution with 2N degrees of freedom, with wrap-around period T marked.]

    f(t) = t^{N−1} e^{−t} / (N − 1)!

    μ = N,    σ² = N

Δg(t) is not easy to express for all N and is also difficult to calculate. As N → ∞, g(t) approaches the normal distribution. The curve of ε as a function of K is not so good as that of the normal distribution.

Note: When N = 1, f(t) is the exponential distribution. If numerical calculation of ε is desired for small ε, then a digital computer should be used. ε as a function of K (σ = 1) is plotted for each of the above wrap-around distributions in the text in Fig. 11.

REFERENCES

1. G. A. Roberts, "Spinning Disk Random Selectors," Cooley Electronics Laboratory Technical Memorandum No. 39, The University of Michigan, Ann Arbor, April 1957.

2. R. R. McPherson, "Use of Hewlett-Packard Model 522B Electronic Counter as a 'Spinning Disk' Random-Number Selector," Cooley Electronics Laboratory Technical Memorandum No. 29, The University of Michigan, Ann Arbor, July 1956.

3. G. A. Roberts, "N. P. Psytar: Noise Programmed Psychophysical Tester and Recorder," Cooley Electronics Laboratory Technical Memorandum No. 86, The University of Michigan, Ann Arbor, March 1961.


