LOGIC OF COMPUTERS GROUP
Department of Computer and Communication Sciences
2080 Frieze Building
The University of Michigan
Ann Arbor, Michigan 48104

THE UNIVERSITY OF MICHIGAN
COLLEGE OF LITERATURE, SCIENCE, AND THE ARTS
Computer and Communication Sciences Department

ANALYSIS OF A BASE MODEL AND A LUMPED MODEL OF A NETWORK OF NEURONS

by

Aggarwal, S., Bethke, A.D., and Zeigler, B.P.

January 1975

Technical Report No. 165

with assistance from
National Science Foundation Grant No. DCR71-01997

Computer simulation models are increasingly being used to study complex systems that have a large number of components and many interrelationships. In general, such models are largely "lumped versions" of a more complicated hypothesized "base model". Zeigler [12] provided the conceptual foundation for this viewpoint of computer simulation by considering the triple of a base model, which captures all structural beliefs about the real system; the lumped model, which is the "simplified version" of the base model and is the one actually simulated; and the computer on which the simulation is to be performed.

In this paper, we investigate the relationship between a base model and a lumped model. The base model is a probabilistic model of a network of neurons. The lumping procedure yields a lumped model that is deterministic. In the limiting case of a base model that has an infinite number of neurons and neighbors per block, this lumping procedure yields an exact structure morphism between the base model and the lumped model. When we consider only a finite number of neurons and neighbors per block, the structure morphism becomes only approximate. In order to investigate to what extent base model behavior could still be predicted by the lumped model, a computer simulation of the two models was implemented. The aim was to show that the lumped model retained a great deal of information about base model behavior. Since the base model was probabilistic, it is clear that this information would be of a statistical nature. Furthermore, behavior on an identical time scale would be highly unlikely. We would hope, however, to be able to predict patterns of long term behavior and perhaps make accurate short term predictions given the base model state.

Investigations similar to ours are being carried out by a group at Syracuse University. Harth et al. [6], Wong and Harth [10], Anninos et al. [1], and Csermely et al. [4] have explored cooperative

phenomena in neural networks and have proposed an approach they call netlet theory. Netlets are simply collections of neurons that have statistically identical properties and are connected in a uniform (though probabilistic) manner. Although the netlet model is a deterministic one, the assumptions involved in the way neurons are interconnected allow one to probabilistically predict the firing level at time t+1 given the firing activity of the net at time t. The approach adopted by this group is a detailed analysis of the netlet dynamics using the strong regularity conditions. It has many similarities to our work in that we also attempt a detailed analysis of a block of neurons. However, ours is a probabilistic base model, there are differences in the regularity conditions, and we study a lumped model with a view to predicting the base model. A detailed description of our base model and lumped model pair follows.

Base Model

Components

NEURONS -- elements modeling basic behavior assumed to be characteristic of real brain cells.

INPUT-WIRES -- sources of external net excitation.

Descriptive Variables

For each NEURON:

RECOVERY-STATE -- with range the non-negative integers; RECOVERY-STATE = i indicates that i time units have elapsed since the NEURON last fired; thus i = 0 means the NEURON is now firing.

NOISE -- with range the real numbers; when NOISE = r the actual threshold for firing of the NEURON is

THRESHOLD(i) + r, where THRESHOLD(i) is the threshold value characteristic of RECOVERY-STATE i. NOISE is a random variable generated for each NEURON.

STRENGTH -- with range the real numbers; STRENGTH = x indicates that the sum total of all NEURON inputs is x.

FIRING-STATE -- with range {0,1}; 1 means the NEURON is firing (emitting a pulse), 0 that it is not firing.

For each INPUT-WIRE:

INPUT-LEVEL -- with range the real numbers; INPUT-LEVEL = x indicates that the excitation level on the INPUT-WIRE is x (measured in the same units as STRENGTH).

For each BLOCK (designated set of NEURONS) and for each RECOVERY-STATE i:

%-IN-STATE-i -- with range the rationals in [0,100]; the percentage of NEURONS in the BLOCK in RECOVERY-STATE i.

PARAMETERS:

For each BLOCK:

NOISE-DISTRIBUTION -- the probability distribution for the random variable NOISE.

THRESHOLD -- a real valued function with RECOVERY-STATE as argument, giving the minimum strength needed to fire a NEURON in the absence of NOISE.

For each NEURON:

NEURON-NEIGHBORS -- with range the subsets of NEURONS; indicates the NEURONS whose outputs impinge on the NEURON.

EXTERNAL-NEIGHBORS -- with range the subsets of INPUT-WIRES; indicates the external inputs impinging on the NEURON.

and for each NEIGHBOR (NEURON or EXTERNAL) of the NEURON:

SYNAPS -- with range the reals; determines the influence that a NEIGHBOR has on the NEURON: positive indicates facilitatory, negative indicates inhibitory.

Component Interaction

1. Each NEURON receives inputs from other NEURONS called NEURON-NEIGHBORS as well as from external sources called EXTERNAL-NEIGHBORS. We just refer to NEIGHBORS when the context is clear.

2. The NEURONS are grouped into disjoint sets called BLOCKS. This partitioning of the NEURONS places the following restrictions on the possible NEURON-NEIGHBORS of a NEURON. Suppose that a NEURON a belongs to BLOCK i.

a) If a has some NEIGHBORS in BLOCK j (which may equal i), then every NEURON in BLOCK i must have at least one NEIGHBOR in BLOCK j.

b) The SYNAPS weight is the same for the outputs of all NEIGHBORS of a from the same BLOCK. The NEIGHBORS of a coming from BLOCK j all have the same "influence" on it, but this may be different from that of the neighbors of a coming from BLOCK k (k ≠ j).

c) BLOCK j exerts the same "influence" on each NEURON in BLOCK i. We measure the "influence" of BLOCK j on a by the product of the number of a's NEIGHBORS from BLOCK j and their common SYNAPS value. This product is to take the same value for all NEURONS in BLOCK i, so that the "influence" of BLOCK j on each NEURON in BLOCK i is the same.

d) The characteristics of all NEURONS in a BLOCK, namely the THRESHOLD function and the NOISE probability distribution, are the same.

e) The EXTERNAL-NEIGHBORS and their SYNAPS weights are the same for all NEURONS in a BLOCK.

3. Each NEURON operates at each time step as follows:

a) The STRENGTH of the input volley is computed by adding together the outputs (which may be 0 or 1) of all the EXTERNAL-NEIGHBORS and NEURON-NEIGHBORS of the NEURON, weighted by their respective SYNAPS values.

b) If this STRENGTH is great enough, the NEURON will fire, i.e., will put out a 1 for the next time step; otherwise it will put out a 0. The 1/0 output can be thought of as a pulse/no pulse sent out by the NEURON, with only the pulse being able to influence the activities of another NEURON.

c) The minimum STRENGTH required to fire depends on the RECOVERY-STATE of the NEURON and on NOISE. Generally, the longer it has been since the NEURON last fired, the easier it is to fire; indeed, if it has just fired it may be impossible to fire for some time (this period is called the absolute refractory period). This fact is formalized in the shape of the THRESHOLD function. This function determines for each RECOVERY-STATE value i (time since last fired) a value THRESHOLD(i), which can be thought of as the minimum strength required if there were no noise. It is also assumed that the NEURON is perturbed by additive noise independently generated at each step for each NEURON from the NOISE probability distribution. Thus if the NEURON is now in RECOVERY-STATE i, the minimum STRENGTH required to fire it is THRESHOLD(i) plus the sampled

value of NOISE.

d) If the NEURON fires, its RECOVERY-STATE is set to 0; otherwise it is incremented by 1. Thus, as implied, it records the time elapsed since the last firing of this NEURON.

Lumped Model

Components

BLOCK'S -- each BLOCK' represents in lumped form the behavior of a corresponding BLOCK of base model NEURONS. (The prime indicates the correspondence.)

INPUT-WIRE'S -- each INPUT-WIRE' of the lumped model represents a corresponding INPUT-WIRE of the base model.

Descriptive Variables

For each BLOCK':

For each i = 0,1,2,...:

%-IN-STATE-i' -- with range [0,100]; %-IN-STATE-i' = q predicts that q/100 of the base model NEURONS in the BLOCK are in RECOVERY-STATE i. Thus %-IN-STATE-i' of BLOCK' corresponds to %-IN-STATE-i of BLOCK. We use the term PROBABILITY-IN-STATE-i interchangeably with %-IN-STATE-i'.

STRENGTH' -- with range the real numbers; STRENGTH' = x' indicates that the sum total of inputs to BLOCK' is x'.

For each INPUT-WIRE':

INPUT-LEVEL' -- with range the real numbers.

PARAMETERS:

For each BLOCK':

BLOCK'-NEIGHBORS -- with range the subsets of BLOCK'S; specifies the BLOCK'S which send input to the BLOCK'.

EXTERNAL'-NEIGHBORS -- with range the subsets of INPUT-WIRE'S.

and for each NEIGHBOR (BLOCK' or EXTERNAL'):

WEIGHT' -- with range the real numbers; determines the influence of the NEIGHBOR on the BLOCK'.

NOISE-DISTRIBUTION' -- a cumulative distribution function (c.d.f.) (will be the c.d.f. of NOISE).

THRESHOLD' -- a function from the non-negative integers to the reals (to be identical with THRESHOLD).

Component Interaction

1. Each BLOCK' receives input from other BLOCK'S (called BLOCK'-NEIGHBORS) and external sources (called EXTERNAL'-NEIGHBORS).

2. Each BLOCK' at each time step operates as follows:

a) The STRENGTH' of the input to the BLOCK' is computed as the weighted sum of all NEIGHBOR (BLOCK' and EXTERNAL') outputs.

b) Using this STRENGTH' and the current values of the %-IN-STATE-i' (one for each RECOVERY-STATE i), the next values of the %-IN-STATE-i' variables are computed. For each i, the current value of %-IN-STATE-i' is multiplied by 1 - F(x - THRESHOLD'(i)) to obtain the new value of %-IN-STATE-i+1'. Here, F is the NOISE-DISTRIBUTION' (i.e., the cumulative distribution function), and x is the value of STRENGTH'. This accounts for the fraction of base model NEURONS in RECOVERY-STATE i which have not fired (hence go to i+1); the remainder, then, are those that do fire (hence go to RECOVERY-STATE 0), and the lumped model sums all such contributions to obtain the new %-IN-STATE-0'. We will discuss the derivation of this lumped transition rule after we define the correspondence between the base and lumped structures.

c) The output of BLOCK' is just %-IN-STATE-0', which, if the lumped model is valid, is the percentage of NEURONS in the BLOCK which are firing, i.e., outputting a pulse (1). (The rest are in the various RECOVERY-STATES and, since they output a 0, will not influence any activity at the next time step.)

Correspondences between lumped and base model

Components

Each BLOCK' (lumped model component) corresponds to a BLOCK (partition class) of NEURONS (base model components).

Descriptive Variables

For each BLOCK':

For each i = 0,1,2,...:

%-IN-STATE-i' = %-IN-STATE-i

STRENGTH' = STRENGTH of any NEURON in the corresponding BLOCK (this STRENGTH is the same for each NEURON in a BLOCK by base model postulate 2).

PARAMETERS:

INPUT-WIRE'S = INPUT-WIRES

For each BLOCK':

BLOCK'-NEIGHBORS = the set of BLOCK'S whose corresponding BLOCKS influence NEURONS in the BLOCK (corresponding to BLOCK').

EXTERNAL'-NEIGHBORS = EXTERNAL-NEIGHBORS of any NEURON in the BLOCK (a BLOCK constant according to postulate 2).

THRESHOLD' = THRESHOLD

NOISE-DISTRIBUTION' = NOISE-DISTRIBUTION for any NEURON in the BLOCK (postulate 2, again).

and for each NEIGHBOR (BLOCK' or EXTERNAL'):

WEIGHT' = the product of the number of corresponding NEIGHBORS and their common SYNAPS value (postulate 2) for any NEURON in the BLOCK.

For each INPUT-WIRE':

INPUT-LEVEL' = INPUT-LEVEL

A structure morphism is a mapping from the state set of the base model onto the state set of the lumped model that preserves the transition structure of the models when they are viewed as automata. That is, if δ is the transition function of the base model, and δ' is the transition function of the lumped model, then h∘δ = δ'∘h, where h is the structure morphism. It can be shown [13] that if the recovery states of the neurons in the base model are independent and identically distributed initially, then in the limiting case of an infinite number of neurons and neighbors per block, there exists a natural structure morphism from the base model to the lumped model. (If the state set of the base model is the cross product of the recovery states of each neuron, then %-IN-STATE-i is a function of this state set. Mapping %-IN-STATE-i to %-IN-STATE-i' defines h.) Intuitively, this is because the expected value of the input to each neuron is the same (per block, because of an infinite number of neighbors) and the noise distribution will be exactly as NOISE-DISTRIBUTION (because of an infinite number of neurons).

We shall now derive the form of lumped component interaction. For each NEURON, the following conditional distribution can be computed:

P_{i,i+1}(x) = the probability that the NEURON goes next to RECOVERY-STATE i+1 (does not fire) given its present RECOVERY-STATE = i and its input STRENGTH = x.

Now, if F(z) is the NOISE-DISTRIBUTION, then

P_{i,i+1}(x) = Prob{x < THRESHOLD(i) + Z} = 1 - F(x - THRESHOLD(i))

Also, since a NEURON can only go to RECOVERY-STATE 0 if it does not go to i+1, we have

P_{i,0}(x) = 1 - P_{i,i+1}(x) = F(x - THRESHOLD(i))

This conditional distribution can be presented in the form of a Markov matrix (suppressing the input x = x(t), which depends on the time step):

           | p_00  1-p_00    0       0    ... |
           | p_10    0     1-p_10    0    ... |
    P(x) = | p_20    0       0     1-p_20 ... |
           |  ...                             |

Let P(t) = (P_0(t), P_1(t), ...) be the vector of probabilities that a NEURON is in state i at time t. Then we have

P(t+1) = P(t) P(x(t))

If we define the lumped model transition behavior as

%-IN-STATE'(t+1) = %-IN-STATE'(t) P(x(t))

then we have the structure morphism in the limiting infinite case. We shall use this same transition function for the lumped model when we investigate the behavior of the base model with a finite number of neurons and neighbors per block. Of course, our mapping from the base model to the lumped model is no longer a structure morphism but is an "approximate" structure morphism.
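To make the transition rule concrete, a minimal sketch for a single block follows, written in modern free-form Fortran rather than the IBM 1800 Fortran of Appendix 1; the subroutine name lumped_step is ours, and a Gaussian NOISE-DISTRIBUTION is assumed so that F can be evaluated with the erf intrinsic of Fortran 2008.

      ! Sketch of  %-IN-STATE'(t+1) = %-IN-STATE'(t) P(x(t))  for one block.
      ! s(i) holds the fraction of the block in recovery state i.
      subroutine lumped_step(s, nstat, x, threshold, sigma)
        integer, intent(in)    :: nstat
        real,    intent(inout) :: s(0:nstat-1)
        real,    intent(in)    :: x                    ! STRENGTH'
        real,    intent(in)    :: threshold(0:nstat-1) ! THRESHOLD'(i)
        real,    intent(in)    :: sigma                ! std. dev. of NOISE
        real    :: snew(0:nstat-1), pfire
        integer :: i
        snew = 0.0
        do i = 0, nstat-1
          ! F(x - THRESHOLD'(i)): fraction of state-i neurons that fire
          pfire = 0.5*(1.0 + erf((x - threshold(i))/(sigma*sqrt(2.0))))
          snew(0) = snew(0) + pfire*s(i)               ! firers go to state 0
          if (i < nstat-1) then
            snew(i+1) = snew(i+1) + (1.0 - pfire)*s(i) ! non-firers age by one
          else
            snew(i) = snew(i) + (1.0 - pfire)*s(i)     ! top state holds them
          end if
        end do
        s = snew
      end subroutine lumped_step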

To investigate to what extent the lumped model still predicts base model behavior under this "approximate" structure morphism, we carried out a computer simulation of the two models. The simulation consisted of a package of Fortran programs that allowed great flexibility in specifying connection patterns, initial states and parameter settings. Certain statistics were automatically compiled. More information on the simulation programs can be found in Appendix 1.

In the following, we will consistently use "run" to mean a single simulation of the base or lumped model with a fixed set of parameters and random number seed. An "experiment" will refer to several runs with identical parameters but different random number seeds. A set of related experiments will be called a "series of experiments" or simply a "series".

Two restrictions on the base model were imposed in order to simplify the computer implementation.

(1) There is a maximum of 20 recovery states possible. A neuron in the highest recovery state that does not fire remains in the same recovery state. In our test runs we used only recovery states 0-6.

(2) For all i,j, all neurons in block i have the same number of neighbors in block j with equal synapse weights, and these neighbors are randomly chosen from the neurons in block j.

Since the EXTERNAL-NEIGHBORS and their SYNAPS weights are the same for all neurons in a BLOCK, we will use the term "external input" for the external influence.
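For comparison, the base model update of postulate 3, under restriction (1), reduces for a single neuron to the following sketch (same modern Fortran; neuron_step is an illustrative name, and the caller is assumed to have summed the weighted input volley and sampled a noise value):

      ! One time step for one base model neuron.
      subroutine neuron_step(recov, volley, noise, threshold, nstat)
        integer, intent(in)    :: nstat
        integer, intent(inout) :: recov                ! RECOVERY-STATE
        real,    intent(in)    :: volley               ! weighted inputs + external
        real,    intent(in)    :: noise                ! sampled from NOISE-DISTRIBUTION
        real,    intent(in)    :: threshold(0:nstat-1)
        if (volley >= threshold(recov) + noise) then
          recov = 0                                    ! fires: emits a 1
        else
          recov = min(recov + 1, nstat - 1)            ! ages, capped at top state
        end if
      end subroutine neuron_step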

First Series of Experiments

Connection Pattern

The base model consisted of a single block with no connections between neurons in the block. We varied the size of the block using 25, 50, 75, 100, 500 and 1000 neurons.

Parameter Settings

1. Background: 0
2. Noise: Gaussian, with mean 0 and standard deviations of 10, 20 and 40.
3. Threshold: exponential decay 27e^-s, where s is the recovery state.

Initial State

All neurons were initially in recovery state 6.

Input Segment

Input levels of -100, -80, ..., 80, 100 were tried. The length of the input segment was sufficient to allow a "steady state" to be reached.

Not all possible combinations of experiments were conducted. The following table shows the experiments actually done (x's).

                     Size
    St. Dev.   25   50   75   100   500   1000
      10        x              x
      20        x    x    x    x     x      x
      40        x

In this series of experiments, it is clear that if there were an infinite number of neurons in a block, we would have a structure morphism from the base model to the lumped model. Since there are no neighbors,

each neuron receives the same total input. The only difference is that each neuron samples independently from a noise distribution. Let us assume that there is an exact agreement between the probability vector of recovery states in the lumped model and the percent-in-state vector of the base model at time t. Then, at time t+1, the actual percent-in-state vector of the base model can be considered a sample from the distribution predicted by the lumped model. Whether or not the observed differences between the actual percent-in-state vectors and the predicted percent-in-state vector are reasonable can be tested using the chi-square statistic. In effect, we would be testing the random number generator. Such a chi-square test would surely give good results if the lumped model were reset at each time step. However, the aim of our investigations is to show that the lumped model can be used to estimate behavior of the base model without, of course, knowing the base model behavior. Resetting the lumped model correctly would clearly be possible only if base model behavior is known. Without resetting the lumped model, error propagation becomes an added consideration and is discussed later.

Before giving the results of the chi-square tests, it will be instructive to examine the behavior of the lumped model. Because there are no connections between blocks, for a fixed level of external input the lumped model is actually a homogeneous Markov chain with the following transition matrix:

        | p_00  1-p_00    0       0       0       0       0    |
        | p_10    0     1-p_10    0       0       0       0    |
        | p_20    0       0     1-p_20    0       0       0    |
    P = | p_30    0       0       0     1-p_30    0       0    |
        | p_40    0       0       0       0     1-p_40    0    |
        | p_50    0       0       0       0       0     1-p_50 |
        | p_60    0       0       0       0       0     1-p_60 |

Let x be the external input and σ the standard deviation of the noise; then p_i0 = F(x - 27e^-i), where F is a cumulative distribution function (c.d.f.) that is normal with mean 0 and standard deviation σ (N(0,σ)). We note that the finite transition matrix above is irreducible, aperiodic, and positive recurrent (see Feller, An Introduction to Probability Theory and Its Applications, Chapter XV). Thus, the Markov chain possesses a long run distribution. That is, regardless of the initial probability vector of recovery states, the lumped model will approach the same steady state distribution in the limit. The limiting distributions for various input levels are shown in Figures 1 and 2.
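Because the chain is homogeneous for a fixed input level, its long run distribution can be found by simply iterating the transition rule; a minimal sketch in modern free-form Fortran (the input level x = 20 and deviation 20 are illustrative choices; erf is the Fortran 2008 intrinsic):

      ! Iterate s <- s P for the 7-state chain with p_i0 = F(x - 27 exp(-i)).
      program steady
        implicit none
        real    :: s(0:6), snew(0:6), p0(0:6), x, sigma
        integer :: i, it
        x = 20.0
        sigma = 20.0
        do i = 0, 6
          p0(i) = 0.5*(1.0 + erf((x - 27.0*exp(-real(i)))/(sigma*sqrt(2.0))))
        end do
        s = 0.0
        s(6) = 1.0                      ! all neurons start in recovery state 6
        do it = 1, 1000
          snew = 0.0
          snew(0) = sum(p0*s)           ! every firer returns to state 0
          do i = 0, 5
            snew(i+1) = snew(i+1) + (1.0 - p0(i))*s(i)
          end do
          snew(6) = snew(6) + (1.0 - p0(6))*s(6)
          s = snew
        end do
        print *, s                      ! the limiting distribution
      end program steady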

The behavior of the base model was qualitatively similar to that of the lumped model, although the base model percentages tended to fluctuate about a "steady state". A quantitative analysis was obtained using the standard chi-square test. For each time step we calculated a chi-square value assuming that the base model percentages were a sample from the theoretical distribution represented by the lumped model probability vector. If the expected number of neurons in a particular recovery state was less than five, then this recovery state was aggregated with another recovery state. Thus, the number of degrees of freedom ranged from 0 to 6. For a given number of degrees of freedom R, the chi-square statistics obtained should be χ²_R distributed. Figures 3 and 4 show the distributions that were actually obtained for each experimental run. The values do in fact appear to be χ² distributed. A second level chi-square analysis was conducted assuming the hypothesis that we were sampling from a chi-square distribution. Figure 5 shows the results of this analysis.

One would not expect the chi-square tests between the lumped and base models to be extremely good because of the problem of error propagation. The chi-square test assumes that sampling is independent from time step to time step. Thus, if at each time step we were to reset the lumped model, we would expect good chi-square results. One way to eliminate most of the correlation from one time step to the next due to not resetting the lumped model is not to sample at every time step. Rather, one might sample, say, at every fifth time step. We did an analysis sampling at every fifth time step for the case of 100 neurons with a standard deviation of 20. The results obtained were extremely good and are also shown in Figures 3 and 5.

We shall now consider the question of error propagation. Zeigler [13] introduced the important idea of considering the error at a time step as composed of two parts: (1) due to sampling error and (2) due to not resetting the lumped model. The second component of the error can be analyzed solely in terms of lumped model behavior. Figure 6 is a schematic representation of error propagation. Because of the structure of the approximate homomorphism h, the errors ε_1 and ε_2 are due to sampling; the additional error kε_1 is due to not resetting the lumped model. To investigate this additional error, we will describe what happens to two states of the lumped model that differ by a distance ε.

Assume that r and s are two different probability vectors (states) of the lumped model. That is, r = (r_0,...,r_6) and s = (s_0,...,s_6). Let distance be measured by the metric d defined as

d<r,s> = max_{0<=i<=6} |r_i - s_i|

Let a = r - s. Then a will be called the error vector. Notice that Σ_{i=0}^{6} a_i = 0. Now let

d<rP,sP> = max_j |(aP)_j|, where

aP = ( Σ_{i=0}^{6} a_i p_i0,  a_0(1-p_00),  a_1(1-p_10),  a_2(1-p_20),  a_3(1-p_30),  a_4(1-p_40),  a_5(1-p_50) + a_6(1-p_60) )

Now, if there exists k < 1 such that for all pairs of lumped model states r,s,

d<rP,sP> <= k d<r,s>

then the error propagation is ultimately bounded (Zeigler [13]). Unfortunately, a simple example shows that such a condition cannot be guaranteed. Consider the two probability vectors r = (.8, 0, 0, 0, 0, .1, .1) and s = (.9, .1, 0, 0, 0, 0, 0). Let the external input be -20 and the standard deviation of the noise be 20. Then

p_00 = .01, p_10 = .07, p_20 = .12, p_30 = .14, p_40 = .15, p_50 = .16, p_60 = .16

a = (-.1, -.1, 0, 0, 0, .1, .1)

aP = (.024, -.099, -.093, 0, 0, 0, .168)

Thus d<r,s> = .1 but d<rP,sP> = .168. In one transition, then, we cannot be assured that two lumped model states will converge.

An interesting situation is the case of only three different recovery states. Such a situation arises quite naturally if one considers, for example, firing, absolutely refractory, and quiescent as the three recovery states. It can easily be shown that if

        | p_00  p_01  p_02 |
    P = | p_10  p_11  p_12 |
        | p_20  p_21  p_22 |

with 0 < p_ij < 1, i,j = 0,1,2, there exists a k < 1 such that d<rP,sP> <= k d<r,s> for all vectors r,s. That is, in one transition the error will be reduced. For a matrix of the type that we have been considering,

        | p_00  1-p_00    0    |
    P = | p_10    0     1-p_10 |
        | p_20    0     1-p_20 |

this still holds.
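The 7-state counterexample above is easy to verify numerically; a small sketch (modern Fortran; the program name errprop is illustrative):

      ! Check the error-propagation example at input -20, sigma = 20.
      program errprop
        implicit none
        real    :: a(0:6), p0(0:6), ap(0:6)
        integer :: i
        a  = (/ -0.1, -0.1, 0.0, 0.0, 0.0, 0.1, 0.1 /)      ! a = r - s
        p0 = (/ 0.01, 0.07, 0.12, 0.14, 0.15, 0.16, 0.16 /) ! the p_i0 values
        ap(0) = sum(a*p0)                    ! all firers collect in state 0
        do i = 0, 5
          ap(i+1) = a(i)*(1.0 - p0(i))       ! non-firers shift up one state
        end do
        ap(6) = ap(6) + a(6)*(1.0 - p0(6))   ! the top state also holds its own
        print *, maxval(abs(a)), maxval(abs(ap))   ! prints .1 and .168
      end program errprop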

This first series of tests has shown that analyzing the behavior of the lumped model seems to be a reasonable way to study expected base model behavior. We now turn to another series of experiments, one with positive feedback between neurons.

Second Series of Experiments

Connection Pattern

The base model had 1 block. The number of neurons ranged from 50 to 1000 and the number of neighbors varied from 20 to 100 [see below]. The total input weight [synapse weight x number of neighbors] was held at a constant value of +100.

Parameter Settings

1. Background: -20
2. Noise: Gaussian with mean 0 and standard deviation 10.
3. Threshold: 200e^-s

Initial State

All neurons start in state 0.

Input Segment

The external input was 0.

Experiments

                       NBRS
    Size      100   80   60   50   30   20
      50                        x    x
     100             x    x    x
     200                        x
     400       x          x    x    x
     800       x     x
    1000                       x    x    x

The purpose of this series was to analyze the effect of feedback within a block of neurons. We wish to show that the lumped model is still capable of explaining the behavior of the base model.

The first step is to explain the observed behavior of the lumped model. In our experiments, we used a starting state of all neurons in recovery state 0 (i.e., all firing). The subsequent behavior of the lumped model is shown in Figure 7. As can be seen, the lumped model firing level very quickly reaches a constant value, and in fact from the output we observed that the lumped model reaches a steady state. Experimentally we tried other initial starting states, but in all cases we reached the same long-term steady state. In an attempt to prove that there would only be one steady state, we looked at the lumped model as a non-homogeneous Markov chain.

The following equations define a non-homogeneous Markov chain, and follow immediately from the transition rule defining the lumped model. Let S(t) be the row vector representing the state at time t; thus S(0) is the initial state. Let P(t) = [p_ij(t)] be the single step transition matrix. Then,

(1) S(t) = S(t-1) P(t-1), i.e., S_k(t) = Σ_{i=0}^{6} S_i(t-1) p_ik(t-1)

(2) x(t) = K S_0(t), where K is the total feedback (call x(t) the input at time t)

(3) p_i0(t) = F(x(t) + B - θ_i) = F_i(x(t)), where B is the background level, θ_i is the threshold in recovery state i, and F is the c.d.f. associated with the noise. (Note: we use N(0,σ) for F.)
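Equations (1)-(3) can be iterated directly to observe the approach to a steady state; a minimal sketch (modern Fortran, with the parameters of this series, K = 100, B = -20, sigma = 10, theta_i = 200e^-i; erf is the Fortran 2008 intrinsic):

      ! Iterate the feedback equations: x(t) = K*S_0(t), p_i0 = F(x + B - theta_i).
      program feedback
        implicit none
        real, parameter :: k = 100.0, b = -20.0, sigma = 10.0
        real    :: s(0:6), snew(0:6), p0(0:6), x
        integer :: i, t
        s = 0.0
        s(0) = 1.0                      ! all neurons initially firing
        do t = 1, 200
          x = k*s(0)                    ! total feedback input
          do i = 0, 6
            p0(i) = 0.5*(1.0 + erf((x + b - 200.0*exp(-real(i)))/(sigma*sqrt(2.0))))
          end do
          snew = 0.0
          snew(0) = sum(p0*s)
          do i = 0, 5
            snew(i+1) = snew(i+1) + (1.0 - p0(i))*s(i)
          end do
          snew(6) = snew(6) + (1.0 - p0(6))*s(6)
          s = snew
        end do
        print *, s(0)                   ! the steady firing level
      end program feedback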

From equations (1), (2), and (3) we get

(4) p_i0(t) = F_i[ K Σ_{j=0}^{6} S_j(t-1) p_j0(t-1) ]

Recall that

            | p_00(t)  1-p_00(t)     0       ...      0      |
            | p_10(t)      0     1-p_10(t)   ...      0      |
(5) P(t) =  |   ...                                          |
            | p_50(t)      0         0       ...  1-p_50(t)  |
            | p_60(t)      0         0       ...  1-p_60(t)  |

Using the theory of non-homogeneous Markov chains (Paz [9]), we were able to show that the lumped model is a weakly ergodic chain. We would like to show that the model is strongly ergodic, but we haven't been able to prove this so far.

We shall require the following definitions and propositions from non-homogeneous Markov chain theory, which can be found in Paz [9]. Let P = [p_ij] be a stochastic matrix. Then define

δ(P) = sup_{u,v} Σ_j (p_uj - p_vj)^+, where k^+ = max[0,k]

||P|| = sup_i Σ_j |p_ij|

Proposition: 0 <= δ(P) <= 1.

Definition: A non-homogeneous Markov chain is an infinite sequence of Markov matrices {P_i}, i = 0,1,2,..., such that P_i is the matrix of transition probabilities at time t = i.
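For concreteness, δ(P) can be computed directly from its definition; a small sketch (modern Fortran; the function name delta is illustrative):

      ! delta(P) = max over row pairs (u,v) of sum_j max(0, p(u,j) - p(v,j)).
      function delta(p, n) result(d)
        implicit none
        integer, intent(in) :: n
        real,    intent(in) :: p(n,n)
        real    :: d, s
        integer :: u, v, j
        d = 0.0
        do u = 1, n
          do v = 1, n
            s = 0.0
            do j = 1, n
              s = s + max(0.0, p(u,j) - p(v,j))
            end do
            d = max(d, s)
          end do
        end do
      end function delta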

Let H_mn = P_m P_{m+1} ... P_n (the product of the transition matrices from time m through n).

Definition: A non-homogeneous Markov chain is weakly ergodic if lim_{n→∞} δ(H_mn) = 0 for every m = 0,1,2,.... It is strongly ergodic if there exists a constant matrix Q (i.e., all rows equal) such that lim_{n→∞} ||H_mn - Q|| = 0.

Theorem (given without proof): A Markov chain is weakly ergodic iff there exists a subdivision of the chain into blocks of matrices {H_{i_j, i_{j+1}}} such that Σ_{j=0}^{∞} (1 - δ(H_{i_j, i_{j+1}})) diverges.

Proposition: The lumped model is weakly ergodic.

Proof: Given the transition matrix, from (5) we have that

δ[P(t)] <= max_{0<=k<=6} [1 - p_k0(t)]

From (3),

p_k0(t) = F(x(t) + B - θ_k) = F(K S_0(t) + B - θ_k)

Now, assuming that F is strictly monotone increasing, we have

0 < F(B - θ_k) <= p_k0(t) <= F(K + B - θ_k) < 1

Thus p_k0(t) is uniformly bounded away from 0 and 1, and if we let the subdivisions in the previous theorem be just H_{j,j+1} = P(j), then 1 - δ(P(t)) is uniformly bounded away from 0.

Unfortunately, weak ergodicity does not tell us much. It tells us

only that there is a sequence of constant matrices {E_mn} such that

lim_{n→∞} ||H_mn - E_mn|| = 0

We would like to say that there exists {E_m} such that

lim_{n→∞} ||H_mn - E_m|| = 0, i.e., strongly ergodic

(it can be shown that E_m is independent of m). Strong ergodicity would say that the non-homogeneous process reaches a long run distribution (i.e., steady state). Note, however, that even strong ergodicity would not allow us to say that the same long run distribution was reached independent of the starting state. This is because our non-homogeneous chain is itself a function of the starting state.

One attempt to prove strong ergodicity assuming weak ergodicity was the following. Assuming weak ergodicity, we know that H_1n is approximately a constant matrix for n sufficiently large. Strong ergodicity would say that the transition matrix P(n) at time step n would be one that maps H_1n into H_1,n+1 almost exactly. Can we determine the form of such a matrix, and must our non-homogeneous process have such a form after some fixed number of time steps? That is, let the constant matrix that H_1n approximates be C. Then

CP ≈ C, or C(P - I) = 0

Let [C_0,...,C_6] be a row of C. Then the above equation is equivalent to saying that C is in the null space of P - I.

Now

            | p_00-1  1-p_00    0       0     ...    0    |
            | p_10      -1    1-p_10    0     ...    0    |
    P - I = | p_20       0      -1    1-p_20  ...    0    |
            |  ...                                        |
            | p_60       0       0      0     ...  -p_60  |

It is immediately clear that, since there are 6 linearly independent columns, the null space of P - I is 1-dimensional. Consequently, since we are interested only in stochastic vectors, there is at most one stochastic vector in the null space of P - I. This brings us back to the obvious problem of showing that S(t) is in the null space of (P - I)(t) for some t.

It appears that we must examine the recurrence relations (equations (1)-(4)) to get any further:

(1) S_k(t) = Σ_{i=0}^{6} S_i(t-1) p_ik(t-1)

(2) x(t) = K S_0(t)

(3) p_i0(t) = F_i(x(t))

(4) p_i0(t) = F_i[ K Σ_{j=0}^{6} S_j(t-1) p_j0(t-1) ]

We will attempt to get bounds on p_ik(t) as t → ∞. Assuming the threshold function is a monotone decreasing function (of the recovery state), we have that

0 < F_0(x) < F_1(x) < ... < F_6(x) < 1 for all x.

Consequently,

p_i0(0) = F_i(K S_0(0))

p_i0(1) = F_i[ K Σ_j S_j(0) F_j(K S_0(0)) ] >= F_i[ K F_0(K S_0(0)) ]

p_i0(2) = F_i[ K Σ_j S_j(1) p_j0(1) ]
        >= F_i[ K Σ_j S_j(1) F_j(K F_0(K S_0(0))) ]
        >= F_i[ K F_0(K F_0(K S_0(0))) ]

Inductively,

p_i0(t) >= F_i[ (K F_0)^t (K S_0(0)) ]

Similarly, we can get an upper bound:

p_i0(t) <= F_i[ (K F_6)^t (K S_0(0)) ]

Since F_i is monotone, we have either

K F_i(x) >= x, in which case (K F_i)^t(x) >= (K F_i)^{t-1}(x) >= ... >= K F_i(x) >= x

or

(K F_i)^t(x) <= (K F_i)^{t-1}(x) <= ... <= K F_i(x) <= x

In either case, since we have a bounded monotone sequence, the limit exists as t → ∞ (it may depend on K S_0(0)). Thus

lim_{t→∞} F_i[ (K F_0)^t (K S_0(0)) ] <= lim inf_{t→∞} p_i0(t) <= lim sup_{t→∞} p_i0(t) <= lim_{t→∞} F_i[ (K F_6)^t (K S_0(0)) ]

We thus have (possibly crude) bounds for the limiting transition probabilities, if they exist. (Note that the limit points are fixed points of K F_0 or K F_6.) An example of calculating such bounds is now presented. Let K = 100, σ = 20, θ_i = 200e^-i, μ = 0, B = -20. Then

lim_{t→∞} (K F_0)^t (K S_0(0)) = 0

since 0 is the only fixed point of K F_0. Also,

lim_{t→∞} (K F_6)^t (K S_0(0)) = 100

(see Figure 8). Hence

F_1[0] = F[-20 - 200e^-1] ≈ 0
F_1[100] = .74

So 0 <= p_10 <= .74 in the limit. Other bounds that we calculated for this situation were 0 <= p_00 <= .02 and p_60 <= 1.

We have seen that analytic results for long run distributions and steady states appear extremely difficult to obtain. We now present a heuristic "proof" that the lumped model, with the parameters of this series, can in fact reach one and only one steady state independent of the initial state, if it reaches a steady state at all. We assumed that in the steady state the first six recovery states 0,...,5 have approximately the same probabilities. This is plausible because the threshold is fairly high in states 0,...,5. Therefore, it is highly probable that a neuron that fires will not fire again until it reaches recovery state 6, and consequently, for a steady state, the first six recovery states must have the same probabilities. Moreover, nearly all the neurons that do fire will in fact have been in recovery state 6.

Therefore,

S = [S_0, S_0, S_0, S_0, S_0, S_0, S_6], i.e., S_6 = 1 - 6S_0

and, assuming p_60 S_6 = S_0, we get

p_60 = S_0 / (1 - 6S_0)

Now, with Φ the standard normal c.d.f., the condition

Φ( (K S_0 + B - θ_6) / σ ) = Φ( (100 S_0 - 20.5) / 10 ) = S_0 / (1 - 6S_0)

is satisfied in the steady state. Graphing each side of the above equation as a function of S_0 (see Figure 9), we find that the only possible solution S_0 is very close to the observed value. (A numerical check of this fixed point appears after the force field discussion below.)

Since we would like to use the deterministic lumped model to predict the behavior of the probabilistic base model, we need to analyze the possible effects of error propagation. We analyze error propagation by observing what happens to two states in the lumped model that are near each other. The metric that we use could be, for example, the following. Let r and s be two states of the lumped model; then d<r,s> = max_{0<=i<=6} |r_i - s_i|. Instead of using the seven-dimensional vector that we actually have, we attempted to make the problem more tractable by using just two recovery states, 0 and 6. Thus, we mapped the next state transition of the lumped model starting with different probabilities in states 0 and 6. If the probabilities did not add up to 1, we uniformly spread the remaining probability over states 1 through 5. This gave us the "force field" in Figure 10. Although not quite accurate, since there is not a homomorphism from the 7-dimensional state space to the two-dimensional one, we still obtained a good understanding of the behavior

of the lumped model. The steady state point is clearly locatable. Also, we observe that there are some states which diverge quite significantly. Consequently, if a base model received enough noise to push it far enough away from the steady state, then the trajectory of the base model should follow the force field trajectory defined by the lumped model.

To see whether this was actually true, we graphed several base model runs. We first simply plotted the firing level of runs with 400 neurons and 50 neighbors (Figure 11). On the surface, this does not appear to match in any way the lumped model run (Figure 7), which is also a plot of the firing level vs. time. However, we decided to plot the same base runs using the same two-dimensional "state space" as the "force field" of the lumped model. This plot (Figure 12) reveals quite clearly that once the base model is displaced from equilibrium (by noise) it tends to follow the "force field" of the lumped model. We plotted other base model runs using the same two-dimensional "state space" (Figures 13, 14, 15, 16, 17). They show the qualitative similarity between base model runs and lumped model runs. We see that when the neighborhood size is reasonably large (Figures 13, 15, 16) the base model spends much of its time near equilibrium. When it is displaced by noise it follows a lumped model trajectory that is not too large. Also, it seems that there is almost a critical neighborhood size, in the sense that even with 1000 neurons, if the neighborhood size is small, for example 20, 30 or 50 neighbors (Figure 14), then there is always sufficient noise to kick the base model away from equilibrium. To analyze this more quantitatively, one might now want to make the lumped model probabilistic and see if the probability distribution of the trajectories of the lumped model can be made to approximate those of the base model.
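The steady state relation derived above, Φ((100 S_0 - 20.5)/10) = S_0/(1 - 6 S_0), can also be checked by bisection; a minimal sketch (modern Fortran, under the parameter reconstruction given earlier; the program name heur is illustrative):

      ! Solve Phi((100*s0 - 20.5)/10) = s0/(1 - 6*s0) for s0 by bisection.
      ! The c.d.f. side exceeds the ratio side at s0 = 0 and falls below it
      ! as s0 approaches 1/6, so a sign change is bracketed.
      program heur
        implicit none
        real    :: lo, hi, mid, f
        integer :: it
        lo = 0.0
        hi = 1.0/6.0 - 1.0e-4
        do it = 1, 60
          mid = 0.5*(lo + hi)
          f = 0.5*(1.0 + erf((100.0*mid - 20.5)/(10.0*sqrt(2.0)))) &
              - mid/(1.0 - 6.0*mid)
          if (f > 0.0) then
            lo = mid                    ! c.d.f. side still above the ratio side
          else
            hi = mid
          end if
        end do
        print *, mid                    ! the crossing point, approximately .035
      end program heur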

The analyses of these two series of experiments have given us encouraging results. The lumped model can be used for general predictions of long term behavior. The lumped model vector field provides a conditional statistical prediction of future behavior and is reasonably accurate for short term predictions. That is to say, if we know the base model state at a particular time, we have a qualitative idea of future trajectories of the base model from that point on. Hopefully, in future work, this qualitative feeling can be made more precise by assigning probabilities to the expected trajectories. We notice that the lumped model is useful in predicting base model behavior even when we have reasonably small numbers of neurons in a block, say 100, and when the neighborhood size is above approximately 50 neurons. The use of positive feedback did not lead to unusual behavior of the base model not predictable by the lumped model. Thus, it is reasonable to expect that blocks of finite numbers of neurons with complex interconnections can be lumped and still retain significant aspects of behavior.

We have seen that with a very simple lumped model we have been able to retain important aspects of the behavior of the extremely complex base model. The simplicity of the lumped model relative to the base model was apparent in the computer implementation. The simulation program for the lumped model was far simpler than for the base model. Also, the lumped model simulation used less computer memory and required less computer time per simulated time step. Appendix 2 gives a comparison between the running times of the base model and the lumped model.

Bibliography

1. Anninos, P.A., Beek, B., Csermely, T.J., Harth, E.M., and Pertile, G., "Dynamics of Neural Structures", J. Theor. Biol., 26, 1970, 121-148.

2. Arbib, M.A., "Automata Theory and Control Theory: A Rapprochement", Automatica, 3, 1966, 161-189.

3. Bacon, G.C., "The Decomposition of Stochastic Automata", Information and Control, 7, 1964, 320-329.

4. Csermely, T.J., Harth, E., and Lewis, N.S., "The Netlet Theory and Cooperative Phenomena in Neural Networks", J. Dynamic Systems, Measurement and Control, Sept. 1973.

5. Gray, J.N., and Harrison, M.A., "The Theory of Sequential Relations", Information and Control, 9, 1966, 435-468.

6. Harth, E.M., Csermely, T.J., and Lindsay, R.D., "Brain Functions and Neural Dynamics", J. Theor. Biol., 26, 1970, 93-120.

7. Hartmanis, J., and Stearns, R.E., Algebraic Structure Theory of Sequential Machines, Prentice-Hall, 1969.

8. Mihram, G.A., Simulation: Statistical Foundations and Methodology, Academic Press, 1972.

9. Paz, A., Introduction to Probabilistic Automata, Academic Press, 1971.

10. Wong, R., and Harth, E., "Stationary States and Transients in Neural Populations", J. Theor. Biol., 40, 1973, 70-106.

11. Zeigler, B.P., "Modelling and Simulation: Structure Preserving Morphisms for Discrete and Continuous Systems", in Computers and Automata, Proc. 21st International Conf., Polytechnic Institute of Brooklyn, Brooklyn, N.Y., 1972.

12. _____, "On the Formulation of Problems in Modelling and Simulation within the Framework of Mathematical Systems Theory", Proc. 6th Int. Congress on Cybernetics, Namur, Belgium, 1972.

13. _____, Theory of Modelling and Simulation (to appear).

// JOB 111102222030333
// XEQ TD FX
// FOR BASE
*LIST ALL
*NONPROCESS PROGRAM
*ONE WORD INTEGERS
*IOCS(TYPEWRITER, KEYBOARD, 1443 PRINTER, MAGNETIC TAPE)
C MAIN PROGRAM FOR THE BASE PROGRAM
C
      EXTERNAL KBDE, A7E
      INTEGER OLDST(5,1000), SSWCH, DSWCH, FILOK
      REAL INEXT, NOISE, NOIS
C DECLARATION FOR EXTRA COMMON BLOCK VARIABLES
      INTEGER STATE(5,1000)
C BASIC COMMON BLOCK
      INTEGER T, OUTF, U, V, P
      REAL NOIST(5,3), INPU(5)
      COMMON NBLKS,NUMBR(5),NSTAT,T,U,THPR(5,5),NOIST,INPR1(5),
     1 INPR2(5),EXINP(5,30),INPU,BKGND(5),WEIGT(5,5),OUTF,V,P
C EXTRA COMMON BLOCK FOR BASE MODEL
      COMMON STATE,NFILE,KOUNT(5)
C
C INITIALIZATION ROUTINE LINKS TO THIS ROUTINE, SO INITIALIZATION IS COMPLETE
C IF SENSE SWITCH 4 IS UP USE PDP7 TELETYPE AS BACKUP CONSOLE
      IF (SSWCH(4)) 110,110,100
  100 CALL SEFIN(A7E)
      GO TO 115
  110 CONTINUE
      CALL SEFIN(KBDE)
  115 CONTINUE
C
C GET OUTPUT FILE NUMBER AND INITIALIZE IT
      WRITE(V,950)
  950 FORMAT('ENTER OUTPUT FILE NO. (0 FOR NO OUTPUT FILE)')
      CALL RDINT(OUTF)
      IF(OUTF) 975,975,960
  960 CALL FOPO(OUTF)
      IF (FILOK(0)) 965,965,962
  962 WRITE(V,963)
  963 FORMAT('OUTPUT FILE WILL NOT OPEN')
      GO TO 110
  965 CONTINUE
      CALL FPUTI(NBLKS)
      CALL FPUTI(NSTAT)
      DO 970 I=1,NBLKS
  970 CALL FPUTI(NUMBR(I))
  975 CONTINUE
      CALL PRNTI
    5 CONTINUE
C TEST FOR RESETTING PARAMETERS
      IF (DSWCH(15)) 15,15,13
   13 CALL RSETB
   15 CONTINUE
C UPDATE STATE VECTOR - OLDSTATE = STATE
      DO 10 I=1,NBLKS
      JFINL = NUMBR(I)
      DO 10 J=1,JFINL
   10 OLDST(I,J) = STATE(I,J)
C TEST FOR STOPPING CONDITION
      IF (DSWCH(14)) 30,30,20

   20 IF(OUTF) 25,25,22
   22 IF(FILOK(0)) 23,24,23
   23 WRITE(V,27)
   27 FORMAT('ERROR-OUTPUT FILE OVERFLOW')
   24 CALL FCLO
   25 CALL EXIT
   30 CONTINUE
C COMPUTE NEXT STATE (LOOP OVER ALL BLOCKS)
      CALL FOPI(NFILE)
      CALL FGETR(DUMY)
      DO 1000 I=1,NBLKS
      INPU(I) = INEXT(I)
      JFINL = NUMBR(I)
      LIMIT = KOUNT(I)
C LOOP OVER NEURONS WITHIN A BLOCK
      DO 1000 J=1,JFINL
      NOIS = NOISE(I)
      VOLLY = 0.
C SUM INPUTS TO NEURON (I,J) FROM ITS NEIGHBORS
      DO 50 K=1,LIMIT
C NEURON (X,Y) IS ENCODED IN NBRS AS 1001 * X + Y
      CALL FGETI(NBR)
      KK1 = NBR/1001
      KK2 = NBR - KK1*1001
      IF(OLDST(KK1,KK2)) 50,40,50
   40 VOLLY = VOLLY + WEIGT(I,KK1)
   50 CONTINUE
C ADD THE EXTERNAL INPUT AND COMPUTE THRESHOLD
      VOLLY = VOLLY + INPU(I) + BKGND(I)
      THOL = NOIS + THOLD(I,OLDST(I,J))
C DOES THE NEURON FIRE?
      IF(VOLLY - THOL) 60,70,70
   60 STATE(I,J) = OLDST(I,J) + 1
      IF (STATE(I,J)-NSTAT) 1000,65,65
   65 STATE(I,J) = NSTAT - 1
      GO TO 1000
   70 STATE(I,J) = 0
 1000 CONTINUE
      CALL PRNT(OLDST)
      T = T + 1
      GO TO 5
      END
*STOREMC BASE

// JOB 111102222030333
// XEQ TD FX
// FOR
*LIST ALL
*NONPROCESS PROGRAM
*ONE WORD INTEGERS
      FUNCTION DIST(I,X,CFACT)
C COMPUTES DISTRIBUTION FUNCTION FOR NOISE FUNCTIONS
C
      REAL GAUSS(33)
C BASIC COMMON BLOCK
      INTEGER T, OUTF, U, V, P
      REAL NOIST(5,3), INPU(5)
      COMMON NBLKS,NUMBR(5),NSTAT,T,U,THPR(5,5),NOIST,INPR1(5),
     1 INPR2(5),EXINP(5,30),INPU,BKGND(5),WEIGT(5,5),OUTF,V,P
      DATA GAUSS/.5,.5398,.5793,.6179,.6554,.6915,.7257,.758,.7881,
     1 .8159,.8413,.8643,.8849,.9032,.9192,.9332,.9452,.9554,.9641,
     2 .9713,.9773,.9821,.9861,.9893,.9918,.9938,.9953,.9965,.9974,
     3 .9981,.9986,.999,.9993/
C NOIST(I,1) INDICATES THE TYPE OF DISTRIBUTION USED FOR BLOCK I
C NOIST(I,1) = 1.0 FOR GAUSSIAN, NOIST(I,1) = 2.0 FOR UNIFORM
C NOIST(I,2), NOIST(I,3) ARE THE MEAN AND DEVIATION FOR GAUSSIAN DISTRIBUTION
C NOIST(I,2), NOIST(I,3) ARE THE LOWER AND UPPER LIMITS FOR UNIFORM DISTRIBUTION
C
      K = NOIST(I,1)
      GO TO (100,200),K
C GAUSSIAN DISTRIBUTION
  100 CONTINUE
C CFACT IS THE SECOND ORDER CORRECTION FACTOR ADDED TO THE VARIANCE
      STDEV = SQRT(NOIST(I,3)**2 + CFACT)
      Z = (X-NOIST(I,2))/STDEV
      IF (STDEV) 110,110,105
  105 CONTINUE
      Y = ABS(Z)
      INDEX = 10.0 * Y + 1.0
      IF(INDEX-32) 120,120,110
  110 DIST = 1.0
      GO TO 130
  120 DIST = GAUSS(INDEX) + (GAUSS(INDEX+1)-GAUSS(INDEX)) * (10.0*Y
     1 - INDEX + 1)
  130 IF(Z) 135,140,140
  135 DIST = 1.0 - DIST
  140 RETURN
C UNIFORM DISTRIBUTION
  200 CONTINUE
      A = NOIST(I,2)
      B = NOIST(I,3)
      IF(X-A) 210,210,220
  210 DIST = 0.0
      RETURN
  220 IF(X-B) 240,230,230
  230 DIST = 1.0
      RETURN
  240 DIST = (X-A)/(B-A)
      RETURN
      END
*STOREMC DIST

// JOB 111102222030333
// XEQ TD FX
// FOR
*LIST ALL
*ONE WORD INTEGERS
*NONPROCESS PROGRAM
      REAL FUNCTION INEXT(I)
C THIS FUNCTION PROVIDES THE EXTERNAL INPUT
C
C THE EXTERNAL INPUT MAY BE: PERIODIC
C                            EXPONENTIAL DECAY
C                            CUTOFF AFTER SOME INITIAL INPUTS
C                            PULSE -- VALUES TAKEN FROM KEYBOARD
C
      INTEGER PRD(5), CTOFF(5)
C BASIC COMMON BLOCK
      INTEGER T, OUTF, U, V, P
      REAL NOIST(5,3), INPU(5)
      COMMON NBLKS,NUMBR(5),NSTAT,T,U,THPR(5,5),NOIST,INPR1(5),
     1 INPR2(5),EXINP(5,30),INPU,BKGND(5),WEIGT(5,5),OUTF,V,P
      EQUIVALENCE (INPR2(1),PRD(1),CTOFF(1))
      DATA KOUNT/0/
      MOD(M,N) = M-M/N*N
C
C INPR1(I) DETERMINES TYPE  1=PERIODIC 2=EXP DECAY 3=CUTOFF 4=PULSE
C EXINP(I,J) IS THE INPUT FOR BLOCK I FOR THE JTH STEP IN CYCLE
C
      K = INPR1(I)
      GO TO (1000,2000,3000,4000),K
C PERIODIC INPUTS (PERIOD IS 'PRD')
 1000 CONTINUE
      INDEX = MOD(T,PRD(I)) + 1
      INEXT = EXINP(I,INDEX)
      RETURN
C EXPONENTIAL DECAY
 2000 CONTINUE
      INEXT = EXINP(I,1) * EXP(-EXINP(I,2) * T)
      RETURN
C CUTOFF (AT T='CTOFF', INPUT GOES TO ZERO)
 3000 CONTINUE
      IF(T - CTOFF(I)) 3300,3500,3500
 3300 INEXT = EXINP(I,T+1)
      RETURN
 3500 INEXT = 0.0
      RETURN
C PULSE INPUT
 4000 CONTINUE
      IF (T-EXINP(I,3)) 4500,4100,4900
C QUERY FOR NEW INFORMATION
 4100 KOUNT = KOUNT + 1
      IF(MOD(KOUNT,6)-1) 4150,4110,4150
 4110 WRITE(V,4200) T,I
      GO TO 4175
 4150 WRITE(V,4220) T,I
 4220 FORMAT(/,'TIME IS:',I4,' BLOCK',I4)
 4175 CONTINUE
 4200 FORMAT(/,'TIME IS:',I4,' BLOCK',I4,5X,'ENTER PULSE HEIGHT, PULSE
     1 END TIME, NEXT PULSE TIME')
      CALL RDFLT(EXINP(I,1))

      CALL RDFLT(EXINP(I,2))
      CALL RDFLT(EXINP(I,3))
      GO TO 4000
C TO PULSE OR NOT TO PULSE
 4500 IF (T-EXINP(I,2)) 4600,4600,4900
 4600 INEXT = EXINP(I,1)
      RETURN
 4900 INEXT = 0.0
      RETURN
      END
*STOREMC INEXT

// JOB 111102222030333
// XEQ TD FX
// FOR LUMPD
*LIST ALL
*NONPROCESS PROGRAM
*ONE WORD INTEGERS
*IOCS(TYPEWRITER, KEYBOARD, 1443 PRINTER, MAGNETIC TAPE)
C
      INTEGER FIRS, DSWCH, FILOK
      REAL OLDST(5,20), INEXT
C DECLARATION FOR EXTRA COMMON BLOCK VARIABLES
      INTEGER FIRST(5), SOFLG
C BASIC COMMON BLOCK
      INTEGER T, OUTF, U, V, P
      REAL NOIST(5,3), INPU(5)
      COMMON NBLKS,NUMBR(5),NSTAT,T,U,THPR(5,5),NOIST,INPR1(5),
     1 INPR2(5),EXINP(5,30),INPU,BKGND(5),WEIGT(5,5),OUTF,V,P
C EXTRA COMMON BLOCK FOR LUMPED MODEL
      COMMON FIRST,LAST(5),NBRS(25),STATE(5,20),SUMSQ(5,5),SOFLG
C
C INITIALIZE
      CALL INITL
    5 CONTINUE
C TEST FOR RESETTING PARAMETERS
      IF (DSWCH(15)) 15,15,13
   13 CALL RSETL
   15 CONTINUE
C UPDATE THE STATE VECTOR
      DO 10 I=1,NBLKS
      DO 10 J=1,NSTAT
   10 OLDST(I,J) = STATE(I,J)
C TEST FOR STOPPING CONDITION
      IF(DSWCH(14)) 30,30,20
   20 IF(OUTF) 25,25,22
   22 IF(FILOK(0)) 23,24,23
   23 WRITE(V,27)
   27 FORMAT('ERROR-OUTPUT FILE OVERFLOW')
   24 CALL FCLO
   25 CALL EXIT
   30 CONTINUE
C COMPUTE NEXT STATE
      DO 1000 I=1,NBLKS
      CFACT = 0.0
      INPU(I) = INEXT(I)
      VOLLY = INPU(I) + BKGND(I)
      FIRS = FIRST(I)
      LAS = LAST(I)
C SUM INPUTS FROM NEIGHBOR BLOCKS
      DO 50 J=FIRS,LAS
      K = NBRS(J)
      IF(SOFLG) 50,50,40
   40 CFACT = CFACT + OLDST(K,1)*(1.0-OLDST(K,1))*SUMSQ(I,K)
   50 VOLLY = VOLLY + WEIGT(I,K) * OLDST(K,1)
C COMPUTE NEW PERCENT IN STATES FOR THIS BLOCK
      SUM = 0.0
      DO 100 K=2,NSTAT
      PROB = DIST(I,THOLD(I,K-2)-VOLLY,CFACT)
      STATE(I,K) = PROB*OLDST(I,K-1)
  100 SUM = SUM + STATE(I,K)

      PROB = DIST(I,THOLD(I,NSTAT-1)-VOLLY,CFACT)
      STATE(I,NSTAT) = STATE(I,NSTAT) + PROB*OLDST(I,NSTAT)
      SUM = SUM + PROB*OLDST(I,NSTAT)
      STATE(I,1) = 1.0 - SUM
 1000 CONTINUE
      CALL PRNT2(OLDST)
      T = T + 1
      GO TO 5
      END
*STOREMC LUMPD

// JOB 111102222030333
// XEQ TD FX
// FOR
*LIST ALL
*ONE WORD INTEGERS
*NONPROCESS PROGRAM
      REAL FUNCTION NOISE(I)
C
      INTEGER RAND
C BASIC COMMON BLOCK
      INTEGER T, OUTF, U, V, P
      REAL NOIST(5,3), INPU(5)
      COMMON NBLKS,NUMBR(5),NSTAT,T,U,THPR(5,5),NOIST,INPR1(5),
     1 INPR2(5),EXINP(5,30),INPU,BKGND(5),WEIGT(5,5),OUTF,V,P
C I IS BLOCK NUMBER  NOIST(I,1)=1 IS GAUSSIAN, =2 IS UNIFORM
C FOR GAUSSIAN DISTRIBUTION: NOIST(I,2)=MEAN, NOIST(I,3)=DEVIATION
C FOR UNIFORM DISTRIBUTION: NOIST(I,2)=LOWER LIM., NOIST(I,3)=UPPER LIM.
C
      K = NOIST(I,1)
      GO TO (100,200),K
C GAUSSIAN DISTRIBUTION
  100 CONTINUE
      IRAND = 0
      DO 105 J=1,6
  105 IRAND = IRAND + RAND(2001)
      ZRAND = IRAND/1000.0 - 6.0
      NOISE = NOIST(I,3) * .70711 * ZRAND + NOIST(I,2)
      RETURN
C UNIFORM DISTRIBUTION
  200 CONTINUE
      NOISE = RAND(2001) * (NOIST(I,3) - NOIST(I,2))/2000.0 + NOIST(I,2)
      RETURN
      END
*STOREMC NOISE

// JOB 111102222030333
// XEQ TD FX
// FOR
*LIST ALL
*ONE WORD INTEGERS
*NONPROCESS PROGRAM
      SUBROUTINE PRNT(OLDST)
      INTEGER NIS(20), OLDST(5,1000)
C BASIC COMMON BLOCK
      INTEGER T, OUTF, U, V, P
      REAL NOIST(5,3), INPU(5)
      COMMON NBLKS,NUMBR(5),NSTAT,T,U,THPR(5,5),NOIST,INPR1(5),
     1 INPR2(5),EXINP(5,30),INPU,BKGND(5),WEIGT(5,5),OUTF,V,P
C PRINTS NUMBER OF NEURONS IN EACH STATE FOR EACH BLOCK
      WRITE(U,100) T
  100 FORMAT(/,/,/,' TIME STEP',I7)
C FOR EACH BLOCK CALCULATE NUMBER AND PERCENT NEURONS IN EACH STATE
      DO 500 I=1,NBLKS
      WRITE(U,101) I, INPU(I)
  101 FORMAT(/,I6,'TH BLOCK',10X,'EXTERNAL INPUT IS',F8.2)
      DO 200 J=1,NSTAT
  200 NIS(J) = 0
      LIMIT = NUMBR(I)
      DO 300 J=1,LIMIT
      K = OLDST(I,J) + 1
  250 NIS(K) = NIS(K) + 1
  300 CONTINUE
      DO 450 J=1,NSTAT
      K = NIS(J)
      IF(K) 400,400,310
  310 PCENT = K * 100.0/LIMIT
      J1 = J - 1
      WRITE(U,102) J1,K,PCENT
  102 FORMAT(10X,'NEURONS IN STATE',I4,' **',I5,' OR',F7.1,
     1' PERCENT')
  400 IF(OUTF) 450,450,410
  410 CALL FPUTI(K)
  450 CONTINUE
  500 CONTINUE
      RETURN
      END
*STOREMC PRNT

// JOB 111102222030333
// XEQ TD FX
// FOR
*LIST ALL
*NONPROCESS PROGRAM
*ONE WORD INTEGERS
      SUBROUTINE PRNTI
C PRINTS OUT INITIAL INFORMATION FOR EITHER THE BASE OR THE LUMPED MODEL
      EXTERNAL PRINT
C BASIC COMMON BLOCK
      INTEGER T, OUTF, U, V, P
      REAL NOIST(5,3), INPU(5)
      COMMON NBLKS,NUMBR(5),NSTAT,T,U,THPR(5,5),NOIST,INPR1(5),
     1 INPR2(5),EXINP(5,30),INPU,BKGND(5),WEIGT(5,5),OUTF,V,P
      CALL SFOUT(PRINT)
      CALL CRET
      DO 2010 I=1,NBLKS
 2010 WRITE(U,2002) I,NUMBR(I)
 2002 FORMAT(/,' BLOCK',I2,' HAS',I5,' NEURONS')
      DO 2070 I=1,NBLKS
      WRITE(U,2061) I
 2061 FORMAT(/,/,' THRESHOLD FUNCTION FOR BLOCK',I3)
      IF (THPR(I,1)-1.0) 2063,2063,2065
 2063 WRITE(U,2064) (THPR(I,J),J=2,5)
 2064 FORMAT('+',35X,'IS LINEAR WITH',4F8.3,' AS THE RESTH, MAXTH, A
     1RP, RRP, RESPECTIVELY')
      GO TO 2070
 2065 WRITE(U,2066) (THPR(I,J),J=2,5)
 2066 FORMAT('+',35X,'IS EXPONENTIAL WITH',4F8.3,' AS THE RESTH, M,
     1K, CUTOFF, RESPECTIVELY')
 2070 CONTINUE
      DO 2090 I=1,NBLKS
      WRITE(U,2071) I
 2071 FORMAT(/,/,' NOISE FUNCTION FOR BLOCK',I3)
      IF (NOIST(I,1)-1.0) 2072,2072,2080
 2072 WRITE(U,2073) (NOIST(I,J), J=2,3)
 2073 FORMAT('+',30X,'IS GAUSSIAN WITH MEAN',F8.3,' AND DEVIATION',
     1 F8.3)
      GO TO 2090
 2080 WRITE(U,2081) (NOIST(I,J), J=2,3)
 2081 FORMAT('+',30X,'IS UNIFORM WITH LOWER BOUND',F8.3,' AND UPPER
     1 BOUND',F8.3)
 2090 CONTINUE
      DO 2110 I=1,NBLKS
      WRITE(U,2095) I
 2095 FORMAT(/,/,' EXTERNAL INPUT TO BLOCK',I3)
      KK = INPR2(I)
      K = INPR1(I)
      GO TO (2096,2100,2105,2115),K
 2096 WRITE(U,2097) KK, (EXINP(I,J), J=1,KK)
 2097 FORMAT('+',30X,'IS PERIODIC WITH PERIOD',I4,' AND VALUES',
     1 6F8.3,/,(30X,12F8.3))
      GO TO 2110
 2100 WRITE(U,2101) (EXINP(I,J), J=1,2)
 2101 FORMAT('+',30X,'IS EXPONENTIAL WITH M=',F8.3,' AND K=',F8.3)
      GO TO 2110
 2105 WRITE(U,2106) KK, (EXINP(I,J), J=1,KK)
 2106 FORMAT('+',30X,'IS CUTOFF WITH LENGTH',I4,' AND VALUES',6F8.3,
     1 /,(30X,12F8.3))

      GO TO 2110
 2115 WRITE(U,2116) EXINP(I,3)
 2116 FORMAT('+',30X,'IS PULSE WITH FIRST PULSE AT T=',F5.0)
 2110 CONTINUE
      DO 2130 I=1,NBLKS
      WRITE(U,2121) I, BKGND(I)
 2121 FORMAT(/,' BACKGROUND FOR BLOCK',I3,' IS',F8.3)
 2130 CONTINUE
      IF(OUTF) 2200,2200,2150
 2150 WRITE(U,2170) OUTF
 2170 FORMAT(/,' OUTPUT FROM THIS RUN IS IN FILE',I5)
 2200 CONTINUE
      CALL EOPAG
      CALL CRET
      RETURN
      END
*STOREMC PRNTI

// JOB 111102222030333
// XEQ TD FX
// FOR
*LIST ALL
*NONPROCESS PROGRAM
*ONE WORD INTEGERS
      SUBROUTINE PRNT2(OLDST)
C PRINTS NUMBER AND PERCENTAGE OF NEURONS IN EACH STATE FOR EACH BLOCK
C
      REAL OLDST(5,20)
C BASIC COMMON BLOCK
      INTEGER T, OUTF, U, V, P
      REAL NOIST(5,3), INPU(5)
      COMMON NBLKS,NUMBR(5),NSTAT,T,U,THPR(5,5),NOIST,INPR1(5),
     1 INPR2(5),EXINP(5,30),INPU,BKGND(5),WEIGT(5,5),OUTF,V,P
C
      WRITE(U,100) T
  100 FORMAT(/,/,/,'0TIME STEP',I7)
      DO 500 I=1,NBLKS
      WRITE(U,101) I, INPU(I)
  101 FORMAT(/,I6,'TH BLOCK',10X,'EXTERNAL INPUT IS',F8.2)
      DO 250 J=1,NSTAT
      ENMBR = OLDST(I,J) * NUMBR(I)
      IF (OLDST(I,J)) 200,200,150
  150 PCENT = OLDST(I,J) * 100.
      J1 = J - 1
      WRITE(U,102) J1,ENMBR,PCENT
  102 FORMAT(10X,'NEURONS IN STATE',I4,' **',F8.2,' OR',F7.1,
     1' PERCENT')
  200 IF(OUTF) 250,250,210
  210 CALL FPUTR(OLDST(I,J))
  250 CONTINUE
  500 CONTINUE
      RETURN
      END
*STOREMC PRNT2

// JOB 111102222030333
// XEQ TD FX
// FOR
*LIST ALL
*NONPROCESS PROGRAM
*ONE WORD INTEGERS
      FUNCTION THOLD(I,RCVRY)
      INTEGER RCVRY
C BASIC COMMON BLOCK
      INTEGER T, OUTF, U, V, P
      REAL NOIST(5,3), INPU(5)
      COMMON NBLKS,NUMBR(5),NSTAT,T,U,THPR(5,5),NOIST,INPR1(5),
     1 INPR2(5),EXINP(5,30),INPU,BKGND(5),WEIGT(5,5),OUTF,V,P
C THPR(I,1) = 1.0 INDICATES A LINEAR DECAY
C THPR(I,1) = 2.0 INDICATES AN EXPONENTIAL DECAY M*EXP(-K*RCVRY)
      IF (THPR(I,1)-1.0) 5,5,100
C THPR(I,2) IS RESTING THRESHOLD   THPR(I,3) IS MAXIMUM THRESHOLD
C THPR(I,4) IS LENGTH OF ARP       THPR(I,5) IS LENGTH OF RRP
    5 CONTINUE
C TEST WHETHER IN ARP OR NOT
      IF (RCVRY-THPR(I,4)) 10,20,20
C IN ARP
   10 THOLD = 1.0E3
      RETURN
C IN RRP OR PAST RRP?
   20 IF (RCVRY-(THPR(I,4)+THPR(I,5))) 30,40,40
C IN RRP, SO COMPUTE THRESHOLD AS LINEAR FUNCTION OF RCVRY
   30 THOLD = THPR(I,3) - (THPR(I,3)-THPR(I,2))/THPR(I,5) *
     1 (RCVRY-THPR(I,4))
      RETURN
C PAST RRP, RESTING THRESHOLD
   40 THOLD = THPR(I,2)
      RETURN
C
C THPR(I,2) IS RESTING THRESHOLD   THPR(I,3) IS M
C THPR(I,4) IS K                   THPR(I,5) IS CUTOFF
  100 IF (RCVRY-THPR(I,5)) 105,150,150
  105 THOLD = THPR(I,3)*EXP(-THPR(I,4)*RCVRY)
      RETURN
  150 THOLD = THPR(I,2)
      RETURN
      END
*STOREMC THOLD

APPENDIX 1

This appendix contains listings of some of the programs used in the computer simulation of the base and lumped models. We do not include the initialization routines which set up the parameter values and neuron connection pattern. Some of the routines called from our programs are part of the Logic of Computers Group program library. These supporting routines are primarily input/output routines. For further information, contact the authors.

The Logic of Computers Group is a research facility within the Department of Computer and Communication Sciences of the University of Michigan. Its computer facilities include an IBM 1800 with disk bulk storage interfaced to a DEC PDP7 with graphic CRT display.

APPENDIX 2

Running times for the base model in the second series of experiments. Values given are the average length of time required per simulated time step (in seconds).

                            Neurons
    NBRS     25     50     100    200    400    800
     1       3.7    4.2    5.1    6.4    9.6    16.15
     10      3.75   4.25   5.55   8.2    13.4   23.75
     20      3.85   5.05   6.75   10.4   17.8   32.55
     40      ---    5.55   8.8    14.9   26.5   49.55
     80      ---    ---    12.95  23.6   42.95  ---

For comparison, the lumped model requires only 2.75 seconds per time step, regardless of the number of neurons or neighbors.

We would expect the running time of the base model to be of the form T = K + N(an + b), since the main loop of the program is:

    Loop over all neurons (N)
        Compute threshold and external input
        Loop over neighbors of this neuron (n)
            Sum up inputs
        Compare input and threshold

In fact, K = 3.2, a = .001, b = .015 gives a rather good fit. See Figures 18 and 19.
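As a check of the fit: with N = 100 neurons and n = 20 neighbors the formula gives T = 3.2 + 100(.001(20) + .015) = 6.7 seconds per step, against the observed value of 6.75.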

FIGURE 1. Lumped model limiting distributions for various input levels.

FIGURE 2. Lumped model limiting distributions for various input levels.

                                   Theoretical Distribution of Chi-Square Values
                                    99   95   90   75   50    5

Size   St. Dev.  Sample Size  Deg. Freedom   Actual Distribution of Chi-Square Values
  25      20          93           2          99   96   88   74   60    8
  50      20          30           2         100   97   90   73   37    3
  75      20          30           2         100   97   97   80   47    3
 100      20          44           2          98   95   91   73   36    7
 500      20          42           2          95   93   88   74   62    2
1000      20          42           2         100   98   90   83   48    5
  25      10          12           2         100  100   92   83   58    8
 100      10          36           2         100   97   94   80   50    3
  25      40          96           2         100   95   90   79   49    4
 100      20         174           2          99   95   91   74   48    6

(Last row uses samples from every 5th time step)

FIGURE 3

                                   Theoretical Distribution of Chi-Square Values
                                    99   95   90   75   50    5

Size   St. Dev.  Sample Size  Deg. Freedom   Actual Distribution of Chi-Square Values
  25      20          66           1         100   97   88   73   53    0
  50      20          66           4         100   97   92   79   48    0
  75      20          51           4         100  100  100   84   61   10
 100      20          54           5          96   94   90   68   39    7
 500      20         105           6         100   97   93   76   40    1
1000      20         105           6          98   94   88   73   57    7
  25      10          57           3         100  100   95   82   60   16
 100      10          48           5          98   96   90   77   54    4
  25      40          32           3         100   94   78   78   59    3

FIGURE 4

Second Level Chi-Square Analysis

Size   St. Dev.   1st Level       Sample   2nd Level    Deg.      % Confidence
                  Deg. Freedom    Size     Chi-Square   Freedom   of Acceptance
  25      20          2             93        8.27         5           10
  50      20          2             30        2.65         3           30
  75      20          2             30        2.30         3           50
 100      20          2             44        4.93         3           10
 500      20          2             42        4.88         3           10
1000      20          2             42        3.72         3           25
  25      10          2             12        0.17         2           90
 100      10          2             36        1.58         3           50
  25      40          2             96        2.54         5           75
  25      20          1             66        2.05         3           50
  50      20          4             66        3.43         3           30
  75      20          4             51        1.38         3           70
 100      20          5             54        4.63         3           20
 500      20          6            105        7.67         5           10
1000      20          6            105        5.27         5           30
  25      10          3             57        1.80         3           50
 100      10          5             48        0.56         3           90
  25      40          3             32        8.33         3          2.5
 100      20          2            174                     5           90

(Last row uses samples from every fifth time step)

FIGURE 5

FIGURE 6. Schematic representation of error propagation between the base and lumped models.

FIGURE 7. Lumped model firing level (% in recovery state 0) vs. time.

FIGURE 8. Fixed points of the iterations x -> K F_0(x) and x -> K F_6(x).

FIGURE 9. Both sides of the steady state equation graphed as functions of S_0.

FIGURE 10. Lumped model "force field" in the two-dimensional state space (% in recovery state 0 vs. % in recovery state 6).

FIGURE 11. Firing level vs. time for base model runs (400 neurons, 50 neighbors).

FIGURE 12. Base model runs (400 neurons, 50 neighbors) plotted in the two-dimensional state space (rec. state 0 vs. rec. state 6); 4 different runs.

FIGURE 13. Base model runs in the two-dimensional state space (rec. state 0 vs. rec. state 6); 2 different runs.

FIGURE 14. Base model runs in the two-dimensional state space: 1000 neurons with small neighborhood sizes (20, 30 and 50 neighbors).

FIGURE 15. Base model run in the two-dimensional state space (rec. state 0 vs. rec. state 6).

FIGURE 16. Base model run in the two-dimensional state space (rec. state 0 vs. rec. state 6).

FIGURE 17. Base model runs in the two-dimensional state space; 2 different runs.

FIGURE 18. Base model running time per simulated time step vs. number of neighbors.

FIGURE 19. Base model running time per simulated time step vs. number of neurons.
