THE UNIVERSITY OF MICHIGAN
COLLEGE OF LITERATURE, SCIENCE AND THE ARTS
Department of Communication Sciences

Technical Report

AN EXPERIMENTAL STUDY OF THE FORMATION AND DEVELOPMENT OF HEBBIAN CELL-ASSEMBLIES BY MEANS OF A NEURAL NETWORK SIMULATION

Marion Finley, Jr.

ORA Project 08333

supported by:
DEPARTMENT OF HEALTH, EDUCATION, AND WELFARE
PUBLIC HEALTH SERVICE
NATIONAL INSTITUTES OF HEALTH
GRANT NO. GM-12236-03
BETHESDA, MARYLAND

administered through:
OFFICE OF RESEARCH ADMINISTRATION
ANN ARBOR

March 1967

This report was also a dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in The University of Michigan, 1967.

ACKNOWLEDGEMENTS

I first of all wish to express my profound gratitude to Professors John H. Holland and J. W. Crichton for virtually constant moral support and technical assistance. It was Professor Holland who stimulated me into developing the basic steady-state calculus of Chapter 4, and Professor Crichton who assisted me in much of the arduous programming involved. Moreover, Professor Crichton, by his deep knowledge of the Aristotelian writings on the intellect, added a dimension to my understanding that previously had been lacking. To Professor Edward L. Walker I am indebted for my first contact with neurophysiology. To Professor Anatol Rapoport I owe much of my early interest in neural networks, leading to this particular study. Finally, to Professor Henry H. Swain, I am grateful for what appears to be a genuine and enthusiastic interest in a scientific study such as this.

It would be a gross injustice not to mention my feelings of gratitude and appreciation to Professor Arthur W. Burks for his many good and wise counsels as well as his active financial patronage of my research conducted at the Logic of Computers Group. To the members and staff of that worthy group I extend my cordial appreciation, especially to Mrs. Kuniko Misawa, whose excellent handiwork is displayed herein.

This work was supported by the National Institutes of Health while I was with the Logic of Computers Group at the University of Michigan. I am particularly grateful to the National Institutes of Health for their generous allocation of funds for computer time.

Finally, mere words cannot express my admiration for my wife, Lorol, for bearing with me through the often discouraging days of this work.

TABLE OF CONTENTS

                                                              Page
ACKNOWLEDGEMENTS                                                iv
LIST OF FIGURES                                               viii
LIST OF TABLES                                                 xii
ABSTRACT                                                      xiii

1. INTRODUCTION                                                  1
   1.1 STATEMENT OF THE PROBLEM                                  1
   1.2 BASIC PREMISES AND THEORY: RELATION TO
       NEUROPHYSIOLOGICAL FACT                                   2
   1.3 THE CORTICAL NEURON AND SYSTEMS OF CORTICAL NEURONS       5
       1.3.1 Structure                                           5
       1.3.2 Input and Output; Threshold                         6
       1.3.3 Synapses                                            8
       1.3.4 Fatigue, Spontaneous Firing                         9
       1.3.5 Systems of Neurons                                 10
   1.4 PREVIOUS NEURAL NET STUDIES                              11
   1.5 SCOPE OF THE STUDY                                       14
2. FORMAL DESCRIPTION OF THE MODELS                             16
   2.1 INTRODUCTION                                             16
   2.2 THE NETWORK EQUATIONS                                    16
   2.3 THE NETWORK FUNCTIONS R, F, AND S                        22
       2.3.1 Control of Firing Rate                             22
       2.3.2 The Threshold Function                             23
       2.3.3 The Fatigue Function                               25
       2.3.4 The Synapse Value Function                         27
   2.4 NOTE ON THE SIMULATION PROGRAM                           29
   Symbols Used in Section 2.2                                  32
3. CORRELATION EXPERIMENTS, CYCLE-LESS CASE                     34
   3.1 INTRODUCTION                                             34
   3.2 CORRELATION                                              36
   3.3 EXPERIMENTAL CONFIGURATIONS FOR CORRELATION STUDY        37
       3.3.1 Overview                                           37
       3.3.2 Network Structure                                  39
   3.4 SUMMARY OF EXPERIMENTS                                   41
       3.4.1 Experimental Hypothesis                            41
       3.4.2 Some Theoretical Considerations                    43
       3.4.3 Summary of Experimental Results                    44

TABLE OF CONTENTS (Continued)

4. NETWORKS WITH CYCLES                                         47
   4.1 FOREWORD                                                 47
   4.2 DISTRIBUTIONS OF CONNECTIONS                             49
       4.2.1 Networks with Uniform Random Distributions of
             Connections                                        50
       4.2.2 Networks with Distance-Bias                        59
       4.2.3 Brief Contrast of Uniform and Distance-Bias
             Distributions                                      62
   4.3 STIMULUS-FREE BEHAVIOR IN NETWORKS WITH CYCLES           62
       4.3.1 Steady-State Behavior                              63
       4.3.2 The Threshold Curve and Steady-State Analysis      69
       4.3.3 Negative Feedback                                  80
       4.3.4 Networks with Distance-Bias                        96
       4.3.5 Synapse-Level Drift                               110
   4.4 NETWORKS UNDER A SINGLE PERIODIC STIMULUS               110
       4.4.1 Stimulus and Stability                            110
       4.4.2 Periodic Stimulus                                 113
       4.4.3 Stability Calculations                            118
       4.4.4 Distance-Bias                                     119
   4.5 NETWORKS UNDER TWO ALTERNATING PERIODIC STIMULI         121
       4.5.1 Alternating Cell-Assemblies                       121
       4.5.2 Alternating Periodic Stimuli                      125
   4.6 SUMMARY                                                 128
   Symbols and Terms Introduced in Chapter 4                   130
5. METHODOLOGY OF EXPERIMENTS                                  135
   5.1 INTRODUCTION                                            135
   5.2 METHODOLOGY                                             137
6. STIMULUS-FREE BEHAVIOR IN NETWORKS WITH CYCLES              141
   6.1 EXPERIMENTAL OBJECTIVES AND PROCEDURES                  141
       6.1.1 Objectives                                        141
       6.1.2 Outline of General Experimental Procedure         142
       6.1.3 Hypothesis                                        146
   6.2 NETWORKS WITH UNIFORM RANDOM DISTRIBUTIONS OF
       CONNECTIONS                                             147
       6.2.1 Series I - Networks with Positive Connections
             Only                                              147
       6.2.2 Series II - Networks with Negative Feedback       166
       6.2.3 Conclusions                                       173
   6.3 NETWORKS WITH DISTANCE-BIAS                             177
       6.3.1 Introduction                                      177
       6.3.2 Series I: Familiarization                         184
       6.3.3 Networks with Negative Feedback, No Fatigue       188
       6.3.4 Networks with Negative Feedback, Fatigue Present  196

TABLE OF CONTENTS (Concluded)

   6.4 SUMMARY OF EXPERIMENTAL RESULTS                         202
7. NETWORKS UNDER PERIODIC STIMULI                             206
   7.1 INTRODUCTION AND SURVEY OF RESULTS                      206
   7.2 EXPERIMENTAL OBJECTIVES AND PROCEDURES                  208
       7.2.1 Objectives                                        208
       7.2.2 Experimental Procedure                            211
       7.2.3 Hypothesis                                        213
   7.3 NETWORKS WITH UNIFORM RANDOM DISTRIBUTIONS OF
       CONNECTIONS                                             215
       7.3.1 Series I - Networks with Positive Connections
             Only                                              215
       7.3.2 Series II - Networks with Negative Feedback       221
       7.3.3 Conclusions                                       221
   7.4 NETWORKS WITH DISTANCE-BIAS                             224
       7.4.1 Networks with Fatigue Inoperative                 229
       7.4.2 Networks with Fatigue Operative                   240
       7.4.3 Conclusions                                       242
   7.5 CELL-ASSEMBLY EXPERIMENTS                               242
       7.5.1 An Adequate Network                               243
       7.5.2 Single Periodic Stimulus Series                   245
       7.5.3 Analysis and Control Experiments                  248
       7.5.4 Alternating Periodic Stimuli Sequence             265
       7.5.5 Conclusions                                       274
APPENDIX: A NOTE ON THE GENERATION OF CONNECTION SCHEMES       276
   1. Random Number Generation                                 276
   2. Tests of Distributions                                   277
BIBLIOGRAPHY                                                   283

LIST OF FIGURES

Figure 3.1  Sample Correlation Experiment                                46
Figure 4.1  Graph of N(t) (a) vis-a-vis V(r) (b)                         76
Figure 4.2  Graph of N(t) with the Mk(t) (a), vis-a-vis V(r) (b)         77
Figure 4.3  Stylized Graph Indicating Relationship of Mk(t) and
            Nk(t) vis-a-vis N(t)                                         78
Figure 4.4  Threshold Curve for Sample Calculation                       78
Figure 4.5  Illustration of the Effect of Negative Connections           85
Figure 4.6  Variation of P1* and P2* as a Function of R0(t)              93
Figure 4.7  Variation of P2 as a Function of R0(t)                       94
Figure 4.8  An Example of a Boundary Problem                             98
Figure 4.9  Quasi-Torus                                                  99
Figure 4.10 Properties of the Quasi-Torus                               100
Figure 4.11 Anomalous Steady-State in a Network with Distance-Bias      102
Figure 4.12 Distance Measure on the Quasi-Torus                         104
Figure 4.13 Example of a Disk Distribution                              106
Figure 4.14 Estimates of Expected Number of Connections Received
            by a Neuron i from Simple Geometric Structures              108
Figure 4.15 Non-Localization Property of Networks with Uniform
            Random Distribution of Connections                          119
Figure 4.16 Example of an Input Set Σ0 in a Network with
            Distance-Bias                                               120
Figure 4.17 Alternating Stimulus Envelopes for Σ0 and Σ'0               127
Figure 6.1  Summary of Results for Basic Experiment I and Variants      149
Figure 6.2  Additional Variations of Basic Experiment I                 151
Figure 6.3  General Form of the Threshold Curve V(r)                    153
Figure 6.4  Summary of Results for Basic Experiment II                  154
Figure 6.5  Variations of Basic Experiment II, Variants (2) and (4)     158

LIST OF FIGURES (Continued)

Figure 6.6  Variation of Basic Experiment II, Variant (5)               159
Figure 6.7  Threshold Curves with Dips                                  162
Figure 6.8  X-Distribution for Experiments of Figure 6.9                163
Figure 6.9  Threshold Curves with Dips Experiments, First Series        164
Figure 6.10 Threshold Curves with Dips Experiments, Second Series       165
Figure 6.11 Negative Feedback Experiments 1                             168
Figure 6.12 Negative Feedback Experiments 2                             170
Figure 6.13 Negative Feedback Experiments 3                             174
Figure 6.14 Spread of Excitation Experiment (Summary)                   179
Figure 6.15 Spread of Excitation (Summary)                              180
Figure 6.16 Basic Experiment III                                        186
Figure 6.17 Firing Patterns for t = 1, 2, 3 of Basic Experiment III     187
Figure 6.18 Experiment 1 (Section 6.3)                                  189
Figure 6.19 Experiment 2 (Section 6.3)                                  191
Figure 6.20 Experiment 3 (Section 6.3)                                  193
Figure 6.21 Experiment 4 (Section 6.3)                                  194
Figure 6.22 Experiment 5 (Section 6.3)                                  195
Figure 6.23 Fatigue Curve O(Z), Tables A1(Z), A2(Z) for
            Experiments 6 and 7 (Section 6.3)                           197
Figure 6.24 Experiment 6 (Section 6.3)                                  199
Figure 6.25 Experiment 7 (Section 6.3)                                  200
Figure 6.26 A1(Z) and A2(Z) Tables for Experiments 8 and 9
            (Section 6.3)                                               201
Figure 6.27 Experiment 8 (Section 6.3)                                  203
Figure 6.28 Experiment 9 (Section 6.3)                                  204

LIST OF FIGURES (Continued)

Figure 7.1  Single Periodic Stimulus Experiment                         209
Figure 7.2  Typical Stimulus Envelope for Alternating Periodic
            Stimulus Experiments                                        214
Figure 7.3  Experiment 1 (Section 7.3) - Fatigue Inoperative            216
Figure 7.4  Firing Patterns for Experiment 1 (Section 7.3),
            t = 451-462                                                 217
Figure 7.5  Experiment 2 (Section 7.3) - Fatigue Present                219
Figure 7.6  Experiment 3 (Section 7.3) - Fatigue Inoperative            222
Figure 7.7  Overlapping Paths in Experiment 3 (Section 7.3)             223
Figure 7.8  Experiment 1 (Section 7.4) - Fatigue Inoperative            225
Figure 7.9  Experiment 2 (Section 7.4) - Fatigue Inoperative            226
Figure 7.10 Experiment 3 (Section 7.4) - Fatigue Inoperative            230
Figure 7.11 Experiment 4 (Section 7.4) - Fatigue Inoperative            232
Figure 7.12 Experiment 6 (Section 7.4) - Fatigue Operative              235
Figure 7.13 Input Set Σ0 of Experiment 7                                237
Figure 7.14 Experiment 7 (Section 7.4) - Fatigue Operative              238
Figure 7.15 Input Set Σ0 for Variant of Experiment 7                    241
Figure 7.16 Example of a Closed Chain within a Cycle C(Σ0)              244
Figure 7.17 Selected EEG's for the Cell-Assembly Experiment
            (Section 7.5.2)                                             246
Figure 7.18 5(t)'s for Cell-Assembly Experiment (Section 7.5.2)         251
Figure 7.19 ff(t)'s for Cell-Assembly Experiment (Section 7.5.2)        252
Figure 7.20 9(t)'s for Cell-Assembly Experiment (Section 7.5.2)         253
Figure 7.21 9(t)'s for Cell-Assembly Experiment (Section 7.5.2)         254
Figure 7.22 Terminal 0(t) for Cell-Assembly Experiment                  255
Figure 7.23 Sample of Fatigue Distributions for First Control
            Experiment                                                  257

LIST OF FIGURES (Concluded)

Figure 7.24 EEG's for First, Second, and Third Control Experiments      259
Figure 7.25 J(t)'s for Second and Third Control Experiments
            (Section 7.5.3)                                             260
Figure 7.26 EEG for Fourth Control Experiment                           262
Figure 7.27 g(t)'s for Fourth Control Experiment                        264
Figure 7.28 V(r) for Fifth Control Experiment                           266
Figure 7.29 EEG for Fifth Control Experiment (V(r) with Negative
            Entries)                                                    267
Figure 7.30 q(t)'s for Fifth Control Experiment                         268
Figure 7.31 Σ0 and Σ'0 for Alternating Periodic Stimuli Sequence        270
Figure 7.32 Part of C(Σ0) for Alternating Periodic Stimuli Sequence     270
Figure 7.33 Fragments of an Embryonic C(Σ'0)                            271
Figure 7.34 Development of Cross-Inhibiting Connections between
            C(Σ0) and (the Partial) C(Σ'0)                              272
Appendix
Figure 1    "Petri-Plate" Sample of Connections in a Neighborhood       280

LIST OF TABLES

                                                                       Page
Table 4.1  Sample Steady-State Calculation                               79
Table 4.2  Sample Steady-State Calculation (continued)                   82
Table 4.2  Sample Steady-State Calculation (concluded)                   83
Table 4.3  Sample Steady-State Calculation for Periodic Stimulus
           Case                                                          85

Appendix
Table 1    X²-Test Applied to a Network (Uniform Random Case)           280
Table 2    X²-Test Applied to a Network (Uniform Random Case)           281
Table 3    X²-Tests Applied to Subsets of ...                           281
Table 4    X²-Test Applied to the CR of Figure 1                        282

ABSTRACT

The primary objective of this study is to derive a structural and dynamic characterization of Hebbian cell-assemblies in terms of a particular class of models of neural networks. Within these models, Hebb's postulate of synapse-growth occupies a pivotal position. The networks of the given class of models may, together with any appropriate environments, be simulated by means of a digital computer program. Consequently, hypotheses about the behavior of such networks can be subjected to a rigorous test. The simulated networks, then, are to be used to test the formation and development of cell-assemblies. The advantages of the simulation are two-fold: (1) in some cases, it allows difficult mathematical calculations to be by-passed (e.g., the distance-bias case below); (2) it allows "rolling back" (via use of auxiliary storage devices such as disk, magnetic tape, etc.) to an earlier point in an experiment, modifying some parameter, then continuing from that point on.

A number of subsidiary goals immediately became apparent, however. First of all, it was found necessary to characterize stable, steady-state behavior of a network. Next, the role of negative connections in such a network needed clarification. Finally, the problem of guaranteeing localization of certain neural events arose. To meet these goals, a steady-state stability calculus was worked out relating the essential network parameters: N (the number of neurons in the network), the threshold curve, and p (the density of connections in the network). This was done first for the case in which only equal positive connections are present in the network. This calculus was then modified to include the case in which both positive and negative (inhibitory) connections are present. The inhibitory connections are shown quantitatively to be essential to ensure

the negative feedback necessary to sustain steady-state behavior. Again, relationships for the "mix" of positive versus negative connections, and the relationship of these quantities to the threshold curve, are given. Finally, modification of this calculus to include a distance-bias on the connection density p was considered. Unfortunately, even for the relatively simple distance-bias selected, the calculations are quite difficult. It was found, however, that calculations from the preceding case (distance-bias absent) could be used as crude initial approximations.

A series of experiments was performed (using the above calculations as a guide in setting the network parameters) with networks of progressively greater complexity. A number of stable networks are exhibited. Then, using one of these stable networks with N = 400, p = 55 (this p decomposable into positive and negative components), and R = 6 (distance-bias radius), simple closed cycles (candidates for cell-assemblies) were formed as a consequence of appropriate training stimuli. An imbalance in the fatigue mechanism of the model was uncovered at this point. This was corrected adequately in the current work, but it points to the need for modifying the steady-state calculus to include this mechanism. In the concluding experiment, an embryonic cross-inhibiting pair of such closed cycles was formed by applying alternating periodic stimuli to two disjoint input areas of the network.

Hebb's basic theory (especially the synapse-growth law) is thus vindicated in terms of the given models. There certainly appears to be every reason to expect the more advanced portions of his theory (e.g., phase sequences) to be put to the test using the larger and faster computer hardware emerging today.

1. INTRODUCTION

1.1 STATEMENT OF THE PROBLEM

A class of models of neural networks is given, purporting to represent, admittedly in an approximate fashion, a fragment of the mammalian cortex. A model may be visualized in an environment together with appropriate sensory and motor apparatus. This allows, for example, detection of objects and movement in the environment. The main problem is to determine whether the models presented have the capacity to learn, in the sense that, as a consequence of feedback from the environment to the model, certain internal changes occur in the model with a resulting (eventual) improvement in behavior.

The given class of neural network models has at least one distinctive feature: it is interpreted directly into a computer program. This results in a rigorous expression of (the particular interpretation of) the class of models, from which any specific model is obtained merely by specification of certain parameters. Inasmuch as any program is a formal expression of certain formal operations, analogous to the specification of a list of functions used in the definition of partial recursive functions, some of the advantages found in the study of formal systems are present. On the other hand, there is also the advantage that any analytically derived property of the models may be subjected to a well-defined test in the interpretation afforded by the computer program.

Because of the relative ease with which operations of the models are interpreted into sets of digital computer operations, the computer simulation of such models is lifted out of the realm of a mere programming application. In a sense, the program itself is a model. Study it

(i.e., its behavior) and you are studying the model.

1.2 BASIC PREMISES AND THEORY: RELATION TO NEUROPHYSIOLOGICAL FACT

The original source for the specification of this class of neural net models, and of the neural as well as behavioral processes involved in learning, is the theory developed by D. O. Hebb [9], later modified somewhat by P. M. Milner [10]. The theory, which integrates knowledge of neural events, taking place in time intervals of up to a hundred milliseconds or so, with behavioral events, taking place in time intervals of seconds on up, has as its basis the proposed mechanism of the cell-assembly. Informally characterized, this is a system of cortical (association layer) neurons which are capable of acting as a closed, autonomous functional unit for a brief period of time. These neurons are anatomically diffuse, but functionally connected. The functional unity of the cell-assembly results from the initial existence of proper inter-connections among the neurons of the system together with a particular (i.e., selective) sequence of cortical events forcing these neurons to act briefly as a unit. This, in turn, results in a growth of synaptic strength at the connections such that after a period of time the assembly may be activated by appropriate excitatory stimuli.

The cell-assembly is a hypothetical structure; its physiological existence has not been demonstrated. On the other hand, the concept does not conflict with current neurophysiological knowledge. Moreover, the formation of a cell-assembly rests upon three main premises: (a) the initial existence of the proper inter-connections among the neurons of the system, (b) an initial selective sequence of cortical events that forces the neurons of the system to act briefly as a unit, and (c) a law of growth in

synaptic strength between neurons. This latter premise is taken by Hebb as his basic neurophysiological premise. Stated more fully, it reads:

When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased [9].

While there is evidence that is very suggestive, the validity of this hypothesis has not yet been demonstrated neurophysiologically; again, it does not conflict with known properties of neurons. It was conclusively demonstrated shortly after the appearance of Hebb's book (for example, Eccles [6]) that some neurons send out inhibitory as well as excitatory connections. Milner [10] argues effectively for the inclusion of inhibitory connections, subject to the same synapse-growth law (c), and his suggestion is adopted here.

It should be noted here that many properties of cortical neurons are inferred from the known properties of peripheral neurons. There seems to be no reason, at this time at least, for not doing this, as it may be some time before techniques are evolved allowing the fine, detailed study of the cortical neuron that has been carried out on neurons in the spinal ganglia, etc. This is obviously one area where new knowledge will be of the greatest interest in the study of models such as the one developed here.

There is one other premise which, although not explicit in the above formulation of the cell-assembly, is in some respects the most important of all: that the system of neurons under consideration be sufficiently large, and the inter-connections among these neurons be sufficiently dense, that the probabilities of existence of "the proper inter-connections" in premise (a) above be of a magnitude such that cell-assemblies may actually come into existence. Here the evidence from neuroanatomy is encouraging: the human cortex has of the order of 10^10 neurons; (peripheral) neurons have been observed with approximately 1500 synaptic endings on them (i.e., 1500 input lines). Moreover, a given cortical neuron (association layer) seems to send out connections to all points in the spherical region surrounding it with radius approximately one millimeter.

Hebb's theory is in some respects a stimulus-response theory, where "response" does not necessarily mean immediate (muscular) response. This is reflected most strongly in premise (b), where the "initial selective sequence of cortical events" refers to the "priming" of the initial skeletal pathway assumed in (a) by massive "training" stimuli together with the stimulus which alone is to activate the assembly later on. The massive "training" stimuli may result from a sensation (e.g., hunger), from some environmental feedback, from the action of other, already established assemblies, etc.

Referring back to the statement of the problem given above, the main problem then reduces to that of testing the role of the cell-assembly in learning (i.e., Hebb's theory) via the digital computer simulation of the models involved. One of the objects of this study is to give, in terms of the model, a precise characterization of the formation and development of cell-assemblies in some rudimentary learning process. It is hoped that this will serve as the basis of a more detailed and profound study in the future. It will be seen, however, that just to achieve this modest goal, several fundamentally difficult problems must be solved.

A final observation on the character of cell-assemblies and phase sequences of cell-assemblies is in order: they allow one

to discuss learning and associated problems at a "molar" level (as Hebb puts it), i.e., in terms of aggregates of neurons, their statistical properties, etc., just as, for example, in statistical mechanics one works with aggregates of point masses, with little if any attention being paid to the individual bodies of the system.

1.3 THE CORTICAL NEURON AND SYSTEMS OF CORTICAL NEURONS

The advent of the micro-electrode and associated probing techniques in the last fifteen years has allowed physiologists to determine electrical properties of neurons from direct intra-cellular readings. Consequently, a wealth of knowledge has been gained about the electrical behavior of neurons, axonal propagation of pulses, etc. Most of this knowledge has been gleaned from studies on non-cortical neurons, e.g., neurons in the spinal ganglia, etc. A good, though slightly outdated, account of this is given in Eccles [6]. It is assumed that the properties of non-cortical neurons carry over to those of the cortex. Histologically, the cortical neuron is a neuron; while direct electrical studies on the cortex are hard to interpret, they tend to support this assumption.

It is manifestly impossible to simulate the real neuron in all its complexity. In fact, even if it were possible to do so, it would probably be unnecessary, as some of the properties of the neuron most likely are unessential to the problem at hand. As in any science, simplifying assumptions have to be made, albeit with great care, while trying to retain the most essential properties of the object described. The neuron to be simulated in this study is described in the following sections.

1.3.1 Structure

The gross structure of the physiological neuron is as follows.

The main part of the organ is the cell body or soma, S, which sends out one fiber called the axon, A, which may later branch out quite profusely. A number of axons from other cells impinge on the soma of the cell, sometimes on extensions of the soma, often quite profuse, called the dendrites of the given cell. The point of contact of an incoming (afferent) axon with the soma or dendrites is the synapse, and it is usually characterized by a nodal swelling or button-like ending. There is a very narrow gap between this ending and the cell body, called the synaptic gap. Neurons have been observed with of the order of 1500 synaptic endings on their soma. A given incoming axon may make contact several times with a given soma. The afferent or incoming axons are, in effect, input lines; the axon sent out from the soma, an output line. Thus the neuron is a multiple-input, single-output device. There are neurons of different structure than this, but their use in the nervous system seems to be specialized and not of relevance here (e.g., bipolar neurons in the optic nerve). It should be noted that in the cortex there are neurons with very complex dendritic branching and small (if any) axons, as well as neurons with dendritic branching and quite long axons.

1.3.2 Input and Output; Threshold

The axon of a neuron is capable of transmitting a pulse of electric potential (called the action potential) with no significant decrease in amplitude throughout its length. The pulse originates in the soma of the nerve cell as a consequence of input pulses on the incoming fibers (synapses) to the cell and spreads down the cell's axon to its various endings. A cell is said to fire when it sends out such a pulse.

The neuron (and its axon) is a threshold device: as a result of summation of its inputs (at the synapses), and depending on the length of the time interval since the last firing, it either fires completely or not at all. "Firing completely" means that the amplitude of the outgoing pulse is independent of the magnitude of the input pulses. The total input to the cell at a given time is determined by the number of impulses present at the synapses at that time and the level of activity (recall hypothesis (c), Section 1.2, above) at these synapses. Actually, summation of this potential activity over a brief interval of time probably takes place. The inputs thus sum, in a fashion as yet unknown, spatially and temporally. In the model, the inputs (see below) are added. If the summed stimuli exceed the threshold at that time, the neuron fires; if not, it does not fire.

Once the neuron fires, it cannot be made to fire again for a period of time called the absolute refractory period. After that period of time, it maintains a high threshold which gradually decreases to its quiescent or resting value. The time interval, after the absolute refractory period, required for recovery to the quiescent state is called the relative refractory period. Thus, the neuron has the following threshold characteristic:

[Figure: threshold curve following a firing, showing the absolute refractory period followed by a gradual decline to the resting threshold during the relative refractory period.]
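This threshold characteristic can be sketched as a small program fragment. The particular table of threshold values and the parameter settings below are illustrative assumptions for the sketch, not the report's actual threshold function V(r) or its experimental parameters.

```python
# Illustrative threshold curve V(r): the neuron cannot fire at all during
# the absolute refractory period (r < K_ABS); afterwards the threshold
# decays toward its quiescent (resting) value.  The numbers are assumed
# for illustration only.
K_ABS = 3                      # absolute refractory period, in time steps
V_TABLE = [14, 9, 6, 4, 3]     # V(r) for r = K_ABS, K_ABS+1, ...; last entry is the resting value

def threshold(r):
    """Threshold V(r) as a function of the recovery state r
    (time steps elapsed since the neuron last fired)."""
    if r < K_ABS:
        return float("inf")    # absolutely refractory: no input can fire it
    return V_TABLE[min(r - K_ABS, len(V_TABLE) - 1)]

def fires(r, summed_input):
    """A neuron fires iff its summed synaptic input exceeds V(r)."""
    return summed_input > threshold(r)
```

The "all-or-none" character of firing is reflected in the Boolean return value: the output carries no trace of how far the summed input exceeded the threshold.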

The time interval since the last firing of the neuron is called the recovery state. In the model, time is quantized: t = 0, 1, 2, ..., where a unit of time corresponds approximately to one millisecond. A neuron fires at time t + 1 depending upon (1) whether it fired at time t: if it did, then it cannot be made to fire until time t + k, where k, a positive integer and a parameter of the system, represents the absolute refractory period; (2) whether the sum of the inputs exceeds the threshold at time t + 1: if so, it fires at t + 1; otherwise it remains refractory; and (3) a spontaneous firing mechanism, which is explained below.

1.3.3 Synapses

The exact nature of transmission across the synaptic gap and summation of the incoming pulses is as yet unknown. Here, it is assumed that each input line has an associated synapse level, X. This synapse level in turn is used to determine the synapse value, S(X), for that line, usually by a table giving the value of S(X) for each value of X. If there are n active input lines, then the total input at time t is the sum, over i = 1, ..., n, of the S_i(t), where S_i(t) is the synapse value corresponding to the i-th line at time t. Notice that in general there will be negative synapse values: these correspond to inhibitory connections.

According to hypothesis (c), Section 1.2, the synapse levels are subject to a synapse-growth law as follows: suppose there is a synapse from neuron A to neuron B, i.e., neuron A sends, via its axon, one connection to neuron B. Then, if A fires at time t and B fires at t + 1,

the synapse level from A to B, X_AB, is increased by a uniform amount dX. If A fires at t and B does not fire at t + 1, X_AB is decreased by dX; otherwise no change in X_AB is made. Symbolically,

    A(t) & B(t+1)      =>   X_AB <- X_AB + dX
    A(t) & not-B(t+1)  =>   X_AB <- X_AB - dX

X ranges in value from 0 to a maximum. In addition to the law stated above, there is a probabilistic mechanism in the model that serves to "slow down" the X change. Essentially, if X is to be changed (i.e., either A(t) & B(t+1) or A(t) & not-B(t+1)), then a probability particular to that level is consulted: if it exceeds a certain amount, then the change takes place; otherwise no change occurs. This mechanism can be used to bias the direction of synapse-level change.

1.3.4 Fatigue, Spontaneous Firing

In addition to the threshold function, there is a long-term mechanism which delays full recovery, called fatigue. The evidence for this from neurophysiology, in the case of peripheral neurons, is fairly definite. The fatigue function and its implementation will be discussed at length in a later chapter. The effect of fatigue is one of the subgoals of this study, as is that of spontaneous firing. There is also fairly good evidence that cortical neurons fire spontaneously (see, for example, Sharpless, S. K., and Halpern, L. M. [12]). In the model this is defined as follows: if the recovery state of a neuron exceeds a certain value, then the neuron fires with a certain probability. Spontaneous firing, though not used in this study, may act as a form of drive if it is a function of some "reward" or "punishment", etc., i.e., a non-specific global disturbance. As the mechanisms of fatigue and spontaneous firing can be

defined very exactly in the model, their effects can be studied under tightly controlled conditions.

1.3.5 Systems of Neurons

The mammalian cortex consists of several layers of neurons of different structure. The outer layers, for example, consist of neurons with axons which spread out horizontally over large distances; the inner layers consist of neurons with very complex axonal branching in the immediate vicinity of the cell; axons from within the cortex, and perhaps those from subcortical structures, pass up through all the layers and back down again, probably with complex branching along the way, etc. (see, e.g., Eccles, ibid., pp. 229-331). Moreover, there are regions of the cortex into which sensory input is projected (e.g., the visual cortex) and other regions from which motor control is effected.

These features can be simulated to some degree in the model. First of all, a neighborhood relationship for a group of neurons may be defined that determines the neurons to which the neurons of the given group are connected and the density of connections sent out by these neurons. This neighborhood relationship thus permits structuring several layers of neurons with different connections for the different layers, as well as inter-layer connections. For example, in the figure below, layer 1 may have very dense local connections, similarly for layer 2, while layer 3 may be more diffuse, its neurons sending out connections over greater distances; layer 1 may connect to layer 2 in an approximately one-one fashion, while layer 2 may send out diffuse connections to layer 3, etc.
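The neighborhood relationship just described can be made concrete with a small sketch. The layer sizes, rule format, and function names below are invented for illustration; the report itself specifies connection schemes through its list structures, not through code of this kind.

```python
import random

def build_layer_scheme(sizes, rules, seed=0):
    """Return a multiplicity table m[(j, i)] for a layered net.

    sizes: neurons per layer, e.g. {1: 16, 2: 16, 3: 16}.
    rules: (src_layer, dst_layer, n_out, spread) tuples; each source
    neuron sends n_out connections to destination neurons whose index
    lies within `spread` of its own index (the neighborhood relation).
    """
    rng = random.Random(seed)
    ids, next_id = {}, 0
    for layer, n in sizes.items():          # assign global ids layer by layer
        ids[layer] = list(range(next_id, next_id + n))
        next_id += n
    m = {}
    for src, dst, n_out, spread in rules:
        for k, j in enumerate(ids[src]):
            # Candidate targets: positions within `spread` of k in layer dst.
            cands = ids[dst][max(0, k - spread):min(sizes[dst], k + spread + 1)]
            for _ in range(n_out):
                i = rng.choice(cands)
                # Repeated draws yield multiplicities greater than 1.
                m[(j, i)] = m.get((j, i), 0) + 1
    return m

# Layers 1 and 2: dense, local; 1 -> 2 roughly one-one; 2 -> 3 diffuse.
m = build_layer_scheme(
    sizes={1: 16, 2: 16, 3: 16},
    rules=[(1, 1, 6, 1), (2, 2, 6, 1), (1, 2, 1, 0), (2, 3, 4, 8)])
```

With a small `spread`, connections stay local and pile up into multiplicities; with a large `spread`, they diffuse over the destination layer, which is the qualitative contrast between layers 1-2 and layer 3 above.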

[Figure: three layers of neurons, layer 1, layer 2, and layer 3, with inter-layer connections.]

From the discussion so far, it is evident that there are many parameters and functions that can be varied in the given class of models: threshold function, fatigue, spontaneous firing, neighborhood relation, density of interconnections, etc. Moreover, the relationships between the various possible choices may be complicated and subtle. Hence the great value of the simulation approach: hypotheses, such as those described in 1.5, concerning such relationships can be tested, hypotheses whose validity (in the models) simply may not be rigorously demonstrable a priori.

1.4 PREVIOUS NEURAL NET STUDIES

This study is not the first in its field. Rochester, Holland, et al. [11] experimented first with a "discrete-pulse model," using a simulation program for the IBM 701, then with an "FM model," using a simulation program for the IBM 704. In the first case, they exhibited "diffuse reverberation," a phenomenon somewhat akin to the sustained activity discovered in isolated cat cortex by Burns [1], but could not demonstrate any tendency on the part of the neurons to form cell-assemblies. While the "diffuse reverberation," in the authors' eyes, might serve as a mechanism for short-term memory, they felt that additional structure must be imposed upon the net to allow formation of cell-assemblies. They conferred with Milner and followed his suggestion [10] of introducing negative synapse

values into their model. At the same time, taking advantage of the larger and faster IBM 704 computer, they reprogrammed their model so that the detailed firing history of the neurons was lost, being replaced by a frequency of firing for each neuron. This frequency varied with time, hence the term "FM model." They simulated a net of 512 neurons with six inputs each. In their experiments with this model they observed the formation of cell-assembly-like structures, i.e., sets of neurons such that within each set the connections between the neurons had large, excitatory synapse values, while between the various sets themselves the interconnections had large inhibitory synapse values. They also observed phenomena somewhat like the fractionation and recruitment of neurons, as required by Hebb's theory. On the other hand, the cell-assembly-like structures they observed could not arouse one another, as Hebb's theory again requires; that is, their model was too environment-dependent. In later studies with this model, Holland and Rochester demonstrated binary learning (Holland, personal communication). However, for a variety of reasons, the project was abandoned and not resumed by any of its originators.

It was continued, however, at the Logic of Computers Group at the University of Michigan, under the supervision of John H. Holland, by J. W. Crichton [5]. Crichton and Holland [5] proposed a new method of simulating neural nets which took advantage of the increased storage of the IBM 704 computer and which would allow simulation of up to 2000 neurons with about 150 inputs per neuron. This gives rise to the so-called "variable-atom" model, in which all neurons with the same characteristics (i.e., firing history, threshold, fatigue, etc.) are lumped together into an "atom." Computation of the number of active inputs to a neuron is performed by reference to

appropriate Poisson tables. This model was never simulated on the IBM 704. The availability of an IBM 709 computer, a machine representing a considerable advance over the IBM 704, possessing much improved input-output equipment and procedures and powerful new operation codes, caused a major change in plans, and the model was to be reprogrammed for the IBM 709, taking advantage of its new features. Crichton was joined by Finley at this point.

Crichton and Finley modified the model, programming it for simulation on the IBM 709 [4]. Early experiments with this model revealed the distressing fact that the model was not capable of sustained activity such as Burns observed [1]. Stimulated "slabs" would not maintain activity indefinitely, but in fact died down rather rapidly. Marked epileptic behavior resulted; that is, intense activity alternated with low activity, leading rather quickly to "death," i.e., no activity at all. No modification of network parameters seemed to produce a cure for this behavior, and we were forced to re-appraise the whole model. This led to the discovery that the statistical techniques used in the model contained a fatal flaw: basically, they would not allow a small number of neurons to produce a sufficient stimulus to fire a single neuron. Several modifications of the original technique were tried with little success. This forced us back to basic principles and led to the implementation of a new technique aimed at introducing greater statistical disuniformity into the model. Unfortunately, this model too was discovered by the current author to be defective. The variable-atom concept was discarded, being replaced by a straightforward neuron-by-neuron simulation. The latter method, although perhaps slower and more memory-consuming than the former, lends itself quite well to detailed statistical analysis.

1.5 Scope of This Study

In Chapter 2, a detailed formal description of the class of models is given. This is followed in Chapter 3 by a summary account of some simple experiments involving networks without cycles (i.e., no feedback). The object of these experiments was to determine how well the behavior of a single isolated neuron of the model would correlate with various patterns of inputs presented to it. In reality, this chapter summarizes results obtained in the early stages of this work. In some respects it is not that relevant to the hard core of the later stages. It is included here primarily for historical reasons; a detailed description may be found in Finley [7].

In Chapter 4, a comprehensive discussion is given of the analysis and operation of networks with cycles (i.e., feedback present). This is easily the most important chapter of this work. The discussion proceeds from the simplest types of networks with cycles (uniform random distribution case) to more complex networks (distance-bias). The important concept of "steady-state behavior" is developed, with an analysis that may be used to determine the threshold curve parameters in terms of other basic network parameters. The role of the simulation as a means of bypassing tedious (if not impossible) mathematical calculations is repeatedly emphasized. Finally, an attempt is made to characterize simple cell-assemblies that might arise as a consequence of certain training stimuli (single periodic stimuli and alternating periodic stimuli).

The discussion of Chapter 4 is a semiformal and heuristic exposition of certain hypotheses. The remaining chapters deal with the experimental verification of these hypotheses. Chapter 5 is a descriptive essay on the experimental methodology

peculiar to experiments involving networks with cycles. Chapter 6 is devoted exclusively to stimulus-free or steady-state behavior of networks with cycles, Chapter 7 to networks with periodic stimuli.

The chief results of Chapter 6 center about successfully producing stable, steady-state behavior in networks with a sufficiently complex cyclic structure that cell-assemblies might be expected to form when patterned external training stimuli are introduced into them. This means that the networks may contain sufficient information capacity to allow cell-assembly formation (development of learned responses) as a result of "training" stimuli from the environment. For appropriate settings of certain network parameters, this is not too hard to accomplish.

Chapter 7 contains two main results. One is the formation of a simple cell-assembly (closed cycle). The essential role of the synapse-growth law is clearly brought out here. The second result is the partial formation of competing cell-assemblies; partial, since computer funds dictated cessation of experimentation. Partial though the evidence is, it seems to suggest that a more complete experiment would be entirely successful.

Several appendices are included that are concerned with some of the behind-the-scenes aspects of the simulation, e.g., random number generation, testing statistical distributions, etc. In point of fact, as is perhaps true of all experimental work, perhaps 90% of the effort involved in this work was concerned with "behind-the-scenes" problems, such as programming special-purpose I/O routines, statistical tests, reprogramming the network simulation routine, etc. Such tasks are the analog in simulation of the design and checkout of physical apparatus in, say, physics or chemistry.

2. FORMAL DESCRIPTION OF THE MODELS

2.1 INTRODUCTION

In this chapter, the structure and operation of models of the class being considered are defined formally. The notions of run and experiment are clarified and, using the network equations, the abstract prototype for all experiments is given. Recursive equations are given for the various network functions, such as threshold, fatigue, etc. Following this, in the next section, an attempt is made to clarify the role of the various functions and to display possible functional forms for them, though no attempt is made at this point to give formal derivations. Finally, a note is given on the network simulation program, followed by a reference list of symbols used in this chapter.

2.2 THE NETWORK EQUATIONS

A neural network 𝒩, of the class of models considered in this study, consists of a set of N elements called neurons with a set of specified directed connections between these neurons. "Directed" implies, for example, that neuron A may send a connection to neuron B, but not conversely; i.e., there is a connection A-to-B, but not B-to-A. Such a connection is referred to as the output of A, the input to B. A neuron of the model may have many inputs, but it always has only one output. This output, however, may branch and go to several neurons, including the source neuron, as inputs, or go to the environment. All that is external to the network itself but which influences, and is influenced by, the network is called the environment. In general the environment will supply input to selected neurons of the network and receive output from selected neurons. Included in the concept of environment would be, for example, reflex mechanisms, a simulated biological environment, a human observer,

etc.

Time is quantized in these models: t = 0, 1, 2, 3, .... At any time t, the state of the network, S(t), is determined by the functions (see below) performed by the model; likewise the state of the environment, E(t), is determined. From S(t) plus the input, I(t), to the network at t from the environment is determined the state at time t + 1, S(t+1). Also, S(t) determines the output at t to the environment, O(t). Symbolically,

    S(t+1) = F_N(S(t), I(t))    (t = 0, 1, 2, ...),

where F_N is the state-transition function for the network. (In general, F_N is far too complicated to define explicitly; however, it is defined implicitly by the network equations given below.) Likewise, E(t+1), the state of the environment at time t + 1, is determined by E(t) and O(t):

    E(t+1) = F_E(E(t), O(t))    (t = 0, 1, 2, ...).

Since I(t) = g(E(t)) for some function g, then

    S(t+1) = F_N(S(t), I(t)) = F_N[S(t), g[F_E(E(t-1), O(t-1))]].

This is a recursive equation for S(t). S(0) and E(0) form the initial conditions for the network and the environment, respectively. Given S(0) and E(0), and a starting signal, the network and environment proceed automatically over the time steps t = 0, 1, 2, ... until a stopping condition, determined in the environment, is reached. Notice that the cycle network → environment → network forms a closed feedback loop. The procedure of running the system <network, environment>, given S(0) and E(0), from t = 0 (or t = t_0 > 0) down to a terminal time step t_f will be called a run. The sequence of outputs O(0) (or O(t_0)), ..., O(t_f) forms the behavior of the network. However, the term "behavior" will be used in the broader

sense of reaction of the network to the environment. The specification of a network-environment pair, the initial conditions, and a set of hypotheses about the behavior of the network constitutes an experiment. Thus, the abstract prototype of all experiments has the following structure:

(Given: behavioral hypotheses, S(0), E(0))

[Flow chart: Start; compute E(t) = F_E(E(t-1), O(t-1)); compute S(t) = F_N(S(t-1), I(t-1)); if the stopping criterion holds, stop; otherwise repeat for the next time step.]

As mentioned, the state-transition function is too complicated to be defined explicitly and must be defined implicitly. This is done as follows. At any time t, a neuron may fire or not fire: if it fires, it puts a 1 at its output; if not, a 0. The set of neurons that fire at time t, together with input from the environment, determines the set that fire at t + 1. The condition for the firing of the i-th neuron at time t + 1 is given as a recursion relative to the real-valued functions R, F, S, and I, which in turn are defined relative to recursions on r_i(t), ℓ_i(t), and λ_ji(t) by the functions V, Φ, and S. Once these functions are given, the behavior of the network is determined for all t from the initial states. This condition is

    T_i(t): δ_i(t+1) = 1 if and only if R_i(t) + F_i(t) ≤ Σ_j S_ji(t) δ_j(t) + I_i(t),    i, j = 1, 2, ..., N,

where δ_i(t) = 1 means "neuron i fired at t." Thus, T_i says that neuron i fires at t + 1 if and only if the condition

    R_i(t) + F_i(t) ≤ Σ_j S_ji(t) δ_j(t) + I_i(t)

holds. R_i(t) and F_i(t) are the threshold and fatigue values of neuron i at time t, respectively. S_ji(t) is the weight or synapse value of the directed connection from neuron j to neuron i at time t; for neurons j which do not send connections to neuron i, S_ji may be considered equal to zero. I_i(t) is the input to neuron i at t from the environment; it will be referred to as the pre-stimulus to neuron i. R, F, S, and I are all real numbers; R and F ≥ 0, while S and I may be either ≥ 0 or < 0. Negative values of S are called inhibitory inputs, positive values excitatory. These quantities are defined recursively as follows.

    R_i(t) = V(r_i(t)),

where V, the threshold function, is a real-valued function of r_i(t); r_i(t) is the recovery state of neuron i at t, defined as follows:

    r_i(t) = 0              if δ_i(t) = 1,
    r_i(t) = r_i(t-1) + 1   if δ_i(t) = 0,
    r_i(t) = r_m            if δ_i(t) = 0 and r_i(t-1) = r_m.

For r_i(t) = 0, 1, ..., r_a − 1, V(r_i(t)) = ∞; r_a is the absolute refractory period, i.e., if δ_i(t) = 1, then neuron i cannot fire again until t + r_a. Note that the function V is the same over all neurons of the net.

    F_i(t) = Φ(ℓ_i(t)),

where Φ, the fatigue function, is a real-valued function of ℓ_i(t); ℓ_i(t) is the fatigue level of neuron i at t, defined as follows:

    ℓ_i(t) = ℓ_i(t-1) + Δ2   if δ_i(t) = 0,
    ℓ_i(t) = ℓ_max           if δ_i(t) = 0 and ℓ_i(t-1) = ℓ_max,
    ℓ_i(t) = ℓ_i(t-1) − Δ1   if δ_i(t) = 1,
    ℓ_i(t) = ℓ_min           if δ_i(t) = 1 and ℓ_i(t-1) = ℓ_min,

where Δ1 > Δ2 > 0. Δ1 and Δ2 are extremely important parameters, determined from the system background firing rate, f_b, by the relation

    f_b = Δ2 / (Δ1 + Δ2),    F_b = f_b N,

where F_b is the expected number of neurons of 𝒩 firing at t when 𝒩 is operating in steady-state (see Chapter 4).

    S_ji(t) = m_ji S(λ_ji(t)),

where S is the synapse-value function, taking positive, negative, and zero values; m_ji is the multiplicity of the connection j → i, while λ_ji(t) is the synapse level of the connection j → i at time t. It is defined as follows:

    λ_ji(t) = λ_ji(t-1) + 1 iff δ_j(t-1) = 1 and δ_i(t) = 1 and p_i(t) < U(λ_ji(t-1)),
    λ_ji(t) = λ_ji(t-1) − 1 iff δ_j(t-1) = 1 and δ_i(t) = 0 and p_i(t) < D(λ_ji(t-1)),
    λ_ji(t) = λ_ji(t-1)     otherwise.

p_i(t) is a number drawn randomly and independently, for all i and t, from the open interval (0,1). U(λ) and D(λ) are the probabilities of change up and change down of synapse levels, respectively; notice that U and D in general vary with λ. If λ = λ_max, then U(λ) = 0; if λ = λ_min, then D(λ) = 0. The condition p_i(t) < U(λ_ji(t-1)) says simply that λ_ji(t-1) is incremented by 1 with probability U(λ_ji(t-1)) at t. As with Δ1 and Δ2, U and D are extremely important quantities, and they relate to the system background firing rate f_b as follows:

    f_b = D / (U + D)    (for all λ).
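The λ recursion can be sketched concretely as follows. The level range and the 0/1 probability tables in the usage note are purely illustrative; the actual U and D tables are specified per experiment.

```python
import random

LAMBDA_MAX = 9            # hypothetical synapse-level range 0..9

def step_synapse(lam, j_fired_prev, i_fired_now, U, D, rng):
    """One application of the growth law to the connection j -> i.

    lam is lambda_ji(t-1); U[k] and D[k] are the change-up / change-down
    probabilities for level k, with U[LAMBDA_MAX] = 0 and D[0] = 0 so the
    level stays in range.
    """
    p = rng.random()                      # p_i(t), drawn anew each time
    if j_fired_prev and i_fired_now and p < U[lam]:
        return lam + 1                    # up with probability U(lambda)
    if j_fired_prev and not i_fired_now and p < D[lam]:
        return lam - 1                    # down with probability D(lambda)
    return lam                            # otherwise unchanged

# Degenerate tables (all 0 or 1) make the outcomes deterministic:
rng = random.Random(1)
U = [1.0] * LAMBDA_MAX + [0.0]            # U at the maximum level is 0
D = [0.0] + [1.0] * LAMBDA_MAX            # D at level 0 is 0
```

Because each call draws a fresh p, simultaneous candidate connections are treated as independent trials, as the text requires.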

The law for incrementing or decrementing λ_ji is the implementation in the models of Hebb's synapse-growth law. The multiplicity m_ji of the connection j → i determines the density of the connection: m_ji = 0, 1, 2, ...; m_ji = 0 corresponds to the case of no connection from j to i. Specification of the set of m_ji's for all i, j determines the connection scheme for a given network.

Thus, with these recursive definitions in mind, the flow chart given above representing the abstract prototype of all experiments takes on the following more specific form:

(Given: behavioral hypotheses; r_i(0), ℓ_i(0), λ_ji(0) for all i, j = 1, 2, ..., N)

[Flow chart: Start; for i = 1, ..., N(1): compute R_i(t) = V(r_i(t-1)), F_i(t) = Φ(ℓ_i(t-1)), and, for j = 1, ..., N(1), S_ji(t-1) = m_ji S(λ_ji(t-1)); determine δ_i(t) from the condition T_i(t-1); update r_i and ℓ_i by the recursions above, incrementing λ_ji(t-1) by 1 if P_1(t) holds and decrementing it by 1 if P_2(t) holds; then, if the stopping criterion does not hold, set t ← t + 1 and repeat.]
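One pass of the loop in the flow chart can be condensed into a short sketch. All tables and parameter values below are illustrative, not taken from the report, and the synapse-growth step is omitted for brevity.

```python
INF = float("inf")

def step_network(fired, r, ell, lam, m, V, PHI, S, I_ext,
                 d1, d2, r_max, ell_max):
    """Advance the net one time step (synapse levels held fixed here).

    fired[j] is delta_j(t); r and ell are the recovery states and fatigue
    levels; lam[(j, i)] and m[(j, i)] give the synapse level and the
    multiplicity of the connection j -> i; V, PHI, S are the threshold,
    fatigue, and synapse-value tables; I_ext[i] is the pre-stimulus.
    Returns delta(t+1) and updates r, ell in place.
    """
    n = len(fired)
    new_fired = [0] * n
    for i in range(n):
        total = I_ext[i] + sum(m[c] * S[lam[c]]
                               for c in lam if fired[c[0]] and c[1] == i)
        if V[r[i]] + PHI[ell[i]] <= total:        # condition T_i(t)
            new_fired[i] = 1
    for i in range(n):
        if new_fired[i]:
            r[i] = 0                              # just fired: r resets
            ell[i] = max(ell[i] - d1, 0)          # fatigue level drops
        else:
            r[i] = min(r[i] + 1, r_max)           # recovers toward r_max
            ell[i] = min(ell[i] + d2, ell_max)    # fatigue level rises
    return new_fired

# A two-neuron illustration: neuron 0 has just fired and drives neuron 1.
V = [INF, INF, INF, 5, 4, 3, 2]    # r_a = 3, quiescent value V_q = 2
PHI = [0] * 11                     # fatigue switched off
S = [-2, -1, 0, 1, 2, 3]           # synapse values for levels 0..5
fired, r, ell = [1, 0], [0, 6], [0, 0]
new = step_network(fired, r, ell, {(0, 1): 5}, {(0, 1): 1},
                   V, PHI, S, [0, 0], 2, 1, 6, 10)
```

Note that the infinite entries of V at r = 0, 1, 2 implement the absolute refractory period: a neuron that has just fired cannot satisfy T_i(t) no matter how large its input.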

In this diagram, the notation "A ← B" means that the value of A is replaced by the value of B; "i = 1, ..., N(1)" means that the computation from the occurrence of this statement down to the point B is first done for i = 1, then repeated for i = 2, i = 3, ..., down to i = N. (This is just a "loop" on the index i in increments of 1.) P_1(t) is the condition for incrementation of λ_ji(t) given earlier, P_2(t) that for decrementation.

2.3 THE NETWORK FUNCTIONS R, F, AND S

In the preceding section, a formal characterization of the functions R, F, and S was given, with no attention being paid to their specific analytic forms. As was mentioned in the Introduction, the study of these forms is a subgoal of this paper, since prior to this there has not been a rigorous demonstration for any one of these functions assuming a given functional form. Since these functions may be specified as one wills, they in fact are parameters of the network. Given values of these parameters and values of N and m_ji, i, j = 1, ..., N, a specific network is determined.

2.3.1 Control of Firing Rate

From the network equations T_i(t) one can see that the function of the threshold value R_i(t) of a neuron, as modified by the additive quantity F_i(t), is to determine whether or not neuron i of the network fires at t. If the combined input to neuron i is at least as great as the sum of R_i(t) and F_i(t), then it fires; otherwise it does not. The function V, which determines R, then controls the firing rate of the neurons of the network. Immediately after neuron i fires, V is infinite and i cannot fire. After a few time steps (r_a, the absolute refractory period), it "recovers" slightly; that is, a very large input stimulus can cause it

to fire; after a few more, less stimulus is required, down to the point where, if it has not yet fired, a minimal stimulus is required to cause it to fire. This point is called the resting or quiescent value of V, V_q. The function Φ, which determines F_i(t), modulates the control of V in the sense that if the firing rate of neuron i is high, then Φ is large, hence a larger stimulus is required to cause i to fire. If the firing rate is low, the magnitude of Φ is small and less stimulus, depending upon the value of V, is required.

2.3.2 The Threshold Function

From 2.2 one sees that the threshold value, R_i(t), of neuron i is that value which corresponds to the recovery state r_i of neuron i; that is, r_i = the number of time steps since neuron i fired. Each neuron i of the network has associated with it a value of r_i depending on its immediate firing history. Thus, if δ_i(t) = 1 (i.e., neuron i fired at time t), then r_i(t) = 0; if δ_i(t-10) = 1 and δ_i(t-9) = 0, ..., δ_i(t-1) = 0, δ_i(t) = 0 (i.e., neuron i fired at t-10 and did not fire again up to and including time t), then r_i(t) = 10. Each time neuron i fires, r_i is set to zero. Each time it fails to fire, it is incremented by 1, i.e., r_i(t) = r_i(t-1) + 1. r_i has a maximum value r_m; further incrementation fails to change it, i.e., r_m + 1 is the same as r_m. The function V(r_i(t)) which gives the value R_i(t) is the same for all neurons of 𝒩. Because, at any given time t, these neurons may have distinct values of r(t), they will usually have distinct threshold values.

The absolute refractory period, or period of infinite threshold, is r_a. That is, if δ_i(t) = 1 (neuron i fires at t), then i cannot fire again until t + r_a (until r_i = r_a). The total number of time steps to

quiescence, that is, to the resting value of threshold, is r_q. Thus, if neuron i fires at t, it is fully recovered (has reached the resting value) at t + r_q. In general in this work, r_a = 3, r_q = 16, and r_m = 64.

There are three important aspects of the threshold curve. The first is its value at r = r_a, the second is its quiescent value, i.e., V_q = V(r_q), and the third is its functional form (i.e., exponential, quadratic, linear, etc.), especially in the mid-recovery range r = r_a + (r_q − r_a)/2. A procedure for determining the form of V(r) will be given in Chapter 4.

Note that the reciprocal of the recovery state for a given neuron i, 1/r_i, averaged in some appropriate fashion, will correspond to the firing rate of neuron i. For example, if a neuron fires on the average once every five time steps, its "average" recovery state is r = 5 and its firing rate is 1/5 = 1/r.

The threshold curve, then, has the following form, where V_m = the maximum value (for r = r_a) and V_q = the quiescent value (r = r_q):

[Figure: the threshold curve V(r), decreasing from V_m at r = r_a to the quiescent value V_q at r = r_q.]

The functional form of this curve, the quantities V_m and V_q, as well as the initial values of r for each neuron of the net, will be specified for each experiment. The quantity V_q is important because it defines the least amount of input stimulus (synapse value) which may fire

the neuron.

2.3.3 The Fatigue Function

As already mentioned, the fatigue value F_i(t) serves to modulate the threshold value R_i(t) of neuron i and hence modulates the firing rate of i. The desired effect of the fatigue function is as follows. Suppose the neuron is in a fully recovered state; that is, the threshold value is near V_q and the fatigue value is 0. Suppose inputs are presented to the neuron so as to cause it to fire at a fairly high rate (above the background rate f_b). Then, gradually, over a period of a large number of consecutive time steps, the fatigue value, i.e., Φ(ℓ), of the neuron increases in such a fashion as to force the firing rate of the neuron to drop down to f_b or even lower, keeping it there as long as the given inputs are present. Suppose next the inputs themselves drop off so that at most they would cause the neuron to fire at f_b. Then the fatigue value Φ(ℓ) decreases slowly back to 0, so as to preserve approximately the average firing rate of f_b. Intense activity of the neuron, that is, firing at near-maximal rates, produces more abrupt increases in Φ, whereas a sudden drop-off in activity, that is, firing at very low rates (< f_b), produces a more abrupt decrease in Φ.

The fatigue value F_i(t) of neuron i is determined by the fatigue function Φ from the fatigue level ℓ_i(t) of neuron i at time t: F_i(t) = Φ(ℓ_i(t)). The function Φ is the same for all neurons of the network. Similar remarks as for the variation in threshold values among the neurons of the network apply to the fatigue values as well. The fatigue value is used, as has been indicated, as an additive term of the threshold value for the given neuron. Φ is a monotonically decreasing function of ℓ, with Φ ≥ 0. The larger the Φ, the larger the sum R + F. Thus, neuron i may be

fully recovered, r_i(t) = r_q and R_i(t) = V(r_q) = V_q, but Φ may be so large that R_i(t) + F_i(t) = V_q + Φ(ℓ_i(t)) is much greater than V_q. Fatigue is rendered ineffective by setting Φ(ℓ) = 0 for all ℓ; then R_i(t) + F_i(t) always equals R_i(t). Note that the fatigue value has no effect on the absolute refractory period (Φ + ∞ = ∞).

The quantity ℓ for a given neuron varies incrementally from 0 to ℓ_m, with 1/ℓ_m as the smallest possible increment. The manner of variation is the following. Suppose the neuron has fatigue level ℓ_0 at time t. Then, if the neuron fires at t, ℓ_0 is decremented by a quantity Δ1, i.e., ℓ_0 → ℓ_0 − Δ1. If it does not fire at t, then it is incremented by a quantity Δ2, i.e., ℓ_0 → ℓ_0 + Δ2. This is illustrated below:

[Figure: from fatigue level ℓ_0, the level moves down to ℓ_0 − Δ1 if the neuron fired at t, and up to ℓ_0 + Δ2 if it failed to fire.]

In general, Δ1 > 0, Δ2 > 0, and Δ1 > Δ2. Decrementation below 0 and incrementation above ℓ_m have no effect, i.e., 0 − Δ1 = 0, ℓ_m + Δ2 = ℓ_m. Δ1 and Δ2 are extremely important numbers, since in terms of them is expressed a crucial parameter of the net, namely the firing rate at which a neuron experiences no net change in fatigue level. If a neuron is firing at this rate, call it f_b, then over an interval of length T time steps, say, there is no net change in the ℓ for that neuron. Recalling that f_b T is the expected number of times a neuron will fire in the given interval and (1 − f_b)T the expected number of non-firings, this means that

    Δ1 f_b T − Δ2 (1 − f_b) T = 0.
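This balance condition is easy to check numerically. A minimal sketch follows; the values Δ1 = 9 and Δ2 = 1 are arbitrary illustrative choices, not parameters from the report.

```python
def background_rate(d1, d2):
    """Firing rate at which the expected fatigue-level drift vanishes:
    d1*f*T = d2*(1 - f)*T  implies  f_b = d2 / (d1 + d2)."""
    return d2 / (d1 + d2)

def expected_drift(f, d1, d2, T=1000):
    """Expected net change in fatigue level over T steps at firing rate f:
    each firing lowers the level by d1, each non-firing raises it by d2."""
    return (d2 * (1 - f) - d1 * f) * T
```

With Δ1 = 9 and Δ2 = 1, the equilibrium rate is 0.1; firing faster than f_b gives a negative drift, i.e., the fatigue level falls and the fatigue value Φ(ℓ) rises, which is the stabilizing effect described above.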

Solving for f_b gives

    f_b = Δ2 / (Δ1 + Δ2),    F_b = f_b N.

This quantity f_b, already mentioned, is called the background firing rate or the "nominal system average." It will be treated in detail later on. Note that given f_b, one can determine Δ1 and Δ2 (up to a constant multiple k > 0, which may be chosen as 1) and, conversely, given Δ1 and Δ2, f_b is uniquely determined. f_b plays an important role in Crichton's theory [2].

The functional form of the fatigue curve, the numbers Δ1 and Δ2 (or f_b), as well as the initial value of ℓ for each neuron of the net, will be specified for each experiment. The form of the curve is clearly of the greatest importance, since it, together with the numbers Δ1 and Δ2, determines the recovery rate of a neuron as well as its fatiguing rate. The desired properties of this curve have been outlined above; the rationale for these will be given in the next chapter. In general, Δ1 and Δ2 are functions of ℓ. This allows greater flexibility in creating Φ(ℓ); for example, Φ may be made into a hysteresis function. In contrast with the case of the threshold, no convenient analysis relating Φ(ℓ) to the other network parameters (N, V(r), etc.; see Chapter 4) was developed in the current work. See Chapter 7 for a further discussion of this.

2.3.4 The Synapse Value Function

Suppose a neuron j sends one directed connection to another neuron i. From 1.3.3, recall that to each such directed connection at time t is associated a positive number, the synapse level λ_ji(t). Just as with the recovery states and fatigue levels, λ is used to determine a value,

the synapse value, S, by means of a functional relationship. λ has a range from 0 to λ_m. It is changed according to the synapse-growth law as follows. Suppose the connection j → i has the synapse level λ_0. Then, if j fired at t − 1 and i fired at t, λ_0 → λ_0 + 1 with probability U(λ_0); if j fired at t − 1 and i did not fire at t, λ_0 → λ_0 − 1 with probability D(λ_0); otherwise λ_0 → λ_0, i.e., no change. If λ_0 = 0, no further decrementation is allowed; if λ_0 = λ_m, no further incrementation is allowed. The statement "λ_0 → λ_0 + 1 (λ_0 − 1) with probability U(λ_0) (D(λ_0))" means that if λ_0 is to be increased (decreased), depending upon whether j fired at t − 1 and i at t, etc., then the change takes place with probability U(λ_0) (D(λ_0)). Note that U(λ_m) = 0 and D(0) = 0. The incrementations with probability U(λ_0), or decrementations with probability D(λ_0), form independent trials; e.g., if the synapse level from j to i is λ_0 and that from k to ℓ also equals λ_0, where both neurons j and k fired at t − 1 and both i and ℓ fired at t (hence λ_ji and λ_kℓ are both candidates for incrementation), then the probability U(λ_0) is consulted independently in each case.

The numbers U and D are of great importance, especially in light of the theory developed by Crichton mentioned above. Like the numbers Δ1 and Δ2 of the preceding section, U and D are related to the nominal system average, f_b. The reason is quite simple. Assume that the amount of change up of a synapse is proportional to U, say kU; likewise that the amount of change down is proportional to D, say kD. f_b is again defined as that firing rate for which no net change in the λ from A to B will occur, assuming for the moment that neurons A and B are firing randomly and independently at the rate f_b. If this is the case, then f_b² will represent the probability that the firing of A at t − 1 is followed by the firing of B at t ("success"); likewise f_b(1 − f_b) is the probability that a firing

of A at t − 1 is followed by a non-firing of B at t ("failure"). f_b²T is the expected number of "successes" over a time interval of length T, f_b(1 − f_b)T the expected number of "failures." kU f_b²T is the expected net increase in the interval of length T; kD f_b(1 − f_b)T the net decrease. By assumption, the difference of these is zero, and

    U f_b² = D (1 − f_b) f_b,    or    f_b = D / (U + D).

Recall that the firing or non-firing of a neuron is determined by a comparison of the sum of the synapse values on the active inputs (that is, those connections coming from neurons which fired the preceding time step) with the sum R + F (which is infinite if the neuron has fired at one of the previous r_a − 1 time steps). If this sum is less than R + F, the neuron does not fire; otherwise it does.

No restriction has been placed on synapse values. However, synapse values for small λ's are assumed to be negative, those for large λ's positive. The negative synapse values for active input lines correspond to inhibitory inputs to the neuron; S is assumed to be some monotonic increasing function of λ. Since U and D are both functions of λ, it is possible to vary the rate of change of S(λ) over the range of λ. This feature may be introduced to tend to "trap" values of λ (hence S(λ)) in certain ranges. For example, it might be desirable to bias the change of λ away from values corresponding to S(λ) = 0 + ε (ε small, positive), etc.

2.4 NOTE ON THE SIMULATION PROGRAM

A diagram representing the operation of the network, given an environment, the initial conditions, and the behavioral hypotheses, was given

earlier. A program was written for the IBM 7090 computer which simulates the operations indicated in that diagram. This program consists of four basic parts:

(1) the lists, which describe the state of the network at each time step. The lists are a block of reference information for (2) below and in turn consist of two parts: (a) a permanent part, which is never changed in the course of a run, and (b) a volatile part, which may change;

(2) the network program, which computes at each time step the various functions required by the model, referring to the lists for parameter values and making appropriate changes to the lists;

(3) the executive and environment routine, a supervisory program which performs two functions: (a) it monitors pertinent network parameters, running time of the program, etc., and handles the appropriate output editing; and (b) it simulates the environment of the model, i.e., computes input and output functions, making any necessary changes to the lists;

(4) output editing and other special-purpose routines, usually ancillary to the executive routine.

The network program seldom if ever will be varied; the executive and environment routines will vary from experiment to experiment, and often from run to run. Parameters in the lists will vary from run to run in general, while those lists particular to a given experiment will vary from experiment to experiment. It is the lists that determine the structure of a given net, i.e., neuron interconnections, density of connections, etc. Note that the executive routine contains provisions for experimenter intervention in an experimental run. The experimenter, while watching a real-time display of selected functions of the network, may at any time change the display, modify parameters, store the entire state of the system for future back-up purposes, etc.

Diagrams giving the overall structure of the program and the flow of

control are given below.

[Diagram: Structure of Program. The lists (storage), the network program, the executive and environment routines, and the ancillary routines, linked by reference paths and control paths.]

[Diagram: Flow of Control. Start; the executive and environment routines exchange control with the ancillary routines and invoke the network program.]
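The four-part organization just described can be sketched as a miniature program. This is a hypothetical Python rendering (the original was written for the IBM 7090); the class, the routines, and all numeric values are illustrative assumptions, not the author's code.

```python
# Hypothetical miniature of the four-part program organization described
# above; names and values are illustrative, not from the original.

class Lists:
    """Part (1): reference information for the network program, split into
    a permanent part (the connection scheme, never changed during a run)
    and a volatile part (per-neuron state, changed every time step)."""
    def __init__(self, n, connections):
        self.n = n
        self.connections = connections   # permanent: {(j, i): synapse value}
        self.fired = [0] * n             # volatile: delta_i(t)
        self.threshold = [1] * n         # volatile: stand-in for R_i(t) + F_i(t)

def network_program(lists, external_input):
    """Part (2): compute one time step from the lists, then update them."""
    drive = [external_input.get(i, 0) for i in range(lists.n)]
    for (j, i), s in lists.connections.items():
        if lists.fired[j]:               # active input line j -> i
            drive[i] += s
    lists.fired = [1 if drive[i] >= lists.threshold[i] else 0
                   for i in range(lists.n)]

def executive(lists, n_steps, log):
    """Part (3): supervisory loop; simulates the environment and hands
    results to a stand-in for the output-editing routines, part (4)."""
    for t in range(n_steps):
        stimulus = {0: 1} if t % 2 == 0 else {}   # toy environment
        network_program(lists, stimulus)
        log.append(sum(lists.fired))              # output editing

lists = Lists(3, {(0, 1): 1, (1, 2): 1})
log = []
executive(lists, 6, log)
```

Only the lists are touched by the network program; the executive owns the environment and the monitoring, mirroring the division of labor described above.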

Symbols Used in Section 2.2

𝒩           a neural network
N           the number of neurons in 𝒩
S(t)        state of the network at time t
E(t)        state of the environment at time t
I(t)        input to the network from the environment at time t
O(t)        output from the network to the environment at time t
F_N, F_E    state transition functions of the network and the environment, respectively
δ_i(t) = 1  the statement "neuron i fired at time t"
T_i(t)      the condition for δ_i(t) = 1
R_i(t)      threshold-value of neuron i at time t
F_i(t)      fatigue-value of neuron i at time t
S_ji(t)     synapse-value of the connection from neuron j to neuron i at time t
I_i(t)      input to neuron i at time t from the environment
r_i(t)      recovery state of neuron i at time t
ℓ_i(t)      fatigue-level of neuron i at time t
λ_ji(t)     synapse-level of the connection from neuron j to neuron i at time t
V(r_i(t))   threshold function; gives R_i(t) as a function of r_i(t): R_i(t) = V(r_i(t))
Φ(ℓ_i(t))   fatigue function; gives F_i(t) as a function of ℓ_i(t): F_i(t) = Φ(ℓ_i(t))
S(λ_ji(t))  synapse-value function; gives S_ji(t) as a function of λ_ji(t): S_ji(t) = S(λ_ji(t))

ρ_i(t)      random number associated with neuron i at time t
Δ₁          fatigue-level change if δ_i(t) = 1
Δ₂          fatigue-level change if δ_i(t) = 0
f_b         nominal system frequency or average background frequency
F_b         expected number of neurons of 𝒩 firing at t
U(λ_ji(t))  probability of change up for synapse-level λ_ji(t)
D(λ_ji(t))  probability of change down for synapse-level λ_ji(t)
m_ji        multiplicity of the connection from neuron j to neuron i

Symbols Used in the Flow-Diagram

i = 1, ..., N (1)   "loop" to B, N times, starting at i = 1 and incrementing i by 1 each time; i.e., first i = 1, then i = 2, 3, ..., through i = N
A ← B       replace the value of A by the value of B
P₁(t)       the condition for incrementing λ_ji(t)
P₂(t)       the condition for decrementing λ_ji(t)
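Using these symbols, the firing rule recalled in Section 2.3 (the summed synapse values on the active input lines are compared with R_i + F_i, which is treated as infinite during the absolute recovery period) can be sketched as follows. The tables V and PHI and the constant R_A are illustrative stand-ins, not the author's actual curves.

```python
# Hypothetical sketch of one neuron's firing condition T_i(t); the
# threshold table V, fatigue table PHI, and r_a value are assumptions.
R_A = 2                                                # absolute recovery period r_a
V = lambda r: 40 if r < R_A else max(30 - 2 * r, 4)    # R_i(t) = V(r_i(t))
PHI = lambda lev: max(6 - lev // 5, 0)                 # F_i(t) = PHI(l_i(t))

def fires(active_synapse_values, r, fatigue_level):
    """delta_i(t) = 1 iff the summed synapse values on the active inputs
    reach R_i + F_i; within r_a steps of its last firing the threshold is
    treated as infinite, so the neuron cannot fire."""
    if r < R_A:
        return False                                   # absolutely refractory
    return sum(active_synapse_values) >= V(r) + PHI(fatigue_level)
```

A rested, unfatigued neuron fires on modest input; a refractory or fully fatigued one does not.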

3. CORRELATION EXPERIMENTS, CYCLE-LESS CASE

3.1 INTRODUCTION

In the implementation of Hebb's theory, several questions may be isolated in an attempt to elucidate the nature of the cell-assembly. Perhaps the first of these concerns identification of cell-assemblies; that is, in terms of the given models, what are the criteria for cell-assembly-ness? This question is aimed at a static, structural condition and may be paraphrased as follows: suppose a model is given in which it is suspected that cell-assemblies have formed. How, then, are they identified? The second question (which, causally speaking, should be first) is concerned with the formation of cell-assemblies, i.e., in terms of the given models, how does a cell-assembly-like structure come into existence? This question is aimed at dynamic, structural changes and goes hand-in-hand with a third: what are the stability conditions, in the given models, for cell-assemblies? To make this last question more meaningful, the informal description of cell-assembly given in 1.2 is augmented as follows. One may regard a cell-assembly as a union of a large number of reverberatory circuits (in the Lorente de Nó sense of the term), any several of which may be active for a very brief period of time, and interrelated so that while any one of the circuits may be rapidly extinguished (within 1/100th of a second in the physiological situation), yet for a much greater period of time (several seconds or longer) the structure as a whole is active in the sense that at least one of the component circuits is active. That is, within a given cell-assembly there are a number of alternate pathways which perform the same function. Therefore, the stability question for such a structure is absolutely crucial. This character of the cell-assembly accounts for the fact that the loss

or damage of part of a fully developed cell-assembly need not impair its overall function, and thus for the seemingly small effect in some cases of brain damage upon learning ability and memory. (This is part of Hebb's dual trace memory mechanism and accompanies his postulate of synapse growth, since the reverberatory activity would assist in retaining memory temporarily while at the same time facilitating the long-run growth changes necessary for permanent memory; see [9], pages 60-78, in particular p. 62.) The cell-assembly is, therefore, not strictly dependent upon individual neurons for its functioning. For its growth and development, it does depend upon the synapse-growth law (Hebb's neurophysiological postulate) and upon the availability of neurons which can be "recruited" to the assembly when they "cooperate" with it, and which likewise can be dropped out of the assembly (fractionation) when they fall into disuse. The ability or inability of the models to allow recruitment of neurons to an assembly, or fractionation of neurons away from it, then poses a fourth question, which is taken as the starting point of this study: do the neurons of the models have the ability to be recruited into an assembly when presented with the same input patterns and, dually, to fall away through disuse? This question leads, as shall be shown in the next section, to simple networks which are extremely useful for studying the behavior of single neurons and small groups of neurons. Crichton, in the appendix to his thesis [2], has discussed the stability of cell-assembly-like structures, called by him "semi-autonomous subsystems." Some results of his analysis will be used later. Questions of stability and structure of cell-assemblies are considered in Chapter 4. The results of this chapter form the basis of the design of the U(λ) and

D(λ) tables used in the experiments of Chapters 6 and 7.

3.2 CORRELATION

The behavior of a neuron of the model depends upon its input history (which includes synapse-value changes on the input lines) and upon its internal state changes (threshold, fatigue). To determine the response of a given neuron to a particular input pattern, one has to take into consideration the effect of this pattern upon the internal state changes of the neuron and the relationship of this pattern to any other inputs the neuron may have. Basically, therefore, the behavior of a neuron may be regarded as being determined by some function over the totality of its inputs.

Consider now a situation in which recruitment might occur. Let C be an uncommitted neuron of the system and suppose it is presented with a patterned input from a source A of neurons. (A might be, for example, a set of neurons of area 17, reflecting a direct sensory input from the retina.) Lump all the other inputs to C into a group B. Now it might be that A directly affects a system of neurons D, which may form part of a cell-assembly. The synapse values from the neurons of A to C will be, by assumption, low initially.

[Diagram: the source A driving both the uncommitted neuron C and the neuron system D; the remaining inputs to C are lumped into the group B.]

Likewise, the synapse values from A to D are assumed to be high. If, as a result of repeated application of the input from A, the synapse values from A to C rise and become high, then the neuron C is a good candidate for recruitment into the cell-assembly. Whether it is recruited or not depends, of course, upon its relationship to other neurons of the system, specifically, upon how its output affects some of the successor neurons to D. It may merely continue to operate in parallel to the assembly. In fact, C could become part of a system of neurons which would tend to suppress, via inhibitory connections, an antagonistic assembly. In any case, the question arises of when C would "correlate" with A in its firing. Here "correlate" means that the synapse values of A to C are high and that C tends to follow the same firing pattern as do the neurons of A. Therefore, whether C correlates with A or not depends critically upon the relationship between the firing patterns of A and B. The conditions under which correlation occurs are essential to the formation of cell-assemblies, as shown by the experiments of Chapter 7.

3.3 EXPERIMENTAL CONFIGURATIONS FOR CORRELATION STUDY

3.3.1 Overview

The general configuration of neurons that is to serve as the basis for the first part of this study is the following.

[Diagram: neuron groups A and B, each driven by a stimulus source, sending connections to the single neuron C, whose output is O_C.]

A and B are sets of neurons; C is a single neuron. Each neuron of A and B sends a connection to C. There are no other connections between the neurons of A and B and C, i.e., there are no cycles. The neurons of A and B are assumed to be driven from stimulus sources A* and B*. From the patterns on the input lines A to C and B to C and the initial states of C, the output O_C may be determined. The sizes of the sets A and B, the particular patterns which they supply to C, the initial states, and the net parameters are all to be specified by the particular experiment at hand. Thus A and B may consist of a single neuron each, or A may have N neurons and B none, etc. One can readily see, then, how it is possible to study the behavior of C as a function of a wide range of possible inputs and at the same time study the response of C "in isolation," as it were, given different settings of the basic net parameters.

A model situation of concern in this chapter is that in which group A essentially provides "background noise" to C, while group B provides patterned inputs of various types. One example of this is the case in which the neurons of B fire within a periodic envelope, as follows.

[Diagram: input stimulus alternating between periods in which the neurons of B are firing and periods in which B is quiescent.]

Questions such as what are the lengths of the "on" and "off" periods in relation to neuron parameters, what are suitable firing rates of the neurons of B in the "on" and "off" periods, etc., immediately arise and become of the greatest importance. The next step would be to have

both A and B providing similar patterns, but out of phase, and then to ask how C depends upon the phase difference, etc.

3.3.2 Network Structure

The models of interest consist of N = 2M + 1 neurons (where N is the size of the network). The N neurons are partitioned into two groups of M neurons each and one group of a single neuron. The former two groups will be designated by A and B respectively, the single neuron by C. Each neuron of A and B sends exactly one directed connection to neuron C; C, therefore, has 2M inputs. The output of C goes to the environment. The environment provides the neurons of A and B with inputs of the following type. Letting a_1, ..., a_M be the neurons of A and a_M+1, ..., a_2M be those of B, to each a_i is associated a probabilistic stimulus X_ai(t). At time t, independently of X_ai(t+k) for all k = ±1, ±2, ..., X_ai(t) = 1 with probability f_ai, and X_ai(t) = 0 with probability 1 − f_ai. If X_ai(t) = 0, neuron a_i is not affected. If X_ai(t) = 1, a_i is provided with an input stimulus (I_ai(t) in the network equations T_ai(t)) which is always greater than R_ai(t) + F_ai(t) unless, of course, a_i is absolutely refractory (i.e., if δ_ai(t−1) = 1 or δ_ai(t−2) = 1); a_i has no other inputs. Notice that the probability f_ai approximates the actual firing rate of a_i: f_ai·T is the expected number of firings of a_i over a time interval of length T. Specification of the probabilistic vector X_ai(t), i = 1, ..., 2M, then determines the "vector" of firing frequencies f_ai of the neurons a_i which comprise the total input set to neuron C. In any particular experiment, the vector X_ai(t) must be specified in complete detail. The connection-scheme, complete with the input vector X_ai(t), has

the following form:

[Diagram: the stimulus lines X_a1(t), ..., X_aM(t) driving the neurons of A and X_a(M+1)(t), ..., X_a2M(t) driving the neurons of B, all of which send connections to C.]

The distinction between A and B is only for the purpose of allowing two subvectors of X_ai(t), i = 1, ..., 2M, to be applied, i.e., X_a1(t), ..., X_aM(t) and X_a(M+1)(t), ..., X_a2M(t). (Note: this network is obtained by specifying the m_aiC, i = 1, ..., 2M, to be unity and all others to be zero out of the set of N² possible interconnections within the given set of N neurons.)

3.3.3 Network Functions, Initial Conditions, Environment

The threshold, fatigue, and synapse-value functions, together with the parameters associated with them, such as Δ₁, Δ₂, U, and D, etc., were varied with the experiments performed. The initial conditions comprise specification of the following values:

1. λ_aiC(0), i = 1, ..., 2M
2. r_ai(0), i = 1, ..., 2M, and r_C(0)
3. ℓ_ai(0), i = 1, ..., 2M, and ℓ_C(0)
4. I_ai(0), i = 1, ..., 2M
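As a concrete illustration, the cycle-less configuration and its probabilistic stimulus vector might be set up as follows. This Python sketch is hypothetical; the rates are borrowed from Figure 3.1, and all names are assumed.

```python
import random

# Hypothetical setup of the cycle-less correlation network: N = 2M + 1
# neurons, groups A and B of M neurons each, all feeding the neuron C.
M = 16
A = list(range(M))               # neurons a_1, ..., a_M
B = list(range(M, 2 * M))        # neurons a_{M+1}, ..., a_{2M}
C = 2 * M                        # the single output neuron

# exactly one directed connection from each neuron of A and B to C
connections = {(j, C): 1 for j in A + B}

def stimulus_vector(rates, rng):
    """One draw of the probabilistic vector X_ai(t): each input neuron
    independently receives a stimulus with probability f_ai."""
    return [1 if rng.random() < f else 0 for f in rates]

rng = random.Random(1967)
rates = [1 / 13] * M + [1 / 6] * M    # f_A for group A, f_B for group B
x = stimulus_vector(rates, rng)
```

Each call to stimulus_vector is one time step's worth of independent Bernoulli draws, one per input neuron.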

The I_ai(0)'s are assumed to be all equal, constant over all time, and so large that, except when the a_i are absolutely refractory, they always cause a_i to fire when X_ai = 1. Thus the initial values r_ai(0) and ℓ_ai(0) are not so important. Yet the initial values of r_C and ℓ_C clearly are important: for example, if ℓ_C(0) is at the minimum, then neuron C starts out fully fatigued and may fail to respond to initial inputs for some period, whereas if it is fully rested, that is, if ℓ_C(0) is near the maximum, then C will most likely respond to the initial inputs.

The function of the environment in these experiments is, at each time step, to operate the probabilistic vector X_ai(t), i = 1, ..., 2M, and to observe the output of neuron C.

3.4 Summary of Experiments

3.4.1 Experimental Hypothesis

Consider the network of the preceding section. Relabel the 2M neurons of A and B as A_1, A_2, ..., A_M and B_1, B_2, ..., B_M respectively. The corresponding stimulation rates will be f_Ai and f_Bi respectively, for i = 1, 2, ..., M. In general, the neurons of each group will be fired at the same rate, i.e., f_Ai = f_A and f_Bi = f_B for i = 1, 2, ..., M. Divide the interval (0, ∞) into subintervals U_k of length ℓ, U_k = [2kℓ, (2k+1)ℓ] (k = 0, 1, 2, ...), and the complementary subintervals U_k* = [(2k+1)ℓ, (2k+2)ℓ] (k = 0, 1, 2, ...).

[Diagram: the stimulation rate of B alternating between f_B in the intervals U_k and a low rate f_B′ in the intervals U_k*, against the constant rate f_A.]
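The alternating envelope can be sketched as a rate schedule. The sketch below assumes, for illustration, that the "off"-period rate f_B′ is zero, so that the cycle average f_B/2 equals f_b; the function name and the numeric values are hypothetical.

```python
# Hypothetical "on"/"off" stimulation schedule for group B: rate f_B in
# the intervals U_k = [2k*l, (2k+1)*l) and f_B_low in the complementary
# intervals U_k*. Here f_B_low = 0, so the cycle average is f_B / 2.
def rate_B(t, l, f_B, f_B_low):
    """Stimulation rate of group B at time step t."""
    in_on_period = (t // l) % 2 == 0     # the U_k are the even half-cycles
    return f_B if in_on_period else f_B_low

f_b = 1 / 13                  # background rate; f_A = f_b
f_B, f_B_low = 2 * f_b, 0.0   # average over one cycle: f_B / 2 = f_b
```

Group A is stimulated at the constant rate f_A = f_b throughout; only B sees the envelope.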

In the intervals U_k, f_B > f_A; in the intervals U_k*, f_B′ < f_A. The average of f_B over one cycle (an interval of length 2ℓ) is, in general, f_B/2 = f_A = f_b. The neurons of A are thus stimulated randomly and independently at rate f_A = f_b; the neurons of B are stimulated randomly and independently at rate f_B > f_A for t in U_k, k = 0, 1, ..., and at rate f_B′ < f_A in the intervals U_k*, k = 0, 1, .... The intervals U_k are called the "high" or "on" periods of stimulation of B; the U_k*, the "low" or "off" periods.

Correlation Hypothesis. For appropriate selections of the network functions V, Φ, and S, together with certain initial conditions, neuron C will tend to correlate with group B in the sense that, as t becomes sufficiently large, λ̄_BC(t) >> λ̄_AC(t), and for i = 1, ..., M, δ_Bi(t) = 1 implies that δ_C(t+1) = 1 a.a., and δ_Bi(t) = 0 implies that δ_C(t+1) = 0 a.a., where f_B > f_A for t in the intervals U_k and f_B′ < f_A otherwise; f_A = f_b is the background rate of the system. λ̄_AC and λ̄_BC represent averages of the λ_AiC and the λ_BiC respectively. "a.a." means "almost always," i.e., with very high probability, but not probability one. This leaves room for occasional occurrences of the complementary events δ_Bi(t) = 0 and δ_C(t+1) = 1, and δ_Bi(t) = 1 and δ_C(t+1) = 0.

The hypothesis merely states that group B eventually gains control over neuron C while group A loses control. Group B may be regarded as the information-bearing lines, group A as a continuous source of background noise. Hopefully, C will correlate with the information-bearing source and not with the noisy one. For example, over the rapid staccato of

a pneumatic hammer operating out-of-doors, a human being might well hear a periodic knocking on the door.

3.4.2 Some Theoretical Considerations

In the appendix to his thesis [2], Crichton discusses the stability of systems of neurons which he calls "semi-autonomous subsystems." These are networks of neurons which may correspond in a limited way to the cell-assemblies of Hebb's theory. In his development he makes a number of assumptions, two of which are relevant to the experiments of this chapter: (1) the neurons of the system fire aperiodically, randomly, and independently of one another, and (2) all neurons tend in their firing to a common average rate f_b. This f_b he calls the "nominal system average." From his arguments he derives some bounds on the threshold curve (to be discussed later) and some important relationships between the fatigue increments Δ₁ and Δ₂ and the probabilities of synapse-level change, U(λ) and D(λ). These have already been used in Chapter 2. The gist of his argument is this: the role of the fatigue function must be to drive the neurons of the system to the frequency f_b; thus, if a neuron falls below f_b in its firing rate, then the fatigue should decrease so as to bring the rate back up to f_b. Firing at the rate f_b, there is no net change in fatigue. This last condition implies that f_b = Δ₂/(Δ₁ + Δ₂), since then T·f_b·Δ₁ − T(1 − f_b)·Δ₂ must be zero, where T is the length of the time-interval under consideration (see 2.3.3). Similarly, the condition for no net change in synapse-level becomes f_b = D(λ)/(U(λ) + D(λ)).

One further relation that he gives is useful. Consider two neurons A and C with a connection going from A to C, where A and C fire aperiodically at the rates f_A and f_C respectively. The expected rate of increase in λ_AC per time step is f_A·f_C·U, and the expected rate of decrease is

f_A·(1 − f_C)·D. Recalling that D/(U + D) = f_b, from which U/D = (1 − f_b)/f_b, it follows that U = K(1 − f_b), D = K·f_b for some constant K > 0. U and D correspond to the rate of increase and the rate of decrease of a connection; thus f_A·f_C·K(1 − f_b) is the expected rate of increase in λ_AC per time step, and f_A·(1 − f_C)·K·f_b the rate of decrease. Therefore, the expected net rate of increase in λ_AC per time step is

    f_A·f_C·K(1 − f_b) − f_A·(1 − f_C)·K·f_b = K·f_A·(f_C − f_b).   (F)

This is positive (i.e., λ_AC is increasing) if f_C > f_b (f_A, f_C, and f_b are all assumed positive or zero), negative (i.e., λ_AC is decreasing) if f_C < f_b, and zero if f_A = 0 or f_C = f_b. This relation (F) Crichton gives as the fundamental formula for trends in synapse-levels.

These relationships provided very useful guides and were used in the correlation experiments. However, a few points should be noted. (1) In the current experiments, the assumption of independence of firing of the neurons does not hold. As N increases, however, one would expect it to become more plausible; the validity of Crichton's analysis therefore increases with N in the present situation. (2) Although his theory yields fruitful relations between Δ₁, Δ₂, U(λ), and D(λ) and is useful in analyzing trends in synapse-levels, beyond the bound mentioned it says nothing about the form of the threshold, fatigue, and synapse-value functions. The analysis of Chapter 4 does yield information about the former.

3.4.3 Summary of Experimental Results

Only the general conclusions obtained, together with one prototype experiment, will be mentioned here. For further details the reader is referred to Finley [7].

The hypothesis was successfully demonstrated for a variety of

functions V, Φ, and S (and U(λ) and D(λ)). Several observations are noteworthy, however:

(a) The rate of decay of fatigue was, in general, too fast. It appeared that a slight hysteresis effect would be desirable; this would allow "trapping" neurons in lower fatigue states (higher fatigue values). For all the correlation experiments, Δ₁ and Δ₂ were not functions of ℓ, as they were defined in Chapter 2; the correlation experiments directly inspired the modification to make them so.

(b) The synapse-levels required a fairly strong positive bias (reflected in the values of U(λ) and D(λ)) to allow proper growth of the λ_BiC's. The setting of this "bias" (via the setting of U and D) is very critical. Below a certain setting, the synapse-levels λ_BiC tended rather rapidly to values giving negative synapse values, while the λ_AiC became larger (S(λ_AiC) more excitatory), the very opposite of the behavior predicted by the hypothesis. In this case the recruitment of neurons, an essential property of the networks for guaranteeing cell-assembly formation, would be impossible (see Chapter 7). Once again, inclusion of a "trapping" feature seemed desirable.

(c) The results seemed excessively dependent upon the form of the threshold curve.

The results of one of the correlation experiments are summarized in Figure 3.1, together with the network functions and parameters. At this point it was decided that, rather than pursue experiments with these cycle-less networks any further, it would be of greater profit to introduce cycles into the networks and to increase the network sizes substantially. Immediately a wide range of problems arises, of a far more difficult and subtle nature than any encountered in the correlation experiments. The remaining chapters of this work are devoted to networks with cycles.
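Crichton's trend formula (F) is easy to check numerically. The Monte Carlo sketch below (not from the original) drives a single connection with U = K(1 − f_b) and D = K·f_b and compares the measured drift of λ_AC with K·f_A·(f_C − f_b); the parameter values are arbitrary assumptions.

```python
import random

# Monte Carlo check of formula (F): the expected net change in lambda_AC
# per time step should be K * f_A * (f_C - f_b).
def drift(f_A, f_C, f_b, K, steps, rng):
    lam = 0.0
    for _ in range(steps):
        if rng.random() < f_A:                     # A fired
            if rng.random() < f_C:                 # C also fired
                if rng.random() < K * (1 - f_b):   # increment with probability U
                    lam += 1
            elif rng.random() < K * f_b:           # decrement with probability D
                lam -= 1
    return lam / steps

rng = random.Random(0)
f_A, f_C, f_b, K = 0.5, 0.3, 0.2, 0.5
measured = drift(f_A, f_C, f_b, K, 200_000, rng)
predicted = K * f_A * (f_C - f_b)                  # = 0.025
```

With f_C above f_b the level drifts upward, as (F) predicts; swapping f_C below f_b reverses the sign.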

Figure 3.1. Sample Correlation Experiment. For this experiment, M = 16, f_B = 1/6, f_A = 1/13, U = 0.5402, D = 0.04502, Δ₁ = 3/4, Δ₂ = 1/16. The curves V(r), Φ(ℓ), and S(λ) used are given below. Results: after 10,000 time steps, the firing rate of C approximated f_B.

[Plots: the threshold curve V(r) for r = 0, 2, ..., 16; the fatigue curve Φ(ℓ) for ℓ = 0, 2, ..., 30; and the synapse-value curve S(λ) for λ = 0, 2, ..., 16, with S ranging from −20 to 30.]

4. NETWORKS WITH CYCLES

4.1 Foreword

Consider a network 𝒩 in which the output of a neuron may act as input to several neurons, perhaps cycling back as input to itself. In general, such a closed path, in which the terminal and initial vertices coincide, will entail a chain of k intervening neurons, where r_a ≤ k ≤ K. The lower bound is determined by the absolute recovery period, since an output of a neuron i of 𝒩 will have no effect if it returns to i in fewer than r_a time steps. The upper bound is determined by the density of connections in 𝒩 and the size of 𝒩, N = |𝒩|. The intention is to introduce into 𝒩 sufficient cyclic complexity to allow the formation of closed, self-reexciting chains of neurons. Such chains of neurons, sometimes called reverberatory circuits, were suggested by Lorente de Nó as a mechanism for memory. This suggestion was partially adopted by Hebb [9] to explain the formation and operation of the cell-assembly (Hebb's "dual trace" mechanism; see Chapter 3, Section 3.1). However, once such complexity is present, several major problems have to be resolved before cell-assembly formation can be studied. These problems center about the notion of stability, more specifically of stable background behavior of 𝒩. The first part of this chapter (Sections 4.2 and 4.3) is devoted to the development of this concept. Some problems that arise in this development are: (1) the choice of a network density function ρ = ρ(r), giving the expected number of connections received by a neuron of 𝒩 within a disk of radius r; in the first case considered (the uniform random distribution case), r = ∞, while in the second (the distance-bias case), 0 < r < ∞, and the proper choice of r is a subproblem. (2) Given

the density function ρ = ρ(r), develop an analysis relating the threshold curve and other network parameters of 𝒩 so that stable steady-state behavior is guaranteed. (3) Modify this analysis to determine the effects of external stimuli on stable behavior.

It will be seen that a fairly complete steady-state calculus can be developed for the case in which ρ is not a function of distance. This calculus covers the case in which negative (inhibitory) connections are present in 𝒩. In fact, it is later shown that such connections are absolutely necessary to ensure the desired stability conditions. For the distance-bias case, development of a steady-state calculus presents formidable obstacles. However, the uniform random calculus may be applied fruitfully as an approximation. In both cases, the ultimate "proof" rests on the simulation.

The latter part of the chapter (Sections 4.5 and 4.6) is concerned with (1) development of the concept of the cell-assembly, (2) structural identification of a cell-assembly, and (3) dynamic characterization of cell-assemblies. Finally, an attempt is made to explore the interaction of two cell-assemblies.

The developments of this chapter are semiformal and heuristic in nature. In fact, they constitute something of an existential hypothesis of the form: "There exist networks 𝒩 (with certain parameters specified) with such-and-such properties." The existence proof is then thrown back onto the simulation. While the conceptual developments may be heuristic, the simulation is not. As mentioned in Chapter 1, Section 1.1, a computer program itself constitutes a type of formal system, allowing rigorous testing of hypotheses such as the one above.

At the end of the chapter (Section 4.6), a brief summary of the main

results is given. A list of terms and symbols introduced in this chapter is appended at the end of the summary. Chapters 6 and 7 contain the experimental verification of the claims of this chapter.

4.2 Distributions of Connections

In the notation of Chapter 3, Section 3.2, networks 𝒩 of N neurons shall be considered in which, for any pair of neurons (j, i), the multiplicity m_ji of the connection from neuron j to neuron i, i.e., j → i, is assigned initially by some rule. The set {m_ji | i, j = 1, 2, ..., N} of all such m_ji's constitutes a distribution of connections over the network. If m_ji = 0, there is no connection j → i; if m_ji = 1, there is exactly one connection j → i; if m_ji = k, neuron j sends exactly k connections to neuron i, etc. Exactly how this assignment is carried out is discussed in the next few sections; for the moment it is only assumed that it can be done. What is important is that an assignment scheme can be devised that will introduce a great variety of cyclic structure into the models, especially if the total number of connections and the number of neurons of 𝒩 are large.

As, in general, different neurons will receive and emit different numbers of connections, some averaging process must be used to estimate the number of connections a neuron receives or emits; this will constitute the expected number of connections received or emitted by a given neuron. As will be evident shortly, these two quantities are identical, allowing definition of an extremely important parameter ρ, the expected number of connections received or emitted by a neuron of the network. ρ will be called the network density parameter, or simply the density.

It is now necessary to examine the two theoretical models of networks

with cycles used in this study, together with the associated connection-assignment schemes, and then to contrast briefly the resulting networks. The first models to be examined are those of networks with uniform random distributions of connections, in which any neuron of 𝒩 has the same probability of being connected to any other neuron in 𝒩. The second class of models are those of networks with a distance bias on the distribution of connections, allowing neurons to be connected or not depending upon the distance between them. It will be seen that the latter class of models offers some definite advantages for this work, as well as great difficulties for attempts at analytic study. Note that, as in the case of the models of Chapter 3, the problem of assigning the initial state of the networks must be examined very carefully, i.e., the initial assignments of the λ_ji's, the r_i's, and the ℓ_i's, and likewise the selection of the network functions V, S, and Φ.

4.2.1 Networks with Uniform Random Distributions of Connections

In this section the class of networks with uniform random distributions of connections is characterized. From this characterization an efficient algorithm is extracted for evaluating the distribution {m_ji} for any particular network 𝒩. Note that a network with uniform random distribution of connections conforms to one's intuitive picture of a randomly connected network of neurons.

The Underlying Model (Urn Model)

Suppose a network 𝒩 of N neurons is given and a total of M connections are to be distributed over 𝒩. Let the ordered pairs of neurons of 𝒩, i.e., the elements of 𝒩 × 𝒩, be numbered from 1 to N². Associate with the pair of index ν a counter C_ν which is set to zero initially. Now perform M trials as follows: select a number ν at random

with probability 1/N² from the given set, and increment the counter C_ν by 1 each time ν is selected. The number in the counter C_ν corresponding to the ordered pair (j, i) is interpreted as the number of connections neuron j sends to neuron i. This is a sampling-with-replacement model: essentially, a random draw of an ordered pair is made out of an urn of N² such pairs, a connection is assigned, and the pair is put back. After M such samplings, the distribution of the number of connections assigned to pairs is binomial, i.e., the probability p_k that a given pair receives exactly k connections is given by the binomial probability

    B_k(M, 1/N²) = C(M, k)·(1/N²)^k·(1 − 1/N²)^(M−k),

which, if the mean of the distribution ρ₀ = M/N² is moderate in magnitude while M is large and 1/N² is small, may be approximated by the Poisson probability p_k(ρ₀) = e^(−ρ₀)·ρ₀^k/k!.

Variant of the Urn Model

This model may be viewed somewhat differently as follows: scan through the list of ordered pairs, letting P_ν denote the number of pairs and F_ν the number of connections remaining when the ν-th pair comes to be processed as described below. For ν = 1 (the first pair), P₁ = N² and F₁ = M. Now, for each ν, make F_ν throws with probability 1/P_ν of "success," where success means that a connection is assigned to pair ν. Then reduce F_ν by the number of connections assigned, C_ν, and proceed to the next pair, ν + 1, with F_ν+1 = F_ν − C_ν and P_ν+1 = P_ν − 1. Now it is easy to see by induction that the expected value of the C_ν, E(C_ν), equals ρ₀ = M/N². For ν = 1, E(C₁) is by definition F₁/P₁ = M/N² = ρ₀. For ν = ℓ, the induction hypothesis gives E(F_ℓ) = M − (ℓ − 1)ρ₀ = ρ₀·N² − (ℓ − 1)ρ₀ = ρ₀·[N² − (ℓ − 1)] = ρ₀·P_ℓ, so that E(C_ℓ) = E(F_ℓ)/P_ℓ = ρ₀. Therefore, again ρ₀ connections are expected

to be assigned per pair ν, and the M connections are expected to be distributed binomially over the network (or, to a very good approximation, Poisson). The advantage of this variant of the urn model is that the random sampling of neuron pairs has been replaced by a systematic scan of the ordered pairs of 𝒩 × 𝒩. However, per pair ν, F_ν samplings must be done. These considerations lead to the following.

Approximation to the Urn Model

Again scan the ordered pairs, this time performing M experiments per pair with probability 1/N² of success (a connection is assigned). The expected number of connections assigned per pair is still M/N²; however, the total number M′ of connections is now a random variable whose expectation is M, i.e., E(M′) = M. Once more an expected binomial distribution of connections over neurons results. The procedure is readily translated into a reasonably efficient programmable algorithm with the mean ρ₀ of the connections a neuron expects to receive from another neuron as parameter. Only those p_k(ρ₀) = e^(−ρ₀)·ρ₀^k/k! that are numerically significant are calculated. A random number generator is used to simulate drawing a number n at random from the unit interval 0 ≤ n ≤ 1, selecting 0, 1, ..., k connections depending upon whether p₀(ρ₀), p₀(ρ₀) + p₁(ρ₀), ..., p₀(ρ₀) + p₁(ρ₀) + ... + p_k(ρ₀) exceed n or not (see Chapter 5, Sections 5.1 and 5.2).

Flow Charts

To bring out more graphically the structure of the three different algorithms outlined above, the corresponding flow charts are given in the next few pages. The same conventions are observed as in the flow chart given in Chapter 2, Section 2.2, representing the prototype of all experiments, q.v.

Flowchart 4.1. Algorithm for the Urn Model. The ordered pairs (j, i) of 𝒩 × 𝒩 are numbered from ν = 1 to N². For each ν, the counter C_ν contains, at the termination of the algorithm, the number of times the ν-th connection was selected. The C_ν's are assumed to have been set to zero initially.

[Flowchart: for μ = 1, ..., M: select a ν from {1, 2, ..., N²} with replacement, with probability 1/N²; set C_ν ← C_ν + 1. Then stop.]
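Flowchart 4.1 translates directly into a few lines; this Python rendering is hypothetical, with arbitrary N and M.

```python
import random

# Urn model of Flowchart 4.1: distribute M connections over the N^2
# ordered pairs by sampling with replacement, each pair having
# probability 1/N^2 per trial.
def urn_model(N, M, rng):
    counters = [0] * (N * N)           # C_nu, one per ordered pair
    for _ in range(M):
        nu = rng.randrange(N * N)      # uniform draw with replacement
        counters[nu] += 1
    return counters                    # counters[nu] = multiplicity m_ji

rng = random.Random(42)
N, M = 30, 4500
m = urn_model(N, M, rng)               # mean per pair: rho_0 = M / N^2 = 5
```

Exactly M connections are assigned, and the counts over pairs are binomial (approximately Poisson with mean ρ₀).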

Flowchart 4.2. Algorithm for Variant of Urmodel. Ordering and indexing of connections as in Flowchart 4.1. Note that E(C_v) = ρ_0 for all v ∈ {1, 2, ..., N²} (see text for proof). The selection set {1, 2, ..., P} is a variable-length sequence beginning with {1, 2, ..., N²} and ending with {1}. The C_v's are set to zero initially.

Start: P := N², F := M.
    Perform F trials: select an n ∈ {1, 2, ..., P} with replacement, with probability 1/P;
        is n = P? If yes, C_P := C_P + 1.
    F := F − C_P; P := P − 1.
    If P > 0, repeat; otherwise stop.

Flowchart 4.3. Ideal Algorithm for Connection-Assignment Procedure. v and C_v are as in Flowcharts 4.1 and 4.2. The actual number of connections assigned is a random variable M' with E(M') = M. See text. The C_v's are set to zero initially.

Start: for v = 1, 2, ..., N²:
    for u = 1, 2, ..., M:
        [A] select an n ∈ {1, 2, ..., N²} with replacement, with probability 1/N²;
        is n = v? If yes, C_v := C_v + 1.
Stop.
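A sketch of the ideal procedure (illustrative Python; the trial count M per pair and success probability 1/N² are as in the flow chart):

```python
import random

def ideal_assignment(N, M, rng=random):
    """Flowchart 4.3: for each ordered pair v, perform M independent
    Bernoulli trials with success probability 1/N**2.  The total number
    of connections M' = sum(C) is now random, with E(M') = M."""
    C = [0] * (N * N)
    p = 1.0 / (N * N)
    for v in range(N * N):
        for _ in range(M):
            if rng.random() < p:
                C[v] += 1
    return C
```

Here each C_v is Binomial(M, 1/N²), so E(C_v) = M/N² = ρ_0, and the C_v are mutually independent, unlike in Flowchart 4.1.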

Flowchart 4.4. Implementation of Subprocedure A of Flowchart 4.3. The ideal selection procedure of Flowchart 4.3 is replaced by this procedure. See text for the definitions of the p_k(ρ_0). In any given case, a K_max must be specified, assuming that the p_ℓ(ρ_0) are negligible for ℓ > K_max. Consequently, for the algorithm to work, Σ (ℓ = 0 to K_max) p_ℓ(ρ_0) must equal unity. Drawing of the random number n ∈ (0,1) is effected by the generator of Appendix B.

(For a given pair (j,i) ∈ 𝒩 × 𝒩 whose index is v:)
    Draw a random number n from (0,1).
    For k = 0, 1, 2, ..., K_max: is n ≤ p_0(ρ_0) + p_1(ρ_0) + ... + p_k(ρ_0)?
        If yes, assign k connections to pair v and stop; if no, try k + 1.

The Network Density ρ

In terms of the above discussion, define the quantity ρ = Nρ_0, that is, the expected number of connections a neuron receives from the whole net, as the network density parameter, or simply the density. By symmetry this is also the expected number of connections a neuron emits to the entire network.

Uniform Random Distributions

By now it is clear that, whichever of the above procedures be adopted, in no way is any subset of neurons of 𝒩 favored or not favored. The probabilities and expectations apply uniformly over all neurons of 𝒩. Another way of phrasing this is that there is no bias on the density parameter ρ. Were this not the case, that is, if the definition of the assignment scheme included some definite criterion for including or excluding certain neurons of 𝒩 (from the point of view of being connected to a given neuron), then in some sense a neighborhood structure would have been imposed over the network, resulting in some form of geometry over 𝒩. This idea is explored in the next section, but first a few more observations about the uniform case are in order.

Connections between Subsets of 𝒩

Of great interest in the sequel will be questions of the following type: given two subsets A and B of 𝒩, what is the expected number of connections A receives from B, and conversely? This is easily resolved in the present situation. Denote the cardinality of a set S by |S| and assume that A and B are given. Let λ(A→B) and λ(B→A) denote respectively the expected number of connections B receives from A and conversely. The expected number of connections sent out to the network from A is |A|ρ, where ρ is the network density. Consequently, the expected number of

connections that an arbitrarily picked neuron of 𝒩 receives from A is |A|ρ/N, since 𝒩 is a random network. Therefore the expected number of connections received by the entire subset B is (|A|ρ/N)|B| minus the overlap term (|A|ρ/N)|A∩B| (as the connections into A∩B would otherwise be counted twice), and

(I)   λ(A→B) = (|A|ρ/N)(|B| − |A∩B|);   similarly,   λ(B→A) = (|B|ρ/N)(|A| − |A∩B|).

A second and related question now is also easily answered: suppose A and B are subsets of neurons randomly and independently distributed over 𝒩, their exact magnitudes not known, but where |A| and |B| represent the expected sizes of A and B respectively. What form do λ(A→B) and λ(B→A) take now? Clearly, the basic formulas (I) above still hold; now the cardinality of the intersection A∩B may be estimated by (|A||B|/N²)N, since the probability that a neuron of 𝒩 lie in both A and B is (|A|/N)(|B|/N) = |A||B|/N² and N such neurons exist. Therefore

(II)   λ(A→B) = λ(A)|B|(1 − |A|/N);   similarly,   λ(B→A) = λ(B)|A|(1 − |B|/N),

where λ(S) = |S|ρ/N is the expected number of connections that an arbitrarily chosen neuron of 𝒩 receives from S.

Comment about Notation and Derivations

Perhaps the obvious has been belabored in this section, especially in deriving formulas (I) and (II), which are, after all, rather trivially deduced from elementary probability theory; but these procedures tend very often to be taken for granted. Hence, it was the intention here to

make them explicit so that the principles underlying later calculations may be quite clear. It also is of interest to contrast the marked simplicity of the uniform distribution case with the distance-bias case to be discussed in the next section.

An apology is definitely in order to the mathematically sophisticated reader for the mixing of symbol fonts to denote objects that belong to the same class of objects, e.g., A, B, etc., to denote sets. However, the guiding maxim has been the one, familiar to all programmers, that goes, "use symbols that are convenient and have mnemonic value." In this paper, the intention is to apply mathematical analysis where possible and to the extent that it is fruitful, not to develop a mathematical theory per se.

4.2.2 Networks with Distance-Bias

In the previous section, networks were considered with connections distributed in a very simple way, with the result that certain neurons may not be isolated a priori and the claim made that they do not receive or transmit connections from or to certain other neurons of the network. In a word, there is no provision for neighborhoodness in those models.

Suppose then that a network 𝒩 has a function d = d(j,i) defined over it so that for any pair of neurons (j,i), d is the distance between j and i. Furthermore, assume that the connections are assigned to neuron pairs (j,i) with probabilities ρ(j,i) that are functions of the distance d(j,i). If the ρ(j,i) are, say, inverse-square functions of the d(j,i), ρ(d) = K/d(j,i)², then the density of connections is greatest for small d(j,i) and smallest for large d(j,i). Considering the universality of inverse-square laws in the physical sciences, this does not

seem to be an unnatural assumption. Some special assumptions must be made as d(j,i) tends to zero, of course, but as shall be seen later (Section 4.3.4), this presents no real problem, although it does pose some interesting questions about the density of connections in actual cortical networks in the immediate vicinity of a neuron, perhaps to be tested by future neurophysiological experiments.

The distribution yielding ρ(d) = K/d(j,i)² does not necessarily insulate distal parts of 𝒩; it merely lowers the probabilities of two neurons in separate, but distant, areas being connected. The favoritism excluded in the case of uniform distributions is therefore allowed: neighborhoodness is defined. In general, ρ(d) will be a monotone decreasing function of d, tending to 0 as d → ∞. One possibility, to be discussed in more detail in Section 4.3.4, is to set ρ(d) = ρ_0 = constant for d ≤ R for some fixed R and ρ(d) = 0 for d > R, using the basic technique of Section 4.2.1 for determining the connections received by a neuron within the neighborhood with radius R. This gives a distribution that is "flat", i.e., uniform and random in the neighborhood d ≤ R of a given neuron and zero elsewhere. The advantage of such a distribution is that it may be regarded from the point of view of Section 4.2.1 in sufficiently small regions. Clearly, this procedure may be generalized to that in which the distribution is uniform in the neighborhood d ≤ R_1 with density ρ_1, uniform in the annulus R_1 < d ≤ R_2 with density ρ_2, uniform in the annulus R_2 < d ≤ R_3 with density ρ_3, etc. The latter type of distribution may be used as an approximation to the inverse-square distribution by appropriate settings of the disk d ≤ R_1 and the annuli R_(k−1) < d ≤ R_k, k = 2, 3, ..., together with the corresponding densities ρ_k.
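The disk-and-annulus construction can be sketched as follows (illustrative Python; the radii, the constant K, and the midpoint fitting rule are hypothetical choices for approximating an inverse-square law):

```python
def annular_density(d, radii, densities):
    """Piecewise-constant distance bias: density densities[0] on the disk
    d <= radii[0], densities[k] on the annulus radii[k-1] < d <= radii[k],
    and zero beyond the outermost radius."""
    for R, rho in zip(radii, densities):
        if d <= R:
            return rho
    return 0.0

def inverse_square_fit(radii, K):
    """Hypothetical fit: set each band's density to K/d**2 evaluated at
    the band's midpoint, approximating rho(d) = K/d(j,i)**2."""
    mids, prev = [], 0.0
    for R in radii:
        mids.append((prev + R) / 2.0)
        prev = R
    return [K / m ** 2 for m in mids]
```

The band densities then decrease monotonically with distance, as the text requires of ρ(d).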

Connections between Subsets of 𝒩

Just as in the case of networks with uniform random distributions of connections, it is necessary to calculate the expected number of connections λ(B→A) that a subset A of 𝒩 receives from another subset B. It will be seen that this is no longer such a simple chore.

Pass over for the moment to the continuous case as an approximation to the discrete networks of the model. Consider 𝒩 to be a closed and bounded subset of the real plane E² and let d be the usual metric over E². Of course, the discrete case may always be re-obtained by appropriate choice of grid size. Let A and B be subsets of 𝒩 and assume the intersection A∩B = ∅; A and B are of arbitrary shapes. The density function ρ(d) now takes the form ρ(w,z), the expected number of connections received at point w from a unit of area about point z. Then, if w ∈ A, z ∈ B, the expected number of connections received by an element of area about the point w from an element of area about the point z ∈ B is dA(w) ρ(w,z) dB(z), and from all of B is dA(w) ∫_(z∈B) ρ(w,z) dB(z); therefore, all of A expects to receive

(III)   λ(B→A) = ∫_(w∈A) dA(w) ∫_(z∈B) ρ(w,z) dB(z).

For arbitrary configurations of A and B, such an integral will be tedious to evaluate. In fact, integrals of form (III) are similar to those arising in physics in attempting to evaluate the electrostatic force field about a charged body of arbitrary shape; the difficulties encountered in their calculation are expected to be similar. Approximations to (III) shall be considered later (Section 4.3.4) for particularly simple forms of A and B. Moreover, the simple disk-type distribution will be seen

to be perfectly adequate for the present study, thereby avoiding some of the computational problems of the inverse-square type of distribution.

4.2.3 Brief Contrast of Uniform and Distance-Bias Distributions

It has already been remarked that in the case of networks with uniform distributions of connections, there is no notion of neighborhoodness except in the trivial sense that any neuron of a network 𝒩 is equally likely to be a neighbor of any other neuron. This means that no localization of phenomena may be expected if the assignment scheme is reasonably random. Yet, with all the physiological results of Burns [1] and others in mind, as well as the considerations of Hebb's theory, localization is intuitively exactly the sort of thing required in the models. Therefore, rather naturally, the distance-bias case arises, in which activity may occur in one part of the network without immediately affecting another (distal) part. It will be seen, both experimentally and theoretically, in the sequel, just how important this consideration really is. Because of their inherent simplicity, however, networks with uniform distributions are chosen as the starting point. The results obtained will be used as a basis for analyzing networks with distance-bias of the simple disk type outlined above.

4.3 Stimulus-Free Behavior in Networks with Cycles

In Chapter 3, the response of certain networks to various types of stimulus patterns was studied. As there was no feedback among the neurons of those networks, they were completely stimulus-dependent and there was no question of stimulus-independent activity. In the case of networks with cycles, however, whichever of the two basic types discussed above be chosen, it is entirely possible that a network, once certain neurons have

been made to fire at t = 0, maintain an activity independently of external stimuli for a large number of consecutive time steps: the number of neurons firing at time t, F(t), does not become zero until t becomes very large. This, of course, is due to the feedback present in such models. The following considerations illustrate the nature of this feedback together with several other important factors of interest at this time.

4.3.1 Steady-State Behavior

The sequence of firings of neurons of 𝒩, F(0), F(1), F(2), ..., F(t), F(t+1), ..., will be called the behavior of 𝒩 that occurs as a consequence of F(0) neurons being caused to fire at t = 0. It is necessary to determine how this behavior depends upon the initial set of neurons fired, F(0), and the network parameters r_i, φ_i, i = 1, ..., N, and λ_ji, i,j = 1, ..., N, together with the network functions V, S, and Φ.

Example of a Simple Cycle

Suppose F(0) consists solely of one neuron i_1 and also that the assignment of connections over 𝒩 is such that there is a cycle i_1 → i_2 → i_3 → ... → i_k → i_1 of neurons of 𝒩. Assume, for simplicity, that these neurons have no other inputs. What happens after stimulation of i_1 depends critically upon the parameters and functions listed above. For example, if λ_(i_1 i_2) is such that the synapse value from i_1 to i_2 is less than the effective threshold of i_2, then i_2 will not fire at t = 1 and F(1) will, of course, be zero. Likewise for i_2 to i_3, ..., i_(k−1) to i_k. On the other hand, if the synapse-values and effective threshold values are appropriately set, then the firing of i_1 at t = 0 will cause i_2 to fire at t = 1, ..., and i_(k−1) firing at t = k − 2 will cause i_k to fire at t = k − 1.
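The dependence on these settings can be made concrete with a toy simulation (illustrative Python; the fatigue rule here — firing adds `cost` units, a quiet step removes one, and a neuron whose fatigue has reached `limit` cannot fire — is a deliberate simplification of the model's actual recovery and fatigue functions):

```python
def run_cycle(k, cost, limit, steps):
    """Pulse in a closed cycle i1 -> i2 -> ... -> ik -> i1, no other
    inputs.  A neuron fires at t+1 if its predecessor fired at t and its
    fatigue is below `limit`.  Returns F(t), the firing count per step."""
    fired = [i == 0 for i in range(k)]             # i1 fired at t = 0
    fatigue = [cost if i == 0 else 0 for i in range(k)]
    F = []
    for _ in range(steps):
        nxt = [fired[(i - 1) % k] and fatigue[i] < limit for i in range(k)]
        fatigue = [f + cost if n else max(0, f - 1)
                   for f, n in zip(fatigue, nxt)]
        fired = nxt
        F.append(sum(fired))
    return F
```

With k = 10, cost = 3, limit = 6, fatigue decays fully between laps and the pulse circulates indefinitely (F(t) = 1 throughout); with k = 3, cost = 5, limit = 6, fatigue accumulates faster than it decays and the pulse is extinguished — the two regimes described in the text.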

The firing of i_k at t = k − 1 now will cause i_1 to fire at t = k, provided that the effective threshold of i_1 has recovered sufficiently from the firing at t = 0. Similarly, i_1 will cause i_2 to fire at t = k + 1 if i_2 has recovered sufficiently, etc. Hence, for appropriate initial settings of the recovery and fatigue states, the synapse-levels, and the functions V, S, and Φ, a "pulse" may be caused to circulate around the closed cycle i_1 → i_2 → ... → i_k → i_1. If the length k of the cycle is equal to or greater than the fatigue period, it will continue to circulate indefinitely; but if k is short with respect to the fatigue period (the usual case), the fatigue values of the neurons will increase until at some point the firing of one of the neurons is suppressed and the pulse is extinguished. This simple example illustrates the basic ideas very well: the choice of network parameters and functions must be made judiciously to insure that pulses may circulate about in a network with cycles.

Cycles of Subsets

Now let 𝒩 be a network with density ρ (which may be a function of distance), ρ > 1. There may then be many cycles involving any particular neuron, and the situation is not as simple as above. However, the same general conclusions may be drawn, as follows. Let F(t) denote the subset of neurons of 𝒩 that fire at time t, where F(0) is the set of neurons caused to be fired at t = 0. Let S_1 be the subset of neurons of 𝒩 that receive one or more connections from F(0); F(1) ⊆ S_1. Whether or not a neuron of S_1 fires, i.e., is a neuron of F(1) as well, depends upon its effective threshold and the total number of inputs it receives from F(0). Likewise, if S_(k+1) is the subset of neurons of 𝒩 that receive one or more connections from F(k), then whether a neuron in S_(k+1) fires, i.e., is in F(k+1), is determined by

its effective threshold and total input. Therefore, the following general process is obtained:

F(0) → S_1 ⊇ F(1), F(1) → S_2 ⊇ F(2), F(2) → S_3 ⊇ F(3), ...,

where the notation F(k) → S_(k+1) means that S_(k+1) is the subset of 𝒩 receiving connections from F(k). The successive S_i's need not be mutually disjoint; in general, S_k ∩ S_j ≠ ∅ for j,k = 0, 1, 2, .... However, since a neuron cannot fire in its absolute refractory period, the F(i)'s will be disjoint up to at least the r_a-th successor:

F(i) ∩ F(i+1) = ∅ = F(i) ∩ F(i+2) = ... = F(i) ∩ F(i+r_a−1), for i = 1, 2, ...,

but it may occur that F(i) ∩ F(j) ≠ ∅ for j ≥ i + r_a.

To avoid F(t) going to zero as t increases, the initial choice of the λ_ji's, r_i's, etc., must be made very carefully. Assuming such a choice is possible (see next section), the successor sets S_i will exhaust 𝒩 after some number k_0 of time steps. Likewise, tracing the successor sets F(i), after a certain number k_1 of time steps every neuron of 𝒩 would be expected to occur in some F(i). This does not mean that an exact cycle of subsets

F(0) → F(1) → F(2) → ... → F(k) → F(0)

will be obtained, since, for example, F(ℓ) may overlap F(0) for some 0 < ℓ < k and the neurons in successive F(i)'s become very shuffled. However, it may be that after F(k) fires,

a large part of the original F(0) fires plus some new neurons, say F(0)'. Likewise, F(0)' may cause a set F(1)' to fire where F(1) ∩ F(1)' ≠ ∅, etc. A quasi-cyclic firing of subsets would be obtained:

F(0) → F(1) → ... → F(k) → F(0)' → F(1)' → ... → F(k)' → F(0)'' → F(1)'' → ....

Exactly how far this quasi-cyclic sequence deviates from the purely cyclic one depends upon the size of 𝒩 and the density ρ, the network parameters and function settings, in particular the threshold curve and the distribution of positive versus negative connections in 𝒩.

Implication of a Stable, Steady-State Behavior

These considerations point to the possibility of a stable, steady-state behavior, that is, a sequence F(0), F(1), ..., F(t), ... where the expected values of the F(t) are the same for all time, i.e., E(F(t)) = constant = F_b for all t. Each neuron of 𝒩 will then fire at an expected rate f_b, the background rate of the network.

Motivation for Considering Steady-State Behavior

The concept of stable, steady-state behavior is intimately connected with the basic objective of this study, namely, the formation and development of cell-assemblies. A cell-assembly is to come into existence, as a result of appropriately applied stimulus to the network, via the mechanism of the synapse-level growth law. A cell-assembly may be regarded as a learned response to the given stimulus. The precise physical identification of a cell-assembly is a difficult matter and forms the goal of Sections 4.4, 4.5, and 4.6 below. For now it is sufficient to note that a given cell-assembly is identified structurally by means of conditions on the synapse-levels of connections among the neurons of the assembly as well as between the neurons of the assembly and the remainder of the network: letting C be the neurons of the assembly, then the structure of C

is determined by a set T_C of conditions on λ_ji for all j,i ∈ C and on λ_kl for k ∈ C, l ∉ C, or k ∉ C and l ∈ C.

Since permanent learning of the model resides in the synapse-levels, the primary condition on steady-state behavior is that it not perturb existing synapse-levels so that the conditions T_C no longer obtain, therefore possibly disrupting an existing cell-assembly, nor that it give rise to any new cell-assemblies. In other words, steady-state should act to preserve the status quo of the network, structurally speaking. This is essentially the reason behind the derivation of the equation

f_b = D(λ)/[U(λ) + D(λ)]

in Section 2.3.4, which relates the nominal system average or background firing rate f_b of 𝒩 to the probabilities U(λ) and D(λ). Since the firing rate of a neuron is determined by the threshold and fatigue functions, these two functions must be adjusted to preserve an overall firing rate of f_b. Thus, for a network to operate at steady-state with a background firing rate f_b, conditions are imposed on all the network functions V, Φ, and S.

The next question is: what should F_b be? The answer to this brings out yet another facet of steady-state behavior. It is clear that F_b cannot be 0. Suppose a network 𝒩 has been quiescent for several hundred time steps, say, F(t) = 0 for t = 0, 1, ..., 200. All neurons will be completely recovered with respect to threshold and fatigue: for all i ∈ 𝒩, r_i = r_max and φ_i = φ_max. For simplicity, assume all synapse-levels are set to a common value λ_0 so that S(λ_0) is moderately positive and that V_q = S(λ_0) (quiescent value of threshold). If a stimulus is presented at t = 201 to a subset Σ_0 of 𝒩, causing all neurons of Σ_0 to fire, then every neuron of 𝒩 − Σ_0 that receives connections from Σ_0 will fire at t = 202. Denote the set of such neurons by Σ_1. In general, Σ_1 will be larger than Σ_0. Likewise, Σ_1 will cause a set Σ_2 of neurons to fire at t = 203. Again Σ_2 will be larger than Σ_1, etc. The sets Σ_i, i = 0, 1, 2, ..., will continue to grow in cardinality until 𝒩 = Σ_0 ∪ Σ_1 ∪ Σ_2 ∪ ... is exhausted: there exists a k_0 such that

Σ_0 ⊂ Σ_1 ⊂ ... ⊂ Σ_(k_0),

and a k_1, k_0 < k_1, such that

Σ_(k_0) ⊇ Σ_(k_0+1) ⊇ ... ⊇ Σ_(k_1) = ∅.

The numbers k_0 and k_1 depend upon N, ρ, the threshold curve, and Σ_0. Generally, k_1 is of the order of r_max. This is an extreme case; the same principle holds true, however, if the synapse-levels vary over a range of values (provided there are some positive values, of course). The presence of negative connections merely tends to increase k_1.

Therefore, the following general principle may be stated: steady-state behavior in a network 𝒩 must be such that large groups of highly recovered neurons do not come into existence. Otherwise, the possibility of a violent oscillation or a series of such oscillations in F(t) exists. Worse still, such oscillations, as the example shows, are usually fatal: F(t) goes to 0.

These violent oscillations are undesirable from many points of view: first of all, they may be fatal. Secondly, if they arise as a consequence

of application of stimulus to the network (presumably operating up to that time in steady-state), they could act to make 𝒩 insensitive to future stimulus; in this case, 𝒩 is deprived of opportunities for further learning. The condition of violent oscillations in a network 𝒩 will be referred to as "epilepsy," in analogy with the neuropathological condition found in human beings. It is interesting to note the similarities of the sequences {F(t)} with an electroencephalogram of a patient undergoing an epileptic seizure.

To summarize: the functions of steady-state behavior are two: (1) to preserve the status quo of any existing structures and, likewise, not to give rise to any new structures in the network; (2) to preserve a distribution of neurons over recovery states so that large groups of highly recovered neurons do not evolve. From these, certain conclusions may be drawn, to be made more precise in the sequel: (3) F_b (= E(F(t))) must not be too small; (4) F(t) cannot vary too far from F_b; (5) a single application of a moderate-sized stimulus to a network operating in steady-state must not produce epilepsy.

It will be seen that the effect of stimulus upon a network operating in steady-state is similar to modulation of a carrier wave by an information-bearing signal in AM radio transmission.

Notice that the symbol F(t) has been used both to denote the set of neurons firing at t and for the cardinality of this set. Since it will always be clear from context which is meant, only F(t) will be written from now on.

4.3.2 The Threshold Curve and Steady-State Analysis

The purpose of this section is to display the role of the threshold

curve in determining steady-state behavior. A simple computational scheme is given whereby the behavior of a given network with uniform random distribution of connections may be predicted as a function of the size N, the density ρ, the distribution of synapse values, the form of the threshold curve V(r), and the number of neurons firing at t = 0, F(0). This scheme allows computation of the expected value of F(t+1) as a function of F(t), t = 0, 1, 2, .... While it is tedious to perform the calculations of the F(t)'s by hand for more than about twenty time steps, yet within this relatively short time span tendencies toward oscillatory or stable behavior are readily discerned. Using the network parameter values thus obtained for a stable case, the actual behavior of the corresponding model may be tested and compared with the predicted behavior. Gross deviations of the experimentally obtained behavior from the predicted would indicate either statistical anomalies in the model or the malfunction of some network function. An example of the first case would be a skewing in the connection distribution. In the second case, the calculations might indicate that a given network should be stable, but the experimentally obtained behavior might develop fatal oscillations after, say, several thousand time steps. While a statistical deficiency in the model might explain this, the long period of stable behavior suggests the cumulative effect of some network function, such as fatigue or the synapse-level growth law, that has little effect on behavior over relatively short time intervals but might well have a deleterious effect over longer intervals if not properly adjusted initially.

The scheme is presented first in its simplest form, in which the distribution of synapse-values over connections is uniform, i.e., all the S_ji's are equal. It is then modified to comprise progressively more

complicated distributions. The importance of negative synapse-values as a source of negative feedback is emphasized. This scheme is modified in Section 4.3.4 to treat networks with distance-bias.

Consider a network 𝒩 of size N and density ρ, ρ not a function of the distance. Let V(r) be the threshold function of 𝒩, where r_a and r_q are the absolute refractory period and the quiescent value of the recovery respectively. Let the initial assignments of the λ_ji(0)'s be λ_ji(0) = λ_0 = constant for all j,i = 1, 2, 3, ..., N, such that S_ji(0) = S(λ_ji(0)) = S(λ_0) = V_0, where V_0 > 0. The threshold curve is expressed in terms of V_0: V(r_i) = m(r_i)V_0, i = 1, 2, ..., N. The fatigue function is assumed to be constant for the moment, Φ(φ_i(t)) ≡ 0 for i = 1, ..., N. V(r_i) is then the effective threshold of neuron i. The quiescent value V_q will be taken as V_0: V(r_q) = V_0. In other words, a fully recovered neuron may be fired by a single synapse of level λ_0.

Define R_r(t) as the set of neurons of 𝒩 that have recovery state r at time t; R_0(t) will be the set of neurons that fire at t, R_0(t) = F(t). The set R_rq will be regarded as the union of all the R_r's for r ≥ r_q, since a neuron need not fire when it recovers to r_q:

R_rq = R_rq ∪ R_(rq+1) ∪ ... ∪ R_rm.

The set 𝓡(t) = {R_r(t): r = 0, 1, ..., r_q} is the distribution of neurons over recovery states at time t. Given 𝓡(0), one may compute successively the expected values of 𝓡(1), 𝓡(2), ..., 𝓡(t), 𝓡(t+1), ..., as follows. Suppose R_0(t) neurons fire at time t. The expected number of neurons of 𝒩, N_k(t), that receive at least k connections, k > 0, from R_0(t) may be computed from the equation

N_k(t) = N·Π_k(λ(t)), where λ(t) = ρR_0(t)/N.
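The quantities N_k(t) are thus Poisson tail probabilities scaled by N; a sketch of the computation (illustrative Python):

```python
import math

def poisson_tail(k, lam):
    """Pi_k(lambda): probability that a neuron receives at least k
    connections from R0(t), Poisson with mean lambda."""
    term, cdf = math.exp(-lam), 0.0
    for i in range(k):          # accumulate P(X <= k-1)
        cdf += term
        term *= lam / (i + 1)
    return 1.0 - cdf

def N_k(N, rho, R0, k):
    """N_k(t) = N * Pi_k(lambda(t)), with lambda(t) = rho * R0(t) / N."""
    lam = rho * R0 / N
    return N * poisson_tail(k, lam)
```

For the sample network of this section (N = 400, ρ = 6, R_0 = 20) this gives λ = 0.3, Π_1 ≈ .26, and Π_2 ≈ .04, the magnitudes appearing in Table 4.1.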

λ(t) is the expected number of connections received by a neuron of 𝒩 from R_0(t), and Π_k(λ(t)) is the probability that such a neuron receive at least k connections from R_0(t), k = 1, 2, ...:

Π_k(λ(t)) = Σ (ℓ = k to ∞) e^(−λ(t)) λ(t)^ℓ / ℓ!.

From the threshold curve and 𝓡(t), one can determine, for k = 1, 2, ..., the subsets M_k(t) ⊆ 𝒩 of neurons requiring at least k connections to fire. The probability that a neuron lie in both N_k(t) and M_k(t), i.e., that it receive at least k connections from R_0(t) and require at least k connections to fire, may be approximated for large N by (N_k(t)/N)(M_k(t)/N), so that the expected number of such neurons is N_k(t)M_k(t)/N = Π_k(λ(t))M_k(t). Then, the expected set of neurons firing at t + 1 is given as

R_0(t+1) = Σ (k ≥ 1) N_k(t)M_k(t)/N,

and the new R_r's, r ≥ 1, are given by

R_r(t+1) = R_(r−1)(t) − R*_(r−1)(t)   (expected values),

where R*_r(t) is the set of neurons of R_r(t) that are expected to fire at t: R*_r(t) = R_r(t) ∩ N_k(t) ∩ M_k(t) for the k determined by r.

In the preceding section it was noted that in steady-state behavior the expected value of F(t) is constant for all t: F(t) = constant = F_b. Gross deviations in F(t) from F_b tend to produce fatal oscillations. The problem is to design the threshold curve so that such deviations do

not occur under conditions of no stimulus to the network. This is now readily done using the scheme developed above.

Suppose the initial distribution 𝓡(0) is such that

R_0(0) = R_1(0) = ... = R_(rq−1)(0) = F_b,

with the remaining neurons in R_rq(0). Now make the following assumptions:

(1) the parameters ρ and R_rq(0) are chosen such that

λ(0) = ρR_0(0)/N = ρF_b/N and Π_1(λ(0))·R_rq(0) = F_b.

Notice that the set R_rq(t) is identical to M_1(t);

(2) the threshold curve is such that the number of connections k(r) required to fire a neuron in recovery state r, as determined by the curve, is so large for r = r_a, r_a + 1, ..., r_q − 1 that effectively only the neurons of R_rq(t) = M_1(t) are fireable.

From (1) it follows that

R_0(1) = Π_1(λ(0))·R_rq(0) = F_b.

Moreover,

λ(1) = ρR_0(1)/N = ρF_b/N = λ(0).

From (2) it follows that R_r(1) = F_b, r = 1, 2, ..., r_q − 1, and that R_rq(1) = R_rq(0). Therefore, by (1) again,

R_0(2) = Π_1(λ(1))·R_rq(1) = F_b,

and by (2), R_r(2) = F_b, r = 1, 2, ..., r_q − 1, with R_rq(2) = R_rq(1) = R_rq(0).

By induction, it is clear that for t = 0, 1, 2, ...,

R_0(t) = F_b (expected number of neurons firing at t)

and

R_r(t) = F_b, r = 1, 2, ..., r_q − 1,

with R_rq(t) = R_rq(0).

To summarize: pick a threshold curve with the property that the only fireable neurons at time t will be those in R_rq(t). Choose ρ, N, and R_rq(0) so that precisely F_b neurons of R_rq(t) are expected to fire at time t. Figures 4.1, 4.2, and 4.3 illustrate the concepts of this section.

There is an interesting consequence of this analysis: since the expectations of R_0(t), R_1(t), ..., R_(rq−1)(t) are all equal to F_b, then r_q × F_b ≤ N. From Figure 4.1, it is seen that a neuron is expected to fire in the recovery range r_q ≤ r ≤ r_m; the expected firing period, N/F_b, thus lies between r_q and r_m. It is convenient to regard this expectation as another variable r'_q and take N/F_b = r'_q, where r'_q relates to the actual limits on recovery by the bounds r_q ≤ r'_q ≤ r_m. The purpose behind this is that it allows greater freedom in setting the bounds r_q and r_m used in determining R_rq(t) = M_1(t). Therefore, in place of the inequality above, the following shall be used for relating F_b and N:

F_b = N/r'_q.

This analysis is based on the computation of expected values. As such, it proved a very effective guide to correct setting of network

parameters. In practice, of course, there is a variation in the R_r(t) for r = 0, ..., r_q, and neurons of M_2(t), M_3(t), ... may fire. Within certain limits in the variance of F(t), no harm will result. It is important to note, however, that no mechanism has yet been presented for damping out significant transient deviations of F(t) from F_b. In other words, to obtain steady-state with the techniques given thus far, the network parameters have to be very precisely tuned, with little room for variation. This will be the subject of the following Section 4.3.3. A sample steady-state calculation follows. In general it was found sufficient to perform the calculations for about thirty time steps; that is, if the calculations to that point had not produced violent oscillations, then the simulated network would be stable.

Sample Stability Calculation

This calculation is given in some detail in order that the techniques used in Chapters 5 and 6 may be perfectly clear. Consider the threshold curve of Figure 4.4: r_a = 3, r_q = 16, r_m = 19 (so that r'_q = 20), and V_0 = 1.0. There are eight distinct sets M_k(t), k = 1, 2, ..., 8. The network is assumed to contain 400 neurons, N = 400. Since there are twenty basic units of recovery (all the recovery classes are expected to have equal cardinality in steady-state), F_b will be chosen as

F_b = N/r'_q = 400/20 = 20.

For this network, ρ = 6, and the number of neurons that are forced to fire at t = 0 is twenty, F(0) = 20. These neurons are assumed to come from R_rq(0).

The calculations are presented in Table 4.1. They are carried out to t = 30. Notice that a few neurons enter R_0 from M_2, the predominant part, of course, coming from M_1. The values of Π_k(λ(t)) are taken from

standard tables of the cumulative Poisson distribution.¹ For the range of λ(t) involved here, Π_k(λ(t)) is negligible for k > 2.

[Figure 4.1: paired graphs over the common abscissa r = 0, 1, 2, ..., r_a, ..., r_q, r_q+1, ..., r_m; (a) the recovery sets R_r(t), each of expected size F_b, with the fired neurons of R_rq(t) passing into R_0(t+1); (b) the threshold curve V(r) = m(r)V_0.]

Figure 4.1. Graph of 𝓡(t) (a) vis-à-vis V(r) (b). Interpretation: Graphs (a) and (b) are drawn relative to the same scale on the abscissa. In (a), the ordinate F_b represents the expected size of the R_r(t)'s. In (b), V(r) is assumed to be infinite for 0 ≤ r ≤ r_a − 1; V(r) is expressed in terms of multiples of V_0. In the current discussion, it is assumed that V(r) > V_0 for r = 0, 1, ..., r_q − 1.

¹ Burington, R. S. and May, D. C., Handbook of Probability and Statistics with Tables, Handbook Publishers, Inc., Sandusky, Ohio, p. 263.

Figure 4.2. Graph of R(t) with the M_k(t) (a), vis-à-vis V(r) (b). Interpretation: in (b), a continuous threshold curve V(r) is approximated by a step-function V*(r) so that V*(r) = kV_0, k an integer, on successive subintervals of [r_a, r_m], k = 1, 2, .... The corresponding points r_k in (a) determine the boundaries of the M_k(t):

    M_k(t) = (r_{k+1} - r_k) F_b    (expected value).

Figure 4.3. Stylized Graph Indicating Relationship of M_k(t) and N_k(t) vis-à-vis R(t). Interpretation: the set N_k(t) spans several recovery sets R_r(t); it is therefore shown as the hatched area above. The ordinate does not indicate the cardinality of R_r(t) ∩ N_k(t), but rather represents a stylized set boundary.

Figure 4.4. Threshold Curve for Sample Calculation. The step-function threshold falls from its maximum at r = r_a = 3 to V_0 at r = r_q = 16, with r_m = 19. Interpretation: initially, neurons are distributed randomly and independently over recovery states so that E(R_r(0)) = 20, r = 0, 1, ..., 19. Clearly, E(M_8(0)) = 20, E(M_7(0)) = E(M_6(0)) = ... = E(M_2(0)) = 40, and E(M_1(0)) = 80.
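Today the table lookup can be replaced by direct computation: π_k(λ) is the upper tail of a Poisson distribution with mean λ. The sketch below is ours, not the report's; the spot-check reproduces the π_1 and π_2 entries used in the first step of Table 4.1, where λ(0) = ρF(0)/N = 6 × 20/400 = 0.3.

```python
import math

def pi(k, lam):
    """pi_k(lambda) = P(X >= k) for X ~ Poisson(lambda): the probability
    that a neuron receives at least k synapses from the firing set R0(t)."""
    return 1.0 - sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(k))

# lambda(0) = 0.3; these round to the .26 and .04 entries of Table 4.1:
print(round(pi(1, 0.3), 2), round(pi(2, 0.3), 2))   # 0.26 0.04
```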

Table 4.1. Sample Steady-State Calculation

(The columns M2 and M1 aggregate recovery states 14-15 and 16-19 respectively; each update reads: previous value, minus neurons fired, plus inflow from the preceding class, minus outflow to the next.)

 t    λ    R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 R11 R12 R13   π2   M2              π1   M1
 0  .300   20 20 20 20 20 20 20 20 20 20 20  20  20  20    --   40              --   80
 1  .345   23 20 20 20 20 20 20 20 20 20 20  20  20  20   .04   40-2+20-19=39  .26   80-21+19=78
 2  .375   25 23 20 20 20 20 20 20 20 20 20  20  20  20   .05   39-2+20-18=39  .29   78-23+18=73
 3  .375   25 25 23 20 20 20 20 20 20 20 20  20  20  20   .06   39-2+20-18=39  .31   73-22+18=69
 4  .345   23 25 25 23 20 20 20 20 20 20 20  20  20  20   .06   39-2+20-18=39  .31   69-21+18=66
 5  .285   19 23 25 25 23 20 20 20 20 20 20  20  20  20   .04   39-2+20-18=39  .26   66-17+18=67
 6  .270   18 19 23 25 25 23 20 20 20 20 20  20  20  20   .02   39-1+20-18=40  .25   67-17+18=68
 7  .255   17 18 19 23 25 25 23 20 20 20 20  20  20  20   .02   40-1+20-19=40  .24   68-16+19=71
 8  .255   17 17 18 19 23 25 25 23 20 20 20  20  20  20   .02   40-1+20-19=40  .22   71-16+19=74
 9  .255   17 17 17 18 19 23 25 25 23 20 20  20  20  20   .02   40-1+20-19=40  .22   74-16+19=77
10  .270   18 17 17 17 18 19 23 25 25 23 20  20  20  20   .02   40-1+20-19=40  .22   77-17+19=79
11  .285   19 18 17 17 17 18 19 23 25 25 23  20  20  20   .02   40-1+20-19=40  .24   79-19+19=79
12  .315   21 19 18 17 17 17 18 19 23 25 25  23  20  20   .02   40-1+20-19=40  .25   79-20+19=78
13  .330   22 21 19 18 17 17 17 18 19 23 25  25  23  20   .04   40-2+20-19=39  .26   78-20+19=77
14  .345   23 22 21 19 18 17 17 17 18 19 23  25  25  23   .04   39-2+20-19=38  .27   77-21+19=75
15  .360   24 23 22 21 19 18 17 17 17 18 19  23  25  25   .05   38-2+23-18=41  .29   75-22+18=71
16  .345   23 24 23 22 21 19 18 17 17 17 18  19  23  25   .05   41-2+25-18=46  .30   71-21+18=68
17  .330   22 23 24 23 22 21 19 18 17 17 17  18  19  23   .05   46-2+25-21=48  .29   68-20+21=69
18  .315   21 22 23 24 23 22 21 19 18 17 17  17  18  19   .04   48-2+23-23=46  .27   69-19+23=73
19  .315   21 21 22 23 24 23 22 21 19 18 17  17  17  18   .04   46-2+19-23=40  .26   73-19+23=77
20  .330   22 21 21 22 23 24 23 22 21 19 18  17  17  17   .04   40-2+18-21=35  .26   77-20+21=78
21  .330   22 22 21 21 22 23 24 23 22 21 19  18  17  17   .04   35-1+17-17=34  .27   78-21+17=74
22  .315   21 22 22 21 21 22 23 24 23 22 21  19  18  17   .04   34-1+17-16=34  .27   74-20+16=70
23  .285   19 21 22 22 21 21 22 23 24 23 22  21  19  18   .04   34-1+17-16=34  .26   70-18+16=68
24  .270   18 19 21 22 22 21 21 22 23 24 23  22  21  19   .02   34-1+18-16=35  .25   68-17+16=67
25  .255   17 18 19 21 22 22 21 21 22 23 24  23  22  21   .02   35-1+19-17=36  .24   67-16+17=68
26  .240   16 17 18 19 21 22 22 21 21 22 23  24  23  22   .02   36-1+21-17=39  .22   68-15+17=70
27  .240   16 16 17 18 19 21 22 22 21 21 22  23  24  23   .02   39-1+22-18=42  .21   70-15+18=73
28  .240   16 16 16 17 18 19 21 22 22 21 21  22  23  24   .02   42-1+23-20=44  .21   73-15+20=78
29  .255   17 16 16 16 17 18 19 21 22 22 21  21  22  23   .02   44-1+24-21=46  .21   78-16+21=83
30  .285   19 17 16 16 16 17 18 19 21 22 22  21  21  22   .02   46-1+23-22=46  .22   83-18+22=87
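The bookkeeping of Table 4.1 can be sketched as a short program. The sketch below is a reconstruction, not the report's original simulator: it keeps recovery classes 0-13 individually, the set M_2 as two age slots, and M_1 as a pool (unfired neurons simply remain there), and it propagates expected values without the hand rounding of the table, so its numbers drift slightly from the printed ones while showing the same near-steady behavior around F_b = 20.

```python
import math

def pi(k, lam):
    """P(Poisson(lam) >= k)."""
    return 1.0 - sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(k))

def run(steps=30, N=400, rho=6):
    """Expected-value bookkeeping of Table 4.1.  Returns the F(t) trajectory."""
    R = [20.0] * 14          # recovery classes 0..13, each starting at F_b = 20
    m2 = [20.0, 20.0]        # classes 14 and 15 (threshold 2*V0)
    m1 = 80.0                # classes 16-19 pooled (threshold V0)
    history = [R[0]]
    for _ in range(steps):
        lam = rho * R[0] / N            # expected synapses from R0(t)
        f2 = pi(2, lam) * sum(m2)       # firings out of M2
        f1 = pi(1, lam) * m1            # firings out of M1
        keep2 = 1.0 - pi(2, lam)        # unfired fraction of M2
        m1 = m1 - f1 + m2[1] * keep2    # M2's older slot feeds M1
        m2 = [R[13], m2[0] * keep2]     # R13 feeds M2; younger slot ages
        R = [f1 + f2] + R[:13]          # everyone who fired restarts at R0
        history.append(R[0])
    return history

print([round(f, 1) for f in run()])
```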

4.3.3 Negative Feedback

The logic of the computational scheme presented above for determining steady state hinges upon the set R_{r_q} remaining constant in expected size, i.e., E(R_{r_q}(t)) = constant = R̄_{r_q} for all t. Should R_{r_q}(t) decrease below R̄_{r_q} for a few consecutive time steps, F(t) correspondingly would decrease below F_b. As F(t) decreases, R_{r_q}(t) gradually increases. Unless F(t) went to zero, F(t) would then increase. Since R_{r_q}(t) might increase to a value greater than R̄_{r_q}, F(t) may increase to a value greater than F_b. This again would tend to deplete R_{r_q}(t), and again F(t) would be expected to decrease. Within a certain variance in F(t), this is precisely the behavior one would expect: F(t) tends to oscillate about its mean F_b. However, under certain circumstances, a transient deviation in F(t) may be amplified so that fatal oscillations occur. Suppose, as R_{r_q}(t) decreases in one of the "ebb" periods, hence resulting in a decrease of F(t), the sets R_{r_q - 1}(t), R_{r_q - 2}(t), ..., R_{r_q - τ_1}(t) correspond to the sets of neurons that fired during a previous "ebb" period (τ_1 is the length of the transient decrease of F(t) from F_b). Then R_{r_q - 1}(t), ..., R_{r_q - τ_1}(t) are less than their expected values. This tends to decrease R_{r_q}(t) even further. Likewise, as R_{r_q}(t) increases in one of the "flow" periods, resulting in an increase of F(t), the sets R_{r_q - 1}(t), ..., R_{r_q - τ_2}(t) may correspond to sets of neurons that fired during a previous "flow" period (τ_2 is the length of the transient increase of F(t) above F_b during that period). Then R_{r_q - 1}(t), ..., R_{r_q - τ_2}(t) are greater than their expected values, and R_{r_q}(t) increases above R̄_{r_q}.

Thus, a transient deviation in F(t) from F_b may result in successive undamped amplifications of the deviation, eventually leading to fatal oscillations. Whether this occurs or not depends upon the phase

relationship between R_{r_q}(t) and the sets R_{r_q - 1}(t), ..., R_{r_q - τ_1}(t) or R_{r_q - 1}(t), ..., R_{r_q - τ_2}(t); if R_{r_q}(t) is increasing but R_{r_q - 1}(t), ..., R_{r_q - τ_1}(t) correspond to a previous "ebb," the effect of the increase in R_{r_q}(t) may be cancelled out. Similarly if R_{r_q}(t) is decreasing. This phase, of course, is a function of τ_1 and τ_2. In general, however, the transients do not appear to cancel out. The chief reason for this is that the lengths τ_1 and τ_2 are not constant. Thus, for one transient "ebb," say, τ_1 = τ*, but for the next "ebb," τ_1 = τ* + 3. The next time the corresponding fired sets arrive at r_q, the decrease in R_{r_q}(t) is even greater than before, causing a further increase in τ_1, say τ_1 = τ* + 5, etc. Likewise, during transient "flows," as F(t) increases above F_b, it robs the sets R_{r_q}(t), R_{r_q - 1}(t), etc., of more than the expected number of neurons required for steady-state. This prolongs the transient, increasing τ_2, and incidentally makes the following "ebb" the more severe, since fewer neurons will be available to fire.

The principles of the above paragraphs are illustrated by a continuation of the computation of Table 4.1. Notice that the oscillations become more and more severe until F(t) goes to zero at t = 91. Interestingly enough, as will be discussed in Chapter 5, the corresponding simulated network turned out to be stable. This is related to the fact that both N and ρ were relatively small, ρ was not a function of the distance, and fatigue was present.

To conclude: while the scheme of Section 4.3.2 will tolerate small deviations in F(t), there is the definite possibility that a small transient deviation will eventually result in fatal oscillations. We wish now to examine some mechanism or combination of mechanisms that will counteract the effect of such transients. That is, we wish to implement

Table 4.2. Sample Steady-State Calculation (continued)

The computation of Table 4.1 is continued from t = 31 in the same format (t; λ; R_0 through R_13; π_2 with the update of M_2; π_1 with the update of M_1). The growth of the oscillations is carried by the successive values of F(t) = R_0(t):

    t = 31-45:  25 27 30 30 27 23 19 17 15 15 15 16 17 17 17
    t = 46-60:  17 17 19 23 31 39 41 34 23 14 24 12  8  5  5

Table 4.2. Sample Steady-State Calculation (concluded)

In the same format as Table 4.1, the successive values of F(t) = R_0(t) for the remainder of the computation are:

    t = 61-75:   6  7  8 12 20 31 46 64 67 34 14  7  5  5  5
    t = 76-91:   5  5  5  5  5  6  7 11 16 39 65 118 79 14 1 0

The oscillations grow until F(t) reaches zero at t = 91.

some source of negative feedback in the model.

Negative-Feedback Mechanisms

(a) The Fatigue Mechanism

Since the fatigue function effectively raises the threshold of a neuron firing at a rate above F_b, or conversely lowers it if the firing is at a rate below F_b, we might expect some assistance from this direction. That is, the neurons of an "ebb" period tend to fall below F_b, those in a "flow," above F_b. Conceivably, the fatigue might be sensitive enough to lower or raise the corresponding threshold quickly enough that large oscillations do not occur. However, the fatigue is designed to be effective over longer time intervals than are involved here, and to take effect rather slowly: it is a cumulative, relatively long-range function. What is needed here is a function that will be effective over relatively short time periods, in fact, fractions of a recovery period. The fatigue mechanism, therefore, is rejected for this purpose, although it will be useful over longer time intervals.

(b) Negative Connections

In the analysis of Section 4.3.2, it was assumed that the synapse-levels were all set to a common value λ_0, corresponding to a positive synapse value. This, of course, can be weakened to a distribution of positive synapse values (synapse-levels) about the mean λ_0. It is the purpose of this section to examine the effect of admitting negative synapse values on the network's behavior. It will be seen that this provides the negative feedback desired. To illustrate the principle involved, consider the simple scheme of Figure 4.5. S_1 and S_2 are two subsets of a network. Suppose the synapse values from neurons of S_1 to those of S_2 are as shown below

Figure 4.5. Illustration of Effect of Negative Connections. S_1 = {A, B, C} and S_2 = {D, E, F}; the connections from S_1 to S_2 carry the synapse values indicated on the figure (+1, -1, +2). (Assume a net stimulus of +1 synapses is sufficient to fire D, E, or F at t + 1.) If A, B, C fire at t, then D only fires at t + 1. If B and C fire at t, then E and F fire at t + 1. If C fires at t, then D, E, and F fire at t + 1.

Table 4.3. Sample Stability Calculations for Periodic Stimulus Case

 t    λ    R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 R11 R12 R13   M2              M1
 0  .300   20 20 20 20 20 20 20 20 20 20 20  20  20  20    40              80
 1  .405   27 20 20 20 20 20 20 20 19 20 19  20  19  20    39              77
 2  .405   27 27 20 20 20 20 20 20 20 19 20  19  20  19    39-2+20-19=37   77-25+19=71
 3  .375   25 27 27 20 20 20 20 20 20 20 19  20  19  20    37-2+19-19=35   71-23+19=67
 4  .345   23 25 27 27 20 20 20 20 20 20 20  19  20  19    35-2+20-19=34   67-21+19=65
 5  .285   19 23 25 27 27 20 20 20 20 20 20  20  19  20    34-1+19-19=33   65-18+19=66
 6  .270   18 19 23 25 27 27 20 20 20 20 20  20  20  19    33-1+20-19=33   66-17+19=68
 7  .360   24 18 19 23 25 27 20 20 20 20 20  20  20  20    33-1+19-19=32   68-16+19=71
 8  .330   22 24 18 19 23 25 27 20 20 20 20  20  20  20    32-1+20-19=32   71-21+19=69
 9  .300   20 22 24 18 19 23 25 27 20 20 20  20  20  20    32-1+20-19=32   69-19+19=69
10  .285   19 20 22 24 18 19 23 25 27 20 20  20  20  20    32-1+20-19=32   69-18+19=70
11  .285   19 19 20 22 24 18 19 23 25 27 20  20  20  20    32-1+20-19=32   70-18+19=71
12  .285   19 19 19 20 22 24 18 19 23 25 27  20  20  20    32-1+20-19=32   71-18+19=72
13  .390   26 19 19 19 20 22 17 18 19 23 25  27  20  20    32-1+20-19=32   72-18+19=73
14  .375   25 26 19 19 19 20 22 17 18 19 23  25  27  20    32-2+20-19=31   73-23+19=69
15  .345   23 25 26 19 19 19 20 22 17 18 19  23  25  27    31-2+20-19=30   69-21+19=67
16  .300   20 23 25 26 19 19 19 20 22 17 18  19  23  25    30-1+27-19=37   67-19+19=67
17  .270   18 20 23 25 26 19 19 19 20 22 17  18  19  23    37-1+23-19=40   67-17+19=69
18  .255   17 18 20 23 25 26 19 19 19 20 22  17  18  19    40-1+23-24=38   69-16+24=77
19  .375   25 17 18 20 23 25 19 19 19 19 20  22  17  18    38-1+19-22=34   77-17+22=82
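The behavior claimed for Figure 4.5 can be checked mechanically. The synapse values in the figure are only partly legible here, so the weight table below is one consistent assignment that reproduces the three stated firing patterns; it is illustrative, not necessarily the report's original values.

```python
# One weight assignment consistent with the firing patterns stated for
# Figure 4.5 (illustrative; the figure's own values are partly illegible):
W = {('A', 'D'): +1, ('B', 'D'): -1, ('C', 'D'): +1,
     ('A', 'E'): -2, ('B', 'E'): +1, ('C', 'E'): +1,
     ('A', 'F'): -2, ('B', 'F'): +1, ('C', 'F'): +1}

def fires(active, threshold=1):
    """Neurons of S2 whose net incoming stimulus is >= threshold."""
    return {post for post in 'DEF'
            if sum(W.get((pre, post), 0) for pre in active) >= threshold}

print(sorted(fires({'A', 'B', 'C'})))  # ['D']           : E and F suppressed
print(sorted(fires({'B', 'C'})))       # ['E', 'F']      : D suppressed instead
print(sorted(fires({'C'})))            # ['D', 'E', 'F'] : one firer "frees" all
```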

and suppose the synapse values S(A,D), S(B,D), ..., S(C,F) are as indicated in Figure 4.5, and that, at the time steps of interest, a net stimulus of +1 or more synapses is required to fire a neuron of S_2. If neurons A, B, and C fire at t, only one neuron, D, will fire at the next time step. If neurons B and C fire at t, then E and F fire at t + 1. Finally, if neuron C alone fires, then D, E, and F fire at t + 1. This suggests the following general conclusion: for certain distributions of synapse values over connections, the presence of negative connections suppresses the firing of certain neurons at t + 1 if F(t) > F_b, and "frees" certain neurons for firing at t + 1 if F(t) < F_b; if F(t) = F_b, then F(t+1) = F_b. Thus, the desired homeostatic mechanism is obtained. It remains to express it quantitatively for the general case of networks with uniform random distributions of connections.

Suppose the network density ρ is given as a sum

    ρ = ρ_{-s_0} + ... + ρ_{-1} + ρ_0 + ρ_1 + ... + ρ_{s_1}

where s_0 and s_1 are positive integers and ρ_s is the expected number of connections received (or emitted) by a neuron of the network with synapse-value s. Each ρ_s will be assumed to be an integer, although the analysis can be modified to comprehend fractional ρ_s. A synapse-value assignment scheme can be designed to yield this decomposition of ρ into s_0 + s_1 + 1 independent distributions of synapse-values s = -s_0, ..., -1, 0, +1, ..., s_1. In the analysis of Section 4.3.2, use was made of the probability π_k(λ(t)), the probability that a neuron receives ≥ k connections from R_0(t), the set of neurons firing at t. With positive and negative valued connections

present, π_k becomes a function of all the λ_s's:

    P_k* = P_k*(λ_{-s_0}(t), ..., λ_{-1}(t), λ_1(t), ..., λ_{s_1}(t)),

where, as before, λ_s(t) = ρ_s R_0(t)/N, s = -s_0, ..., -1, 1, ..., s_1. P_k* is the probability that the sum of all incoming synapses to the given neuron is ≥ k. Again, P_k* · M_k(t) is the expected number of neurons of M_k(t) that will fire at time step t. Since 0-valued connections do not affect the firing of a neuron, and the different synapse-value distributions are assumed to be independent, the presence of 0-valued connections will be ignored in the analysis. The role of these connections will be discussed in Chapters 6 and 7.

Let A_k^s and A_k^{*s} denote the following events:

    A_k^s:   "a given neuron receives exactly k connections with synapse-value s";
    A_k^{*s}: "a given neuron receives ≥ k connections with synapse-value s."

Clearly, A_k^{*s} = ∪_{j≥k} A_j^s. Denote the probability of an arbitrary event A by P(A). Since the events A_k^s, k = 0, 1, 2, ..., are mutually exclusive, by the law of total probability,

    P(A_k^{*s}) = P(∪_{j≥k} A_j^s) = Σ_{j=k}^∞ P(A_j^s).

By the multiplication law, for arbitrary A_k^{*s} and A_j^{*t}, s ≠ t,

    P(A_k^{*s} A_j^{*t}) = P(A_k^{*s}) P(A_j^{*t}).

We now give estimates for the probabilities P_k* for various values of k, s_0, and s_1.

k = 1. The sum of incoming synapses to a given neuron is +1 or greater if

the event E occurs, where

    Eq. 4.3.1    E = ∪_{Q=1}^∞ E_Q

and E_Q represents the event "the neuron receives ≥ Q positive synapses and ≤ Q - 1 negative synapses." The E_Q's are given by expressions of the form

    E_1 = [A_1^{*1} ∨ A_1^{*2} ∨ ... ∨ A_1^{*s_1}] A_0^{-1} A_0^{-2} ... A_0^{-s_0},

and so on, each E_Q pairing the admissible numbers of negative synapses with the sets of positive synapses sufficient to bring the net sum to +1 or more. Attention will be restricted in this discussion to two cases: s_0 = s_1 = 1 and s_0 = s_1 = 2. For the former,

    Eq. 4.3.2    E = [A_1^1 A_0^{-1}] ∨ [A_2^1 (A_0^{-1} ∨ A_1^{-1})] ∨ [A_3^{*1} (A_0^{-1} ∨ A_1^{-1} ∨ A_2^{-1})].

For the latter,

    Eq. 4.3.3    E = ∪_{i=1}^{6} E_i

where, for example, E_1 = [A_1^{*1} ∨ A_1^{*2}] A_0^{-1} A_0^{-2}, and E_2, ..., E_6 enumerate in the same fashion the combinations of ±1- and ±2-valued synapses yielding a net sum of +1 or greater.

Then, for s_0 = s_1 = 1, by the law of total probability and Boole's inequality, and noting that P(A_0^{*s}) = P(∪_{k≥0} A_k^s) = 1,

    Eq. 4.3.4    P_1* = P(E) = P(A_1^1)P(A_0^{-1}) + P(A_2^1)[P(A_0^{-1}) + P(A_1^{-1})]
                             + P(A_3^{*1})[P(A_0^{-1}) + P(A_1^{-1}) + P(A_2^{-1})].

For s_0 = s_1 = 2,

    Eq. 4.3.5    P_1* = P(E) = P(∪_{i=1}^{6} E_i) ≤ Σ_{i=1}^{6} P(E_i),

where the E_i are those of Equation 4.3.3. Some observations concerning the computation of P_1* for various values of the λ's, especially for the case s_0 = s_1 = 2, will be made later. It is interesting to note that P(A_k^{*s}) = π_k(λ_s(t)), since

    P(A_k^{*s}) = P(∪_{j≥k} A_j^s) = Σ_{j≥k} P(A_j^s).

k = 2. The sum of incoming synapses to a given neuron is +2 or greater if the event E occurs, where

    Eq. 4.3.6    E = ∪_{Q=2}^∞ E_Q

and E_Q represents the event "the neuron receives ≥ Q positive and ≤ Q - 2 negative synapses." The E_Q's are given by expressions of the same form as before. Attention will be restricted to the case s_0 = s_1 = 2. For this case, E may be approximated by

    E = ∪_{i=2}^{8} E_i

where the E_i are built up as before from the events A_k^{±1}, A_k^{±2}; for example,

    E_2 = [A_2^{*1} ∨ A_1^{*2}] A_0^{-1} A_0^{-2},

the event that the incoming positive synapses sum to at least +2 while no negative synapses are received. Then, for P_2* we get

    Eq. 4.3.8    P_2* = P(E) = P(∪_{i=2}^{8} E_i).

For λ_s (s = -2, -1, 1, 2) in the range .1 to 1.0, Equation 4.3.8 will be approximated by

    Eq. 4.3.8(a)    P_2* = P(A_0^{-1})P(A_0^{-2})[P(A_2^{*1}) + P(A_1^{*2})]
                         + P(A_1^{-1})P(A_0^{-2})[P(A_3^{*1}) + P(A_1^1)P(A_1^2) + P(A_2^{*2})]
                         + P(A_0^{-1})P(A_1^{-2})[P(A_4^{*1}) + P(A_2^{*2})]
                         + P(A_2^{-1})P(A_0^{-2})[P(A_4^{*1}) + P(A_2^{*2})].

For λ_1 and λ_2 in the range 0.6 to 2.5 and λ_{-1} and λ_{-2} in the range 1.2 to 5.0, Equation 4.3.8 will be approximated by

    Eq. 4.3.8(b)    P_2* = P(E) = P_5 + P_6 + P_7 + P_8

where each P_i, i = 5, ..., 8, is an estimate of the corresponding P(E_i), written as a product of two factors: a sum of products of probabilities of positively-valued events, of the form P(A_j^{*1})P(A_k^{*2}), and a sum of products of probabilities of negatively-valued events, of the forms P(A_j^{-1})P(A_k^{-2}) and (1 - P(A_j^{*-1}))(1 - P(A_k^{*-2})). In Equation 4.3.8(b), an attempt has been made to eliminate the overlap of the events E_5, ..., E_8 by replacing A_j^{*s} by A_j^s where appropriate. Events E_2, E_3, E_4 are ignored since, for the given range of λ_s (s = -2, -1, 1, 2), the corresponding P(A_k^s)'s and P(A_k^{*s})'s are negligible.

k = 3, 4 and s_0 = s_1 = 2. Precisely the same reasoning as for the preceding cases may be carried out here. We give only the approximations for P_3* and P_4*, for λ_s in the range .1 to 1.0:

    Eq. 4.3.9    P_3* = P(A_0^{-1})P(A_0^{-2})[P(A_3^{*1}) + P(A_1^1)P(A_1^2) + P(A_2^{*2})]
                      + P(A_1^{-1})P(A_0^{-2})[P(A_4^{*1}) + P(A_2^{*2})],
                 P_4* = P(A_0^{-1})P(A_0^{-2})[P(A_2^{*2}) + P(A_4^{*1})].

From Equations 4.3.8(a) and 4.3.8(b) emerges the following general principle: to compute P_k* (k = 1, 2, 3, ...), examine the expansion of P(E) = P(∪_i E_i). For a given range of the λ_s's (s = -s_0, ..., s_1), only a few of the E_i are such that P(E_i) is not essentially 0. For these E_i, attempt to reduce the overlaps as much as possible; that is, attempt to make the E_i's mutually exclusive so that the law of total probability

applies. (Note that Boole's inequality, P(∪ E_i) ≤ Σ P(E_i), provides a useful upper bound.)

The effect of negative connections upon P_k* can typically be seen from examination of the expressions P_5, P_6, ..., P_8 comprising Equation 4.3.8(b). Each P_i consists of two factors, one a sum of products of the general form P(A_j^s)P(A_k^t) or P(A_j^{*s})P(A_k^{*t}), s, t positive, the other a sum of products of the form P(A_j^{-s})(1 - P(A_k^{*-t})). For small λ_s and small j, k, the latter is moderate in size, decreasing as the λ_s's increase. At the same time, for small λ_s, the former is small, increasing as the λ_s's increase. Since λ_s(t) = ρ_s R_0(t)/N varies directly with the number of neurons fired at t, ρ_s and N being fixed parameters of the network, this suggests that the parameters N, ρ_0, ρ_s, s = -s_0, ..., s_1, may be adjusted so that as R_0(t) increases above a certain amount F_b, P_k* decreases, and as R_0(t) decreases below F_b, P_k* increases. This gives exactly the negative feedback characteristics desired. This reasoning holds as long as R_0(t) does not vary too greatly from F_b. For larger variations of R_0(t), higher-order terms ignored above come into play, and eventually the P_k* may increase again with increasing R_0(t) or decrease with decreasing R_0(t).

In Figures 4.6 and 4.7, sample calculations are given for P_1* and P_2* using the network parameters for one of the networks of Chapter 6. Notice that for this network, to obtain the desired negative feedback, F_b should be set to approximately 35 (or correspondingly, if F_b is set to any other value, the ρ_s's should be adjusted to yield the same results).

Figure 4.6. Variation of P_1* and P_2* as a Function of R_0(t). P_1* and P_2* are plotted as functions of R_0(t) in (a) and (b) respectively. Equations 4.3.5 and 4.3.8(a) for the case s_0 = s_1 = 2 were used. N = 400, ρ_1 = ρ_2 = 7 and ρ_{-1} = ρ_{-2} = 14; ρ = ρ_{-2} + ρ_{-1} + ρ_1 + ρ_2 = 42 (exclusive of the ρ_0 term used in Chapter 6).

Figure 4.7. Variation of P_2* as a Function of R_0(t). P_2* is plotted as a function of R_0(t). Equation 4.3.8(b) for the case s_0 = s_1 = 2 was used. N = 113, ρ_1 = ρ_2 = 7, ρ_{-1} = ρ_{-2} = 14; ρ = ρ_{-2} + ρ_{-1} + ρ_1 + ρ_2 = 42. For each value of R_0(t), the values of λ_s (s = -2, -1, 1, 2) are also given. The dotted curve represents a continuation of the computation, but using Equation 4.3.8(a). At the point R_0(t) = 10, the results of both equations were added to give the value P_2* = .17.
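The qualitative content of Figures 4.6 and 4.7, P_k* rising with R_0(t) and then falling once the negative connections dominate, can be reproduced by direct computation. The sketch below is a present-day substitute for the hand-pruned event expansions of Equations 4.3.8(a)-(b), not a transcription of them: it convolves the four independent Poisson synapse counts exactly, using the Figure 4.6 parameters N = 400, ρ_1 = ρ_2 = 7, ρ_{-1} = ρ_{-2} = 14.

```python
import math

def pois(lam, kmax=40):
    """Poisson(lam) probabilities for k = 0..kmax."""
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(kmax + 1)]

def p_net_at_least(k, R0, N=400, rho=(7, 7, 14, 14)):
    """P_k*: probability that the net sum of incoming synapse values is >= k.
    rho gives (rho_1, rho_2, rho_-1, rho_-2); each count is treated as
    Poisson with mean lambda_s = rho_s * R0 / N, the counts independent."""
    values = (1, 2, -1, -2)
    dist = {0: 1.0}                     # distribution of the net sum so far
    for s, r in zip(values, rho):
        pk = pois(r * R0 / N)
        new = {}
        for total, p in dist.items():
            for n, q in enumerate(pk):
                if p * q < 1e-15:
                    continue            # drop negligible mass
                new[total + s * n] = new.get(total + s * n, 0.0) + p * q
        dist = new
    return sum(p for total, p in dist.items() if total >= k)

# P1* rises with R0(t), peaks near R0 = 20-35, then falls: the
# negative-feedback shape of Figure 4.6.
curve = [p_net_at_least(1, r0) for r0 in (10, 20, 35, 60, 100)]
print([round(p, 3) for p in curve])
```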

Conclusion

A mechanism involving negative synaptic connections has been devised that tends to suppress variations of R_0(t) from F_b.¹ Letting δR_0(t) = R_0(t) - F_b, it works as follows. For a certain range of |δR_0(t)|: if δR_0(t) > 0, then R_0(t+1) is decreased, i.e., R_0(t) - R_0(t+1) > 0; if δR_0(t) < 0, then R_0(t+1) is increased, i.e., R_0(t) - R_0(t+1) < 0. Whether this occurs or not depends upon the "mix" of positive versus negative connections, that is, upon ρ = Σ_{s=-s_0}^{s_1} ρ_s, and the value of F_b. The analysis of Section 4.3.2 remains unchanged, except that the probabilities π_k(λ(t)) are replaced by the probabilities P_k* = P_k*(λ_{-s_0}(t), ..., λ_{s_1}(t)).

Throughout this section, and indeed the entire chapter, statements such as the following appear: "event A occurs if quantity X remains within certain bounds (is not too large, is not too small)," and more precise quantitative information is not given. In general, such information is not easy to derive, and one is forced to be more qualitative than quantitative. This unfortunate situation, however, is considerably relieved by the simulation itself. For example, the statement above, "for a certain range of δR_0(t), etc.," can be modified, on the basis of experimental findings alone, to the following: "if δR_0(t) is no greater than approximately F_b/2, then ... For larger variations, generally fatal oscillations develop." Thus, much awkward and unwieldy analysis can be

¹ Recall that F_b = E(F(t)) is the expected number of neurons of the network that fire at t and 1/r_q is the expected frequency of firing of neurons in steady-state; F_b and 1/r_q are related by F_b = N/r_q. The statement concerning the absolute variation δR_0(t) could be replaced by one in terms of the relative variation δ*R_0(t) = δR_0(t)/F_b.

bypassed by the simulation. Once initial trends are established by the analysis, and estimates obtained for the network parameters, a series of control experiments may be executed to further refine the initial estimates.

4.3.4 Networks with Distance Bias

Geometry of Networks with Distance-Bias

Specification of a particular distance-function d = d(j,i) over a network implies the existence of some geometry over the network, and conversely. For a criterion to determine which are the desirable geometries, it is necessary to recall briefly the fundamental objective of this study: to study the formation and development of cell-assembly-like structures in the given class of networks. The outstanding feature of these networks is the absence of any particular a priori structure. Rather, structure is to evolve through the synapse-growth law, in response to certain conditions or patterns of stimulus. This seems to imply a basically uniform, locally isotropic geometry. Such a geometry places the responsibility for development of structure squarely upon the shoulders of the synapse-growth law and the stimulus, without the assistance of any intrinsic spatial variations. Therefore, excluded are such spatial anomalies as "warps" (arising in non-Euclidean geometries) or poles and essential singularities (arising in complex function theory). In the ideal case, several geometries appear to be plausible: (a) the infinite two-dimensional plane with the usual Euclidean metric, (b) ordinary three-dimensional Euclidean space, (c) the surface of a cylinder extending to infinity in either direction of its central axis, together with the appropriate metric, (d) the surface of a sphere. (Geometries (b) and (d) will not be considered here, primarily because of

difficulties arising in their implementation.) These geometries all have the desired qualities of isotropy and uniformity; (a), (b), and (c) may be taken as approximations to fragments of the cortical association layer. (c) in fact may be regarded as an infinite strip in the plane folded so that its edges coincide. For this, the simple Euclidean metric may be retained.

For the simulation, of course, it is necessary to reduce the infinite unbounded geometries to finite bounded ones. From (a), a simple square may be taken; from (c), a finite cylinder. The continuous geometry must then be approximated by a finite, discrete geometry. Points of these geometries are neurons of the model. Unfortunately, reduction of the infinite geometries (a) and (c) to finite bounded geometries violates the principle of isotropy needed, for the spaces are not uniform at their boundaries. For sufficiently large networks (N very large) this perhaps poses no serious problems, since then the space of the network may be considered essentially infinite. However, for N relatively small (N = 200 - 900), as in the present study, the presence of boundaries may cause serious anomalous perturbations of the behavior of the network.

Consider, for example, the square of Figure 4.8. Suppose, along one of its boundaries, E, f_0 neurons fire at t = t_0. The width of the "swath" along the boundary (Area 1 of Figure 4.8) is a function of ρ(d). If f_0 chanced to be sufficiently large, by the reasoning of Section 4.3.3 (assuming negative connections present), few, if any, neurons in Area 1 would fire at t_0 + 1. Let Area 2 be the set of neurons not in Area 1. The firing of neurons of Area 2 at t_0 + 1 might be increased if f_0 were not too large (limits determined by the synapse value distribution). Suppose

Figure 4.8. An Example of a Boundary Problem. Area 1: f_0 neurons fire at t = t_0. Area 2: f_1 neurons fire at t = t_0 + 1. Area 3: f_2 neurons fire at t = t_0 + 2. Neurons fire at the opposite boundary E' at t = t_0 + k. See text for explanation.

f_1 neurons fire at t_0 + 1 in Area 2. Likewise, at t_0 + 2 most neurons that fire will be in Area 3, since the neurons of Area 2 would be inhibited while most of those of Area 1 are refractory or inhibited. Similarly for t_0 + 3, t_0 + 4, ..., t_0 + k. In effect, a "wave" has been sent across the network, leaving an abnormally large number of highly refractory neurons in its wake. In particular, the number of neurons fireable at t_0 + k + 1 may be nil. Thus, the possibility arises of violent oscillations in F(t). Admittedly, this is an extreme case. However, less violent but equally undesirable effects may be obtained. For example, if f_0 above were of moderate magnitude, the wave sent out from E may reflect off the opposite side E', come back to E, reflect off E, back to E', etc., creating a standing-wave phenomenon. The behavior of the network would be locked into a particular pattern, with the result that the network's response to stimulus would be erratic. That is, stimulus may produce no

(Figure 4.9: the quasi-torus. The square, with edges E1, E1' and E2, E2', is surrounded by the virtual Images 1-8; Areas 1-4 surround point a, Area 5 point b.)

effect or may cause fatal oscillations. (This is especially true if the number k is less than rq. If k >> rq, such phenomena as the above would appear to be less likely to occur, since the wave would tend to break up as it travels from E toward E'.)

The difficulties arising from bounded geometries may be resolved by the "quasi-torus." Consider the geometry of Figure 4.9. The edges E1 and E1' are identified, making the square a cylinder. Then E2 and E2' are identified, bending the cylinder into a torus. Throughout this process, however, the Euclidean metric of the square is preserved. It is as though the square were iterated on all sides (Images 1-8 on the figure). For example, point a has neighbors within a circle of radius R within Areas 1, 2, 3, 4 as shown. These areas may be considered on the one hand as the intersections of circles with centers at the virtual images

of a with the square itself; on the other hand, they may be considered as slices of the circle of radius R with center at a falling within the Images 1, 2, 3, 4 respectively. The point b has its neighbors within a circle of radius R in Area 5 as shown. The preservation of the metric in the iterated cells distinguishes the geometry from the actual toroidal geometry, although some essential aspects of the torus remain; e.g., in Figure 4.10 the lines A, A' intersecting E1 wrap around at E1'; likewise B, B' intersecting E2 wrap around at E2'. Note that the points B ∩ E2 and B ∩ E2' are identical, as are A' ∩ E1 and A' ∩ E1', A ∩ E1 and A ∩ E1', etc.

The geometry of the quasi-torus is finite and isotropic, the metric is particularly simple, and boundary problems do not exist. It may be regarded as a crude approximation to the highly convoluted cortical geometry, or it may be regarded merely as a convenient artifact which preserves the basic property of isotropy and allows exploration of Hebb's thesis. Again, as N → ∞, the geometry tends to that of the infinite plane. Throughout the remainder of this paper, all discussions of networks with distance-bias will assume this quasi-toroidal geometry as the underlying space.

Figure 4.10. Properties of the Quasi-Torus. (The lines A, A' intersecting E1 continue at E1'; the lines B, B' intersecting E2 continue at E2'.)
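The quasi-toroidal metric lends itself to a compact computation. The sketch below is illustrative only; it anticipates the discrete convention adopted later in this chapter (an e x e grid of neurons, with the wrap-around period taken as e - 1, following the text's identification of opposite edges):

```python
def quasi_torus_distance(xi, yi, xj, yj, e):
    """Distance d(j, i) on the quasi-torus of an e x e grid.
    Map j into the origin, wrap negative offsets (period e - 1), then
    take for each axis the smaller of the direct offset and its
    wrap-around complement, combining with the Euclidean metric."""
    x = xi - xj if xi - xj >= 0 else xi - xj + e - 1
    y = yi - yj if yi - yj >= 0 else yi - yj + e - 1
    dx = min(x, e - 1 - x)
    dy = min(y, e - 1 - y)
    return (dx * dx + dy * dy) ** 0.5

# A point near one edge is close to a point near the opposite edge:
print(quasi_torus_distance(1, 0, 17, 0, 20))   # 3.0: direct offset 16 wraps to 3
```

On such a geometry every neuron sees the same neighborhood structure, which is precisely the isotropy property sought above.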

4.3.4 Steady-State in Networks with Distance-Bias

The characterization of steady-state behavior in networks with uniform random distributions of connections took the following simple form: "the expected number of neurons firing per time step is Fb." From this condition the conditions of Sections 4.3.2 and 4.3.3 on the threshold curve and synapse-value distribution were derived. Conceivably, on the basis of the principles outlined in those sections, a complete mathematical stability theory could be developed for neural networks with uniform random distributions of connections. Unfortunately, a comparable theory for networks with distance-bias seems to be beset by a multitude of difficult problems from the outset. Consequently, the role of the simulation as a means of by-passing tedious mathematical analysis is all the greater for such networks. However, certain useful principles still may be developed and used as guides to designing effective experiments.

The statement "the expected number of neurons firing per time step is Fb" by itself is not an adequate condition for steady-state in networks with distance-bias. For consider the situation depicted in Figure 4.11. Fb neurons fire at t0, but these neurons are entirely contained in the circular area A. Suppose Fb = Ā. Then, at t0 + 1, neurons in the annulus A', determined by ρ = ρ(d), will fire. At t0 + 2, neurons in the annulus A'' plus a few in A' will fire, etc. This yields a series of (gradually dissipating) expanding concentric rings of activity with a refractory core in A ∪ A' ∪ A'' ∪ .... Several ills might occur in this situation: (1) 𝒩 - (A ∪ A' ∪ A'' ∪ ...) may be essentially exhausted; that is, not enough fireable neurons are available to maintain stable steady-state. F(t) would then tend to zero. (2) Enough neurons in the complement 𝒩 - (A ∪ A' ∪ A'' ∪ ...) are left to maintain activity,

but the neurons in A ∪ A' ∪ A'' ∪ ... become hyper-recovered. This violates the second function of steady-state listed in Section 4.3.1. Consequently, the danger of violent, possibly fatal, oscillations arises, and some qualification on the condition is needed. What is needed clearly is some condition on the spatial distribution of neurons firing in steady-state.

The following provides the necessary spatial distribution of neurons: the Fb neurons are distributed randomly and independently over 𝒩 like, for example, the flying-bomb hits on London in World War II, bacteria on a Petri plate, etc.* Let F(A) = the expected number of neurons firing in area A; then

Eq. 4.3.10(a)    F(A) = (Fb/N) A = A/rq

since Fb = N/rq; 1/rq is the probability that a neuron of 𝒩 will fire at t. If N is sufficiently large, so that the geometry may be regarded as continuous, this is equivalent to saying that

    F(A) = (1/rq) Ā    (Ā = the area of A)

Figure 4.11. Anomalous Steady-State in a Network with Distance-Bias. (Successive annuli A', A'', ... surround the initial circular area A.)

*See, for example, Feller, W., An Introduction to Probability Theory and Its Applications, Vol. I, Wiley, pages 150-152.

and

Eq. 4.3.10(b)    dF(A)/dA = 1/rq.

In other words, 1/rq may be regarded as the rate of change in firing per unit area. Equation 4.3.10(a) will therefore be accepted as a necessary condition for steady-state in networks with distance-bias. It is, of course, an approximation, since in principle the firing pattern at t + 1 can always be uniquely determined from the firing pattern at t. Nevertheless, the conditions for the firing of a neuron of 𝒩 at t are sufficiently complex and variable that it may be regarded as a "random" event (the usual condition for applications of elementary probability theory). The equation essentially limits the expected activity of neurons in any given area. F(A) is an expectation; therefore the question of its variance arises, just as it did in Section 4.3.3 for F(t). Again, as the example above shows, too large a variance can lead to catastrophic results. Recalling the moral stated at the conclusion of Section 4.3.3, it will be assumed that var F(A) remains within certain bounds, these bounds being determined by the simulation.

Conventions of Distance-Measure

The quasi-toroidal geometry of 𝒩 takes the following specific form: let N = e², the network being laid out as a square on a two-dimensional grid with e neurons per side (see Figure 4.10). Neuron i will have coordinates (xi, yi), where xi is the number of units from (0,0) to the projection of i on the x-axis, yi the number of units from (0,0) to the projection of i on the y-axis. Given another point (neuron) j, the problem is to define uniquely d(j,i), the distance from i to j. If j has coordinates (xj, yj), it would be tempting to define d(j,i) as

d(j,i) = [(xi - xj)² + (yi - yj)²]^(1/2)

since a Euclidean metric has been promised. However, a glance at Figure 4.12 indicates that this definition is inadequate: the point j has copies j' in the iterated squares 1-8. Of these, the points j' in squares 3, 4, and 5 are such that the d(j',i)'s are equally valid candidates for d(j,i).

Figure 4.12. Distance Measure on the Quasi-Torus. (The point j and its copies j' in the iterated squares 1-8; the distances from i to j and to its copies are all candidates for d(j,i).)

Therefore, some method of choosing the minimum of all possible d(j,i)'s is required. For simplicity, suppose one of the points has been mapped into the origin. For d(0,i), the distances d1, d2, d3, and d4 are candidates. Set

    dx = min(xi, e - 1 - xi)
    dy = min(yi, e - 1 - yi).

Then it is easy to see that the required distance is

    d(0,i) = (dx² + dy²)^(1/2).

In the figure, d(0,i) = d4. For arbitrary points (xi, yi) and (xj, yj), the mapping takes the form

    (xj, yj) → (0, 0)
    (xi, yi) → (xi - xj, yi - yj).

Should xi - xj or yi - yj become negative, e - 1 is added:

    (xi, yi) → (x', y'), where
    x' = xi - xj            if xi - xj ≥ 0
       = xi - xj + e - 1    if xi - xj < 0
    y' = yi - yj            if yi - yj ≥ 0
       = yi - yj + e - 1    if yi - yj < 0

and x', y' are used in the definitions of dx and dy.

Disk Distribution

The specific form of ρ = ρ(d) to be used throughout the remainder of the present work whenever distance is involved is the following (recall Section 4.2.2 above): given any neuron i ∈ 𝒩, the expected number of connections received by i from 𝒩 is

    ρ(ri) = constant = ρ0    for 0 ≤ ri ≤ R
    ρ(ri) = 0                for R < ri

where ri is a radius emanating from neuron i (see Figure 4.13). The basic scheme for assigning connections is modified so that no connections are obtained outside the disk CR of radius R with center at i. This means that i receives connections equiprobably from the neurons of CR and none from 𝒩 - CR. For N sufficiently large (N = e²),

    C̄R = area of CR = πR²

and Equation 4.3.10(a) becomes

    F(CR) = πR²/rq.

The numbers R and ρ0 are constant for a given network; ρ0 = Σ ρ0^s, the sum being taken over the synapse-levels s = -s0, ..., s1.

The Difficulty. Ideally, a calculus similar to that of Sections 4.3.2-3 should be developed for networks with distance-bias. Unfortunately, an attempt to modify that calculus to treat networks with distance-bias on the distribution of connections immediately leads to a computational impasse. The probabilities λk(r(t)) or Pjk must be modified to account for the distance variation of ρ = ρ(d), hence of the λ^s, s = -s0, ..., s1. To do this, the geometric structures as well as the respective cardinalities

Figure 4.13. Example of a Disk Distribution. (CR, shaded: neuron i receives connections equiprobably from all neurons of the disk CR, none at all from neurons outside CR.)

of the sets R0(t), R1(t), ..., Rrq(t) would have to be specified. Next, the expected number of connections received by each Rr(t), r = 1, ..., rq, from R0(t-1) would have to be estimated. The latter process gives rise to integrals of the type discussed in Section 4.2.2. Figure 4.14 illustrates the variation in the expected number of connections received by a neuron from simple configurations (using crude approximations to the area integrals). While these estimates assist in understanding the effects of distance-bias on the connection probabilities λk or Pjk, they are not really useful for the present purposes. There are, however, some useful techniques for estimating the threshold curve and for setting the parameters ρ0, R, rq, and Fb.
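In a simulation, the disk distribution just described can be realized directly: enumerate the disk CR of each neuron under the quasi-toroidal metric and draw the neuron's inputs equiprobably from it. The following sketch is a minimal illustration (the grid side e, radius R, and connection count rho0 are placeholder parameters, not values from the text):

```python
import random

def disk_neighbors(xi, yi, e, R):
    """The disk C_R: neurons of the e x e quasi-torus within distance R
    of (xi, yi), excluding (xi, yi) itself."""
    cr = []
    for xj in range(e):
        for yj in range(e):
            if (xj, yj) == (xi, yi):
                continue
            # per-axis separation, taking the shorter way around the torus
            dx = min(abs(xi - xj), e - 1 - abs(xi - xj))
            dy = min(abs(yi - yj), e - 1 - abs(yi - yj))
            if dx * dx + dy * dy <= R * R:
                cr.append((xj, yj))
    return cr

def assign_connections(xi, yi, e, R, rho0):
    """Draw rho0 input connections for neuron (xi, yi) equiprobably from
    C_R; none are obtained from outside the disk."""
    return [random.choice(disk_neighbors(xi, yi, e, R)) for _ in range(rho0)]

print(len(disk_neighbors(10, 10, 20, 2)))   # 12 neurons lie within distance 2
```

Because the metric is the same at every neuron, the resulting connection scheme retains the isotropy that the quasi-torus was introduced to preserve.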

Figure 4.14. Estimates of Expected Number of Connections Received by a Neuron i from Simple Geometric Structures. ((a) Connections received from a disk of radius h0, with the receiving neuron i at distance a. (b) Connections received from a line segment. (c) Connections received from a box with circular ends; for i in Area 1 the estimate varies with the distance d from i to the box, while for i in Area 2 approximation (a) is used. The estimates ignore the corner effects, where CR intersects both a circle and the box.)

The basis for the estimates given above is that the expected number of connections ρA received by a neuron i ∈ 𝒩 from a structure A is proportional to the area of A (in (b), to the length of a line segment). This holds since ρ(r) is uniform for 0 ≤ r ≤ R. The general inference from the estimates is that ρA varies quadratically with the distance from i to the boundary of A.
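The proportionality underlying these estimates (connections received from a region are proportional to the area of the region intersected with CR) is easy to check numerically. The sketch below uses illustrative values of R, d, and rho0, not taken from the figure; it compares a Monte Carlo estimate, for a region bounded by a straight line at distance d from the receiving neuron, against the analytic circular-segment area:

```python
import math
import random

R = 6.0      # disk radius (illustrative)
d = 2.0      # distance from neuron i to the straight boundary of region A
rho0 = 8     # expected connections received from all of C_R (illustrative)

def segment_area(R, d):
    """Analytic area of the circular segment of the disk of radius R
    lying beyond a chord at distance d from the center."""
    return R * R * math.acos(d / R) - d * math.sqrt(R * R - d * d)

def mc_expected_connections(trials=200000):
    """Monte Carlo: expected connections received from the half-plane
    x > d, using connections-proportional-to-area for a uniform disk."""
    hits, n = 0, 0
    while n < trials:
        x, y = random.uniform(-R, R), random.uniform(-R, R)
        if x * x + y * y <= R * R:      # point falls inside C_R
            n += 1
            if x > d:                    # point also falls in region A
                hits += 1
    return rho0 * hits / trials

analytic = rho0 * segment_area(R, d) / (math.pi * R * R)
print(mc_expected_connections(), analytic)   # the two estimates agree closely
```

Differentiating the segment area with respect to d shows the quadratic variation near the boundary that the figure's note draws attention to.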

Given a network 𝒩 with |𝒩| = N, distance-bias function ρ = ρ(r) as above, and Fb = (1/rq)N the expected number of neurons firing per time step in 𝒩: perform the steady-state calculations of Sections 4.3.2 - 4.3.3 for the neighborhood CR ⊂ 𝒩. For this, from Equation 4.3.10,

    Fb^R = F(CR) = πR²/rq

and the λ's are given by

    λ0^s = ρ0^s (Fb^R / C̄R) = ρ0^s (Fb / N),    s = -s0, ..., s1,

since rq = N/Fb. In other words, this is equivalent to treating CR itself as though it were a network with uniform random distribution of connections with density ρ0.

This calculation, of course, ignores the effect of neurons in 𝒩 - CR upon those of CR. Depending upon the distribution of synapse-values over synapse-levels (giving the decomposition ρ0 = Σ ρ0^s, s = -s0, ..., s1), the resulting threshold curve would tend to produce either overdamped or underdamped behavior initially in 𝒩. At this point, refuge is again taken in the simulation, the calculations above being used as a guide for setting V(r); then a family of curves derived from V(r) may be tested experimentally, gradually homing in on the one that best satisfies the basic conditions laid down in this chapter for steady-state and stability. In Chapter 6, it will be seen that this procedure is not as haphazard as it appears and that many of the principles that apply to networks with uniform random distributions of connections carry over with only

slight modification to networks with distance-bias.

4.3.5 Synapse-Level Drift

In the preceding discussions, the synapse-values have been assumed to be set to discrete integral values -s0, -s0+1, ..., s1. If the network is operating in steady-state, the analysis of Chapter 2, Section 2.3.4, guarantees only that the expected change in synapse-levels is zero. In practice, the levels will "drift" from their initial settings, since locally some neurons may be firing at rates greater or lesser than Fb for brief periods of time. If the network is operating in a true steady-state and if all the network functions are properly adjusted, this "drift" should ultimately take the form of a multi-modal distribution of synapse-values about the means -s0, -s0+1, ..., s1. As usual, the problem of variance arises and, as usual, the burden of the proof will be on the simulation. A drift in synapse-levels in a particular direction would indicate an imbalance in the network functions. For example, the threshold curve might be set for a steady-state firing rate of 1/rq, while the synapse-level balancing equation is set for a rate of 1/r'q, with 1/rq > 1/r'q. In this case, a net increase in synapse-levels (values) would be expected.

4.4 Networks Under a Single Periodic Stimulus

4.4.1 Stimulus and Stability

In the preceding section, computational guidelines were laid down for determining the network parameters N, ρ0, rq, etc., and the threshold function V(r) that will yield stable steady-state behavior in a given

network 𝒩. The criteria for such behavior are outlined at the end of Section 4.3.1. Of the five conditions given there, the fifth has not yet been accounted for, namely: "a single application of a moderate-sized stimulus to a network operating in steady-state must not produce epilepsy." An analysis of this statement and its consequences forms the basis for the present discussion.

Consider a network operating in steady-state. This means that there is a distribution L(t) of neurons over recovery states that is stationary in the sense that the expected values of the Rr(t) remain constant (in fact all equal to Fb) for all time. The negative synapse-values provide the necessary feedback control mechanism to damp out the effects of transient variations of F(t) from Fb, the bounds on such variations to be determined empirically. The origins of these transient variations were attributed to the random fluctuations of the Rr(t) from their mean Fb that are inherent in any random process.

Suppose, however, that a subset Σ0 ⊂ 𝒩 is selected as an input set. At a certain time step t0 the neurons of Σ0 are provided with an external stimulus S0. In general, the neurons of Σ0 will be distributed uniformly over 𝒩 (in the case of uniform random connection distributions) or over some subregion of 𝒩 (in the case of distance-bias). Likewise, neurons of Σ0 will be distributed randomly over recovery-states. Since a neuron of Σ0 fires at t0 if S0 plus the incoming stimulus to the neuron at t0 exceeds its effective threshold, for a given S0 the expected number of neurons of Σ0 that will fire at t0 is F0, where

(a) (uniform random distribution)

    F0 = M* |Σ0| / N

where M* is defined as

    M* = M1 + M2 + ... + M[S0];

(b) (distance-bias)

    F0 = M*A |Σ0| / Ā,    where M*A = M* (Ā/N)

and A is the region of 𝒩 containing Σ0. In (b), it is assumed that neurons are distributed uniformly over recovery states over 𝒩: if Rr neurons of 𝒩 are in recovery state r, then the expected number in this state in an area A is Rr (Ā/N). Notice that (a) and (b) are equivalent if A = 𝒩 (Ā = N).

The basic reasoning of Section 4.3.3 may now be applied to the case that F(t) undergoes a variation of approximate magnitude Fb + F0 at t0. (In general, the subset of Σ0 that fires at t0 and R0(t0) will overlap. Consequently, the actual expected variation is slightly smaller, Fb + F0 less an overlap term, approximately F0 Fb/N in the absence of distance-bias. The error term is, of course, frequently negligible.) If F0 lies below a certain bound Fmax, the network will remain stable. Empirically and computationally, a safe bound was found to be

    Fmax = Fb/2,

although larger variations were tolerated. Should F0 exceed Fmax, then, depending upon the distribution of synapse-values over connections, F(t) will be driven too far above or below Fb for the behavior to remain stable, and fatal oscillations would be expected to occur. The steady-state calculus of Sections 4.3.2-3 may be applied to

*Recall again that Fb = E(F(t)) is the expected number of neurons of 𝒩 that fire at t in steady-state.

study the effects of stimulating Σ0 and to obtain estimates of Fmax. Each subset Mk ⊂ M* has to be reduced at t0 by the expected number that fire in Mk; otherwise the calculation proceeds as before. An example of this calculation will be given in Section 4.4.3.

4.4.2 Periodic Stimulus

The basic thesis of Hebb is that cell-assemblies and phase sequences of cell-assemblies come into existence through the repeated application of a training stimulus pattern. Once the assembly is formed, via the synapse-growth law, it responds to a brief application of this pattern and tends not to respond at all to different patterns. "Response" means arousal of activity in the assembly that continues for a brief period of time independently of external stimulus, a combination of the effects of fatigue and external inhibitory stimulus damping out the response. More specifically, a subset of neurons of the assembly will operate briefly at rates greater than the background rate 1/rq. For this subset, the distribution L(t) is no longer stationary, since for a brief period of time the mean of the Rr(t) differs from its steady-state value and not all the Rr(t)'s are equal. Eventually, the effects mentioned above drive L(t) back to its stationary form.

It is the purpose of this section to examine the effects upon the behavior and structure of 𝒩 of the application of stimulus with the simplest type of pattern, namely simple periodicity. Suppose a set of neurons Σ0 (as defined in 4.4.1) is stimulated with stimulus S0 every T0 steps, at t = t0 + mT0, m = 0, 1, 2, .... At t0 + T, the set

    ΣT = Σ'T ∪ Σ''T

will fire, where Σ''T is the component due to the steady-state (i.e., the Fb neurons firing at t = t0 + T) and Σ'T is the component due to the combined effects of the neurons of Σ0 that fired at t0 and the steady-state. The expected number of neurons firing at t0 + T is

    fT = Fb + FT

where FT = E(|Σ'T|). This decomposition of ΣT into Σ'T ∪ Σ''T holds only for small T. As T grows, the effects of Σ0 (unless epilepsy were produced) become dissipated. It is important to realize that FT need not remain positive, T < T0. There are three cases: (1) FT remains positive; (2) FT becomes negative; (3) FT goes to zero. In general, if T is large, (3) indicates that the stimulus has dissipated. (1) indicates the presence of a path of connections from Σ0 to Σ'1 to Σ'2 ... to Σ'T (abbreviated P(Σ0 → Σ'T)) such that Σ'ℓ assists in firing Σ'ℓ+1, 0 ≤ ℓ < T. Such a path will be designated as an effective connection path from Σ0 to Σ'T, written PE(Σ0 → Σ'T). (2) and (3) may or may not imply a PE(Σ0 → Σ'T). In the case of (2), however, Fb + FT has earlier become so large that Rrq(t) is decreased below its steady-state value, so that F(t0 + T) = fT decreases below its steady-state value Fb. This raises again the danger of violent oscillations. For now, merely assume this does not occur and that there exists a PE(Σ0 → Σ'T).

At subsequent applications of the stimulus (assuming the same subset Σ'0 ⊂ Σ0 fires each time) at t0 + mT0, an effective path PE(Σ0 → Σ'T) will exist. These paths will, in general, at least partially coincide after repeated applications of the stimulus if the stimulus is sufficiently "strong", i.e., T << rq and S0 large. (While this statement is not, perhaps, a priori evident, again the appeal will be made to the simulation, which will bear it out. The role of distance-bias is

vital in this argument and will be discussed in 4.4.4 below.)

An important characteristic of the stimulus must now be introduced. If T0 << rq, repeated applications of the stimulus will result in fatiguing the neurons of Σ0, so that eventually they respond but sporadically to it. Therefore, consider that the stimulus is applied periodically every T0 time steps from t = t0 to t = t1; then it is turned off from t = t1 to t = t2; then turned on again, etc. Assume, as in Chapter 3, that the lengths tz of these "on-off" periods are equal: t1 - t0 = t2 - t1 = ... = tz = constant. Set tz so that the neurons of Σ0 just begin to fatigue by t0 + tz, and completely recover with respect to fatigue by t1 + tz, etc. The case is very similar to that of the synchronous periodic inputs of Chapter 3.

Assume, therefore, that a path PE(Σ0 → Σ'T) develops, T < T0, possibly over several "on-off" cycles. Notice that the synapse-growth law is involved in this development, since the neurons of Σ'ℓ-1 will be repeatedly assisting those of Σ'ℓ in firing at a rate greater than 1/rq, therefore causing an increase in the λji's, j ∈ Σ'ℓ-1, i ∈ Σ'ℓ. The question arises now: will the path PE eventually close back on itself? That is, will an effective path PE(Σ0 → Σ'T0 = Σ0) develop? Since Σ'T0 = Σ0, this means that a closed cycle C(Σ0) = PE(Σ0 → Σ'T0 = Σ0) of length T0 has evolved. C(Σ0) will form a candidate for a cell-assembly in the model. To demonstrate an affirmative response to the question with the current model will form the bulk of the empirical effort of Chapters 6 and 7. However, the following intuitive argument might illuminate the problem somewhat:

If ρ0 is sufficiently large (and R not too small, if distance-bias is involved), then there almost certainly exists at least one chain of neurons from i0 ∈ Σ0 to i1 ∈ Σ'1 to ... to iT0-1 ∈ Σ'T0-1 and back to i0 again. That is, there exists a P(Σ0 → Σ'T0 = Σ0). However, it is unlikely that this chain would be "effective" in the sense that

(1) S(λji) > 0, j = iℓ-1, i = iℓ, for ℓ = 1, ..., T0 - 1, and iT0 = i0;

(2) the S(λji) above are such that, in general, neuron iℓ-1 assists in the firing of neuron iℓ (ℓ = 1, ..., T0). That is, iℓ should not fire independently of iℓ-1. As a transient mechanism or artifact for increasing S(λ(iℓ-1, iℓ)), this might be acceptable; however, as a permanent feature, it is clearly not desirable.

These conclusions hold because of the distribution of synapse-values over connections. Each link iℓ-1 → iℓ of the chain has a definite probability of having a positive or a negative synapse-value. Therefore, the probability that all the S(λji)'s are positive decreases rapidly with the length T0 of the chain.

At this point, easily the most crucial of this thesis, the synapse-growth law emerges in its fullest importance: since the neurons of Σ0 fire every T0 time steps (except in the off period) and since the connections along a P(Σ0 → Σ'T0) already exist (in particular, a link iT0-1 → i0 exists), any chance firings of iT0-1 at time steps t0 + mT0 - 1 (m > 0) will a fortiori be followed by firings of i0 at t0 + mT0. Gradually, then, over sufficiently long time intervals, λ(iT0-1, i0) will be expected to increase, thus making the link iT0-1 → i0 effective, assuming for the moment that S(λ(iT0-1, i0)) is not strongly inhibitory

(very negative). As this link becomes effective, the link iT0-2 → iT0-1 will likewise become so. The latter, of course, depends on the preceding links gradually becoming effective. Therefore, what must happen is that "both ends are worked against the middle" and the path PE becomes effective. The role of the repeatedly applied stimulus is to "lock in" the cycle, so to speak, via the synapse-growth law.

The process of making the link iT0-1 → i0 effective, in other words the process of drawing neuron iT0-1 into the cycle C(Σ0), is an example of the phenomenon referred to by Hebb as "recruitment": neuron iT0-1 is recruited into the cycle C(Σ0). Since a number of chains i0 → i1 → ... → iT0-1 → i0 may exist, recruitment need not be a one-time-only occurrence. The longer the training period, presumably, the more neurons will be recruited into C(Σ0). Conversely, certain of the chains i0 → i1 → ... → iT0-1 → i0 may cease to be effective, and a neuron iℓ may drop out of C(Σ0). This corresponds to Hebb's fractionation phenomenon. Fractionation and recruitment depend somewhat upon the initial value of λ(iT0-1, i0). If S(λ(iT0-1, i0)) is strongly inhibitory (i.e., near the maximum negative value), the increasing trend in λ(iT0-1, i0) would be less likely to occur, since iT0-1 would tend to assert a strong inhibitory effect upon i0. This tends to aid fractionation. If S(λ(iT0-1, i0)) = -a1, where a1 is a small positive number, the inhibition exerted by iT0-1 upon i0 may be negligible and the preceding argument stands: recruitment might occur.

The main point of this section is now the following: once the lock-in described above is accomplished and an effective cycle C(Σ0) is formed, the stimulus need not be applied in as strong a fashion as during

the training period. For example, assuming the neurons of C(Σ0) are not unduly fatigued, a single stimulus might go around the cycle C(Σ0) several times before being extinguished. Notice that C(Σ0) may consist of several neurons in each level Σ'ℓ, and there may actually be several paths in the link Σ'ℓ-1 → Σ'ℓ. Starting at i0 ∈ Σ0 each time, the stimulus may actually circulate through different paths from Σ'ℓ-1 to Σ'ℓ. Thus, the structure of C(Σ0) is not inflexibly rigid. C(Σ0) would be a self-sustaining cycle since, once activity is initiated (Σ0 stimulated), it is maintained for a brief period of time (the stimulus circulates through C(Σ0) a number of times before being extinguished). These properties of C(Σ0), if they can be demonstrated empirically, appear to qualify C(Σ0) as a cell-assembly in Hebb's original sense [9]. Success in empirically exhibiting a C(Σ0) would appear to give some hope to the field of machine learning.

4.4.3 Stability Calculations

The calculations of 4.3.2-3 may be readily modified to treat simple periodic stimulation of a subset Σ0 ⊂ 𝒩. First, at t = t0, when the stimulus is first applied, the subsets Rr of M* are reduced by the appropriate expected amounts; these, together with the neurons of Rrq(t0) that would have fired at t0 anyhow in steady-state, form R0(t0). Then the calculation proceeds as usual until t0 + T0, at which point RT0 is reduced by the number in Σ0 that fired at t0, this number being added to the number that would have fired anyhow at t0 + T0, thus forming F(t0 + T0). Continue in a similar manner for t0 + T0 + 1, .... Table 4.3 shows this calculation for the network of Figure 4.4 and Table 4.1: N = 400, ρ0 = 6, rq = 10, θ0 = 7, T0 = 6. Notice

that R0(t), R1(t), ..., R5(t) are increasing.

4.4.4 Distance-Bias

In Section 4.4.2, it was observed that distance-bias played an important role in the formation of an effective path PE(Σ0 → Σ'T), T ≤ T0. This is, in fact, an example of the "localization" alluded to earlier in the chapter. Consider the network 𝒩 of Figure 4.15. There, R = ∞ (uniform random distribution) and the neurons of Σ0 are equiprobably connected to any other neuron of 𝒩. The path from Σ0 to Σ'T may be any one of several paths at successive intervals t0 + mT0, and a definite effective path PE(Σ0 → Σ'T) may or may not occur.

Consider now the network of Figure 4.16. There, R is finite and Σ0 ⊂ CR. The neurons of Σ0 are spaced regularly along a 5 x 5 subgrid of 𝒩. The spacing of the neurons is such that the neurons of Σ'1 fired by Σ0 will fire again after the next application of stimulus at t0 + T0, etc. Likewise, the neurons of Σ'1 will tend to fire the same set Σ'2 at succeeding time steps t0 + 2 + mT0. Gradually, of course, the successor-

Figure 4.15. Non-Localization Property of Networks with Uniform Random Distribution of Connections. (A neuron i ∈ Σ0 is equiprobably connected to any other neuron of 𝒩. Localization, in any uniform, regular sense, is not possible.)

Figure 4.16. Example of an Input Set Σ0 in a Network with Distance-Bias. (In this network, R = 6. A is the shaded area, Σ0 ⊂ A the encircled neurons. Every neuron of A - Σ0 lies in the intersection of the disks CR(i), i ∈ Σ0, i.e., may receive connections from any neuron of Σ0. Neurons in the exterior of A, 𝒩 - A, receive connections from Σ0 as follows: neurons in the squares may receive connections from any neuron of Σ0; neurons in the solid circles may receive connections from all but one neuron of Σ0.)

sets Σ'ℓ spread out over 𝒩. If R is small, it could happen that Σ0 would be insulated from Σ'T0-1 and closure would never occur. If the spacing of the neurons of Σ0 were too close, a core of refractory neurons could result, with the consequences outlined in Section 4.3.4. The setting of ρ0 = Σ ρ0^s, s = -s0, ..., s1, is clearly equally critical.

In conclusion, the distance-bias mechanism is very convenient for ensuring development of the initial path segment PE(Σ0 → Σ'1). However, it introduces pitfalls all its own and greatly compounds the number of variables available.

4.5 Networks Under Two Alternating Periodic Stimuli

4.5.1 Alternating Cell-Assemblies

In the preceding section, a heuristic argument was given to show the possible formation of a closed, self-re-exciting cycle C(Σ0). It was duly noted that such a structure constitutes a candidate for a Hebbian cell-assembly. Given that a cycle C(Σ0) may be formed and that it indeed is a cell-assembly, then, following the natural development of Hebb's theory, it is natural to ask:

(1) Is it possible to form additional distinct cell-assemblies in 𝒩 (as a result of applying distinct stimulus patterns)? This is far from a trivial question, at least for the relatively small N = |𝒩| used in this work, since conceivably one C(Σ0) could exhaust 𝒩 of the neurons available for recruitment into new cell-assemblies.

(2) If the answer to (1) is affirmative, then how will these cell-assemblies relate one to another? Specifically, how will activity (circulating stimulus) in one affect activity in another? This is not meant to imply that there necessarily be a relationship between two

arbitrarily chosen cell-assemblies of a network. There may or may not be such a relationship. However, if the cell-assemblies are "proximate", it is reasonable to assume that such a relationship does exist. "Proximate" means that, given cell-assemblies C(Σ0) and C(Σ0*), some segments of their respective paths lie within a distance R* of each other. If R* ≤ R, R the distance-bias radius, C(Σ0) may have an immediate effect upon C(Σ0*), and conversely. If R < R* ≤ 2R, the possible effect will be delayed one time step, etc. Clearly, in the current model, the larger R* is, the less probable the possible influence of one cycle upon the other becomes. In the physiological situation, V fibres or other long-distance axons might yield the short graph distances required to be "proximate", while entailing long geometric distances.

A special case of (2), to be considered in detail below, is:

(2') How do cell-assemblies arising from negatively correlated stimuli relate to each other? The hypothesis will be made below that they will become mutually cross-inhibiting.

From (2) then flow all the questions concerning the structure of Hebb's theory of learning, i.e., concerning the arousal of cell-assemblies, the alternation of activity in cell-assemblies, the development of phase-sequences of cell-assemblies and phase cycles, etc. To adequately discuss all these questions, and to provide the necessary analysis and empirical evidence, simply exceeds the bounds of this paper.¹ All is not lost, however, since the advanced concepts of Hebb's theory

¹To say nothing of the bounds imposed by the existing computer hardware. The maximum N used effectively in experimentation was 400. One would want, for exploration of the more advanced parts of the theory, at least N = 1000, preferably N = 10,000.

seem ultimately to reduce to the effects of proximate cell-assemblies upon each other, namely: activity in one cell-assembly should tend to arouse activity in a proximate cell-assembly. These cell-assemblies presumably come into existence through sequences of (spatially and temporally) patterned stimuli of some type. For a complete discussion of how this might occur, the reader is referred to Chapter 5, "Perception of a Complex: The Phase Sequence," of Hebb [9]. For now, merely assume that there exist two proximate cell-assemblies, C(Σ0) and C(Σ0*), and that it is desired to study the possible mechanisms by which one may arouse the other.

An important facet of this question is that there appears to be, from the development given by Hebb, a temporal restriction on the arousal of the one cell-assembly by the other. In fact, in general, the temporal relation of cortical events takes on great importance in his theory. Specifically, assume that, say, activity in C(Σ0*) is not to start activity in C(Σ0) until a certain time interval has passed. Presumably, this time interval would be related (a) to the stimuli applied to Σ0 and Σ0*, and (b) to the rate of fatigue of the neurons of C(Σ0*). In effect, then, C(Σ0*) has to suppress activity in C(Σ0) for a period of time, the degree of suppression gradually decreasing as the activity in C(Σ0*) is damped by fatigue. Conversely, it is interesting to ask: after C(Σ0) starts becoming active, will it tend to suppress activity in C(Σ0*) for a period of time, then allow it to build up again as fatigue takes effect? Therefore, a "multi-vibrator" effect may be possible: after a training period in which C(Σ0) and C(Σ0*) are stimulated in certain patterns, alternating between one and the other, a brief stimulus applied to one will set up activity that will "oscillate" back and forth between the two for a period of time. If this effect were possible in the given model, it would appear that the model would suffice for deeper study of

the theory. For phase sequences involve a generalization of this type of behavior from two cell-assemblies to larger numbers, all intricately interrelated by arousing and suppressing one another at appropriate instants in time. For this reason, the study of this alternation of activity in cell-assemblies will conclude the present work.

Note that the role of inhibitory connections (negative synapse-values) is absolutely vital here. For C(Σ0) to suppress C(Σ0*) when the former is active implies the development, during the training period, of strong inhibitory connections from C(Σ0) to C(Σ0*) and conversely. To the author's knowledge, this point is not explicitly recognized by Hebb [9], although, of course, it certainly is implicit; Milner [10] made the general role of negative synapse-values explicit. The development of this mutual or cross inhibition again depends critically upon the synapse-growth law, this time in reverse: neurons that are active (say in C(Σ0)) are trying to fire neurons in C(Σ0*) that are fatigued and will not respond. Consequently, the corresponding synapse-values drop. Not all synapse-values from C(Σ0) to C(Σ0*) must become negative, however. A few must remain positive so as to help start C(Σ0*) up again as the activity in C(Σ0) is damped out. The arousal of C(Σ0*) may depend upon the presence of some source of stimulation as well as upon the decrease in inhibition from C(Σ0). For a simple example, a subset Σ ⊂ 𝒩 may be supplying both structures C(Σ0) and C(Σ0*) with a light stimulus at all times, becoming effective on the one only when the inhibition from the other decreases, etc.

Notice that distance-bias on the distribution of connections is truly indispensable at this point. In fact, recalling the highly diffuse character of the hypothetical assemblies described by Hebb, in the long

run one probably would want a more general distance-bias function than the one adopted here. This would approximate real networks even better and tend to make the network (as desired by the theory) less susceptible to local damage (slicing, trauma, etc.).

In conclusion, success in demonstrating an alternation of activity in cell-assemblies as suggested above would strengthen Hebb's original attempt to build a "molar" calculus in which human behavior could be more adequately related to basic underlying neurophysiological phenomena. This calculus would form a bridge between detailed neurophysiological knowledge on the one hand and the far grosser body of psychological knowledge on the other.

4.5.2 Alternating Periodic Stimuli

Given a network 𝒩 with distance-bias ρ = ρ(r), 0 ≤ r ≤ R, let Σ0 ⊂ A ⊂ 𝒩 and Σ0* ⊂ A* ⊂ 𝒩 be two distinct input sets, A ∩ A* = ∅. Suppose A and A* are separated by a distance d_0: d(j,i) = d_0 for all j ∈ A, i ∈ A*. d_0 will be taken sufficiently large to minimize the direct effects of stimulation of Σ0 upon Σ0* and conversely, yet sufficiently small that the paths P_E(Σ0 → Σ_τ) and P_E(Σ0* → Σ_τ*) will interact (activity in one affecting activity in the other). A good choice of d_0 was found to be R/2 < d_0 < R.

As in Section 4.4.2, attention will be restricted to the case in which the stimulus has the simplest possible pattern, alternating simple periodicity: suppose Σ0 is stimulated with stimulus S_0 once every τ_0 time steps in intervals I_m, m = 0, 1, 2, ..., and Σ0* is stimulated with

stimulus S_0* once every τ_0* time steps in intervals I_m*, m = 0, 1, 2, ..., where I_m and I_m* alternate as follows:

I_m = [t_0 + m(t_1 + t_1* + δ), t_0 + t_1 + m(t_1 + t_1* + δ)], m = 0, 1, 2, ...

I_m* = [t_0 + t_1 + m(t_1 + t_1* + δ), t_0 + t_1 + t_1* + m(t_1 + t_1* + δ)], m = 0, 1, 2, ...

t_1 and t_1* are the lengths of the intervals over which S_0 and S_0* are applied, respectively; δ is a delay whose function will be discussed below. The overall stimulus pattern is then

... I_{m-1} I*_{m-1} δ I_m I*_m δ I_{m+1} I*_{m+1} ...

starting at time t_0. The "on" period of one stimulus is the "off" period for the other (see Figure 4.17). t_1 and t_1*, as in Section 4.4.2, shall be chosen so that by the next application of the respective stimuli, the neurons shall have recovered with respect to fatigue from the effects of the preceding stimuli. They must, however, be long enough that the paths P_E(Σ0 → Σ_τ) and P_E(Σ0* → Σ_τ*) may eventually close. Likewise, the delay δ is chosen to allow the neurons of Σ0 and its respective successor-sets to recover with respect to fatigue before the next stimulus cycle begins. This is especially important for smaller networks (N = 400) and relatively large input sets and areas (|Σ0| = |Σ0*| = 9, |A| = |A*| = 25), since during one complete stimulus period I_m I_m* many of the neurons involved may become hyper-refractory (high fatigue values, Φ(ℓ_i) large). If δ were zero, the next stimulus sequence might find too many neurons too highly fatigued to react as desired. Moreover, the presence of a relatively large number of hyper-refractory neurons raises again, as discussed in Section 4.3.3, the danger of underdamped or overdamped behavior. At best, either of these phenomena tends to make fewer neurons available for recruitment.
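The interval bookkeeping above can be condensed into a single schedule function. The sketch below is illustrative only (the names and the half-open interval convention are this sketch's own); it reports which stimulus, if either, is pulsed at time t:

```python
def stimulus_at(t, t0, t1, t1s, delta, tau0, tau0s):
    """Return "S0", "S0*", or None according to the envelopes I_m, I*_m.

    t1, t1s are the "on" lengths of I_m and I*_m, delta is the lag, and
    tau0, tau0s are the pulse rates within the respective envelopes.
    """
    if t < t0:
        return None                        # before the pattern starts
    phase = (t - t0) % (t1 + t1s + delta)  # position within the m-th cycle
    if phase < t1:                         # inside I_m: Sigma_0 is pulsed
        return "S0" if phase % tau0 == 0 else None
    if phase < t1 + t1s:                   # inside I*_m: Sigma_0* is pulsed
        return "S0*" if (phase - t1) % tau0s == 0 else None
    return None                            # inside the lag delta: all off
```

Note that the full cycle length t_1 + t_1* + δ plays the role of the bracketed coefficient of m in the interval formulas, so that taking the phase modulo this length selects the correct envelope for any m.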

Figure 4.17. Alternating stimulus envelopes for Σ0 and Σ0*. Stimulus S_0 is applied to Σ0 every τ_0 time steps in I_1, I_2, ...; stimulus S_0* is applied to Σ0* every τ_0* time steps in I_1*, I_2*, .... All stimuli are off in the lag intervals of length δ. S_0* is shown as less than S_0 only to aid in discriminating between the envelopes I_m and I_m*; S_0* = S_0 or S_0* > S_0 might equally well have been chosen, depending upon the given experiment.

Suppose now that a C(Σ0) already exists, having been obtained by prior training as suggested in Section 4.4. Then the alternating stimulus sequence is turned on (keeping, of course, the same period τ_0 for Σ0 that was used to form C(Σ0)). Two questions arise: (1) Will a self-re-exciting cycle C(Σ0*) develop? (2) Will cross-inhibition between C(Σ0) and C(Σ0*) develop?

The reasoning of Section 4.4 may be applied to convince oneself of an affirmative answer to (1). Since C(Σ0) and C(Σ0*) are proximate (and N is small), they certainly will influence one another: connections will go from one cycle to the other. Since a majority of the neurons of C(Σ0), say, will be highly fatigued after an interval I_m, the neurons of C(Σ0*) will have little effect upon them and the corresponding synapse-values will drop. Likewise, neurons of C(Σ0*) will tend

to be fatigued after an interval I_m* and will be little affected by neurons of C(Σ0), etc. Thus, an affirmative answer to (2) may be expected.

Once again, for a demonstration of these claims, appeal is made to the simulation. Unfortunately, lack of time and money for computer usage forced cessation of the experimental work before the additional assembly C(Σ0*) had actually evolved. However, as will be seen in Chapter 7, the results obtained tend to confirm the claims made in every way; in particular, cross-inhibition appeared to be developing.

4.6 Summary

The main results of this chapter are summarized below:

(1) A steady-state calculus is developed for networks with uniform random distributions of connections. Letting ρ be the network density, this calculus relates the threshold curve V(r) with the expected number of neurons firing at time t in steady-state as follows:

E(F(t+1)) = F_b = η_1(λ(t)) |R̄_{r_q}(t)|,  where  λ(t) = (|R_0(t)| / N) ρ,

R_0(t) is the set of neurons firing at t - 1, and R̄_{r_q}(t) is the set of neurons whose recovery states r equal or exceed r_q. The assumption of steady-state reduces this to:

E(F(t)) = F_b = η_1(λ(0)) |R̄_{r_q}(0)|.

Neurons of 𝒩 are assumed to be distributed randomly and uniformly over recovery states r = 0, 1, 2, ..., r_q at t = 0; i.e., E(R_r(0)) = constant = F_b for r = 0, 1, 2, ..., r_q.

(2) In general, positive and negative synapse-values will be present in 𝒩, giving a decomposition of ρ into corresponding components:

ρ = ρ_{-s_0} + ρ_{-s_0+1} + ... + ρ_{-1} + ρ_0 + ρ_1 + ... + ρ_{s_1},

where -s_0, -s_0+1, ..., -1, 0, 1, ..., s_1 are the synapse-values (s_0 > 0) and ρ_s is the expected number of connections received by a neuron of 𝒩 with weight s. It was shown that for some settings of the ρ_s, the negative connections suppress firing of certain neurons at t + 1 if F(t) > F_b and "free" certain neurons for firing at t + 1 if F(t) < F_b. A general procedure for calculating this effect was given. This form of negative feedback seems to ensure homeostasis, i.e., in steady-state, transient deviations of F(t) from F_b will not build up into undamped, possibly fatal, oscillations.

(3) The steady-state calculus was extended to networks with distance-bias, ρ = ρ(r) for r ≤ R (the "disk" distribution). The expected number of connections received by a neuron i ∈ 𝒩 from a neuron j ∈ 𝒩 is ρ if d(j,i) ≤ R, zero otherwise. A type of closed geometry (quasi-toroidal) is used over 𝒩.

(4) An attempt was made to characterize (simple) cell-assemblies as closed cycles of subsets, C(Σ0), arising from a patterned stimulation of Σ0. The synapse-values from the successor-set Σ_i to Σ_{i+1} of C(Σ0) tend to high, excitatory values, while all others (e.g., the "back" values from Σ_{i+1} to Σ_i) tend to small or even negative values. C(Σ0) is responsive only to the particular pattern which created it.

(5) The case of alternation of activity of cell-assemblies was discussed. In the case that two negatively correlated stimuli are applied to input areas Σ0 and Σ0* respectively, the hypothesis was defended that the resulting cell-assemblies C(Σ0) and C(Σ0*) will be cross-inhibitory.
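The steady-state relation summarized in (1) above can be illustrated numerically. The sketch below assumes a Poisson approximation for the number of connections a neuron receives from the firing set, so that η_k(λ) = P(Poisson(λ) ≥ k), and finds F_b by simple iteration; the parameter values and the Poisson assumption are illustrative, not taken from the original calculations:

```python
import math

def eta(k, lam):
    """eta_k(lambda): probability that a neuron receives at least k
    connections, assuming the incoming count is Poisson(lambda)."""
    return 1.0 - sum(math.exp(-lam) * lam**j / math.factorial(j)
                     for j in range(k))

def steady_state_fb(n, rho, r_m, r_q, iters=200):
    """Iterate F <- eta_1(F * rho / n) * |Rbar_rq| to its fixed point F_b.

    |Rbar_rq| counts the neurons at or beyond the quiescent recovery
    state r_q, under a uniform spread over r = 0, ..., r_m."""
    rbar = (r_m - r_q + 1) * n / (r_m + 1)
    f = n / (r_m + 1)        # start from the uniform-distribution guess
    for _ in range(iters):
        f = eta(1, f * rho / n) * rbar
    return f
```

Because the map F → η_1(Fρ/N)·|R̄_{r_q}| is increasing and bounded by |R̄_{r_q}|, the iteration converges monotonically from any positive starting guess at which the map exceeds its argument.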

Symbols and Terms Introduced in Chapter 4

|A| - the cardinality of the set A.

A_K^S - the event "a neuron receives exactly K connections with synapse-value S" (Section 4.3.3).

Ā_K^S - the event "a neuron receives K or more connections with synapse-value S" (Section 4.3.3).

alternating periodic stimulus - a stimulus applied periodically first to one subset Σ0 ⊂ 𝒩 over an interval I_m of t_1 time steps, then applied periodically (usually at a different rate) to another subset Σ0* ⊂ 𝒩 over an interval I_m* of t_1* time steps. The stimulus (of magnitude S_0) is applied every τ_0 time steps in the intervals I_m, and (of magnitude S_0*) every τ_0* time steps in the intervals I_m*. After a lag of δ time steps, this sequence is repeated, etc. (Section 4.5.2 and Figure 4.17).

behavior of 𝒩 - the sequence F(0), F(1), F(2), ..., F(t-1), F(t), F(t+1), ... of the numbers of neurons of 𝒩 firing at each time step (Section 4.3).

C_R - a disk of radius R centered at a neuron of 𝒩, from which the neuron is expected to receive ρ = ρ_R connections. The neuron receives no connections from outside C_R (Section 4.3.4).

C(Σ0) - a closed cycle of subsets Σ0' → Σ1' → Σ2' → ... → Σ_τ' = Σ0' arising from periodic stimulation of the subset Σ0 ⊂ 𝒩 (Section 4.4). (Note: in later discussions the prime is dropped; Σ0 literally means the neurons of Σ0 that fire when the stimulus is applied.)

d(j,i) - the distance from neuron j ∈ 𝒩 to neuron i ∈ 𝒩 (Section 4.2.2).

distance-bias distribution - a distribution of connections over 𝒩 in which the probability that a neuron i receives a connection from a neuron j is a function of d(j,i) (Section 4.2.2).

E, E_i - compound events used to determine the probability that a neuron receives K (K = 1, 2, ...) positive connections from 𝒩 in the case that negative connections are present in 𝒩 (Section 4.3.3).

E(X) - the expectation, or expected value, of the random variable X.

epilepsy - the condition of violent fluctuations in F(t), usually with the result that F(t) goes to zero (Section 4.3).

F(t) - the number of neurons of 𝒩 firing at t; also the set of neurons of 𝒩 firing at t. (This dual usage should occasion no confusion, since the context will always indicate which meaning is intended.) (Section 4.3.1).

F_b - E(F(t)) when 𝒩 is operating in stable steady-state (Section 4.3).

F_τ - the number of neurons of 𝒩 firing τ time steps after stimulation of Σ0 directly due to the stimulus (Section 4.4.2); F_τ = |Σ_τ|.

F_τ' - the total number of neurons of 𝒩 firing τ time steps after stimulation of Σ0 (Section 4.4.2). Note: F_τ' = F_τ + F_b = |Σ_τ'|.

fatal oscillations - (see epilepsy) violent oscillations in F(t), leading to F(t) going to zero (Section 4.3).

I_m, I_m* - the intervals within which an alternating periodic stimulus is applied (Section 4.5.2 and Figure 4.17).

M_k(t) - the subset of neurons of 𝒩 at time t requiring at least k connections to fire (Section 4.3.2).

N - the number of neurons of 𝒩; N = |𝒩|.

N_k(t) - the set of neurons of 𝒩 at time t receiving at least k connections from R_0(t) (Section 4.3.2).

negative feedback - a homeostatic mechanism by which the steady-state behavior of 𝒩 is forced to be stable, i.e., E(F(t)) = F_b and the oscillations of F(t) "not too violent" (Section 4.3.3).

"on-off" stimulus envelope - a sequence of intervals in which a single periodic stimulus is alternately "on" (being applied) for a fixed number of time steps, then "off" (not being applied) for a fixed number of time steps (Section 4.4.2).

oscillations - variations of F(t) from F_b (Section 4.3).

P(E) - the probability of the event E.

P_k - the probability that the sum of all incoming synapse-values to a neuron of 𝒩 be ≥ k (negative feedback present) (Section 4.3.3).

P_E(Σ0 → Σ_τ) - an effective path from Σ0 to Σ_τ (Section 4.4.2).

periodic stimulus - a stimulus applied to a subset Σ0 ⊂ 𝒩 every τ_0 time steps over an "on" interval, then suppressed ("off") for a further interval; this basic sequence is then repeated, etc. (Section 4.4.2).

quasi-toroidal geometry - the geometry in which networks of the model are embedded in order to obtain a distance metric (Section 4.3.4).

r_m - the maximum value of the recovery state.

r_q - the recovery state corresponding to the quiescent value of V(r) (Section 4.3.2).

r̄_q - the expected value of r for neurons operating in steady-state; r_q ≤ r̄_q ≤ r_m (Section 4.3.2).

R - the radius of the disk C_R (Section 4.3.4).

R_r(t) - the subset of neurons of 𝒩 with recovery state r at time t (Section 4.3.2).

R̄_{r_q}(t) - the total subset of neurons having recovery state r ≥ r_q at time t (Section 4.3.2).

R(t) - the distribution of neurons of 𝒩 over recovery states at time t (Section 4.3.2).

S_t - the subset of neurons of 𝒩 receiving connections from F(t-1) (the subset of neurons firing at t-1); S_t is called the successor-set to F(t-1) (Section 4.3.1).

stability - the condition of the input-free behavior of 𝒩 in which F(t) does not oscillate "too violently" about its mean F_b (Section 4.3).

steady-state behavior of 𝒩 - behavior of 𝒩 that satisfies the stability criterion (Section 4.3).

successor-set - see S_t above.

uniform random distribution of connections - a connection distribution over neurons of 𝒩 in which any neuron of 𝒩 is equiprobably connected to any other neuron of 𝒩 (Section 4.2.1).

δ - the lag between successive applications of alternating stimuli (see "alternating periodic stimulus" above) (Section 4.5.2 and Figure 4.17).

δR_0(t) - R_0(t) - E(R_0(t)) = R_0(t) - F_b (Section 4.3.3).

ΔA(w) - the element of area about the point w in a continuous two-dimensional neural network (Section 4.2.2).

λ̄ - the mean value of the synapse-levels (Section 4.3.3).

λ(t) - the expected number of connections received by a neuron of 𝒩 from R_0(t) at time t (Section 4.3.2).

λ_ji - the synapse-level from neuron j to neuron i.

η_k(λ(t)) - the probability that a neuron of 𝒩 receives at least k connections from R_0(t) at time t (Section 4.3.2).

ρ - the network density parameter (Section 4.2.1); ρ = Nρ_0, where ρ_0 is the expected number of connections received by a neuron of 𝒩 from any other neuron of 𝒩.

ρ_s - in the case of negative feedback, the density of connections with synapse-value s (Section 4.3.3). In this case,

ρ = ρ_{-s_0} + ρ_{-s_0+1} + ... + ρ_{s_1}.

ρ(r), ρ(d(j,i)) - densities as functions of distance (Section 4.3).

Σ0 - a subset of 𝒩 to which external stimulus is to be applied (Section 4.4).

Σ0* - the alternate input subset to Σ0 in the case of alternating periodic stimulus (Section 4.5.2).

Σ_τ', Σ_τ - subsets of 𝒩 evolving τ time steps after stimulation of Σ0 (Section 4.4.2) (see F_τ, F_τ' above).

τ - the number of time steps after stimulating Σ0 (Section 4.4.2).

τ_0 - the interval between successive applications of the stimulus to Σ0 (Section 4.4).

τ_0* - the interval between successive applications of the stimulus to Σ0* (Section 4.5.2).

5. METHODOLOGY OF EXPERIMENTS

5.1 Introduction

In the next two chapters, experimental results obtained using networks with cycles are presented. These results constitute the empirical verification of the claims advanced in Chapter 4. These claims, stripped of complicating qualifications, reduce to the following three:

(1) It is possible to produce stable, stimulus-free behavior in networks with cycles (with or without distance-bias) by means of appropriate (a) threshold curve setting, (b) initial distribution of neurons over recovery states, and (c) distribution of synapse-values over connections. This behavior will remain stable under perturbation by a moderate external stimulus.

(2) It is possible to produce in some networks with cycles closed cycles C(Σ0) as a result of periodic stimulation of a certain input set Σ0, the stimulation occurring in a sequence of "on-off" envelopes (the "training" period). C(Σ0) will consist of a sequence of subsets

Σ0 → Σ1 → Σ2 → ... → Σ_τ → Σ0

with the property that for j ∈ Σ_k, i ∈ Σ_{k+1}, k = 0, 1, 2, ..., τ-1, τ (where Σ_{τ+1} ≡ Σ0), the synapse-value S(λ_ji) tends to be strongly excitatory, while for j ∈ Σ_k, i ∈ Σ_n, n ≠ k+1, S(λ_ji) tends to be moderately positive, zero, or even inhibitory. C(Σ0) is a candidate for a cell-assembly and is a "learned" response of the network to the given stimulus.

(3) It is possible to produce in some networks with cycles a pair of mutually cross-inhibiting cycles C(Σ0) and C(Σ0*), where

C(Σ0): Σ0 → Σ1 → ... → Σ_τ → Σ0,  C(Σ0*): Σ0* → Σ1* → ... → Σ_τ* → Σ0*,

and the values of the connections between neurons of the two cycles tend to inhibitory values.

Substantiation of claim (1) and related topics will be the object of Chapter 6. Investigation of claims (2) and (3) is deferred to Chapter 7.

An important assumption pervades the following two chapters: simulated networks closely approximate the abstract (ideal) networks discussed in Chapter 4. That this need not be the case is shown by the following example. The random drawing of a random number n, used in determining the connection distribution for a network 𝒩, is implemented by a pseudo-random number generator. Pseudo-random number generators for digital computers are known to have many pitfalls, often producing "skewed" distributions of n. It is, therefore, essential to determine just how closely the resulting simulated distribution approximates the theoretical distribution. Such matters as this are relegated to Appendix B, and it will be assumed that, for all practical purposes, the theoretical and the simulated models coincide.

The material is presented in these chapters in parallel with the development of Chapter 4. Networks with uniform random distributions of connections are examined first; then networks with distance-bias are considered. Both cases are subdivided into two subcases: (1) negative feedback is absent (synapse-values all positive) and (2) negative feedback is present (positive and negative synapse-values are present). This order of presentation is a departure from the actual historical order. In the latter, networks with uniform random distributions of connections were considered first for the steady-state case, then for the cycle-of-subsets case. Lack of success in producing cycles of subsets, together with considerations such as those given in Chapter 4, led to

the introduction of distance-bias distributions into the networks of the model. The sequence above was then repeated for networks with distance-bias: steady-state experimentation, then cycles-of-subsets experimentation. The almost immediate success of the latter led to a lengthy experiment, described in Chapter 7, culminating in an alternating-cycles (cell-assemblies) experiment.

Since the steady-state experiments for both cases (uniform random and distance-bias distributions) shared many things in common, they are presented as a unit; similarly for the cycle-of-subsets experiments.

Before the description of the experiments, a brief review is given in the next section of the experimental methodology followed in this work. This is done since it is essential that the reader understand the nature of the advantages and the disadvantages offered by the simulation which forms the basis of this work. Finally, at the end of both chapters, the main experimental results are recapitulated in a summary.

5.2 Methodology

The general methodology used in Chapters 6 and 7, not dissimilar to that of experimental physics or chemistry, is the following. First, a hypothesis about the behavior of a network is made, given certain parameter values, etc. This hypothesis is defended by the calculus of Chapter 4 as far as possible, it being recognized that the initial setting of the parameters is only approximate. The experiment is performed (run), resulting either in "failure" (hypothesis not confirmed for the given parameter settings) or in "success" (hypothesis confirmed for the given parameter settings). If the result was "failure",

the experiment is repeated, varying the parameters in a systematic way. Successive failures, of course, do not invalidate the hypothesis for all possible combinations of parameter values, but rather only for the values (or ranges of values) tested. Likewise, "success" merely provides a set of parameters that work; success is not guaranteed for all possible combinations of parameter values, although it might be implied for a range of parameter values, etc.

Actual experiment must be tempered with reality, especially since the experimental apparatus used(1) is extremely expensive and relatively inaccessible. In the ideal case, a set of parameters might be varied over a large spectrum of values. In practice this would yield far too many possible behaviors to be profitably analyzed in the span of a lifetime, even if the experimental apparatus were available for such extensive use. Here the skill and intuition of the experimenter enter in an essential way in reducing the number of unnecessary or redundant runs, and in making meaningful inferences from incomplete data. The latter takes two forms: (a) The networks used are so large and complicated that a detailed monitoring of the complete state of a network at each time step is impractical, resulting in an astronomical volume of computer output. Therefore, values of state variables of the network must be sampled in an economical and, to the experimenter, significant fashion. (b) The expense and relative inaccessibility of the experimental apparatus often dictate reducing the range of parameters to be tested. The experimenter must then examine his incomplete set of data and infer that an untried setting of parameters might yield success. This type of inference is,

(1) The IBM 7090 computer with an IBM 1410 satellite subsystem at The University of Michigan Computing Center.

of course, potentially very dangerous, and must be carefully defended a priori with whatever analytical tools are at hand. Usually strong "trends" are present in this situation, making the inference more plausible.

The parameters of interest in the sequel are:

(1) N = |𝒩|, the size of 𝒩. Generally N = 400; however, there were several runs involving N = 200 and one involving N = 900.

(2) ρ, the connection density (uniform random case). Typical values were ρ = 6, ρ = 12, ρ = 24.

(2') ρ and R, if distance-bias is present, where R is the radius of the neighborhood disk C_R. A typical setting was ρ = 55, R = 6.

(3) The initial distribution of synapse-values over connections.

(4) The threshold curve V(r). In this study, r_a = 3 and r_m = 19; r_q varied with the experiment.

(5) The fatigue curve Φ(ℓ) and the associated tables of values Δ1(ℓ) and Δ2(ℓ). The fatigue function was taken as an additive component of the effective threshold, instead of multiplicative as in Chapter 3.

(6) The synapse-value curve S(λ) and the associated tables of probabilities U(λ) and D(λ).

Since the network-generating program was rather time-consuming, the parameters N, ρ, and R (if distance-bias were present) were varied as little as possible. Internal computer storage determined an absolute upper bound on N of approximately 1000 neurons; however, such factors as ease of analysis, the time required to simulate one time step, etc., dictated a moderate value of N. N = 400 was taken as a compromise: it is large enough that the statistical assumptions of the theory should hold true,

yet not so large as to make running time exorbitant or the analysis of results any more tedious than necessary.

The initial distribution of synapse-values over connections likewise tended to be fixed, although a set of experiments was devoted exclusively to a study of the effects of varying this parameter; similarly for S(λ) and the tables U(λ) and D(λ).

By far the most varied parameter was the threshold curve. For a given N and ρ, the calculus of Chapter 4 gives information about the form of V(r) needed to guarantee stability. However, especially in the cases of negative feedback and distance-bias, the calculations are unwieldy and at best yield a first crude estimate for V(r) (or for ρ, but it is easier to vary V(r), as noted). Consequently, finer "tuning" of the network may be necessary by varying V(r) slightly. It is important to note that the calculations were used primarily as a guide to obtaining an initial estimate of the network parameters. Then calculations were abandoned and experimentation begun. The modus operandi was to avoid (if possible) much tedious calculation and place the burden of success upon the simulation. The analytical theory, of course, remains important as an aid to the understanding of the models; only the excessively tedious calculations are bypassed.

The fatigue curve Φ(ℓ) and its associated tables of values Δ1(ℓ) and Δ2(ℓ) were seldom varied, except in a final series of control runs. As noted in Chapter 4, this function was not really adequately analyzed in this work; that is, no general calculus similar to that of Chapter 4, Section 4.3, was developed. Although not an impossible task, the modification of the analysis of Section 4.3 to consider the effective threshold V(r) + Φ(ℓ) will have to await a future work.
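As an example of the kind of consistency check relegated to Appendix B (Section 5.1), the uniformity of a pseudo-random number generator can be examined with a chi-square statistic over equal-width bins. The sketch below uses Python's library generator and an arbitrary bin count, purely for illustration; it is not the generator of the original program:

```python
import random

def chi_square_uniformity(draws, bins, rng=random.random):
    """Chi-square statistic for uniformity of `rng` over `bins` equal
    cells; compare against a chi-square table with bins - 1 degrees of
    freedom."""
    counts = [0] * bins
    for _ in range(draws):
        counts[int(rng() * bins)] += 1
    expected = draws / bins
    return sum((c - expected) ** 2 / expected for c in counts)
```

For 10 bins the statistic has roughly 9 degrees of freedom; values far above the tabled rejection point (about 21.7 at the 1% level) would signal the kind of "skewed" distribution warned of in Section 5.1.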

6. STIMULUS-FREE BEHAVIOR IN NETWORKS WITH CYCLES

6.1 Experimental Objectives and Procedures

6.1.1 Objectives

The objectives of the experiments described in this chapter are twofold: (1) to exhibit networks that maintain stable, input-free behavior (steady-state); (2) to derive experimentally stable networks that are adequate for the cell-assembly experiments of Chapter 7.

(1) is the avowed purpose of claim (1), Chapter 5, Section 5.1. The implication of (2) is that some networks may maintain steady-state behavior, but not yield the desired closed cycles of subsets (cell-assemblies) when subjected to periodic stimulation. This might occur if the network is too small or if the threshold curve is too "steep". In the first case, a sufficiently large fund of neurons might not be available for recruitment into the paths P_E(Σ0 → Σ_{τ0-1}), so that they never close into a cycle C(Σ0). In the second case, V(r) might be so large in the vicinity of r = τ_0 (τ_0 the stimulus rate, as in Chapter 4, Section 4.4) that the set Σ_{τ0-1} cannot fire the neurons of Σ_{τ0} (= Σ0), their threshold values being too high. Again, P_E(Σ0 → Σ_{τ0-1}) would never close into C(Σ0).

The implications of "adequate" stable, steady-state behavior, therefore, are that (a) N is sufficiently large and (b) V(r) is steep enough to maintain stable steady-state behavior but not so steep that P_E(Σ0 → Σ_{τ0-1}) will never close into a cycle. (a) raises the dilemma of running time, since the larger the N, the longer, hence the more costly, the experimental runs. It was blatantly assumed that N = 400 was adequate. This assumption will be defended in Chapter 7, where the possibility that this value of N may be too small will also be examined.

Similarly, (b) raises the dilemma of parameter variations yielding many runs, hence again the issues of expense and time.

As mentioned in Section 5.1, the historical procedure was to perform a series of steady-state experiments, varying the network parameters until a network displaying a very stable(1) behavior was obtained. Then this network would be subjected to periodic stimulus and judged for adequacy. If inadequate, a new series of steady-state runs would be performed with a new set of parameters; if adequate, a series of closed-cycles (cell-assembly) experiments would be initiated, etc. Since a sufficient number of problems arose in the steady-state experiments per se, they are treated as a unit in this chapter; Chapter 7 is devoted to the closed-cycles experiments.

6.1.2 Outline of General Experimental Procedure

The experimental procedure used may be divided into three phases: initialization or set-up, run-in, and detailed long-run testing. These are described below.

Phase I. Initialization or Set-Up

1. Given N and ρ (N, ρ, and R if distance-bias is to be present), the corresponding network is generated. The neurons of 𝒩 are then uniformly distributed over recovery states r = 0, 1, ..., r_m (r_m was taken as 19 throughout the study), yielding

E(R_r(0)) = N/(r_m + 1), r = 0, 1, ..., r_m.

2. Synapse-levels λ_ji(0) are assigned, according to a given

(1) The empirical criteria for "very stable" behavior are derived from Chapter 4 and will be described in Section 6.1.2 below.

distribution, to all connections j → i determined by the connection-assignment scheme used in 1 above; i.e., the connections are "weighted". In the early experiments described below, the λ_ji(0)'s were all set equal to a value giving a synapse-value of +1. Later, the λ_ji(0)'s were distributed over ranges of λ giving both positive and negative synapse-values (Chapter 4, Section 4.3.3).

3. The neurons of 𝒩 are distributed, according to a given distribution, over fatigue states ℓ = 0, 1, ..., ℓ_max. In most experiments described below, ℓ_i(0) was set to ℓ_max (complete recovery with respect to fatigue), i = 1, 2, ..., N.

4. The functions V(r), Φ(ℓ), and S(λ) and the tables U(λ), D(λ), λ = 0, 1, ..., λ_max, and Δ1(ℓ), Δ2(ℓ), ℓ = 0, 1, ..., ℓ_max, are specified.

5. A subset Σ0 ⊂ 𝒩 is selected to fire at t = 0, E(|Σ0|) = N/r_q. Usually, a random sample of neurons of 𝒩 was selected so that a fixed "starting" stimulus S_0 would be expected to cause N/r_q neurons to fire at t = 0. Apart from this "starting" stimulus at t = 0, no further stimulus was applied in the steady-state experiments.

Phase II. Run-In

The firing of Σ0 of step 5, Phase I, properly begins the run-in phase of experimentation. In this phase, the network is allowed to operate over a time interval [0, T] until one of three conditions arises: (1) F(t) goes to zero, 0 < t ≤ T; (2) the observer interrupts the run, for reasons to be mentioned below; (3) t = T (the end of the run interval is reached). Typically, T was chosen as 100, 200, or 400.

If (1) occurred, the network was either overdamped or underdamped

and some parameter (usually V(r)) modification is necessary. The appropriate parameter is revised and the run-in trial is repeated, starting with step 5, Phase I, again. (2) usually occurs only if it is obvious to the observer that the behavior of the network is grossly overdamped or underdamped, anticipating that F(t) would go to zero soon anyway. This did not occur too often, since the observer did not always have direct access to the experimental apparatus. In case (3), the network's behavior was labelled temporarily as "stable", since F(t) ≠ 0, t = 0, 1, ..., T.

During the course of the run-in, a number of network parameters are sampled at the discretion of the experimenter. These include:

(a) F(t) for each t;
(b) R(t) for each t (distribution of neurons over recovery states at time t);
(c) Φ(t) for each t (distribution of neurons over fatigue states at time t);
(d) Samples of λji(t) for selected time steps t;
(e) The "firing pattern" at t, for each or selected t. This is a "picture" of the network, showing the neurons firing at t.

The outputs (a) - (e) of the run-in were then studied. If F(t) remained bounded, E(F(t)) = Fb, and in particular if no underdamped oscillations appeared to be developing, the network was cleared for further testing from t = T + 1 on (Phase III). The distributions of λji(t) were checked (when samples were obtained) for sudden or peculiar changes from the distributions at t = 0. This is primarily a test to ensure that no sets of neurons are operating at too high or too low firing rates.
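The run-in driver implied by conditions (1)-(3) and output (a) above can be sketched in modern terms. The sketch below is a hypothetical reconstruction, not the original program (which ran on 1960s hardware): `ToyNetwork` merely fakes an F(t) trace so that the control flow of Phase II can be exercised.

```python
import random

class ToyNetwork:
    """Hypothetical stand-in for the simulator: F(t) is drawn around a
    baseline Fb, and goes to zero at a preset step if requested."""
    def __init__(self, Fb, die_at=None, seed=0):
        self.Fb, self.die_at, self.rng = Fb, die_at, random.Random(seed)

    def step(self, t):
        if self.die_at is not None and t >= self.die_at:
            return 0
        return max(1, self.Fb + self.rng.randint(-3, 3))

def run_in(net, T, interrupt=lambda net, t, F: False):
    """Phase II driver: run until F(t) = 0 (condition 1), an observer
    interrupt (condition 2), or t = T (condition 3)."""
    eeg = []                               # output (a): F(t) for each t
    for t in range(T + 1):
        F = net.step(t)
        eeg.append(F)
        if F == 0:
            return "dead", t, eeg          # condition (1)
        if interrupt(net, t, F):
            return "interrupted", t, eeg   # condition (2)
    return "stable", T, eeg                # condition (3): provisionally stable
```

A network surviving to t = T is only provisionally "stable"; the EEG list returned here corresponds to the F(t) traces plotted in the figures of this chapter.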

If oscillations appeared to be forming, or if some other anomaly appeared to be present, one of two courses might be followed: (a) if the anomaly appeared to be not too serious, the run-in might be continued from t = T + 1 for an additional T time steps (subject to conditions (1) - (3) above). "Not too serious" means that hints were present suggesting that the anomaly might be purely transient in nature. (b) If the anomaly appeared not to be of a transient character, the appropriate parameter (usually V(r)) was changed and the run-in repeated from step 5, Phase I.

Phase III. Long-Run Testing

Once past the hurdle of Phase II, the network was subjected to further running from t = T0 + 1 on, T0 being the terminal time step of Phase II (T0 = T or T0 = 2T, etc., depending upon the option followed in Phase II). The same outputs (a) - (e) of Phase II are obtained as desired by the experimenter. If, for t sufficiently large, the behavior of the network appeared stable with no non-transient anomalies present, the network was judged to be "very stable" and passed on as a candidate for the cell-assembly experiments. "Sufficiently large" means anywhere in the limits t = 400 to t = 1000. Notice that "very stable" is a purely empirically inferred condition. "Very stable" means essentially that:

(1) F(t) does not differ sharply from Fb except in a purely random, transient fashion; i.e., undamped oscillations are not building up, E(F(t)) = Fb. In particular, F(t) never goes to 0.

(2) No sub-rosa accumulation of neurons with high fatigue values occurs. This could lead to a pocket of hyper-refractory neurons, the evils of which have already been expounded in Chapter 4.

(3) No cumulative large deviations of the λji's occur with the effect of either damping out F(t) or producing underdamped F(t)'s. This could occur, for example, if the synapse-level growth law were too fast.

The occurrence of any one of the anomalies mentioned in (1) - (3) is grounds for modification of parameters and return to Phase II.

Notice that Phases II and III take advantage of the modularity present in forming a network in steps 2 - 4 of Phase I. In fact, the parameters of these steps may be varied at any time step, it not being essential to always back up to step 5 of Phase I. For example, throughout the course of Phase III, it might appear that V(r) allows a slight underdamping, the cumulative effects of which could produce fatal oscillations. The experimenter may, if he wishes, interrupt the run at a certain point, insert a new threshold curve, and continue from that point on. Moreover, as a matter of course, the entire state of the network was "saved" periodically on magnetic tape for future back-up or retrieval purposes. Thus, a library of experiments (at different time steps) was built up for future reference or modification.

6.1.3 Hypothesis

The specific hypothesis being tested in this chapter is:

(Stable, Steady-State Hypothesis) Given a network with specified parameters N, p, R (if distance-bias is present), V(r), Φ(ℓ), U(λ), D(λ), Δ1(ℓ), Δ2(ℓ), the network maintains stable, steady-state behavior, E(F(t)) = Fb, as a result of a selected subset Σ0 being stimulated at t = 0.
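The set-up steps of Phase I (section 6.1.2) can be summarized as a short sketch. All names here are hypothetical reconstructions; the choice λ0 = 48 for the common initial synapse-level, and the uniform (non-distance-biased) connection draw, are assumptions for illustration only, with the parameter values taken from Basic Experiment I of section 6.2.1 (N = 400, p = 6, rm = 19).

```python
import random

def initialize(N=400, p=6, r_m=19, r_q=17, lam0=48, seed=0):
    """Hedged sketch of Phase I, steps 1-5. lam0 is a hypothetical
    synapse-level standing for 'the value giving synapse-value +1'."""
    rng = random.Random(seed)
    # Step 1: generate the network; each neuron receives p connections
    # drawn uniformly from the whole network (no distance-bias here),
    # and neurons are uniformly distributed over recovery states,
    # so E(R_r(0)) = N/(r_m + 1) for each r.
    inputs = {i: [rng.randrange(N) for _ in range(p)] for i in range(N)}
    recovery = [rng.randint(0, r_m) for _ in range(N)]
    # Step 2: all synapse-levels set to the common lambda_0.
    levels = {(j, i): lam0 for i in inputs for j in inputs[i]}
    # Steps 3-4 (fatigue states at l_max; V, Phi, S, U, D tables) omitted.
    # Step 5: select Sigma_0, expected to fire N/r_q neurons at t = 0.
    sigma0 = rng.sample(range(N), N // r_q)
    return inputs, recovery, levels, sigma0
```

The modularity noted above shows up here directly: steps 2-4 are independent assignments over an already-generated connection scheme, so any one of them can be redone without regenerating the network.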

Phase II may provisionally affirm the hypothesis or may completely invalidate it. In the former case, more complete confirmation awaits Phase III. In the latter case, parameter modification is indicated (Phase I). The hypothesis may also be rejected in Phase III, if a non-transient anomaly arises.

6.2 NETWORKS WITH UNIFORM RANDOM DISTRIBUTIONS OF CONNECTIONS

6.2.1 Series I. Networks with Positive Connections Only

In this section, experimental results are given for networks in which the initial distributions of the λji(0)'s were such that the corresponding synapse-values S(λji(0)) were positive. The primary purpose of these experiments was to demonstrate the validity of the basic theory of Chapter 4 (4.3.2) without the complication of negative feedback. Several experiments (variants of the basic experiments below) included negative feedback; however, they were performed before the principles of Chapter 4, 4.3.3, were well understood. Their description is, therefore, included in this section.

Basic Experiment I

The basic claim of the theory developed in Chapter 4, Section 4.3.2, is the following: Given a network with density p (p ≠ p(R)), S(λji(t)) = 1, and R(t) such that Rr(t) = N/(rm + 1), the network will maintain a stable, steady-state behavior provided V(r) is chosen so that the expected number of neurons of Rrq(t) firing at t is Fb = N/rq. The assumption is that few, if any, neurons of the Rr(t) for r < rq fire at t. A simple calculus relates Rrq(t), p, N, Fb, and F(t), namely

    Fb = E(F(t+1)) = E(R0(t+1)) = πk(X(t)) · Rrq(t)

where

    πk(X(t)) = Σ (j = k to ∞) e^(-X(t)) X(t)^j / j!    and    X(t) = p·F(t)/N,

the number of active inputs per neuron being treated as Poisson with mean X(t); here V(r) = 1 on the classes free to fire, so k = 1.

To test this basic hypothesis, the threshold curve V(r) of Figure 6.1 was chosen. Since V(r) = ∞ for r = 0, 1, 2, ..., rq = 15 and V(r) = 1 for r = rq + 1 = 16, 17, 18, ..., rm = 19, perforce no neurons of Rr(t) for r ≤ rq may fire. The remaining network parameters were: N = 400, p = 6, λji(0) = λ0 such that S(λ0) = 1, Φ(ℓ) ≡ 0, U(λ) = .072, D(λ) = .0052 for all λ. Σ0 = 27 neurons were stimulated at t = 0; i.e., F(0) = 27. The recovery distribution R(0), of course, was initialized uniformly, Rr(0) = N/20.

It was expected that neuron i would fire for ri lying between 16 and rm = 19, perhaps distributed around a mean of 17, E(rq) = 17. The result, surprising at first to the author but typical of the networks described in this section, was that after approximately rm/2 time steps, F(t) repeated itself every 17 time steps:

    F(t + 17) = F(t) for all t > 10,

and rq = 17 (exactly). This effect was termed "periodicity". Discussion of it will be delayed until Section 6.2.3. It will suffice for now to observe that it is not a desirable behavior pattern, since, from the point of view of information theory, such rigid periodicity suggests that no information is present, where "no information" would be taken to mean that no neurons are available for recruitment into cell-assemblies. Consequently, it

Figure 6.1. Summary of Results for Basic Experiment I and Variant. (a) Threshold curve V(r) and other parameters (N = 400, p = 6, Σ0 = 27). (b) EEG ("electroencephalogram") for Experiment I: F(t) is given for selected time intervals; the shaded area repeats from t = 37 on. (c) EEG for Experiment I, Variant (Σ0 = 14); the shaded area repeats from t = 37 on.

Notes: (1) The time axis in all the EEG's in this chapter and the next is displaced one place to the right from the one used in the text; i.e., t = 0 in the text is equivalent to t = 1 in the EEG's, t = t0 is equivalent to t = t0 + 1, etc. (2) Successive points of the graph of F(t) are connected by straight line segments only as a guide to the eye for inspecting the EEG; a continuous F(t) is not implied.

[Plots not reproduced.]

became a subgoal of this work to eliminate this type of behavior pattern if possible. It is interesting to note that the introduction of distance-bias (with negative feedback) automatically seemed to eliminate it.

Figure 6.1 contains a summary of the results of this experiment, including an "electroencephalogram" or EEG, i.e., F(t) plotted as a function of t. The experiment was repeated for a different value of Σ0: Σ0 = 14. The results were quite similar to those above: F(t) displayed periodicity with period rq = 17. The corresponding EEG is given in Figure 6.1.

Variations of Basic Experiment I

Two interesting variations of the basic experiment were performed next. In the first, the synapse-levels λji(0) were distributed uniformly over the range 32 ≤ λji(0) ≤ λmax = 63, corresponding to the synapse-value range 0 ≤ S(λji(0)) ≤ S(λmax) = 32. In the second, the λji(0)'s were distributed uniformly over the entire range of λ, 0 ≤ λji(0) ≤ λmax, giving a uniform distribution of synapse-values over equal positive and negative ranges. In both cases, Σ0 = 18.

The results of the first variation were precisely similar to those of the basic experiment: periodicity with period rq = 17. F(t), however, did not begin repeating itself until t = 27; i.e., for t ≥ 27, F(t + 17) = F(t). The second case, however, was grossly overdamped, and F(5) went to zero. The respective λ-distributions and EEG's are indicated in Figure 6.2. It is worth noting that the more complex λ-distribution of the first variation above stretched out the interval in which F(t) was not periodic. This point will be considered in 6.2.3 below.
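The rigid periodicity just described is easy to test for mechanically. The helper below is hypothetical (not part of the original apparatus); it scans an EEG trace for the condition F(t + T) = F(t) holding from some onset time onward, as reported for these experiments.

```python
def find_periodicity(eeg, period, t_min=0):
    """Return the earliest onset t0 >= t_min such that
    eeg[t + period] == eeg[t] for every t from t0 to the end of the
    trace, or None if the trace never locks into the given period."""
    for t0 in range(t_min, len(eeg) - period):
        if all(eeg[t + period] == eeg[t]
               for t in range(t0, len(eeg) - period)):
            return t0
    return None

# A synthetic trace that locks into period 17 from t = 10 on, as in
# Basic Experiment I (the values themselves are made up).
head = [27, 20, 24, 30, 22, 18, 25, 28, 21, 26]
cycle = list(range(20, 37))        # 17 values, one full period
eeg = head + cycle * 5
print(find_periodicity(eeg, 17))   # -> 10
```

Applied to the traces above, such a check would report onset t = 10 with period 17 for Basic Experiment I, onset t = 27 for the first variation, and no lock-in at all for well-modulated non-periodic behavior.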

Figure 6.2. Additional Variations of Basic Experiment I. (a) Variation 1. All parameters identical to those of Basic Experiment I, except the distribution of λji(0) (i, j = 1, ..., 400). In this variation, the λji(0)'s are distributed uniformly over the interval 32 ≤ λji(0) ≤ 63; in the EEG (Σ0 = 18), the shaded area repeats. (b) Variation 2. Similar to Variation 1, except the λji(0)'s are distributed uniformly over the interval 0 ≤ λji(0) ≤ 63; this network was overdamped. (c) S(λ), the synapse-value curve (a function of λji) for Variations 1 and 2, with synapse-values running over -32 ≤ S(λ) ≤ 32 as λ runs from 0 to 63.

[Plots not reproduced.]

Basic Experiments II

The preceding experiment and its variations demonstrated the validity of the claim of Chapter 4: the neurons of the network can be forced to "cycle" with an expected period rq = N/Fb by setting V(r) so large for 0 ≤ r ≤ rq that no neurons in the corresponding Rr(t) may fire, yet such that for rq < r ≤ rm, Fb neurons are expected to fire at any time step. It is now necessary to see how well the claim holds up when V(r) is chosen so that some neurons in the Rr(t), r < rq, may fire. This would appear to be the more realistic situation.

Tentative calculations showed that for N = 400, p = 6, etc., as above, threshold curves of the general form of Figure 6.3 should yield stable, steady-state behavior. More detailed calculations, such as those of Table 4.1 of Chapter 4, show that stability may occur, but that in the absence of appropriate negative feedback fatal oscillations may build up. Consequently, a sequence of V(r)'s was tested for stability, the network otherwise precisely as it was in Basic Experiment I. These curves, together with the corresponding Σ0's and EEG's, are given in Figure 6.4.

The results may be summarized briefly by noting that the "steeper" curves produced stability, the "shallower" ones instability. In the former case, the sets Mk(t) for k = 2, 3, 4, ... are smaller in cardinality than in the latter case. For example, in curve 2, which produced stability, E(M2(t)) = 40 = E(M3(t)) = E(M4(t)), etc., while in curve 3, which produced instability, E(M2(t)) = 200. In all cases, E(M1(t)) = 80. Since p is relatively small in these experiments, the πk(X(t))'s become negligible quite rapidly as k increases beyond k = 2 or

Figure 6.3. General Form of the Threshold Curve V(r). The general form of V(r) to be used in subsequent experiments is a monotone decreasing function of r: V(0) = V(1) = V(2) = ∞ (absolute refractory period, ra = 2), thereafter decreasing to the quiescent value Vq = 1 at r = rq = 16.

[Plot not reproduced.]

Figure 6.4. Summary of Results for Basic Experiments II. (a) Threshold Curves (1)-(6); in each, M1 = 80. (b) EEG's and Σ0's for Variations (1)-(6). Curve (4) is used for Variation (7).

[Threshold-curve plots not reproduced.]

Figure 6.4 (continued). (b) EEG's for Variations (1) - (7) of Basic Experiments II. Variation (1) (Σ0 = 14): underdamped, F(11) = 0. Variation (3): underdamped, F(14) = 0. Variation (6): underdamped (F(5) = 74, F(6) = 89). Variation (7) (Σ0 = 45, approximately a 100% increase over the previous Σ0's): underdamped, F(23) = 0 (F(16) = 87, F(17) = 101).

[Plots not reproduced.]


k = 3. Therefore, "steepness" may be judged primarily in terms of the cardinalities M2(t) or M3(t).

Once more, the positive results of these experiments were marred by the periodicity occurring in Basic Experiment I. At this point, various mechanisms for eliminating this were studied, culminating eventually in the introduction of negative feedback and distance-bias as described in section 6.3.2 below.

Variations of Basic Experiments II

Experiments 2 and 4 of Figure 6.4 were repeated for the case that the λji(0)'s were uniformly distributed over the range 0 ≤ λ ≤ λmax = 63, all other network parameters remaining the same as in Basic Experiment I. The EEG's, etc., are given in Figure 6.5, experiments 2' and 4' respectively. In 2', F(t) went to zero at t = 57. In 4', the run was terminated prematurely by a programmed "clock" error at t = 43. Inferring from the general pattern of the behavior F(t), it appeared that the network might be stable (Phase II). Notice that the behavior F(t) in 4' is not periodic and is well modulated; i.e., F(t) does not fluctuate violently as in 2'. The results of experiment 4' gave considerable impetus to detailed consideration of the effects of negative feedback.

A third variation of Experiments 1 - 7 above entailed changing only the U(λ) and D(λ) tables and resuming Experiment 3. The new tables, EEG, etc., are given in Figure 6.6. For each λ, the relationship D(λ)/(U(λ) + D(λ)) = 1/17 holds (assuming rq = 17); however, for increasing λ, U(λ) and D(λ) increase. The motivation for this will be discussed in Chapter 7, Section 7.2. Essentially, it was to encourage path closing to form cycles

Figure 6.5. Variations of Basic Experiment II, Variants (2) and (4). Experiments (2') and (4') below are identical to Variants (2) and (4) of Figure 6.4 except that the λji(0)'s were distributed uniformly over the range 0 ≤ λ ≤ λmax = 63. This means that the initial synapse-values S(λ) are distributed uniformly over -32 ≤ S(λ) ≤ 32.

Experiment 2': underdamped, F(57) = 0 (F(49) = 78, F(50) = 82).

Experiment 4': this run was terminated prematurely due to a programmed clock error. The behavior appears to be oscillatory, but not violently so. There are no signs of the rigid periodicity evidenced in earlier experiments, a consequence of the presence of negative synapse-values.

[EEG plots not reproduced.]

λ    U(λ)   D(λ)        λ    U(λ)   D(λ)
0    .100   .00625      33   .200   .01250
1    .103   .00645      34   .210   .01310
2    .106   .00663      35   .220   .01375
3    .109   .00681      36   .230   .01440
4    .112   .00700      37   .240   .01500
5    .115   .00720      38   .250   .01560
6    .118   .00738      39   .260   .01625
7    .121   .00756      40   .270   .01690
8    .124   .00775      41   .280   .01750
9    .127   .00794      42   .290   .01810
10   .130   .00813      43   .300   .01870
11   .133   .00830      44   .310   .01940
12   .136   .00850      45   .320   .02000
13   .139   .00870      46   .330   .02060
14   .142   .00889      47   .340   .02120
15   .145   .00908      48   .350   .02190
16   .148   .00927      49   .360   .02250
17   .151   .00945      50   .370   .02310
18   .154   .00964      51   .380   .02380
19   .157   .00982      52   .390   .02440
20   .160   .01000      53   .400   .02500
21   .163   .01020      54   .420   .02620
22   .166   .01040      55   .440   .02750
23   .169   .01055      56   .460   .02880
24   .172   .01075      57   .480   .03000
25   .175   .01094      58   .500   .03120
26   .178   .01110      59   .520   .03250
27   .181   .01130      60   .540   .03380
28   .184   .01150      61   .560   .03500
29   .187   .01170      62   .580   .03620
30   .190   .01188      63   .600   .03750
31   .193   .01205
32   .196   .01225

Figure 6.6. Variation of Basic Experiment II, Variant (5). This experiment is a repetition of Variant 5, Figure 6.4, in which the U(λ) and D(λ) tables were replaced by the tables shown above. For each λ in this table, D(λ)/(U(λ) + D(λ)) = 1/rq = 1/17.

Figure 6.6 (continued). EEG: Σ0 = 20; the shaded area repeats from t = 51 on.

[Plot not reproduced.]

C(Σ0) in the periodic stimulus experiments. Again, F(t) cycled precisely on rq = 17. The new tables U(λ) and D(λ) will be assumed in subsequent experiments until further notice. Thus far a detailed study of synapse-level drift had not yet been carried out.

Threshold Curves with Dips

To anticipate somewhat the results of Chapter 7, Section 7.2.1, the stable networks exhibited so far failed to yield cycles C(Σ0) under periodic stimulation of a subset Σ0 (this Σ0 is to be distinguished from the Σ0 used to start a network at t = 0). The following ad hoc mechanism was introduced in an attempt to encourage formation of paths PE(Σ0, ΣT) and closing of these paths into cycles C(Σ0): V(r) is no longer a monotone decreasing function of r, but decreases rather rapidly to low threshold values, increases to a maximum, then eventually decreases to the quiescent value Vq (= 1). The general form of this type of threshold curve is shown in Figure 6.7(a). The principle was to provide sufficiently low threshold values for neurons with recovery states near r = r0 that recruitment into paths PE(Σ0, ΣT) and eventually closing of PE(Σ0, ΣT) into a C(Σ0) would be encouraged.

It is interesting to note that apparently some observed recordings of neuron membrane potentials show recovery characteristics similar to those of Figure 6.7(b), where the dip occurs around r = rm or even for r >> rm, followed by a hyper-refractory swell. These curves were eventually completely rejected, for reasons to be discussed in Chapter 7, Section 7.3.1. However, their study did provide some interesting sidelights on the behavior of networks with cycles. Therefore, it is included here.
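The defining property of the newly adopted tables of Figure 6.6, D(λ)/(U(λ) + D(λ)) = 1/rq = 1/17, is algebraically equivalent to U(λ) = 16·D(λ), so a table of this shape is determined by the U(λ) column alone. The sketch below generates D(λ) from U(λ) under that rule; the generating rule is inferred from the tabulated values, not stated explicitly in the source, and the function name is hypothetical.

```python
from fractions import Fraction

def make_ud_tables(u_values, r_q=17):
    """Given the U(lambda) column, derive D(lambda) so that
    D/(U + D) = 1/r_q exactly, i.e. D = U/(r_q - 1)."""
    table = []
    for u in u_values:
        u = Fraction(u)                      # exact rational arithmetic
        d = u / (r_q - 1)
        assert d / (u + d) == Fraction(1, r_q)
        table.append((float(u), float(d)))
    return table

# First few U(lambda) entries of Figure 6.6.
us = ["0.100", "0.103", "0.106", "0.109", "0.112"]
table = make_ud_tables(us)                   # first entry: (0.1, 0.00625)
```

The growth of both U(λ) and D(λ) with λ, at a fixed ratio, is what makes high-level synapses both rise and decay faster while preserving the expected firing interval of 17.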

Figure 6.7. Threshold Curves with Dips. (a) General Form of a Threshold Curve with Dips Used in the Sequel; the dip Vd occurs around r = 5 to 8. (b) Observed Resting and Action Potential of Squid Axon (adapted from Eccles [ ], page 51): the membrane potential recovers from a resting level near -70 mV over roughly 4 msec, with a hyper-recovered phase. From (b) we might conclude that the threshold curve is at least bounded below by the curve for the action potential, allowing thus approximately .5 msec of hyper-recovery for the squid axon.

[Plots not reproduced.]

Figure 6.8. λ-distribution for Experiments of Figure 6.9. The plot gives the expected number of connections with level λ (in %), with S(λ) < 0 for the lower range of λ and S(λ) > 0 for the upper range. The dips in the distribution at λ = 41 and λ = 44 arose from a minor programming error and were not intentional. This distribution was selected strictly ad hoc, before the theory of Chapter 4 was developed.

[Plot not reproduced.]

Figure 6.9. Threshold Curves with Dips Experiments, First Series. Except for V(r) and the initial λ-distribution (see Fig. 6.8), the networks used are identical to those of Variant 3, Figure 6.4. In Variants 1-3, M1 = 80.

Variant 1 (Σ0 = 20): behavior slightly underdamped, not completely periodic; e.g., F(33) = 69, F(52) = 61, F(53) = 71, F(69) = 57, F(70) = 68, F(71) = 51, F(88) = 65, F(89) = 53, F(94) = 66.

Variant 2 (Σ0 = 20): underdamped, F(25) = 0.

Variant 3 (Σ0 = 20): underdamped, F(26) = 0 (F(16) = 76).

[Plots not reproduced.]

Figure 6.10. Threshold Curves with Dips Experiments, Second Series. Repeat of the Variants of Figure 6.9, all parameters identical except that all λji(0)'s were set to λ0, as in Experiment I.

Variant (1) (Σ0 = 20): behavior rigidly periodic; the pattern repeats from t = 57 on.

Variant (2) (Σ0 = 20): behavior rigidly periodic; repeats from t = 57 on.

Variant (3) (Σ0 = 20): behavior not rigidly periodic, though a basic pattern recurs.

[Plots not reproduced.]

Two basic series of experiments were performed using threshold curves with dips. The first entailed precisely the same network parameters as Experiments 3 above, except for V(r) and the initial λ-distribution. For the latter, the λji(0)'s were distributed according to Figure 6.8. Some of the results of this series are given in Figure 6.9 (V(r)'s, Σ0's, EEG's). It is interesting to note that Experiment 1 of Figure 6.9 did not yield exact periodicity, although F(t) appeared to be "homing in" on an exact period of rq = 17.

The second series was a repeat of the first, with the λji(0)'s all equal to λ0, so that S(λ0) = 1, just as in Basic Experiment I. The results of this series are summarized in Figure 6.10. Two interesting items emerge from these: (a) Curves 2 and 3 resulted in stability, whereas in the first series they produced instability. (b) Once again, exact periodicity reappears. This certainly strengthens the growing feeling that lack of complexity in the underlying λ-distribution is the main determinant of this undesirable phenomenon.

6.2.2 Series II - Networks with Negative Feedback

The experiments of the preceding section were devoted in the main to networks with initial positive, equal synapse-values. Growing discontent with the rigid periodicity obtained for stable networks led to a few experiments with negative feedback present. It was realized that a systematic study of the role of negative feedback was essential to the dual goals of obtaining "very stable" behavior and "adequate" networks for the periodic stimulation experiments. At this point, the basic theory of Chapter 4, Section 4.3.3, was worked out. This produced an invaluable aid to the intuition. However, the complexity of the steady-state calculations for connection distributions of the form

    p = Σ (s = -s0 to s1) ps,

in particular for the cases s0 = s1 ≥ 2, was sufficiently great that often only crude estimates were obtained, then refuge taken in the simulation. Of course, such calculations could be programmed for a computer. This, from the point of view of long-range studies of networks with cycles, eventually must be done. It was not done in the current work simply because there was a sufficient amount of programming effort involved in the simulation per se that precious little time was left for anything else. As will be seen in this and the next few sections (6.3.2), the crude approximations used to determine V(r) given p = Σ ps were surprisingly accurate.

Three series of experiments were performed. These are described in turn below. To summarize the results briefly, stability was obtained for several V(r)'s, and rigid periodicity was virtually eliminated. Unfortunately, the "best" results involved threshold curves with dips. It was the desire to eliminate this ad hoc mechanism that gave the initial impetus to consideration of networks with distance-bias.

Negative Feedback Experiments 1: p = p1 + p-1 = 12, p1 = 9, p-1 = 3

The experiments of Figure 6.10 were repeated for basically the same network of that example, except that the network density p was raised to p = 12 and the initial λ-distribution was such that the initial distribution of +1-valued connections had density p1 = 9, that of the -1-valued connections density p-1 = 3. The results (V(r)'s, EEG's, Σ0's) are summarized in Figure 6.11. All networks displayed underdamped behavior,

Figure 6.11. Negative Feedback Experiments 1. Repetition of the experiments of Figure 6.9, all parameters unchanged except that the initial λ-distribution was made according to p = p1 + p-1, where p1 = 9, p-1 = 3, p = 12. The V(r)'s for Variants (1)-(3) below are the same as in Fig. 6.9.

[EEG plots for Variants (1)-(3) not reproduced.]

with the third surviving the longest (F(89) = 0). The underdamping appears clearly to be a result of the proportionately larger number of positive than negative connections present in the network. This suggested the second series of experiments, in which p = 6, p1 = 3, p-1 = 3.

Negative Feedback Experiments 2: p = p1 + p-1 = 6, p1 = 3, p-1 = 3

The excess of positive-valued connections seemed to produce instability in the preceding experiments. Therefore, the density of +1-valued connections was reduced to 3, that of the -1-valued connections remaining 3. The network parameters (except V(r)) remain as before. The results of some of these experiments are summarized in Figure 6.12. The first two involved "normal" threshold curves (no dips). The second yielded rigid periodicity on rq = 16 (notice that M1(0) = 100 in these experiments, to compensate for the presence of negative connections). In the first, F(t) was homing in on a rigid period rq = 16 by the termination of the run. The second two involved threshold curves with dips. Again, in one of these the behavior was rigidly periodic; in the other, no definite periodicity was present, although the average firing rate (interval between firings) of neurons of the network was rq = 16.

Negative Feedback Experiments 3

Case 1: p = p-2 + p-1 + p1 + p2 = 6, ps = 1.5, s = -2, -1, 1, 2. The experiments of Figure 6.10 were repeated for this more complex synapse-value distribution. The results are summarized in Figure 6.13. One additional experiment, using threshold curve 2 of Figure 6.4, was performed; it is also included in Figure 6.13, Experiment 4. The first two threshold curves produced instability. The last two, the last of which is a "normal" threshold curve, were stable for t ∈ [0, 99], but

Figure 6.12. Negative Feedback Experiments 2. The same basic network of the preceding runs was used as basis for the experiments here, except that p = p1 + p-1 = 6, p1 = p-1 = 3. The respective threshold curves for Variants (1) - (4) are given in (a), the EEG's in (b). (a) Threshold Curves for Variants (1) - (4); in each, M1 = 100.

[Threshold-curve plots not reproduced.]

Figure 6.12 (continued). (b) EEG's for Variants (1) - (4) of Negative Feedback Experiments 2.

Variant (1) (Σ0 = 28): behavior slightly underdamped, appears to be homing in on cycles on rq = 16.

Variant (2) (Σ0 = 28): behavior rigidly periodic from approximately t = 43 on (rq = 16).

[Plots not reproduced.]

Figure 6.12 (continued). (b) (continued) EEG's for Variants (1) - (4) of Negative Feedback Experiments 2.

Variant (3) (Σ0 = 28): behavior rigidly periodic, rq = 16; the pattern repeats from t = 64 on.

Variant (4) (Σ0 = 28): behavior not rigidly periodic (though possibly homing in on periodicity); average firing rate rq = 16.

[Plots not reproduced.]

apparently underdamped. Rigid periodicity was not present in either of the last two cases.

Case 2: p = p-2 + p-1 + p0 + p1 + p2 = 24, p0 = 12, ps = 3, s = -2, -1, 1, 2. The total network density was raised to p = 24 with a 0-valued sub-distribution of density 12. The intent was to provide connections whose effect would not be noticeable initially, but which might provide additional positive or negative connections if needed by the network later (through synapse-level drift). This would be particularly relevant to the formation of paths in the periodic stimulation experiments. The last two experiments of Figure 6.9 were repeated using this distribution. The results are summarized in Figure 6.13, Case 2. The EEG's display violent oscillations, but rigid periodicity is lacking.

6.2.3 Conclusions

Stability may be produced in networks with uniform random distributions of connections, just as predicted by the theory of Chapter 4 for this case. However, it was produced, with several exceptions, at a cost: (a) unnatural, ad hoc threshold curves had to be used, or (b) rigid periodicity in F(t) occurred (F(t) = F(t + rq) and Σt = Σt+rq).

A detailed study of these experiments suggested that the introduction of distance-bias into the networks would eliminate both these difficulties, especially if negative feedback were present. Therefore, further work with these networks was abandoned and attention turned toward networks with distance-bias.

Periodicity

The difficulty of (b) perhaps deserves further mention. It is

Figure 6.13. Negative Feedback Experiments 3. Case 1: p = p-2 + p-1 + p1 + p2 = 6, ps = 1.5, s = -2, -1, 1, 2. Case 2: p = p-2 + p-1 + p0 + p1 + p2 = 24, p0 = 12, ps = 3, s = -2, -1, 1, 2. For Case 1, Variants (1) - (4), threshold curves (1) - (3) of Figure 6.9 and (2) of Figure 6.4 respectively were used. For Case 2, threshold curves (3) and (2) respectively (Fig. 6.9) were used for Variants (5) and (6).

Case 1, Variant (1) (Σ0 = 20): initially overdamped, F(26) = 0. Variant (2) (Σ0 = 20): initially overdamped, F(27) = 0.

[EEG plots not reproduced.]

Figure 6.13, Case 1 (continued). Variant (3) (Σ0 = 20): behavior oscillatory, underdamped, non-cyclic (V(r) with dips). Variant (4) (Σ0 = 20): behavior oscillatory, underdamped, non-cyclic (V(r) "normal", i.e., no dips).

[Plots not reproduced.]

Figure 6.13 (continued), Case 2. Variants (5) and (6) (Σ0 = 20): behavior underdamped, oscillatory, non-cyclic in both.

[Plots not reproduced.]

a theorem of the Theory of Finite Automata that the behavior of a finite automaton without inputs will cycle on a definite period, say T. The period T is, roughly, an increasing function of the number of states of the automaton. The networks of this section may be regarded as finite (probabilistic) automata. Therefore, one intuitively expects the behavior of these networks eventually to repeat itself, i.e., F(t+T) = F(t), E_(t+T) = E_t. One would like T to be very large, essentially infinite. If the threshold curve V(r) is very steep, with p = 6 (relatively small), N = 400, and only positive, equal connections present in the network, it is clear that in some sense the number of states (or state-transitions) is more restricted than if V(r) is shallower, the synapse-values are distributed over positive and negative connections, etc. Therefore, in the former case, less information is present than in the latter. The introduction of distance-bias adds another order of magnitude to T, and one would expect far richer behavioral patterns from networks with this feature.

6.3 NETWORKS WITH DISTANCE-BIAS

6.3.1 Introduction

The conclusions of the preceding section pointed to the necessity of introducing a distance-bias mechanism. The mechanism adopted was that described in Chapter 4. It has the advantage that for r > R a neuron receives no connections from other neurons at distance r from itself, yet within the disk r ≤ R it receives connections uniformly and equiprobably from neurons at distance r. The expected number of

connections it receives from C_R is p. This means that within a disk C_R, a receiving neuron behaves as though it were in a network with uniform random distributions of connections, with C_R playing the role of N and p the density of the disk, etc. With this mechanism present, "very stable" and "adequate" networks were very rapidly obtained, a surprising fact in view of the analytic intractability of such networks. To study these networks, it was necessary to start out again with the simplest cases (in particular with φ(ℓ) ≡ 0 and the X_ji(0)'s = X_0), develop an experimental intuition for the behavior of these networks, then gradually introduce greater complexity into them.

In the experiments of Section 6.2, the fatigue function was taken to be identically 0: φ(ℓ) ≡ 0 for ℓ = 0, 1, ..., ℓ_max. Since fatigue is a slow, cumulative function, over relatively short time intervals (t = 0 to 200, say) this should do little harm. However, for the longer time intervals necessary to determine "very stable" and "adequate" networks, a more realistic φ(ℓ) must be used. For example, even with negative feedback present in a network, there might by chance be a set of neurons that could fire at a rate much greater than 1/r_q for relatively long periods of time. These neurons might then eventually influence the behavior of the whole network, producing either instability or unwanted cycles. In fact, the entire heuristic argument of Chapter 4, Section 4.4, hinges on cycles not coming into existence spontaneously. One method of guaranteeing that this will be the case is use of the damping effect of the fatigue mechanism.
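The finite-automaton argument above can be made concrete with a small sketch. Everything below is illustrative (the toy 16-state update rule and the function name `find_cycle` are assumptions, not part of the original simulation program); it shows only that a deterministic, input-free update on a finite state space must eventually cycle, with period bounded by the number of states.

```python
def find_cycle(step, s0):
    """Iterate a deterministic, input-free update rule from state s0.

    Because the state space is finite, some state must repeat, and from
    the first repetition onward the trajectory is exactly periodic.
    Returns (tail, period): steps taken before entering the cycle, and
    the cycle length T.
    """
    seen = {}                # state -> time of first visit
    s, t = s0, 0
    while s not in seen:
        seen[s] = t
        s = step(s)
        t += 1
    return seen[s], t - seen[s]

# A toy 'network' with only 16 states: the trajectory must cycle
# within at most 16 steps, the analogue of the rigid periodicity
# F(t+T) = F(t) observed in the small, steep-threshold networks.
tail, period = find_cycle(lambda s: (3 * s + 5) % 16, 0)
```

With richer state (negative connections, distance-bias), the reachable state space grows enormously and T becomes astronomically large in practice, which is the sense in which the period is "essentially infinite."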

Figure 6.14. Spread-of-Excitation Experiment (Summary). N = 900, p = 20.5 (density within disk of radius R), R = 10, C_R = πR^2 ≈ 314, φ(ℓ) ≡ 0 (fatigue mechanism inoperative), X_ji(0) = X_0 so that S(X_0) = 1 (i,j = 1, ..., 900), U(X) and D(X) curves as in Fig. 6.6. [Threshold curve V(r) and EEG. E_0 = 36; the behavior is cyclic, repeating from t = 33 on with period τ = r_q = 16 (exactly).]
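The connection scheme summarized in this figure, incoming connections drawn uniformly and equiprobably from the disk C_R on a quasi-toroidal grid, might be sketched as follows. The function names and the per-source Bernoulli sampling are illustrative assumptions; the dissertation's actual sampling routine is not reproduced here.

```python
import math
import random

def disk_neighbors(k, side, R):
    """All neurons within toroidal distance R of neuron k on a
    side x side grid with wrap-around in both directions
    (quasi-toroidal geometry), neurons numbered row-wise from 0."""
    row, col = divmod(k, side)
    out = []
    for j in range(side * side):
        if j == k:
            continue
        r2, c2 = divmod(j, side)
        dr = min(abs(r2 - row), side - abs(r2 - row))
        dc = min(abs(c2 - col), side - abs(c2 - col))
        if math.hypot(dr, dc) <= R:
            out.append(j)
    return out

def sample_connections(k, side, R, rho, rng):
    """Draw incoming connections for neuron k uniformly and
    equiprobably from its disk C_R, so the expected in-degree is rho."""
    disk = disk_neighbors(k, side, R)
    p = rho / len(disk)          # per-source connection probability
    return [j for j in disk if rng.random() < p]

# For the Figure 6.14 parameters (30x30 grid, R = 10), the disk holds
# roughly pi * R^2, i.e. about 314 neurons.
disk = disk_neighbors(0, 30, 10)
incoming = sample_connections(0, 30, 10, 20.5, random.Random(0))
```

Within each such disk the receiving neuron sees, in effect, a uniform random network with C_R in the role of N, which is the point made in the text.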

Figure 6.15. Spread of Excitation (Illustration). (a) Firing patterns for the spread-of-excitation experiment of Figure 6.14. (b) A hypothetical illustration of the build-up of hyper-recovered neurons in the network when R is too small.

(a) Firing Patterns at t = 1, 2, 3, 4 for Experiment of Figure 6.14. Interpretation: Recall that the network is represented by a 30x30 grid of neurons. Assume that the neurons are numbered from 1 to 900 row-wise, from right-to-left and bottom-to-top (row 1: neurons 1-30; row 2: neurons 31-60; ...; row 30: neurons 871-900). At time t, the firing of neuron K is indicated by a 1 in the position of the grid corresponding to neuron K; non-firing by a 0 in the same position. The actual computer output shown below groups the neurons by fives; if all five neurons of a group did not fire at t, the word ZERO is inserted into the appropriate position for the group. A limitation imposed by the computer restricted the number of groups per line to nineteen. Consequently, the firing pattern output is somewhat awkward to interpret. In the output given in the sequel, the boundaries of the rows (1-30) are indicated by a vertical bar. The reason for such condensed output is rather interesting: the time per time step required to produce more legible output (i.e., the 30x30 grid), or equivalently to post-edit condensed output into legible output, would simply have been far too great to be economically feasible. In fact, even the condensed output obtained was a bit of a luxury.

Figure 6.15 (a) (continued). To illustrate the interpretation of the computer output, consider row 1 (neurons 30 down to 1) at time t = 1. The right-hand position of each group marks the beginning of that group (e.g., neurons 1, 6, 11, 16, 21, and 26). 0's to the left of the last non-zero digit of a group are "suppressed" (an unfortunate property of the output routine in this case) and left blank. Thus, at t = 1 in row 1 (neurons 1-30), neurons 1, 4, 7, 8, 12, 13, 15, 18, 19, 20, 24, and 29 fired. Row 2 at t = 1 (neurons 60 down to 31) appears similarly in the output block, split between the right-most three groups of one line and the left-most three groups of the next line of the ten-line block for this time step.

Notice that at t = 1 the firing neurons are clustered in rows 1-3 of the grid. At t = 2, the firing "wraps around" to rows 26-30, with few neurons in rows 1-3 firing. At t = 3, the firing has moved into the center of the grid (rows 11-22), with slightly sparser firings in the top rows (26-30) and even sparser at the bottom edge (rows 1-3). At t = 4, the firing has spread out fairly uniformly over the network (still sparse at the bottom edge, however).
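The group-of-five encoding just described can be decoded mechanically. The sketch below is a reconstruction from the description above (the function name and token handling are assumptions, not the original output routine): it restores the suppressed leading zeros of each group and returns the numbers of the neurons that fired.

```python
def decode_row(tokens, first_neuron):
    """Decode one row of the condensed firing-pattern output.

    Each token covers five consecutive neurons.  'ZERO' means none of
    the five fired; otherwise the token is a string of 0/1 digits with
    leading zeros suppressed, the rightmost digit belonging to the
    lowest-numbered neuron of the group.
    """
    fired = []
    for g, tok in enumerate(tokens):
        base = first_neuron + 5 * g
        if tok == 'ZERO':
            continue
        bits = tok.zfill(5)            # restore suppressed zeros
        for i, b in enumerate(reversed(bits)):
            if b == '1':
                fired.append(base + i)
    return fired

# Hypothetical row fragment: groups starting at neurons 1 and 6.
# '1001' -> neurons 1 and 4 fired; '11000' -> neurons 9 and 10 fired.
fired = decode_row(['1001', '11000'], 1)
```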

[Condensed firing displays for t = 1, 2, 3, 4; the OCR of the output block is largely illegible. The legible headers read: TIME 1 OUTPUT IS 36 FIRING DISPLAY; TIME 2 OUTPUT IS 1?6 FIRING DISPLAY; TIME 3 OUTPUT IS 161 FIRING DISPLAY; TIME 4 OUTPUT IS 86 FIRING DISPLAY.]

Figure 6.15 (continued). (b) Illustration of Build-up of Hyper-recovered Pockets in the Network when N is Large, R Small. Suppose the network is represented by a square, a neighborhood C_R being shown to illustrate the magnitude of R. Assume E_1 "clusters" at the bottom of the grid as in (a). As the excitation moves out from E_1 (wrapping around as in (a)), it gradually returns to the region of E_0 after roughly N^(1/2)/R time steps. If R is sufficiently small and N sufficiently large that this is r_q or greater, the neurons in the region of the returning wave and of E_0 itself will be highly recovered. This raises the possibility of massive firings leading to fatal oscillations. [Diagram: neurons in the shaded bottom region fire at t = 1; E_2 neurons fire at t = 2, E_3 neurons at t = 3, etc., the wave spreading and dispersing downward while the firing "wraps around" (R < N^(1/2)). Since none of these neurons can fire again until t + r_a time steps, and since any neurons of this region not in E_0 that fire at t+1 will be refractory until t + r_a + 1, this region will tend to be dead until revived by the returning wave of excitation from the top. At t = k, firing is fairly disperse but sparse around the bottom edge; consequently, if N^(1/2)/R is large (≈ r_q), a few neurons could trigger massive firing, after which this set of neurons becomes inactive and can recover.]
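The two magnitudes invoked in this illustration, the number of neighborhoods needed to cover the network and the N^(1/2)/R crossing time, are simple to estimate. A minimal sketch (the function name is illustrative only):

```python
import math

def spread_estimate(N, R):
    """Rough figures used in the text: how many disk neighborhoods
    C_R (area ~ pi R^2) it takes to cover an N-neuron grid, and how
    many time steps a wave of excitation needs to cross the
    sqrt(N) x sqrt(N) torus when it advances about R per step."""
    disk = math.pi * R * R
    neighborhoods = N / disk            # ~3 for N = 900, R = 10
    crossing_steps = math.sqrt(N) / R   # the N^(1/2)/R estimate
    return neighborhoods, crossing_steps

covers, steps = spread_estimate(900, 10)
```

For N = 900, R = 10 this gives roughly 3 neighborhoods and about 3 crossing steps, matching the observed 3-4 steps of spread; shrinking R (or growing N) pushes the crossing time toward r_q, which is exactly the hyper-recovered-pocket danger described above.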

6.3.2 Series I -- Familiarization

Spread-of-Excitation Experiment

The first experiment involving distance-bias paralleled Basic Experiment I of Section 6.2. The following network parameters were chosen: N = 900, R = 10, p = 20, X_ji(0)'s = X_0 so that S(X_0) = 1, φ(ℓ) ≡ 0, and D(X), U(X) as in Figure 6.6. The V(r) used is somewhat similar to that of Basic Experiment I, being large for r_a ≤ r < 14 and equal to 1 from there up to r_m. Figure 6.14 gives a summary of the parameters of this experiment, together with E_0 and the EEG. The behavior was rigidly periodic (as would be expected) with period r_q = 16 after t = 17; i.e., for t ≥ 17, F(t+16) = F(t), E_(t+16) = E_t.

There is an interesting sidelight to this experiment. The starting subset E_0 was chosen to lie along an edge of the grid. This was done deliberately to study the spread of excitation from E_0 over the entire network, and it brings out clearly the effects of the distance-bias. This is illustrated in Figure 6.15(a) for time steps 1 through 4. Notice that the excitation spreads in both directions from E_0, "wrapping around" to the top of the grid since the lower and upper edges are in reality adjacent (quasi-toroidal geometry). After time step 4, a sufficient number of neurons are firing over the entire network that no clear pattern of spread of excitation exists. Since C_R contains πR^2 ≈ 314 neurons, the network may be covered by approximately three neighborhoods. This means that one would expect the excitation to spread over the entire network in three to four time steps, as actually occurred. If the neighborhood radius had been very small, the excitation spread would have taken proportionately longer to cover the network. For large

N, this could lead to violent, possibly fatal, oscillations, since the spreading excitation would encounter progressively more recovered neurons. Therefore, two important principles emerged from this experiment: (1) E_0 must be chosen so that the spread of excitation from E_0 will not encounter areas of recovered neurons; (2) R must be chosen so that no neurons are isolated from the excitation spread too long (thus developing a pocket of recovered neurons). These ideas are illustrated in Figure 6.15(b).

Basic Experiment III

As in the case of Basic Experiment I, the preceding experiment shows that with distance-bias present a network can still be "forced" into stable (albeit periodic) behavior by a sufficiently steep threshold curve. It is now of interest to test more realistic threshold curves. For this purpose, curve 2 of Figure 6.4 was chosen. The remaining parameters were: N = 400, R = 3, p = 6.78, X_ji(0)'s = X_0 with S(X_0) = 1, φ(ℓ) ≡ 0, and D(X), U(X) as above. The parameters, EEG, E_0, etc., for this experiment are given in Figure 6.16. The behavior was unstable, underdamped, F(t) going to zero at t = 41. The small neighborhood radius illustrates the principles of the preceding paragraphs: during the dip in F(t) from t = 8 to t = 12, the non-firing neurons of the network recover four time steps, thus contributing to the swell that starts at t = 13. Had R been larger, more neurons could have fired for t = 8, ..., 12, possibly avoiding the build-up of fatal oscillations.

The subset E_0 was taken from the set of neurons i of the network such that r_i > 16. These neurons are randomly distributed over the network. As Figure 6.17

Figure 6.16. Basic Experiment III. N = 400, R = 3, p = 6.78, X_ji(0) = X_0 so that S(X_0) = 1 (i,j = 1, ..., 400), U(X) and D(X) as in Fig. 6.6. [Threshold curve V(r) and EEG; the behavior is unstable, F(t) reaching zero at t = 41.]

Figure 6.17. Firing Patterns for t = 1, 2, 3 of Basic Experiment III. The same basic conventions of Fig. 6.15(a) are used to interpret the computer output at time t. The main difference is that the network is now a 20x20 grid, twenty rows of twenty neurons each, numbered row-wise as before (row 1: neurons 1-20; row i: neurons (i-1)x20+1 through ix20; row 20: neurons 381-400). In the computer output, the neurons are again grouped by fives; if none of a group fired, the word ZERO is inserted into the position for that group, and vertical bars indicate the boundaries between rows. Thus, for example, neurons 19 and 39 (in rows 1 and 2) fired at t = 1. Notice that the neurons of E_0 are uniformly distributed over the network. [Condensed firing displays, largely illegible in the OCR; the headers read: TIME 1 OUTPUT IS 23 FIRING DISPLAY; TIME 2 OUTPUT IS 32 FIRING DISPLAY; TIME 3 OUTPUT IS 37 FIRING DISPLAY.]

shows, however, the small R caused the spreading excitation to cluster around E_0 at t = 1, 2, 3. This, of course, reinforces the arguments given above.

6.3.3 Networks with Negative Feedback, No Fatigue

In an attempt to understand better the behavior of distance-bias networks, a series of experiments was conducted for networks with various initial X-distributions and threshold curves, ignoring the fatigue mechanism, i.e., with φ(ℓ) ≡ 0 as in the previous experiments. These experiments, then, were intended as run-in experiments, the best of which would be isolated for further tests using the fatigue mechanism. They allowed a precise study of (a) the effects of varying the p_k's, V(r), and R, (b) the synapse-level drift over the time intervals in which the experiments were run, (c) the firing histories of individual neurons, etc. Five of these experiments are described here. As expected, none of them yielded rigidly periodic behavior, the combined effects of distance-bias, negative feedback, and reasonably shallow threshold curves providing a sufficiently large number of states that the period of the networks was essentially infinite.

Experiment 1: N = 400, R = 4, p = 24.4 = p_-2 + p_-1 + p_0 + p_1 + p_2, where p_k = p/5 for each k; V(r), U(X), D(X), E_0 as in Basic Experiment III. This experiment was run from t = 0 through t = 276. It was "stable" in the sense that F(t) did not become zero, yet F(t) oscillated within the bounds of 1 to 50, which seems too extreme. An analysis of the synapse-levels X_ji(t) at t = 100 revealed a net positive drift, indicating that the network was in fact basically underdamped; i.e., a majority of neurons were firing at rates greater than 1/r_q. The results

Figure 6.18. Experiment 1 (Section 6.3). Negative feedback is present: N = 400, R = 4, p = 24.4 = p_-2 + p_-1 + p_0 + p_1 + p_2, where p_k = p/5 for each k, and V(r), E_0, φ(ℓ), U(X) and D(X) as in Basic Experiment III. C_R = πR^2 ≈ 50. [EEG plots. E_0 = 18, distributed uniformly over the network. Behavior: oscillatory, the amplitude of the oscillations gradually decreasing from approximately t = 100 on. This network clearly was underdamped initially, yet the negative feedback was sufficiently strong to prevent the development of progressively larger oscillations. From t = 200 on, the oscillations remained within the bounds 3 ≤ F(t) ≤ 52.]

of this experiment are summarized in Figure 6.18. While the neighborhood size might be partially a factor here, it appeared more reasonable to assume that the proportion of positive synapse-values was too great, leading to undamped behavior. Experiment 2 was devised to test this hypothesis.

Experiment 2: N = 400, R = 4, p = 24.4 = p_-2 + p_-1 + p_0 + p_1 + p_2, where p_0 = p_-1 = p_-2 = p/4 and p_1 = p_2 = p/8; otherwise identical to Experiment 1. This network was run from t = 0 through t = 262. The oscillations remained bounded, with maximum 32, except at t = 180 when F(t) fell to 4. This would appear to confirm the hypothesis that more negative connections were needed, though the low value of F(t) at t = 180 suggests a slight overdamping. The results of this experiment are summarized in Figure 6.19.

Experiment 3: N = 400, R = 8, p = 25.1 = p_-2 + p_-1 + p_0 + p_1 + p_2, with p_0 = p_-1 = p_-2 = p/4 and p_1 = p_2 = p/8; all other parameters as in Experiment 2. The results are summarized in Figure 6.20. Note that the limits on the oscillations are F(t) = 6 to F(t) = 48.

Experiment 4: Same network as in Experiment 2, but with a different curve V(r). The curve V(r) and the results are given in Figure 6.21. The bounds on F(t) were approximately 15 to 44. The behavior appeared to be somewhat underdamped.

Experiment 5: N = 400, R = 6, p = 54.9 = p_-2 + p_-1 + p_0 + p_1 + p_2, where p_0 = p_-2 = p_-1 = p/4 and p_2 = p_1 = p/8; V(r) and all other parameters as in Experiment 4. The results are shown in Figure 6.22. The bounds on F(t) were 19 to 43. Again, F(t) appeared somewhat overdamped.
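The density splits used in Experiments 2-5 are easy to tabulate. A minimal sketch (the helper name is an illustrative assumption):

```python
def density_split(rho, quarters, eighths):
    """Split a total connection density rho over synapse-value
    classes: classes in `quarters` receive rho/4 each, classes in
    `eighths` receive rho/8 each -- the scheme of Experiments 2-5,
    e.g. p_0 = p_-1 = p_-2 = rho/4 and p_1 = p_2 = rho/8."""
    split = {s: rho / 4 for s in quarters}
    split.update({s: rho / 8 for s in eighths})
    return split

# Experiment 2: total density 24.4, negative classes favored.
exp2 = density_split(24.4, quarters=[0, -1, -2], eighths=[1, 2])
```

The class densities by construction sum back to the total p, so varying the split changes only the balance of excitation and inhibition, not the overall connectivity.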

Figure 6.19. Experiment 2 (Section 6.3). This network is identical in all respects to that of Experiment 1 (Fig. 6.18) except that the proportion of negative synapse-values has been increased: p = 24.4 = p_-2 + p_-1 + p_0 + p_1 + p_2, where p_0 = p_-1 = p_-2 = p/4 and p_1 = p_2 = p/8. The run was made from t = 1 to t = 262; the minimum value of F(t) was F(t) = 4 at t = 181, the maximum F(t) = 32 for several values of t. The entire EEG is shown. Notice that F(t) is better modulated in the last 70 time steps than earlier in the run: F_min = 11, F_max = 30 there. [EEG plots, t = 0-100 and t = 100-200.]

Figure 6.19 (continued). [EEG plot, t = 200 to the terminus of the run (computer time-interval trap).]

Figure 6.20. Experiment 3 (Section 6.3). This experiment is identical to Experiment 2 (Fig. 6.19) except that R has been doubled: R = 8 and p = 25.1, with the p_s (s = -2, -1, 0, 1, 2) as in Fig. 6.19. The EEG is given only for the first and last one hundred time-steps. [EEG plots. E_0 = 28, distributed uniformly over the network. F_max = 48, F_min = 6. F(t) is perhaps not as smooth as in Experiment 2.]

Figure 6.21. Experiment 4 (Section 6.3). This is identically the same experiment as Experiment 2 (Fig. 6.19), except that V(r) has been replaced by the curve shown. The first seventy and last one hundred time steps of the EEG are given. [Threshold curve V(r) and EEG plots. E_0 = 18, distributed uniformly over the network. E(F(t)) ≈ 25 (should be about 400/E(r_q), where E(r_q) = 17.5; i.e., F_b = 400/E(r_q) ≈ 23). F_min = 15, F_max = 45. F(t) appears, consequently, slightly underdamped.]

Figure 6.22. Experiment 5 (Section 6.3). All parameters are as in Experiment 4 (Fig. 6.21) except that R has been raised from 4 to 6; p = 54.9 = p_-2 + p_-1 + p_0 + p_1 + p_2, where p_0 = p_-2 = p_-1 = p/4 and p_2 = p_1 = p/8. [EEG plots. E_0 = 19. F(t) slightly underdamped; F_min = 19, F_max = 50 (one point only). Compare this EEG with that of Fig. 6.24, identically the same experiment as this one except that additive fatigue φ(ℓ) is present there.]

Resume of Experiments 1-5

By gradual "tuning" of the parameters R, p, p_k (k = -2, -1, 0, 1, 2), and V(r), with the fatigue mechanism inoperative (φ(ℓ) ≡ 0), it was possible to produce nearly stable behavior. "Nearly stable" means that while F(t) lies between certain bounds, e.g., F_b/2 ≤ F(t) ≤ 2F_b (approximately), where F_b = N/r_q = 400/17 ≈ 23.5, a tendency toward underdamped behavior nevertheless persists. In particular, a symptom of the latter was the fact that a relatively large number of neurons of the network were firing at rates greater than the expected rate 1/r_q. This suggests that it is essential to include the fatigue mechanism to obtain very stable behavior in these networks. The next section is devoted to this theme.

6.3.4 Networks with Negative Feedback, Fatigue Present

The fatigue mechanism used in the subsequent experiments differs from the one used in Chapter 3 in that φ(ℓ) is added to V(r) to determine the effective threshold. The advantage of this additive fatigue mechanism is that it does not affect the slope of V(r), whereas the multiplicative fatigue mechanism of Chapter 3 does vary the slope of the effective threshold function as φ(ℓ) increases from +1. Preserving the slope of V(r) means essentially that the relationships between the sets M_k(t) are preserved and the basic analysis of Chapter 4 still applies (after translation of V(r) by φ(ℓ)).

Four experiments (numbered in sequence with the experiments of Section 6.3.3) are described below. They form candidates for very stable and adequate networks, the last one being used as the basis for the final, long-run cell-assembly experiments of Chapter 7.
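The "nearly stable" criterion above amounts to a band around the baseline firing level. A minimal sketch of the arithmetic (the function name is illustrative only):

```python
def stability_bounds(N, r_q):
    """Baseline firing level and the 'nearly stable' band.

    With expected recovery period r_q, each neuron should fire about
    once every r_q steps, so F_b = N / r_q; behavior is called nearly
    stable when F(t) stays roughly within [F_b / 2, 2 * F_b].
    """
    Fb = N / r_q
    return Fb, (Fb / 2, 2 * Fb)

Fb, (lo, hi) = stability_bounds(400, 17)   # Fb is about 23.5
```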

Figure 6.23. Fatigue Curve φ(ℓ) and Tables Δ1(ℓ), Δ2(ℓ) for Experiments 6 and 7 (Section 6.3). The curves below were used for the experiments of Figures 6.24 and 6.25 to follow. Δ1(ℓ) = amount subtracted at time t from ℓ to get the new fatigue state for neuron i (i fired at t). Δ2(ℓ) = amount added at time t to ℓ to get the new fatigue state for neuron i (i did not fire at t). Notice that Δ2(ℓ) = Δ1(ℓ)/16 for all ℓ except ℓ = 0, 1, 62, and 63. [Fatigue curve φ(ℓ) for 0 ≤ ℓ ≤ ℓ_max = 63, and the table of Δ1(ℓ), Δ2(ℓ) values: Δ1 = 0 at ℓ = 0, 1; Δ1 = 1 (Δ2 = 1/16) for small ℓ; rising through Δ1 = 2 (Δ2 = 2/16) to Δ1 = 4 (Δ2 = 4/16) in the middle range (roughly ℓ = 20-33); then falling back through 2 to Δ1 = 1 for large ℓ, with Δ2 = 0 at ℓ = 62, 63.]

Experiment 6: N = 400, R = 6, p = 54.9 = p_-2 + p_-1 + p_0 + p_1 + p_2, where p_0 = p_-1 = p_-2 = p/4 and p_1 = p_2 = p/8; same network and parameters as in Experiment 5, with the additive fatigue function and tables of Figure 6.23. Except for the fatigue function, this is precisely the same network as in Experiment 5. The results are summarized in Figure 6.24. Notice that F(t) is bounded, 17 ≤ F(t) ≤ 40, obeying approximately the desired limits. The fatigue function definitely appears to have modulated F(t) somewhat, as is seen by comparing the EEG's of Figures 6.22 and 6.24.

Experiment 7: The exact network of Experiment 6 was used except that the synapse-values were re-scaled as follows. In all previous experiments, one unit of synapse-value was equivalent to 12 subunits; the threshold curve and the fatigue curve are represented in terms of these subunits. If, for example, a threshold value of 12 subunits is added to a fatigue value of 3, the effective threshold is 1.25 (= (12+3)/12) synapse-values. The relationship between synapse-values and subunits was re-scaled to 1:4, so that one synapse-value is equivalent to four subunits. If a threshold value of 4 subunits is added to a fatigue value of 3, the effective threshold is now 1.75 (= (4+3)/4) synapse-values. Given precisely the same network as in Experiment 6, this re-scaling should make the effects of fatigue more pronounced, and it did, as seen in Figure 6.25. The oscillations remained, after an initial transient, within the bounds 16 ≤ F(t) ≤ 38.

Experiment 8: This is a repeat of Experiment 7 with different Δ1(ℓ) and Δ2(ℓ) tables (see Figure 6.26). After an initial transient, 14 ≤ F(t) ≤ 49. Some neurons, however, were firing at rates > 1/r_q = 1/17. See Figure 6.27 for a summary of the results of this run.

Experiment 9: The exact network of Experiment 8 was used with a new threshold curve V(r). The new curve and the results of the run are given in

Figure 6.24. Experiment 6 (Section 6.3). This experiment is identical to Experiment 5 (Fig. 6.22) except that additive fatigue is present; φ(ℓ), Δ1(ℓ), and Δ2(ℓ) are as given in Fig. 6.23. [EEG plots. E_0 = 19, distributed uniformly over the network. The fatigue mechanism seems to have modulated F(t) somewhat: 17 ≤ F(t) ≤ 40.]

Figure 6.25. Experiment 7 (Section 6.3). Precisely the same network as Experiment 6 (Fig. 6.24), except that V(r) is expressed in terms of re-scaled (compressed) synapse-value subunits (see text). [EEG plot, run terminated at t = 100. E_0 = 19, etc. Compared with Experiment 6, the oscillations here are more violent; the effects of the fatigue mechanism appear to have been sharpened by the re-scaling (compare also with Experiment 5, Fig. 6.22).]
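The subunit re-scaling that distinguishes Experiment 7 from Experiment 6 can be sketched as arithmetic on the effective threshold (the function name is an illustrative assumption):

```python
def effective_threshold(v_subunits, fatigue_subunits, subunits_per_sv):
    """Additive fatigue: phi(l) is added to V(r) in subunits, and the
    total is then read in synapse-value units.  Compressing the scale
    from 12 to 4 subunits per synapse-value makes the same fatigue
    level cost proportionally more threshold."""
    return (v_subunits + fatigue_subunits) / subunits_per_sv

old = effective_threshold(12, 3, 12)   # 1.25 synapse-values
new = effective_threshold(4, 3, 4)     # 1.75 synapse-values
```

The same fatigue value of 3 subunits thus raises the threshold by a quarter of a synapse-value on the old scale but three-quarters on the new one, which is why the re-scaling sharpens the fatigue effect.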

Figure 6.26. Δ1(ℓ) and Δ2(ℓ) Tables for Experiments 8 and 9 (Section 6.3). These Δ1(ℓ) and Δ2(ℓ) tables were used for the experiments of Figs. 6.27 and 6.28. [Table, 0 ≤ ℓ ≤ 63, with Δ2(ℓ) = Δ1(ℓ)/16 throughout the interior: Δ1 = 0 (Δ2 = 1/16) at ℓ = 0, 1; Δ1 = 1 (Δ2 = 1/16) over most small and large ℓ; Δ1 = 2 (Δ2 = 2/16) and Δ1 = 4 (Δ2 = 4/16) in the middle range (roughly ℓ = 20-30); Δ2 = 0 at ℓ = 62, 63.]

Figure 6.28. The experiment was run from t = 0 to t = 399; apart from an initial transient, 8 ≤ F(t) ≤ 36.

Resume of Experiments 6-9

Introduction of the fatigue mechanism into the networks of Experiments 1-5, where for each ℓ, Δ2(ℓ) ≈ (1/r_q)[Δ1(ℓ) + Δ2(ℓ)] with r_q = 17 (equivalently, Δ2(ℓ) ≈ Δ1(ℓ)/16) and φ(ℓ) is an additive function, did indeed appear to smooth out the behavior of the networks. Re-scaling of the underlying synapse-value unit sharpened the effects of the mechanism.

6.4 SUMMARY OF EXPERIMENTAL RESULTS

Stable behavior was readily produced both in networks with uniform random distributions of connections and in networks with distance-bias when negative feedback was present. In the former case, however, an ad hoc design of V(r) had to be introduced to eliminate the rigidly periodic type of stable behavior frequently obtained there. If only positive connections were present, such rigidly periodic behavior, F(t+r_q) = F(t), E_(t+r_q) = E_t, appeared to be the order of the day.

In the distance-bias case, quasi-stable behavior, that is, behavior with underdamped tendencies, was obtained until the additive fatigue mechanism was introduced. This appeared to yield several "very stable" networks.

Figure 6.27. Experiment 8 (Section 6.3). This is identically the same experiment as Experiment 7 of Fig. 6.25, with the Δ1(ℓ) and Δ2(ℓ) tables of Fig. 6.26. [EEG plots, t = 0 to approximately 400. E_0 = 19, etc. After t = 4, 14 ≤ F(t) ≤ 49. Still, F(t) appears underdamped (too many neurons firing at rates > 1/r_q = 1/17).]

Figure 6.28. Experiment 9 (Section 6.3). This is a repeat of Experiment 8 (Fig. 6.27) with the V(r) curve given below. Experiment 9 forms the basis for the cell-assembly experiments of Chapter 7, Section 7.5. [Threshold curve V(r) and EEG plots, t = 0 to 200. E_0 = 19, etc. Apart from the initial transient, 8 ≤ F(t) ≤ 36.]

Figure 6.28 (continued). [EEG plots, t = 200 to 400.]

7. NETWORKS UNDER PERIODIC STIMULI

7.1 INTRODUCTION AND SURVEY OF RESULTS

The substance of this chapter is to verify claims (2) and (3) of Chapter 5, Section 5.1. In summary form, these are:

(2) It is possible to produce in some networks closed cycles (cell-assemblies) C(E_0) as a result of periodic stimulation of an input set E_0 within an "on-off" stimulus envelope.

(3) It is possible to produce in some networks mutually inhibiting, self-re-exciting cycles C(E_0) and C(E_0*) as a consequence of applying periodic stimuli to E_0 and E_0*, the stimulus to the one alternating with the stimulus to the other in "on-off" envelopes (stimulus to E_0 "on" and stimulus to E_0* "off" over an interval I_0, t ∈ I_0; conversely, stimulus to E_0 "off" and stimulus to E_0* "on" over I_0*, t ∈ I_0*).

The development of the material of this chapter parallels that of Chapter 4, Sections 4.4 and 4.5. First, the two basic classes of networks, those with uniform random distributions of connections and those with distance-bias, are considered under the effects of single periodic stimuli (claim (2)). Secondly, a specific distance-bias network in which a cycle C(E_0) has already been formed is considered under the effects of alternating periodic stimuli.

In general, networks (with or without distance-bias) with negative feedback only are considered. In the first case above, however, several networks with uniform random distributions of connections and positive connections only are considered under periodic stimulation. This is done merely to complete the argument of the preceding chapter, Section 6.2.3, on periodicity. There the claim was made that periodicity implied lack

of information, specifically, the lack of a reservoir of recruitable neurons. It will be seen that this is, in fact, the case. Hence the greater necessity for introducing mechanisms such as negative feedback and distance-bias in order to increase the amount of information present in the networks at hand. Even with the negative feedback mechanism present, the results using networks with uniform random distributions of connections were disappointing. Therefore, these networks were abandoned for the more complex distance-bias networks, for which some very stable examples had been obtained (Chapter 6, Section 6.3.4, Experiments 6-9). Just as these had been readily obtained once distance-bias was introduced, so were "adequate" networks (allowing path and cycle formation) equally readily produced when distance-bias was present. Next (Section 7.5), an "adequate" network was subjected to long-term training. Unfortunately, this uncovered an imbalance in the fatigue mechanism, which necessitated a series of control runs to isolate its exact nature. This being done, the experiment was continued using alternating periodic stimuli. The latter was prematurely terminated because of lack of funds and time, though not until a partial second cycle C(Σ0*) and the development of cross-inhibition were observed. Once again, lack of time and money forbade continuation of the training period to the point that self-reexcitation could be positively demonstrated. With the knowledge gained from these experiments, however, and by taking advantage of the larger computers emerging today, this most likely can be done using larger, richer, and more complex networks without undue difficulty.
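The "on-off" envelopes in claims (2) and (3) are simple deterministic schedules: within an "on" interval the input set receives the stimulus every τ0 time steps, and in the complementary "off" interval it receives nothing. A minimal sketch in Python; the function name and the convention that each "on" interval begins with a stimulated step are assumptions, since the report fixes neither:

```python
def external_stimulus(t, tau0, t_on, s0):
    """Stimulus delivered to the input set at time step t.

    The envelope alternates "on" and "off" intervals, each t_on steps
    long; within an "on" interval the value s0 (in synapse-value units)
    is applied every tau0 steps.  Phase convention is assumed.
    """
    if (t // t_on) % 2 == 1:   # odd-numbered intervals are "off" periods
        return 0
    return s0 if t % tau0 == 0 else 0
```

With τ0 = 6, tℓ = 100, and S0 = 10 (the values of Experiment 3, Section 7.3), this delivers the stimulus at t = 0, 6, ..., 96, stays silent over the "off" interval t = 100-199, and resumes at t = 204.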

7.2 EXPERIMENTAL OBJECTIVES AND PROCEDURES

7.2.1 Objectives

Claims (2) and (3) of Section 7.1 imply several subgoals.

(1) The network must remain stable under stimulation ("adequacy"), i.e., F(t) must not become zero and it must not oscillate violently (epilepsy). Once stability is obtained, this implies that the formation of one cycle C(Σ0) must not "rob" the network of so many neurons that C(Σ0*) can never be formed, etc.

(2) All the characteristics of cell-assemblies enumerated in Chapter 4, Sections 4.4 and 4.5, must be exhibited. In brief, these are:

(a) the presence of strong positive connections in the cycle C(Σ0) from each subset Σ_τ to its successor-set Σ_τ+1 (τ = 0, 1, ..., τ0);

(b) the similar condition for the alternating cycles C(Σ0) and C(Σ0*), including the presence of cross-inhibitory connections between subsets of each cycle;

(c) the "learning" of response to the particular stimulus at Σ0 (Σ0 and Σ0*), leading to input-independent self-reexcitation for an interval of time (alternation of activity, input-independent for a brief period of time);

(d) path and cycle formation are assisted by the recruitment of available neurons;

(e) cycle formation may be followed by fractionation, i.e., the dropping out of the cycle of relatively inactive (or ineffectual) neurons. This is especially important since it returns certain neurons to the reservoir of recruitable neurons.
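Subgoal (1) can be read as a predicate on the record of firing counts F(t). The sketch below is one way to state it; the function name and the numerical threshold for "violent" oscillation (three times the mean) are assumptions, since the report gives no figure:

```python
def passes_stability(F, f_min=1, f_max=None):
    """Adequacy check on a record F of firing counts F(t).

    Fails if activity dies out (any F(t) below f_min) or if any F(t)
    exceeds f_max, an assumed stand-in for the epilepsy criterion.
    """
    if f_max is None:
        f_max = 3 * sum(F) / len(F)   # assumed: 3x the mean is "violent"
    return all(f_min <= f <= f_max for f in F)
```

Under this reading, the underdamped run of Figure 7.6, where F(t) reaches zero twice in an "off" period, fails the check on the f_min side.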

Figure 7.1. (a) Typical stimulus envelope for single periodic stimulus experiments. (b) Firing patterns for six consecutive time steps for Experiment 9 (Section 6.3) of Figure 6.28. [The diagram in (a) shows the external stimulus applied to Σ0 every τ0 time steps within the "on" intervals; in the complements of these intervals ("off" periods), no external stimulus is applied to Σ0. A run-in period without external stimulus precedes the first "on" interval, and the intervals alternate at t0, t0 + tℓ, t0 + 2tℓ, t0 + 3tℓ, ..., t0 + 2ntℓ, t0 + (2n+1)tℓ.] Note: The symbol Σ0 has been used both to indicate the set that is used to "start" or "prime" a network at t = 0, no other external stimulus being applied (as in Chapter 6), and the subset of the network that is subjected to periodic external stimulus (as in Chapter 4, Section 4.4, and the current chapter). No confusion should arise out of this, since in Chapter 7 Σ0 will always refer exclusively to the input subset (receiving the external stimulus every τ0 time steps in the "on" periods) of the network.

Figure 7.1 (continued). (b) For an interpretation of these firing patterns, see Chapter 6, Figs. 6.15(a) and 6.17. [Coded firing displays for t = 367-372, with output counts F(t) = 23, 21, 23, 28, 24, and 21 respectively. The character-matrix detail is not reproduced here.]

7.2.2 Experimental Procedure

The experimental procedure of this chapter begins with successful completion of Phase III (Chapter 6) for a particular network. There are four over-all phases to this procedure.

Phase IV. Single Periodic Stimulus - Path Formation Tests

Given a network that has passed Phase III of Chapter 6, first a selection of the following is made:

1. The input set Σ0.
2. The stimulus period τ0, r_a ≤ τ0 ≤ r_q.
3. The external stimulus value S0 (added to the incoming stimulus for each neuron of Σ0; see Chapter 4, Section 4.4.1).
4. The length tℓ of the "on" and "off" intervals.

Next, the network is subjected to the stimulus for a number t0 of time steps (see Figure 7.1(a)). Typically t0 ranged from 200 to several thousand time steps. At each time step, the firing pattern of the network was obtained. The firing pattern at time t shows precisely, in coded form, which neurons of the network fired at time t, thus forming a display of the network's activity at each time step. The firing patterns for Experiment 9, Chapter 6, Section 6.3, for a few consecutive time steps are shown in Figure 7.1(b). From a close study of the firing patterns within the "on" interval of stimulation, it can be determined whether or not overlapping paths P(Σ0 → Σ_τ), τ < τ0, are being formed. This is accomplished by scanning for overlaps of the subsets Σ_τ^s for successive values of τ throughout the "on" interval.¹

¹Recalling the notation of Section 4.4.2: the set Σ_τ, firing τ time steps after stimulation, may be decomposed into a steady-state component Σ_τ' and a component Σ_τ^s arising directly from stimulation, Σ_τ = Σ_τ' ∪ Σ_τ^s, this decomposition holding true only for a few time steps after stimulation. It is the latter component that is meant here.

One would not expect to find paths

P(Σ0 → Σ_τ) closing back on themselves after a relatively short training period, although this might occur. If overlapping paths P(Σ0 → Σ_τ), τ < τ0, are detected, the network is cleared for Phase V. If overlapping paths are not detected, then two possibilities arise:

(a) Further analysis indicates that the network is not adequate: such paths will never form.

(b) Analysis suggests that a variation of one or several of the parameters Σ0, τ0, S0, or tℓ might produce overlapping paths.

In case (a), the network is abandoned, and return is made to Phases I-III in an attempt to produce an adequate network. In case (b), the appropriate changes are made and the whole experiment is repeated.

Phase V. Single Periodic Stimulus - Cycle Formation Tests

Given a network that has passed Phase IV, i.e., exhibits overlapping paths, the experiment is continued for a large number of time steps (> 1000). Periodically the firing patterns are examined for closure of the paths P(Σ0 → Σ_τ) into a cycle C(Σ0). If closure occurred, the network is passed on to Phases VI and, possibly, VII. If closure did not occur, again several possibilities arise:

(a) Analysis suggests that closure may never occur.

(b) Analysis suggests that closure may occur by variation of one of the parameters 1-4 of Phase IV.

In case (a), the network is abandoned. In (b), Phase IV is repeated using the modified parameter values, etc.

Phase VI. Control Experiments

This is really an intermediate phase that may be performed during or after Phases IV and V. The object is to determine whether or not prolonged stimulation of the network produced anomalies that would destroy the basic stability of the network. For example, extreme to be sure,

but nonetheless possible, cycle formation might recruit all the neurons of the network. Once the stimulus were turned off, F(t) would tend to zero, since the fund of steady-state neurons would be exhausted and the fatigue mechanism would damp out circulating pulses in C(Σ0). Another type of anomaly that actually occurred, resulting in a prolonged series of control experiments, will be discussed in Section 7.5.3 below. The basic procedure in the control experiments was to turn off stimulation at a certain point and allow the network to operate without inputs for an interval of time (100 to several thousand time steps). If it did not return to stable, steady-state behavior, an analysis was undertaken to determine the cause. The one example in which this occurred involved modification of the fatigue mechanisms φ(ℓ), Δ1(ℓ), and Δ2(ℓ) (Section 7.5.3).

Phase VII. Alternating Periodic Stimuli - Alternation Tests

Once a network displayed a single cycle C(Σ0), another input subset Σ0*, stimulus period τ0*, stimulus S0*, and intervals tℓ* were selected. Stimulation proceeded as indicated in Figure 7.2. Tests for overlapping paths, then overlapping cycles, were conducted. Appropriate control experiments were performed. As mentioned earlier, this work was terminated before a cycle C(Σ0*) had definitely been formed. However, excellent overlapping paths were obtained with the desired relationships between subsets of C(Σ0) and those of the evolving C(Σ0*).

7.2.3 Hypotheses

The specific hypotheses being tested in this chapter are:

(Closed-Cycle Formation Hypothesis) Given a network obtained from

Figure 7.2. Typical Stimulus Envelope for Alternating Periodic Stimulus Experiments. Stimulus A is applied to Σ0 within its envelopes every τ0 time steps ("on" periods of stimulus A); stimulus B is then off. Stimulus B is applied to Σ0* within its envelopes every τ0* time steps ("on" periods of stimulus B); stimulus A is then off. A run-in period at background level and a training sequence using stimulus A precede the alternation. Note: Here it is assumed that a C(Σ0) has already been formed in an earlier training sequence (t = 0 to t0 - 1).

Phase III displaying very stable behavior, then for appropriate choices of Σ0, τ0, S0, and tℓ it is possible to produce a cycle C(Σ0) in the network as a result of periodic stimulation of Σ0 in a sequence of "on-off" intervals.

(Alternating Closed-Cycle Formation Hypothesis) Given a network in which a closed cycle C(Σ0) has been obtained (from Phase V), then for appropriate Σ0*, τ0*, S0*, and tℓ* a new cycle C(Σ0*) will emerge after prolonged alternating periodic stimulation of Σ0 and Σ0*; C(Σ0) will send inhibitory connections to C(Σ0*), and conversely.

7.3 NETWORKS WITH UNIFORM RANDOM DISTRIBUTIONS OF CONNECTIONS

As mentioned in the Introduction, the results obtained using networks with uniform random distributions of connections were disappointing. Consequently, in the sequel, only experiments representing typical results are displayed. The many varied attempts to produce success (overlapping paths and cycles) are omitted from description, since they were all futile, a "barking up the wrong tree" situation.

7.3.1 Series I - Networks with Positive Connections Only

Experiment 1 - Fatigue Inoperative

The network of Figure 6.4, Variant 2, of Basic Experiments II (Chapter 6, Section 6.2.1) was taken as the basis for this experiment. The behavior of that network was "stable", albeit rigidly periodic. |Σ0| = 10, τ0 = 6, S0 = 7 (synapse-values), and tℓ = ∞ (the "on" envelope only present). The stimulus was started at t = 0 with F(0) = 25. The network was run for 602 time steps. As in the Basic Experiment, φ(ℓ) ≡ 0 (additive fatigue inoperative). The results of this experiment are summarized in Figure 7.3. The behavior remained stable, although the oscillations are more pronounced than in the steady-state experiment. The EEG's for both these experiments

Figure 7.3. Experiment 1 (Section 7.3) - Fatigue Inoperative. (a) EEG for time steps 1-204. Notice that the behavior is rigidly periodic with period τ0 × r_q = 6 × 17 = 102 time steps. The network is that of Fig. 6.4, Variant 2, of Chapter 6. (b) The EEG for the same time interval for the experiment of Fig. 6.4, Variant 2, is given immediately below. [In (a), the primer set and input set are stimulated at t = 1, F(1) = 25; the input set Σ0 of 20 neurons is stimulated every τ0 = 6 time steps (marked on the plot), with S0 = 7 in units of synapse-values. The shaded pattern of length τ0 × r_q = 6 × 17 = 102 repeats from t = 211 on; a periodicity of r_q = 17 alone is not present. In (b), the initial set fired has 14 neurons, and the pattern repeats from t = 52 on.]

Figure 7.4. Firing Patterns for Experiment 1 (Section 7.3), t = 451-462. The firing patterns for two consecutive stimulus periods are given below. Except for some neurons of Σ0 (= neurons 391-400), there is no overlap whatsoever between the set of neurons firing at t and those firing at t + τ0 = t + 6. [Coded firing displays for t = 451-462, with Σ0 stimulated at t = 451 and t = 457; output counts F(t) = 30, 31, 15, 14, 14, 20, 34, 29, 23, 16, 13, and 8 respectively. The character-matrix detail is not reproduced here.]
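The overlap check illustrated by displays such as those of Figure 7.4, intersecting the sets of neurons firing τ steps after successive stimulations, can be sketched as follows. The data layout (firing patterns as sets of neuron numbers keyed by time step) and the function name are assumptions:

```python
def path_candidates(firing, stim_times, tau0):
    """Neurons firing tau steps after every stimulation, for each tau < tau0.

    firing:     dict mapping time step t -> set of neurons firing at t
    stim_times: time steps at which the input set received the stimulus
    Nonempty intersections are candidates for a path from the input set;
    membership is confirmed only by checking the connection-matrix.
    """
    candidates = {}
    for tau in range(1, tau0):
        common = None
        for t in stim_times:
            fired = firing.get(t + tau, set())
            common = fired if common is None else common & fired
        if common:
            candidates[tau] = common
    return candidates
```

An empty result, as in Experiment 1, means no overlapping paths; a nonempty one, as in Experiment 3, still requires the connection-matrix check described with Figure 7.7.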

are given for comparison in Figure 7.3. No overlapping paths developed, as may be seen in Figure 7.4, where the firing patterns for twelve consecutive time steps starting at t = 451 are shown. Although the experiment was not successful, there are several interesting sidelights worthy of mention. First, the effect of the stimulus seems to have been much like that of an AM radio signal upon the carrier wave: the steady-state pattern still predominates, but it has been "modulated" by the signal (the periodic stimulus). Notice, however, that the rigid periodicity of the steady-state behavior has been destroyed. This leads to the second observation: the behavior repeats itself after τ0 × r_q = 6 × 17 = 102 time steps, F(t+102) = F(t), Σ_(t+102) = Σ_t. This, no doubt, is a combinatoric consequence of the informal argument given in Chapter 6 explaining the origin of the periodicity occurring there. It is as though there were six additional ways each sequence F(t), ..., F(t+16) might occur, giving 6 × 17 total combinations before repetition of a given sequence F(t), ..., F(t+16) can occur. Whatever the cause, this result once again implies zero information: recruitment is not possible.

Experiment 2 - Fatigue Present

This experiment is basically a repetition of the preceding one, except that the fatigue mechanism is now operative. The function φ(ℓ) and the tables Δ1(ℓ), Δ2(ℓ) used are given in Figure 7.5, together with the EEG. The run was terminated at t = 598. At t = 0, neurons were distributed randomly and uniformly over fatigue states (as well as over recovery states). S0 was increased to 10 synapse-values to compensate for this.
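The observed repetition F(t+102) = F(t) is easy to verify mechanically: discard the initial transient and search for the smallest shift p under which the remaining record is invariant. A sketch; the function name and the analyst-supplied transient length are assumptions:

```python
def tail_period(F, transient):
    """Smallest p with F[t + p] == F[t] for all t past the transient.

    For Experiments 1 and 2 the expected answer is
    tau0 * r_q = 6 * 17 = 102.
    """
    tail = F[transient:]
    for p in range(1, len(tail) // 2 + 1):
        if all(tail[i] == tail[i + p] for i in range(len(tail) - p)):
            return p
    return None
```

In Experiment 1 the period appears almost immediately; in Experiment 2 the transient lasts several hundred time steps before the same period 102 emerges.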

Figure 7.5. Experiment 2 (Section 7.3) - Fatigue Present. This is essentially a repetition of Experiment 1 (Fig. 7.4) with the additive fatigue mechanism present. The curve φ(ℓ) and the tables Δ1(ℓ), Δ2(ℓ) used are given below, together with a segment of the EEG. Again F(t) became periodic with period τ0 × r_q = 6 × 17 = 102 (after an initial transient period of several hundred time steps). Overlapping paths were not present. [The plotted φ(ℓ) increases in unit steps of synapse-value; the connecting lines in the plot are merely guides to the eye, since φ(ℓ) is not a continuous function but one with "jumps", e.g., φ(38) = 28, and for 37 < ℓ ≤ 38, φ(ℓ) = 28, etc. The tabulated Δ1(ℓ) and Δ2(ℓ) values are not reproduced here.]

Figure 7.5 (continued). [EEG segments: for t = 1-100, no rigid periodicity is yet apparent (input times indicated by arrowheads). For t = 400-502, the pattern from t = 400 to 502 is repeated from 502 on, i.e., F(t) has become periodic with period 102 time steps. Notice that it appears as though F(t) were "trying" to be periodic with period r_q = 17, but perturbation by the input is just sufficient to destroy the exact periodicity.]

No overlapping paths were present. As before, F(t) repeated itself after τ0 × r_q = 6 × 17 = 102 time steps. It took several hundred time steps before this occurred, however. A number of variations (with and without fatigue) of the above experiments were carried out, varying Σ0, S0, and τ0. In no case did overlapping paths occur. Therefore, negative feedback was introduced into the networks, Phases I-III conducted for the new networks, and the experiments of the next section carried out.

7.3.2 Series II - Networks with Negative Feedback

Experiment 3 - Fatigue Inoperative

The basic network used was that of Figure 6.9, Variant 3, with the threshold curve of Figure 7.6. Notice that this curve has a dip at r = 5, 6, and 7. N = 400, ρ = 12, with the λ-distribution p_λ of Fig. 6.9; |Σ0| = 20, F(0) = 37, τ0 = 6, S0 = 10, tℓ = 100. The first "on" period started at t = 0. The results are summarized in Figure 7.6. Figure 7.7 displays firing patterns from t = 419 through t = 431. It can be seen that some overlapping paths appear to be present. Several experiments similar to Experiment 3 were conducted. All involved threshold curves with dips. All yielded some overlapping paths, but no cycles. These experiments likewise were abandoned in favor of networks with distance-bias.

7.3.3 Conclusions

In general, the experiments of this section resulted in failure: no overlapping paths or cycles. An interesting periodicity of length r_q × τ0 was observed in the positive-connections-only case. Success of

Figure 7.6. Experiment 3 (Section 7.3) - Fatigue Inoperative. This is the same network as that of Fig. 6.9, Variant 3. The "primer" subset consisted of thirty-seven neurons of the network; |Σ0| = 20 (size of the input subset). The first "on" period commenced at t = 1. τ0 = 6, tℓ = 100, S0 = 10 (synapse-value units). [EEG plots for t = 300-400 and t = 400-500.] Note on this EEG: The interval t = 300-400 is an "off" period for the stimulus, t = 400-500 an "on" period. Clearly, F(t) is underdamped, going to zero at t = 344 and t = 396 (marked by an asterisk) in the "off" period. At these two time steps, the network was revived by stimulating the "primer" set. This is the only time in this work that a "revival" artifact was used, since (according to the theory of Chapter 4) such behavior indicates a serious imbalance in the network parameters.

Figure 7.7. Overlapping Paths in Experiment 3 (Section 7.3). Interpretation: The firing patterns for t = 419-431 are given below. The neurons of Σ0 are enclosed within square brackets (neurons 1-20 of the network). Σ0 was stimulated at t = 419 and t = 425. Neurons that fired at t and then again at t + τ0 = t + 6 are shown encircled (excluding Σ0). These neurons are candidates for a path (in the sense of Chapter 4) from Σ0. They will be in such a path if the connections from Σ0 to Σ1, Σ1 to Σ2, etc., actually exist, since it is just possible that these connections do not exist and the firing at t + τ0 is a coincidence. Since the results of Experiment 3 in general are so poor, the matter was not pursued further, and attention was directed towards networks with distance-bias. In the future, however, when overlapping paths are displayed as below, it may be assumed that the connection-matrix of the network was examined and the appropriate connections from each subset Σ_τ to its successor-set Σ_τ+1 actually exist. [Coded firing displays for t = 419-431, with Σ0 (neurons 1-20) stimulated at t = 419, 425, and 431; output counts F(t) = 37, 47, 59, 59, 38, 27, 34, 25, 19, 15, 11, 10, and 25 respectively. The character-matrix detail is not reproduced here.]

...a sort was obtained by using threshold curves with dips together with negative feedback. However, the threshold curves so used appear too ad hoc for general use: while a given curve V(r) might work for τ0 = 6, say, the same curve would probably fail for τ0 = 10. This is certainly an undesirable facet of V(r)'s with dips. Unless some mechanism is introduced to "scale" V(r) for use with an arbitrary admissible τ0 (and such a mechanism is difficult to envisage), the use of these curves is to be discouraged.

Further effort might have been expended on networks with uniform random distributions of connections. Admittedly, the experiments of this section were in reality familiarization runs; the experimenter was learning about stimulating networks. Consequently the experiments were crude and often unrealistic. Possibly raising the network density ρ and carefully selecting the λ-distribution, the set Σ0, etc., would have yielded the desired results without the ad hoc dipped-threshold-curve mechanism. However, it appeared that networks with distance-bias (and negative feedback) offered the same results far more readily. This is due to the "localization" effect found in such networks, which prevents the effects of an external stimulus from dispersing too rapidly over the entire network. This idea seemed so attractive that the current networks were abandoned altogether, and from this point on networks with distance-bias alone were considered.

7.4 NETWORKS WITH DISTANCE-BIAS

In the sequel only networks with negative feedback are considered. All stimulation experiments begin after completion of Phase III, usually at t = 200 or t = 400. First, experiments were conducted with fatigue

Figure 7.8. Experiment 1 (Section 7.4) - Fatigue Inoperative. The network of Fig. 6.18 was used for this experiment. Σ0 consisted of neurons 1-20, i.e., the bottom row of the net (considered as a 20x20 grid). Through an oversight, stimulation began at t = 0 instead of after a run-in period of several hundred time steps. The resulting EEG is shown below for selected time intervals.

[EEG plots of F(t) (scale 0-70): t = 0-100 with the stimulus on, and t = 150-200, the last half of the first "off" period, at the end of which the run was terminated.]
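The "localization" effect that motivated the move to distance-bias is visible directly in the wiring rule: every connection a neuron sends out terminates within a fixed neighborhood of that neuron, so the influence of a stimulus can spread at most R cells per time step. The sketch below illustrates such wiring on the 20x20 grid used in these experiments; the Chebyshev-distance neighborhood, the fixed fan-out, and the names N_SIDE and FAN_OUT are illustrative assumptions, not the simulator's own scheme.

```python
import random

N_SIDE = 20   # 20x20 grid, N = 400 neurons, as in the experiments of this chapter
R = 4         # neighborhood radius, cf. the rho = 24.4 networks
FAN_OUT = 24  # connections per neuron (hypothetical; stands in for the density rho)

def coords(n):
    """Map a neuron index 0..399 to (row, col) on the 20x20 grid."""
    return divmod(n, N_SIDE)

def neighborhood(n, radius):
    """All neurons within Chebyshev distance `radius` of neuron n (grid edges clipped)."""
    r, c = coords(n)
    return [rr * N_SIDE + cc
            for rr in range(max(0, r - radius), min(N_SIDE, r + radius + 1))
            for cc in range(max(0, c - radius), min(N_SIDE, c + radius + 1))
            if (rr, cc) != (r, c)]

def chebyshev(a, b):
    (ra, ca), (rb, cb) = coords(a), coords(b)
    return max(abs(ra - rb), abs(ca - cb))

random.seed(1)
# Distance-biased wiring: every connection target lies inside the sender's
# neighborhood, so activity injected at one spot spreads at most R cells per step.
connections = {n: random.sample(neighborhood(n, R), FAN_OUT)
               for n in range(N_SIDE * N_SIDE)}
```

Even a corner neuron has (R + 1)^2 - 1 = 24 neighbors here, so the fixed fan-out is always satisfiable; a uniform random wiring, by contrast, would let a single stimulated spot project over the whole net in one step.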

Figure 7.9. Experiment 2 (Section 7.4) - Fatigue Inoperative. This experiment continues Experiment 1, Section 6.3 (see Fig. 6.18) from t = 201 to t = 482; tz = 100, τ0 = 6, S0 = 8.3 (units of synapse-values). The "on" periods were 200-300, 400-500, ..., and the "off" periods 300-400, 500-600, ..., respectively. Σ0 is shown diagrammatically below. Since Σ0 forms a rather compact set, stimulation probably produced a refractory zone around and including Σ0.

N = 400; R = 4 (neighborhood radius); ρ = 24.4 = Σ ps, with p0 = p-1 = p-2 = ρ/4 and p2 = p1 = ρ/8; V(r), Φ(ℓ), U(λ), D(λ) as in Basic Experiment III.

Σ0 = {168-172, 188-192, 208-212, 228-232} (shaded on the diagram)

[Diagram of the 20x20 grid (neurons 1-400) with Σ0 shaded.]

Figure 7.9 (continued). EEG for t = 201-400.

Stimulus on (t = 201-300): notice that a stimulus "spike" is usually followed by a large drop in F(t). This suggests that the stimulus set Σ0 really did not control any successor sets; Σ0 was stimulated, but itself caused few, if any, neurons to fire on the succeeding time step. The very nature of this Σ0 and the negative feedback mechanism (see Chapter 4) imply this type of behavior. Firing patterns are given for these time steps.

Stimulus off (t = 301-400): it is interesting to compare this behavior with the steady-state behavior for t = 1-200 in Fig. 6.18. F(t) here is considerably less violent in its oscillations; apparently the effect of the stimulation has been to "smooth out" the input-free behavior.

[EEG plots of F(t) (scale 0-60) for t = 200-300 and t = 300-400.]
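The stimulation regime of this and the following experiments (an "on-off" envelope of half-period tz, with the stimulus S0 re-delivered to Σ0 every τ0 steps inside an "on" period) can be sketched as follows. This is a reconstruction for illustration only: the function names are invented, and the convention that the first "on" period begins at t = 200 follows this particular experiment.

```python
def stimulus_on(t, t_start=200, t_z=100):
    """True during 'on' periods 200-299, 400-499, ...; False before t_start and in 'off' periods."""
    if t < t_start:
        return False  # run-in (Phase III): no external input
    return ((t - t_start) // t_z) % 2 == 0

def stimulus_delivered(t, tau0=6, t_start=200, t_z=100):
    """Within an 'on' period, the stimulus is re-applied to Sigma_0 once every tau0 steps."""
    return stimulus_on(t, t_start, t_z) and (t - t_start) % tau0 == 0

# Time steps in 195..315 at which Sigma_0 actually receives the stimulus:
delivery_times = [t for t in range(195, 316) if stimulus_delivered(t)]
```

With tz = 100 and τ0 = 6, the list contains 200, 206, ..., 296 and nothing from the "off" period 300-399, matching the spiked "on" segments and quiet "off" segments of the EEGs.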

Figure 7.9 (continued). Firing patterns for t = 279-286. Neurons of Σ0 are enclosed within brackets. Notice there is no overlap whatsoever between the sets of neurons firing at t = 280 and t = 286: Σ0 controls no neurons directly.

[Firing-pattern printout for t = 279-286, stimulated neuron groups 168-172, 188-192, 208-212, 228-232 (stimulus delivered before t = 279 and before t = 285); individual firing displays omitted.]

inoperative. Next, some of these experiments were repeated and some new experiments performed with fatigue present. The input area Σ0 plays a crucial role in these experiments; the selection of Σ0 is almost as important here as was the selection of V(r) in Sections 6.2 and 6.3 of Chapter 6. Overlapping paths were readily obtained using "normal" (monotone decreasing) threshold curves. In one case, overlapping cycles were formed. On this basis, a "very" stable network was selected for a thorough test of cycle and alternating-cycle formation. This experiment is described in Section 7.5.

7.4.1 Networks with Fatigue Inoperative

Experiment 1: The network of Experiment 1, Section 6.3.3 of Chapter 6, was used. Through an oversight, stimulation began at t = 0 (before the run-in of Phase III). Σ0 was taken as an edge of the iterated square (Figure 7.8); τ0 = 6, tz = 200, S0 = 1.7, Φ(ℓ) ≡ 0. The results are summarized in Figure 7.8. F(t) became zero at t = 245 in the second "on" period, large swells having developed around t = 200. It was not possible to revive the network with further application of the stimulus. Recall that Experiment 1 (Section 6.3.3) appeared to be underdamped. This run is of interest since it shows the need for selecting Σ0 and S0 carefully.

Experiment 2: The network of Experiment 1, Section 6.3.3, at t = 200 (i.e., after running input-free from t = 0 to t = 199) was used. Σ0 was chosen as a 4 x 5 subgrid of the net (see Figure 7.9); τ0 = 6, tz = 100, S0 = 8.3 synapse-values. The stimulus was turned on at t = 200 (first

Figure 7.10. Experiment 3 (Section 7.4) - Fatigue Inoperative. This experiment is a continuation of Experiment 4 (Section 6.3.3) of Fig. 6.21 from t = 200 on. Σ0 consisted of an 8x8 grid (shown diagrammatically below). Consequently, as is evident from the sampled EEG below, many of the comments on Experiment 2 of Fig. 7.9 apply here as well: in a word, Σ0 is too compact.

N = 400; R = 4; ρ = 24.4 = Σ ps, where p0 = p-1 = p-2 = ρ/4 and p2 = p1 = ρ/8; tz = 100, τ0 = 6, S0 = 8.3.

Σ0 = {127-134, 147-154, 167-174, 187-194, 207-214, 227-234, 247-254, 267-274}

[Diagram of the 20x20 grid with Σ0 shaded.]

[EEG of F(t) for t = 350-440, spanning the end of an "on" period and the last 50 time steps of an "off" period.] Once again, as in Fig. 7.9, the sharp drops in F(t) after the stimulus spikes suggest little control of Σ0 over successor neurons.

Figure 7.10 (continued). Although overlapping paths were not expected in this experiment, there were some isolated examples, as the following sample of the firing patterns shows. The sample is from time steps t = 425-434 (see EEG). Neurons overlapping at t and t + τ0 are encircled; neurons of Σ0 are bracketed.

[Firing-pattern printout for t = 425-434, stimulated neuron groups 127-134, 147-154, 167-174, 187-194, 207-214, 227-234, 247-254, 267-274 (stimulus delivered before t = 425 and before t = 431); outputs 83, 26, 19, 24, 17, 11, 70, 18, 25, 16 at successive time steps; individual firing displays omitted.]
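The "overlapping paths" reported in these experiments were found by comparing the set of neurons firing at time t with the set firing at t + τ0 and noting any common members. That bookkeeping amounts to one set intersection per time step; the sketch below uses an invented toy firing record, not output from the actual runs.

```python
def overlaps(firing_record, tau0):
    """Map each time step t to the neurons firing at both t and t + tau0."""
    return {t: firing_record[t] & firing_record[t + tau0]
            for t in range(len(firing_record) - tau0)}

# Toy firing record: one set of active neuron indices per time step.
record = [{1, 5, 9}, {2, 6}, {3, 7}, {1, 4, 9}, {2, 8}, {3, 5}, {1, 9}]
found = overlaps(record, tau0=3)
# found[0] == {1, 9}: neurons 1 and 9 recur exactly one stimulus period later.
```

Persistent, nonempty intersections at lag τ0 are the signature of a path being recruited; a closed cycle C(Σ0) shows them at every stimulated step.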

Figure 7.11. Experiment 4 (Section 7.4) - Fatigue Inoperative.

(a) Identical to Experiment 3 (Fig. 7.10), except that Σ0 is spatially distributed as shown below. (b) Identical to (a), except that τ0 = 7.

N = 400, R = 4, etc. Overlapping paths were present, albeit very scantily so. The input set is too diffuse over the net to consistently recruit neurons for successive successor sets Σ1, Σ2, etc. (i.e., to build a path Σ0 → Σ1 → Σ2 → ...). This case, therefore, represents the opposite extreme to that of the preceding two experiments, in which Σ0 was too compact.

The neurons of Σ0 lie on the rows consisting of neurons 81-100, 141-160, 201-220, 261-280, and 321-340, respectively, of the net; these neurons are encircled on the diagram.

[Diagram of the 20x20 grid with the neurons of Σ0 encircled.]

(a) [EEG of F(t) (scale 0-80) for t = 250-350, the last half of an "on" period and the first half of an "off" period.] Notice that the spikes are not as pronounced as in the preceding two experiments. Input-free behavior is bounded by 7 ≤ F(t) ≤ 39, becoming gradually smoother after t = 350 up to the next "on" period.

Figure 7.11 (continued). (b) EEG for the same period as in (a), but with τ0 = 7. The results were similar to those of (a): the spikes are slightly less pronounced than in (a), and the behavior of F(t) in all "off" periods was relatively smooth, i.e., no violent or unusually large oscillations.

[EEG of F(t) (scale 0-70) for t = 250-350.]

"on" period), off at t = 300, and so on. The run was terminated at t = 482. Samples from the EEG and firing patterns are given in Figure 7.9. No overlapping paths were detected in this experiment. It appeared that the input area created a refractory zone which did not allow direct successor neurons of Σ0 to fire after stimulation (the danger of this occurring was noted in Chapter 4). The input area was therefore enlarged for the next experiment.

Experiment 3: The network of Experiment 4, Section 6.3.3, at t = 200 was used. Σ0 was taken as an 8 x 8 subgrid of the net; τ0 = 6, S0 = 8.3, tz = 100. The run was terminated at t = 442. The results are summarized in Figure 7.10. As is shown in the firing-pattern samples, some overlapping paths were present. Nonetheless, the input area seems too large and too dense for effective path formation. This led to the spatially distributed input set of the next experiment.

Experiment 4: Again the network of Experiment 4, Section 6.3.3, at t = 200 was taken as the starting point. Everything is as in Experiment 3, except that Σ0 is "spatially distributed", i.e., fixed neurons are chosen for Σ0, but they are not clustered tightly about each other as in the preceding experiments (see Figure 7.11(a)). The experiment was terminated at t = 444, in the second "off" period. Overlapping paths were detected. The results are summarized in Figure 7.11(a).

Experiment 5: This was a repetition of Experiment 4 above with τ0 = 7. The results are summarized in Figure 7.11(b). Again, overlapping paths were present.

Résumé of Experiments 1-5 (7.4.1)

The above experiments clearly show that, using "normal" (monotone decreasing from Vmax to Vq) threshold curves V(r), overlapping paths may

Figure 7.12. Experiment 6 (Section 7.4) - Fatigue Operative. The network of Fig. 6.27, Experiment 8 (Section 6.3), at t = 400 was used. V(r) is that of Fig. 6.21, Φ(ℓ) that of Fig. 6.23, A1(λ) and A2(λ) those of Fig. 6.26. N = 400, R = 6, ρ = 54.9 = Σ ps, with p0 = p-1 = p-2 = ρ/4 and p2 = p1 = ρ/8. Σ0 as in the preceding two experiments; τ0 = 7, S0 = 8.3, tz = 100. The EEG for a complete "on-off" cycle, t = 1101-1300, is given in (a); the firing patterns for t = 1215-1228, displaying overlapping cycles, are given in (b).

(a) [EEG of F(t) for t = 1101-1300.] "Off" period (stimulus off): F(t) bounded by 9 ≤ F(t) ≤ 39. "On" period: the stimulus was applied at the time steps marked by carets; firing patterns are given for the shaded interval. Notice that the spikes are not followed by the sudden drops seen in Fig. 7.10, suggesting that recruitment may be taking place. Also observe that the effect of fatigue is to decrease the response to the stimulus from the mid-point of the cycle on.

Figure 7.12 (continued). (b) Firing patterns for t = 1215-1228.

[Firing-pattern printout; the input area was stimulated before t = 1215 and before t = 1222 (τ0 = 7), with outputs declining between stimulations (58 at t = 1215 falling to 18 by t = 1221; 43 at t = 1222 falling to 24 by t = 1228); individual firing displays omitted.]

Figure 7.13. Input Set Σ0 of Experiment 7.

N = 400; R = 6 (hence the neighborhood CR is a 13 x 13 square); ρ = 54.9 = Σ ps, with p0 = p-1 = p-2 = ρ/4 and p2 = p1 = ρ/8.

Σ0 = {1, 3, 5, 41, 43, 81, 83, 85}

Σ0 can be covered by a single neighborhood. It is more compact than the Σ0 of the preceding experiments, but more diffuse than in the earlier experiments. The neurons are spaced sufficiently far apart from one another that hyperrecovered regions should not develop, yet sufficiently close that some neurons outside Σ0 will be directly controllable by Σ0 (again, in such a fashion as not to create a hyperrecovered "pocket" around Σ0).

[Diagram of the 20x20 grid showing Σ0.]

Figure 7.14. Experiment 7 (Section 7.4) - Fatigue Operative. The network of Fig. 6.28, Experiment 9 (Section 6.3), at t = 400 was used. V(r) is that of Fig. 6.28, Φ(ℓ) that of Fig. 6.23, A1(λ) and A2(λ) those of Fig. 6.26. N = 400, R = 6, ρ = 54.9 = Σ ps, with p0 = p-1 = p-2 = ρ/4 and p2 = p1 = ρ/8. Σ0 as in Fig. 7.13. The EEG for a complete "on-off" cycle is given in (a).

(a) [EEG of F(t): t = 1300-1400 with the stimulus "off", during which 14 ≤ F(t) ≤ 44, and t = 1400-1450 with the stimulus "on".]

Figure 7.14 (continued). (b) Fragment of an embryonic cycle. The neurons of Σ0, Σ1, Σ2, ... are given, together with the connections from a subset of each Σi to its successor set Σi+1. Notice that overlaps occur, but the connections persist only down through Σ3. This is a typical situation: prior to the "closing" of paths P(Σ0 → Στ) into cycles C(Σ0), such overlaps occur with the connections either absent, or present but with only a moderate synapse-value. Through gradual recruitment and reinforcement, the path eventually will close into a cycle. (A specimen of a cycle is given in Section 7.5.)

[Diagram of Σ0 and its successor sets Σ1 through Σ5, with member neurons, connections, and overlaps marked; synapse-values range from moderate through moderately large to very large. A typical phenomenon is shown: a segment comes about that may later be incorporated into earlier segments, eventually forming a cycle.]

be formed for several values of τ0, provided Σ0 and S0 are judiciously chosen. It now remains to introduce the fatigue mechanism as preparation for more realistic long-run experimentation.

7.4.2 Networks with Fatigue Operative

Experiment 6: The network of Experiment 8, Section 6.3.4, at t = 400 was used. Σ0 was spatially distributed as in Experiments 4 and 5 above; τ0 = 7, S0 = 8.3, tz = 100. The run was terminated at t = 1399. The results are summarized in Figure 7.12. Overlapping cycles appear to be present. This experiment was repeated for lighter stimuli S0, once for S0 = 2.75 and again for S0 = 6.25. As is to be expected, the lighter stimuli did not produce as strong results as the original value S0 = 8.3. However, in both cases some overlapping paths were present. The results of this experiment gave the first real hope that overlapping cycles C(Σ0) could be produced.

Experiment 7: The network of Experiment 9, Section 6.3.4, at t = 400 was used as the starting point for this experiment. Σ0 was chosen as shown in Figure 7.13. The neurons of Σ0 are spaced regularly over a 5 x 5 grid; τ0 = 7, S0 = 12.25, tz = 100. The network was run through t = 2799. Overlapping cycles were produced, as shown in Figure 7.14, which contains a summary of results for the run. This run would have been used for further detailed work; however, a minor error in the D(λ) and U(λ) table handling was uncovered. This error occurred only when a λji would reach λmax, and therefore it was not uncovered until after the lengthy run above. It did not invalidate the basic result: cycles will form in this network.
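The table-handling error described above is a classic boundary fault: a table-driven update applied at the edge of the table's domain. A minimal sketch of the required behavior, with the update clamped so the state can sit at its maximum without indexing past the table, is given below; the bound LAMBDA_MAX, the table contents, and the function names are illustrative assumptions, since the actual U(λ) and D(λ) tables are defined in Chapter 6.

```python
LAMBDA_MAX = 63  # illustrative upper bound on the state variable lambda

def step_lambda(lam, active, up_table, down_table):
    """Table-driven update of lambda, clamped to [0, LAMBDA_MAX] at both boundaries."""
    delta = up_table[lam] if active else -down_table[lam]
    return max(0, min(LAMBDA_MAX, lam + delta))

# At the boundary the update must saturate rather than run off the table:
up = [2] * (LAMBDA_MAX + 1)
down = [1] * (LAMBDA_MAX + 1)
assert step_lambda(LAMBDA_MAX, True, up, down) == LAMBDA_MAX
assert step_lambda(0, False, up, down) == 0
```

Because the faulty branch fires only when λji reaches λmax, a run can proceed for thousands of time steps before any synapse saturates and exposes the bug, exactly as happened here.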

Figure 7.15. Input Set Σ0 for a Variant of Experiment 7.

Σ0 = {1, 2, 3, 21, 22, 23, 41, 42, 43}

This Σ0 is very compact, thus giving rise to the objection that pockets of hyperrecovered neurons will develop, possibly leading eventually to undesirable (violently oscillatory) behavior. At the same time, however, Σ0 is sufficiently small with respect to N that this effect should be negligible.

[Diagram of the 20x20 grid showing Σ0 in one corner.]

A variant of Experiment 7 was performed using the "tighter" input area Σ0 of Figure 7.15. As is to be expected, the results were far weaker than those of the original experiment: overlapping paths formed, but not cycles. This demonstrates quite conclusively that Σ0 must be neither too diffuse nor too compact if the best results are to be achieved.

7.4.3 Conclusions

The experiments of this section show that networks exist in which closed cycles C(Σ0) may be produced as a consequence of a patterned stimulation of the subset Σ0 of the net.

7.5 CELL ASSEMBLY EXPERIMENTS

In this section, the course of a particular experiment is described in detail. First, the very stable network used as a basis for the experiment was selected (the network of Experiment 9, Section 6.3 of Chapter 6).* This was followed by a "training" period of approximately three thousand time steps, during which a periodic stimulus was applied in an "on-off" envelope to a subset Σ0. During this training period overlapping cycles came into being. However, towards the end of the period, undamped oscillations arose, resulting in that scourge of neural-network simulation, namely F(t) going to zero. Consequently, to determine and to eliminate the cause of these fatal oscillations, a detailed series of control experiments was conducted by returning to an earlier point in the experiment and proceeding from that point on with new parameter settings. The difficulty, essentially

*This represents an instance in which the "roll-back" or "back-up" feature mentioned earlier proved to be absolutely indispensable. The cost of this experiment was prohibitive to begin with: it would have been impossible to repeat the entire sequence anew, were this even desirable.
Since the entire state of the experiment was recorded periodically on magnetic tape, it was possible to return to an earlier point, adjust whichever parameter seemed pertinent, and continue on from there. Thus, the necessity of multiple (and exorbitantly costly) reruns of the entire experimental sequence was sidestepped.
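In modern terms, this roll-back feature amounts to periodic checkpointing of the full simulation state. The sketch below is a minimal illustration of the idea, not the original program (which recorded its state on magnetic tape); the class and method names are invented for this example.

```python
import copy

class Simulation:
    """Toy stand-in for the network simulation; only the bookkeeping
    relevant to checkpointing and roll-back is modelled."""

    def __init__(self, params):
        self.t = 0                  # current time step
        self.params = dict(params)  # adjustable parameters, e.g. the deltas
        self.checkpoints = {}       # in-memory analogue of the tape dumps

    def step(self):
        self.t += 1                 # the real simulation would update the network here

    def dump(self):
        # Record the entire state periodically, as the experiment did.
        self.checkpoints[self.t] = copy.deepcopy(self.params)

    def roll_back(self, t, **new_params):
        # Return to an earlier point, adjust whichever parameters seem
        # pertinent, and continue from there.
        self.params = copy.deepcopy(self.checkpoints[t])
        self.params.update(new_params)
        self.t = t

sim = Simulation({"delta1": 1 / 16, "delta2": 1.0})
for _ in range(100):
    sim.step()
    if sim.t % 25 == 0:
        sim.dump()

# Rerun from t = 50 with a modified fatigue increment:
sim.roll_back(50, delta1=1 / 4)
```

A single dump thus buys an arbitrary number of parameter variations from that point on, which is exactly how the control experiments of Section 7.5.3 were obtained.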

The difficulty, essentially an imbalance in the fatigue function, was soon discovered and rectified. Following analysis, the network was subjected to an alternating periodic stimulus sequence, one of the stimuli being applied to the Σ0 of the earlier phase of the experiment at the same rate as before, the other to another, disjoint subset Σ0* at a different rate. Unfortunately it was not possible to carry this sequence out to its conclusion: precious funds for computer time were exhausted. However, it did produce (1) a closed cycle C(Σ0) as a consequence of the earlier training portion of the experiment; (2) the beginnings of a closed cycle C(Σ0*) as a consequence of the last phase; (3) strong symptoms of the development of cross-inhibition between the cycle C(Σ0) and the embryonic cycle C(Σ0*). It is the author's contention that the results of these experiments are sufficiently strong to warrant more extensive investigation in the future using considerably larger and more complex networks than was possible here. This could be achieved via the superior computer facilities emerging today.

7.5.1 The Initial Steady-State Network

The most impressive of the steady-state experiments (using networks with distance-bias and negative feedback) was Experiment 9 of Chapter 6. For the network of that experiment, some of the pertinent parameters are: N = 400; R = 6 (hence each neuron can connect only to neurons within a distance of 6); the distance-biased connection probabilities p0, p1, ... as given in Figure 6.28, Chapter 6. Fatigue was operative.
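The distance restriction R = 6 (cf. Figure 7.16) can be illustrated with a short sketch. The 400 neurons are taken to lie on a 20 x 20 grid numbered 1-400 as in the figures of this chapter; the column-wise numbering and the use of Chebyshev distance are assumptions made for this illustration, not details taken from the original program.

```python
N, SIDE, R = 400, 20, 6   # network size, grid side, connection radius

def coords(i):
    # Map neuron number 1..400 onto grid coordinates (assumed layout).
    return (i - 1) % SIDE, (i - 1) // SIDE

def connectable(i, j):
    # A neuron may connect only to distinct neurons within distance R.
    (xi, yi), (xj, yj) = coords(i), coords(j)
    return i != j and max(abs(xi - xj), abs(yi - yj)) <= R

# All neurons that neuron 1 (a corner of the grid) could reach directly:
reachable_from_1 = [j for j in range(1, N + 1) if connectable(1, j)]
```

Under these assumptions a corner neuron reaches the 48 other neurons of a 7 x 7 block; this is the sense in which, in Figure 7.16(b), several neurons of a chain are not directly connectable to any neuron of Σ0.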

Figure 7.16. Example of a Closed Chain within a Cycle C(Σ0).

(a) Neurons of the respective successor-sets are given, together with the synapse-levels at t = 2000 (above the arrows) and t = 2900 (below the arrows). Notice that all synapse-levels have increased, that from Σ7 to Σ0 the least.

[Figure: the chain Σ0 → Σ1 → ... → Σ7 → Σ0, with the synapse-levels at the two sampling times written on the arrows.]

Note: Scaling of the synapse-levels. A synapse-level of 32 corresponds to synapse-value 0, 36 to +1, 40 to +2, etc.

(b) The chain given above is shown in the geometry of 𝒩. The neurons of Σ0 are encircled; neurons of the chain are marked with an "X". Recalling that R = 6, one notices that several of the neurons of the chain are not directly connectable to any neuron of Σ0.

[Figure: the 20 x 20 grid with the chain marked.]
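The scaling rule in the note of Figure 7.16 (level 32 corresponds to value 0, 36 to +1, 40 to +2) is linear, four level units per unit of synapse-value; a one-line helper makes it explicit. The function name is an invention of this sketch.

```python
def synapse_value(level):
    # Figure 7.16: a synapse-level of 32 corresponds to synapse-value 0,
    # 36 to +1, 40 to +2, i.e. four level units per unit of value.
    return (level - 32) / 4
```

On this scale the maximal synapse-value +8.0 mentioned in Figure 7.32 would correspond to a synapse-level of 64.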

This experiment was run from t = 1 to t = 400 and appeared to be very stable: Fmax = 36, Fmin = 8. Synapse-levels drifted, of course, from their initial settings, but the net drift, based on a sample of synapse-levels, appeared to be zero. The slight tendency towards underdamping, it was reasoned, should be suppressed by action of the fatigue mechanism, since all neurons were fully recovered with respect to fatigue initially.

7.5.2 Single Periodic Stimulus Series

This series is basically a repetition and continuation of Experiment 7 of Section 7.4 with the relatively minor error in the D(X) and U(X) tables corrected. The network of the preceding section at t = 400 was taken as the starting point. Just as in Experiment 7, Section 7.4, Σ0 consisted of nine neurons spaced over a 5 x 5 grid in 𝒩 (see Figure 7.13). T0 = 7 (the interval between successive stimuli), S0 = 12.5 (the constant external stimulus added every T0 time steps to the total input stimulus of each neuron of Σ0), and tz = 100 (the length of the "on-off" envelopes). The experiment was run through t = 3721 with the stimulus "on" (being applied every T0 time steps) in the intervals t = 401 - 500, 601 - 700, 801 - 900, ..., and "off" (no external stimulus) otherwise (t = 501 - 600, 701 - 800, 901 - 1000, ...).

As in Experiment 7, the results were quite good. By t = 2900, overlapping cycles had formed. A closed chain within one of these cycles is displayed in Figure 7.16. Included in the figure are the synapse-levels from each neuron of this chain to its successor, sampled at t = 2000 and t = 2900. Notice that they have all increased, as predicted by the theory.
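The stimulus regime just described (a run-in to t = 400, then alternating 100-step "on" and "off" envelopes, with S0 applied every T0 = 7 steps during an "on" envelope) can be sketched as a pure function of t. The function name, and the convention that pulses fall on steps 0, 7, 14, ... of each envelope, are assumptions of this sketch.

```python
T0, S0, TZ = 7, 12.5, 100   # pulse interval, pulse size, envelope length
T_RUN_IN = 400              # no stimulus during the initial run-in

def external_stimulus(t):
    """External stimulus added to each neuron of Sigma_0 at time step t."""
    if t <= T_RUN_IN:
        return 0.0
    phase = (t - T_RUN_IN - 1) % (2 * TZ)   # position within an on-off pair
    if phase < TZ and phase % T0 == 0:      # "on" half, every T0-th step
        return S0
    return 0.0
```

So t = 401 - 500 is "on", t = 501 - 600 is "off", and so forth, matching the intervals listed above.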

Figure 7.17. Selected EEG's for the Cell-Assembly Experiment (Section 7.5.2).

(a) t = 2950 - 3050. The interval t = 2950 - 3000 is the last half of an "off" period; t = 3001 - 3050 is the first half of an "on" period. The sequence is typical of all but the last several hundred time steps of the run. The slight F(t) underdamping referred to in the text is noticeable here. The abrupt dip at t = 3049 suggests that an excessive number of neurons have been over-fatigued and are not available for recruitment.

(b) t = 3620 - 3721, F(3721) = 0. The interval t = 3620 - 3699 is the last part of the last "on" period. The behavior around t = 3668 and over t = 3694 to 3721, large spikes being absent, suggests again that a fatally excessive number of neurons have been over-fatigued, not recovering sufficiently fast to sustain steady-state when the stimulus is turned off.

[Figure: F(t) traces for the two intervals.]

Σ0. It is also interesting to observe that the synapse-level increasing the least is in the last "link" of the chain. Typically the strongest links (greatest increase) occur in the first half of the cycles C(Σ0), the weakest in the last half, especially in the link from Σ7 to Σ0.

The behavior of the network in this experiment appeared to be quite normal up to approximately t = 3500. From then on, progressively more and more violent oscillations developed, resulting in F(t) going to zero at t = 3721 (an "off" period of the stimulus). EEG's for selected intervals of this experiment are given in Figure 7.17, contrasting the early behavior of the network with the later. Since all network parameters had been quite carefully chosen, this was a most unexpected result. Analysis revealed, however, that the most probable culprit was the fatigue mechanism. It appeared that large groups of neurons were getting "trapped" in lower fatigue states ℓ (higher fatigue values φ(ℓ)) and not recovering sufficiently fast to prevent pockets of hyper-refractory neurons from developing. Thus a hyper-refractory pocket develops; if F(t) does not go to zero, sooner or later the neurons of this pocket will recover, and then the entire group could be forced to fire by a relatively small number of neurons firing the preceding time step. Once again, as suggested in Chapter 4, violent and possibly fatal oscillations are possible.

Assuming the fatigue mechanism to be the source of the difficulty, the following questions then arise: (1) what defect in the fatigue mechanism caused this "trapping"? (2) how can this defect (if one is discovered) be corrected? (3) would it be possible to back up to an earlier point in the experiment, making the necessary changes to eliminate the defect, and continue the experiment?

In the next section, the assumption above is shown to be valid, and the defect is isolated and rectified. Most important of all, the answer to (3) is shown to be in the affirmative.

7.5.3 Analysis and Control Experiments

To show that the fatigue mechanism is primarily responsible for F(t) going to zero in the experiment of Section 7.5.2, it is first necessary to examine carefully all basic assumptions regarding fatigue made so far. Sufficient details of the relationship between φ(ℓ) (fatigue value), ℓ (fatigue level or state), and Δ1(ℓ) and Δ2(ℓ) have been given in Chapter 2 and need not be repeated here. What is lacking so far is a calculus for the fatigue function analogous to that of Chapter 4 for the threshold function. This calculus would relate the functional forms of φ(ℓ) and V(r) with the distributions of neurons over recovery states and of neurons over fatigue states, Φ(t), which preserve a steady-state behavior F(t) (E(F(t)) = Fb). Such a calculus was not developed here. Instead the following heuristic reasoning was used (which, until the last experiment, worked quite well).

Consider the distribution of the intervals rq,i between firings for all neurons i of 𝒩. In steady-state, one would expect these to be distributed approximately normally about the mean r̄q = N/Fb, with a variance determined empirically by the variations of F(t) from Fb. This would seem to imply the following general characteristics of Φ(t) (the distribution of neurons over fatigue levels): in steady-state, the neurons of 𝒩 tend to cluster normally about a mean level ℓ̄ with a variance again determined by the variation of F(t) from Fb. Periodic stimulation of a subset Σ0 ⊂ 𝒩 would tend to make this distribution bimodal: the neurons of Σ0 and those affected by Σ0 go into lower fatigue states (higher fatigue values), clustering about a level ℓ″ (determined by the stimulus rate and the envelope length tz). After cessation of the stimulus, the distribution should gradually "drift" back to its normal unimodal state. The rate of "drift" back to ℓ̄ and the rate of growth of the second mode ℓ″ are determined by Δ1(ℓ) and Δ2(ℓ) respectively.* The choice of φ(ℓ) (lacking our fatigue calculus) is still made on intuitive grounds, as described below.

There are then two problems: (1) the determination of ℓ̄, and (2) the determination of φ(ℓ). The following procedure was used: set ℓi(0) for each neuron i ∈ 𝒩 to ℓmax. Set φ(ℓ) so that it gradually increases from zero (as ℓ decreases from ℓmax) at a constant rate (see Figure 7.5) until ℓ1 (= 38), then let φ(ℓ) increase at a much greater rate. The reasoning was that Φ(t) should tend from its initial distribution (all ℓi equal to ℓmax) to the one postulated above around some ℓ̄ with ℓ1 < ℓ̄ < ℓmax. The sharp upward bend in φ(ℓ) at ℓ1 was to ensure that the rare neurons (lower tail of the distribution of the rq,i about r̄q) firing at a much greater rate than once every r̄q time steps would eventually be "damped out" and forced back to the background rate.

This reasoning worked quite well in earlier experiments and possibly would have worked in the experiment of Section 7.5.2 above had the author been more careful. For detailed study of the distributions Φ(t) for that experiment, and for control experiments performed by "backing up" to earlier points in the experiment, revealed the following facts:

(1) Φ(t) did not settle down as expected to a distribution about a stationary ℓ̄; rather, ℓ̄ seemed to be gradually decreasing (toward ℓ1).

(2) The interval tz was too short to allow the neurons of Σ0 (hence the neurons of 𝒩 driven by Σ0, etc.) to recover completely with respect to fatigue during the "off" period of the stimulus. Thus these neurons became progressively more and more fatigued, eventually creating a large zone of hyper-refractory neurons.

(2) is easily seen a priori from the following calculation: if a neuron of 𝒩 fires at an average rate of 1/T0 (once every T0 time steps), then the net decrease in ℓ over τ time steps would be

    Δℓ/τ = (1/T0) Δ2 - (1 - 1/T0) Δ1.

Assuming T0 = 7, Δ1 = 1/16, Δ2 = 1.0 (taking Δ1 and Δ2 constant for all ℓ for the moment), this gives Δℓ/τ ≈ 1/11.2. For τ = tz = 100, then, Δℓ ≈ 8.92. Thus, in an "on" period, ℓ would go from ℓ0 to ℓ0 - 8.92. With the stimulus "off", on the other hand, the firing rate is essentially zero, so that

    Δℓ/τ = -(1 - 0) Δ1 = -1/16,

and for τ = tz = 100, Δℓ ≈ -6.35 (the slight excess over 100 · Δ1 = 6.25 presumably reflecting the ℓ-dependence of the actual Δ tables).

* At the risk of being pedantic, an observation is in order here. If 𝒩 is very large relative to Σ0, N >> |Σ0|, there is serious doubt in the author's mind that such a shift to bimodality would be observable, since the relative number of neurons of 𝒩 affected would be quite small. In the present situation, the shift definitely is observable.
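The piecewise fatigue-value curve and the drift calculation above can be sketched as follows. The function names and the slope constants SLOW and FAST are illustrative assumptions; only ℓmax = 63, the knee ℓ1 = 38, T0 = 7, Δ1 = 1/16, and Δ2 = 1.0 come from the text. Note that the constant-Δ approximation reproduces the quoted loss of about 8.92 per "on" envelope but gives 6.25 rather than 6.35 for an "off" envelope.

```python
L_MAX, L1 = 63, 38           # maximal fatigue state and knee of the curve
SLOW, FAST = 0.25, 1.5       # assumed slopes of phi above/below the knee

def phi(level):
    """Fatigue value phi(l): zero at l = L_MAX, rising at a constant slow
    rate as l decreases, then much faster once l drops below L1."""
    if level >= L1:
        return SLOW * (L_MAX - level)
    return SLOW * (L_MAX - L1) + FAST * (L1 - level)

DELTA1, DELTA2 = 1 / 16, 1.0  # recovery and fatigue increments (held constant)

def net_decrease(T0, tau):
    """Net decrease in fatigue state l over tau steps for a neuron firing
    once every T0 steps: each firing lowers l by DELTA2, each silent step
    raises it by DELTA1."""
    return (DELTA2 / T0 - DELTA1 * (1 - 1 / T0)) * tau

on_loss = net_decrease(7, 100)   # "on" envelope: about 8.93 units lost
off_gain = DELTA1 * 100          # "off" envelope, no firing: 6.25 regained
per_cycle = on_loss - off_gain   # net loss per complete on-off cycle
```

With the per-period figures quoted in the text (8.92 lost, 6.35 regained), the net loss is 2.57 fatigue units per complete on-off cycle, so neurons driven by Σ0 sink steadily toward the hyper-refractory region.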

Figure 7.18. Φ(t)'s for Cell-Assembly Experiment (Section 7.5.2). The fatigue distributions Φ(t) for t = 400, 1100, and 2000 are given below. Notice the shift in the apparent ℓ̄ from approximately ℓ = 58 at t = 400 to ℓ = 40 at t = 2000. Also, observe the secondary peak around ℓ = 36 at t = 2000. This is typical and represents the bimodality mentioned in the text arising from stimulating Σ0. This secondary peak is most pronounced after an "on" period, less so after an "off" period (since neurons affected by Σ0 have partially recovered with respect to fatigue). t = 2001 begins the next "on" period. Interpretation: the left-hand vertical ordinate indicates φ(ℓ) in units of synapse-value; the right-hand ordinate, the number of neurons of 𝒩 with fatigue-state ℓ.

[Figure: the three distributions, together with φ(ℓ), plotted against ℓ.]

Figure 7.19. Φ(t)'s for Cell-Assembly Experiment (Section 7.5.2) - Continued. The fatigue distributions Φ(t) for t = 2100 (end of an "on" period), t = 2200 (end of an "off" period), and t = 2300 (end of an "on" period) are given below. Interpretation: same as Fig. 7.18, q.v. Additional neurons appear to be drifting from large values of ℓ towards the apparent ℓ̄ = 40. The bimodality is perhaps most pronounced at t = 2300, at the end of the last stimulus period.

[Figure: the three distributions plotted against ℓ.]

Figure 7.20. Φ(t)'s for Cell-Assembly Experiment (Section 7.5.2) - Continued. The fatigue distributions Φ(t) for t = 2400 (end of an "off" period), t = 2500 (end of an "on" period), and t = 2600 (end of an "off" period) are given below. Interpretation: same as Fig. 7.18, q.v. Notice that additional neurons appear to be drifting towards ℓ̄ = 40 from higher values of ℓ. Notice also that the bimodality is more striking, especially at the end of the "on" period t = 2500, the more highly fatigued neurons recovering somewhat by t = 2600.

[Figure: the three distributions plotted against ℓ.]

Figure 7.21. Φ(t)'s for Cell-Assembly Experiment (Section 7.5.2) - Continued. The fatigue distributions Φ(t) for t = 2900 and t = 3100 (end of an "on" period) are given below. Interpretation: same as Fig. 7.18, q.v. The bimodality in Φ(t) is even more obvious here, larger numbers of neurons occupying the lower fatigue states ℓ (i.e., higher fatigue values φ(ℓ)). The peaks have moved to the left relative to the earlier distributions, suggesting that the network is becoming progressively less able to sustain steady-state behavior.

[Figure: the two distributions plotted against ℓ.]

Figure 7.22. Terminal Φ(t) for Cell-Assembly Experiment. The fatigue distribution Φ(t) at t = 3720, one time step before F(t) went to zero, is given here. Notice that the largest peak occurs at ℓ = 37; the majority of the neurons of 𝒩 have fatigue state ℓ < 40, hence are excessively fatigued (φ(ℓ) large). This suggests the "trapping" defect of the fatigue mechanism mentioned in the text.

[Figure: the distribution plotted against ℓ.]

Over a complete "on-off" cycle, then, ℓ would go from ℓ0 to ℓ0 - 8.92 + 6.35 = ℓ0 - 2.57. Thus, neurons of Σ0 (and those controlled by Σ0) will tend to accumulate larger and larger fatigue values.*

(1) is more subtle. The basic problem, however, appeared to be Φ(t) not reaching its stationary form by the time (t = 401) the periodic stimulation was begun; hence the "drift" in ℓ̄ toward ℓ1.

One final factor greatly aggravated the effects of (2). As noted previously, Δ1(ℓ) and Δ2(ℓ) vary with ℓ, although Δ1(ℓ)/(Δ1(ℓ) + Δ2(ℓ)) = constant for all ℓ. It was hoped that by this mechanism an hysteresis could be introduced into φ(ℓ). Unfortunately (aside from making the analysis more difficult), this tended all the more to allow large groups of neurons to attain high fatigue values (φ(ℓ) large), since as ℓ decreases, Δ1(ℓ) and Δ2(ℓ) increase.

Figures 7.18 - 7.22 show a sample of the sequence of distributions Φ(t) from the experiment of the preceding section. The bimodality of Φ(t) is apparent and, towards the end of the experiment (as more neurons become fatigued), becomes more pronounced. It is evident from these figures that progressively more and more neurons are being forced into higher fatigue values φ(ℓ).

A control experiment was performed with the stimulus suppressed from t = 2001 to t = 2800, using the network of the cell-assembly experiment at t = 2000. A sample of the fatigue distributions for this experiment is given in Figure 7.23. A study of this figure reveals that the highly fatigued neurons tended to recover to the (apparent) ℓ̄ at ℓ = 40. The few neurons remaining in the lower fatigue states

* This is a case of experimenter error, pure and simple. Notice, however, that the Δ1(ℓ)'s and Δ2(ℓ)'s actually vary with ℓ, so this effect could become more or less pronounced as ℓ varies. It was hoped that in this way the Δ's would tend to cancel out.

Figure 7.23. Sample of Fatigue Distributions for First Control Experiment. The network of the cell-assembly experiment at t = 2000 was run from t = 2001 to t = 2800 without stimulus. The fatigue distributions Φ(t) are given for t = 2000 (from the cell-assembly experiment), t = 2400, and t = 2800. Notice that the two peaks at ℓ = 40 and ℓ = 35 seem to be closing in on an intermediate ℓ̄. Thus, while some neurons are recovering, others still are seeking a lower value of ℓ. Recall from the discussion in the text that the network is slightly underdamped; a stable value of ℓ̄ has not yet been obtained.

[Figure: the three distributions plotted against ℓ.]

were largely those of the successor-sets of Σ0. The behavior F(t) for this experiment appeared stable, i.e., no violent fluctuations, although E(F(t)) exceeded the theoretical value Fb = N/r̄q = 400/17 ≈ 23.5, revealing another contributing factor to the drift in ℓ̄: the behavior of the network appears to be slightly underdamped. At this point the reader is reminded that a slight underdamping had been noticed in the initial steady-state phase of the experiment (see Section 7.5.1). It was reasoned there, however, that this should only continue to the point that Φ(t) had stabilized about an ℓ̄. Regrettably, the stimulus was turned on before this had occurred, superimposing on ℓ̄ an additional drift beyond that caused by the mismatch in the lengths of the "on" and "off" periods of the stimulus. Presumably, had the control experiment been continued, the behavior eventually would have been damped so that E(F(t)) → Fb as ℓ̄ finally converged to a stable value.

At this point, it was decided to eliminate the hysteresis feature (Δ1 and Δ2 being functions of the fatigue state ℓ) from the fatigue mechanism. The immediate reason for this was that a detailed study of the Φ(t) from the cell-assembly experiment and various control experiments revealed that groups of neurons with the same fatigue level would recover abruptly from higher values of φ(ℓ) to lower fatigue values, thus forming a hyper-recovered pool of neurons in 𝒩 (with all the attendant dangers). This occurred around ℓ = 36, a region of abrupt change in φ(ℓ) as well as in Δ1(ℓ) and Δ2(ℓ); thus, a significant distortion was introduced into the long-run behavior of Φ(t).

To test this, the first control experiment above was repeated from

Figure 7.24. EEG's for First, Second, and Third Control Experiments.

(a) First Control Experiment. Original fatigue mechanism.

(b) Second Control Experiment. Δ1 = 1/16, Δ2 = 1.0. Oscillations in this run are levelling out as t increases.

(c) Third Control Experiment. Δ1 = 1/4, Δ2 = 4.0. Oscillations in this run remain somewhat stronger than in (b).

[Figure: F(t) traces over t = 2300 - 2400 for the three runs.]

Figure 7.25. Φ(t)'s for Second and Third Control Experiments (Section 7.5.3).

(a) Second Control Experiment. The fatigue distributions Φ(t) at t = 2000 (from the cell-assembly experiment) and at t = 2400 are given below. The second control experiment was a modification of the first in which Δ1 = 1/16, Δ2 = 1.0. Interpretation: same as Fig. 7.18, q.v.

[Figure: the two distributions plotted against ℓ.]

Figure 7.25 (continued).

(b) Φ(t)'s for Third Control Experiment. This experiment was identical to the second control run, except now Δ1 = 1/4, Δ2 = 4.0. The Φ(t)'s are given for t = 2800, 3200, and 3575. Φ(t) appears to be stabilizing about ℓ̄ = 36.

[Figure: the three distributions plotted against ℓ.]

Figure 7.26. EEG for Fourth Control Experiment.

[Figure: F(t) over t = 3300 - 3400.]

t = 2001 to t = 2400 with Δ1(ℓ) ≡ Δ1 = 1/16, Δ2(ℓ) ≡ Δ2 = 1.0. The EEG's for the two experiments are given for t = 2301 - 2400 in Figure 7.24 (a) and (b); Φ(t) at t = 2000 and t = 2400 for the second control experiment is given in Figure 7.25 (a).

The second control experiment was repeated using larger values of Δ1 and Δ2: Δ1 = 4/16, Δ2 = 4.0. As is to be expected, such large changes in ℓ tended to produce very uneven behavior; F(t) tended to oscillate more violently than in the second control run above. The EEG for t = 2301 - 2400 and Φ(t) for selected time steps of this third control experiment are given in Figures 7.24 (c) and 7.25 (b) respectively.

An interesting contrast to the second control experiment above was obtained by continuing the cell-assembly experiment from t = 2901 on without stimulus and with Δ1 = 1.0, Δ2 = 1/16. The behavior up to t = 3400, as shown in Figure 7.26, was considerably less stable than in the earlier control run, with F(t) going to zero at t = 3509, the stimulus having been turned back on at t = 3401 until t = 3500. Φ(t)'s for this fourth control run are given in Figure 7.27. Analysis of this run suggested that a sufficient number of neurons did not recover in time to sustain the steady-state behavior, bearing in mind the fact that an excessive number of neurons were highly fatigued at t = 2900 in the original experiment.

At this juncture, the harsh reality of a nearly-exhausted budget raised its ugly head. There simply were not sufficient funds to allow a complete rerun of the run-in and cell-assembly experiments of Sections 7.5.1 and 7.5.2 above. The positive results of Section 7.5.2 seemed genuine, the difficulty arising from the combined effects of (1) and (2) mentioned earlier and being of a transient (though fatal, in this case) nature.

Figure 7.27. Φ(t)'s for Fourth Control Experiment. Φ(t) is given for t = 3300 and t = 3508 of this run. F(t) went to zero at t = 3509. Notice that the neurons of 𝒩 appear to be clustered around ℓ = 36 at t = 3508, some neurons still at lower values of ℓ (hence higher values of φ(ℓ)). The effect of the stimulus (t = 3401 - 3500) was responsible for the peak at ℓ = 23. Since the Δ1(ℓ) and Δ2(ℓ) tables still contain the hysteresis of the cell-assembly experiment, there appear to be three factors operating here: (1) the trapping of too many neurons in lower fatigue states (larger φ(ℓ)) due to the stimulus; (2) the distortion in F(t) due to the hysteresis; and (3) the presence yet of a slight underdamping of F(t), resulting in neurons "hunting" for ℓ̄.

[Figure: the two distributions plotted against ℓ.]

Therefore the decision was made to attempt to salvage from the fourth control experiment a network for the alternating periodic stimulus series. To this end, an artifact was introduced into V(r) in an attempt to encourage the more highly fatigued neurons of 𝒩 to fire, without creating excessively violent fluctuations in F(t). This artifact consisted of allowing V(r) to take on negative values for r ≥ 21. A control experiment was performed using such a V(r) (see Figure 7.28), continuing the fourth control run above from t = 3301 to t = 3400. The EEG is displayed in Figure 7.29 and Φ(t) at t = 3301 and 3400 in Figure 7.30. Notice that F(t) appears to be levelling out towards t = 3400 and that ℓ̄ seems to be converging gradually to ℓ = 38. The results of this fifth control run fanned the sparks of hope that the alternating periodic stimulus experiment could be performed without further difficulties due to the fatigue mechanism. This last sequence of experiments is described in the next section.

In conclusion, a simple cycle C(Σ0) formed in the cell-assembly experiment. Unfortunately, toward the end of the experiment, fatal oscillations in F(t) arose. These were correctable by adjusting an imbalance in the fatigue mechanism and by introducing an artifact into V(r).

7.5.4 Alternating Periodic Stimuli Sequence

Σ0* was chosen as shown in Figure 7.31, Σ0 as in Section 7.5.2. The network of the fifth control experiment of Section 7.5.3 at t = 3400 was used as the starting place for this sequence. T0 = 7 (as before), T0* = 8, tz = 100 (as before), tz* = 150. For the first run, δ = 0, giving the stimulus sequence I I* I I* ..., I being the interval in which the stimulus to Σ0 is "on", that to Σ0* "off", and I* the interval in which the stimulus to

Figure 7.28. V(r) for Fifth Control Experiment. The V(r) used in the cell-assembly experiment, where it differs from the new V(r) with negative entries, is indicated by a dotted line. Notice that in the range r = 12 to 17, the new V(r) is less than the old. This was just another V(r) artifact to bring more neurons back into circulation, as it were. Likewise, V(r) in the range r = 3 to 11 has been raised to discourage neurons from firing too rapidly, hopefully stemming somewhat the drift in ℓ̄.

[Figure: the old and new V(r) curves (synapse-values) plotted against r.]
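The modified curve of Figure 7.28 can be sketched as a piecewise adjustment of the old V(r). The numeric offsets and the placeholder shape of the old curve are illustrative assumptions; only the three qualitative changes (raised for r = 3 to 11, lowered for r = 12 to 17, negative for r ≥ 21) come from the text and the figure.

```python
def v_old(r):
    # Placeholder monotone-decreasing threshold curve (assumed shape).
    return max(15.0 - 0.75 * r, 0.0)

def v_new(r, old=v_old):
    """Fifth-control-experiment V(r), expressed as an adjustment of the
    old curve. Offsets are illustrative, not the tabulated values."""
    if 3 <= r <= 11:
        return old(r) + 2.0   # raised: discourage firing too rapidly
    if 12 <= r <= 17:
        return old(r) - 1.0   # lowered: bring more neurons back into circulation
    if r >= 21:
        return -1.0           # negative entries: coax fatigued neurons to fire
    return old(r)
```

The negative tail is the essential artifact: a neuron that has gone unfired for 21 or more steps sees a threshold below zero and is thereby pulled back into circulation.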

Figure 7.29. EEG for Fifth Control Experiment (V(r) with Negative Entries). Initially, the behavior is highly oscillatory, but the negative feedback mechanism appears to level it out towards t = 3400. The large oscillations are most likely due to a slight initial excess of neurons made available by the artifact in V(r).

[Figure: F(t) over t = 3300 - 3400, smoothing out toward the end.]

Figure 7.30. Φ(t)'s for Fifth Control Experiment. The fatigue distributions Φ(t) at t = 3301 and t = 3400 are given here. ℓ̄ seems to have settled around ℓ = 37, although Φ(t) has broadened out somewhat by t = 3400.

[Figure: the two distributions plotted against ℓ.]

Σ0* is "on", that to Σ0 "off". This network was run through t = 4000. The experiment was then continued from t = 4001 through t = 4350 with the sequence I* I δ I* I δ ... and δ = 50; δ was introduced to allow 𝒩 to recover from the effects of stimulation of the two areas Σ0 and Σ0* (see Chapter 4, Section 4.5.2). The experiment was terminated at t = 4350 because funds for computer time were exhausted.

In spite of the somewhat premature termination of this run, some interesting results were observed. First of all, the cycle C(Σ0) seemed firmly established: the connections from Στ to Στ+1, τ = 0, 1, 2, ..., were all close to their maximal values, while the "back" connections, e.g., the connections from Στ+1 to Στ, were moderate or inhibitory. Part of C(Σ0) is given in Figure 7.32, together with the synapse-values in the directions τ to τ+1. Thus, the artifact in V(r) and the changes in Δ1(ℓ) and Δ2(ℓ) did not destroy C(Σ0). Secondly, an embryonic cycle C(Σ0*) appeared to be in the process of formation; fragments of this cycle are shown in Figure 7.33. Both of these results were expected. Given time, no doubt, C(Σ0*) would have become as well formed as C(Σ0). The third result, however, seems to give great hope for future, deeper study of cell-assembly theory via computer simulation: connections between the cycle C(Σ0) and the (not yet complete) cycle C(Σ0*) tended toward negative (inhibitory) values. That is, cross-inhibition, as predicted by the theory, appeared to be evolving between C(Σ0) and C(Σ0*). This is illustrated in Figure 7.34.

In fact, a control run without stimulus, starting at t = 4001, was conducted; F(t) went to zero at t = 4126. A study of the Φ(t)'s bore out the claims of Chapter 4, Section 4.5.2, about the necessity of the δ.
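The two-stimulus regime of this section can be sketched in the same way as the single-stimulus envelope of Section 7.5.2. Here the run from t = 4001 on is modelled: an envelope for Σ0* and one for Σ0, followed by a recovery gap of δ = 50 steps in which neither area is stimulated. The envelope lengths tz = 100 and tz* = 150 are taken from the parameter list of this section; the function name and the exact ordering within a super-cycle are assumptions of the sketch.

```python
TZ, TZ_STAR, DELTA = 100, 150, 50   # envelope lengths and recovery gap

def active_stimulus(t, t_start=4001):
    """Which input set is being stimulated at time t >= t_start?
    One super-cycle is: I* (150 steps), I (100 steps), delta (50 steps)."""
    phase = (t - t_start) % (TZ_STAR + TZ + DELTA)
    if phase < TZ_STAR:
        return "sigma0*"
    if phase < TZ_STAR + TZ:
        return "sigma0"
    return None   # recovery gap: neither area stimulated
```

The control run mentioned at the end of this section corresponds to dropping the gap entirely, and its failure (F(t) = 0 at t = 4126) is what motivated keeping δ > 0.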

Figure 7.31. Σ0 and Σ0* for Alternating Periodic Stimuli Sequence. Notice that Σ0 and Σ0* are disjoint and cannot affect each other directly (R = 6). Yet they are sufficiently close that the paths PE(Σ0 → Σ0*) and PE(Σ0* → Σ0), hence eventually the cycles C(Σ0) and C(Σ0*), should be "proximate" in the sense of Chapter 4.

Σ0 = {1, 3, 5, 41, 43, 45, 81, 83, 85}
Σ0* = {216, 218, 220, 256, 258, 260, 296, 298, 300}

[Figure: the 20 x 20 grid with the two input sets marked.]

Figure 7.32. Part of C(Σ0) from the Alternating Periodic Stimuli Experiment. Only two branches emanating from, and returning to, neuron 1 ∈ Σ0 are shown. These are respectively:

1 → 63 → 61 → 56 → 157 → 73 → 36 → 1 (light lines in diagram)
1 → 63 → 62 → 180 → 155 → 117 → 36 → 1 (heavy lines)

Some of the branching within these two branches has been omitted to make the diagram legible. The synapse-values from Στ to Στ+1 are all maximal (i.e., +8.0) except one, which is +3.0, having increased from 0. The synapse-value from neuron 1 to neuron 36 has decreased; this is typical of the "back" connections from Στ+1 to Στ. The dotted lines in the diagram are parts of the embryonic C(Σ0*) of Fig. 7.33.

[Figure: the two branches drawn in the geometry of 𝒩.]

Figure 7.33. Fragments of an Embryonic C(Σ0*). Several parts of what appear to be an evolving C(Σ0*) are shown. (Overlapping neurons are not shown unless part of a path.) Notice the fragments of paths through the successive successor-sets Στ*; the synapse-levels from the neurons of Στ* to Στ+1* are shown above the corresponding arrows, e.g., the fragment 220 → 159 → 100 → 141 with levels (58), (62), (62). Parts of these paths are also shown in Fig. 7.32 (dotted lines).

[Figure: the path fragments drawn with synapse-levels on the arrows.]

Figure 7.34. Development of Cross-Inhibiting Connections Between C(Σ0) and (the partial) C(Σ0*). The arrangement in this figure is the following. In (a), the neurons of C(Σ0*) are arranged in order according to the successor-set Στ* in which they fall. Opposite each receiving neuron of C(Σ0*) is the neuron of C(Σ0) which transmits to it. The synapse-values of this connection at t = 4000 and t = 4350 are given at the right. Similarly for (b). Perusal of the changes in the synapse-values does not show a strong downward trend. Nonetheless, a slight trend is present, and the connections tend to remain negative (inhibitory). Given the extremely short interval over which the experiment was performed, this is perhaps the best that could be obtained. Incidentally, recalling the discussion of Chapter 4, it is probably not desirable that all the interconnections become negative, since some positive ones may assist the revival of one cell-assembly as the other becomes fatigued.

(a) Connections from C(Σ0) to C(Σ0*).

Receiving Neurons of C(Σ0*)   Transmitting Neurons of C(Σ0)   t = 4000   t = 4350
Σ0*:  220                     180 ∈ Σ3                        -8.00      -7.75
      220                     155 ∈ Σ4                        +3.25      +7.25
Σ1*:  159                     62 ∈ Σ2                         +0.50      +0.25
      159                     144 ∈ Σ1                        -8.50      -8.50
      159                     155 ∈ Σ4                         0.00      -0.75
      159                     157 ∈ Σ4                        -1.50      -2.25
Σ2*:  100                     36 ∈ Σ6                         -1.25      -1.00
      100                     61 ∈ Σ2                          0.00      -0.25
      100                     62 ∈ Σ2                         -0.50      -0.75
      100                     155 ∈ Σ4                        -2.75      -3.00
      100                     157 ∈ Σ4                        -0.75      -0.50
      100                     180 ∈ Σ3                        -2.25      -0.75
      100                     144 ∈ Σ1                        -2.25      -2.50
Σ3*:  141                     62 ∈ Σ2                         -0.25      -0.75
      141                     157 ∈ Σ4                        +0.50      +0.75
      141                     117 ∈ Σ5                         0.00       0.00

Figure 7.34 (continued)

(b) Connections from C(Σ*0) to C(Σ0).

    Receiving Neurons   Transmitting Neurons   Synapse-Values
    of C(Σ0)            of C(Σ*0)              t = 4000   t = 4350

    Σ0:   None
    Σ1:   63            159 ∈ Σ*1              +1.75      +1.00
          63            100 ∈ Σ*2              -0.25       0.00
          63            141 ∈ Σ*3              -3.00      -2.50
          144           100 ∈ Σ*2              +1.25      +1.75
          144           159 ∈ Σ*1              +0.25      -0.25
          144           220 ∈ Σ*0              -0.50      -0.75
    Σ2:   61            100 ∈ Σ*2              +0.50      +0.00
          62            141 ∈ Σ*3              -3.00      -2.75
          62            159 ∈ Σ*1              +0.50      +0.75
    Σ3:   56            159 ∈ Σ*1              -2.25      -2.00
          180           197 ∈ Σ*1               0.00      +0.25
    Σ4:   157           100 ∈ Σ*2              -2.00      -2.25
          157           159 ∈ Σ*1              -1.25      -1.25
          157           197 ∈ Σ*1              -1.50       0.00
    Σ5:   117           141 ∈ Σ*3              -2.25      -1.50
          117           197 ∈ Σ*1              -1.00      -1.00
    Σ6:   36            100 ∈ Σ*2              -0.75      -1.25

7.5.5 Conclusions

A structural characterization of the formation and development of cell-assemblies has been given in terms of the model. Behind this formation is the synapse-level change law, expressed in the model in terms of the probabilities U(λ) and D(λ) and the balancing equation relating these quantities and F_b:

    F_b = D(λ) / (U(λ) + D(λ)).

The experiments bear out the fact that Hebb's law (as expressed in the model) is solely responsible for the structural changes giving rise to the cycles C(Σ0) and C(Σ*0) and to the trend toward cross-inhibition between C(Σ0) and C(Σ*0). No other mechanism of the model can account for these empirically observed phenomena.

By no means does the author claim that all has been done as originally planned or hoped. Many things remain: for example, development of a threshold curve-fatigue curve calculus analogous to that of Chapter 4 for V(r); improving the Boolean relations of Chapter 4 to yield more readily estimates of P_k; a study of dynamic relationships between the cycles C(Σ0) and C(Σ*0); etc. However, the following have been done:

(1) a basic stability calculus relating N, p, and V(r) was developed and tested empirically (Chapters 4 and 6);
(2) this calculus was modified to include the presence of negative (inhibitory) connections (Chapters 4 and 6);
(3) closed cycles C(Σ0) were formed (Chapter 7);
(4) partial formation of a possible cross-inhibitory pair of cell-assemblies C(Σ0) and C(Σ*0) was effected (Chapter 7).

It is hoped that these will form the basis of more extensive work in the future, involving much larger and more complex networks and using the more versatile computers emerging today.

APPENDIX: A NOTE ON THE GENERATION OF CONNECTION SCHEMES

1. Random Number Generation

In the approximation to the ur-model (Chapter 4, Section 4.2.1) a number n is drawn at random from the unit interval. To effect this in the model, a pseudo-random number generator is used. The qualifier "pseudo" indicates that, in fact, the output of the generator is not truly random. However, the validity of all the arguments in Chapter 4 concerning the number of connections received by a neuron i ∈ A_k from a subset M stems back to the uniform randomness generated by the connection assignment schemes of Section 4.2.1. Consequently, some evidence will be advanced in the next section to show that the generator used in this study was adequate, the effects of non-randomness being essentially negligible.

To the best of the author's knowledge, all contemporary random number generators used on digital computers are defective in some sense or another: (a) some produce all zeros after a certain number of samplings (e.g., the "mid-square" method; see Greenberger [8]); (b) some cycle after a large number of samplings (e.g., the power residue method); (c) some leave gaps, i.e., sets over which no sample points can ever occur (e.g., some forms of the power residue method; again see Greenberger [8]); (d) there may be correlations of different orders among successive samples of a sampling sequence; etc. The generator used in this study is definitely defective in the sense of (b), though apparently not in the sense of (c). It is of the form

    n_{i+1} = (n_i · m) modulo 2^35,  i = 1, 2, 3, ...,

where m = 5^15 and n_i is the i-th number sampled. Clearly, this generator is not random at all; however, it has a large period, and the modulo-2^35 reduction scrambles the digits of n_i · m somewhat as n_{i+1} is obtained. Although, as observed by Greenberger [8], there exist examples of this general type of generator having defect (c), it appears that this specific one is free of it (Crichton [3]). Moreover, this generator, like Greenberger's example, has the enormous advantage of being implemented by a very short and rapid subroutine on the IBM 7090. In any case, in agreement with Greenberger, the guiding principles in favor of selecting this generator were: (1) its period is essentially infinite; (2) it appears reasonably free of defects like (c); (3) it requires little computer storage and is fast; (4) it appears to fulfill the requirements of the problem at hand. The next section will be devoted to showing the validity of (4) for this study; consequently, the above generator is adequate.

There exists a relatively substantial volume of literature on random-number generation, including some that specifies very sophisticated tests for measuring the degree of randomness of any particular generator. No one test, however, appears to be sufficient, and one is forced to rely on criteria such as (4) above to judge the merit of any one particular generator.

2. Tests of Distributions

As mentioned in the preceding section, the specific generator used in this study was

    n_{i+1} = (n_i · 5^15) modulo 2^35,  i = 1, 2, 3, ... .

For a given n_1, an entire sequence {n_i} is determined; n_1 is called the random starter of the sequence. For different choices of n_1 (relatively prime to 5^15), different sequences result, hence different connection schemes (for given fixed N, p). This is all right, provided these different schemes possess the same statistics. Here "the same statistics" was taken to mean that the distributions pass the χ² test (of a sampled population against a theoretical population with known mean) for reasonable levels of significance.

Let N_K be the expected number of neurons receiving exactly K connections. Then, for fixed p,

    N_K = p_K N,  K = 0, 1, 2, ...,

where p_K is the probability that a neuron receives exactly K connections:

    p_K = e^{-p} p^K / K!,  K = 0, 1, 2, ... .

For the given p, N, let a connection-scheme be generated by the algorithm (Figure 4.3) of Chapter 4, and let N*_K be the actual count of neurons receiving exactly K connections. The χ² test will be applied now to the theoretical distribution {N_K} and the actual {N*_K}. The sample size is N; the sample mean p* is given by

    p* = (1/N) Σ_K K N*_K,

and the population mean by

    (1/N) Σ_K K N_K = (1/N) Σ_K K p_K N = Σ_K K p_K

= p, as it should be. The results of some typical calculations are given in Tables 1 and 2 below (uniform random distribution case). In these tables, N*_r represents the actual count in the r-th class interval and p_r(p) the theoretical frequency that the random variable lie in that interval.

It is clearly a trivial modification of the above reasoning to test the distribution of connections received by a neuron from some subset A (uniform random case only): merely replace p by p_A and proceed as before. Typical results of such a calculation are given in Table 3.

There only remains the inherently less tractable case of distance-bias distributions. Here again, the same basic tests of Tables 1 and 2 apply with appropriate modifications, the test of Table 3 being considerably more difficult. For all practical purposes, however, it is sufficient to test the distribution of connections within a neighborhood C_R. This was done by the usual "Petri plate" technique of imposing a cellular grid, sufficiently fine, over the neighborhood; counting the number, N*_K, of cells in the neighborhood of a fixed neuron that transmit exactly K connections to the neuron; and then comparing this count with the theoretical N_K by means of the χ² test. An example of this procedure is shown in Figure 1 and Table 4 below.

A glance at these tables shows that the hypothesis "the actual distributions are approximately Poisson (with the respective means)" is acceptable at a level of significance usually in the 70%-90% range. Considering the

relatively crude generator used, this is quite acceptable.

Figure 1. "Petri-Plate" Sample of Connections in a Neighborhood C_R. The network of Figure 6.28 (Experiment 9) was used: N = 400, p = 55, R = 6; C_R contains 113 neurons. A typical neighborhood is shown with the neurons transmitting to the receiving neuron (at the center) encircled. The grid used to count the N*_K's is shown. [The diagram itself is not reproduced here.]

Table 1. χ²-Test Applied to a Network (Uniform Random Case). N_r, N*_r, and p_r(p) are as defined in the text; N = 400, p = 6, u_r = N*_r - N_r. The critical values of χ² were obtained from standard tables.*

    K       r    N*_r   N_r    u_r    u_r²/N_r
    0,1,2   1    30     24.3    5.7   1.345
    3       2    38     34.9    3.1   0.275
    4       3    52     52.5   -0.5   0.005
    5       4    67     63.0    4.0   0.254
    6       5    60     63.0   -3.0   0.143
    7       6    54     54.0    0.0   0
    8       7    35     40.5   -5.5   0.746
    9       8    18     26.9   -8.9   2.940
    10      9    21     16.2    4.8   1.420
    11     10    17     16.1    0.9   0.051

    m = 9 (degrees of freedom),  χ² = Σ u_r²/N_r = 7.179

This satisfies the χ²-criterion at the 70% level (nine degrees of freedom).

*See, for example, Burington, R. S., and May, D. C., Handbook of Probability and Statistics with Tables, Handbook Publishers, Inc., Sandusky, Ohio.
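The power-residue generator of Section 1 can be sketched in modern terms as follows. This is a minimal illustration in Python, not the original study's code (which was a short IBM 7090 subroutine); the seed value and function name are hypothetical. Each draw multiplies the current state by 5^15, reduces modulo 2^35, and maps the result to the unit interval.

```python
MULTIPLIER = 5 ** 15   # 30517578125, the multiplier m of the Appendix
MODULUS = 2 ** 35      # the modulo-2^35 reduction

def power_residue_generator(random_starter):
    """Yield successive pseudo-random numbers in [0, 1) via
    n_{i+1} = (n_i * 5^15) mod 2^35, scaled into the unit interval."""
    n = random_starter
    while True:
        n = (n * MULTIPLIER) % MODULUS
        yield n / MODULUS

# An odd, purely illustrative "random starter" n_1 (odd seeds stay in the
# full-period residue class of this multiplicative generator).
gen = power_residue_generator(12345)
samples = [next(gen) for _ in range(10000)]
```

Since the period of this generator far exceeds 10,000, no value recurs in the sample above, and the sample mean should sit near 0.5 if the output is roughly uniform, which is the informal criterion (4) of the text.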

Table 2. χ²-Test Applied to a Network (Uniform Random Case). The same network was used as in Table 1, except that a different random starter n_1 was used. This was one of the worst cases sampled, the error perhaps stemming from the counts N*_r, which were made by hand.

    K       r    N*_r   N_r    u_r    u_r²/N_r
    0,1,2   1    13      6.8    6.2   5.650
    3       2    18     17.5    0.5   0.014
    4       3    29     34.9   -5.9   0.996
    5       4    60     52.5    7.5   1.070
    6       5    60     63.0   -3.0   0.143
    7       6    72     63.0    9.0   1.280
    8       7    51     54.0   -3.0   0.166
    9       8    35     40.5   -5.5   0.751
    10      9    19     26.9   -7.9   2.310
    11     10    17     16.2    0.8   0.039
    12     11    18     16.1    1.9   0.224

    m = 10 (degrees of freedom),  χ² = 12.643

This satisfies the χ²-criterion at the 20% level (ten degrees of freedom).

Table 3. χ²-Tests Applied to Subsets. In the tables below, N*_K is the count of neurons receiving exactly K connections from a subset A with |A| = 20, for several values of p.

p = 12, λ = 0.6:

    K   r    N*_r   N_r     u_r    u_r²/N_r
    1   2    118    113.2    4.8   0.20
    2   3     32     39.5   -7.5   1.40
    3   4     10      9.3    0.7   0.05

    m = 3 (degrees of freedom), χ² = 3.65. The χ²-criterion is satisfied at the 30% level.

p = 6, λ = 0.3:

    K   r    N*_r   N_r     u_r    u_r²/N_r
    0   1    300    296.0    4     0.054
    1   2     83     89.0   -6     0.320
    2   3     17     15.0    2     0.270

    m = 2 (degrees of freedom), χ² = 0.644. The χ²-criterion is satisfied at the 70% level.

p = 6, λ = 0.3 (different n_1):

    K   r    N*_r   N_r     u_r    u_r²/N_r
    0   1    314    296.0   18     1.09
    1   2     70     89.0  -19     4.06
    2   3     16     15.0    1     0.07

    The χ²-criterion is satisfied at the 5% level.

The network used was the same as that of Table 2 above.
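As a check on the arithmetic, the χ² statistic of these tables can be recomputed directly from the class counts. The sketch below (in Python, purely illustrative; the function name is not from the original study) applies the usual formula χ² = Σ (N*_r - N_r)² / N_r to the observed and theoretical counts of Table 1; small discrepancies against the printed total arise only from per-term rounding in the original.

```python
def chi_square_statistic(observed, expected):
    """Chi-square goodness-of-fit statistic: sum of (N*_r - N_r)^2 / N_r
    over the class intervals r."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Class counts N*_r (observed) and N_r (theoretical) from Table 1
# (N = 400, p = 6, uniform random case).
observed = [30, 38, 52, 67, 60, 54, 35, 18, 21, 17]
expected = [24.3, 34.9, 52.5, 63.0, 63.0, 54.0, 40.5, 26.9, 16.2, 16.1]

chi2 = chi_square_statistic(observed, expected)
# about 7.18, matching the 7.179 printed in Table 1
```

The resulting value is then compared against the tabulated critical value for m = 9 degrees of freedom, exactly as the text describes.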

Table 4. χ²-Test Applied to the C_R of Figure 1. Let N*_r be the number of squares falling in the r-th class interval; the corresponding theoretical N_r is given by N_r = p_r(λ)N, where N is the number of squares and p_r(λ) is the theoretical frequency that the random variable lie in the r-th class interval. Here λ is given by λ = (p/C_R)·4 = 2.0.

    K   r    N*_r   N_r    u_r     u_r²/N_r
    0   1    4      3.8     0.2    0.01
    1   2    6.3    7.6    -1.3    0.22
    2   3    6.5    7.6    -1.1    0.16
    3   4    9.0    9.05   -0.05   0

    m = 3,  χ² = 0.39

This satisfies the χ²-criterion (three degrees of freedom) at the 90% significance level.
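The theoretical counts N_r used throughout these tables come from the Poisson probabilities p_K = e^(-λ) λ^K / K!, with the upper tail pooled into the last class interval. A brief Python sketch (the function name is illustrative, not from the original study) reproduces the theoretical column of the p = 6, λ = 0.3 case of Table 3:

```python
import math

def poisson_expected_counts(lam, n_neurons, n_classes):
    """Expected class counts N_r = p_r(lambda) * N for a Poisson fit.
    Classes are K = 0, 1, ..., n_classes - 2, with all K >= n_classes - 1
    pooled into the final class, so the counts always sum to N."""
    probs = [math.exp(-lam) * lam ** k / math.factorial(k)
             for k in range(n_classes - 1)]
    probs.append(1.0 - sum(probs))  # pooled tail probability P(K >= n_classes - 1)
    return [p * n_neurons for p in probs]

# Table 3, p = 6, lambda = 0.3, N = 400: theoretical column 296.0, 89.0, 15.0.
counts = poisson_expected_counts(0.3, 400, 3)
# approximately [296.3, 88.9, 14.8]
```

Pooling the tail in this way is what makes the printed value 15.0 (rather than the bare p_2·N ≈ 13.3) for the last class.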

BIBLIOGRAPHY

1. Burns, B. D., The Mammalian Cerebral Cortex, London, Edward Arnold Ltd. (1958).

2. Crichton, J. W., "The Principles of Living Beings: An Exploratory Essay," Doctoral Dissertation, The University of Michigan, Ann Arbor, Michigan, 1964.

3. ———, unpublished personal communication (1966).

4. ———, and Finley, M. R., Jr., "Programmed Simulation of Nerve Net" (letter to J. H. Holland) (1961).

5. ———, and Holland, J. H., "A New Method of Simulating the Central Nervous System Using an Automatic Computer," Technical Memorandum, The University of Michigan, March 1959.

6. Eccles, J. C., The Neurophysiological Basis of Mind, Oxford, Clarendon Press (1953).

7. Finley, M. R., Jr., "Experimental Study of Neural Networks by Means of a Digital Computer Simulation" in Research in Theory of Adaptive Systems, Interim Engineering Report, The University of Michigan, January 1965.

8. Greenberger, Martin, "Method in Randomness" in Communications of the Association for Computing Machinery, Vol. 8, No. 3, March 1965, pp. 177-179.

9. Hebb, D. O., The Organization of Behavior, New York, John Wiley and Sons, Inc. (1949).

10. Milner, P. M., "The Cell Assembly: Mark II" in Psychological Review, 64, 4 (1957), pp. 242-252.

11. Rochester, N., Holland, J. H., Haibt, L. H., and Duda, W. L., "Tests on a Cell-Assembly Theory of the Action of the Brain, Using a Large Digital Computer" in IRE Transactions on Information Theory, IT-2, No. 2 (1956).

12. Sharpless, S. K., and Halpern, L. M., "The Electrical Excitability of Chronically Isolated Cortex Studied by Means of Permanently Implanted Electrodes" in Electroenceph. Clin. Neurophysiol., 14 (1962), pp. 244-255.


ERRATA

Page  Line    Correction
1     11      Delete the period after "behavior" and insert: "(but see page 4, paragraph 2),"
3     6 up    "assemply" should read "assembly"
5     9       "inter-cellular" should read "intra-cellular"
6     9 up    "dendritic branching" should read "few dendrites"
7     10      "unknow" should read "unknown"
9     4, 5    Place to the right of the equations: "(symmetric form of Hebb's postulate)"
12    12      "assemply" should read "assembly"
20    11 up   "X_i(t)" should read "X_ji(t)"
20    8 up    "change up" should read "increase"
20    7 up    "change down" should read "decrease in"
21    2       "models of Hebb's synapse-growth law" should read "models of a symmetric form of Hebb's synapse-growth law"
21    4       "connectijn..." should read "connection n_ji..."
22    6 up    "product" should read "sum"
28    8 up    "change up of" should read "increase in"
28    6 up    "change down" should read "decrease"
33    7       "change" should read "increase"
33    8       "change down" should read "decrease"
43    9 up    "down" should read "up"

ERRATA (Concluded)

Page  Line    Correction
47    6       "intervenining" should read "intervening"
48    2       "stable-steady-stable" should read "stable-steady-state"
60    14      "[d]" should read "p[d]"
66    8       "in o" should read "in off"
68    4 up    "fata:" should read "fatal"
69    5 up    "at, now" should read "at t and"
71    8 up    Insert { before "r = 0, 1, ..., rq}"
95    3       "surpress" should read "suppress"
99    5 up    "Eudidean" should read "Euclidean"
104   2       "Eudidean" should read "Euclidean"
125   4       "tranma" should read "trauma"
263   3 up    "reuslts" should read "results"
265   1       "forth" should read "fourth"
