ENGINEERING RESEARCH INSTITUTE
THE UNIVERSITY OF MICHIGAN
ANN ARBOR

THE LOGIC OF AUTOMATA

AFOSR TN-56-539
ASTIA AD-110-358

Project 2512, MATHEMATICS DIVISION
Contract No. AF 18(603)-72, File 4.12; Washington, D. C.

December 1956


The University of Michigan * Engineering Research Institute

TABLE OF CONTENTS

LIST OF FIGURES
ABSTRACT
OBJECTIVE
1. INTRODUCTION
2. AUTOMATA AND NETS
   2.1 Fixed and Growing Automata
   2.2 Characterizing Tables and a Decision Procedure
   2.3 Representation by Nets, the Coded Normal Form
3. TRANSITION MATRICES AND MATRIX FORM NETS
   3.1 Transition Matrices
   3.2 Matrix Form Nets
   3.3 Some Uses of Matrices
4. CYCLES, NETS, AND QUANTIFIERS
   4.1 Decomposing Nets
   4.2 Truth Functions and Quantifiers
   4.3 Nerve Nets
BIBLIOGRAPHY
DISTRIBUTION LIST

LIST OF FIGURES

1. Normal form net.
2. Decoded normal form net.
3. Matrix box of order 4.
4. Matrix form binary counter.
5. Net of degree 3.
6. Two simple nets.

ABSTRACT

Classes of automata are distinguished: fixed and growing, deterministic and probabilistic. We then present methods for analyzing and synthesizing fixed, deterministic automata by four kinds of state tables. Our use of these tables gives a decision procedure for determining whether or not two automaton junctions behave the same. Matrix theory is applied to some of the state tables, and theorems are proved about the resulting matrices and a corresponding normal form automaton. Finally, we analyze fixed, deterministic automaton nets in terms of cycles.

OBJECTIVE

The aim of this paper is to develop systems and techniques of mathematical logic which are useful in analyzing the structure and behavior of automata.

1. INTRODUCTION*

We are concerned in this paper with the use of logical systems and techniques in the analysis of the structure and behavior of automata.

In Section 2 we discuss automata in general. A new kind of automaton is introduced, the growing automaton, of which Turing machines and self-duplicating automata are special cases. Thereafter we limit the discussion to fixed, deterministic automata and define their basic features. We give methods of analyzing these automata in terms of their states. Four kinds of state tables — complete tables, admissibility trees, characterizing tables, and output tables — are used for this purpose. These methods provide a decision procedure for determining whether or not two automaton junctions behave the same. Finally, a class of well-formed automaton nets is defined, and it is shown how to pass from nets to state tables and vice versa. A coded normal form for nets is given.

In Section 3 we show how the information contained in the state tables can be expressed in matrix form. The (i,j) element of a transition matrix gives those inputs which cause state Si to produce state Sj. Various theorems are proved about these matrices, and a corresponding normal form (the decoded normal form or matrix form) for nets is introduced.

In Section 4 we first show how to decompose a net into one or more subnets which contain cycles but which are not themselves interconnected cyclically. We then discuss the relation of cycles in nets to the use of truth functions and quantifiers for describing nets. We conclude by relating nerve nets to other automaton nets.

*We wish to thank Irving M. Copi, Calvin C. Elgot, John H. Holland, and especially Jesse B. Wright for many helpful suggestions.

2. AUTOMATA AND NETS

2.1. FIXED AND GROWING AUTOMATA

To begin with we will consider any object or system (e.g., a physical body, a machine, an animal, or a solar system) that changes its state in time; it may or may not change its size in time, and it may or may not interact with its environment. When we describe the state of the object at any arbitrary time, we have in general to take account of: the time under consideration, the past history of the object, the laws governing the inner action of the object or system, the state of the environment (which itself is a system of objects), and the laws governing the interaction of the object and its environment. If we choose to, we may refer to all such objects and systems as automata.

The main concern of this paper is with a special class of these automata: viz., digital computers and nerve nets. To define this class as a subclass of automata in general, we will introduce various simplifying and specifying assumptions. It will become clear that in adopting each assumption we are making a deliberate and somewhat arbitrary decision to confine our attention to a certain subclass of automata. For example, by altering some of the decisions we arrive at the rather interesting concept of indefinitely growing automata, which include the well-known Turing machines as particular cases.

A. Discrete Units. — The first decision we make is to use only discrete descriptions; this means that between any two moments and between any two elements (particles, cells, etc.) there is a finite number of other moments or elements. This decision is a consequence of our interest in digital computers. It carries with it a commitment to emphasize discrete mathematics in the analysis of the systems under investigation: recursive function theory and symbolic logic.
Hence our problems differ from the more common ones in which time, the elements of a system, the states (color, hardness, etc.) of an element, and the interaction of an object with its environment are all treated as continuous. When this is done, the emphasis is naturally placed on classical analysis and its applications. In contrast with digital computers, which use discrete units, analog computers simulate in a continuous manner, and for the study of them continuous (nondiscrete) mathematics is especially appropriate.

It should be remarked that, though discrete mathematical systems are generally more useful for the investigation of discrete automata, a very common (and perhaps at the present time even primary) scientific use of digital computers is to represent (approximately, of course) continuous mathematics (e.g., to solve differential equations). In effect, this procedure involves finding discrete mathematical systems which adequately approximate the

particular continuous system at hand.

B. Deterministic Behavior. — We will not deal with elements capable of random (nondeterministic) behavior. Rather, we will assume that at each time the complete state of an object is entirely determined by its past history, including the effects of its environment throughout the past. Statistics could be employed to treat deterministic automata containing large numbers of elements (cf. the kinetic theory of gases), but we will not do this here.

C. Finitude of Bases. — We will always exclude an "actual infinity" of states: each system will contain a finite number of elements at each time, and each element can be in only one of a finite number of states at any time. We will reserve for an independent decision the possibility of a "potential infinity" of elements and states, i.e., whether or not the number of elements or states of each element may change with time and in particular increase without bound.

The finitude of time requires separate treatment. The discreteness condition (A) implies that there are never infinitely many moments between two given times. But this still leaves open the question as to how many different times are to be considered. For a general theory it would be inelegant to take a definite finite number, say, 10^18 or 10^27, as an upper bound to the number of possible moments. It seems desirable to allow time to increase indefinitely and to study the behavior of an object through all time. The one remaining choice is the question of an infinite past. The assumption of an infinite past has the advantage of making the entire time sequence homogeneous, thereby destroying a major part of the individuality of each moment of time. However, in the presence of our deterministic assumption an infinite past would be inconvenient (for reasons to be given in Section 3.3), so we stipulate that there is a first moment of time (zero).
The internal state of the automaton at time zero will be some distinguished state. Speaking arithmetically, the two alternatives are to represent moments of time by positive (or nonnegative) integers or by all integers (including the negative ones).

(C1) An infinite future but a finite past: every nonnegative integer represents a time moment and vice versa; the number zero represents the beginning of time.

(C2) An infinite future and an infinite past.

Our decision is to adopt the first of these two alternatives.

D. Synchronous Operations. — Some computers contain component circuits which operate at different speeds according to their function, location,

the time, or the input at the time. Such circuits are called "asynchronous," in contrast to synchronous circuits which work on a uniform time scale (usually under the direction of a central control clock). There are really two aspects to asynchronous operation: (1) the actual intervals of time between operations vary with the time; (2) different parts of the computer pass through their states at different rates, these rates depending, in general, on the information held in the circuits.

We will assume that all elements of a given system (net) operate at the same rate, and we will call this the "synchronous" mode of operation. In particular, this means an element may operate at each nonnegative integer t. This assumption does not imply that all time intervals are equal; e.g., the interval from time 7 to time 8 can be one microsecond, while the interval from time 8 to time 9 is 12 hours. Thus our assumption does not exclude possibility (1) of the preceding paragraph. It involves some restriction with regard to point (2), but not as much as one might think. For example, we can represent different parts of an asynchronous machine by subsystems operating at different rates, interconnecting these with logical representations of interlocks. And just as discrete systems (e.g., digital computers) can be used to simulate continuous systems [cf. the second paragraph under the discussion of assumption (A)], so synchronous computers can be used to simulate asynchronous ones.

E. Determination by the Immediate Past and the Present. — The stipulation that the behavior of an automaton at time t+1 is determined by the past leaves unsettled the question as to how the remote past influences the present.
We will assume that such influence occurs indirectly through the states at intermediary time moments, so that to calculate the state of the automaton at t+1 it is not necessary to know its state for any time earlier than the preceding moment t. Thus we assume that for the present to be influenced by an event which happened in the remote past, a record of that event must have been preserved internally during the intervening time. This postulate corresponds closely to the actual mode of operation of automatic systems. Of course, a computer with, e.g., a microsecond clock, may have delay lines, e.g., 500 microseconds long, so that a stored pulse is not accessible to the arithmetic unit at every clock time; but such a delay line is naturally represented as a chain of 500 unit (microsecond) delays, with the input and output of the chain connected to the rest of the computer. Indeed, the assumption we are now making causes no significant loss in generality, because for each fixed N such that an automaton A can always remember what happened during the N immediately preceding moments (but not more), we can easily devise an automaton A' which, though capable of directly remembering only the immediate past, simulates completely the automaton A.

When we think of automata which have unbounded memory or, in particular, ones which can remember everything that has happened in the past, we encounter a basically different general situation. In such cases, the

information to be retained increases with time, whence for any automaton of fixed capacity there may come a time beyond which it can no longer hold all the information accumulated in its life history. Thus, for a machine to remember all past history it is necessary and sufficient that it grow in some suitable fashion. Such growth can be accomplished by the appearance at each moment of a new delay element for each automaton input. In short, in the presence of the postulate of determination by the immediate past, the alternative of remembering all past history is best studied in connection with growing automata.

Another problem connected with determination by the immediate past is the role of present inputs in determining the outputs. It would seem natural to stipulate that the environment at time t+1 cannot influence the outputs of the automaton at time t+1 but only at time t+2. That is the case with neural nets, since each neuron has a delay built into it. Strictly speaking, it is true in computer nets as well, but because the delay in a switch may be small compared to the unit delay of the system, it is convenient to regard this switching action as instantaneous. Thus the well-formed nets of Burks and Wright,1* which will be discussed further in Section 2.3, permit outputs which are switching functions of the inputs.

F. Automata and Environment. — Supposedly the change of state of a solipsist is independent of the environment, and the environment is not affected by the solipsist; cf. Leibnitz's concept of a "monad." A "solipsistic" automaton would be one which (1) changed its state independently of any environmental changes and (2) whose output did not influence the environment. We are not primarily interested in such automata. Rather, we will consider how the environment (the inputs) affects the automaton but not how the output of an automaton affects its environment.
The last point is related to the ordinary method of representing inputs and outputs. It is well known that there are significant and useful logical symbols for the internal action of automata (see Section 2.3). The standard method of representing inputs and outputs is in terms of the binary states of input and output wires. This is not directly applicable in simulating such "inputs" as light and sound waves, physical pressures, etc., and such "outputs" as physical actions. Theoretically, we can, however, just as well interpret certain standard binary elements as representing these. For some purposes, we may want to add as new primitives representations of the lights and keys commonly used on computers (see Burks and Copi,2 p. 306), as well as symbols representing additional methods of sensing and other methods of acting on the environment that automata are capable of. Von Neumann has done some work along this line, but it has not been published (see Shannon,3 p. 1240).

*References are to the bibliography at the end of the paper.

It might be suggested that one ought to devise a symbolism for

magnetic or paper tape input and output to a computer; but that is unnecessary because such devices are very well represented by net diagrams for a serial type of storage (see Burks and Copi,2 p. 313, ftn. 9). We will not here attempt to devise separate notations for the various kinds of interactions possible between an automaton and its environment, but will content ourselves with the customary way of simulating inputs and outputs by binary states of wires.

Even subject to this restriction there are a number of alternatives to consider. The most general case would be to identify the environment partly or wholly with certain automata so that interaction occurs among these and the particular automaton under study (cf. the many-body problem of mechanics). A simple case would be to identify the whole environment with another automaton (cf. the two-body problem of mechanics). Accordingly, we have the following alternatives.

(F0) An object changes its state automatically, independently of the environment.

(F1) An automaton changes its state in accordance with its structure and the inputs (the environment).

(F2) Different automata interact with one another.

We will be primarily concerned with (F1). In other words, we will assume that the automaton has no influence over what inputs it receives, and that in general the inputs do have effects on the internal state (i.e., state of internal cells) of the automaton. As a consequence, we can divide the units or atoms of which an automaton is compounded into two classes: input cells and internal cells, or input and internal wires, or input and internal junctions. The situation under (F0) becomes a special case of that under (F1) when either the number of input cells (or wires) is zero (a limiting case) or the effects of the inputs are more or less canceled so that the automaton behaves in an input-independent manner.
The latter case is exemplified by a logical element whose output wire is always active regardless of the state of the input wire. (F2) may also be regarded as a special case of (F1). Since the inputs and outputs of an automaton are wires, two automata may be interconnected to produce a single (more complex) automaton, of which the original automata are parts or subsystems. Thus we regard (F2) as a special case of analyzing a complex machine into interrelated submachines.

A common application of this concept is to be found in the design of a general-purpose computer. Typically such a computer is divided into Arithmetic Unit, Storage, Input-Output Unit, and Control (see, for example, Burks and Copi,2 p. 301). The utility of making such divisions lies partly in the relatively independent functioning of these units and partly in the (related) fact that it is conceptually easier to understand what goes on in terms of these parts. The kind of structuring under discussion usually occurs at more than one level; e.g., the Parallel Storage (of Burks and Copi,2 pp. 307-313) divides naturally into a switch and 4096 bins (each storing a word), and the bins are in turn "composed" of cells (each storing one bit of a word).

G. Exclusion of Growth. — While we have adopted the postulate of finite bases, we have yet to decide whether the structure of an automaton, the number of its cells, or the number of possible states of each cell are to be allowed to change with time. If changes are permitted but are confined by a preassigned finite bound, we might as well have used a fixed automaton which embodies this bound to begin with. Hence the really interesting new case is that of a growing automaton which has no preassigned finite upper bound on the possible number of cells or cell states. Structural changes (e.g., rewiring a given circuit) do not seem to generate unbounded possibilities, although in special studies, such as investigations into the mode of operation of the human brain (cf. Rochester et al.4), the use of a structurally changing automaton is more illuminating than the use of the corresponding fixed automaton. In any case, we can, theoretically, reduce all three kinds of growth to increase in either the number of cells or the number of possible cell states: given any growing automaton, we can find another which functions in the same way but grows only in the number of its cells (or, alternatively, only in the number of possible states for each cell). For any and all forms of growth, it seems natural (in the context of our deterministic assumption) to require that the process be effective (recursive).
We will therefore assume once and for all that each definition of a growing automaton determines an effective method by which we can, for each time t, construct the automaton and determine its state for that time. An important particular case corresponds to primitive recursive definitions, each of which yields a method by which we can construct the automaton for t = 0 and, given the automaton and its state at t, construct the automaton at t+1. The growth need not depend on the state of the inputs, but the possibility of its doing so is provided for. Moreover, "growth" is taken to include shrinkage as well as expansion. Thus we could have a "growing" computer which expands and contracts as the computation proceeds, having at each time period just the capacity needed to store the information existing at that time.

Two types of automata, fixed and growing, can be characterized as follows:

(G1) The structure and cells of the automaton are fixed once and for all, and each cell is capable of a fixed number of states.

(G2) The automaton may grow (expand and contract) in time in a predetermined effective manner.
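The primitive-recursive style of definition just described can be sketched in modern terms. The following Python fragment is only an illustration: the particular growth rule (one new quiet cell appears at each moment) is our own invented example, not one given in the text.

```python
def grow(t):
    """Effectively construct the growing automaton at time t.
    Base case: at t = 0 the automaton is a single firing cell.
    Growth rule (an invented example): one new quiet cell per moment."""
    if t == 0:
        return [1]                 # the automaton and its state at time zero
    return grow(t - 1) + [0]       # the automaton at t, built from that at t-1

# The construction is effective: for each t we can actually build the net.
sizes = [len(grow(t)) for t in range(4)]
```

Shrinkage could be modeled in the same style by letting the rule remove cells as well as add them.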

In this paper we will be concerned entirely with fixed automata, except for some remarks on growing nets in this subsection. These remarks are intended to elucidate the concept of a growing net and to indicate why we think it is important. But before beginning on them we wish to specify (G1) further by stipulating that each cell, junction, or wire is capable of two states: on and off, firing and quiet; we will later correlate these with one and zero, and with true and false.

We could of course allow each cell to have any fixed finite number of possible states and different cells to have different numbers of states. But it is better to fix the number of states at the constant two. There are a number of reasons for this. The wires and cells of many automata and most digital computers do in fact have two significant states. When this is not the case we can always represent a cell with q possible states by p two-state cells for any p ≥ log2 q (e.g., ten of the sixteen different states of four binary net wires can represent ten discrete electrical states of a single circuit wire), so by adapting our system to the commonest case we do not lose the power to treat the nonbinary cases. This commitment to two-valued logic need not blind us to the fact that there may be cases where multivalued logic is more convenient; the point is that our logic can handle these cases, and we have no interest at the moment in exploiting whatever advantages multivalued logic might have here.

We return now to growing nets, mentioning first some special cases of them already known. A Turing machine (see Turing;5 Kleene;6 Wang7,8) may be regarded as an automaton with a growing tape. Usually the tape is regarded as infinite, but at any time only a finite amount of information has been stored on it, so it is essentially a finite but expanding automaton net (cf. Burks and Copi,2 p. 313, ftn. 8).
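The representation of a q-state cell by p two-state cells for p ≥ log2 q, mentioned above, can be checked directly. A minimal sketch in Python (the helper names are ours):

```python
from math import ceil, log2

def cells_needed(q):
    """Smallest p with 2**p >= q, i.e., the least integer p >= log2(q)."""
    return max(1, ceil(log2(q)))

def encode(state, p):
    """The states of the p binary cells representing state number 0..q-1
    (least significant cell first)."""
    return [(state >> i) & 1 for i in range(p)]

# As in the example above: ten states need four two-state cells,
# using ten of the sixteen possible combinations.
p = cells_needed(10)
code = encode(9, p)
```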
If, in a Turing machine, we take as input cells the squares included in the minimum consecutive tape portion which contains all marked squares at the moment, then the growth consists simply of the expansion and contraction of the tape. Or if we use the formulation of Wang,7 which eliminates the erasure operation, a Turing machine is a growing automaton with an even more limited type of growth — namely, an expansion of the tape. In contrast, a growing automaton may in general grow anywhere, not only at the periphery but also internally (by having new elements arise between elements already present).

Though a Turing machine is a special kind of growing automaton, it has as much mathematical (calculating) ability as any growing automaton; for every type of computation can be done by some Turing machine, and the mathematical ability of an automaton is limited to computation. In view of this situation one might wonder why the general concept of a growing net is of interest. Its importance can be shown by the following considerations.

John von Neumann has developed some models of self-reproducing machines (von Neumann;9 Shannon,3 p. 1240; Kemeny,10 pp. 64-67). These are machines which grow until there are two machines, connected together, the

original one and a duplicate of it; the two machines may then separate. Hence they are clearly cases of growing nets.

The basic process to be simulated or modeled in the growth and reproduction of living organisms is the complete process from a fertilized egg to a developed organism which can produce a fertilized egg. For this purpose we would need to design a relatively small and simple automaton which would grow to maturity (given an appropriate environment) and would then produce as an offspring a new small automaton. Von Neumann's models can be construed either at the level of cells or at the level of complete organisms, but in either case they seem to provide only a partial solution. The process of cell duplication is only one component of the complete process described above, and the self-reproduction of a completely developed entity omits the important process of development from infancy to maturity. Hence the model we suggest is a type of growing automaton not yet covered in the literature.

A second novel type of growing automaton is a generalized Turing machine in which growth is permitted at points other than at the ends of the tape. A typical Turing machine, although logically powerful, is clumsy and slow in its operation. Consequently, to design a special-purpose Turing machine or to code a program for a universal Turing machine is a complicated and laborious process (although a completely straightforward one). What complicates the task is the linear arrangement of information on a single tape, which requires the tape or reading-head to be moved back and forth to find the information. That movement may be reduced somewhat by shifting the old information around to make room for the new information, but this operation also contributes to the complexity of the whole process. To develop this point further, we will discuss in more detail the relation between recursive functions and Turing machines.
Turing5 worked in terms of computable numbers; Kleene6 and Wang7,8 work in terms of recursive functions. Since our discussion has been in terms of functions, we will use Kleene's and Wang's works as our references. The basic mathematical result underlying the significance of Turing machines is the following. Mathematicians have rigorously characterized a set of functions, called partial recursive functions, and this set of functions is in some sense equivalent to what is computable (Kleene,6 Ch. XII). Each partial recursive function is definable by a finite sequence of definitions, each definition being of one of six possible forms (Kleene,6 pp. 219 and 279). It is known how to translate each sequence of definitions into a special-purpose Turing machine and into a program for a general-purpose Turing machine (Kleene,6 Ch. XIII; Wang7). This translation, while rigorous and straightforward, is often complicated for the reasons, among others, mentioned in the preceding paragraph. Simpler and more direct translations can be made by using growing nets in which growth is allowed to occur wherever it simplifies the construction, not just at two places, i.e., at the ends of the tape,

where it is allowed to occur in the conventional Turing machine. Such growing nets will be generalizations of a Turing machine.

We can arrive at a third novel kind of growing automaton by generalizing a general-purpose computer in the way we generalized a Turing machine in the last paragraph. The usual general-purpose computer consists of a fixed internal computer together with one or more tapes. As in the Turing machine, these tapes may be regarded as expanding at the ends whenever needed; in practice the expansion is handled by an operator replacing tape reels, using either blank tape or tape reels from a library of tapes. In writing programs for such a machine, the programmer needs to keep track of two things: (1) the development of the computation, in terms of the growth of old blocks of information and the appearance of new blocks of information; (2) shifting the information from one kind of storage to another (e.g., from a serial to a parallel storage) and moving the information about within a storage unit. Both of these components of computation are essential. But (1) seems more basic for understanding the nature of the computation, and at any rate it is helpful to be able to study each of the components in isolation. This can be done with growing nets, for we can eliminate (2) by providing for growth wherever it is needed to accommodate new information or new connections to old information.

We feel that the study of growing automata would contribute to the theory of automatic programming. The development of a powerful theory of automatic programming has so far been impeded by the many details involved in actual computation; by eliminating (2) we would eliminate many of these details and would focus attention on the more basic component (1).

We turn now to fixed automata which satisfy the assumptions (A), (B), (C1), (D), (E), (F1), and (G1).
In summary, we arrive at the following definition of a (finite) automaton:

Definition 1: A (finite) automaton is a fixed finite structure with a fixed finite number of input junctions and a fixed finite number of internal junctions such that (1) each junction is capable of two states, (2) the states of the input junctions at every moment are arbitrary, (3) the states of the internal junctions at time zero are distinguished,* and (4) the internal state (i.e., the states of the internal junctions) at the time t+1 is completely determined by the states of all junctions at time t and the input junctions at time t+1, according to an arbitrary preassigned law (which is embodied in the given structure). An abstract automaton is obtained from an automaton by allowing an arbitrary initial internal state.

*This condition applies only to those junctions whose state at a given time does not depend on the inputs at the same time; cf. condition (4).
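A minimal concrete instance of Definition 1 can be sketched as follows (Python is used here for illustration; the particular number of junctions and the transition law are invented examples, not taken from the text):

```python
class FiniteAutomaton:
    """An instance of Definition 1: one input junction and two internal
    junctions, each capable of two states (0 = quiet, 1 = firing)."""

    def __init__(self):
        self.internal = (0, 0)  # (3): distinguished internal state at time zero

    def step(self, inp):
        """(4): the internal state at t+1 is determined by the junction
        states at t together with the input junction state at t+1."""
        a, b = self.internal
        self.internal = (inp, a ^ b)  # the arbitrary preassigned law (invented)
        return self.internal

aut = FiniteAutomaton()
history = [aut.step(i) for i in (1, 1, 0)]  # internal states at times 1, 2, 3
```

An abstract automaton would differ only in letting `self.internal` start in any of the four possible pairs rather than the distinguished (0, 0).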

Several aspects of this definition call for comment. In it automaton states have been defined in terms of junction states. This follows Burks and Wright,1 where each wire has the state of the junction to which it is attached, and the nuclei or cell bodies are not regarded as having states but as realizing transformations between junctions or wires. An alternative would be to define automaton states in terms of cell states. Condition (4) places some restrictions on the way automaton elements are to be interconnected, but it does not completely specify the situation; this will be discussed further in Section 2.3.

The initial state of the internal junctions also calls for discussion. In the definition of an abstract automaton this is taken more or less as an additional input which can be changed arbitrarily. As a result, two abstract automata, to be equivalent, must behave the same for each initial state picked for the pair of them. On the other hand, for most applications to actual automata, it is best to assume a single initial state.

The word "structure" in the above definition can be avoided if we speak exclusively in mathematical terms and consider the transformations realized by automata and abstract automata. We will do so in the next subsection, returning to a more detailed investigation of the structure of automata in the following subsection (viz., 2.3).

2.2. CHARACTERIZING TABLES AND A DECISION PROCEDURE

Consider for a moment automata whose internal states are determined only by the immediate past and hence are not influenced by the present inputs. Let there be M possible input states, I0, I1,..., IM-1, and N possible internal states, S0, S1,..., SN-1. Even though each junction of an automaton is capable of only two states, we do not require M and N to be powers of two. For one thing, when an automaton is being defined, the values of M and N are stipulated and are not necessarily powers of two.
Also, when an automaton is given, not all possible combinations of internal junction states may occur, because of the structure of the automaton, and not all possible combinations of input junction states may be of interest (because, e.g., the automaton is to be embedded in a larger automaton where not all of the possible inputs will be used).

We will assign nonnegative integers to the input and internal states. Let I and S range over these numbers, respectively. A complete automaton state is represented by the ordered pair <I,S>. (If the automaton has no inputs, then there are no I's and the complete automaton state is just S.) Let S0 be the integer assigned to the distinguished initial internal state; S0 will usually be zero, but not always. An abstract automaton differs from a nonabstract one just in not having a distinguished initial state.

Since the input states are represented by numbers, a complete history of the inputs is a numerical function from the nonnegative integers 0, 1, 2,... (representing discrete times) to integers of the set {I}. That is, it is an infinite sequence I(0), I(1), I(2),..., I(t),...; it may be viewed as representing the real number I(0) + [I(1)/K] + [I(2)/K^2] + ..., in which K is the maximum of the set {I}.

By our convention that the initial internal state is S0 we have S(0) = S0. By the assumption of complete determination by the immediate past we have for all t

S(t+1) = τ[I(t), S(t)],

where τ is an arbitrary function from the integer pairs {<I,S>} to the integers {S}. Or, in other words, since the input function I and the time t are the independent variables, S is an arbitrary function of two arguments (one ranging over functions of integers and the other ranging over integers) whose values are integers. It follows by a simple induction that for each infinite sequence I(0), I(1),..., I(t),..., repeated application of the function τ yields a unique infinite sequence S(0), S(1),..., S(t),..., with S(0) = S0. Since for many purposes we are interested not only in the existence of values of the function τ, but also in finding them, we will assume that τ is defined effectively, though actually much of our discussion would be valid without this restriction.

We next broaden our theory so as to include automata whose internal state at t+1 depends also on the inputs at t+1. To do this we allow P "output" states O0, O1,..., OP-1 such that

O(t) = λ[I(t), S(t)],

where λ is again an arbitrary effective function. In general, the complete state of an automaton at any time is given by the ordered triad <I,S,O>. In specific cases I, S, or O may be missing. We can now give an analytic definition of automata and abstract automata by means of these transformations.
Definition 2: An automaton is in general characterized by two arbitrary effective transformations (τ and λ) from pairs of integers to integers. These integers are drawn from finite sets {I}, {S}, and {O}; {S} contains a distinguished integer S0. The transformations are given by

S(0) = S0
S(t+1) = τ[I(t), S(t)]
O(t) = λ[I(t), S(t)].
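Definition 2 can be simulated directly when τ and λ are given as finite tables. The following is a sketch in modern terms; `tau` and `lam` are our names for the paper's τ and λ, and the particular example automaton is ours.

```python
# Sketch of Definition 2: an automaton given by two finite tables tau and
# lam mapping <I,S> pairs to the next internal state and to the output.

def simulate(tau, lam, S0, inputs):
    """Return the sequences S(0),...,S(T) and O(0),...,O(T-1)."""
    S, states, outputs = S0, [S0], []
    for I in inputs:
        outputs.append(lam[(I, S)])   # O(t) = lam[I(t), S(t)]
        S = tau[(I, S)]               # S(t+1) = tau[I(t), S(t)]
        states.append(S)
    return states, outputs

# A 2-input-state, 2-internal-state example: the state toggles when I = 1,
# and the output simply reports the current internal state.
tau = {(0, 0): 0, (1, 0): 1, (0, 1): 1, (1, 1): 0}
lam = {(I, S): S for I in (0, 1) for S in (0, 1)}
states, outputs = simulate(tau, lam, 0, [1, 1, 0, 1])
print(states)   # [0, 1, 0, 0, 1]
print(outputs)  # [0, 1, 0, 0]
```

Note that, exactly as the text observes, `tau` involves a time shift (it fills in S(t+1)) while `lam` does not.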

If we omit the condition S(0) = S0, we obtain an abstract automaton.

Thus, speaking analytically, the study of finite automata is essentially an investigation of the rather simple class of transformations τ and λ in the above definition. The definition of the class as thus given is superficially very general in allowing τ and λ to be arbitrary calculable functions. On account of the very restricted range and domain of these functions, however, that generality is only apparent. We can find a simple representation of the class of automaton transformations in the following way. Since τ is effective, and since its domain and range are finite, we can effectively find for each pair <I,S> the value of τ[I(t), S(t)]. Hence we can produce a table of M x N pairs <<I,S>, S'> such that if <I,S> is part of the state at t, then S' is part of the state at t+1. We shall call this set the M-N complete table of the given automaton. Each complete table is a definition of the function τ. In a similar way we can construct an output table for the automaton, each row being of the form <<I,S>, O>. Such a table defines the function λ.

It is important that the function τ and the complete table involve a time shift, while the function λ and the output table do not. Hence for an investigation of the behavior of an automaton through time the complete table is basic, the output table derivative. That is, by means of the complete table we can compute S(1), S(2), S(3),... from the inputs I(0), I(1), I(2),... and leave the determination of O(0), O(1), O(2),... for later. Note that to stipulate that the output at time t cannot be influenced by the input at the same time is to require λ to be such that O(t) = λ[S(t)]. When this is the case the states O(t) and the output table can be dispensed with, since O(t) is completely dependent on S(t).
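Producing the complete table is a purely mechanical tabulation, which can be sketched as follows (our code; the example transformation is hypothetical):

```python
# Since tau is effective with finite domain, the M-N complete table can be
# produced mechanically: one row <<I,S>, S'> per pair <I,S>.

def complete_table(tau, M, N):
    return {(I, S): tau(I, S) for I in range(M) for S in range(N)}

# Example: a 2-3 automaton with S' = (S + I) mod 3.
table = complete_table(lambda I, S: (S + I) % 3, 2, 3)
print(len(table))     # 6, i.e., M x N rows
print(table[(1, 2)])  # 0
```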
(When the behavior of individual junctions is being investigated, the output table may nevertheless be convenient.) For these various reasons the state numbers {I} and {S} are more basic than the state numbers {O}, and the complete table is much more important than the output table. This being so, we will often concentrate on automata whose internal states are determined only by the immediate past and ignore the output table.

Since the states S at t are so defined that they do not depend on the inputs at t, any I can occur with any S, and hence there are M x N possible pairs <I,S> in the complete table. There are N possible values of S'. Hence there are N^(M x N) possible complete tables for an M-N automaton.

We will say that two abstract M-N automata, in whatever language they may be described, are equivalent just in case they have the same complete table. That this definition is proper can be seen from the following considerations. If the complete tables are the same, then for the same initial internal state and the same input functions, the initial complete states in the two automata are the same, and the same complete state at any moment plus the same input functions always yield the same complete state at the next moment. On the other hand, if two complete tables are different, there must be a pair <<I,S>, S'> in one table but not in the other, such that by choosing suitable input functions and a suitable initial internal state, we can have the complete state represented by <I,S> realized at time zero, and yet the complete states at time one will have to differ. Since we can find effectively the complete table of a given automaton (the details of this process will be explained in the next subsection) and compare effectively whether two complete tables are the same, we have a decision procedure for deciding of two given abstract M-N automata whether or not they are equivalent.

The situation is more complex with automata which have predetermined initial internal states, for two M-N automata with different complete tables may yet behave the same (be equivalent). This is possible because it can happen that for every pair <<I,S>, S'> which occurs in one table but not in the other, we can never arrive at the internal state S from the distinguished initial internal state S0, and hence can never have the complete state <I,S>, no matter how we choose the input functions. In such a case, two M-N automata with the same prechosen initial internal state may behave the same under all input functions, despite the fact that they have different complete tables. Hence, identity of complete tables is a sufficient but not a necessary condition for equivalence of two automata.
To secure a necessary and sufficient condition, it suffices to determine all the internal states which the given initial internal state can yield when combined with arbitrary input words, then to repeat this process with the internal states thus found, and so on. If the two complete tables coincide insofar as all the pairs occurring in these determinations are concerned, the two automata are equivalent. To establish that such determinations will always terminate in a finite time requires an argument: since there are only finitely many pairs in each complete table, the process of determination will repeat itself in a finite time.

To describe the procedure exactly, we introduce a few auxiliary concepts. We can think of a tree with the chosen initial internal state S0 as the root. From the root M branches are grown, one for each possible input word Ii, with the corresponding internal state at the next moment, S0i, at the end. These M branches can be represented by <<I0, S0>, S00>,..., <<IM-1, S0>, S0,M-1>, which all belong to the complete table. If all the numbers S00,..., S0,M-1 are the same as S0, the tree stops its growth. If not, M branches are grown on each S0i such that S0i ≠ S0, and such that S0i does not equal any S0p for which M branches have already been grown, and we arrive at S0i0,..., S0i,M-1. If all the numbers S0ij (i, j arbitrary) already occur among S0, S00,..., S0,M-1, then the tree stops its growth. If not, then M branches are grown on each S0ij such that it does not equal S0, or any of the S0p, or any S0pq for which M branches have already been grown. That is, whenever

in the construction we come to an S, if it is already on the tree we stop; else we grow M branches on it, one for each I. This process is continued as long as some new internal state is introduced at every height. Since there are altogether only N a priori possible internal states, the height (i.e., the number of distinct branch levels) of the tree cannot exceed N. For any M-N automaton, we can construct such a tree, which will be called the admissibility tree of the automaton. We can, of course, start with any state S as the assumed initial state, and this gives us an admissibility tree relative to S for an abstract automaton. Those values of S (including S0) which appear in the admissibility tree are called admissible internal states. All other values of S are inadmissible.

If we collect together the ordered pairs which represent branches of the admissibility tree, we obtain a (proper or improper) subset of the complete table, which we shall call the M-N characterizing table of the automaton. (As in the case of the admissibility tree, it is easy to define the concept of a characterizing table relative to S for an abstract automaton.) In order that two M-N automata be equivalent (i.e., behave the same), it is necessary and sufficient that they have the same characterizing table. Since there is an effective method of deciding whether two M-N automata have the same characterizing table, we have a decision procedure for testing whether two M-N automata are equivalent.

Quite often we are not interested in the whole automaton, but rather in the transformations which particular cells (junctions, wires) of an automaton realize. To discuss this aspect of the situation we need to correlate states of automata with states of the elements of automata.
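The admissibility computation and the resulting equivalence test can be sketched as follows (our code; a worklist stands in for the tree, visiting each admissible state once, which has the same effect as growing M branches on each new state):

```python
# Sketch: grow branches from S0 for each input until no new internal state
# appears; the pairs met on the way form the characterizing table, and
# identity of characterizing tables decides equivalence of two automata
# with distinguished initial states.

def characterizing_table(table, M, S0):
    admissible, frontier, char = {S0}, [S0], {}
    while frontier:
        S = frontier.pop()
        for I in range(M):               # M branches on each new state
            Sp = table[(I, S)]
            char[(I, S)] = Sp
            if Sp not in admissible:
                admissible.add(Sp)
                frontier.append(Sp)
    return char

# Two different complete tables, same behavior from S0 = 0: they disagree
# only on the inadmissible state 2 (hypothetical one-input example).
t1 = {(0, 0): 1, (0, 1): 0, (0, 2): 2}
t2 = {(0, 0): 1, (0, 1): 0, (0, 2): 0}
print(t1 == t2)                                                          # False
print(characterizing_table(t1, 1, 0) == characterizing_table(t2, 1, 0))  # True
```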
We will do this in two stages: first, by putting state numbers in binary form (in the present subsection), and, next, by correlating zero and one with junction states (in the next subsection).

The state numbers I and S are nonnegative integers. The binary representation of states is made simply by putting each state number in binary form, making all the I the same length, and making all the S the same length (by adding vacuous zeros at the beginning when necessary). Let m, n be the numbers of bits of I, S, respectively, in a characterizing table or complete table in binary form. Clearly m is the least integer as large as (or larger than) the logarithm (to the base two) of the maximum I; similarly with n. Let the bits of I be called A0, A1,..., Am-1, so that I = A0⌢A1⌢...⌢Am-1, where the arch signifies concatenation. Similarly, let the bits of S be called B0, B1,..., Bn-1, so that S = B0⌢B1⌢...⌢Bn-1. In the next subsection we will associate the A's with input junctions and the B's with internal junctions. We speak of the characterizing table in binary form as an m-n characterizing table.
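Putting a table in binary form can be sketched as follows (our code; we compute the bit widths from the largest state numbers under the minimal assignment described below):

```python
# Sketch: every I is written with m bits and every S with n bits, padding
# with vacuous zeros on the left; states become bit tuples (A's and B's).

def to_bits(x, width):
    return tuple((x >> (width - 1 - i)) & 1 for i in range(width))

def binary_table(table, M, N):
    m = max(1, (M - 1).bit_length())   # bits for the largest input number
    n = max(1, (N - 1).bit_length())   # bits for the largest state number
    return {(to_bits(I, m), to_bits(S, n)): to_bits(Sp, n)
            for (I, S), Sp in table.items()}

# The 2-3 example table from before, S' = (S + I) mod 3.
table = {(I, S): (S + I) % 3 for I in range(2) for S in range(3)}
bt = binary_table(table, 2, 3)
print(bt[((1,), (1, 0))])   # I=1, S=2 yields S'=0, i.e., (0, 0)
```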

It follows from our discussion of characterizing tables that the function τ of Definition 2, given by

S(0) = S0
S(t+1) = τ[I(t), S(t)],

is a rather simple primitive recursive function (with a finite domain) and that the function S(t) is defined primitive recursively relative to the input function I(t). If our interest is in the transformation realized by a particular internal junction, we use another primitive recursive function αi such that αi(n) gives the i-th binary digit of n (i = 0, 1,...). Hence each such junction realizes a transformation αi[S(t)], or Bi(t), which is primitive recursive relative to I(t) (Burks and Wright,1 Theorem XIV; Kleene,11 Theorem 8).

Since the magnitudes of m and n affect the number of junctions of the corresponding automaton, it is of interest to obtain a minimal representation in terms of bits. Given a characterizing table, one can so rewrite the state numbers as to minimize m and n. This is accomplished by so assigning the numbers that the largest I is smaller than the least power of 2 greater than or equal to M, and similarly for S and N. A special case occurs when the states I0, I1,..., IM-1 are assigned the numbers 0, 1,..., M-1, respectively, and the states S0, S1,..., SN-1 are assigned the numbers 0, 1,..., N-1, respectively (note that the distinguished state S0 is assigned the number zero). A characterizing table put in this form is said to be in coded normal form. Automata nets corresponding to this form will be discussed in the next subsection. (Note that minimizing a complete table does not suffice here, because the number of inadmissible states may be such as to require more bits for representing the set of all states than for representing the set of admissible states.) Another special type of automaton is the decoded normal form automaton; it is of interest in connection with the application of matrices to the analysis of nets.
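The coded normal form numbering just described, and the decoded (one-state-per-bit) numbering defined next, can be contrasted in a small sketch (our code and names):

```python
# Sketch of the two normal-form numberings: coded (states 0..N-1 in the
# fewest bits) and decoded (state Si gets a single one in bit position i,
# so S0 is 100...0).

def coded(i, N):
    n = max(1, (N - 1).bit_length())
    return tuple((i >> (n - 1 - k)) & 1 for k in range(n))

def decoded(i, N):
    return tuple(1 if k == i else 0 for k in range(N))

print([coded(i, 6) for i in range(3)])  # [(0,0,0), (0,0,1), (0,1,0)]
print(decoded(0, 6))                    # (1, 0, 0, 0, 0, 0)
print(decoded(5, 6))                    # (0, 0, 0, 0, 0, 1)
```

Six internal states need only 3 bits in coded form but a full 6-bit word in decoded form, matching the six-state example in the text.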
In a decoded normal form characterizing table the input words are coded as for the coded normal form, but the internal states S0, S1,..., SN-1 are assigned the powers of two 2^(N-1), 2^(N-2),..., 2^0, respectively; here an N-bit word is needed to represent the N internal states. For six internal states we would have the numbers 100000, 010000, 001000, 000100, 000010, 000001; notice that S0 has a one on the extreme left, i.e., for S0, B0 is one and all other Bi are zero. Automata nets corresponding to decoded normal form characterizing tables will be presented in Section 3.

Each of the A's and B's (bit positions of the binary representations of I and S) is a binary variable. Hence the complete table and, more importantly, the characterizing table are (when put in binary form) kinds of truth tables. Thus we have to a large extent reduced the problem of automata description and analysis to the theory of truth functions. Of course the S' in

<<I,S>, S'> is the state at t+1, while S is the state at t, so we need to distinguish different times here and hence to use propositional functions (see Section 4). Nevertheless, as the characterizing table shows, we need only a very special form of the theory of quantifiers, in which each time step is a matter of the theory of truth functions. So great is the advantage of this partial reduction to the theory of truth-function logic that we will hereafter assume that all characterizing tables are in binary form. Consequently, we may henceforth use any of the techniques of the theory of truth functions which are applicable, not merely the (often cumbersome) truth-table technique.

We return now to the transformations realized by individual elements of the automata, which involves considering the bits of S, i.e., the B's. In the next subsection each B will be associated with an internal junction, so the analysis is also in terms of junctions. The basic problem is to compare the behavior of two bits or junctions, which may or may not belong to the same automaton or characterizing table. If the two junctions to be compared belong to the same automaton, then they realize the same transformation (behave the same) if and only if the corresponding bits in the S (or S') entries of the characterizing table are everywhere the same. (The state S0 need not appear in the S' column of <<I,S>, S'>; every other state which is in the S column is in the S' column and vice versa. Of course, all bits are the same in S0.) This is so because the values of S are the admissible states of the automaton, and at each moment the internal junctions of the automaton are in just one of these states. Hence the question as to whether two junctions of an automaton behave the same can be decided effectively.
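The test for two junctions of the same automaton reduces to a column comparison over the characterizing table, which can be sketched as follows (our code; states are bit tuples as in the binary form, and the example table is hypothetical):

```python
# Sketch: two junctions j and k of one automaton behave the same iff their
# bits agree in every S and S' entry of the characterizing table.

def same_junction(char_table, j, k):
    for (I, S), Sp in char_table.items():
        if S[j] != S[k] or Sp[j] != Sp[k]:
            return False
    return True

# Bits 0 and 1 track each other in every admissible state; bit 2 does not.
ct = {((0,), (0, 0, 0)): (1, 1, 0),
      ((0,), (1, 1, 0)): (0, 0, 1),
      ((0,), (0, 0, 1)): (0, 0, 0)}
print(same_junction(ct, 0, 1))  # True
print(same_junction(ct, 0, 2))  # False
```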
If the two junctions are in two different automata, then it is in general not necessary that the automata have the same number of junctions, i.e., that the characterizing tables have the same number of columns, for them to behave the same. Since the transformations depend ultimately only on the time and the inputs, the number of internal junctions need not be the same; since the behavior of an internal junction may be independent of some inputs, even the number of input junctions may be different. Suppose the two junctions belong to an m1-n1 and an m2-n2 automaton. Then a necessary and sufficient condition for these junctions to realize the same transformation is that there exist some new m3-n3 automaton, with n3 = n1+n2 and m1+m2 ≥ m3 ≥ max(m1, m2), which is obtained from the two given automata by connecting a subset of the inputs of one to a subset of the inputs of the other in a one-one fashion, and in which the internal junctions under consideration realize the same transformation. This supplies an effective procedure, because there is only a finite number of inputs to each automaton and hence only a finite number of ways to interconnect them, and for each way the question of equivalent behavior can be decided effectively. When the process is conducted

on the characterizing tables it involves identifying certain of the columns of the I part of the tables. It is allowed that the subset of inputs which are interconnected may be null, in which case m3 = m1 + m2 and the resulting automaton is just the result of juxtaposing the two original automata.

For just as the behavior of a junction or cell may be independent of one of the inputs, so it may be independent of all of the inputs. In this case the junction changes from one state at t to another at t+1 in a uniform manner independent of the states of the inputs at t. In other words, it realizes a transformation which is independent of the input functions; we will call such a transformation an input-independent transformation (it was called a "constant transformation" in Burks and Wright,1 p. 1358) and speak of the junction as an input-independent junction. The number of internal states of an automaton is finite, and an automaton is completely determined by the immediate past; hence all input-independent transformations must be periodic (Burks and Wright,1 Theorem I). Therefore no automaton can realize the simple primitive recursive input-independent transformation which has the value one if and only if t is a square (0, 1, 4, 9,...) (Burks and Wright,1 Theorem II; Kleene,11 Section 13).

A very special type of automaton is one whose internal junctions are all input-independent junctions. In such a net, which we call an input-independent net, there may be input junctions, but these cannot influence the internal state at any time. For each such automaton, complete and characterizing tables can be found which have no input states.
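Input-independence of a single junction can be read off the characterizing table, using the criterion developed below (for each S, the junction's bit in S' is the same no matter what I is). A sketch, with our names and a hypothetical two-bit automaton:

```python
# Sketch: junction j realizes an input-independent transformation when,
# for each admissible S, the j-th bit of S' does not depend on which I
# occurs; such a transformation is eventually periodic.

def input_independent(char_table, j):
    seen = {}   # S -> the unique value of bit j in S', if unique
    for (I, S), Sp in char_table.items():
        if seen.setdefault(S, Sp[j]) != Sp[j]:
            return False
    return True

# Bit 0 merely alternates (independent of the input); bit 1 copies the
# input and so is input-dependent.
ct = {((I,), (b0, b1)): (1 - b0, I)
      for I in (0, 1) for b0 in (0, 1) for b1 in (0, 1)}
print(input_independent(ct, 0))  # True
print(input_independent(ct, 1))  # False
```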
The admissibility tree provides an effective means for deciding whether the behavior of an internal junction or cell is independent of a specified input, and hence for deciding whether the behavior of a junction is independent of all inputs (i.e., realizes an input-independent transformation). For this purpose it is helpful to identify all occurrences of a given state on the admissibility tree. Then one can trace the behavior of the automaton by proceeding in cycles around the tree. We will not describe the procedure in detail, but will make a few comments about it. By a direct inspection of the characterizing table we can tell whether a change in an input junction A at t can make a difference in B at t+1. Repeating this process we can find all the junctions that A can influence directly, all that these can influence directly, etc. Since the net is finite, this process will terminate. That is, because of the finite nature of the net there is an interval of time q such that if A can influence the behavior of B, it can do so within the time interval q; this interval may be determined from the structure of the net. If no input junction influences Bj, then Bj realizes an input-independent transformation, which, as already stated, must be periodic. This special case of input-independence can be discovered directly from the characterizing table, for a junction Bj realizes an input-independent transformation

if and only if for each S there is a unique value of Bj in S', no matter what I is. The behavior of the input-independent transformation during its initial phase and during its main period can be found from the admissibility tree. (The problem of deciding whether or not two junctions Bj and Bk realize the same transformation is really a special case of the problem of deciding whether a junction realizes a particular input-independent transformation; for we can have Bj and Bk drive an equivalence element, whose output will be the simple input-independent transformation 11111... if and only if Bj and Bk realize the same transformation. See the next subsection.)

In our discussion we have for some time ignored the output table. It too can be put in binary form, and since both the O and the S entries refer to the same time, the result is a straight truth table (in contrast to the characterizing table, where some columns refer to time t and some to t+1). Hence the preceding results are easily extended to include the case of output tables.

We have not yet considered methods of minimizing the labor required to calculate the admissibility tree and the characterizing table. In many cases it is convenient to work with the equations describing a net by means of variables (see the next subsection) rather than with the values of these variables. In some cases one can go directly from such equations to the characterizing table. It is also possible to decompose many nets so as to reduce greatly the number of states to be considered (see Section 4.1). Other methods of simplifying the work will occur to anyone engaged in it who is familiar with the methods for simplifying truth-table computation.

Before proceeding further let us briefly summarize the concepts introduced in this subsection.

Definition 3: An automaton is in general characterized by state numbers I, S, and O.
The complete table of an automaton is the set of all pairs <<I,S>, S'> such that, for given I(t) and S(t), S' is the value of S(t+1). The characterizing table of an automaton is the subset of its complete table such that each S and S' in it is admissible. A state S is admissible if and only if it is the distinguished initial state S0 or it can be arrived at from the initial state by choosing a suitable finite sequence of inputs. An admissibility tree is a graph used in computing the admissible states, beginning with S0 and proceeding systematically. An output table is a table of pairs <<I,S>, O>, stating a value O(t) for given values of I(t) and S(t). An input-independent junction realizes a transformation whose values are independent of the inputs.

We will conclude this subsection by commenting on the relation of the decision procedure described above for testing whether two junctions realize the same transformation to other decision procedures. Recently a

decision procedure for Church's formulation of computer logic has been announced.* We are not acquainted with this decision procedure and hence cannot compare it with ours. However, we can prove the equivalence of Church's system to ours, from which it follows that the two decision procedures accomplish the same result. We do this in two steps. First, the definition of automata given in Section 2.1 is in all essential respects equivalent to that of a well-formed net in Burks and Wright1 (this will be shown in the next subsection). Church's simultaneous recursion is a slight generalization of the second definition of determinism given in Burks and Wright,1 p. 1360; the difference lies in the fact that Church's "A's" and "B's" are independent of each other, whereas in the Burks-Wright definition of determinism each Ai has a certain relation to the corresponding Bi. Because of this relation between the two definitions, it follows directly that every transformation realized by a well-formed net is definable by a Church recursion. The converse may be shown by a net construction in which for each i a net is made for Ai and for Bi, and the outputs are combined to give Ai for t = 0 and Bi for t > 0.

It is perhaps worth noting that our decision procedure may be extended to give a method for deciding whether the transformations defined by a set of equations (Burks and Wright,1 p. 1358) are deterministic or not. This may be done by going through all the states <I,S> and seeing if for each of these the equations yield a unique S'. If a given S is admissible (by the admissibility tree) and does not yield a unique S' for each I, then the net (i.e., the transformations it realizes) is not deterministic. There is, in any event, only a finite number of cases to consider, so the procedure is effective.
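The extended determinism test can be sketched as follows (our code; the "equations" are represented abstractly as a set of transition rows, which may be missing or ambiguous for some pairs):

```python
# Sketch: a set of rows ((I, S), S') is deterministic iff every admissible
# <I,S> reached from S0 yields exactly one S'.

def deterministic(rows, M, S0):
    trans = {}
    for (I, S), Sp in rows:
        trans.setdefault((I, S), set()).add(Sp)
    admissible, frontier = {S0}, [S0]
    while frontier:
        S = frontier.pop()
        for I in range(M):
            nexts = trans.get((I, S), set())
            if len(nexts) != 1:          # missing or ambiguous S'
                return False
            (Sp,) = nexts
            if Sp not in admissible:
                admissible.add(Sp)
                frontier.append(Sp)
    return True

good = {((0, 0), 1), ((0, 1), 0)}
bad = {((0, 0), 1), ((0, 0), 2), ((0, 1), 0), ((0, 2), 0)}
print(deterministic(good, 1, 0))  # True
print(deterministic(bad, 1, 0))   # False
```

As in the text, only admissible states matter: ambiguity at a state unreachable from S0 would not be detected, and need not be.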
We remark finally that since monadic propositional functions of time may be used in describing net behavior, it might seem that known decision procedures for the monadic functional calculus apply directly here. However, the exact relation between quantifiers and net theory is not known, and in any event when quantifiers are used they require bounds, which are essentially dyadic (see Section 4.2).

2.3. REPRESENTATION BY NETS, THE CODED NORMAL FORM

We turn now to the representation of automata by diagrams (called automata nets) which show the internal structure of automata. For this purpose we need to correlate the binary digits zero and one used in the preceding subsection with the physical states of wires, junctions, or cells. On the normal interpretation zero and one are associated with the inactive and active

*Joyce Friedman, "Some results in Church's restricted recursive arithmetic," The Journal of Symbolic Logic, 21, 219 (June, 1956); this is an abstract of a paper presented at a meeting of the Association for Symbolic Logic on December 29, 1955.

states, respectively. A dual interpretation (zero to active, one to inactive) is also possible, and the two interpretations may be interrelated by the well-known principle of duality.

It is clear from the developments of the preceding subsection that we need net elements capable of performing two kinds of operations: truth functions and delays. For these purposes we adopt two distinct kinds of elements: switching elements for truth-function operations and a delay element for the delay operation. Some standard logical connectives of the theory of truth functions are: · and & (two representations of conjunction, "and"), v (disjunction, "or"), ↓ ("neither-nor"), | ("not both"), ≡ ("if and only if"), ⊃ ("if...then..."), and —, ~, ' (three representations of negation, "not"). Circuits for realizing all of these are common. As is well known, all truth functions may be constructed from the dagger (↓) or from the stroke (|), so we shall in general assume sufficient primitive switching elements to realize these. Sometimes it is convenient to have an infinity of primitive switching elements, one for each truth function. Of course, in practice complicated switching functions are realized by compounding simple switching elements, but by representing such circuits by single net elements we can separate the problem of compounding these circuits from other problems in net analysis (see, for example, Fig. 1).

A switching element consists of a nucleus together with input wires and an output wire. The termini of these wires are called junctions. Switching elements may be interconnected in switching nets in ways to be discussed subsequently. For examples, see Fig. 1 and other figures of this paper. Propositional variables are associated with each junction of a net. Corresponding to each switching element there will be an equation of the theory of truth functions which describes the behavior of that element.
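The universality claim for the dagger can be checked by a small computation (our code; the derived definitions of negation, disjunction, and conjunction from the dagger are the standard ones):

```python
# All truth functions from the dagger (neither-nor) alone -- a small
# check of the universality claim.

def dagger(p, q):          # "neither p nor q"
    return int(not (p or q))

def NOT(p):    return dagger(p, p)
def OR(p, q):  return NOT(dagger(p, q))
def AND(p, q): return dagger(NOT(p), NOT(q))

rows = [(p, q) for p in (0, 1) for q in (0, 1)]
print([AND(p, q) for p, q in rows])  # [0, 0, 0, 1]
print([OR(p, q) for p, q in rows])   # [0, 1, 1, 1]
print([NOT(p) for p in (0, 1)])      # [1, 0]
```

A symmetric construction works from the stroke ("not both").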
For example, if a conjunction ("and") element has the variables A0 and A1 attached to its input junctions (and wires) and the variable C0 attached to its output junction (and wire), it realizes the equation C0(t) ≡ [A0(t) & A1(t)], or, more succinctly, C0 ≡ (A0 & A1). The theory of switching nets corresponds to the theory of truth functions and is well developed (see Shannon;12 Burks and Wright1).

One aspect of the equation C0(t) ≡ [A0(t) & A1(t)] needs discussion; it is that the value of the output is given at the same time t as the inputs. In the physical realization of a conjunction this, of course, cannot happen; the output will occur slightly later than the inputs. This suggests putting a delay at the output of each switching element. Such a delay does in fact exist in each nerve cell. However, for purposes of theoretical analysis it is best to isolate the logical, nontemporal functions of automata from the temporal aspects of their behavior. Hence we can first construct the theory of switches, basing it on the theory of truth functions, and we can later augment this theory to deal with the additional complications brought in

Fig. 1. Normal form net. [Diagram not reproduced; its labels show inputs A0, A1,..., a direct-transition switch and delays (with outputs E0, E1, E2) forming the transition part, and an output switch with outputs C0, C1.]

by delays. This organization of the subject has practical bearings as well as theoretical value, for to a certain extent the design of switches does and should proceed independently of the design of those parts of computers which produce the transitions from state to state. Hence our switching nets have no delays in them. When we come to formulate the rules governing their interconnection (formation rules for well-formed nets), we will take this factor of idealization into account and not permit interconnections that could lead to trouble because we have ignored it. Hence we will return to this topic at that time.

The delay element consists of a nucleus with an input and an output wire; see Fig. 1. It delays an input signal one unit of time; i.e., its input wire state at time t becomes its output wire state at time t+1. We assume that its output wire is inactive (in the zero state) at time zero. If A0 is the variable associated with its input and E0 the variable associated with its output, its behavior is defined by the equations

E0(0) ≡ 0
E0(t+1) ≡ A0(t).

Another way of expressing this is E0(t) ≡ (t>0) & A0(t ∸ 1), where "∸" signifies the primitive recursive pseudosubtraction

x ∸ y = x − y if x ≥ y
x ∸ y = 0 if x < y.

Each switching element corresponds to a symbol (or complex of symbols) of the theory of truth functions. We will introduce the symbol δ to correspond to the delay element. If the input and output of the delay element are A0 and E0 as before, then E0(t) ≡ δ[A0(t)] or, more succinctly, E0 ≡ δA0. Hence,

δ[A0(0)] ≡ 0
δ[A0(t+1)] ≡ A0(t).

In itself the δ operator does not, strictly speaking, take us beyond the theory of truth functions (see Section 4.2), but the δ operator together with a cycle rule which allows an output of a net to be connected back to an input of the same net does take us beyond truth-function theory to quantifier theory (see Section 4).
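The delay element's defining equations amount to shifting the input history one step and prefixing a zero, which can be sketched directly (our code, on finite input histories):

```python
# Sketch of the delay element: E0(0) = 0 and E0(t+1) = A0(t), i.e.,
# E0(t) = (t > 0) & A0(t - 1).

def delay(A):
    """Given the input history A(0), A(1), ..., return E(0), E(1), ...."""
    return [0] + list(A[:-1]) if A else []

A = [1, 0, 1, 1]
print(delay(A))  # [0, 1, 0, 1]
```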
We need now a set of formation rules such that all nets constructed by these rules represent automata, and all automata defined by characterizing tables and output tables may be represented by these nets. We will use the

rules given in Burks and Wright,1 p. 1361, extending them to allow an arbitrary set of switching elements Z.

Definition 4: A combination of figures is a well-formed net (w.f.n.) relative to the set Z if and only if it can be constructed by the following rules: (1) A switching element or a delay element is w.f. (2) Assume N1 and N2 are disjoint w.f.n. Then, (a) the juxtaposition of N1 and N2 is w.f.; (b) the result of joining junctions Fq1,..., Fqj of N1 to distinct input junctions Gp1,..., Gpj of N2 is w.f.; (c) the result of joining input junctions Fp and Fq of N1 is w.f.; (d) if all the wires connected to Fp of N1 are delay-element input wires, then the result of joining any Fq of N1 to Fp is w.f.

The ends of wires which do not impinge on a switching-element circle or a delay-element rectangle are called junctions. A junction with no output wires attached to it is called an input junction; all other junctions are called internal junctions (these are sometimes called output junctions). One can label each junction of a net with a variable. We will usually use A0, A1,... for input junctions, C0, C1,... for switch output junctions (junctions driven by switching elements), and E0, E1,... for delay output junctions (junctions driven by delay elements). A well-formed net (diagram) with every junction labeled with a variable is called a labeled net. One can also label the input junctions with functional constants designating particular input functions (e.g., 000..., 111..., 0101..., 0100100001000001...) and the internal junctions with functional constants naming the functions they actually realize (e.g., if the inputs to a conjunction are labeled with 111... and 010101..., the output should be labeled 010101...). The result is called a net history; cf. the concept of a net state in Burks, McNaughton, et al.,13 p. 207. Consider the net of Fig. 1.
Every net of this form (with arbitrary numbers of delays and switching elements) is well-formed (relative to a sufficiently rich set of switching elements) by our rules. We say that the net of Fig. 1 is in normal form. A normal form net is organized as follows. It has a direct-transition switch, fed by the net inputs and the delay outputs, and driving the delay inputs. It has an output switch, fed by the delay outputs and the net inputs, and not driving any delay elements. Given a sufficiently rich set of switching elements, we can construct for each well-formed net a normal form net which behaves the same. We first place the delays of the original net in an array like that of Fig. 1. Then, for each delay element Ei, we analyze the original net to determine what switching element Ti will produce the same result at Ei as is produced by the

switching circuitry of the original net. In the same way we find those switching elements Ta, Tb,... whose outputs behave the same as the switch output junctions of the original net, or those switch output junctions we are particularly interested in. (The latter can be indicated by labeling them with triangles.) Similarly, given a set of switching elements Z rich enough to represent all truth functions, we can translate a normal form net into a w.f.n. made of those switching elements; e.g., if Z contains only the stroke element, we replace each T of Fig. 1 by an equivalent stroke-element switch. (Note that while a switching element Ti receives inputs from all the net input junctions and all the delay output junctions, its output need not depend on all of these. For example, if in the original net the input to delay element E2 was the net input junction A0, then T2 has the property that its output at time t equals A0(t) for all values of A1(t), E0(t), E1(t), and E2(t).) (Note also that any well-formed net can be arranged somewhat in the form of Fig. 1, if we allow the switches to be of other forms and allow the direct-transition switch to have junctions which do not drive delay inputs.)

At this point we wish to make two comments about our representation of switches. The first concerns a topic we have mentioned earlier, the fact that physically it takes time for information to go from the inputs of a switch to the output, while in calculating the behavior of a switch we assume that the output occurs at the same time as the input. The reason for this assumption is that in many applications the switching time is much less than the delay time, so the logic of switching is treated separately from the logic of delay. We wish our theory to accommodate this case. The reader can imagine a small delay in the output of each switching element, with extra delays put in at various places to make the phasing correct.
He can then imagine that each unit delay of Fig. 1 is reduced by the accumulated amount of delay in the switch driving it, so the total delay from delay output back to delay output is one unit. (The concept of rank as defined in Burks and Wright,1 p. 1361, is useful here.) The concept of well-formed net has been so defined as to make this always possible, as is evident from Fig. 1. This way of regarding the matter conforms with practice in designing some machines; see De Turk et al.14 Those automata with delay built into each switching element (e.g., neural nets) can also be accommodated within our theory; they correspond to special cases of w.f.n. and can be defined by modifying the formation rules.

The second comment is connected with the fact that our switching elements represent the flow of information in only one direction; i.e., inputs and outputs are not interchangeable. There are many devices that permit information to flow in only one direction (vacuum tubes, transistors, etc.), but not all do; relays are one notable exception. Relay contacts permit information to flow in either direction, and hence bridge circuits can be made from them. Relays are electromechanical devices and hence are relatively slow. For this reason they are becoming less important as much faster electronic and

solid-state devices become available and competitive in price. Further, because of the combination of a coil and contacts, relay automata present special problems, and no formation rules for them which take full account of all the uses that can be made of contacts and coils have been published. However, a new and promising device, the cryotron, also permits the information (in this case, current) to flow in either direction and hence can be used in bridge circuits (see Buck15). We will not attempt here to devise formation rules for all uses of relays, cryotrons, and whatever other devices there may be which are not unidirectional. It should be pointed out, however, that every well-formed switching net can be realized by a relay and by a cryotron circuit. Since every truth transformation (i.e., every switching function) can be represented by a well-formed switch and vice versa (Theorem XII of Burks and Wright1), our diagrams do represent ways of realizing all truth functions with nonunidirectional elements. Since our diagrams represent a unidirectional flow of information, it follows that the power of relays and cryotrons to pass information in two directions does not add to their power to do logic. It does make a difference in the number of elements needed. Thus a relay bridge circuit may do a certain job more economically than a relay contact network in the form of a well-formed switch.

We return now to the problem of correlating w.f.n. and automata. As a first step we will define a set of state numbers D0, D1,..., Dq. Each D will express the states of the delay output junctions. Let these junctions be labeled E0, E1,..., Eq. Then D is the binary number E0E1...Eq. Since a delay output is assumed to be zero at time zero, D(0) = 0. Let D0 be this initial state, i.e., D(0) = D0. We wish to justify this decision, but before doing so we need to discuss a question concerning the identity of an automaton.
We may run a machine from Monday to Friday, turn it off at 5:00 P.M. Friday and then turn it on again at 8:00 A.M. the following Monday. Should we regard it as one machine or two? There is a similarity between this and a human (automaton?) going to sleep at night; however, when a human wakes up in the morning he still remembers quite a bit of his past history, while often (though not always) a computer starts a new life every time it is turned on anew. To preserve the identity of the machine before and after the gap of inaction, we can think of some simple ad hoc device such as a special input cell whose sole function is to turn the machine on and off in such a manner that, when a machine is in operation, stimulating this input cell will put the machine into a unique initial state; such an operation is often called an initial clear. In this way we can preserve the identity of a machine through all the different runs it makes.

On this assumption there is only one initial state of an automaton, and we can identify it with the all-off or all-quiet state. Such an identification is natural since neurons, vacuum tubes, etc., are usually inactive when first turned on, and even if they are not, this identification can be made by a suitable convention without much loss of generality. The situation is somewhat different if we choose to regard each machine run as a new automaton, but even here there will probably be a single initial state for all runs and it is convenient to identify it with the all-quiet state. Note that this does not commit us to identifying S0 with D0. In fact, we shall not always do so (the complete decoded net of Section 3.2 is an example). Hence one can handle other initial states by identifying D0 with some value of S other than S0.

We now proceed to establish the equivalence of w.f.n. and automata. We show first how to derive a characterizing table and an output table for each w.f.n. We translate the given net into a normal form net. Label the inputs of this normal form net A0, A1,... and let I = A0A1...; for a two-input net we would have, for example, I0 ≡ Ā0 & Ā1, I1 ≡ Ā0 & A1, I2 ≡ A0 & Ā1, and I3 ≡ A0 & A1. Label the delay inputs ε0, ε1,... and define Δ = ε0ε1.... Label the delay outputs E0, E1,...; then D = E0E1.... Let T0, T1,... be the truth functions realized by the direct-transition switch. Then we have

εi(t) ≡ Ti[A0(t), A1(t),...; E0(t), E1(t),...]
Ei(t+1) ≡ εi(t)

for each i. Finally, we let D0 be S0 and each other D be an S, and thereby get a complete table. By the use of the admissibility tree we can construct the characterizing table. This procedure takes care of the transition part of a normal form net. To complete the analysis, we perform a similar construction for the switching elements of the original net, or for those switch outputs we are interested in as final outputs. Let Ta, Tb,...
be the truth functions realized by these outputs. Then we have

Cj(t) ≡ Tj[A0(t), A1(t),...; E0(t), E1(t),...]

for each j, and this gives us the output table. A coded normal form net is a normal form net whose characterizing table is in coded normal form.

When the complete table is derived from a net, there will be a bit position for each input junction and each delay output junction. In this case the numbers m and n (of Section 2.1) are the numbers of input and delay

output junctions, respectively. An m-n automaton has then 2^(m+n) possible complete states and 2^n possible internal states. If all input states are considered, we then have (2^n)^(2^(m+n)) different m-n automaton complete tables. Many of these are the same except for the permutation of columns (i.e., of input and internal cells or junctions). Clearly, there is little significance in whether a particular junction is labeled the 1st, 2nd, or the m-th. In other words, if we can find a way of identifying one-to-one the input junctions of two m-n automata so that they behave the same, they are equivalent even though they may have different characterizing tables. Analogously, permutations among the labels for the delay output junctions make no essential difference. It follows that there are actually only (2^n)^(2^(m+n))/(m!)(n!) rather than (2^n)^(2^(m+n)) distinct abstract m-n automaton complete tables. Similarly, characterizing tables which are obtainable from one another by permuting columns are to be identified. There will be fewer than (2^n)^(2^(m+n))/(m!)(n!) distinct m-n automaton characterizing tables, for some of the distinct complete tables will differ only with regard to inadmissible states.

In designing the transition part of an automaton it is in general desirable to maximize the number of admissible internal states (i.e., to minimize the number of inadmissible states), since the total number of states is a rough measure of the parts needed for construction, while the capacity for doing different things is in general proportional to the number of admissible states. We will call an automaton complete if all states S are admissible. (A stronger condition would be that all states S are admissible relative to every initial state, instead of just the distinguished initial state S0; we will not give to automata satisfying the stronger condition any special name.)
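The counts just stated are easy to tabulate. The following sketch (ours, in modern notation) computes the total number of m-n complete tables, (2^n)^(2^(m+n)), and the reduced figure obtained by dividing out the m! n! column permutations; as the text notes, the quotient is only an upper-bound estimate of the number of distinct abstract tables.

```python
# Counting m-n automaton complete tables: each of the 2^(m+n) complete
# states is assigned one of 2^n next internal states.
from math import factorial

def table_counts(m, n):
    total = (2 ** n) ** (2 ** (m + n))
    # Dividing by m! n! identifies tables that differ only by permuting
    # input-junction or delay-output-junction labels (an estimate).
    return total, total // (factorial(m) * factorial(n))

total, distinct = table_counts(2, 1)
assert total == 2 ** 8          # (2^1)^(2^3) = 256
assert distinct == total // 2   # m! n! = 2! * 1! = 2
```

Even for small m and n the totals grow doubly exponentially, which is why the later sections work with transition matrices rather than raw tables.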
If the number of admissible states of an automaton is less than 2^(n-1), we can always replace the automaton by a simpler automaton by using the coded normal form. In a coded normal form characterizing table n is the least integer as large as or larger than the logarithm of N to the base two (similarly for m). For automata with the same number of admissible states it seems desirable to maximize the "recoverable" ones. Following Moore, p. 140, we shall call an automaton strongly connected if and only if it is possible to go from every admissible state Si to every admissible state Sj (i may be equal to j). An alternative definition can be given in terms of an admissibility tree in which all occurrences of a given state Si are identified. An automaton is strongly connected if and only if for any ordered pair of states <Si, Sj> (i may be equal to j), we can pass from Si on the tree to Sj on the tree by a continuous forward route (plus backward jumps from a given state to the same state located lower on the tree). Since the possibility of repetition is important for an automaton, any admissible state which cannot be recovered adds rather little to the capacity of the automaton. Thus it would seem best in general to design a machine which is both complete and strongly connected. Hence, from a practical point of view, complete, strongly connected, and coded normal form automata are the most important. For the theory of

automata, however, many nets falling outside this class are of interest. In particular, we will find decoded normal form automaton nets of interest in connection with the use of matrices to analyze nets.

It remains to show how to construct a well-formed net for any given characterizing table (or complete table) and output table. There are various ways of doing this, one of which is to identify S0 with D0 (if S0 is not equal to zero, its value must be changed to zero) and to let every other S be a value of D. The general process of going from nets to tables is then just reversed. There are various ways of constructing the switches needed. Let us consider the matter with regard to the characterizing table <<I,S>, S'>. A single column of S' is to be identified with a particular Ei (and εi). Delete all other columns of the S' part of the table. We then have a truth-table definition of our function Ti, such that

εi(t) ≡ Ti[A0(t), A1(t),...; E0(t), E1(t),...]

Given sufficient primitives, this can be realized by one switching element, as in Fig. 1. Given switches for "and," "or," and "not," it can be realized by using disjunctive normal form. Consider each row of the truth table. If εi is zero, do nothing; if εi is one, construct an element to sense the I S of that row. The desired switch for εi is obtained by disjoining (using "or" on) all the outputs so obtained. We have thus established our first theorem.

THEOREM 1: Given a well-formed net, we can construct a complete table, a characterizing table, and an output table describing its behavior. Given a complete table, a characterizing table, and an output table, we can construct a well-formed net realizing these tables.
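The disjunctive-normal-form construction described above can be sketched directly. The Python below is our illustration, not the report's: for each truth-table row whose output bit is one it forms the conjunction sensing that row, then disjoins the conjunctions.

```python
# Disjunctive normal form synthesis from a truth table.

def dnf_from_table(rows):
    """rows: list of (input_bits, output_bit) pairs. Returns a function
    realizing the table: for each row with output 1, a conjunction senses
    exactly that input word; the results are disjoined ("or"-ed)."""
    minterms = [bits for bits, out in rows if out == 1]
    def switch(bits):
        return int(any(all(b == m for b, m in zip(bits, row))
                       for row in minterms))
    return switch

# Exclusive-or of two inputs as an example table.
table = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
xor = dnf_from_table(table)
assert [xor(bits) for bits, _ in table] == [0, 1, 1, 0]
```

Each minterm plays the role of an "and" element with positive and negative inputs, and the `any` plays the role of the final "or" element.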
This theorem establishes the equivalence of automata and nets, and since nets are idealized representations of digital computers, it follows that for most theoretical considerations automata without any special sensing and acting organs can be viewed as digital computers.

We conclude this section by noting the similarity of well-formed net diagrams and flow diagrams used in programming. This similarity is what one would expect, since a net diagram describes the structure of a computer, and a flow diagram describes its behavior during a certain computation, and both symbolize recursive functions. While a program is stored in a computer, part of it (the coded representation of the operations) usually remains invariant through the computation; this means that during the computation not all states of the computer are used. For each such fixed program one could devise a special-purpose machine which would perform the same computation. This is a special case of the general principle that there is a great deal of flexibility with regard to what a machine is constructed to do versus what it is instructed

to do. This suggests that there should be one unified theory of which the theory of automata structure and the theory of automata behavior (i.e., the theory of programming) are parts. Each program is in effect a definition of a recursive function. Since there is no effective way of deciding whether two definitions define the same recursive function, there is no effective way of deciding whether two programs will produce the same answer. Two different programs, each finite, may nevertheless produce the same answer, because the feedback from the computation may be different in the two cases.
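The last point can be illustrated in modern terms (our example, not the report's): two syntactically different programs may define the same recursive function, and while no general procedure decides such equivalence, particular cases can of course be checked on small arguments.

```python
# Two different definitions of the same recursive function.

def sum_rec(k):
    """Primitive recursive definition of 0 + 1 + ... + k."""
    return 0 if k == 0 else k + sum_rec(k - 1)

def sum_closed(k):
    """Closed-form definition of the same function."""
    return k * (k + 1) // 2

# The two programs agree on every argument tested, though no effective
# procedure settles such agreement for arbitrary pairs of programs.
assert all(sum_rec(k) == sum_closed(k) for k in range(20))
```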

3. TRANSITION MATRICES AND MATRIX FORM NETS

3.1. TRANSITION MATRICES

The transition part of a net controls the passage of the net from state to state and is therefore the heart of the net. In this subsection we introduce "the transition matrix," a table which describes a net by showing the various ways in which it may pass from one delay state to another.

We use a characterizing table with M input (state) words I, N internal-state words S, and M x N rows, each of the form <<I,S>, S'>, to define N^2 direct-transition expressions Iij as follows. Iij is a disjunction of all those Ik such that <<Ik,Si>, Sj> is a row of the characterizing table; if there are no such Ik, then Iij is 0 ("the false"). That is, Iij is a disjunction of all those input words (if any) which can cause the net to pass from state Si at time t to state Sj at time t+1; it is allowed that i equals j. It is clear that each Iij is a disjunctive normal form expression of the input function variables. Note that the direct-transition expression "0" ("the false") is distinct from the input words "0," "00," "000," etc.; the former means that no direct transition between the two states is possible, while the latter mean that such a transition is brought about by making all the inputs zero.

A direct transition from Si to Sj in a net is a passage from state Si at t to state Sj at t+1. Such a transition is possible only in the case where Iij ≠ 0. We say that <Ik, Si> (or Ik Si) at time t directly produces Sj at time t+1 only in the case where the net makes a direct transition from Si to Sj under the influence of input Ik at t. A transition from Si to Sj in a net is a passage from state Si at t to state Sj at t+w (w > 0). Such a transition is possible only where there exists a sequence of direct-transition expressions, none of which are 0, of the form

Iia1, Ia1a2, ..., Ia(w-1)j

We say that Iia1 Si(t), Ia1a2(t+1),...,
Ia(w-1)j(t+w-1) produces Sj at t+w if and only if there is a transition from Si(t) to Sj(t+w) under the direction of the listed inputs; this is a transition of w steps (or, alternatively, a transition of length w).
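The definition of the direct-transition expressions Iij can be sketched computationally. In the Python below (our encoding, not the report's), each Iij is represented as the set of input words carrying Si to Sj, with the empty set playing the role of 0, "the false."

```python
# Compute the N^2 direct-transition expressions from a characterizing
# table, given as a dict mapping (input word, state) to next state.

def transition_matrix(table, states, input_words):
    """Entry [i][j] is the set of input words Ik such that
    <<Ik, Si>, Sj> is a row of the table (empty set = no transition)."""
    return [[{I for I in input_words if table[(I, Si)] == Sj}
             for Sj in states]
            for Si in states]

# Characterizing table of a two-state toggle: input 1 flips the state.
tbl = {(0, 'S0'): 'S0', (1, 'S0'): 'S1',
       (0, 'S1'): 'S1', (1, 'S1'): 'S0'}
M = transition_matrix(tbl, ['S0', 'S1'], [0, 1])
assert M == [[{0}, {1}], [{1}, {0}]]
```

Note that each row partitions the input words among the columns, as the determinism of the table requires.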

It is convenient to arrange the information in an M-N characterizing table in a direct-transition matrix of order N by arranging the N^2 direct-transition expressions in a square array. The following is a direct-transition matrix schema of order four:

I00 I01 I02 I03
I10 I11 I12 I13
I20 I21 I22 I23
I30 I31 I32 I33

It is clear that a direct-transition matrix presents the same information as a characterizing table, but in a different way. For many purposes this form is more convenient, because it reflects the fact that the basic behavior of an automaton consists of a succession of transitions from one state to another. (Since a transition matrix is equivalent to a characterizing table, the formulae given in Section 2.3 for the number of M-N automaton characterizing tables apply here also.)

The information contained in a complete table can also be expressed in matrix form. Since for an abstract automaton the complete table is the characterizing table, the matrix derived from the complete table is a direct-transition matrix.

We give an example of a transition matrix. A matrix for a four-stage cyclic counter is

     S0  S1  S2  S3
S0   I0  I1  0   0
S1   0   I0  I1  0
S2   0   0   I0  I1
S3   I1  0   0   I0

(We have added the S's as a mnemonic aid, but, given a conventional ordering of them, they need not be written in.) Thus, an input I0 (e.g., 0) causes the counter to stay in its given state, while an input I1 (e.g., 1) causes it to advance to the next state (modulo 4). All other entries are 0's since they represent cases where direct transitions are impossible.
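The cyclic counter's matrix can be exercised directly. The following sketch (ours) stores the matrix sparsely, keyed by (row, column), with absent entries standing for 0; a step looks up which column's expression contains the current input word.

```python
# Four-stage cyclic counter as a sparse direct-transition matrix:
# I_ii = {0} (stay on input 0), I_{i,i+1 mod 4} = {1} (advance on 1).
COUNTER = {(i, i): {0} for i in range(4)}
COUNTER.update({(i, (i + 1) % 4): {1} for i in range(4)})

def step(state, inp):
    """Return the unique Sj such that inp lies in I[state][j]."""
    for (i, j), words in COUNTER.items():
        if i == state and inp in words:
            return j
    raise ValueError("no admissible direct transition")

s = 0
for inp in [1, 1, 0, 1, 1]:   # four 1-pulses, one idle 0...
    s = step(s, inp)
assert s == 0                 # ...advance modulo 4 back to S0
```

Because each row of the matrix partitions the input words, the lookup in `step` always finds exactly one successor, which is just the determinism of the automaton restated.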

3.2. MATRIX FORM NETS

In this subsection we will present a net form closely related to the transition matrix. Our presentation is in two steps; the first is to construct a decoded normal form net. Consider for a moment a coded normal form net in relation to its coded characterizing table. Each S is associated with a D, and in general any arbitrary number of bits of D may be unity. Consider in contrast a decoded normal form characterizing table. Exactly one bit of each S is unity, which suggests associating each state S primarily with a single junction. That can be done for an N-state decoded normal form table as follows: Let S = B0B1...BN-1 as before, and form a net with N delay elements so that D = E0E1...EN-1. Of the 2^N delay words D, we use only N+1, namely, D0 (= 000...) and the N words having exactly one bit which is unity and all other bits zero (100..., 010..., etc.). We next construct a junction C which is the output of a disjunctive element ("or") fed by E0 and by the input-independent transformation 100.... Hence,

C(0) ≡ 1
C(t) ≡ E0(t), for all t > 0

We now associate B0 with C, and each Bi with Ei for 0 < i < N; that is, we equate B0B1...BN-1 and CE1...EN-1. Hence we have N junctions (C, E1,..., EN-1) such that each state S is associated primarily with one junction; namely, that junction which is active when the net is in that state. These junctions are called the state junctions of the net. See Fig. 2, where the state junctions are labeled C, E1, and E2, and are also labeled with the states (S0, S1, and S2) which they "represent."

We now wish to construct a net containing wires C, E1,..., EN-1 so connected that at each time exactly one of them is active (in state one) while all others are zero, and with the following inductive property.

(A) Junction C (representing S0) is active at time 0.
(B) For any time t, if the junction labeled Si (i.e., C for i = 0, Ei for i > 0) is active at time t, the net input at time t is Ik, and <<Ik,Si>, Sj> is a row of the characterizing table, then the junction labeled Sj will be activated at time t+1.

To realize Condition (A), we construct a starter (see Fig. 2) to produce the input-independent transformation 100.... The starter output is then disjoined with E0 to produce C. This will insure that C(0) ≡ 1, and that

[Figure: a starter and a decoded output switch (realizing S0 ∨ S1, S0 ∨ S2, S1 ∨ S2) above a transition part; state junctions C (S0), E1 (S1), E2 (S2); direct-transition switches Iij, e.g., I20, I21, I22.] Fig. 2. Decoded normal form net.

C(t) ≡ E0(t) for t > 0. The starter, which adds another delay element to the net, may be constructed without a cycle (see Section 4.3).

The realization of (B) is more complicated. Since there is a state junction for each state (C for S0, Ei+1 for Si+1), the concept of a direct-transition word is useful here. At each state junction Si we build N switches such that: Ii0 Si directly produces S0 (i.e., activates C at the next moment of time); Ii1 Si directly produces S1 (i.e., activates E1 at the next moment of time);...; Ii,N-1 Si directly produces SN-1 (i.e., activates EN-1 at the next moment of time). A net to accomplish this purpose is shown in Fig. 2, which is actually a net schema rather than a net, since the Iij are not specified. The boxes labeled with the direct-transition expressions Iij are called direct-transition switches. (Note that these direct-transition switches are different from the direct-transition switch of Fig. 1, though both kinds of switches play the same basic role in a net.) A direct-transition switch Iij has an output at time t if and only if the input to the net is represented by a disjunct of Iij. Every such switch can be made of a disjunction driven by conjunctions of positive and negative inputs. For example, let I10 be 0101 ∨ 0110 ∨ 1111, which may be written as Ā0A1Ā2A3 ∨ Ā0A1A2Ā3 ∨ A0A1A2A3, and the latter is readily realized by a switch. Of course, the number of inputs may vary from net to net; we will usually show four inputs in our figures (as we do in Fig. 2). If a particular Iij is 0, the switch Iij and the conjunction it drives may be deleted. If a net always passes directly from a state Si to a state Sj, no matter what the inputs are (or because there are no inputs), then we can run a direct line from the Si junction to the input of the delay element driving the Sj junction.
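Conditions (A) and (B) can be simulated in a few lines. In the sketch below (our encoding, not the report's), the state junctions are a list of bits of which exactly one is active at each time; the starter activates the C junction at t = 0, and each step activates the junction named by the characterizing table.

```python
# Decoded ("one-hot") run of an automaton given by a characterizing
# table mapping (input word, state index) to next state index.

def decoded_run(table, n_states, inputs):
    junctions = [1] + [0] * (n_states - 1)   # (A): C active at time 0
    history = [tuple(junctions)]
    for I in inputs:
        i = junctions.index(1)               # the one active junction
        j = table[(I, i)]                    # (B): <<Ik,Si>,Sj> a row
        junctions = [int(k == j) for k in range(n_states)]
        history.append(tuple(junctions))
    return history

# Two-state toggle in decoded form: input 1 flips the state.
tbl = {(0, 0): 0, (1, 0): 1, (0, 1): 1, (1, 1): 0}
assert decoded_run(tbl, 2, [1, 0, 1]) == \
    [(1, 0), (0, 1), (0, 1), (1, 0)]
```

The invariant that exactly one junction is active at each time is exactly what distinguishes the decoded form from the coded form, where an arbitrary subset of the delay outputs may be active.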
An input-independent net thus becomes a string of delays (corresponding to the initial part of the input-independent transformation), driven by a starter and driving a cycle of delays (corresponding to the periodic part of the function); cf. Theorem XIII and the accompanying figure of Burks and Wright,1 p. 1363.

We call a net of the form of Fig. 2 a decoded normal form net. We will first explain why we call this form "decoded" in contrast to the "coded" form described earlier, and then discuss the exact relation between decoded and other nets. The terminology is justified by the fact that in a coded normal form net the numbers representing delay states (i.e., the D's) appear

in coded form, while in the decoded normal form net the numbers representing the delay states (the D's, except that E0 is replaced by C at time zero) appear in decoded form, in the sense in which the terms "coded" and "decoded" are used in switching theory. A decoding switch is a switch with the same number of output junctions C0,..., CN-1 as there are admissible input words I0,..., IN-1, and so connected that when the input state is In the output junction Cn is active and all other outputs are inactive. (This is a special case of the nets discussed in Burks, McNaughton, et al.13) The information on the inputs is in coded form, while that on the outputs is decoded. In our coded and decoded normal form nets, however, the coding and decoding applies to delay outputs rather than to switch outputs.

Given any automaton net, we can construct a decoded normal form net which models that automaton net in the sense of having junctions which behave the same. Let us begin with the transition part of the given automaton net. Suppose it has n delay output junctions F0, F1,..., Fn-1 and N admissible states S0, S1,..., SN-1 (N ≤ 2^n). We next construct the transition part of a decoded normal form net with junctions C, E1,..., EN-1, such that the i-th bit from the left of CE1...EN-1 is unity when the original automaton is in state Si-1. Any function Fj is equivalent to a disjunction Sa1 ∨ Sa2 ∨ ... ∨ Sak of just those states for which Fj has the value one. Hence by disjoining the appropriate state junctions of the decoded normal form net we can obtain a junction F'j such that F'j(t) ≡ Fj(t) for all t. Figure 2 shows a decoded output switch which realizes S0 ∨ S1, S0 ∨ S2, and S1 ∨ S2. The "single-disjunct disjunctions" S0, S1, and S2 are already represented in Fig. 2, and the input-independent outputs (S̄0 & S̄1 & S̄2) and (S0 ∨ S1 ∨ S2) (i.e., 000... and 111...)
are readily obtained from the net if needed. This shows how to construct junctions of a decoded normal form net which behave the same as the delay output junctions of the original net. Any other junction of the original net whose behavior at time t does not depend on the state of the inputs at time t can be treated in the same way. For the remaining junctions we can build an output switch fed both by the decoded output switch and the net inputs. Alternatively, the decoded output switch can be replaced by a switch driven by the state junctions and the net inputs. Switches of these various kinds are allowed as parts of decoded normal form nets. We can now incorporate these results in a theorem.

THEOREM 2: For any well-formed net with junctions C0, C1,..., one can construct a decoded normal form net with junctions C'0, C'1,... such that C'i(t) ≡ Ci(t) for all i and t.

The transition part of a decoded normal form net can be drawn in matrix form to bring out its relation to the transition matrix. In Fig. 3 this is done for a transition matrix of order 4; the result is called a matrix box. The disjunction elements of Fig. 2 are omitted by the convention that

[Figure: state junctions S0,..., S3 feeding direct-transition switches I00, I01, I02, I03,... arranged in a square array and driving the delay inputs E1, etc.] Fig. 3. Matrix box of order 4.

several wires can drive a line (see Burks and Copi,2 p. 307). A normal form net with the transition part put in matrix-box form is called a matrix form net. Figure 4 is an example which lacks an output switch. A particular net of order 4 may be obtained from the schema of Fig. 3 by substituting the appropriate switches for the direct-transition-switch schemata. We illustrate this with the four-stage cyclic counter whose transition matrix was given in Section 3.1. If we let input I0 = 0 and input I1 = 1 (so the counter counts pulses rather than the absence of pulses), we get the matrix box of Fig. 4. Each Iii is I0, and I0 = 0, so we replace these direct-transition-switch schemata by negation elements. For each i, j such that j = i+1 modulo four, Iij is I1, so we replace these direct-transition-switch schemata by single input lines. All other direct-transition-switch schemata correspond to 0's in the transition matrix defining the counter, so these are dropped; e.g., no direct transition from S2 to S1 is possible, so there is no direct coupling from S2 to E1. It is manifest from Fig. 4 that the counter stays in its prior stage when the input is zero, but advances to the next stage (modulo four) when the input is one.

3.3. SOME USES OF MATRICES

The discovery that matrices may be used to characterize automaton nets opens up a number of interesting lines of investigation. In the present subsection we will discuss a few applications of matrices to the analysis of automaton nets. A direct-transition matrix (whose elements are direct-transition expressions) characterizes the direct transitions of an automaton. We will first establish some properties of this matrix, and then generalize the concepts of direct-transition expression and direct-transition matrix to cover transitions of arbitrary length.

Each row of a direct-transition matrix is a partition of the M input words I0, I1,..., IM-1 into N columns S0, S1,..., SN-1.
Hence the disjunction of a row contains all the admissible input words. If these are all the possible words, the disjunction of a row is a tautology. If there are inadmissible input words, then the hypothetical whose consequent is the disjunction of the matrix row and whose antecedent is the disjunction of all the admissible input words is a tautology. In this case we can say that the matrix row sums to a tautology relative to the admissibility conditions, or that it is a tautology in an extended sense of this word. (Note that this relative sense of "tautology" is relevant in minimality problems; in minimizing a switch we are not looking for a switch logically equivalent to the given one, but rather for a switch logically

Fig. 4. Matrix form binary counter.

equivalent to the given one relative to the admissibility conditions on the inputs.) A single element j of a row i may be a tautology, in which case all other elements in the row are 0; this means that whenever the automaton is in state Si it makes a direct transition to state Sj, no matter what the input is. No input word can occur more than once in a row, else the automaton would not be deterministic. The disjunction of the elements of a column is not in general a tautology, but cases where it is are of special interest, as they are related to the concept of backward determinism. Definition 5: An abstract automaton is backwards deterministic if and only if for each finite sequence I(0), I(1), ..., I(t), S(t+1), there is a unique sequence S(0), S(1), ..., S(t) satisfying the complete table. A direct-transition matrix is backwards deterministic if and only if for each Ik and Sj there is at most one state Si such that Ik Si directly produces Sj. We give an example of a backwards-deterministic matrix of order 3:

Io v I1    I2         0
0          Io v I1    I2
I2         0          Io v I1

Another example of interest is

Io    I1    I2    I3
I1    I2    I3    Io
I2    I3    Io    I1
I3    Io    I1    I2

Besides being backwards deterministic, this matrix has the property that a direct transition is possible from any state to any other state. We call such a matrix directly strongly connected; see Definition 6 below. THEOREM 3: The disjunction of every column of a direct-transition matrix is a tautology if and only if that matrix is backwards deterministic. An abstract automaton is backwards deterministic if and only if its direct-transition matrix is backwards deterministic.

We prove first that having every column sum (logically) to a tautology is a necessary and sufficient condition for a direct-transition matrix to be backwards deterministic. If every column sums to a tautology, every input word must occur at least once in each column. But no input word could occur twice in a column, because in the N by N matrix every input word must appear exactly once in a row and there are exactly N occurrences of each input word. If an input word occurred twice in the same column, then at least one of the (N-1) remaining columns would miss that word and could not be a tautology. Hence for a given state Sj at time t+1, and a given input word Ik at time t, there is only one state Si which together with Ik could have directly produced Sj. Therefore, having every column sum to a tautology is a sufficient condition for a direct-transition matrix to be backwards deterministic. The proof that it is a necessary condition is obtained by reversing the considerations just used. In a backwards-deterministic matrix no input word can occur twice in the same column. But each row contains exactly one occurrence of each input word and there are exactly N occurrences of each input word in the matrix. Hence no input word is missing in any column, because otherwise it would have to occur twice or more in at least one other column. Therefore, every column must sum to a tautology. We show next that if a direct-transition matrix is backwards deterministic, the abstract automaton is backwards deterministic. Consider a finite sequence I(0), I(1), ..., I(t), S(t+1). It follows from the results of the preceding paragraph that there is exactly one S(t) which together with I(t) directly produced S(t+1). Iterating this argument, we see that there is a unique sequence S(0), S(1), ..., S(t) satisfying the complete table (for the given I(0), ..., I(t), S(t+1)).
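Theorem 3's column criterion can be checked mechanically. The sketch below is our own modern encoding (set-valued matrix elements, as in the earlier counter sketch), not the report's; it compares the column-tautology test with the direct definition of backwards determinism.

```python
# Hypothetical sketch of Theorem 3's condition: a direct-transition
# matrix (sets of input words per element) is backwards deterministic
# iff every input word occurs in every column.
def backwards_deterministic(M, inputs):
    n = len(M)
    for k in inputs:
        for j in range(n):
            preds = [i for i in range(n) if k in M[i][j]]
            if len(preds) > 1:        # two states reach Sj under Ik
                return False
    return True

def columns_are_tautologies(M, inputs):
    n = len(M)
    return all(
        {w for i in range(n) for w in M[i][j]} == set(inputs)
        for j in range(n)
    )

# The 4-stage counter is backwards deterministic: the two tests agree.
counter = [[{0} if j == i else {1} if j == (i + 1) % 4 else set()
            for j in range(4)] for i in range(4)]
assert backwards_deterministic(counter, {0, 1})
assert columns_are_tautologies(counter, {0, 1})
```

The agreement of the two tests presupposes, as in the theorem, that each row already partitions the input words (forward determinism).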
To prove the second part of the theorem in the other direction, we note that if a matrix is not backwards deterministic, there will be some Ik and some Sj such that there are two distinct states Sa and Sb, either of which will, together with Ik, directly produce Sj. Hence for the sequence I(0) = Ik and S(1) = Sj, there are two sequences [namely, S(0) = Sa and S(0) = Sb] satisfying the complete table. In Section 2.1 we remarked that in the presence of our deterministic assumption an infinite past would be inconvenient. The first part of Theorem 3 may be used to justify this statement. In order to describe the behavior of a net over a certain period of time t, t+1, ..., t+w, we would naturally need to know the inputs I(t), I(t+1), ..., I(t+w) and the internal state S at one of these times (or perhaps at t+w+1). Now if every net were backwards deterministic, it would not matter for which time S was known. But for a net which is not backwards deterministic we must know S(t) to determine S(t+1), ..., S(t+w). Hence we might as well pick the time t = 0 as a standard reference point for our analysis and always work forward from this time; we therefore allow t to range over the nonnegative integers only. [We could define backwards deterministic on the basis of each infinite sequence ..., I(-1), ..., I(0), ..., I(t), S(t+1) determining an infinite sequence ..., S(-1), ..., S(0), ..., S(t) and

conduct the discussion in terms of this definition. Theorem 3 then holds with the following exception: the direct-transition matrix of a backwards-deterministic automaton may fail to be backwards deterministic with regard to states which cannot be recovered. For example, a backwards-deterministic automaton can have a transition matrix in which both Sa Ik and Sb Ik directly produce the same state Sj, but in which no state and input combination directly produces Sa or Sb.] We turn now to the task of generalizing the notion of direct-transition expression to cover nondirect transitions. Consider an example. Suppose it is possible to go from state three of an automaton to state seven either with the sequence I4, I6 or with I2. We could write this as I4I6 v I2, but it must be understood that juxtaposition here represents a noncommutative type of conjunction, since I6 followed by I4 may not carry the net from state three to state seven. We will sometimes use a special operation, called concatenated conjunction, to express the order-preserving conjunction needed here. Thus the above may be written I4 ∘ I6 v I2. However, the concatenated-conjunction symbol ∘ may be omitted if the context makes clear what is intended. The noncommutative nature of concatenated conjunction can be brought out by making the role of time explicit: I4 ∘ I6 is short for I4(t) . I6(t+1), while I6 ∘ I4 is short for I6(t) . I4(t+1). Clearly I4(t) . I6(t+1) is not equivalent to I6(t) . I4(t+1). Concatenated conjunction can be used to build up transition expressions from direct-transition expressions. For example, we might have the transition expression I3,2 ∘ I2,5 ∘ I5,7 v I3,7 or, more briefly, I3,2I2,5I5,7 v I3,7.
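Concatenated conjunction can be sketched in modern terms by modeling a transition expression as the set of input-word sequences it admits. This encoding is our own illustration, not the report's formalism.

```python
# Sketch (our encoding, not the report's): a transition expression as
# the set of admissible input-word sequences. Disjunction is set
# union; concatenated conjunction is pairwise concatenation of
# sequences, which is plainly noncommutative.
def disj(p, q):
    return p | q

def concat(p, q):
    return {a + b for a in p for b in q}

I4, I6, I2, I9 = ({('I4',)}, {('I6',)}, {('I2',)}, {('I9',)})

# I4 o I6 v I2: either the two-step sequence I4, I6 or the word I2.
expr = disj(concat(I4, I6), I2)
print(sorted(expr))               # [('I2',), ('I4', 'I6')]

# Noncommutativity: I4 o I6 differs from I6 o I4.
assert concat(I4, I6) != concat(I6, I4)

# Distribution: (p v q) o (r v s) = p o r v p o s v q o r v q o s.
p, q, r, s = I4, I6, I2, I9
assert concat(disj(p, q), disj(r, s)) == \
       disj(disj(concat(p, r), concat(p, s)),
            disj(concat(q, r), concat(q, s)))
```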
In a concrete case it might reduce to the transition expression I4 ∘ (I6 v I9) ∘ I5 v (I3 v I6) or, more briefly, I4(I6 v I9)I5 v (I3 v I6), which is of course equivalent to the transition expression I4 ∘ I6 ∘ I5 v I4 ∘ I9 ∘ I5 v I3 v I6 or, more briefly, I4I6I5 v I4I9I5 v I3 v I6. For this expansion we use a distribution principle for concatenated conjunction: (p v q) ∘ (r v s) ≡ (p ∘ r v p ∘ s v q ∘ r v q ∘ s). We can now define the general concept of transition expression. A transition expression is a disjunction of concatenated conjunctions of direct-transition expressions, provided that any concatenated conjunction which contains a 0 may be replaced by 0. Thus I2,5 ∘ I5,3 v I2,3 might become (I3 v I7) ∘ 0 v 0, which would reduce to 0. We allow direct-transition expressions as special cases of transition expressions. Definition 6: A transition matrix of order N is an N by N array whose elements are transition expressions. Two transition matrices of order N can be combined by the following operations, where α(a,b), β(a,b) are transition expressions for transitions from state a to state b. Matrix disjunction: [α(a,b)] v [β(a,b)] = [α(a,b) v β(a,b)].

Matrix concatenated conjunction: [α(a,b)] ∘ [β(a,b)] = [γ(a,b)], where

γ(a,b) = Σ_{i=0}^{N-1} α(a,i) ∘ β(i,b),

where Σ represents disjunction.

Matrix (concatenated) power: M^1 = M; M^(n+1) = M^n ∘ M.

Sum of matrix powers: Σ_{i=1}^{n} M^i = M v M^2 v ... v M^n.

The characteristic matrix C(M) of a transition matrix M is obtained by replacing the elements of M with zeros or ones, according to whether the elements do or do not reduce to 0, i.e., according to whether transitions from state a to state b are not or are possible by M. A direct-transition matrix M is directly strongly connected if and only if every element of C(M) is unity. Inequality between characteristic matrices is defined by: C(M) ≤ C(N) if and only if for each element α(a,b) of C(M) and the corresponding element β(a,b) of C(N), α(a,b) ≤ β(a,b), i.e., α ⊃ β. THEOREM 4: Let M be a direct-transition matrix of order n. A transition from state Si to state Sj in exactly w steps is possible if and only if the element α(i,j) of C(M^w) is one. A transition from Si to Sj in w or fewer steps is possible if and only if the element α(i,j) of C(Σ_{k=1}^{w} M^k) is one. A transition from state Si to state Sj is possible if and only if (a) for i ≠ j, the element α(i,j) of C(Σ_{k=1}^{n-1} M^k) is one; (b) for i = j, the element α(i,j) of C(Σ_{k=1}^{n} M^k) is one. A net is strongly connected if and only if every element of C(Σ_{k=1}^{n} M^k) is unity. If a and b are positive integers, then

C(Σ_{k=1}^{a} M^k) ≤ C(Σ_{k=1}^{a+b} M^k) and C(Σ_{k=1}^{n+a} M^k) = C(Σ_{k=1}^{n} M^k).

To prove this theorem, we first examine the structure of the elements of M^w, where M is the direct-transition matrix. Each element α(i,j) is a disjunction of concatenated conjunctions, each of the form I_{i,a1} ∘ I_{a1,a2} ∘ ... ∘ I_{a(w-1),j}. Clearly a transition from Si to Sj in exactly w steps is possible if and only if at least one of these concatenated conjunctions does not reduce to 0, i.e., if and only if the element α(i,j) of C(M^w) is unity. The matrix Σ_{k=1}^{w} M^k has as its elements transition expressions covering transitions in 1, 2, ..., or w steps, and hence the elements of C(Σ_{k=1}^{w} M^k) are one or zero according to whether a transition from Si to Sj can or cannot be made in w or fewer steps. It remains for us to show that beyond a certain power (n for i = j, n-1 for i ≠ j), raising M to a higher power does not add to the possible transitions that can occur, but only to the way in which they occur. Consider two distinct states, Si and Sj. There are only n-2 other states to pass through. The automaton's being in one of these states Sk for more than one moment of time does not increase the possibility of getting to Sj from Si, since whatever can be accomplished from a later occurrence of Sk can be accomplished from the first occurrence of Sk. The argument is similar for possible transitions from Si to Si, with the difference that here we must consider n-1 other states.
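In modern terms, the reachability claims of Theorem 4 amount to boolean matrix arithmetic on the characteristic matrix C(M). The sketch below is ours (0/1 matrices as nested lists), not a procedure from the report.

```python
# Sketch of Theorem 4 over C(M): boolean matrix products give
# reachability in exactly w steps, and the disjunction of the first
# n powers decides whether the net is strongly connected.
def bool_mult(A, B):
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def bool_or(A, B):
    n = len(A)
    return [[A[i][j] or B[i][j] for j in range(n)] for i in range(n)]

def strongly_connected(C):
    """Disjoin C(M^1) v ... v C(M^n) and test for all ones."""
    n = len(C)
    power, total = C, C
    for _ in range(n - 1):
        power = bool_mult(power, C)
        total = bool_or(total, power)
    return all(all(row) for row in total)

# C(M) for the 4-stage cyclic counter: ones on the diagonal (input 0)
# and on the superdiagonal mod 4 (input 1).
C = [[1 if j in (i, (i + 1) % 4) else 0 for j in range(4)]
     for i in range(4)]
assert strongly_connected(C)
```

Replacing the boolean operations by the disjunction and concatenated conjunction of transition expressions would recover the full matrix calculus of the text.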

4. CYCLES, NETS, AND QUANTIFIERS

4.1. DECOMPOSING NETS

In this section we discuss cycles in nets and their bearing on the application of logic to net analysis. As a first step we discuss the elimination of unnecessary cycles from nets. A well-formed net (w.f.n.) may have unused switching-element input wires. This is especially likely to be the case for a coded normal form net constructed from a characterizing table, for not all bits of D and I need influence a given delay input junction. By inspection of the characterizing table of a w.f.n., we can tell which bits are irrelevant to a switch output Ci. A particular bit Aj of (I, D) is irrelevant to Ci if and only if for each pair of words (I, D) identical in every position except Aj, the value of Ci is the same. Using this criterion we can eliminate all the unused switch input wires by replacing the original switching elements with other elements which behave the same for all inputs and on which every switch input has an influence. The same process can be applied to an output table and an output switch. It should be noted that the above process is a minimization technique, i.e., a technique for producing a simpler net which realizes the same transformations as the original net. In Section 2.3 we showed how to minimize the number of delay elements by using a coded normal form. Other minimization methods are implicit in the results of preceding sections. For example, if two junctions of a net behave the same (cf. the decision procedure of Section 2.2), one may be eliminated. Note that for these minimization procedures we can work from complete tables, characterizing tables, and output tables; we need not refer to the net diagrams at all. However, our main interest at present is not in minimality in general, but in minimality only insofar as it relates to the number and nature of cycles in a net.
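The irrelevant-bit criterion lends itself to a direct mechanical check. The sketch below is a modern illustration of ours; the function names and the truth-table encoding are assumptions, not the report's notation.

```python
# A minimal sketch of the irrelevant-bit criterion: bit j is
# irrelevant to switch output Ci iff Ci agrees on every pair of
# words differing only at position j.
from itertools import product

def irrelevant(ci, n_bits, j):
    """ci: function from a bit tuple to 0/1; test bit position j."""
    for word in product((0, 1), repeat=n_bits):
        flipped = list(word)
        flipped[j] ^= 1               # toggle only position j
        if ci(word) != ci(tuple(flipped)):
            return False
    return True

# Example: a 3-input switch that ignores its middle wire.
ci = lambda w: w[0] & w[2]
print([j for j in range(3) if irrelevant(ci, 3, j)])   # [1]
```

A wire found irrelevant in this sense is exactly one that the minimization step above would remove.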
For example, every normal form net with at least one delay element will have cycles, while the corresponding net with no irrelevant switching-element inputs may have either fewer cycles or perhaps no cycles at all. For this reason we shall hereafter consider only such nets. Our next task is to define a measure of the complexity of the cycles of a net.

A sequence of junctions A1, A2, ..., An, A1 (possibly with repetitions) constitutes a cycle if and only if each Aj is an input to an element whose output is Ak, where k ≡ j+1 modulo n. Thus a junction occurs in a cycle if it is possible to start at that junction, proceed forward (in the direction of the arrows) through switching elements and delay elements, and ultimately return to the junction. A junction which does not occur in a cycle has degree zero, as does an input-independent junction. It should be noted that this definition assigns degree zero to some junctions occurring in cycles, i.e., to all input-independent junctions which occur in cycles. The reason will become clear in the next subsection. For the same reason we require a further modification of the net before degrees are assigned to the remaining junctions of it. That modification is to replace all cycles containing both input-independent and non-input-independent junctions by cycles containing only one of these kinds of junctions. Let C be an input-independent junction occurring in a cycle with a non-input-independent junction E. Break the cycle at C by deleting the element whose output wire is joined to junction C; to make the net behave the same, we connect C to the output of a subnet which realizes the input-independent transformation originally realized by C. Such a subnet may be so constructed that it has only one cycle, and such that every junction in it is an input-independent junction (Burks and Wright,1 Theorem XIII, p. 1363). Thus, given any net N, we can find an equivalent net N' with no more cycles than N and which has no cycles containing both input-independent and non-input-independent junctions. We say that a net with no irrelevant switching-element inputs and with no cycles containing both input-independent and non-input-independent junctions is in reduced form.
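The cycle criterion just given, with the net viewed as a directed graph of junctions, can be sketched as a reachability test. This Python illustration is ours; the junction names in the example graph are hypothetical.

```python
# Sketch: a junction occurs in a cycle iff, proceeding forward
# through the elements, one can return to it; on the junction
# digraph this is reachability of a node from itself.
def in_cycle(graph, start):
    seen, frontier = set(), list(graph.get(start, []))
    while frontier:
        v = frontier.pop()
        if v == start:
            return True
        if v not in seen:
            seen.add(v)
            frontier.extend(graph.get(v, []))
    return False

# Hypothetical net: A feeds B, and B and C form a cycle feeding D.
net = {'A': ['B'], 'B': ['C'], 'C': ['B', 'D'], 'D': []}
print([j for j in net if in_cycle(net, j)])   # ['B', 'C']
```

Junctions A and D would receive degree zero under the definition above.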
We assign degrees to all the junctions of N' (and hence derivatively to all junctions of N) as follows. The degree of a non-input-independent junction which occurs in a cycle is the maximum number of distinct delay elements it is possible to pass through by traveling around cycles in which the junction occurs. Figure 5 shows a net with the degree of each junction in parentheses. (We stipulate that in Fig. 5 the switching functions are so chosen that no junction is input-independent, and the net is in reduced form.) Note that in order to get to both E2 and E3 from Co, it is necessary to pass through E1 twice. The degree of a net is the maximum of the degrees of its junctions. Figure 5 is of degree 3. A net is entirely connected if and only if its degree is greater than zero and the number of delay elements in it is equal to its degree. This notion should be compared with the analogous notion of "strongly connected," defined in Section 3.3. We define directly entirely connected analogously to the notion "directly strongly connected" of Section 3.3; that is, in a directly entirely connected net it is possible to start at any delay output junction and proceed forward to any delay input junction, passing only through switching elements. One of these sets of notions concerns states; the other set concerns the bits used to represent states.

Fig. 5. Net of degree 3. (Maximal subnets 1-4, of degrees 0, 3, 1, 0 and ranks 0, 1, 2, 3, respectively.)

Figure 5 is not entirely connected, but it may be completely decomposed into two nets of degree zero (the net A1-Eo and the net E4-E1-A1-C3-E5) and two entirely connected subnets (Eo-Co-E1-A2-C1-E2-E3 and Eo-Co-E4-C2). A maximal entirely connected subnet associated with a net junction, say F, is the net formed of all junctions which occur in a cycle with F, together with the elements between these junctions, and the switch input junctions of all switches whose output junctions are in a cycle with F. Subnet 2 of Fig. 5 (the net Eo-Co-E1-A2-C1-E2-E3) is a maximal entirely connected subnet associated with the junctions E2, C1, Co, E1, E3. The part of this subnet which results by deleting the delay element between E1 and E3 is an entirely connected subnet of Fig. 5, but it is not maximal. Since any two junctions of a net either do or do not occur in the same cycle, each element of a net either belongs to a subnet of degree zero (i.e., is not in a cycle) or belongs to a unique, maximal, entirely connected subnet of the original net. (Note in this connection that "occurring in the same cycle" is a transitive relation.) A given net element may belong to several subnets of degree zero; e.g., the delay C3-E5 of Fig. 5 belongs to subnet 4 and to the subnet consisting of itself. There are various ways to group the elements connected to junctions of degree zero into maximal subnets, of which we will give one. Let A be a junction of degree zero and B be any other junction of the net. Proceeding forward from B to A along a certain path, we can pass through n (n ≥ 0) maximal entirely connected subnets before arriving at A. Note that n is bounded, for if we could pass through a given maximal entirely connected subnet M and then (always proceeding forward in the direction of the arrows) later come back to M and pass through it again, it would not be the case that M is maximal.
Since there are a finite number of junctions in the net and a finite number of paths from each to A, there is a maximum such number N to be associated with A. Then group into a maximal subnet of degree zero all the elements lying between junctions with the same maximal numbers N, together with the input wires of all switches whose output junctions are assigned the number N. Subnet 4 of Fig. 5 is a maximal subnet of degree zero. It is by now clear that any net in reduced form can be uniquely and effectively decomposed into maximal entirely connected subnets and maximal subnets of degree zero, i.e., into maximal subnets of various degrees. Figure 5 is uniquely decomposed into the four subnets shown there. To decompose a net, one need only find the degrees of the junctions, one by one, remove all input-independent junctions which occur in cycles with non-input-independent junctions, determine the classes of junctions belonging to the same cycles, and then determine the maximal subnets of degree zero.
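In graph terms, the decomposition just described can be sketched with strongly connected components: the maximal entirely connected subnets correspond to nontrivial components of the junction digraph, and the condensation of the graph is cycle-free, which is what makes the rank assignment of the next paragraphs well defined. The algorithm below (Tarjan's, a modern stand-in for the report's procedure) and the junction names in the example are our own assumptions.

```python
# Sketch (not the report's algorithm): treating the net as a digraph
# of junctions, compute its strongly connected components; components
# with more than one junction (or a self-loop) play the role of
# maximal entirely connected subnets.
from itertools import count

def sccs(graph):
    """Tarjan's algorithm; graph maps node -> list of successors."""
    index, low, on, stack, out = {}, {}, set(), [], []
    c = count()
    def visit(v):
        index[v] = low[v] = next(c)
        stack.append(v); on.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w); low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop(); on.discard(w); comp.append(w)
                if w == v: break
            out.append(comp)
    for v in graph:
        if v not in index:
            visit(v)
    return out

# Hypothetical junction digraph with one cycle through C0, E1, C2, E4.
g = {'A1': ['E0'], 'E0': ['C0'], 'C0': ['E1'], 'E1': ['C0', 'C2'],
     'C2': ['E4'], 'E4': ['C0']}
print(sorted(map(sorted, sccs(g))))
# [['A1'], ['C0', 'C2', 'E1', 'E4'], ['E0']]
```

Contracting each component to a box yields an acyclic diagram, as noted for ranks below; rank is then just longest-path depth in that condensation.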

A rank can then be assigned to each maximal subnet inductively. A maximal subnet which has no net inputs or whose only inputs are net inputs is of rank 0. A maximal subnet which has at least one input from another maximal subnet of rank r and no inputs from maximal subnets of rank greater than r is of rank r+1. See Fig. 5 for an example of ranks. There may, of course, be several maximal subnets of the same rank. It is clear that every maximal subnet has a unique rank, for there cannot be two such subnets driving each other, else they would not be maximal (cf. Theorem IX of Burks and Wright1). It is worth noting that if each maximal subnet of a net is replaced by a single box with inputs and outputs, the result is a diagram without cycles. The following structure theorem summarizes these results. THEOREM 5: Any net in reduced form may be uniquely decomposed into (one or more) maximal subnets, each of which has a unique degree and rank. We conclude this subsection with a conjecture: For any degree d, there is some transformation not realized by any net of degree d. This means that there is no maximal degree such that any transformation can be realized by a net of this degree. Our grounds for making this conjecture are as follows. Consider counters with one input, designed to produce an output modulo m. When m is a power of two, one can construct a sequence of binary counters, each of degree one, each driving its successor, except the last one, which drives nothing but produces the desired output, and the whole net will be of degree 1. When m is not a power of two, the standard way of constructing the desired counter is to take a counter modulo a power of two, sense with a switch when it reaches m-1, and use that information to clear the counter back to zero. But such a feedback loop produces a net of arbitrarily high degree. Considerations of this sort lead us to believe that the conjecture is true. 4.2.
TRUTH FUNCTIONS AND QUANTIFIERS We have already indicated (Section 2.2) the close correspondence between switching nets (switches) and the theory of truth functions (the propositional calculus, Boolean algebra). That correspondence permits us to assign variables to switch inputs and to associate with each switch output a truth-functional expression which is a truth function of the input variables for that switch. Thus we can represent the output of a switch as an explicit function (in particular, a truth function) of its inputs. It is natural to seek analogs for well-formed nets in general. We will give an analog for nets of degree zero and then discuss the problem for nets of arbitrary degree. Consider first nets with delays but without cycles. For these we can express each output as an explicit function of the inputs by using the theory of truth functions enriched with the delay operator δ. Thus in Fig. 5

Eo ≡ δA1. That this can always be done for noncyclic nets can be proved from the formation rules (with rule 5 deleted, of course); the considerations involve generalizations of those connected with the concept of rank in Burks and Wright,1 p. 1361 ff. We next mention two theorems for delay nets without cycles. The first concerns shifting a delay operator across a logical connective; for example, δ(A & B) ≡ δA & δB. To prove this formula, we apply the definition of δ to both sides:

δ[A & B](0) ≡ 0 ≡ δA(0) & δB(0)
δ[A & B](t+1) ≡ A(t) & B(t) ≡ δA(t+1) & δB(t+1)

The legitimacy of this operation is connected to the fact that conjunction is a positive truth function (i.e., has the value zero when all its arguments are zero). In general, if P is a positive truth function, the following holds: δP(A1, A2, ...) ≡ P(δA1, δA2, ...). Both v and ≢ are also positive truth functions. A negative truth function is one which has the value one when all arguments are zero; ¬, |, ≡, and ⊃ are examples. To develop analogous principles for these, we need an operator δ' defined by

δ'A(0) ≡ 1
δ'A(t+1) ≡ A(t)

If N is a negative truth function, we have δ'N(A1, A2, ...) ≡ N(δA1, δA2, ...). If the negative function is not tautologous (i.e., not true for all values of its variables), then δN(A1, A2, ...) ≡ N(δ1A1, δ2A2, ...), where each δi is either δ or δ'. For example, δ¬A ≡ ¬δ'A. Note that two formulae which are the same except for the absence or presence of primes on deltas describe two functions which differ only initially; after some fixed time which is determined by the number of deltas involved they are equivalent. Shifting deltas across truth-functional connectives is equivalent to shifting all delay elements to the inputs, so the resultant net consists of delays followed by a switch. This theorem can often be used to simplify expressions and nets. For example, consider a net which realizes δ(δA ≢ A) ≢ (δA ≢ A). Applying the

theorem, we get (δδA ≢ δA) ≢ (δA ≢ A), which by the theory of truth functions reduces to δδA ≢ A. The second theorem concerns input-independent transformations. By a result of Section 2 every such transformation is periodic, and hence is of the form ζααα..., where ζ and α are binary words. For example, in 1010100100100..., ζ is 1010 and α is 100. We call the length of α in bits the periodicity of the transformation, assuming that α is of minimum length. The periodicity of our example is three (not six, or nine, etc.). The second theorem states that the class of input-independent transformations realized by cycle-free nets is equivalent to the class of periodic transformations of period one. We omit a detailed proof. The essential point in showing that every input-independent transformation realized by a cycle-free net is of period one lies in the fact that an automaton without cycles cannot remember anything for more than a fixed period of time. To show that every periodic transformation of period one can be realized by a noncyclic net, we can use part of the construction of the figure for Theorem XIII of Burks and Wright.1 With this construction we can realize any transformation of the form ζ0000.... To realize a transformation of the form ζ1111..., we feed ζ̄0000... through a negation element, where ζ̄ is the bitwise complement of ζ. Consider next input-independent nets, i.e., nets all of whose internal junctions realize input-independent transformations. These nets may have cycles. Nevertheless, we can express the behavior of a net output as an explicit function of the inputs (in a vacuous sense) without using quantifiers. To do so it suffices to state the times at which the junctions are active. Thus, for F(t) = 111010101..., we have F(t) ≡ [(t = 1) v (t ≡ 0 mod 2)].
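The delay operators and the periodic junction F can be sketched as operations on 0/1 time series. This Python illustration is ours, not the report's; the example series A and B are arbitrary.

```python
# A sketch of the delay operators on 0/1 time series: delta A(0) = 0,
# delta A(t+1) = A(t); delta' differs only in its initial value 1.
def delta(A):
    return lambda t: 0 if t == 0 else A(t - 1)

def delta_p(A):                      # the primed operator delta'
    return lambda t: 1 if t == 0 else A(t - 1)

A = lambda t: t % 2                  # 0 1 0 1 ...
B = lambda t: 1                      # 1 1 1 1 ...

# Shifting across a positive connective: delta(A & B) = delta A & delta B.
lhs = delta(lambda t: A(t) & B(t))
for t in range(8):
    assert lhs(t) == (delta(A)(t) & delta(B)(t))

# Across negation (a negative connective): delta(not A) = not(delta' A).
lhs = delta(lambda t: 1 - A(t))
for t in range(8):
    assert lhs(t) == 1 - delta_p(A)(t)

# The input-independent junction F(t) = 111010101... as an explicit
# arithmetic predicate, as in the text.
F = lambda t: int(t == 1 or t % 2 == 0)
print([F(t) for t in range(9)])      # [1, 1, 1, 0, 1, 0, 1, 0, 1]
```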
We can now let an input of a noncyclic net be driven by an input-independent junction, and by making an appropriate substitution still obtain an expression for the output as an explicit function of the inputs. Thus, given C(t) ≡ A0(t) & δA1(t), we can identify A1 with F above and obtain

C(t) ≡ A0(t) & δ[(t = 1) v (t ≡ 0 mod 2)]
     ≡ A0(t) & [(t = 2) v ((t > 0) & (t ≡ 1 mod 2))].

We can further extend our theory of truth functions to include expressions like those just used. By adding t = a, t > a, and (t - a) ≡ c mod b, where t is a variable and a, b, and c are integers, we can describe any periodic function (using, of course, the truth-functional connectives). We call the theory

obtained by adding these symbols and the operator δ the extended theory of truth functions. It is clear from the preceding discussion that the following theorem holds. THEOREM 6: For every junction of a net of degree zero, we can effectively construct a formula of the extended theory of truth functions which describes the behavior of the junction as an explicit function of the behavior of the inputs. This theorem provides the motivation for our decision in the preceding subsection to classify input-independent junctions occurring in cycles along with non-input-independent junctions not occurring in cycles, for both can be handled by our extended theory of truth functions. A much more difficult problem is to find formulae which describe the behavior of junctions of degree greater than zero as explicit functions of the net inputs. The natural place to seek such formulae is quantification theory, the next step beyond truth-function theory in the usual development of symbolic logic. The theory of quantifiers uses, in addition to the truth-functional connectives, the quantifiers "(x)" ("all x"), "(∃x)" or "(Ex)" ("some x"), etc. The functional expressions of net theory, "A(t)," "B(t+3)," etc., are clearly monadic propositional functions or predicates. An essential feature of a deterministic net is that an output C(t) cannot depend on any inputs for times greater than t; hence the quantifiers used must be bounded. These bounds may be expressed by predicates such as "x < t" and "x < y < t," which are basically dyadic (the second is triadic but is easily reduced to dyadic predicates). Hence the required form of quantification theory involves monadic predicates and bounded quantifiers ranging over the nonnegative integers.
Figure 6A shows a very simple cyclic net; it is described by the bounded quantifier expression

(4.2-1) E(t) ≡ (Ex): x < t . A(x),

which states that E is active at t if and only if A has been active at some prior time. The slightly more complicated cyclic net shown in Fig. 6B is described by the quantifier expression

(4.2-2) C(t) ≡ (Ex): x ≤ t . A0(x) : (y): x ≤ y ≤ t . ⊃ . A1(y),

which asserts that C is active at time t if and only if there is some nonlater time x at which A0 was active and such that at that time and all later times A1 was active. It is easy to give examples of quantifier formulae for much more complicated nets with cycles. Whether or not formulae of this type can be found for arbitrary w.f.n. is an open question.
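Formula (4.2-1) can be checked against a recursive description of the net of Fig. 6A. The latch recursion below is our reading of that net (the input A disjoined with the fed-back delay output), not a construction given explicitly in the report.

```python
# Sketch: the simple cyclic net of Fig. 6A, written recursively as a
# latch E(t+1) = A(t) v E(t), E(0) = 0, agrees with the explicit
# quantifier description E(t) <-> (Ex)(x < t and A(x)).
def latch(A, horizon):
    E = [0]
    for t in range(horizon):
        E.append(E[t] | A[t])        # delay element fed by A v E
    return E

A = [0, 0, 1, 0, 0, 1, 0]
E = latch(A, len(A))
for t in range(len(A) + 1):
    # explicit (bounded-quantifier) description of the same junction
    assert E[t] == int(any(A[x] for x in range(t)))
```

The agreement of the recursion with the bounded-quantifier formula is exactly the recursive-to-explicit conversion discussed below.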

Fig. 6. Two simple nets.

It should be noted that in the above examples the quantifier expressions do describe the output as an explicit function of the inputs; i.e., the only function variables on the right are input variables. That is analogous to using a truth-functional expression to describe a switch output as a truth function of its inputs alone. It stands in contrast to the recursive methods for describing net behavior used previously, in which the output was expressed as a function not only of the input junctions of the net but also (in general) of the internal junctions (at an earlier time). In some cases such a recursive formulation is the natural way of specifying the behavior of a desired circuit. On the other hand, it is often simpler and more direct to specify the behavior of a net in terms of the inputs alone by means of quantifiers and simple arithmetic predicates like "is odd," "is between m and n," etc. Hence it is of interest to develop a form of quantification theory that will facilitate this method of characterizing an automaton and to find both effective (in the purely theoretical sense) and practical ways of passing from formulae in the calculus to the corresponding automaton nets and vice versa. The problem of finding a quantifier formula for a net characterized recursively may be viewed as one of converting recursive definitions into explicit ones. As we have remarked in Section 2.2, the transformation realized by each delay output of a net is primitive recursive relative to the net inputs. Theoretically one can use the well-known procedures for converting primitive recursive functions (cf. Hilbert and Bernays,17 pp. 412-421) to obtain the desired result. As it turns out, however, this method produces quantifier expressions in which some quantified variables range not over time but over the history of the states of the delay outputs.
The quantifier expressions so obtained are no more transparent, and usually less so, than the corresponding recursive characterization.

4.3 NERVE NETS

We will close this paper with a few remarks about nerve nets and cycles in nets. A nerve net is a special case of a well-formed automaton net, in which

each neuron consists of a positive switching element driving a delay element. Hence our general results apply to nerve nets. Not all transformations realized by well-formed nets can be realized by nerve nets. According to Theorem 2, every transformation realized by a w.f.n. can be realized by a decoded normal form net. By the results of Section 4.1 the starter of a decoded normal form net may be constructed without cycles. Hence we can construct a decoded normal form net whose cycles pass through conjunctions and delays only, and so every transformation realized by a w.f.n. can be realized by a w.f.n. in which only positive switches occur in cycles. A nerve net is a net in which only positive switches occur in cycles. It differs from a decoded normal form net in two basic respects: first, it has no starter, and second, every switch is combined with a delay. Hence, if a starter is added to the system of nerve nets, every automaton transformation can be realized by a nerve net, except that the nerve-net output may be later in time because each neuron has a delay built into it. Usually the total time lag can be made two, because a disjunctive normal form expression, e.g., (p.~q) v (~p.q), is a disjunction of conjunctions (see, for example, Kleene,11 Theorem 3). Kleene11 has investigated the logic of nerve nets in some detail. He analyzes nets in terms of the kinds of events (input histories) they can detect, and he establishes the result that an event can be detected by a net if and only if the event is regular (Theorems 3 and 5). The reader is referred to page 22 of Kleene11 for a definition of "regular"; we note here merely that an important ingredient of the notion of regularity is periodicity. For example, an input of the form aa...ab, with an indefinite number of a's, is regular.
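The lag-two realization through disjunctive normal form can be sketched in code. This is an illustrative Python model, not from the report: each "neuron" is idealized as a Boolean function followed by a unit delay, so a two-layer net of conjuncts and a disjunction introduces a total lag of two time steps.

```python
# Two-layer net computing the disjunctive normal form (p.~q) v (~p.q):
# layer 1 holds the two conjunct neurons, layer 2 the disjunction neuron.
# Each neuron delays its result by one step, so total lag is two.

def run_net(p_hist, q_hist):
    """Return the output history; inputs at time t appear at time t+2."""
    n = len(p_hist)
    c1 = [False] * (n + 1)    # neuron 1: p AND NOT q, delayed one step
    c2 = [False] * (n + 1)    # neuron 2: NOT p AND q, delayed one step
    out = [False] * (n + 2)   # neuron 3: c1 OR c2, delayed one more step
    for t in range(n):
        c1[t + 1] = p_hist[t] and not q_hist[t]
        c2[t + 1] = (not p_hist[t]) and q_hist[t]
    for t in range(n + 1):
        out[t + 1] = c1[t] or c2[t]
    return out

p = [True, False, True]
q = [False, False, True]
# The value of (p.~q) v (~p.q) at time t appears on the output at t+2.
print(run_net(p, q))   # [False, False, True, False, False]
```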
It is easy to construct a net which will be active at time t if and only if the history of its input is of the form aa...ab, for an indefinite number of a's; cf. the discussion of Section 4.2 on periodic transformations. The pervasiveness and importance of cycles in the analysis of automata and nerve nets are worth emphasizing. When cycles are permitted in automaton nets, these nets become much more powerful, and, correspondingly, the logic required to treat them becomes much more complicated. There are many ways in which nets can involve cycles. We have just noted that by Kleene's results an important aspect of any input history which can be detected or distinguished by automata is the periodicity ingredient in its regularity. By our results of the previous subsection the internal structure of an automaton is analyzable into cycles; and by earlier results (see Section 2.2) any output which is independent of the inputs is periodic, and hence cyclic in character. The relations between these various cyclic aspects of automata remain to be investigated. It would be of interest to have a theory which shows how they are interconnected.
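A detector of the kind just mentioned, active exactly when its whole input history consists of some a's followed by a single b, can be simulated directly as a small state machine. The Python sketch below is illustrative; the state names are assumptions, not the report's notation.

```python
# Three-state machine detecting the regular event aa...ab: it is
# active at time t exactly when the input history up to and including
# t is some (possibly empty) run of a's followed by one b.

def detect(history):
    state = "A_RUN"                  # so far: only a's seen
    active = []
    for ch in history:
        if state == "A_RUN":
            state = "A_RUN" if ch == "a" else "DONE"
        elif state == "DONE":        # already saw the closing b
            state = "DEAD"           # any further symbol spoils the form
        active.append(state == "DONE")
    return active

print(detect("aab"))    # [False, False, True]
print(detect("aaba"))   # [False, False, True, False]
```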

BIBLIOGRAPHY

1. Burks, Arthur W., and Jesse B. Wright, "Theory of Logical Nets," Proc. IRE, 41: 1357-1365 (1953).
2. Burks, Arthur W., and Irving M. Copi, "The Logical Design of an Idealized General-Purpose Computer," J. Franklin Inst., 261: 299-314 and 421-436 (1956).
3. Shannon, Claude, "Computers and Automata," Proc. IRE, 41: 1234-1241 (1953).
4. Rochester, N., J. H. Holland, L. H. Haibt, and W. L. Duda, "Tests on a Cell Assembly Theory of the Action of the Brain, Using a Large Digital Computer," IRE Trans. on Information Theory, 1956, pp. 80-93.
5. Turing, A. M., "On Computable Numbers, with an Application to the Entscheidungsproblem," Proc. London Math. Soc. (Series 2), 42: 230-265 (1936-37), with a correction, ibid., 43: 544-546 (1937).
6. Kleene, S. C., Introduction to Metamathematics. New York: D. Van Nostrand Company, Inc., 1952.
7. Wang, Hao, "A Variant to Turing's Theory of Computing Machines" (to be published in J. Assn. Computing Machinery).
8. Wang, Hao, "Universal Turing Machines: An Exercise in Coding" (to be published).
9. von Neumann, John, "The General and Logical Theory of Automata," pp. 1-41 in Cerebral Mechanisms in Behavior, John Wiley and Sons, 1951.
10. Kemeny, John G., "Man Viewed as a Machine," Scientific American, 192: 58-67 (1955).
11. Kleene, S. C., "Representation of Events in Nerve Nets and Finite Automata," pp. 3-41 in Automata Studies, edited by C. E. Shannon and J. McCarthy, Princeton Univ. Press, 1956.
12. Shannon, Claude, "A Symbolic Analysis of Relay and Switching Circuits," Trans. AIEE, 57: 713-723 (1938).

13. Burks, Arthur W., Robert McNaughton, Carol H. Pollmar, Don W. Warren, and Jesse B. Wright, "Complete Decoding Nets: General Theory and Minimality," J. Soc. Ind. Appl. Math., 2: 201-243 (1954).
14. De Turk, J. E., A. L. Garner, J. Kautman, A. W. Bethel, and R. E. Hock, Basic Circuitry of the MIDAC and MIDSAC. Ann Arbor: Univ. of Mich. Press, 1954.
15. Buck, D. A., "The Cryotron - A Superconductive Computer Component," Proc. IRE, 44: 482-493 (1956).
16. Moore, Edward F., "Gedanken-Experiments on Sequential Machines," pp. 129-153 in Automata Studies, edited by C. E. Shannon and J. McCarthy, Princeton Univ. Press, 1956.
17. Hilbert, D., and P. Bernays, Grundlagen der Mathematik, Vol. 1. Berlin: Springer, 1934.

DISTRIBUTION LIST (One copy unless otherwise noted)

Alabama
The Air University Libraries, Maxwell Air Force Base, Alabama

California
Applied Mathematics and Statistics Laboratory, Stanford University, Stanford, California
Department of Mathematics, University of California, Berkeley, California
Commander, Air Force Flight Test Center, Attn: Technical Library, Edwards Air Force Base, California
The Rand Corporation, Technical Library, 1700 Main Street, Santa Monica, California
Director, Office for Advanced Studies, Air Force Office of Scientific Research, Post Office Box 2035, Pasadena 2, California
Commander, Western Development Division, Attn: WDSIT, Post Office Box 262, Inglewood, California

Connecticut
Department of Mathematics, Yale University, New Haven, Connecticut

Florida
Commander, Air Force Armament Center, Attn: Technical Library, Eglin Air Force Base, Florida
Commander, Air Force Missile Test Center, Attn: Technical Library, Patrick Air Force Base, Florida

Illinois
Department of Mathematics, Northwestern University, Evanston, Illinois
Institute for Air Weapons Research, Museum of Science and Industry, University of Chicago, Chicago 37, Illinois
Department of Mathematics, University of Chicago, Chicago 37, Illinois
Department of Mathematics, University of Illinois, Urbana, Illinois

Maryland
Institute for Fluid Dynamics and Applied Mathematics, University of Maryland, College Park, Maryland
Mathematics and Physics Library, The Johns Hopkins University, Baltimore, Maryland

DISTRIBUTION LIST (Continued)

Massachusetts
Department of Mathematics, Harvard University, Cambridge 38, Massachusetts
Commander, Air Force Cambridge Research Center, Attn: Geophysics Research Library, L. G. Hanscom Field, Bedford, Massachusetts
Commander, Air Force Cambridge Research Center, Attn: Electronic Research Library, L. G. Hanscom Field, Bedford, Massachusetts

Michigan
Department of Mathematics, Wayne University, Attn: Dr. Y. W. Chen, Detroit 1, Michigan
Willow Run Research Center, University of Michigan, Ypsilanti, Michigan

Minnesota
Department of Mathematics, Folwell Hall, University of Minnesota, Minneapolis, Minnesota
Department of Mathematics, Institute of Technology, Engineering Building, University of Minnesota, Minneapolis, Minnesota

Missouri
Department of Mathematics, Washington University, St. Louis 5, Missouri
Department of Mathematics, University of Missouri, Columbia, Missouri
Linda Hall Library, Attn: Mr. Thomas Gillis, Document Division, 5109 Cherry Street, Kansas City 10, Missouri

Nebraska
Commander, Strategic Air Command, Attn: Operations Analysis, Offutt Air Force Base, Omaha, Nebraska

New Jersey
The James Forrestal Research Center Library, Princeton University, Princeton, New Jersey
Library, Institute for Advanced Study, Princeton, New Jersey
Department of Mathematics, Fine Hall, Princeton University, Princeton, New Jersey

New Mexico
Commander, Holloman Air Development Center, Attn: Technical Library, Holloman Air Force Base, New Mexico
Commander, Air Force Special Weapons Center, Attn: Technical Library, Kirtland Air Force Base, Albuquerque, New Mexico

DISTRIBUTION LIST (Continued)

New York
Professor J. Wolfowitz, Mathematics Department, White Hall, Cornell University, Ithaca, New York
Department of Mathematics, Syracuse University, Syracuse, New York
Mathematics Research Group, New York University, Attn: Professor M. Kline, 45 Astor Place, New York, New York
Department of Mathematics, Columbia University, Attn: Professor B. O. Koopman, New York 27, New York
Department of Mathematical Statistics, Fayerweather Hall, Attn: Dr. Herbert Robbins, Columbia University, New York 27, New York
Mr. I. J. Gabelman, Rome Air Development Center, Attn: RCOS, Griffiss Air Force Base, Rome, New York
Commander, Rome Air Development Center, Attn: Technical Library, Griffiss Air Force Base, Rome, New York
Institute for Aeronautical Sciences, 2 East 64th Street, New York 21, New York

North Carolina
Institute of Statistics, North Carolina State College of A and E, Raleigh, North Carolina
Department of Mathematics, University of North Carolina, Chapel Hill, North Carolina
Office of Ordnance Research (2), Box CM, Duke Station, Durham, North Carolina
Department of Mathematics, Duke University, Duke Station, Durham, North Carolina

Ohio
Commander, Air Technical Intelligence Center, Attn: ATIAE-4, Wright-Patterson Air Force Base, Ohio
Commander, Wright Air Development Center, Attn: Technical Library, Wright-Patterson Air Force Base, Ohio
Commander (2), Wright Air Development Center, Attn: ARL Technical Library, WCRR, Wright-Patterson Air Force Base, Ohio
Commandant (2), USAF Institute of Technology, Attn: Technical Library, MCLI, Wright-Patterson Air Force Base, Ohio
Chief, Document Service Center (10), Armed Services Technical Information Agency, Knott Building, Dayton 2, Ohio

DISTRIBUTION LIST (Concluded)

Pennsylvania
Department of Mathematics, Carnegie Institute of Technology, Pittsburgh, Pennsylvania
Department of Mathematics, University of Pennsylvania, Philadelphia, Pennsylvania

Tennessee
Commander, Arnold Engineering Development Center, Attn: Technical Library, Tullahoma, Tennessee
Dr. Alston S. Householder, Oak Ridge National Laboratory, Post Office Box P, Oak Ridge, Tennessee

Texas
Defense Research Laboratory, University of Texas, Austin, Texas
Department of Mathematics, Rice Institute, Houston, Texas
Commander, Air Force Personnel and Training Research Center, Attn: Technical Library, Lackland Air Force Base, San Antonio, Texas

Wisconsin
Department of Mathematics, University of Wisconsin, Madison, Wisconsin
Mathematics Research Center, Attn: R. E. Langer, University of Wisconsin, Madison, Wisconsin

Washington, D. C.
Human Factors Operations Research Laboratories, Air Research and Development Command, Bolling Air Force Base, Washington 25, D. C.
Chief of Naval Research (2), Department of the Navy, Attn: Code 432, Washington 25, D. C.
Department of Commerce, Office of Technical Services, Washington 25, D. C.
Director of National Security Agency, Attn: Dr. H. H. Campaigne, Washington 25, D. C.
Library, National Bureau of Standards, Washington 25, D. C.
National Applied Mathematics Laboratories, National Bureau of Standards, Washington 25, D. C.
Headquarters, USAF, Director of Operations, Attn: Operations Analysis Division, AFOOP, Washington 25, D. C.
Commander (2), Air Force Office of Scientific Research, Attn: SROAM, Washington 25, D. C.
Commander, Air Force Office of Scientific Research, Attn: SRRI, Washington 25, D. C.

Belgium
Commander (2), European Office, ARDC, 60 Rue Ravenstein, Brussels, Belgium
