THE UNIVERSITY OF MICHIGAN
COLLEGE OF LITERATURE, SCIENCE, AND THE ARTS
Computer and Communication Sciences Department

Technical Report

REALIZATION WITH FEEDBACK ENCODING

Dennis Paul Geller

with assistance from:
Department of Health, Education, and Welfare
National Institutes of Health
Grant No. GM-12236
Bethesda, Maryland
and
National Science Foundation
Grant No. GJ-29989X
Washington, D.C.

administered through:
OFFICE OF RESEARCH ADMINISTRATION, ANN ARBOR

May 1972


ABSTRACT

REALIZATION WITH FEEDBACK ENCODING

by Dennis Paul Geller
Chairman: John F. Meyer

Automata theorists have long been concerned with various notions of realization. All of these have, as a basic feature, an input encoder, which transforms inputs intended for the realized machine, M', into inputs to the realizing machine, M. In this paper we introduce one modification to this concept. We give the realizing machine a measure of control over the realization by introducing feedback to the input encoder. The input encoder is then a map h from pairs (q,i') to inputs of M, where q is a state of M and i' is an input to M'. We introduce the restriction that for each q, the map h(q,·) is one-to-one and onto. With this restriction, we are able to demonstrate strong ties with the theory of directed graphs. We introduce and study admissible homomorphisms of graphs and show that they play a role for realization with feedback encoding similar to that played by homomorphisms of machines for classical realization. We also investigate algebraic properties of realization with feedback encoding. Finally, we discuss two applications of the concept of realization with feedback encoding. The first is to the problem of finding series-parallel decompositions of automata, and the second involves the design of diagnosable machines capable of realizing a given state behavior.

ACKNOWLEDGEMENTS

It is difficult to adequately express appreciation to all those who have helped me reach this stage. I most certainly thank the Research Center for Group Dynamics, the Logic of Computers Group and the Computer and Communication Sciences Department, all of The University of Michigan, and the School of Advanced Technology and SUNY Binghamton for the financial aid which they provided; in this line I must also thank the Woodrow Wilson Foundation and the National Science Foundation. As for the many individuals who helped me there are, first and foremost, the members of my committee. Before there was a committee, Frank Harary provided support, and tried to teach me how to write and ask questions. I will always be grateful to Stephen Hedetniemi and Gary Chartrand, not only for their help, but also for their friendship. This thesis might never have come into existence without the superlative translating, typing and shepherding done by Jan McDougall. Finally, to my wife Naomi: thanks.

TABLE OF CONTENTS

INTRODUCTION  1
CHAPTER I - Fundamentals  4
CHAPTER II - Admissible Homomorphisms  11
CHAPTER III - Realization with Feedback Encoding  41
CHAPTER IV - Algebraic Structures and Simulation with Feedback Encoding  79
CHAPTER V - Distinguishing Sequences  107
CHAPTER VI - Summary  144
BIBLIOGRAPHY  147

LIST OF FIGURES

1.1 A state transition graph  5
1.2 A Mealy machine  8
1.3 A cascade  10
2.1 Two digraphs  12
2.2 A machine and its digraph  13
2.3 A digraph and the derived pure digraph  17
2.4 A walkwise map  23
2.5 A factorization of a walkwise map  24
2.6 A factorization of a map  24
2.7 A mapping from a digraph to a subdigraph  26
2.8 A digraph  30
2.9 An admissible homomorphism  33
3.1 Schema for realization  43
3.2 Schema for realization with feedback encoding  44
3.3 A modulo three counter  45
3.4 A machine which does not realize a modulo three counter  45
3.5 A modulo six counter  47
3.6 A reset machine  50
3.7 An admissible homomorphism based on Example 3.2  55
3.8 A digraph and its absolute square  75
4.1 Simulation with feedback encoding  80
4.2 Another simulation with feedback encoding  93
5.1 Realization with feedback encoding  108
5.2 A diagnosable realization  111
5.3 Another diagnosable realization  112
5.4 A repeated symbol diagnosable realization  113
5.5 A machine without a 2-factor  122
5.6 A functional digraph  122
5.7 A reduced autonomous machine  123
5.8 A machine with a distinguishing sequence  124
5.9 A convergence  125
5.10 A step in the proof of Theorem 5.5  126
5.11 A machine with convergences at three states  136
5.12 A line coloring of a bigraph  137
5.13 A line coloring satisfying the conditions of Theorem 5.7  138
5.14 An incomplete machine derived from Figure 5.12  139
5.15 An incomplete machine derived from Figure 5.13  140
5.16 A machine M with E(D(M)) convergences  142

INTRODUCTION

In this thesis we will be studying a new definition for the concept of realization of finite-state sequential machines. Although their content may vary, definitions of the notion of realization in the literature all fit into a basic paradigm. As the realizing machine will, in general, have a different input alphabet than the realized machine, provision is made for a memoryless input encoder, which transforms inputs "intended for" the realized machine into the input alphabet of the realizing machine. In some cases (realization [17, p. 28]) this encoder transforms symbols to symbols, and in other cases (simulation [17, p. 192]) symbols to strings. One feature, however, is always constant: the way a symbol is transformed is invariant.

In this work we propose an apparently drastic change to this paradigm: we allow the realizing machine to exert a measure of control over the realization by adding feedback from the realizing machine to the input encoder. In this way, a given symbol may be transformed in any one of a number of different ways, depending on the current state of the realizing machine. We call this new notion of realization, realization with feedback encoding. It is a truism of automata theory that adding even a small amount of feedback to a system can have profound results, and the modification which we propose here is no exception. We show, in Chapter III, and in a different context in Chapter IV, that unless some restrictions are placed on the behavior of the input encoder, the concept becomes trivial, in the sense that there may be no discernible

relation, either behaviorally or structurally, between the realized and realizing machines. The restrictions which we place on the encoders in Chapter III are natural ones, but they lead to a surprising result. Whether or not one machine can realize another with feedback encoding can be ascertained by examining their state transition diagrams with the input labels on the arcs removed. Thus, graph theoretic tools become quite valuable in the study of realization with feedback encoding. Graph Theory is reviewed in Chapter II, where we present the notion of an admissible homomorphism between graphs. Properties of admissible homomorphisms are discussed, and they are related to other forms of mappings between graphs, most notably those of McNaughton [25] and of Harary, Hedetniemi and Prins [13]. In Chapter III we show that the existence of an admissible homomorphism between the unlabelled state transition graphs of two machines is both necessary and sufficient for realization with feedback encoding of the range machine by the domain machine. We then show how well-known theorems about realization of machines by cascade compositions can be generalized to include the class of realizations with feedback encoding. Using the complexity measure size, we then generalize a result of Zeigler [36] and show that it is impossible to realize, even with feedback encoding, all machines by cascades of bounded complexity. Finally, we investigate the addition of a feedback encoder to the special case of simulation, b-slow simulation, and in the process we provide a characterization of those digraphs which are powers of other digraphs.

In Chapter IV, we investigate algebraic properties of both realizations and simulations with feedback encoding. We show that it is possible to associate a semigroup with each machine so that the question of whether one machine can realize or simulate another with feedback encoding can be resolved by examining the semigroups associated with the two machines. We also relate the notion of simulation with feedback encoding to the S*-semigroups defined by Hedetniemi and Fleck [20], and settle a conjecture of theirs in all but one case.

In Chapter V we examine the problem, given a machine, of finding a realizing machine which has a distinguishing sequence. It is shown that allowing realizations with feedback encoding makes the problem in general more tractable even with the additional restrictions, which we assume throughout, that there be no increase in state-set or input-set size, and as little increase as possible in output-set size. In particular, we define a class of machines for which the problem can always be solved when realization with feedback encoding is permitted. For other machines, we show that the difficulties which are reflected in the structure of the state transition graph of the machine to be realized can be modelled as a problem in coloring the lines of bipartite graphs. We develop a new canonical form for such colorings, and then apply it to the distinguishing sequence problem.

CHAPTER I
FUNDAMENTALS

This chapter and the next present some of the basic concepts from the theories of machines and graphs which will be important to the study at hand.

A machine, or finite-state automaton, M consists of a set Q of states, a set I of inputs, where both Q and I are finite, and a map δ: Q × I → Q. A machine M = <Q,I,δ> can be specified either by giving the map δ explicitly, or by drawing its state-transition graph, a labeled directed graph (see Chapter II) whose points are the states of M, having an arc from state qi to state qj labeled with input x ∈ I if and only if δ(qi,x) = qj. A machine is autonomous if |I| = 1.

Example 1.1 The following table for δ describes the machine M = <Q,I,δ> whose state-transition graph is given in Figure 1.1; Q = {q1,q2,q3,q4} and I = {x1,x2}.

M  | x1 | x2
q1 | q1 | q4
q2 | q4 | q1
q3 | q3 | q2
q4 | q4 | q3

We will make the notational convention that, unless otherwise specified, the state set and input set of a machine M are always denoted by Q and I, respectively; thus, for example, a machine M' has state set Q', and a machine M2 has input set I2.
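The definition of a machine and its transition table can be rendered executably. The following is a minimal sketch, not from the thesis itself: it encodes the Example 1.1 table as a Python dict (the names DELTA and run are ours, not the thesis's notation).

```python
# Example 1.1 as a dict: DELTA[(state, input)] = next state.
# (DELTA and run are illustrative names, not the thesis's notation.)
DELTA = {
    ("q1", "x1"): "q1", ("q1", "x2"): "q4",
    ("q2", "x1"): "q4", ("q2", "x2"): "q1",
    ("q3", "x1"): "q3", ("q3", "x2"): "q2",
    ("q4", "x1"): "q4", ("q4", "x2"): "q3",
}

def run(delta, q, word):
    """The extended transition function: apply a string of inputs to q."""
    for x in word:
        q = delta[(q, x)]
    return q
```

For instance, run(DELTA, "q1", ["x1", "x2"]) traces q1 under the string x1x2, matching the values q1x1x2 = q4, etc., computed for this machine below.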

Figure 1.1. A state-transition graph.

We will also omit explicit reference to δ, the transition function; if δ(qi,x) = qj we will instead write

qix = qj.    (1.1)

A machine M' is a submachine of a machine M if Q' ⊆ Q, I' ⊆ I, and δ' = δ|Q'×I'.

A string y over a set I is a nonempty finite sequence y = x1x2...xn of symbols from I; the length ℓ(y) is the number n of symbols in the sequence. Given any finite set I we define I⁺ to be the set of all strings over I. The transition function δ of a machine M can be extended to a map δ̄: Q × I⁺ → Q by setting δ̄(q,x) = δ(q,x) for all x ∈ I and δ̄(q,xy) = δ̄(δ̄(q,x),y), for x,y ∈ I⁺; alternately, we could write q(xy) = (qx)y. In fact, as is common, we will use the symbol δ or the notation of (1.1) for both δ and δ̄. It is well known that the value of qz for z ∈ I⁺ is independent of the way that z is decomposed

into a product z = xy of strings. For the machine of Example 1.1,

q1x1x2 = q4    q2x1x2 = q3    q3x1x2 = q2    q4x1x2 = q3

Let q̄ be the vector of length |Q|, where q̄(i) = qi. Each string x ∈ I⁺ transforms q̄ to a vector q̄x, where q̄x(i) = qix. Note that for the machine of Example 1.1, q̄x2 = q̄x2x2x2x2x2. Let I* = I⁺ ∪ {Λ}, where Λ, the empty string, has the properties that ℓ(Λ) = 0, ΛΛ = Λ and, for any string y ∈ I⁺, Λy = yΛ = y. Define an equivalence relation on I* by setting x ≡ y if and only if q̄x = q̄y. Then the equivalence classes under ≡ are the elements of a semigroup, S(M), where [x][y] = [xy]; S(M) is the semigroup of M.

One notion which is crucial to our study is that of one machine being able to "simulate" the behavior of another. We say that M realizes M' if there are onto maps φ: Q → Q' and h: I' → I such that

φ(q)x' = φ(q h(x')).    (1.2)

The map h is called the input encoder. If the map φ is one-to-one we say that the realization is isomorphic, and otherwise it is homomorphic. In either case, φ is said to be a homomorphism with substitution property, or an SP homomorphism. An important property of an SP homomorphism between two machines is that if φ(q1) = φ(q2) then for all x ∈ I,

φ(q1x) = φ(q2x).    (1.3)

The partition which an SP homomorphism induces on its domain is an SP partition.
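The semigroup S(M) lends itself to direct computation: represent each string by its transformation vector q̄x and close the generator vectors under right multiplication by single inputs. The sketch below is our own code (vec and semigroup are illustrative names), repeating the Example 1.1 table; it is not a construction given in the thesis.

```python
# S(M) for Example 1.1: strings are identified when they induce the
# same transformation vector (q1 x, q2 x, q3 x, q4 x).
DELTA = {
    ("q1", "x1"): "q1", ("q1", "x2"): "q4",
    ("q2", "x1"): "q4", ("q2", "x2"): "q1",
    ("q3", "x1"): "q3", ("q3", "x2"): "q2",
    ("q4", "x1"): "q4", ("q4", "x2"): "q3",
}
STATES = ("q1", "q2", "q3", "q4")
INPUTS = ("x1", "x2")

def vec(word):
    """The vector q-bar x: the image of each state under the string."""
    out = []
    for q in STATES:
        for x in word:
            q = DELTA[(q, x)]
        out.append(q)
    return tuple(out)

def semigroup():
    """All distinct transformation vectors, closed under concatenation."""
    frontier = [vec((x,)) for x in INPUTS]
    seen = set(frontier)
    while frontier:
        v = frontier.pop()
        for x in INPUTS:
            w = tuple(DELTA[(q, x)] for q in v)  # the string v followed by x
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return seen
```

In particular, vec confirms the identity q̄x2 = q̄x2x2x2x2x2 noted in the text: x2 permutes the states in a 4-cycle, so x2 and x2⁵ induce the same transformation.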

A machine M simulates a machine M' if there are maps φ: Q → Q' and h: I' → I⁺ such that

φ(q)x' = φ(q h(x')).    (1.4)

A special case of simulation is b-slow simulation; M b-slow simulates M' if M simulates M' and the input encoder h has the property that for each x' ∈ I', ℓ(h(x')) = b.

Often it is not important to know precisely to which state an input takes a machine. Instead, outputs are associated with a machine, and one is interested in the mapping from inputs to outputs which the machine induces. A Mealy machine is a machine M = <Q,I,δ> together with a finite set Y of outputs and an output function λ: Q × I → Y; M = <Q,I,δ,λ,Y>. It may often be the case that for any states q1,q2 and inputs x1,x2, q1x1 = q2x2 implies that λ(q1,x1) = λ(q2,x2). In such a machine λ(q,x) can be considered to be a function of the next state, qx. Such a machine is called a Moore machine, and, as the meaning will always be clear from context, we will write λ(q,x) = λ(qx). In fact, the two types of machines define the same classes of behaviors. In many applications, Moore machines are instead defined so that λ(q,x) = λ(q). We chose the formulation which we did (see [1]) to make the derivations in Chapter V more tractable although, as we point out there, we actually lose no generality by doing so.

As with the transition function, we can extend λ to a map with domain Q × I⁺. In fact, this can be done in two ways. We define λ̄: Q × I⁺ → Y by λ̄(q,x) = λ(qy,x1), where x = yx1 ∈ I⁺ and x1 ∈ I. On the other hand, β: Q × I⁺ → Y⁺ is the map β(q,x1...xn) = λ(q,x1)λ(qx1,x2)...λ(qx1...x_{n-1},xn). Using the same symbol, λ, for both λ and λ̄, we see that in the Mealy machine M = <{q1,q2,q3,q4},

{x1,x2}, δ, λ, {0,1}> of Figure 1.2,

Figure 1.2. A Mealy machine.

λ(q1,x1x2) = λ(q4,x1x2) = λ(q2,x1x2) = 1, and β(q1,x1x2) = β(q4,x1x2) = 01, but β(q2,x1x2) = 11.

We say that two states q1 and q2 of a machine are equivalent if, for each x ∈ I⁺, λ(q1,x) = λ(q2,x). A machine is reduced if no two states are equivalent. For every machine M which is not reduced there is a unique reduced machine which is formed by identifying each set of equivalent states to a single state. If M and M' are machines with output then M realizes M' if there are onto maps φ: Q → Q', h: I' → I, g: Y → Y' such that

φ(q)x' = φ(q h(x'))    (1.5)
λ'(φ(q),x') = g(λ(q,h(x'))).

It then follows that every machine realizes its reduced machine.

Given machines M1 and M2, let Z be a map Z: Q1 × I1 → I2. The cascade M1 ∘Z M2 of M1 and M2 with connecting map Z is a machine M with Q = Q1 × Q2, I = I1, and (q1,q2)x1 = (q1x1, q2 Z(q1,x1)). If Z is in fact independent of the state of M1, Z: I1 → I2, then the cascade

is a parallel composition. In a cascade M1 ∘Z M2, M1 is the front component and M2 is the tail component.

The operation of cascading machines is, in general, not associative. On the other hand, if we write M = M1 ∘Z1 M2 ∘Z2 M3 we are not actually being ambiguous: once we specify the maps Zi the cascade will be defined uniquely up to isomorphism. Thus, if we have a cascade M of n machines Mi we can write M = ΠMi as a purely notational convenience. Of course, since cascade is a binary operation, every cascade ΠMi can be written as ΠMi = N1 ∘Z N2, where, in general, both N1 and N2 can be specified as cascades of the Mi.

Among the well-known, in fact almost "folklore", theorems to which we will be referring in the sequel are ([17]):

1. M can be isomorphically realized by a cascade M1 ∘Z M2 if and only if M1 is an SP homomorphic image of M.

2. M can be isomorphically realized by a parallel composition M1 ∘Z M2 if and only if each of M1 and M2 is an SP homomorphic image of M and the meet of the induced SP partitions is the zero partition on Q.

For example, the partition {q1q3; q2q4} on the machine M of Example 1.1 has SP, and is the only such partition which is nontrivial. The cascade M1 ∘Z M2 in Figure 1.3 isomorphically realizes M.

[Figure: the cascade realizing M; the front component M1 has states r1 and r2, corresponding to the blocks {q1,q3} and {q2,q4} of the SP partition, with connecting map Z over x1, x2 and outputs y1, y2, y3.]

Figure 1.3. A cascade.
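The substitution property behind the two "folklore" theorems is mechanically checkable: a partition has SP exactly when every input sends each block into a single block. The sketch below is our own code (has_sp is an illustrative name), again using the Example 1.1 table; it is not taken from the thesis.

```python
# SP test: every input must map each block of the partition into one block.
DELTA = {
    ("q1", "x1"): "q1", ("q1", "x2"): "q4",
    ("q2", "x1"): "q4", ("q2", "x2"): "q1",
    ("q3", "x1"): "q3", ("q3", "x2"): "q2",
    ("q4", "x1"): "q4", ("q4", "x2"): "q3",
}

def has_sp(delta, inputs, partition):
    block_of = {q: i for i, blk in enumerate(partition) for q in blk}
    for blk in partition:
        for x in inputs:
            # Collect the blocks hit by the image of this block under x.
            if len({block_of[delta[(q, x)]] for q in blk}) > 1:
                return False
    return True
```

Applied to Example 1.1, this confirms that {q1q3; q2q4} has SP while, for instance, {q1q2; q3q4} does not.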

CHAPTER II
ADMISSIBLE HOMOMORPHISMS

In this chapter we will develop an apparently new notion for homomorphisms of directed graphs, the admissible homomorphisms. A survey of other definitions for homomorphisms can be found in [18]. Our admissible homomorphisms will be seen to lie "between" SP homomorphisms of machines and McNaughton's pathwise homomorphisms [25]. The notion is somewhat tangent to the usual idea of homomorphisms in graph theory [12], but we will develop some relationships to these; in particular, we will strengthen a result of Hedetniemi [19] on the conjunction of two graphs by generalizing it to digraphs and then showing that in many cases the homomorphisms which he considers are, in fact, admissible.

In graph theory, the term digraph usually refers to irreflexive relations. This turns out to be unsuitable for our purposes: most of our results will eventually be applied to state-transition graphs of machines, which often have loops or multiple arcs. We will therefore, unless we specify otherwise, use the term digraph to refer to the more general structures called nets in [15]. A digraph D consists of a set V of points together with a collection X (repetitions permitted) of ordered pairs, called arcs, from V × V. If uv = (u,v) is an arc of D we write uv ∈ D or uv ∈ X. If there is at least one arc from u to v then u is adjacent to v and v is adjacent from u. The expression "uv ∈ X" is used both in the normal set-theoretic usage and also as a predicate, to express "u is adjacent to v". Such dual usage causes no confusion, except possibly when there is more

than one arc from u to v. Where difficulties may arise, we will try to be especially careful in using this notation. Figure 2.1 shows two digraphs; the first is also a digraph in the sense of standard graph-theoretic usage. We will generally use the symbols p for |V| and q for |X|.

Figure 2.1. Two digraphs.

With each digraph D, we associate a relation γ, defined by (u,v) ∈ γ if and only if uv ∈ D; then γ(u) = {v | uv ∈ D} and γ⁻¹(v) = {u | uv ∈ D}. With each point of a digraph D, we associate two numbers. The indegree id(u) of u is the number of arcs vu ∈ D; note that, since we are permitting multiple arcs, id(u) is not in general equal to |γ⁻¹(u)|. Similarly the outdegree od(u) is the number of arcs uv ∈ D. A digraph is outregular (of degree d) if all its points have the same outdegree (equal to d).

If a digraph is, in fact, an irreflexive relation, and hence a digraph in the sense of [15], we will say that it is pure. We will use the standard definition for graphs [12]. A graph G consists of a set V of points together with a set X of unordered pairs of distinct points, called lines. If there is a line between u and v in G we write uv ∈ G or uv ∈ X.

As we mentioned, we will be interested in applying the results of this chapter to state-transition graphs of machines. To do this, we will associate a digraph with each machine. If M = <Q,I,δ> is a machine, the digraph D(M) of M has for its points the set Q and for its arcs the collection X of arcs uv such that for some x ∈ I, ux = v. Note that if there are, say, n inputs xi for which uxi = v then there are n arcs from u to v in D(M), so that D(M) is always outregular. A machine and its associated digraph are shown in Figure 2.2.

Figure 2.2. A machine and its digraph.
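The construction of D(M), arc multiplicities included, can be sketched directly; outregularity then falls out of the bookkeeping. The code below is our own illustration (digraph_of and outdegrees are our names), using the Example 1.1 machine; it is not a procedure given in the thesis.

```python
# D(M): one arc u -> v for each input x with ux = v, so arcs form a
# multiset, recorded here as a dict of multiplicities.
DELTA = {
    ("q1", "x1"): "q1", ("q1", "x2"): "q4",
    ("q2", "x1"): "q4", ("q2", "x2"): "q1",
    ("q3", "x1"): "q3", ("q3", "x2"): "q2",
    ("q4", "x1"): "q4", ("q4", "x2"): "q3",
}

def digraph_of(delta):
    arcs = {}
    for (q, _x), r in delta.items():
        arcs[(q, r)] = arcs.get((q, r), 0) + 1
    return arcs

def outdegrees(arcs):
    od = {}
    for (u, _v), mult in arcs.items():
        od[u] = od.get(u, 0) + mult
    return od
```

For Example 1.1 every point of D(M) has outdegree |I| = 2, including the loops at q1, q3 and q4.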

A walk in a digraph is a sequence W = <u0u1, u1u2, ..., u_{n-1}un> of arcs. The length of W is n. We may often abbreviate this notation and write W as a sequence of points, W = u0u1...un, although it is understood that in a digraph with multiple arcs there may be distinct walks which have the same abbreviation. A digraph D is said to be strong, or strongly connected, if there is a walk W = v1...vnv1 such that each point of D appears at least once as one of the vi.

In much of this work we will be dealing with maps between digraphs. Although it will be convenient to treat these as mappings between the point sets of the digraphs, we will provide a slightly unconventional definition. To this end, we introduce some preliminary notation. The reflexive closure D^R of a digraph D is the smallest superdigraph of D which has a loop at each point. For a point u ∈ D, let S(u) be the set of arcs incident from u, and let S̄(u) be the set of arcs incident to u. Given digraphs D and E, by a mapping from D onto E we will mean a mapping φ: X(D) → X(E^R) which is onto X(E) and which satisfies the following property: for each u ∈ V(D) there is a u' ∈ V(E) such that φ(S(u)) ⊆ S(u') and φ(S̄(u)) ⊆ S̄(u'). A consequence of this definition is that φ can be considered as a map from V(D) onto V(E); we will write φ for the map in either case, and it will be clear from context whether we are mapping points to points or arcs to arcs. There are two important features of mappings as we have defined them. Hedetniemi [19] calls a mapping φ: G → H between the point sets of two graphs a full mapping if for each line u'v' ∈ H in the image there

is a line uv ∈ G such that u' = φ(u) and v' = φ(v); fullness is automatically a property of mappings as we have defined them. Another important property is that if x ∈ X(D) is an arc from u to v then φ(x) is an arc from φ(u) to φ(v) in X(E), unless φ(u) = φ(v), in which case φ(x) may not be a loop in X(E), but may instead be a loop in X(E^R) − X(E). The reason for this dichotomy arises from differences between well-established results in the theories of graphs and automata.

Following McNaughton [25], if D and D' are digraphs and φ: D → D' is an onto map, then φ is said to be walkwise if, for any walk <x1',...,xn'> in D' there is a walk <x1,...,xn> in D such that φ(xi) = xi', i = 1,...,n; since McNaughton used the term "path" for walk, he called such mappings "pathwise". McNaughton defined "pathwise homomorphisms" between state-transition graphs in conjunction with his studies of the star-height problem. He showed that the "rank" (in the sense of [5]) of a state-transition graph cannot increase under a walkwise homomorphism. Hedetniemi [19] studied maps which were homomorphisms between graphs in the sense of [12], from here on simply called homomorphisms, and also walkwise, and proved a similar result for the cycle rank [12], the number of independent cycles. We will complete this sequence by generalizing Hedetniemi's theorem.

A cycle Z in a digraph D is a walk Z = v1...vn, where v1 = vn and, for 1 ≤ i < j < n, vi ≠ vj. The length of a cycle is the number of distinct points. As in [12, p. 37], we can consider each cycle in D to be a formal sum over the q arcs of D, Z = c1x1 + c2x2 + ... + cqxq, where the ci are elements of the two-element field {0,1} and ci = 1 if and only if xi is an arc of Z. Thus, each cycle can be represented by a vector from the vector space of dimension q over the binary field. A set of cycles is linearly independent if the cycles are linearly independent as vectors. In a graph G, it is known that the cycle rank m(G), the maximum number of independent cycles, is equal to q − p + k, where k is the number of connected components of G. Berge [2] showed that the same equation holds for strong pure digraphs, so that the cycle rank is m(D) = q − p + 1. His result can be applied to digraphs as we have defined them.

Lemma 2.1 If D is a strong digraph then m(D) = q − p + 1.

PROOF (sketch): Suppose that D is a strong digraph. We form a new digraph D⁺ with V(D⁺) = V(D) ∪ X(D), and X(D⁺) = {ui xj, xj uk | ui uk = xj}. In effect, D⁺ is formed by inserting a new point on each arc of D. It is clear that D⁺ is a pure strong digraph, and that m(D⁺) = m(D). By Berge's result, m(D⁺) = q(D⁺) − p(D⁺) + 1. But p(D⁺) = p(D) + q(D) and q(D⁺) = 2q(D). Thus, m(D) = m(D⁺) = 2q(D) − (p(D) + q(D)) + 1 = q(D) − p(D) + 1.

Berge's result, and hence Lemma 2.1, does not hold for digraphs which are not strong. The digraphs D and D⁺ of Figure 2.3 each have

q − p + 1 = 1, but each is acyclic. However, if a digraph D has k strong components and every arc belongs to a strong component, then m(D) = q − p + k.

Figure 2.3. A digraph and the derived pure digraph.

Corollary If D is not strong then m(D) is the sum of the cycle ranks of the strong components of D.

Lemma 2.2 If φ: D → D' is walkwise and if D' has a cycle of length n, then D has a cycle whose length is a nonzero multiple of n.

PROOF: Let the cycle Z' in D' have points v1',...,vn' and consider an arbitrarily long walk around Z': v1',v2',...,vn',v1',... This walk must have a pre-image which is a walk, say W = v_{11}v_{12}...v_{1n}v_{21}...v_{2n}v_{31}..., where φ(v_{ij}) = vj'. Since D is finite there must be repetitions of points in W; suppose that v_{ij} = v_{kℓ} is the first such repetition. Then clearly j = ℓ and v_{ij}v_{i,j+1}...v_{kj} is a cycle whose length is (k−i)n.
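The corollary yields a direct algorithm for the cycle rank: find the strong components, and sum qᵢ − pᵢ + 1 over those components containing at least one arc (arcs between components contribute nothing). The sketch below is our own code, using Kosaraju's two-pass search for the strong components; it is an illustration, not a procedure from the thesis.

```python
# m(D) via the corollary: sum of q - p + 1 over nontrivial strong components.
def strong_components(points, arcs):
    adj = {u: [] for u in points}
    radj = {u: [] for u in points}
    for u, v in arcs:
        adj[u].append(v)
        radj[v].append(u)
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in points:
        if u not in seen:
            dfs1(u)
    comp, seen = {}, set()
    def dfs2(u, c):
        seen.add(u)
        comp[u] = c
        for v in radj[u]:
            if v not in seen:
                dfs2(v, c)
    for c, u in enumerate(reversed(order)):
        if u not in seen:
            dfs2(u, c)
    return comp

def cycle_rank(points, arcs):
    comp = strong_components(points, arcs)
    p, q = {}, {}
    for u in points:
        p[comp[u]] = p.get(comp[u], 0) + 1
    for u, v in arcs:
        if comp[u] == comp[v]:          # only intra-component arcs count
            q[comp[u]] = q.get(comp[u], 0) + 1
    return sum(qc - p[c] + 1 for c, qc in q.items())
```

On a 6-cycle this gives m = 6 − 6 + 1 = 1; adding a chord to the cycle raises the rank to 2, and an acyclic digraph has rank 0.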

For any digraph D and any collection Y of arcs of D, D−Y is a digraph with V(D−Y) = V(D) and X(D−Y) = X(D)−Y.

Lemma 2.3 Let D and D' be digraphs and φ: V → V' a walkwise map. For any collection X = {u1'v1', ..., un'vn'} of arcs of D', φ, considered as a map from D − φ⁻¹(X) to D' − X, is walkwise.

The proof is immediate.

Lemma 2.4 If D is a strong digraph, and x is an arc of D, then m(D−x) < m(D).

PROOF: Since D is strong, x belongs to some cycle Z of D. Now any linearly independent collection of cycles in D−x is certainly linearly independent in D, but Z cannot be dependent on any of these as x ∉ D−x.

A strong digraph D is minimally strong if for each arc x, D−x is not strong. It is shown in [7] that every minimally strong digraph has at least two points having both indegree and outdegree equal to 1. In particular, each arc x incident to such a point has the property that m(D−x) = m(D)−1.

Theorem 2.1 If D and D' are digraphs and if φ: V → V' is walkwise, then m(D) ≥ m(D').

PROOF: We proceed by induction. If m(D) = 0 then by Lemma 2.2, m(D') = 0. Suppose that the theorem is true whenever m(D) ≤ n and let m(D) = n+1. If m(D') = 0 there is nothing to prove, so assume m(D') ≥ 1. Let x' be an arc in D' such that m(D'−x') = m(D')−1. Such an arc certainly exists if D' has a strong component which is not minimally strong, by Lemma 2.1, and, as noted above, a suitable arc also exists if every strong component is minimally strong. Note that x' must belong to a strong component of D', so that there is a closed walk in D' containing x'; therefore some element of φ⁻¹(x') belongs to a closed walk, and hence a strong component, of D. Then by Lemma 2.4 and the corollary to Lemma 2.1, m(D − φ⁻¹(x')) < m(D). Using Lemma 2.3 and the inductive hypothesis, m(D − φ⁻¹(x')) ≥ m(D'−x'). But m(D) ≥ m(D − φ⁻¹(x')) + 1 and m(D'−x') + 1 = m(D'), so it follows that m(D) ≥ m(D − φ⁻¹(x')) + 1 ≥ m(D'−x') + 1 = m(D').

The proof technique in the theorem is essentially that used in Hedetniemi's proof, but Theorem 2.1 applies to a wider class of maps. Hedetniemi [19] asked which graphs have no walkwise homomorphic images. During the course of this chapter we will be discussing conditions which are sufficient to insure that a digraph has a walkwise image. Now, however, we present a class of digraphs which have no walkwise images. The digraph which consists precisely of a cycle of length n is an n-cycle, denoted Cn.

Theorem 2.2 For any prime p, there is no nontrivial walkwise map whose domain is Cp.

PROOF: By Lemma 2.2, if φ is walkwise, then φ(Cp) cannot contain any cycles of length greater than one and less than p. On the other hand, since Cp is strong, φ(Cp) must be strong. The only resolution to these conditions is that either φ is an isomorphism or |V(φ(Cp))| = 1; i.e., that φ is trivial.

We suspect that the only strong digraphs which have no nontrivial walkwise images are essentially the prime cycles, with loops at some points permitted. For now, we can show that a large number of (strong) digraphs do have walkwise images. A path is a walk v1v2...vn in which no points are repeated. A simple path in a digraph D is a path v1v2...vn such that for 1 < i < n, each point vi has indegree and outdegree one in D. An arc is also considered to be a simple path.

Theorem 2.3 Let D be a digraph. If there are points u and v such that there are at least two simple paths from u to v, then D has a nontrivial walkwise image.

PROOF: We note that if φ is a map from D to some digraph D' and if φ(D) is isomorphic to a subdigraph of D, then φ is walkwise. Consider first the case in which one of the simple paths is an arc. Let the other simple path from u to v be uv1v2...vnv. If φ is the map which takes each vi to v, and acts as the identity on the other points of D, then φ(D) is isomorphic to D − {uv1, v1v2, ..., vnv}, so that φ is walkwise and nontrivial. The same technique works in the case that neither path is an arc. Let the paths be uv1...vnv and uw1...wmv, where n ≥ m. First map each vi, i > m, to v, and then map each vi, 1 ≤ i ≤ m, to wi. The resulting digraph is again isomorphic to a subdigraph of D, so that the induced map, φ, is walkwise.

A map whose only action is to identify two points is an elementary identification. For homomorphisms of graphs, there is a very strong interpolation theorem, which states that if φ: G → H is any homomorphism which is not elementary, φ can be factored into two nontrivial homomorphisms, φ1: G → H1 and φ2: H1 → H [13]. For walkwise mappings we get a different sort of interpolation property, which we present in the next two theorems.

Theorem 2.4 If φ1: D → D1 and φ2: D1 → E are both walkwise, then so is φ = φ2 ∘ φ1: D → E.

PROOF: This is immediate. Any walk WE in E has a pre-image WD1 in D1 which is a walk; φ2(WD1) = WE. Also, since WD1 is a walk there is a walk WD in D such that φ1(WD) = WD1. Clearly φ(WD) = WE, so that φ is walkwise.

Theorem 2.5 If φ: D → E is walkwise and we write φ = φ2φ1, where φ1: D → E1 and φ2(E1) = E, then φ2 is walkwise.

PROOF: Let w1w2...wn be a walk in E. Then there is a walk v1v2...vn in D, where φ(vi) = wi. But φ1(v1)φ1(v2)...φ1(vn) is a walk in E1. If u ∈ φ1(φ⁻¹(wi)) then u ∈ φ2⁻¹(wi). Therefore, φ2(φ1(vi)) = wi, and hence φ1(v1)φ1(v2)...φ1(vn) is a walk in E1 whose image under φ2 is w1w2...wn. Thus, φ2 is walkwise.

This theorem gives us a tool for checking whether a map is walkwise. The map can be expressed as a composition of two maps, of which the second to be applied is elementary, and hence rather simple to check for the walkwise property; if the test fails, the original map is not walkwise.

Theorem 2.6 Let φ: D → D' be an elementary map which identifies two points u and v to a point (uv) ∈ D' and which acts as the identity on V − {u,v}. Then φ is walkwise if and only if both of the following hold:

(1) γv ⊆ γu or γ⁻¹u ⊆ γ⁻¹v
(2) γu ⊆ γv or γ⁻¹v ⊆ γ⁻¹u

PROOF: Whenever W' = ...w1(uv)w2... is a walk in D', then either Wu = ...w1uw2... or Wv = ...w1vw2... is a walk in D. If γv ⊄ γu, let w2 ∈ γv − γu; then Wv must be a walk in D. But if we take w1 ∈ γ⁻¹u then we must have w1 ∈ γ⁻¹v; thus γ⁻¹u ⊆ γ⁻¹v; the rest follows by symmetry.

To prove sufficiency, first label the subconditions:

(α) γu ⊆ γv    (β) γ⁻¹v ⊆ γ⁻¹u
(μ) γv ⊆ γu    (η) γ⁻¹u ⊆ γ⁻¹v

Let W' = ...w1(uv)w2... be a walk in D', and let Wu = ...w1uw2..., Wv = ...w1vw2... Clearly w1 ∈ γ⁻¹u ∪ γ⁻¹v and w2 ∈ γu ∪ γv. We must show that either Wu or Wv is a walk in D; i.e., that either w1 ∈ γ⁻¹u ∩ γ⁻¹v or w2 ∈ γu ∩ γv.

Case (αμ). Then γu = γv.
Case (βη). Then γ⁻¹u = γ⁻¹v.
Case (αη). If w2 ∈ γu we are done. If w2 ∈ γv − γu then w1v ∈ D by (η), so Wv is a walk.
Case (βμ) follows similarly.

Since graphs are symmetric digraphs we have

Corollary For graphs G and G', if φ: G → G' is an elementary identification of u and v to (uv), then φ is walkwise if and only if for all w1 ∈ γu, w2 ∈ γv, either w2 ∈ γu or w1 ∈ γv; i.e., if and only if either γu ⊆ γv or γv ⊆ γu.

Example 2.1 We give an example showing that even if the composition of two maps is walkwise, both need not be walkwise. Consider the cycles in Figure 2.4. Let φ be the map φ(ui) = v_{i mod 3};

Figure 2.4. A walkwise map.

24

then φ is walkwise. Suppose that we factor φ = φ2φ1 as in Figure 2.5. Using the theorem, we verify that φ2, the identification of w2 and w5, is walkwise, since γw2 = γw5 = {w0} and γ⁻¹w2 = γ⁻¹w5 = {w1}. (This, of course, does not prove that φ is walkwise.) On the other hand, using Theorems 2.5 and 2.6 we can show that φ1 is not walkwise.

Figure 2.5. A factorization of a walkwise map.

For, if we factor φ1 = φ4φ3 as in Figure 2.6, where φ3 is the identification of u0 and u3, then γr4 = {r5}, γr1 = {r2}, γ⁻¹r4 = {r0}, and γ⁻¹r1 = {r0}, so that φ4 is also walkwise; but φ3 is not walkwise, since u1 ∈ γu0 but u1 ∉ γu3, and u2 ∈ γ⁻¹u3 but u2 ∉ γ⁻¹u0. In other words, there is no walk whose image is r2r0r1.

Figure 2.6. A factorization of a map.
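Theorem 2.6 lends itself to a mechanical check. The following Python sketch is our own illustration, not the thesis's (the arc-list encoding and all function names are assumptions); it tests the two containment conditions for an elementary identification of points u and v:

```python
# Sketch (ours): checking the two conditions of Theorem 2.6 for an
# elementary identification of u and v. A digraph is a list of arcs
# (tail, head).

def out_set(arcs, x):
    # gamma(x): heads of arcs leaving x
    return {w for (a, w) in arcs if a == x}

def in_set(arcs, x):
    # gamma^{-1}(x): tails of arcs entering x
    return {a for (a, w) in arcs if w == x}

def elementary_identification_is_walkwise(arcs, u, v):
    gu, gv = out_set(arcs, u), out_set(arcs, v)
    giu, giv = in_set(arcs, u), in_set(arcs, v)
    return ((gv <= gu or giu <= giv) and   # condition (1)
            (gu <= gv or giv <= giu))      # condition (2)

# Identifying the antipodal points u0 and u3 of the 6-cycle is not
# walkwise: the out-sets {u1} and {u4} are incomparable, as are the
# in-sets {u5} and {u2}.
C6 = [(i, (i + 1) % 6) for i in range(6)]
print(elementary_identification_is_walkwise(C6, 0, 3))  # False

# Two points with nested neighborhoods can be identified walkwise.
D = [(0, 2), (1, 2), (2, 0)]
print(elementary_identification_is_walkwise(D, 0, 1))  # True
```

This is the kind of check applied repeatedly in Example 2.1's factorizations.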

25

We now turn our attention to an important class of mappings, the admissible homomorphisms, which we will show to be walkwise. It is important to note that these need not be homomorphisms in the sense of [12]. Let D and D' be digraphs. A map φ:V(D) → V(D') is an admissible homomorphism if whenever φ(u) = φ(v) and uw ∈ D then there is a point w̄ such that vw̄ ∈ D and φ(w̄) = φ(w).

Theorem 2.7 Every admissible homomorphism is walkwise.

PROOF: Let φ:V(D) → V(D') be admissible, let W' = w1'...wn' be a walk in D', and suppose that w1'...w'_{n-1} has a walk pre-image u1...u_{n-1}. Since w'_{n-1}w'_n ∈ D' and φ is full, there are points w_{n-1} ∈ φ⁻¹(w'_{n-1}) and w_n ∈ φ⁻¹(w'_n) such that w_{n-1}w_n ∈ D. But then, by admissibility, since φ(u_{n-1}) = φ(w_{n-1}), there is a point u_n such that φ(u_n) = φ(w_n) = w'_n and u_{n-1}u_n ∈ D. Therefore u1...u_n is a walk in D, and the result follows by induction.

While it is true that a mapping between digraphs can always be written as a map between their point sets, expressing it in this way may not exhibit all possible information. For example, the mapping from digraph D to a subdigraph D' of D which is illustrated in Figure 2.7 would, as a map from V(D) to V(D'), seem to be the identity.
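Since the defining condition quantifies over finitely many triples, admissibility can be tested by direct search. The sketch below is our own (the arc-list encoding and names are assumptions); it verifies that the map ui ↦ v_{i mod 3} from the 6-cycle onto the 3-cycle is admissible, while identifying one antipodal pair alone is not:

```python
# Our executable reading of the definition: phi is admissible if
# phi(u) = phi(v) and uw in D imply some w_bar with vw_bar in D
# and phi(w_bar) = phi(w).

def is_admissible(arcs, points, phi):
    for u in points:
        for v in points:
            if phi[u] != phi[v]:
                continue
            for (a, w) in arcs:
                if a != u:
                    continue
                # look for w_bar with an arc v -> w_bar and matching image
                if not any(b == v and phi[x] == phi[w] for (b, x) in arcs):
                    return False
    return True

C6 = [(i, (i + 1) % 6) for i in range(6)]   # the cycle u0 u1 ... u5
print(is_admissible(C6, range(6), {i: i % 3 for i in range(6)}))  # True
print(is_admissible(C6, range(6), {0: 0, 1: 1, 2: 2, 3: 0, 4: 4, 5: 5}))  # False
```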

26

Figure 2.7. A mapping from a digraph to a subdigraph.

On the other hand, it is often possible to write a mapping as a product of elementary identifications of points. From now on we will assume that an elementary identification of adjacent points results in a point with a loop.

Theorem 2.8 Let φ:D → E be expressible as φ = εn...ε1, where εi:D_{i-1} → Di is an elementary identification, D0 = D, and Dn = E. Let φi be εn...εi:D_{i-1} → E. Then if φ is admissible, each φi is.

PROOF: We first show that φn = εn is admissible. Suppose εn:D_{n-1} → Dn = E is defined by εn(u) = εn(v) = (uv), and that uw ∈ D_{n-1}. Let ψ = ε_{n-1}...ε1, and let ψ(u') = u, ψ(v') = v; since φ is full, u' can be chosen so that it is adjacent to some w' for which φ(w') = εn(w). Suppose that w is not one of u,v. Then ψ(w') = w. Thus there must be a w" such that φ(w") = φ(w') and v'w" ∈ D. Since w is not u or v, φ(w") = φ(w') implies that ψ(w") = ψ(w') = w. Thus, since v'w" ∈ D, vw ∈ D_{n-1}. If w is one of u,v then there is a w" such that φ(w") = φ(w') = (uv) and v'w" ∈ D; thus ψ(w") is u or v.

27

Thus, in either case, there is a w̄ such that vw̄ ∈ D_{n-1} and εn(w̄) = εn(w), so that εn is admissible.

Now, suppose that φ_{j+1} = εn...ε_{j+1} is admissible. We want to show that φj = εn...εj is. Suppose that for u ≠ v, φj(u) = φj(v) and uw ∈ D_{j-1}.

Case 1. εj(u) ≠ εj(v). Then, in Dj, φ_{j+1}(εj(u)) = φ_{j+1}(εj(v)) and εj(u)εj(w) ∈ Dj. Thus, there is a w̄ ∈ Dj such that εj(v)w̄ ∈ Dj and φ_{j+1}(w̄) = φ_{j+1}(εj(w)). Since εj(v)w̄ ∈ Dj there is a w̄' ∈ εj⁻¹(w̄) for which vw̄' ∈ D_{j-1}; clearly φj(w̄') = φ_{j+1}(w̄) = φj(w).

Case 2. εj(u) = εj(v) = (uv). Since uw ∈ D_{j-1}, (uv)εj(w) ∈ Dj. Choose u' ∈ ψ⁻¹(u), v' ∈ ψ⁻¹(v), w' ∈ ψ⁻¹(w), where ψ = ε_{j-1}...ε1, such that u'w' ∈ D; this is always possible by fullness. Since φ(u') = φ(v') = φ_{j+1}((uv)), there is a w" such that φ(w") = φ(w') and v'w" ∈ D. Let w̄ be ψ(w"). Then vw̄ ∈ D_{j-1} and φj(w̄) = φ(w") = φ(w') = φj(w).

A functional digraph is a digraph in which each point u has od(u) = 1 [12]. If M is an autonomous machine then D(M) is functional. Yoeli and Ginzburg [34] characterized SP homomorphisms of autonomous machines in terms of primitives which they called "elementary" homomorphisms. (Note: keeping track of terminology can be quite tricky here. Yoeli and Ginzburg's use of "elementary" is different from that of graph theory, where the term is used to mean an elementary identification of nonadjacent points [12]. Also, Yoeli and Ginzburg ([34],[11]) use "admissible" as a synonym for SP.) Hedetniemi asked whether the elementary homomorphisms are always walkwise [18]. That this is indeed true will follow

28

as a corollary to the next theorem, which will also be proved as a corollary to Theorem 3.5.

Theorem 2.9 Let M and M' be machines and let φ:M → M' be SP. Then if |I| = |I'|, φ, considered as a map from D(M) to D(M'), is admissible.

PROOF: NOTE: We are given a map φ from Q to Q'. The first step in showing that φ is admissible is to show that φ induces a mapping in the formal sense from D(M) to D(M'); that is, we will show that from φ we can define a function from X(D(M)) onto X(D(M')) which has the property that for each point u of D(M) there is a point u' of D(M') such that φ(γu) ⊆ γu' and φ(γ⁻¹u) ⊆ γ⁻¹u'. Having done this once, it will be clear in later proofs that the same technique will let us take other functions between machines and "reinterpret" them as mappings between the digraphs of those machines.

Let x be an arc of D(M) from u to v. Then u and v are states of M and there is an input i such that ui = v, and i is the label on arc x in M. There is an arc x' from φ(u) to φ(v) labelled with i (or, more generally, with h⁻¹(i)); we take x' to be the image of x. The map thus induced from X(D(M)) to X(D(M')) is onto, since if x' is an arc from φ(u) to v' induced by input i', then the arc from u to uh(i') which is induced by h(i') will have x' as its image. Since this arc-to-arc mapping is defined in terms of the point-to-point mapping φ, it is clear that all the arcs incident to (from) a point u map to arcs incident to (from) φ(u). We have thus shown that the SP homomorphism φ induces a mapping from D(M) to D(M'); in addition, when this mapping is considered as a map from V(D(M)) to V(D(M')), we get precisely

29

the map φ. Notice that, having started with a point-to-point map, the thing which we really had to prove was that φ is full.

Suppose that φ(u) = φ(v) and that uw ∈ D(M). Then there is an input x such that ux = w. Since φ is SP, φ(vx) = φ(w). Thus, letting w̄ = vx, φ(w̄) = φ(w) and vw̄ ∈ D(M), so that φ is admissible.

Corollary Every SP homomorphism is walkwise.

Corollary Every admissible homomorphism between autonomous machines is SP.

A partition π on the points of a digraph D is an admissible partition if whenever u ≡ v (mod π) and uw ∈ D then there is a point w̄ such that vw̄ ∈ D and w̄ ≡ w (mod π).

Lemma 2.5 Any admissible homomorphism induces an admissible partition on its domain. Conversely, if π is an admissible partition on D, then π is the partition induced by some admissible homomorphism whose domain is D.

30

PROOF: Let φ:D → D' be an admissible homomorphism, and define π by setting u ≡ v if and only if φ(u) = φ(v). Then, since φ is admissible, if u ≡ v and uw ∈ D there is a point w̄ such that vw̄ ∈ D and φ(w̄) = φ(w). But if φ(w̄) = φ(w) then w̄ ≡ w, so that π is admissible.

On the other hand, if π is admissible, we will define an admissible homomorphism φ from D to a digraph D'. The points of D' are the blocks of π, and φ(u) is just the block [u] of π which contains u. In D', [u] is adjacent to [v] if and only if there are points of D, u1 ∈ [u] and v1 ∈ [v], such that u1v1 ∈ D; note that D' has no multiple arcs. It is easy to see that φ is a mapping. If φ(u) = φ(v) then u ≡ v. If, also, for some point w, uw ∈ D, then it follows from the admissibility of π that there is a point w̄ such that vw̄ ∈ D and w̄ ≡ w; but then φ(w̄) = φ(w), so that φ is an admissible homomorphism which induces the partition π on V(D).

It is interesting to note that, unlike the SP partitions, the admissible partitions of a digraph do not necessarily form a sublattice of the lattice of partitions.

Example 2.2 Consider the digraph D of Figure 2.8. Both π1 = {uv; w1w3; w2w4} and π2 = {uv; w1w4; w2w3} are admissible, but

Figure 2.8. A digraph.

31

the meet π1 ∧ π2 = {uv; w1; w2; w3; w4} is not.

Theorem 2.10 The join π1 ∨ π2 of two admissible partitions is admissible.

PROOF: Let π1 and π2 be admissible partitions of D, u ≡ v (mod π1 ∨ π2), and suppose that uw ∈ D. Since u ≡ v there is a sequence u = u0, u1, ..., u_{t+1} = v such that for each i = 0,1,...,t, either ui ≡ u_{i+1} (mod π1) or ui ≡ u_{i+1} (mod π2). Then there are points w = w0, w1, ..., w_{t+1} such that u_{i+1}w_{i+1} ∈ D and either wi ≡ w_{i+1} (mod π1) or wi ≡ w_{i+1} (mod π2). Thus w ≡ w_{t+1} (mod π1 ∨ π2) and vw_{t+1} ∈ D; hence π1 ∨ π2 is admissible.

The next theorem is actually a corollary of Theorem 2.9 and the Yoeli and Ginzburg results, but because of its importance, we prove it directly.

Theorem 2.11 Any pure admissible homomorphic image of a cycle Cn is a cycle Cm, where m|n.

PROOF: Let φ:Cn → D be admissible, and let π be the induced admissible partition. If the points of Cn, in cyclic order, are u0,...,u_{n-1}, suppose that ui and uj, i < j, are such that ui ≡ uj (mod π), and that uk ≡ um (mod π), m > k, implies that |m−k| ≥ |j−i|. Without loss of generality take i = 0, so that u0 ≡ uj. Then we claim that for any r, ur ≡ u_{r+j}, and that if ur ≡ us then j|(s−r). To see this, note first that since u0 ≡ uj and u0u1 ∈ Cn, there must be a point w with u1 ≡ w and ujw ∈ Cn. Since od(uj) = 1, w must be u_{j+1}, so u1 ≡ u_{1+j}. An inductive argument then establishes the first part of the claim. For the second, if ur ≡ us and s > r, then write s = r+kj+m, m < j. We

32

know that ur ≡ u_{r+kj}, so that u_{r+kj} ≡ u_{r+kj+m}; unless m = 0, this contradicts the minimality of j.

Now, let m be the smallest integer such that mj > n−1. If k ≡ mj (mod n), 0 ≤ k < n, then u0 ≡ uk, so that k = 0 or k ≥ j. But k < j, for otherwise (m−1)j > n−1. Thus k = 0, and mj = n. Hence j|n, and φ(Cn) is Cj.

Theorem 2.12 Let φ:D → D' be admissible, and suppose that D and D' are both outregular of degree d. Then for any u and v the number n of arcs from u to φ⁻¹(φ(v)) is the same as the number n' of arcs from φ(u) to φ(v).

PROOF: Since φ is full, every arc from φ(u) to φ(v) has a pre-image, say u1v1. Since φ(u) = φ(u1) and φ is admissible, there must be a point v̄ ∈ φ⁻¹(φ(v)) such that uv̄ ∈ D. Thus n ≥ n'. But this inequality holds for each point v' such that φ(u)v' ∈ D'. Let the set of all such points be S' = {v0' = φ(v), v1', ..., vm'}; note that φ⁻¹(S') ⊇ γu. If we denote the number of arcs from φ(u) to vi' by ni' and the number of arcs from u to φ⁻¹(vi') by ni, then for each i = 0,...,m, ni ≥ ni'. But

Σ_{i=0}^{m} ni' = od(φ(u)) = d and Σ_{i=0}^{m} ni = od(u) = d.

Thus, for each i, ni = ni'.

Note that the outregularity restrictions of the theorem are necessary, as can be seen from the admissible homomorphism in Figure 2.9; there are two arcs from v1 to v2, but only one arc from u1 to φ⁻¹(v2).

33

Figure 2.9. An admissible homomorphism.

The existence of an admissible homomorphism with domain D might seem to imply symmetries in D. While this would be hard to state precisely, there is a well-defined converse. An automorphism of a digraph D is a map g from V(D) onto V(D) which satisfies: uv ∈ D if and only if g(u)g(v) ∈ D. The automorphisms of D form a group, Γ_D. Any subgroup Γ' ⊆ Γ_D partitions V(D) into sets O1,...,Ok such that for each g ∈ Γ', g(Oi) = Oi; the Oi are the orbits of Γ'.

Theorem 2.13 For any digraph D, the partition π_Γ, defined by vi ≡ vj if and only if there is an automorphism g ∈ Γ_D such that g(vi) = vj, is admissible.

PROOF: Suppose vi ≡ vj and vivk ∈ D. If g is an automorphism for which g(vi) = vj, then vj is adjacent to g(vk), and certainly vk ≡ g(vk), so that π_Γ is admissible.
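Theorem 2.13 can be illustrated by brute force on a small digraph. Everything below (the example digraph, the permutation search, and the names) is our own hypothetical illustration, not material from the thesis:

```python
# Compute the automorphism group of a 4-point digraph by brute force,
# form the orbit partition, and verify that it is admissible.
from itertools import permutations

arcs = [(0, 1), (1, 0), (2, 0), (3, 1)]
pts = range(4)
A = set(arcs)

autos = [p for p in permutations(pts)
         if {(p[a], p[b]) for (a, b) in A} == A]

# orbit partition: u ~ v iff some automorphism carries u to v
block = {u: frozenset(p[u] for p in autos) for u in pts}

def is_admissible_partition(arcs, pts, block):
    return all(any(b == v and block[x] == block[w] for (b, x) in arcs)
               for u in pts for v in pts if block[u] == block[v]
               for (a, w) in arcs if a == u)

print(sorted(set(block.values()), key=min))     # orbits {0, 1} and {2, 3}
print(is_admissible_partition(arcs, pts, block))  # True
```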

34

Example 2.3 Let D be any digraph with n points in which each point has outdegree t, and suppose that there is an automorphism g ∈ Γ_D which is cyclic; that is, for any v ∈ D the points v, g(v), g²(v), ..., g^{n−1}(v) are all distinct. Label the points by choosing v0 arbitrarily and then setting vj = g^j(v0). Let i1,...,it be those integers such that γv0 = {v_{i1},...,v_{it}}. We first show that for any vj ∈ D, the arcs from vj are vjv_{j+i1},...,vjv_{j+it} (subscripts modulo n). For, if v0v_{ik} ∈ D then g^j(v0)g^j(v_{ik}) ∈ D; that is, vjv_{j+ik} ∈ D. But this accounts for t arcs from vj, and there are only t. We will refer to such digraphs by listing the ik and n; i.e., D = D(i1,...,it,n).

Now, let m|n and define D'_m(D) to be a digraph whose points are the m sets wj = {vj, v_{j+m}, v_{j+2m}, ...}, where wiwj ∈ D'_m(D) if and only if there are v ∈ wi, v̄ ∈ wj such that vv̄ ∈ D. The wi, as subsets of V(D), are the n/m-element orbits of g^m. Thus the map φm:D → D'_m(D), which takes each vj to that set wi of which it is an element, is admissible. These digraphs D(i1,...,it,n) will be of prime importance in Chapter III.

In Theorem 2.8, we gave a necessary condition for a map φ to be admissible. As with walkwise maps, that condition is most useful when φ is decomposed into φ = φ2φ1, where φ2 is an elementary identification. In this case, it is easy to check whether φ2 is admissible: if the identified points u and v are not adjacent, then γu must equal γv; if, say, uv is an arc, then γu − {u,v} = γv − {u,v} and either vu or vv must be an arc. On the other hand, suppose that φ = φ2φ1 and φ2 is admissible. What conditions on φ1 will insure that φ is admissible?

Theorem 2.14 Let φ:D → F be decomposed as φ = φ2φ1, where φ1:D → E is an elementary identification of u and v to (uv) and φ2:E → F is admissible. Then φ is admissible if and only if φ(γu) = φ(γv).

35

PROOF: Let W1 = γu − γv, W2 = γu ∩ γv, W3 = γv − γu. Then φ(γu) = φ(γv) if and only if one of the following holds:

(1) W1 ∪ W3 = ∅
(2) φ(W1) ⊆ φ(W2) ∪ φ(W3) and φ(W3) ⊆ φ(W2) ∪ φ(W1)

(Necessity): Suppose φ is admissible. Since φ(u) = φ(v), let uq1 ∈ D. Then q1 ∈ W1 ∪ W2. There must be a q2 ∈ D such that φ(q2) = φ(q1) and vq2 ∈ D. If q1 ∈ W2 there is no problem. Suppose that q1 ∈ W1. Then since vq2 ∈ D, q2 ∈ W2 ∪ W3. Thus φ(q1) ∈ φ(W2) ∪ φ(W3), so φ(W1) ⊆ φ(W2) ∪ φ(W3); the other inclusion follows by symmetry.

(Sufficiency): If (1) holds then φ1 is admissible, so φ must be. Otherwise, let φ(q1) = φ(q2) and q1q3 ∈ D.

Case 1: If neither q1 nor q2 is u or v, then φ2(q1) = φ2(q2). Also, q1q3 ∈ D implies q1φ1(q3) ∈ E. So, there is a q4 in E such that φ2(q4) = φ2(φ1(q3)) and q2q4 ∈ E. Thus, there is a q4' ∈ φ1⁻¹(q4) such that q2q4' ∈ D and φ(q4') = φ(q3).

Case 2: If q1 = u and q2 = v, then q3 ∈ W1 ∪ W2. If W3 = ∅, then since φ(q3) ∈ φ(W2) there is a q4 ∈ W2 such that φ(q4) = φ(q3) and, since q4 ∈ W2, vq4 ∈ D. If W3 is not empty, then by the above it suffices to consider the case where q3 ∈ W1 and φ(q3) ∈ φ(W3). Then there is a q4 ∈ W3 such that φ(q4) = φ(q3); clearly vq4 ∈ D.

36

Case 3: If q1 is u or v and q2 is neither, then, in E, φ2(q2) = φ2((uv)) and (uv)φ1(q3) ∈ E. Thus there is a q4 ∈ E such that q2q4 ∈ E and φ2(q4) = φ2(φ1(q3)). Since q2q4 ∈ E there is a q4' ∈ φ1⁻¹(q4) such that q2q4' ∈ D. Also, φ(q4') = φ2(q4) = φ2(φ1(q3)) = φ(q3), so φ(q4') = φ(q3).

Case 4: q1 is neither u nor v and q2 = u, say. Since q1q3 ∈ D, q1φ1(q3) ∈ E. Also, since φ(q1) = φ(q2), φ2(q1) = φ2((uv)). So there is a q4 ∈ E such that (uv)q4 ∈ E and φ2(q4) = φ2(φ1(q3)). Thus there is a q4' ∈ φ1⁻¹(q4) such that either uq4' ∈ D or vq4' ∈ D. Suppose q4' ∈ W3, otherwise we are done. Since q4' ∈ W3, φ(q4') ∈ φ(W1) ∪ φ(W2). Thus there is a q5 ∈ W1 ∪ W2 such that φ(q5) = φ(q4'); clearly uq5 ∈ D. Since φ(q4') = φ2(q4) = φ(q3), we have φ(q5) = φ(q3).

Corollary If φ is a map which identifies u and v, write φ = φ2φ1, where φ1 is the elementary identification of u and v. Then φ is admissible if and only if φ2 is admissible and u and v satisfy the condition of the theorem.

The result of applying an elementary identification to a machine is often a nondeterministic machine. Thus, we would not expect to find similar decomposition results for SP homomorphisms.

A homomorphism of a digraph D is a map φ:D → D' such that if uv ∈ D then φ(u)φ(v) ∈ D' and φ(u) ≠ φ(v). The conjunction D1 ∧ D2 of digraphs D1 and D2 is the digraph D with V(D) = V(D1) × V(D2), in which (u1,u2)(v1,v2) ∈ D if and only if u1v1 ∈ D1 and u2v2 ∈ D2. The next

37

theorem was proved by Hedetniemi for graphs [19].

Theorem 2.15 A pure digraph D is equal to a conjunction D1 ∧ D2 of pure digraphs if and only if there are full homomorphisms φ1:D → D1, φ2:D → D2 such that the meet of the corresponding induced partitions is the identity partition 0.

PROOF: If D = D1 ∧ D2, let φ1 and φ2 be the projection homomorphisms of D to the Di: φi(u) = proj_i(u). If φ1(u) = φ1(v) and u ≠ v then u and v cannot be adjacent. If u and v are adjacent, then φ1(u) and φ1(v) are adjacent. If u1 and v1 are adjacent in D1, then, for any u2v2 ∈ D2, u = (u1,u2) and v = (v1,v2) are adjacent in D, and φ1(u) = u1, φ1(v) = v1. Thus φ1 is a full homomorphism. A similar proof serves for φ2. If φ1(u) = φ1(v) and φ2(u) = φ2(v) then u = v, so that the induced partitions must have meet equal to 0.

Conversely, if φ1 and φ2 are full homomorphisms of D onto D1 and D2 respectively, and the induced partitions have meet equal to zero, then each point u of D can be represented uniquely by the ordered pair (φ1(u), φ2(u)), and this representation defines an isomorphism of D with D1 ∧ D2. This follows as in [19].

We presented the first part of the proof in detail because we can show that, in many cases, the projection maps are also admissible. A sink in a digraph D is a point u such that od(u) = 0. In a graph, a sink is called an isolate.

Theorem 2.16 Let D1 and D2 be pure digraphs, neither of which has a sink, and let D = D1 ∧ D2. Then the projection homomorphisms

38

φi:D → Di are admissible.

PROOF: Let φ1((u1,u2)) = φ1((v1,v2)), so that u1 = v1. Suppose (u1,u2)(w1,w2) ∈ D. Then u1w1 ∈ D1 and u2w2 ∈ D2. Since v2 is not a sink there is a point w̄2 such that v2w̄2 ∈ D2. Now φ1((w1,w̄2)) = φ1((w1,w2)), and (v1,v2)(w1,w̄2) ∈ D, since v1 = u1. Thus, φ1 is admissible. Similarly, φ2 is admissible.

Corollary If G1 and G2 are graphs without isolates and if G = G1 ∧ G2, then the projection homomorphisms φi:G → Gi are admissible.

Since we know from Theorems 2.7 and 2.1 that for any admissible homomorphism φ of a digraph D, m(D) ≥ m(φ(D)), it is now natural to ask what the cycle rank of the conjunction of two digraphs is.

Theorem 2.17 Let D1 and D2 be strong digraphs with pi points and m(Di) = ki + 1. If D1 ∧ D2 has δ strong components, then

m(D1 ∧ D2) = (k1k2 + δ) + p1k2 + p2k1.

Note: If the greatest common divisor of the lengths of all closed walks in Di is di, then δ = gcd(d1,d2) (see [16]).

PROOF: Let Di have qi arcs. Then D1 ∧ D2 has q1q2 arcs, and m(D1 ∧ D2) = q1q2 − p1p2 + δ. Now (q1−p1)(q2−p2) = k1k2, so that q1q2 − p1q2 − p2q1 + p1p2 = k1k2, and

q1q2 − p1p2 + δ = (k1k2 + δ) + p1q2 + p2q1 − 2p1p2
                = (k1k2 + δ) + p1(k2+p2) + p2(k1+p1) − 2p1p2
                = (k1k2 + δ) + p1k2 + p2k1.
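As a numeric sanity check of Theorem 2.17 (our own, with an assumed arc-list encoding), take D1 to be the 3-cycle with the extra chord 0 → 2, so p1 = 3, q1 = 4, k1 = 1, and closed-walk lengths 2 and 3 give d1 = 1; take D2 to be the 2-cycle, so p2 = 2, k2 = 0, d2 = 2, and δ = gcd(1,2) = 1:

```python
# Verify m(D1 ^ D2) = (k1*k2 + delta) + p1*k2 + p2*k1 on one example.

def conjunction(A1, A2):
    return [((u1, u2), (v1, v2)) for (u1, v1) in A1 for (u2, v2) in A2]

def strong_components(arcs, pts):
    reach = {u: {u} for u in pts}
    changed = True
    while changed:                       # naive transitive closure
        changed = False
        for (a, b) in arcs:
            new = reach[b] - reach[a]
            if new:
                reach[a] |= new
                changed = True
    return {frozenset(w for w in pts if u in reach[w] and w in reach[u])
            for u in pts}

D1 = [(0, 1), (1, 2), (2, 0), (0, 2)]   # strong: p1 = 3, q1 = 4, k1 = 1
D2 = [(0, 1), (1, 0)]                   # strong: p2 = 2, q2 = 2, k2 = 0
pts = [(i, j) for i in range(3) for j in range(2)]
D = conjunction(D1, D2)

delta = len(strong_components(D, pts))
m = len(D) - len(pts) + delta           # cycle rank q - p + delta, as in the proof
p1, k1, p2, k2 = 3, 1, 2, 0
print(delta, m, (k1 * k2 + delta) + p1 * k2 + p2 * k1)  # 1 3 3
```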

39

Corollary If G1 and G2 are connected graphs with cycle ranks ki + 1, the cycle rank of G1 ∧ G2 is (k1k2 + δ) + q1q2 + p1k2 + p2k1, where δ = 2 unless one of G1, G2 has an odd cycle, in which case δ = 1.

A homomorphism of a graph G onto a graph with n points defines a coloring of G, an assignment of colors given by a function from V(G) onto a set of n colors such that no two adjacent points are assigned the same color; conversely, every coloring of the points of a graph defines a homomorphism, where two points are identified just when they have the same color. The minimum number of colors in any coloring of a graph G is the chromatic number, χ(G), and the image of G under the associated homomorphism is the complete graph K_{χ(G)}.

The homomorphisms associated with colorings are, in general, not admissible. It might well be of interest to develop a theory of admissible colorings of graphs. We will present one connection between colorings and admissible homomorphisms. A graph G with chromatic number χ(G) = n is said to be uniquely n-colorable if there is exactly one homomorphism from G onto Kn (see [4], [14]).

Theorem 2.18 If G is uniquely n-colorable then the homomorphism φ:G → Kn is admissible.

PROOF: It is shown in [4] that if G is uniquely n-colorable then, in any coloring of G with n colors, every point v is adjacent to at least one point of every color different from that assigned to v. Thus if

40

φ:G → Kn is the unique homomorphism from G onto Kn, suppose that φ(u) = φ(v) and uw ∈ G. Then w is assigned a color different from that of u. Since u and v have the same color, it follows, from the result just quoted, that v is adjacent to a point w̄ which has the same color as w; i.e., φ(w̄) = φ(w). Thus, φ is admissible.
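Theorem 2.18 can be checked directly on a small uniquely 3-colorable graph: take K3 together with a fourth point joined to two of its points, so that the fourth point's color is forced. The encoding and names below are our own illustration, not the thesis's:

```python
# G = K3 plus a point 3 adjacent to points 0 and 1; its unique
# 3-coloring must give point 3 the color of point 2.

edges = [(0, 1), (0, 2), (1, 2), (3, 0), (3, 1)]
arcs = edges + [(v, u) for (u, v) in edges]   # graphs as symmetric digraphs
phi = {0: 0, 1: 1, 2: 2, 3: 2}               # the coloring homomorphism onto K3

def is_admissible(arcs, pts, phi):
    return all(any(b == v and phi[x] == phi[w] for (b, x) in arcs)
               for u in pts for v in pts if phi[u] == phi[v]
               for (a, w) in arcs if a == u)

print(is_admissible(arcs, range(4), phi))  # True
```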

CHAPTER III

REALIZATION WITH FEEDBACK ENCODING

A major concern of classical automata theory has been the realization of either the state behavior or the input-output behavior of machines. The main thrust of the research has always been to realize the given machine by a loop-free network; that is, by a network for which the digraph obtained by taking the modules as points and the flow lines between them as directed arcs has no directed cycles. In such a network there is no feedback, except possibly within modules. As noted by Holland [22], "feedback is a prominent structural feature of most systems which exhibit complex behavior." Nevertheless, as Zeigler [36] showed, many networks can be modelled by cascades; i.e., feedback-free networks. The modules of the new network are taken to be the strong components of the original network. This construction, of course, greatly increases the complexity of the component modules.

There are advantages both to excluding and including feedback in networks. On the one hand, there is a large body of techniques available for analyzing feedback-free systems and also for deriving feedback-free realizations of a given behavior [17]. On the other hand, a feedback-free realization can be artifactual: it often consists of merely masking those parts of the network with feedback, the strong components, and considering them as "black boxes". We thus have a trade-off between simplicity of the components and simplicity of the interconnections. It is certainly to be expected that by restricting in some way the form that feedback is allowed to take we can strike a suitable balance between the two extremes.

41

42

Thus, one motivation of the study we are about to undertake is the desirability of defining a restricted form of feedback which will prove tractable in both theory and application. Another is the following. Suppose that we have some given state behavior which we want to realize by a sequential machine. If the behavior is so realizable then, in fact, techniques exist for doing this in an optimal way: the algorithm for deriving reduced machines, first discussed by Moore [30]. However, it is often the case that additional restrictions are placed on the realization. For example, one might want the realizing machine to be expressible as a cascade of modules from a well-defined set. This, of course, has been studied by Zeiger [35], Krohn and Rhodes [24], and others. In general, it turns out that the behavior of the cascade which is derived properly includes the desired behavior. For another example, take the fault diagnosis problem, a small segment of which is discussed in Chapter V. Here we might want a realizing machine, for example, which, if it fails in some way, allows us to determine the cause or location of the fault, or perhaps even to correct it. Even the simplest fault diagnosis properties cannot, in general, be incorporated into the reduced machine which realizes the given behavior, and it is usually necessary for the realizing machine to have more states or inputs (and thus, more state circuitry) than the reduced machine. A second motivation for the study to follow, therefore, is to develop a form of realization which will allow us to add additional constraints and, at the same time, will not cause the same state set growth as the classical theory.

As presented in Chapter I, a realization of a machine M' = <Q',I',δ'> by a machine M = <Q,I,δ> is defined in terms of two maps

43

φ from Q onto Q' and h:I' → I such that, for any q ∈ Q and any x' ∈ I',

φ(q)x' = φ(qh(x')).   (3.1)

The function h can be considered to be a combinatorial (i.e., memoryless) module which encodes inputs to M' into inputs to M (see Figure 3.1).

Figure 3.1. Schema for realization.

We will say that M realizes M' with feedback encoding if there are maps φ from Q onto Q' and h:Q × I' → I such that for all q ∈ Q and all x' ∈ I'

φ(q)x' = φ(qh(q,x')).   (3.2)

Thus the combinatorial circuit h has as inputs both the input to the realized machine M' and the current state of the realizing machine M. This is expressed diagrammatically in Figure 3.2.
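Condition (3.2) is a finite check over Q × I'. The sketch below is our own (the dictionary encoding of transition tables is an assumption); it states the condition executably and confirms the trivial case in which the encoder ignores the state:

```python
# Executable reading (ours) of condition (3.2): for every q and x',
# phi(q)x' computed in M' equals phi(q h(q, x')) computed through M.

def realizes_with_fe(delta, delta_p, phi, h, Q, I_p):
    return all(delta_p[(phi[q], xp)] == phi[delta[(q, h[(q, xp)])]]
               for q in Q for xp in I_p)

# Trivial check: a modulo two counter realizes itself when h(q, x') = x'.
delta = {(q, x): (q + x) % 2 for q in (0, 1) for x in (0, 1)}
phi = {0: 0, 1: 1}
h = {(q, x): x for q in (0, 1) for x in (0, 1)}
print(realizes_with_fe(delta, delta, phi, h, (0, 1), (0, 1)))  # True
```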

44

Figure 3.2. Schema for realization with feedback encoding.

We will often find it convenient to consider the map h to be a collection {h_q | q ∈ Q} of maps, where each h_q maps I' to I. Thus, (3.2) could be restated as

φ(q)x' = φ(qh_q(x')).   (3.3)

If all the maps h_q were identical, then h would simply be a function from I' to I. We thus have the following simple observation:

Theorem 3.1 If M realizes M' then M realizes M' with feedback encoding.

We now give some examples of pairs of machines M and M' such that M realizes M' with feedback encoding, but not without.

Example 3.1 Let M1 = <Q,I1,δ1> be the modulo three counter in Figure 3.3, and let M2 = <R,I2,δ2> be the machine in Figure 3.4. Note that the only actual difference between the two machines is that the input labels at state r3 have been permuted.

45

Figure 3.3. A modulo three counter.

To show that M2 does not realize M1 we would in general have to consider all maps φ from R onto Q and h:I1 → I2; however, since M1 is highly symmetric, it is sufficient to look at the map φ1 such that φ1(ri) = qi, i = 1,2,3.

Figure 3.4. A machine which does not realize a modulo three counter.

There are then four candidates for the map h to consider. Although it is quite clear, as it will be in other examples, that only one-to-one maps need be considered, in this one case we will examine all the h maps for completeness. For each proposed map, we will give an example showing how (3.1) is violated.

46

Map h                     Violation of (3.1)
1. h(0) = h(1) = a        φ1(r1)1 = q2 ≠ q1 = φ1(r1a)
2. h(0) = h(1) = b        φ1(r1)0 = q1 ≠ q2 = φ1(r1b)
3. h(0) = b, h(1) = a     φ1(r2)1 = q3 ≠ q2 = φ1(r2a)
4. h(0) = a, h(1) = b     φ1(r3)1 = q1 ≠ q3 = φ1(r3b)

Thus M2 does not realize M1. It is clear that the most likely candidate for a suitable h map was the fourth, which fails because the actions of a and b at state r3 are not symmetric with their actions at r1 and r2. By allowing feedback encoding we can clear up this difficulty. Let h:R × I1 → I2 be the map

       0  1
   r1  a  b
h: r2  a  b
   r3  b  a

By the above discussion it is easy to see that this map, together with the map φ1 which takes each ri to qi, satisfies (3.2) in all cases, so that M2 does indeed realize M1 with feedback encoding.
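The verification in Example 3.1 can be replayed mechanically. The transition tables below are our reconstruction of Figures 3.3 and 3.4 (states written as integers), so treat them as an illustration rather than the thesis's exact data:

```python
# M1: modulo three counter (0 holds, 1 advances); M2: the same machine
# with the input labels at state 3 interchanged.
M1 = {(q, i): q if i == 0 else q % 3 + 1 for q in (1, 2, 3) for i in (0, 1)}
M2 = {(1, 'a'): 1, (1, 'b'): 2,
      (2, 'a'): 2, (2, 'b'): 3,
      (3, 'a'): 1, (3, 'b'): 3}

phi = {1: 1, 2: 2, 3: 3}
h = {(1, 0): 'a', (1, 1): 'b',      # the feedback encoder of the text
     (2, 0): 'a', (2, 1): 'b',
     (3, 0): 'b', (3, 1): 'a'}

# condition (3.2) at every state/input pair
ok = all(M1[(phi[r], i)] == phi[M2[(r, h[(r, i)])]]
         for r in (1, 2, 3) for i in (0, 1))
print(ok)  # True
```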

47

Figure 3.5. A modulo six counter.

We will try to find maps φ2 and h which define a realization of M2 by M3. By the symmetry of M3, it does not matter which state of M2 we take to be the image of p1 under φ2, so take φ2(p1) = r1. By (3.1), r1b = r2 = φ2(p1h(b)) and r1a = r1 = φ2(p1h(a)). Thus h(a) must be d: if h(a) = c, then φ2(p1c) = φ2(p2) = r1 and, similarly, for all i, φ2(pi) = r1, which is impossible. Then h(b) = c and φ2(p2) = r2. Then r2b = r3 = φ2(p2c) = φ2(p3), and similarly φ2(p5) = r2 and φ2(p6) = r3. But then r3a = r1, while φ2(p3h(a)) = φ2(p3d) = φ2(p6) = r3, so M3 cannot realize M2. However, if we now define φ2 and h by the tables

φ2            h    a  b
p1 → r1       p1   d  c
p2 → r2       p2   d  c
p3 → r3       p3   c  d
p4 → r1       p4   d  c
p5 → r2       p5   d  c
p6 → r3       p6   c  d

48

then we get a realization with feedback encoding of M2 by M3. Next we display two tables: the entries in the first are the values φ2(pj)x and those in the second are φ2(pjh_{pj}(x)), where x ∈ {a,b}. Since the tables

φ2(pj)x       φ2(pjh_{pj}(x))
     a   b         a   b
p1   r1  r2   p1   r1  r2
p2   r2  r3   p2   r2  r3
p3   r1  r3   p3   r1  r3
p4   r1  r2   p4   r1  r2
p5   r2  r3   p5   r2  r3
p6   r1  r3   p6   r1  r3

are identical, φ2 and h define a realization with feedback encoding.

Now let φ = φ1φ2:P → Q, and let h̄:P × I1 → I3 be defined by h̄(pj,x) = h(pj, h'(φ2(pj),x)), where h' denotes the feedback encoder of Example 3.1. As a map from P × I1 to I2, h'(φ2(pj),x) has the table

      0  1
p1    a  b
p2    a  b
p3    b  a
p4    a  b
p5    a  b
p6    b  a

49

Then h̄(pj,x) has the table

      0  1
p1    d  c
p2    d  c
p3    d  c
p4    d  c
p5    d  c
p6    d  c

We see that h̄:P × I1 → I3 is in fact independent of the state of M3, so that the composition of the realizations with feedback encoding of Examples 3.1 and 3.2 is a realization of M1 by M3. It is not true in general that the "composition" of realizations with feedback encoding is a realization. On the other hand, such a composition is always a realization with feedback encoding.

Theorem 3.2 Let M3 realize M2 with feedback encoding and let M2 realize M1 with feedback encoding. Then M3 realizes M1 with feedback encoding.

PROOF: Let machine Mi have state-set Qi and input-set Ii. Then we have maps φ from Q2 onto Q1 and ψ from Q3 onto Q2, together with maps h:Q2 × I1 → I2 and g:Q3 × I2 → I3,

50

such that

φ(q2)x1 = φ(q2h(q2,x1)),  q2 ∈ Q2, x1 ∈ I1

and

ψ(q3)x2 = ψ(q3g(q3,x2)),  q3 ∈ Q3, x2 ∈ I2.

Then, for any q3 ∈ Q3, let q2 be ψ(q3) and let q1 be φ(q2). For any x1 ∈ I1, q1x1 = φ(q2h(q2,x1)). But h(q2,x1) ∈ I2, so φ(q2h(q2,x1)) = φ(ψ(q3g(q3,h(q2,x1)))). Thus, letting τ(q3) = φ(ψ(q3)) and f(q3,x1) = g(q3,h(ψ(q3),x1)), we see that the pair (τ,f) defines a realization with feedback encoding of M1 by M3.

Example 3.3 Let M be the reset machine displayed in Figure 3.6. We will show that M realizes with feedback encoding the machine M1 of Example 3.1.

Figure 3.6. A reset machine.

Let φ be the map φ:ri → qi, and define h as follows: if qjy = qk, where y ∈ I1, then h(rj,y) = xk. Since machine M is a reset machine, so that for any state rj, rjxk = rk, it then follows that if φ(rj)y = qjy = qk then rjh(rj,y) = rk, so that φ(rj)y = φ(rjh(rj,y)).
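Example 3.3's construction is easy to animate. Below, the reset machine M has states 1,2,3 and inputs 1,2,3, where input k sends every state to state k; M1 is our reconstruction of the modulo three counter. All encodings are assumptions made for illustration:

```python
# The reset-machine realization of Example 3.3, checked against (3.2).
M1 = {(q, y): q if y == 0 else q % 3 + 1 for q in (1, 2, 3) for y in (0, 1)}
M = {(r, x): x for r in (1, 2, 3) for x in (1, 2, 3)}   # input x resets to state x
phi = {r: r for r in (1, 2, 3)}

# h(r_j, y) = x_k whenever q_j y = q_k, exactly as in the example
h = {(r, y): M1[(r, y)] for r in (1, 2, 3) for y in (0, 1)}

ok = all(M1[(phi[r], y)] == phi[M[(r, h[(r, y)])]]
         for r in (1, 2, 3) for y in (0, 1))
print(ok)  # True
```

Note that each h_r here uses only two of M's three inputs, and which two varies with r; this is exactly the unrestricted behavior that the type I and type II conditions introduced below are designed to rule out.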

51

Theorem 3.3 Let M' = <Q',I',δ'> be any machine and let M = <Q,I,δ> be a machine such that

i. |Q| ≥ |Q'|
ii. for each qi ∈ Q there is an input xi such that for all qj, qjxi = qi.

Then M realizes M' with feedback encoding.

PROOF: Let φ be any map from Q onto Q'. Then, as in Example 3.3, if φ(qi)y' = φ(qj), define h(qi,y') to be the reset input xj; the resulting pair (φ,h) defines a realization with feedback encoding.

Of course, for a given qi and y' there may be many states whose image under φ is φ(qi)y'. It does not matter which of these is taken to be the qj of the proof.

In one sense, Theorem 3.3 would seem to be quite impressive. We know from the Krohn-Rhodes results [24] that reset machines are quite simple, and we have shown that they are sufficient to realize any machine with feedback encoding. On second thought, however, we must have quite a different opinion. Sequential machines can have quite complicated behavior, and it is clear that what we are doing in Theorem 3.3 is lumping all the behavior into the h-map. Any sequential machine can, in fact, be modelled by a machine whose memory is a reset machine and whose transition function is computed by a memoryless circuit. While it is often desirable to transfer some of the complexities of a machine to a combinatorial circuit in this way, there is a, perhaps ill-defined, point at which it becomes fatuous: if our purpose is to study the

52

complexities of sequential machines, i.e., machines with memory, it does us no good to define a canonical form in which all the complexity resides in a memoryless component. It appears that the type of feedback we have defined does not quite strike the balance alluded to in the introduction to this chapter. We must therefore place some restrictions on the types of feedback encoding which we shall allow.

Given machines M = <Q,I,δ> and M' = <Q',I',δ'>, we will say that a map h:Q × I' → I is a type I feedback encoder if, for each q ∈ Q, the map h_q:I' → I is one-to-one and onto. Recalling Example 3.3, it is clear that machine M cannot realize the modulo three counter M1 with type I feedback encoding. Unfortunately, this example obscures the point somewhat: M cannot realize M1 with type I feedback encoding because |I| > |I1|. Since, in studying realization, we are interested in whether one machine can capture the behavior of another, we certainly do not want to say that one machine cannot realize another solely because the cardinalities of their input sets are not equal. Even in the classical realization theory, where the encoder is independent of the state, this restriction does not appear. Thus, we relax the previous definition in the following way: a map h:Q × I' → I is a type II feedback encoder if there is a subset Ī ⊆ I such that for each q ∈ Q the range of the map h_q is Ī and h_q is one-to-one. Even with a type II encoding, M cannot realize M1 in Example 3.3.

In the sequel we will rarely use the adjectives "type I" and "type II". Whenever we refer to feedback encoding we will assume it to be type I, unless explicitly stated otherwise. Now, since if M realizes M' with type II feedback encoding it is clear that a submachine of M

53 realizes M' with type I feedback encoding, most of the results we obtain for type I encodings will readily translate into results for type II encodings, and we will rarely make this translation explicit. A word is in order here about the restriction that the maps hq be one-to-one. Example 3.3 shows that this cannot be relaxed very much before the concept of realization with feedback encoding becomes both unrestricted and unrealistic. However, in practice, in applying the concept of realization with feedback encoding it might be desirable to introduce slight relaxations, depending of course, on the resultant increase in complexity of the h map. We are now ready to begin our study of realization with feedback encoding. We first wish to point out that the composition property developed in Theorem 3.2 has not been lost. Consider the proof of that theorem, and suppose that both h and g are type I feedback encodings. We wish to show that f:Q2 x I1 + 13 is type I; i.e., that for each q3, fq is both one-to-one and onto. For each q3 and each x3 C I2, we know that there is an x2 C 12 such that g(q3,x2) = x3. Also, there is an x, such that h(q(q3),x1) = x2. Thus f(q3,xl) = g(q',h(p(q3),xXl = x3, and so fq is onto. Now, since fq (x1) = g(q3,h( (q3),x)), and since both g3 and h, are one-to-one, it follows immediately that fq is 43 ^(q3) 3 one-to-one. We have thus proved: Theorem 3.4 Let M3 realize M2 with feedback encoding and let M2 realize M1 with feedback encoding. Then M3 realizes M1 with feedback encoding.

This theorem, of course, refers to type I feedback encodings. On the other hand, if one of the realizations is of type II, then so will be the composed realization. We next show that with the restrictions we have introduced, we have indeed limited the power of the concept. The next theorem is important for another reason as well; it demonstrates a relationship between realization with feedback encoding and the structures of the state graphs of the realized and realizing machines. Recall from Chapter II that for any machine M, D(M) is its underlying digraph.

Theorem 3.5 If M realizes M' with feedback encoding then there is an admissible homomorphism from D(M) onto D(M'). Conversely, if there is an admissible homomorphism from D(M) onto D(M'), and if |I| = |I'|, then M realizes M' with feedback encoding.

PROOF: Let the realization be defined by maps φ:Q → Q' and h:Q × I' → I. Suppose that φ(u) = φ(v) and that uw ∈ D(M). We must show that there is a w̄ such that vw̄ ∈ D(M) and φ(w̄) = φ(w). Since uw ∈ D(M) there is an input x such that ux = w. Since h_u:I' → I is onto, there is an x' ∈ I' such that h_u(x') = x. Then φ(u)x' = φ(w), so that φ(v)x' = φ(w). Let w̄ = vh_v(x'). Then by (3.3), φ(w̄) = φ(w), and, since w̄ = vh_v(x'), vw̄ is certainly an arc of D(M). Therefore, φ considered as a mapping from D(M) to D(M') will be admissible once, as in Theorem 2.9, we verify that it is full. Since M realizes M' with feedback encoding, if there is a single arc u'v' ∈ D(M') then this arc must have a pre-image arc in D(M). But if there is more than one arc between

u' and v' in D(M'), then there are inputs, say x'_1,...,x'_r, such that u'x'_1 = u'x'_2 = ... = u'x'_r = v'. Then for each u ∈ φ^{-1}(u') there are v_1,...,v_r such that uh_u(x'_i) = v_i and φ(v_i) = v', for i = 1,...,r. Then each arc uv_i ∈ D(M) is a pre-image for one of the arcs between u' and v'. While it may be the case that the v_i are not all distinct, since h_u is one-to-one we are assured that there will be r distinct arcs from u to points in φ^{-1}(v'). Thus, φ:D(M) → D(M') is admissible.

For the converse, suppose φ is an admissible map from D(M) onto D(M') and that |I| = |I'|. Let u be a state of M and suppose that

Figure 3.7: An admissible homomorphism based on Example 3.2.

φ(u)x' = w'. Since φ is a map from D(M) onto D(M') there must be states ū and w in M such that φ(ū) = φ(u), φ(w) = w', and ūw ∈ D(M). But then, by admissibility, there is a w̄ ∈ M such that φ(w̄) = w' and uw̄ ∈ D(M). Thus, we define h_u(x') to be that input x which induces the arc uw̄. It is clear that once we verify that this assignment can be done so that h_u:I' → I is one-to-one and onto, the pair (φ,h) will define a realization with feedback encoding. But the one-to-one and onto properties can be satisfied just when the cardinality of the set of arcs from u' to w' is the same as the cardinality of the set of arcs from u to points in φ^{-1}(w'), and we proved this property of admissible maps in Theorem 2.12. Thus, M realizes M' with feedback encoding.

As we noted above, a type II feedback encoding is associated with realization by a submachine. The implication there was that we were discussing spanning submachines: submachines with the same state set but restricted input set. However, as in the classical theory of realization, we want to allow a machine M' to be realized with feedback encoding by a machine M, where the actual realization is done by a submachine M̄ of M which might have either fewer states or fewer inputs, or both. Theorem 3.5 carries through to cover these generalizations in a straightforward manner.

Corollary If M' is realized with feedback encoding by a submachine M̄ of M then there is an admissible homomorphism from D(M̄) onto D(M').

It is sometimes common in the classical realization theory to say that M realizes M' when, in fact, the realization is by a submachine of M. We will, in general, only say that M realizes M' with feedback

encoding when the realization is by all of M.

Corollary If φ is an SP homomorphism from M to M' and |I| = |I'| then φ is an admissible homomorphism from D(M) to D(M').

PROOF: We know that φ:M → M' is an SP homomorphism if and only if M realizes M'. But any realization is trivially a realization with feedback encoding, as we saw in Theorem 3.1.

Although Examples 3.1 and 3.2 were presented before we had the concept of type I feedback encoding, the realizations in each are easily seen to involve type I feedback encodings. Thus, the converse to the second corollary is certainly false.

We have not, thus far, distinguished the situation in which φ is an isomorphism from that in which φ is a homomorphism: in the former case φ and h define an isomorphic realization with feedback encoding, and in the latter, a homomorphic realization with feedback encoding. As we mentioned in the introductory remarks to this chapter, we will eventually wish to show that, by introducing feedback encoding, realizations with extra constraints can sometimes be achieved with less cost, in terms of the switching circuitry, than with the conventional concept of realization. In particular, in Chapter V we will give instances in which such constrained realizations with feedback encoding can be achieved by machines with no more states than the reduced machine which exhibits the behavior we wish to realize. To this end, we will essentially restrict our study to isomorphic realizations with feedback encoding.

Corollary If M isomorphically realizes M' with feedback encoding, then D(M) ≅ D(M').
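Since Theorem 3.5 makes admissibility the operative graph condition, it can be tested directly on small digraphs. Below is a sketch (my own encoding, and restricted to digraphs without parallel arcs, so arc multiplicities are not checked): arcs are sets of ordered pairs, and the three conditions tested are the ones used in the proof, namely arc preservation, fullness, and the lifting property.

```python
from itertools import product

def is_admissible(arcs, arcs_p, phi):
    """arcs, arcs_p: arc sets of D and D'; phi: dict mapping V(D) onto V(D')."""
    points = set(phi)
    # phi must be a digraph homomorphism: every arc maps to an arc.
    if any((phi[u], phi[v]) not in arcs_p for (u, v) in arcs):
        return False
    # phi must be full: every arc of D' has a pre-image arc in D.
    images = {(phi[u], phi[v]) for (u, v) in arcs}
    if not arcs_p <= images:
        return False
    # lifting: phi(u) = phi(v) and uw an arc imply an arc v w~ with
    # phi(w~) = phi(w).
    for u, v in product(points, points):
        if phi[u] != phi[v]:
            continue
        for (a, w) in arcs:
            if a == u and not any(b == v and phi[x] == phi[w]
                                  for (b, x) in arcs):
                return False
    return True

# The map of a directed 4-cycle onto a directed 2-cycle is admissible.
assert is_admissible({(0, 1), (1, 2), (2, 3), (3, 0)},
                     {(0, 1), (1, 0)},
                     {0: 0, 1: 1, 2: 0, 3: 1})
# Identifying a point with out-degree 1 and an isolated point is not.
assert not is_admissible({(0, 1)}, {('a', 'b')}, {0: 'a', 1: 'b', 2: 'a'})
```

The second example fails precisely at the lifting condition: 0 and 2 have the same image, but only 0 has an out-arc.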

In the case that D(M) ≅ D(M'), Hedetniemi and Fleck [20] say that M and M' are graph-isomorphic.

Much of the classical theory is concerned with realizing machines by cascades of other machines. We can also study realization by cascades, where of course the realization will involve feedback encoding. Most of the theorems we derive will be seen to generalize results about realization without feedback encoding.

Lemma 3.1 If a cascade M1 oZ M2 realizes M isomorphically with feedback encoding, then there is an admissible homomorphism from D(M) onto D(M1).

PROOF: Let the realization with feedback encoding be defined by maps φ:Q1 × Q2 → Q and h:(Q1 × Q2) × I → I1, so that for any input x ∈ I and any state (q1,q2) ∈ Q1 × Q2, φ((q1,q2))x = φ((q1,q2)h((q1,q2),x)). Let ρ = φ^{-1}, and for a state q ∈ Q let q(j) be the state proj_j ρ(q). Now, define an equivalence relation on the states of Q by q ~ q' if and only if q(1) = q'(1). Suppose that q1 ~ q2 and that for some x ∈ I, q1x = q3. Let S1 = {proj_1 ρ(q1y) | y ∈ I} = {q1(1)h(ρ(q1),y) | y ∈ I} and S2 = {proj_1 ρ(q2y) | y ∈ I} = {q2(1)h(ρ(q2),y) | y ∈ I}. Since q1(1) = q2(1) by hypothesis, and since {h(ρ(q1),y) | y ∈ I} = {h(ρ(q2),y) | y ∈ I}, it follows immediately that S1 = {q1(1)h(ρ(q1),y)} = {q1(1)h(ρ(q2),y)} = S2. Thus there is some state q4 ∈ Q and an input y ∈ I such that q2y = q4 and q4(1) = q3(1); i.e., q4 ~ q3. Thus proj_1 ρ:Q → Q1 defines an admissible homomorphism from D(M) onto D(M1).

This lemma also follows from the Hartmanis and Stearns results [17], for M1 induces an SP partition on the cascade, and this converts to an admissible partition on M.

Corollary If M' is a submachine of a cascade M1 oZ M2 such that {proj_1(q') | q' ∈ Q'} = Q1, and if M' realizes a machine M isomorphically with feedback encoding, where |I'| = |I1|, then there is an admissible homomorphism from D(M) to D(M1).

Lemma 3.2 Suppose there is an admissible homomorphism from D(M) to D(M1), where |I| = |I1|. Then there is a machine M2 such that M can be isomorphically realized with feedback encoding by a submachine of a cascade M1 oZ M2.

PROOF: Suppose that φ is an admissible homomorphism from D(M) onto D(M1). We must exhibit a machine M2 and a connecting map Z such that M1 oZ M2 isomorphically realizes M with feedback encoding. Now, the map φ induces a partition Π on the states of M; suppose that there are r partition classes Pi and that the largest of these contains s states. Number the states in each Pi as 1,2,...,ni, where ni ≤ s, and form a new partition Π' with s blocks Bi, where Bi consists exactly of those states, at most one from each Pj, which were numbered i. Then each state q ∈ M can be associated with exactly one pair (Pi,Bj) of blocks such that q ∈ Pi ∩ Bj. Clearly the blocks Pi are identifiable with the states of M1. We know from Theorem 3.5 that there is a map h:Q × I1 → I which is one-to-one and onto and which, taken together with φ, defines a

realization with feedback encoding of M1 by M. We represent the state of M associated with blocks Pi and Bj by qij. Now, suppose that under some input x, qij x = qkm. Then, by (3.3), φ(qij)h^{-1}_{qij}(x) = φ(qkm); we are permitted to refer to h^{-1} since h is a type I feedback encoding. Machine M2 will have the blocks Bj for its states. Now if qij x = qkm and x' = h^{-1}_{qij}(x), we will want (Pi,Bj)x' = (Pi x', Bj Z(Pi,x')) = (Pk, Bj Z(Pi,x')) = (Pk,Bm). We already know that Pi x' = Pk, so we must define M2 and Z in such a way that Bj Z(Pi,x') = Bm. Let M2 have r inputs yi. If (Pi,Bj)x' = (Pk,Bm) then in M2 we define Bj yk = Bm; since not every pair (Pi,Bj) defines a state of M, this may leave us with don't-care conditions, which can be assigned arbitrarily. Now we define Z(Pi,x') = yk, where Pi x' = Pk. With these definitions it follows that if qij x = qkm then (Pi,Bj)x' = (Pi x', Bj Z(Pi,x')) = (Pk,Bm). If ρ is the map which assigns to (Pi,Bj) the state qij, then for each x ∈ I, ρ((Pi,Bj))x = ρ((Pi,Bj)h^{-1}_{qij}(x)). Thus if g:({Pi} × {Bj}) × I → I1 is defined by g((Pi,Bj),x) = h^{-1}_{qij}(x), then ρ and g will define an isomorphic realization with feedback encoding of M by that submachine of M1 oZ M2 consisting of the pairs of states (Pi,Bj) for which Pi ∩ Bj ≠ ∅.

Combining the lemmas, we have

Theorem 3.6 A machine M can be isomorphically realized with feedback encoding by a submachine M' of a cascade M1 oZ M2, where |I'| = |I|, if and only if there is an admissible homomorphism from D(M) onto the subdigraph of D(M1) induced by the states in {proj_1(q') | q' ∈ Q'}.

Corollary A machine M can be isomorphically realized with feedback encoding by a submachine of a cascade M1 oZ M2 if and only if there is an admissible homomorphism from D(M) onto the digraph of a submachine of M1.
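The cascade transition rule and the realization condition (3.3) can likewise be exercised numerically. The machines, connecting map, and encoder below are invented toy data, not taken from the text's examples; the encoder consults the state of the tail component, so each section h_s is still a bijection, and the realized machine is computed directly from (3.3).

```python
from itertools import product

def cascade_step(delta1, delta2, Z):
    """M1 oZ M2: (q1, q2) . x = (q1 . x, q2 . Z(q1, x))."""
    def step(state, x):
        q1, q2 = state
        return (delta1[(q1, x)], delta2[(q2, Z[(q1, x)])])
    return step

# Toy components: each is a one-bit machine toggled by input 1.
B = [0, 1]
delta1 = {(q, x): q ^ x for q in B for x in B}
delta2 = dict(delta1)
Z = {(q, x): x for q in B for x in B}      # pass the input straight through
step = cascade_step(delta1, delta2, Z)

states = list(product(B, B))
# Feedback encoder: flip the input when the tail component is in state 1.
h = {(s, x): x ^ s[1] for s in states for x in B}
# The machine realized isomorphically (phi = identity) via (3.3):
delta_p = {(s, x): step(s, h[(s, x)]) for s in states for x in B}

assert all({h[(s, 0)], h[(s, 1)]} == {0, 1} for s in states)  # type I sections
assert delta_p[((0, 1), 0)] == (1, 0)
```

Because the encoder's sections are bijections, the construction stays within type I feedback encoding even though the encoder depends on the cascade's state.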

Having established Theorem 3.6, it is natural to investigate what happens if the feedback to the h map in a realization with feedback encoding by a cascade is from only one of the components of the cascade. Unfortunately, as we shall see, there does not seem to be too much to say about such situations.

Theorem 3.7 Let M and M1 be machines. Then M has an SP homomorphic image which is graph-isomorphic to M1 if and only if M can be isomorphically realized with feedback encoding by a cascade M1 oZ M2 in such a way that the encoding map h has domain Q1 × I.

PROOF: If M1 is graph-isomorphic to an SP homomorphic image M̄1 of M, then in any isomorphic realization of M by a cascade M̄1 oZ M2, M̄1 can be replaced by M1 together with the appropriate feedback encoder, which, of course, is independent of the state of M2. On the other hand, let the realization with feedback encoding be defined by the maps φ:Q1 × Q2 → Q and h:Q1 × I → I1. Let p = proj_1 φ^{-1} be the admissible homomorphism guaranteed by Theorem 3.5; p:Q → Q1. As in the proof of Lemma 3.1, for each state qi, φ^{-1}(qi) = (qi(1),qi(2)). Now, suppose that p(q1) = p(q3), q1x = q2 and q3x = q4. Then φ((q1(1),q1(2)))h(q1(1),x) = φ((q2(1),q2(2))) and φ((q3(1),q3(2)))h(q3(1),x) = φ((q4(1),q4(2))), where q2(1) = q1(1)h(q1(1),x) and q4(1) = q3(1)h(q3(1),x). But since p(q1) = p(q3), q1(1) = q3(1), and therefore q2(1) = q4(1). Thus, if p(q1) = p(q3) then p(q1x) = p(q3x), so that p induces an SP partition on M, and hence an SP homomorphism p̄ of M onto some machine M̄1. If p̄ is reinterpreted as a digraph mapping, as in the proof of Theorem 2.9, it is identical to p; thus M1 is graph-isomorphic to M̄1.

We will give two examples in which the encoder is independent of the state of the front component.

Example 3.4 Let M be the machine

M    0  1
1    1  5
2    2  6
3    4  3
4    6  2
5    5  1
6    3  4

Consider the following two machines and connecting map Z:

M1   0   1        M2   0   1        Z    0  1
q1   q1  q2       r1   r1  r1       q1   0  1
q2   q2  q1       r2   r2  r3       q2   1  0
                  r3   r3  r2

Then the cascade M1 oZ M2 is

M1 oZ M2      0            1
P1 = q1r1     q1r1 = P1    q2r1 = P5
P2 = q1r2     q1r2 = P2    q2r3 = P6
P3 = q1r3     q1r3 = P3    q2r2 = P4
P4 = q2r2     q2r3 = P6    q1r2 = P2
P5 = q2r1     q2r1 = P5    q1r1 = P1
P6 = q2r3     q2r2 = P4    q1r3 = P3
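The cascade table just given can be recomputed mechanically. In the sketch below (the dictionary encoding is mine) the cascade rule is (q, r) . x = (q . x, r . Z(q, x)), and the assertions spot-check two of the rows above.

```python
# Transition tables of M1, M2, and the connecting map Z from Example 3.4.
delta1 = {('q1', 0): 'q1', ('q1', 1): 'q2',
          ('q2', 0): 'q2', ('q2', 1): 'q1'}
delta2 = {('r1', 0): 'r1', ('r1', 1): 'r1',
          ('r2', 0): 'r2', ('r2', 1): 'r3',
          ('r3', 0): 'r3', ('r3', 1): 'r2'}
Z = {('q1', 0): 0, ('q1', 1): 1, ('q2', 0): 1, ('q2', 1): 0}

def step(state, x):
    """One step of M1 oZ M2: the tail machine M2 receives Z(q, x)."""
    q, r = state
    return (delta1[(q, x)], delta2[(r, Z[(q, x)])])

assert step(('q1', 'r1'), 1) == ('q2', 'r1')   # row P1, input 1: P5
assert step(('q2', 'r2'), 0) == ('q2', 'r3')   # row P4, input 0: P6
```

Iterating `step` over all six states and both inputs reproduces the whole table.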

Under the maps φ:Pi → i and h,

h    0  1
P1   0  1
P2   0  1
P3   1  0
P4   0  1
P5   0  1
P6   1  0

M1 oZ M2 realizes M with feedback encoding, and the encoding map depends only on the input and the state of M2. Notice that M has the SP partition {15; 2346}. One might ask, therefore, whether it is "worth" using a realization with feedback encoding when, in fact, there is a cascade which realizes M without feedback encoding. The answer, of course, depends on the intended application, but in this case M must be realized without feedback encoding by the cascade of a two state machine and a four state machine, which certainly requires more state circuitry than the cascade M1 oZ M2. In the next example, which will be quite similar, the realized machine will have no nontrivial cascade realization.

Example 3.5 Let M2 be as in the previous example, and let M1 instead be isomorphic to M2, with states {qi | i = 1,2,3}. Under the connecting map Z(qi,x) = x if i ≠ 3 and 1−x if i = 3, the cascade M1 oZ M2 is:

M1 oZ M2    0       1
q1r1        q1r1    q2r2
q1r2        q1r2    q2r3
q1r3        q1r3    q2r1
q2r1        q2r1    q3r2
q2r2        q2r2    q3r3
q2r3        q2r3    q3r1
q3r1        q3r2    q1r1
q3r2        q3r3    q1r2
q3r3        q3r1    q1r3

Now, with the maps ψ(qirj) = 3(i−1)+j and h(qirj,x) = x if j ≠ 3 and 1−x if j = 3, the following machine M is isomorphically realized with feedback encoding by M1 oZ M2:

M    0  1
1    1  5
2    2  6
3    4  3
4    4  8
5    5  9
6    7  6
7    8  1
8    9  2
9    3  7

However, in this case M has no nontrivial SP partitions, as can be shown in the usual manner by demonstrating that for any pair (i,j)

of states, an SP partition in which i ≡ j has only one block.

It should be clear by this point that whether or not a machine M has an isomorphic cascade realization with feedback encoding depends on the structure of D(M). We will now strengthen this impression by giving a large class of families F of digraphs such that if some cascade has a submachine whose digraph is in F then one component of the cascade also has a submachine whose digraph is in F. We begin by proving a number-theoretic lemma (Lemma 3.4).

Recall the following common notations. If integer n divides integer m we write n|m, and if not we write n∤m. The greatest common divisor of a set {i1,...,in} of integers is denoted (i1,...,in).

Lemma 3.3 Let g = (a,m). Then the congruence ax ≡ b (mod m) has no solutions if g∤b. If g|b then for any t = 0,1,...,g−1, x = (b/g)x0 + t(m/g) (mod m) is a solution, where x0 is any solution to (a/g)x ≡ 1 (mod m/g).

PROOF: See [32, Theorem 2.13]. Note that (a/g)x ≡ 1 (mod m/g) always has the solution x0 = (a/g)^{φ(m/g)−1}, where the value of φ(n) is the number of positive integers less than or equal to n which are relatively prime to n.

Lemma 3.4 If (i1,...,it,n) = g then for any r there is a solution to Σ_{j=1}^{t} wj ij ≡ r (mod n) if and only if g|r.
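Before the proof, the solvability criteria of Lemmas 3.3 and 3.4 can be checked numerically. The sketch below (function names mine) follows the formula of Lemma 3.3, except that it uses Python's built-in modular inverse in place of the Euler-function expression for x0.

```python
from math import gcd
from functools import reduce

def solve_congruence(a, b, m):
    """All solutions of a x = b (mod m), per Lemma 3.3; [] when (a,m) does
    not divide b."""
    g = gcd(a, m)
    if b % g:
        return []
    x0 = pow(a // g, -1, m // g)      # inverse of a/g modulo m/g (Python 3.8+)
    return [((b // g) * x0 + t * (m // g)) % m for t in range(g)]

def weighted_sum_solvable(i_list, r, n):
    """Lemma 3.4: sum w_j i_j = r (mod n) is solvable iff (i_1,...,i_t,n) | r."""
    return r % reduce(gcd, i_list + [n]) == 0

# 6x = 9 (mod 15): g = 3 divides 9, giving the three solutions 9, 14, 4.
assert solve_congruence(6, 9, 15) == [9, 14, 4]
assert solve_congruence(6, 7, 15) == []          # 3 does not divide 7
assert weighted_sum_solvable([4, 6], 2, 10)      # gcd(4, 6, 10) = 2 divides 2
assert not weighted_sum_solvable([4, 6], 3, 10)
```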

PROOF: The case t = 1 is just Lemma 3.3. Suppose the result to hold when t = s−1 and consider the congruence Σ_{j=1}^{s} wj ij ≡ r (mod n). For a fixed ws, this equation has a solution if and only if the equation

Σ_{j=1}^{s−1} wj ij ≡ r − ws is (mod n)    (3.4)

has a solution. Let g' = (i1,...,i_{s−1},n). By hypothesis, (3.4) has a solution just when g'|(r − ws is). Thus we must find a solution to ws is ≡ r (mod g'), which, by Lemma 3.3, has a solution if and only if (g',is)|r. But (g',is) = g, so there is a solution to Σ wj ij ≡ r (mod n) if and only if g|r.

Now, recall that in Example 2.3 we presented a class of n-state, t-input machines, each of which had an n-cycle in the automorphism group ΓD(M), and showed that the digraphs of these machines could be completely specified by n and the numbers ij, j = 1,...,t, such that γ(q0) = {q_{ij}}. We also showed, for any m|n, that there is an m-state machine M'_m, with digraph D(M'_m) = D'_m(M), where ΓD(M'_m) contains an m-cycle, such that D'_m(M) is an admissible homomorphic image of D(M). We use these facts in the next theorem, which shows that, at least for machines whose digraphs are sufficiently symmetric, cascade decompositions are structure preserving.

Theorem 3.8 If M, with digraph D(M) = D(i1,...,it,n), is a submachine of a cascade M1 oZ M2, and if the g.c.d. (i1,...,it,n) = 1, then there is a machine M'_m with digraph D'_m(M) such that either

i. m|n, m > 1, and M'_m is a submachine of M1; or
ii. m = 1, and M'_1 is a submachine of M2.

PROOF: Let φ be the admissible homomorphism of M1 oZ M2 onto M1 guaranteed by Theorem 3.6. We first assume that |φ(M)| > 1, and consider the map φ restricted to M. Let k be the smallest integer for which there are states qi, q_{i+k} such that φ(qi) = φ(q_{i+k}); by symmetry we can take i = 0. For each r, 1 ≤ r ≤ t, there is an sr, 1 ≤ sr ≤ t, such that φ(q_{ir}) = φ(q_{k+i_{sr}}). If, for each 1 ≤ r ≤ t, sr = r, then, for all j, φ(q_{ij}) = φ(q_{k+ij}). [Note: if, for some r, ir and i_{r+1} are the same integer we could have sr = r+1 and s_{r+1} = r without affecting this conclusion.] Otherwise, let r0 be the smallest integer such that i_{s_{r0}} < i_{r0}. Then φ(q_{i_{r0}}) = φ(q_{k+i_{s_{r0}}}) and, since i_{s_{r0}} < i_{r0}, we have (k + i_{s_{r0}}) − i_{r0} < k, a contradiction, unless |φ(M)| = n, in which case φ restricted to M is an isomorphism. Thus φ(q0) = φ(qk) and φ(q_{ij}) = φ(q_{k+ij}) for all 1 ≤ j ≤ t; in general, if a = w1i1 + w2i2 + ... + wtit then φ(qa) = φ(q_{k+a}) for any w1,...,wt ≥ 0. But, by Lemma 3.4, since (i1,...,it,n) = 1, there is a solution to Σ wj ij ≡ r (mod n) for any r, so that for all s and all t̄ ≥ 0, φ(qs) = φ(q_{s+kt̄}). But then, if φ(qr) = φ(qs), where s > r, we can write s − r = hk + t̄, t̄ < k, and conclude that φ(q_{r+hk}) = φ(qs), a contradiction to the minimality of k since s − (r+hk) = t̄ < k, unless t̄ = 0. Thus φ(qr) = φ(qs) if and only if s ≡ r (mod k). Then D(φ(M)) ≅ D'_k(M). We have shown that if |φ(M)| = m > 1 then there is a submachine M'_m of M1 with the desired properties. We now consider the case where |φ(M)| = 1.

Let φ(q0) = q*. Then, as a submachine of M1 oZ M2, M has states qi = (q*,qi(2)); then the states {qi(2)} induce a submachine with digraph isomorphic to D(M), where if x is the input which takes qi to qj in M, the input which takes qi(2) to qj(2) is Z(q*,x). Therefore, in fact, the states {qi(2)} induce a submachine of M2 which is isomorphic to M.

Corollary If the machine M'_m has D(M'_m) = D'_m(M) = D(j1,...,jt,m), then (j1,...,jt,m) = 1.

PROOF: The parameter jk for M'_m is the residue of ik modulo m; that is, 0 ≤ jk ≤ m−1 and jk ≡ ik (mod m). Now suppose g = (j1,...,jt,m). Then g|m and, for each k, g|jk. But since jk ≡ ik (mod m), there is an ℓk ≥ 0 such that ik = jk + mℓk, so that g|ik. Also, since g|m and m|n, g|n. Thus g|(i1,...,it,n). Hence g = 1.

We now proceed to consider the complexity of cascade realizations with feedback encoding. These results are motivated by Zeigler's [36] generalization of the Burks-Wang conjecture [3]. Up to now, we have been concerned only with cascades of two machines, but it is clear that we can generalize our results to the more general iterated cascades as defined in Chapter I. Consider, for example, Theorem 3.8. Suppose that M, as defined in the theorem, is a submachine of a cascade ΠNi. Generalized cascade is a binary procedure, so we can find the major connective and write ΠNi = L1 oZ L2. The theorem guarantees the existence of a machine

M'_m in either L1 or L2. Since the parameters for M'_m satisfy (j1,...,jt,m) = 1, if that component of the cascade L1 oZ L2 which contains M'_m is itself a cascade, we can repeat the procedure. Eventually we will find a submachine M'_r with digraph D'_r(M) of some component Ns of the cascade ΠNi. This proves

Corollary If M, with digraph D(M) = D(i1,...,it,n), is a submachine of a generalized cascade ΠNj, and if (i1,...,it,n) = 1, then there is a submachine M'_r with digraph D'_r(M) of some component Nk of the cascade.

Suppose now that we have a cascade N = ΠNi, and let machine Ni have pi states. We define the size of the cascade, denoted size(N), to be max{pi}. Let Sa be the collection of all generalized cascades N having size(N) ≤ a. We will show that Sa is not universal for isomorphic realization with feedback encoding; i.e., that (for any a) there is a machine M which cannot be isomorphically realized with feedback encoding by any element of Sa. In fact, we will show a slightly stronger result. First, however, we will show that this statement is not true for unrestricted feedback encoding, by proving the following corollary to Theorem 3.3.

Corollary Let M = <Q_M,I_M,δ_M> be any machine. Then there is a cascade N = <Q_N,I_N,δ_N> = ΠNi of size 2 and maps p:Q_M → Q_N, h:Q_N × I_M → I_N, where p is one-to-one, such that for each q ∈ Q_M and x ∈ I_M,

qx = p^{-1}(p(q)h(p(q),x)).    (3.5)

PROOF: Note first that if h were a type II feedback encoder then the pair (p^{-1},h) would be defining a realization of M with feedback encoding by a submachine of N.

Let n = ⌈log2 |Q_M|⌉, and let Nn have 2^n states {q0,...,q_{2^n−1}} and 2^n inputs {x0,...,x_{2^n−1}}, where for any qi and xj, qi xj = qj. By Theorem 3.3, maps p and h certainly exist and satisfy the condition of the theorem. We need only demonstrate, therefore, that Nn is a cascade of size 2. We do this inductively. Consider the machines N_{n−1} and N1, with input sets {yi | i = 0,...,2^{n−1}−1} and {xi | i = 0,1}. Modify N_{n−1} to get a machine N'_{n−1} by replacing each yi by two inputs, y_{i0} and y_{i1}, where for each qj, qj y_{i0} = qj y_{i1} = qi. It is easy to verify that with connecting map Z such that Z(qj,y_{i0}) = x0 and Z(qj,y_{i1}) = x1, the cascade N'_{n−1} oZ N1 is isomorphic to Nn. Had we expressed Nn = N_{n−1} oZ N1 we would be done. As is, we must now show that N'_{n−1} can be further decomposed.

Let r be an s-bit binary number r = b1...bs, and let proj_j(r) = bj; for i < j let proj_{i,j}(r) be bi b_{i+1}...bj. Let N*_1 be the machine with states {q0,q1} and inputs {yj | j = 0,...,2^n−1} such that qi yj = q_{proj_1(j)}. Let Z1 be the map Z1(qi,yj) = y_{proj_2(j)}; then the cascade N*_2 = N*_1 o_{Z1} N*_1 has states {q0,q1,q2,q3} and inputs {yj | j = 0,...,2^n−1} such that qi yj = q_{proj_{1,2}(j)}. Let Z2 be the map Z2(qi,yj) = y_{proj_3(j)}; then N*_3 = N*_2 o_{Z2} N*_1 has states {qi | i = 0,...,7} and inputs {yj | j = 0,...,2^n−1}, such that qi yj = q_{proj_{1,3}(j)}. Continuing in this manner, we see that N*_r has 2^r states and 2^n inputs such that the action of each yj in N*_r is isomorphic to the action of y_{proj_{1,r}(j)} in Nr. Then N*_{n−1} is isomorphic to N'_{n−1}, so that Nn is a cascade of size two.
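The construction above amounts to letting the i-th two-state component remember the i-th bit of the index of the last reset input, with the connecting maps delivering to each component only its own bit. A minimal sketch of that bit-level view (the encoding into tuples of bits is mine):

```python
def reset_step(state_bits, j):
    """One step of the cascade: input x_j resets component i to bit i of j, so
    the tuple of one-bit components always spells the target state's index."""
    n = len(state_bits)
    return tuple((j >> (n - 1 - i)) & 1 for i in range(n))

def decode(bits):
    """Recover the state index of the simulated 2^n-state reset machine."""
    return int(''.join(map(str, bits)), 2)

s = (0, 1, 1)                            # state q3 of the 8-state reset machine
assert reset_step(s, 5) == (1, 0, 1)     # x_5 resets the cascade to q5
assert decode(reset_step(s, 5)) == 5
```

Each component only ever receives "become 0" or "become 1", so it is a two-state reset machine, and the whole chain has size 2.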

We now return our attention to restricted feedback encodings.

Theorem 3.9 For any a and any t there is a t-input machine M which cannot be isomorphically realized with feedback encoding by any submachine of any element of Sa.

PROOF: Let p be a prime larger than both a and t, and consider any p-state machine M with digraph D(i1,...,it,p) as described in Theorem 3.8 and Example 2.4. Such a machine can, in fact, always be constructed such that it ≠ 0. If M is isomorphically realized by a submachine N of an element ΠNi of Sa, then N also has digraph D(i1,...,it,p). Then, since p is prime, Theorem 3.8 assures us that at least one component Ni of the cascade contains a submachine whose digraph is isomorphic to D(N). But this submachine has p > a states, so that Ni has more than a states, and hence size(ΠNi) > a.

Corollary For any a and any t, there is a t-input machine M which cannot be isomorphically realized by any submachine of any element of Sa.

In the classical theory of realization a similar result holds for both homomorphic realization and for simulation [36]. We will see in the next chapter that the corresponding result does not hold for simulation with feedback encoding. For homomorphic realizations, we can prove a result slightly more restricted than Theorem 3.9.

If a cascade M1 oZ M2 contains an n-cycle Cn, i.e., a strong, autonomous, n-state machine, then by the corollary to Lemma 3.1, M1 contains an admissible homomorphic image of Cn. By Theorem 2.11, any admissible homomorphic image of Cn is some Cm, where m|n. Thus,

for some m|n, M1 contains Cm. If Cn is defined by input x and has states <q0,...,q_{n−1}>, consider the string y = Z(q0(1),x)Z(q1(1),x)...Z(q_{m−1}(1),x). It can be shown (see below) that the states, in M2, q0(2), q0(2)y, q0(2)y^2,..., q0(2)y^{n/m−1} are all distinct, and that q0(2)y^{n/m} = q0(2). The string y thus induces a string cycle in M2; we call n/m the string period. If p is a prime which divides n, then either p|m or p|(n/m), so one of M1 or M2 must have at least p states. Thus, size(M1 oZ M2) ≥ p. We would like to be able to continue this process and show that if any cascade M contained Cn, and p|n, then size(M) ≥ p. If p|m and M1 were a cascade, we could indeed continue, since M1 contains a cycle Cm. At some point in this "unfolding", however, we may come across the situation where the machine which is guaranteed to have "at least p states" is the tail component of a cascade, and then we have the problem of decomposing a string cycle of string period t, where p|t, into string cycles, one of whose string periods is a multiple of p. This can always be done.

Lemma 3.5 (Zeigler [36]) Let M be a cascade M1 oZ M2 which contains a string cycle of string period n. Then there is some k|n such that M1 contains a string cycle of string period k and M2 contains a string cycle of string period n/k.

We are now ready to state a complexity result for homomorphic realization with feedback encoding.

Theorem 3.10 For any a there is a machine which cannot be homomorphically realized with feedback encoding by any submachine of any element of Sa.

PROOF: Let M be Cp, where p > a is a prime. If a submachine N of ΠNi ∈ Sa homomorphically realizes M then, by the corollary to Theorem 3.5, D(N) is an admissible homomorphic pre-image of Cp. Then, by Theorem 2.11, for some n, N is Cnp. Applying Lemma 3.5 as outlined above, we can conclude that one of the Ni has at least p states, contradicting the hypothesis that p > a.

We stated that this result is more restricted than Theorem 3.9. In fact, since for autonomous machines the notions of homomorphism and admissible homomorphism coincide, it really says nothing new (Corollary 3.6). The distinction between Theorems 3.9 and 3.10 is that we have no detailed knowledge about admissible homomorphic pre-images of complex structures, such as the digraphs D(i1,...,it,n) of Theorem 3.9. Undoubtedly there is some relationship between these digraphs, their admissible homomorphic pre-images, and the admissible homomorphic images of these, but what this might be is unclear at present.

We stated above that simulation with feedback encoding would be discussed in Chapter IV. It is proper at this point, however, to consider a weakened form of this. A b-slow simulation was defined in Chapter I. We say that M isomorphically b-slow simulates M' with feedback encoding if there are a constant b and maps φ:Q → Q' and h:Q × I' → I^+, where φ is both one-to-one and onto, and for all q,x', ℓ(h(q,x')) = b, such that

φ(q)x' = φ(qh(q,x'))    (3.6)

Of course, if b = 1 this reduces to an isomorphic realization with feedback encoding. What we are doing in an isomorphic b-slow simulation

with feedback encoding is assigning to each arc from state q'1 to state q'2 in D(M') a walk of length b from φ^{-1}(q'1) to φ^{-1}(q'2) in D(M). The next definition may make this clear. For any digraph D with point set V = V(D), the absolute n-th power D^{<n>} of D has V(D^{<n>}) = V(D), and uv ∈ D^{<n>} if and only if there is a walk of length n joining u to v in D. Figure 3.8 shows a digraph and its absolute square. Thus M isomorphically b-slow simulates M' with feedback encoding if and only if D(M') is a subdigraph of D(M)^{<b>}. As an aid to determining whether M' can be isomorphically b-slow simulated by some machine, we can give necessary and sufficient conditions for the existence of a digraph D such that D^{<b>} = D(M'), although these conditions are unwieldy, at best, even for the case b = 2; if D^{<b>} = D' we say that D is an absolute b-th root of D'. Rather than directly prove the characterization theorem, we prove a similar result, whose proof is virtually identical. If D is a digraph with point set V = V(D), the n-th power D^n has V(D^n) = V(D) and uv ∈ D^n if and only if there is a walk of length at most n from u to v in D; if E^n = D then E is an n-th root of D.

Let S1,S2,...,S_{2n−1} be sets, not necessarily disjoint, subject to the constraints that Sn ≠ ∅ and if S_{n−j} = ∅ (respectively S_{n+j} = ∅) then for all k > j, S_{n−k} = ∅ (respectively S_{n+k} = ∅). Let K = Kn(S1,...,S_{2n−1}) be the digraph with V(K) = ∪_{j=1}^{2n−1} Sj and X(K) = [∪_{j=1}^{n−1} Sj × (Sn ∪ ... ∪ S_{n+j})] ∪ [∪_{j=n}^{2n−2} Sj × (S_{j+1} ∪ ... ∪ S_{2n−1})].

Theorem 3.11 Let D be a digraph and let n ≥ 2. There exists a digraph E such that E^n = D if and only if there is a collection of subdigraphs Ki = Kn(S_{i,1},...,S_{i,2n−1}) of D associated with the

Figure 3.8. A digraph and its absolute square.

points ui of D such that
(1) S_{i,n} = {ui}
(2) X(D) = ∪ X(Ki)
(3) ui ∈ S_{j,n−1} if and only if uj ∈ S_{i,n+1}
(4) for any 0 < r < n−1 and s = r+1: uk ∈ S_{i,n−s} if and only if there is a uj ∈ S_{i,n−r} such that uk ∈ S_{j,n−1}; uk ∈ S_{i,n+s} if and only if there is a uj ∈ S_{i,n+r} such that uk ∈ S_{j,n+1}.

PROOF: Let E be a digraph. For each ui ∈ E define S_{i,j} to be γ_E^{j−n}(ui); in particular S_{i,n} = {ui}. In E^n, for each 1 ≤ j ≤ n−1, each point of S_{i,j} is adjacent to each point of S_{i,n},...,S_{i,n+j}, and if n ≤ j ≤ 2n−2, each point of S_{i,j} is adjacent to each point of S_{i,j+1},...,S_{i,2n−1}. Each arc ui uj of E^n is determined by a path ui uk ... uj of length r ≤ n, so that ui ∈ S_{k,n−1} and uj ∈ S_{k,n+r−1}, and ui uj ∈ Kk. Conditions (3) and (4) follow from the properties of γ_E.

Conversely, let D have such a collection. Define a digraph E by setting V(E) = V(D) and ui uj ∈ E just when ui ∈ S_{j,n−1}. We show E^n = D by demonstrating that, for each ui and each 1 ≤ k ≤ 2n−1, k ≠ n, S_{i,k} = γ_E^{k−n}(ui). It will then follow that ui uj ∈ E^n if and only if ui uj ∈ D. For, if ui uj ∈ D, then ui uj is in some Kk; thus for some r and s, ui ∈ S_{k,r} and uj ∈ S_{k,s}, where r ≤ n ≤ s and s ≤ r+n. Then there is a u_{t1} ∈ γ_E^{(r+1)−n}(uk) such that ui u_{t1} ∈ E and u_{t1} ∈ S_{k,r+1}. We then find t2 such that u_{t1} u_{t2} ∈ E and u_{t2} ∈ S_{k,r+2}; we continue until we find tb such that u_{t_{b−1}} u_{tb} ∈ E and u_{tb} ∈ S_{k,n−1}, so that u_{tb} uk ∈ E. Similarly we find a sequence t_{b+2},t_{b+3},...,td,

where d = s−r−1, such that uk u_{t_{b+2}}, u_{t_{b+2}} u_{t_{b+3}},...,u_{td} uj are all arcs of E. This defines a walk of length s−r ≤ n between ui and uj in E; thus ui uj ∈ E^n. If ui uj ∈ E^n then ui and uj are joined by a walk ui u_{t1} ... uj of length at most n. Now, ui ∈ γ_E^{-1}(u_{t1}) = S_{t1,n−1} and uj ∈ γ_E^{k}(u_{t1}) = S_{t1,n+k} for some k ≤ n−1. Since k ≤ n−1, n+k ≤ n+(n−1); thus ui uj ∈ K_{t1}, and so ui uj ∈ D.

We proceed to show that S_{i,k} = γ_E^{k−n}(ui). If k = n−1 then γ_E^{-1}(ui) = {uj | uj ui ∈ E} = {uj | uj ∈ S_{i,n−1}} = S_{i,n−1}. If k = n+1, γ_E(ui) = {uj | ui uj ∈ E} = {uj | ui ∈ S_{j,n−1}}. But ui ∈ S_{j,n−1} just when uj ∈ S_{i,n+1}; thus γ_E(ui) = S_{i,n+1}. Suppose the result holds for n−r ≤ k ≤ n+r, and let s = r+1. Then γ_E^{-s}(ui) = γ_E^{-1}(γ_E^{-r}(ui)) = γ_E^{-1}(S_{i,n−r}) = ∪{γ_E^{-1}(uj) | uj ∈ S_{i,n−r}} = ∪{S_{j,n−1} | uj ∈ S_{i,n−r}}. Thus if up ∈ γ_E^{-1}(uj) and uj ∈ S_{i,n−r}, then, by condition (4), up ∈ S_{i,n−s}, and conversely; hence γ_E^{-s}(ui) = S_{i,n−s}. Similarly we can show that S_{i,n+s} = γ_E^{s}(ui), establishing the theorem.

The results in [8] and [1] follow as corollaries for the case n = 2.

Now, let H = Hn(S1,...,S_{2n−1}) be the digraph with V(H) = ∪_{j=1}^{2n−1} Sj and X(H) = ∪_{j=1}^{n−1} [Sj × S_{j+n}]. The proof of the next result is virtually identical to that of the preceding and will be omitted.

Theorem 3.12 Let D be a digraph and n ≥ 2. There is a digraph E such that E^{<n>} = D if and only if there is a collection Hi = Hn(S_{i,1},...,S_{i,2n−1}) associated with the points ui of D which satisfies conditions (1) - (4) of Theorem 3.11.

Corollary
A digraph D has an absolute square root if and only if there are sets S_i and T_i associated with the points u_i of D such that (1) each point of S_i is adjacent to each point of T_i; (2) for each arc x of D there is some u_i for which x joins S_i and T_i; and (3) u_i ∈ S_j if and only if u_j ∈ T_i.

CHAPTER IV

ALGEBRAIC STRUCTURES AND SIMULATION WITH FEEDBACK ENCODING

An important structure associated with a finite automaton M is its semigroup, S(M). The semigroup, in fact, is quite crucial to the decomposition theory of Krohn and Rhodes [24]. While it would certainly be desirable to be able to associate an algebraic structure with a machine M which gives us the same information about realizations with feedback encoding that S(M) gives for classical realization, S(M) is clearly not the structure we would wish to use. The semigroup S(M) reflects quite accurately the state-behavior of M, while, as we have shown, the theory of realization with feedback encoding depends much less on behavioral properties than on (digraph) structural ones.

Hedetniemi and Fleck [20] defined the S*-semigroup of an automaton and, with Oehmke [33], showed that it had at least some of the properties we would want any algebraic structure to exhibit for us to consider it to be the "natural" system to study in conjunction with realization with feedback encoding. Unfortunately, as we will show, the properties they found do not extend in the way they should. In fact, rather than being able to apply the S*-semigroup to the present study, the converse holds: we are able to apply the notion of simulation with feedback encoding to a conjecture of Hedetniemi and Fleck on S*-semigroups. With the work of Hedetniemi, Fleck, and Oehmke as motivation, we will define other semigroups and study some of their properties vis-à-vis realization with feedback encoding.

We first define a simulation with feedback encoding. Let M = <Q,I,δ> and M' = <Q',I',δ'> be machines. Then M simulates M' with feedback

encoding if there are maps φ:Q onto Q' and h:Q × I' → I⁺ such that for each q ∈ Q and x' ∈ I',

    φ(q)x' = φ(q h(q,x')).                                    (4.1)

If |Q| = |Q'| then we have an isomorphic simulation with feedback encoding, and, if not, a homomorphic simulation with feedback encoding.

Example 4.1
Let M and M' be the machines in Figure 4.1.

Figure 4.1. Simulation with feedback encoding.

Then by Theorem 3.5 it is clear that M does not realize M' with feedback encoding. But M does simulate M' with feedback encoding. One such simulation is defined by the pair (φ,h), where φ(q_i) = r_i, i = 1,2,3, and h has the following table:

         a    b
  q1     0    0
  q2     1    0
  q3     1    0

Theorem 4.1
If M1 simulates M2 with feedback encoding and M2 simulates M3 with feedback encoding, then M1 simulates M3 with feedback encoding.
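Since the machines of Figure 4.1 are given only pictorially, the defining condition (4.1) can be illustrated on a small invented pair of machines. The transition tables, state map, and encoder below are hypothetical, not taken from the figure; the sketch simply checks φ(q)x' = φ(q h(q,x')) for every state and input.

```python
# Hypothetical machines illustrating condition (4.1); all tables are invented.
delta  = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}   # M
deltap = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}   # M'
phi = {0: 0, 1: 1}                          # state map, onto Q'
h = {(0, 'a'): 'b', (0, 'b'): 'a',          # feedback encoder: h(q,.) maps I' into I+
     (1, 'a'): 'a', (1, 'b'): 'b'}

def run(d, q, w):
    """Apply the input string w to state q under transition table d."""
    for c in w:
        q = d[(q, c)]
    return q

def simulates(delta, deltap, phi, h, states, inputsp):
    """Check (4.1): phi(q)x' = phi(q h(q,x')) for all q and x'."""
    return all(deltap[(phi[q], xp)] == phi[run(delta, q, h[(q, xp)])]
               for q in states for xp in inputsp)

ok = simulates(delta, deltap, phi, h, [0, 1], ['a', 'b'])
```

Here the encoder swaps the two inputs in state 0 and leaves them alone in state 1, which is exactly the per-state freedom that feedback gives the input encoder.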

PROOF: Let the simulation of M2 by M1 be defined by maps φ and h, and that of M3 by M2 be defined by maps ψ and g. Let q1 ∈ Q1 and x3 ∈ I3. Suppose that g(φ(q1),x3) = x21 ... x2n; then ψ(φ(q1))x3 = ψ(φ(q1)x21 ... x2n). Let q11 = q1 h(q1,x21), q12 = q11 h(q11,x22), ..., q1n = q1,n-1 h(q1,n-1,x2n). Then φ(q11) = φ(q1 h(q1,x21)) = φ(q1)x21, φ(q12) = φ(q11 h(q11,x22)) = φ(q1 h(q1,x21) h(q11,x22)), etc. Thus, if we define f(q1,x3) to be the string h(q1,x21) h(q11,x22) ... h(q1,n-1,x2n), where q11, q12, ..., q1n and x21 ... x2n are as above, then ψ(φ(q1))x3 = ψ(φ(q1 f(q1,x3))), so that M1 does indeed simulate M3 with feedback encoding.

We now proceed to define and study the algebraic structures mentioned above. Let S and S' be semigroups with zero, where, without confusion, we will use the same symbol, 0, for both the zero of S and the zero of S'. A map Φ:S → S' is a zero-free homomorphism if it satisfies

1. Ker(Φ) = {0};
2. if ab ≠ 0 then Φ(a)Φ(b) = Φ(ab);
3i. if Φ(a)b' ≠ 0 then there is some b ∈ Φ^{-1}(b')            (4.2)
    such that ab ≠ 0;
3ii. if a'Φ(b) ≠ 0 then there is some a ∈ Φ^{-1}(a') such that ab ≠ 0.

We will see examples of zero-free homomorphisms which are not homomorphisms later. For the present, we give some basic properties of zero-free homomorphisms.

Theorem 4.2
a) Let S1, S2 and S3 be semigroups, and let Φ:S1 → S2 and Ψ:S2 → S3 be zero-free homomorphisms. Then
   i. if Φ(a)Φ(b) = 0 then ab = 0;
   ii. ΨΦ:S1 → S3 is a zero-free homomorphism;
   iii. if Φ is one-to-one then Φ is an isomorphism.
b) If Φ:S1 → S2 is a semigroup homomorphism with Ker(Φ) = {0} then Φ is a zero-free homomorphism.

PROOF: a) For (i), suppose that ab ≠ 0. Then Φ(a)Φ(b) = Φ(ab) ≠ 0 since Ker(Φ) = {0}. The result follows by contraposition.

For (ii) we must verify the four properties of (4.2). First, suppose that ΨΦ(s) = 0. Then Ψ(Φ(s)) = 0, so Φ(s) ∈ Ker(Ψ) and hence Φ(s) = 0. But then s ∈ Ker(Φ), so s = 0. Next, suppose that, in S1, st ≠ 0. Then Φ(s)Φ(t) = Φ(st). By (i), Φ(st) ≠ 0, so Ψ(Φ(s))Ψ(Φ(t)) = Ψ(Φ(s)Φ(t)) = Ψ(Φ(st)). Finally, suppose ΨΦ(a)b ≠ 0, where b ∈ S3. Then Ψ(Φ(a))b ≠ 0, so there is a d ∈ S2 such that Ψ(d) = b and Φ(a)d ≠ 0. Then there is an e ∈ Φ^{-1}(d) such that ae ≠ 0; ΨΦ(e) = Ψ(d) = b. This verifies 3i, and the proof for 3ii is virtually identical.

For (iii) we need only check that if ab = 0 then Φ(ab) = Φ(a)Φ(b). In fact, Φ(ab) = Φ(a)Φ(b) = 0; for, since Φ is one-to-one it follows from 3i and 3ii that if Φ(a)Φ(b) ≠ 0 then ab ≠ 0, a contradiction. Thus Φ(a)Φ(b) = 0, and Φ(ab) is certainly zero since ab = 0 ∈ Ker(Φ).

b) Suppose that Φ:S1 → S2 is a homomorphism, and that Ker(Φ) = {0}. Since Φ(ab) = Φ(a)Φ(b) for all a,b ∈ S1, condition 2 of (4.2) certainly holds. If Φ(a)b' ≠ 0 then for any b such that Φ(b) = b', Φ(a)Φ(b) = Φ(ab). Thus ab ≠ 0, since Φ(0) = 0 ≠ Φ(a)Φ(b).

Let M = <Q,I,δ> be a machine. An ordered three-tuple (s,x,t) is a triple for M, or simply a triple, if s,t ∈ Q, x ∈ I⁺, and sx = t. A triple is elementary if the length ℓ(x) = 1; i.e., if x ∈ I. Let S(M) be the set of all triples of M together with a distinguished zero element 0. We introduce an operation on S(M) by the following rules:

1. (s,x,t1)(t2,y,r) = (s,xy,r) if t1 = t2
                    = 0        if t1 ≠ t2;
2. b0 = 0b = 0 for each b ∈ S(M).

Lemma 4.1
For any machine M, S(M) is a semigroup.

PROOF: Clearly the operation is defined for each pair of elements in S(M). Thus, we need only show associativity. Let b1,b2,b3 ∈ S(M). Since 0 commutes with every element of S(M), if any of the b_i are 0 then certainly (b1 b2)b3 = b1(b2 b3) = 0. Suppose that b1 = (q,x,r1), b2 = (r2,y,s2), b3 = (s3,z,t). If r1 = r2 and s2 = s3 then b1 b2 = (q,xy,s2), b2 b3 = (r2,yz,t), and (b1 b2)b3 = (q,(xy)z,t) = (q,x(yz),t) = b1(b2 b3). If r1 ≠ r2 and s2 ≠ s3 then b1 b2 = b2 b3 = 0, so (b1 b2)b3 = b1(b2 b3) = 0. If r1 = r2 and s2 ≠ s3 then b2 b3 = 0 and b1(b2 b3) = 0, while b1 b2 = (q,xy,s2) and so (b1 b2)b3 = 0. Similarly, if r1 ≠ r2 and s2 = s3 then (b1 b2)b3 = b1(b2 b3) = 0. Thus S(M) is a semigroup.

Suppose that M realizes M' with feedback encoding, where the realization is defined by maps φ and h. Although, for each s ∈ Q,
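The multiplication rule for triples can be rendered directly, with None standing in for the zero element; the sketch below (the two-state machine is hypothetical) confirms the associativity argument of Lemma 4.1 by brute force over all short triples.

```python
from itertools import product

def mult(b1, b2):
    """Product of two triples; None plays the role of the zero element 0."""
    if b1 is None or b2 is None:
        return None
    (s, x, t1), (t2, y, r) = b1, b2
    return (s, x + y, r) if t1 == t2 else None

# A small hypothetical machine over Q = {0,1}, I = {'a','b'}.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}

def triples(delta, max_len=3):
    """All triples (s, x, sx) with 1 <= l(x) <= max_len, plus the zero."""
    out = [None]
    for s in {q for q, _ in delta}:
        for n in range(1, max_len + 1):
            for w in product('ab', repeat=n):
                x, t = ''.join(w), s
                for c in x:
                    t = delta[(t, c)]
                out.append((s, x, t))
    return out

S = triples(delta)
assoc = all(mult(mult(a, b), c) == mult(a, mult(b, c))
            for a in S for b in S for c in S)
```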

h_s is a map from I' to I, h_s can be extended to a map ĥ_s from I'⁺ to I⁺ in the following way. If x' ∈ I', ĥ_s(x') = h_s(x'); if x',y' ∈ I'⁺, ĥ_s(x'y') = ĥ_s(x') ĥ_{s ĥ_s(x')}(y'). We must, of course, verify that if w'z' is a different way of writing x'y' then ĥ_s(w'z') = ĥ_s(x'y').

Lemma 4.2
a) For each s ∈ Q, ĥ_s is a function.
b) For each s ∈ Q, x' ∈ I'⁺, φ(s)x' = φ(s ĥ_s(x')).
c) For each s ∈ Q, x ∈ I⁺ there is a unique x' ∈ I'⁺ such that ĥ_s(x') = x.

PROOF: a) We proceed by induction. The result is true for strings of length 1 since h_s is a function, and true for strings of length 2 since, for such a string, there is only one decomposition into smaller strings. Suppose it to be true for strings of length n-1 and let x' = w'_1 ... w'_n, where each w'_i ∈ I'. Choose 1 ≤ i < j ≤ n-1. We must show that ĥ_s(w'_1 ... w'_i) ĥ_{s1}(w'_{i+1} ... w'_n) = ĥ_s(w'_1 ... w'_j) ĥ_{s2}(w'_{j+1} ... w'_n), where s1 = s ĥ_s(w'_1 ... w'_i) and s2 = s1 ĥ_{s1}(w'_{i+1} ... w'_j). By induction we can write ĥ_{s1}(w'_{i+1} ... w'_n) = ĥ_{s1}(w'_{i+1} ... w'_j) ĥ_{s2}(w'_{j+1} ... w'_n). Also by induction, ĥ_s(w'_1 ... w'_i) ĥ_{s1}(w'_{i+1} ... w'_j) = ĥ_s(w'_1 ... w'_j). Thus ĥ_s(w'_1 ... w'_i) ĥ_{s1}(w'_{i+1} ... w'_n) = ĥ_s(w'_1 ... w'_j) ĥ_{s2}(w'_{j+1} ... w'_n), and the two expressions are equal.

b) Let s ∈ Q and x' = w'_1 ... w'_n, where each w'_i ∈ I'. If ℓ(x') = 1 then the result holds since ĥ_s(x') = h_s(x'). Assume the result holds when n ≤ m-1 and let n = m. Choose 1 ≤ i ≤ n-1. Then φ(s)x' = [φ(s)w'_1 ... w'_i]w'_{i+1} ... w'_n = φ(s ĥ_s(w'_1 ... w'_i))w'_{i+1} ... w'_n. If s1 = s ĥ_s(w'_1 ... w'_i) then φ(s)x' = φ(s1)w'_{i+1} ... w'_n = φ(s1 ĥ_{s1}(w'_{i+1} ... w'_n)), which, by part a), is equal to φ(s ĥ_s(x')).
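The extension ĥ_s is just a left-to-right fold that tracks the state of M while encoding symbol by symbol. The sketch below (machine and encoder invented for illustration) also checks the concatenation identity ĥ_s(x'y') = ĥ_s(x') ĥ_{s ĥ_s(x')}(y') of Lemma 4.2a for every split of a sample string.

```python
# Hypothetical machine M and per-state encoder h (here symbol-to-symbol).
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
h = {(0, 'a'): 'b', (0, 'b'): 'a', (1, 'a'): 'a', (1, 'b'): 'b'}

def run(delta, q, w):
    """Apply input string w to state q."""
    for c in w:
        q = delta[(q, c)]
    return q

def h_ext(delta, h, s, xs):
    """Extended encoder: encode xs one symbol at a time, tracking M's state."""
    out, q = '', s
    for xp in xs:
        w = h[(q, xp)]
        out += w
        q = run(delta, q, w)
    return out

# Every way of splitting xs into x'y' yields the same encoded string.
xs = 'abba'
splits_ok = all(
    h_ext(delta, h, 0, xs) ==
    h_ext(delta, h, 0, xs[:i]) +
    h_ext(delta, h, run(delta, 0, h_ext(delta, h, 0, xs[:i])), xs[i:])
    for i in range(1, len(xs)))
```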

c) This follows immediately from part a) and the fact that it is true for h_s by the properties of a Type I feedback encoding.

From this point, we will use the symbol h_s for both h_s and ĥ_s. For a state s ∈ Q and a string x ∈ I⁺ we will write h_s^{-1}(x) for the unique string x' ∈ I'⁺ such that h_s(x') = x.

Theorem 4.3
If M realizes M' with feedback encoding then there is a zero-free homomorphism from S(M) onto S(M').

PROOF: Let the realization be defined by maps φ and h. If b = (s,x,t) is a triple of M, define Φ(b) = (φ(s),h_s^{-1}(x),φ(t)). By Lemma 4.2, φ(s)h_s^{-1}(x) = φ(sx) = φ(t), so that Φ(b) is a triple for M'. Also, define Φ(0) = 0. We will prove that Φ is a zero-free homomorphism from S(M) onto S(M').

First, we show that Φ is onto. Let b' = (s',x',t') be a triple of M'. Choose s ∈ φ^{-1}(s'), and let x = h_s(x'). If t = sx then φ(t) = φ(sx) = φ(s)x' = t', so Φ((s,x,t)) = b'. Clearly Ker(Φ) = {0}.

Suppose that b1 = (s,x,t) and b2 = (t,y,r), so that b1 b2 = (s,xy,r) ≠ 0. Now Φ(b1)Φ(b2) = (φ(s),h_s^{-1}(x),φ(t))(φ(t),h_t^{-1}(y),φ(r)) = (φ(s),h_s^{-1}(x)h_t^{-1}(y),φ(r)). By Lemma 4.2, h_s^{-1}(x)h_t^{-1}(y) = h_s^{-1}(xy), so Φ(b1)Φ(b2) = (φ(s),h_s^{-1}(xy),φ(r)) = Φ(b1 b2).

Suppose that Φ(b)d' ≠ 0, where b = (s,x,t). Then Φ(b) = (φ(s),x',φ(t)), so that for some y' ∈ I'⁺, r' ∈ Q', d' = (φ(t),y',r'). If d = (t,h_t(y'),t h_t(y')), then Φ(d) = d' and bd ≠ 0. Similarly, if d'Φ(b) ≠ 0 then there is a d such that Φ(d) = d' and db ≠ 0.

Corollary
If M isomorphically realizes M' with feedback encoding then S(M) ≅ S(M').

PROOF: This follows from Theorem 4.2a.

We can now give the examples mentioned above of zero-free homomorphisms which are not homomorphisms. Consider the map Φ guaranteed by Theorem 4.3. If s1 and s2 are two states of M for which φ(s1) = φ(s2), then any triples of the form b1 = (r,x,s1) and b2 = (s2,y,t) have product b1 b2 = 0, while Φ(b1)Φ(b2) = (φ(r),x'y',φ(t)) ≠ 0.

We will also prove a converse to this theorem. If b = (s,x,t) is a triple of machine M define i(b) = s and f(b) = t. A state s of M is reachable if there is a triple b such that f(b) = s. Let S be a semigroup with zero. For any s ∈ S define s^ℓ = {t | st ≠ 0} and s^r = {t | ts ≠ 0}. We can then define equivalence relations ≡_ℓ and ≡_r on S: s ≡_ℓ t if s^ℓ = t^ℓ, and s ≡_r t if s^r = t^r.

Lemma 4.3
For any machine M in which every state is reachable the relation ≡_ℓ on S(M) has finite index which is equal to 1 + |Q|. In fact, b1 ≡_ℓ b2 if and only if f(b1) = f(b2), and [0]_ℓ = {0}.

PROOF: Clearly 0^ℓ = ∅, while for any b = (s,x,t) there is some triple (t,y,r), so that b^ℓ ≠ ∅. Thus [0]_ℓ = {0}. Note that, by definition, bd ≠ 0 if and only if f(b) = i(d). Thus if b1 ≡_ℓ b2 then f(b1) = f(b2), and conversely. Therefore under ≡_ℓ there is one equivalence class for each reachable state of Q and one for 0, and so, if every state is reachable, the index of ≡_ℓ is |Q| + 1.
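Lemma 4.3 can be checked computationally on a finite truncation of S(M): grouping the short triples (and the zero) by which elements they multiply to a nonzero product recovers exactly |Q| + 1 classes. The three-state machine below is a hypothetical example; its digraph is strong, so every state is reachable.

```python
from itertools import product

# Hypothetical strongly connected machine: every state is reachable.
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 2, (1, 'b'): 0,
         (2, 'a'): 0, (2, 'b'): 1}

def mult(b1, b2):
    """Triple product; None is the zero element."""
    if b1 is None or b2 is None:
        return None
    (s, x, t1), (t2, y, r) = b1, b2
    return (s, x + y, r) if t1 == t2 else None

def triples(delta, max_len=2):
    """All triples of length <= max_len, plus the zero."""
    out = [None]
    for s in {q for q, _ in delta}:
        for n in range(1, max_len + 1):
            for w in product('ab', repeat=n):
                x, t = ''.join(w), s
                for c in x:
                    t = delta[(t, c)]
                out.append((s, x, t))
    return out

S = triples(delta)
# b1 =l b2 iff {d | b1 d != 0} = {d | b2 d != 0}; key each element by the
# boolean vector of nonzero products against the whole truncated set.
classes = {tuple(mult(b, d) is not None for d in S) for b in S}
```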

Lemma 4.4
For any machine M in which every state is reachable the relation ≡_r on S(M) has finite index, equal to 1 + |Q|; [0]_r = {0}, and b1 ≡_r b2 if and only if i(b1) = i(b2).

PROOF: The proof is similar to that for Lemma 4.3. In this case we need the reachability criterion to guarantee that [0]_r = {0}.

Theorem 4.4
Let M and M' be machines in which each state is reachable. If |I| = |I'| and there is a zero-free homomorphism Φ:S(M) onto S(M'), then M realizes M' with feedback encoding.

PROOF: We first show that b1 ≡_ℓ b2 implies that Φ(b1) ≡_ℓ Φ(b2). Suppose that b1 ≡_ℓ b2 and that Φ(b1)d' ≠ 0, so that d' ∈ Φ(b1)^ℓ. Then there is a d such that Φ(d) = d' and b1 d ≠ 0 by (4.2). Since b1 ≡_ℓ b2, b2 d ≠ 0. Then Φ(b2)Φ(d) = Φ(b2)d' = Φ(b2 d) ≠ 0 since Ker(Φ) = {0}. Thus d' ∈ Φ(b2)^ℓ. Similarly, Φ(b2)^ℓ ⊆ Φ(b1)^ℓ, so Φ(b1)^ℓ = Φ(b2)^ℓ and hence Φ(b1) ≡_ℓ Φ(b2). By a similar argument, b1 ≡_r b2 implies Φ(b1) ≡_r Φ(b2).

It then follows from Lemmas 4.3 and 4.4 that f(b1) = f(b2) implies that f(Φ(b1)) = f(Φ(b2)) and i(b1) = i(b2) implies that i(Φ(b1)) = i(Φ(b2)). Therefore, Φ induces maps φ_i:Q → Q' and φ_f:Q → Q' such that Φ((s,x,t)) = (φ_i(s),x',φ_f(t)). Now, suppose that b1 = (r,x,s) and b2 = (s,y,t), so that b1 b2 ≠ 0. Then Φ(b1) = (φ_i(r),x',φ_f(s)), Φ(b2) = (φ_i(s),y',φ_f(t)). But we know that Φ(b1)Φ(b2) ≠ 0; thus, for each reachable state s, φ_i(s) = φ_f(s). Since every state is reachable, φ_i = φ_f = φ, a map from Q to Q'. Furthermore, φ is onto, since Φ is.

Let x ∈ I⁺, x' ∈ I', be such that for some triple b = (s,x,t), Φ(b) = (φ(s),x',φ(t)). If we write x = yz, where neither y nor z is empty, then b = (s,y,sy)(sy,z,t). Thus Φ((s,y,sy))Φ((sy,z,t)) = Φ(b) = (φ(s),x',φ(t)). But this is an impossible situation: the second components of Φ((s,y,sy)) and Φ((sy,z,t)) are nonempty strings, say y' and z', so that x' = y'z', a contradiction of the hypothesis that x' ∈ I'. Thus, every pre-image of an elementary triple of M' is an elementary triple of M.

We now show that if b' = (s',x',t') and φ(s) = s', then there is a triple b with i(b) = s such that Φ(b) = b'. Since s is reachable, there is some triple d = (r,y,s). Now, Φ(d) = (φ(r),y',φ(s)), so Φ(d)b' ≠ 0. Thus, since Φ is a zero-free homomorphism, there is a triple b such that db ≠ 0 and Φ(b) = b'. But db ≠ 0 implies i(b) = f(d) = s.

Now let |I| = |I'| = n. Choose any s' ∈ Q' and any s such that φ(s) = s'. If the n elementary triples b'_j with i(b'_j) = s' are b'_j = (s',x'_j,t'_j), then for each b'_j there is a triple b_j with Φ(b_j) = b'_j and i(b_j) = s. Furthermore, as we have shown, each such b_j is an elementary triple. Also, since |I| = |I'| = n, there are exactly n elementary triples b with i(b) = s. Therefore, for each s ∈ Q and each x' ∈ I' there is a unique x ∈ I such that Φ((s,x,sx)) = (φ(s),x',φ(sx)); we can then define a map h:Q × I' → I by setting h(s,x') = x, where Φ((s,x,sx)) = (φ(s),x',φ(sx)). The pair (φ,h) defines a realization with feedback encoding.

Corollary
If M and M' are machines in which each state is reachable, with |I| = |I'|, and if S(M') ≅ S(M), then M isomorphically realizes M' with feedback encoding.

We can get a result parallel to Theorem 4.3 for the case in which one machine simulates another. Let S and S' be semigroups with zero; we say that S' zero-free divides S if there is a subsemigroup S1 of S which contains the zero of S such that there is a zero-free homomorphism from S1 onto S'.

Theorem 4.5
Let M and M' be machines such that M simulates M' with feedback encoding, where the simulation is defined by maps φ and h. If the extended map h_s:I'⁺ → I⁺ is one-to-one for each s ∈ Q, then S(M') zero-free divides S(M).

Note: A sufficient condition for the extended map to be one-to-one will be given as Theorem 4.15.

PROOF: For each triple b' = (s',x',t') define Ψ(b') = {(s,h_s(x'),s h_s(x')) | φ(s) = s'}, and define Ψ(0) = 0. Let S1 = Ψ(S(M')). We first show that S1 is a subsemigroup of S(M). What we must verify is that if b1 b2 = b3 in S(M) and if b1,b2 ∈ S1, then b3 ∈ S1. This is immediate if any of b1,b2,b3 are 0. Suppose that b1 = (s,x,t) ∈ Ψ((s',x',t')) and b2 = (t,y,r) ∈ Ψ((t',y',r')). Then (s',x',t')(t',y',r') = (s',x'y',r'). Since x = h_s(x') and y = h_t(y'), xy = h_s(x'y') and s(xy) = r. Thus b1 b2 = (s,xy,r) ∈ Ψ((s',x'y',r')) ⊆ S1.

Now, for each b ∈ S1, define Φ(b) = Ψ^{-1}(b); Φ is properly a function since if b = (s,x,t) is in Ψ((s',x'_1,t'_1)) and also in Ψ((s',x'_2,t'_2)) then x'_1 = x'_2, since h_s is one-to-one, so that t'_1 = φ(sx) = t'_2.

The proof that Φ is a zero-free homomorphism is now identical to the proof of Theorem 4.3, once we note that, by the assumption that each

h_s is one-to-one, for each s ∈ Q, x ∈ I⁺ such that (s,x,sx) ∈ S1 there is a unique string x' = h_s^{-1}(x) ∈ I'⁺ such that h_s(x') = x; this is, of course, the analogue of Lemma 4.2c.

Let M and M' be two machines, and h:Q × I' → I⁺; we say that a subset Q1 ⊆ Q is closed under h if, for each s ∈ Q1, x' ∈ I', s h_s(x') ∈ Q1. If there is also a map φ:Q1 onto Q' which satisfies (4.1), then we will say that M' divides M with feedback encoding; we call Q1 the core of the division.

Corollary
If M' divides M with feedback encoding in such a way that the map h_s:I'⁺ → I⁺ is one-to-one for each s, then S(M') zero-free divides S(M).

It is the corollary to Theorem 4.5, rather than the theorem itself, which has a converse. To prove it, we need one additional concept from the theory of semigroups. If S is a semigroup and s,t ∈ S then s divides t, written s|t, if there is an r such that either sr = t or rs = t; t is prime if there is no s which divides it. Note that for any machine M the only primes in S(M) are the elementary triples.

Theorem 4.6
Let M and M' be machines such that every state of M' is reachable and S(M') zero-free divides S(M). Then M' divides M with feedback encoding, and the extended map h_s:I'⁺ → I⁺ is one-to-one for each s.

PROOF: Let S ⊆ S(M) be the subsemigroup for which there is a zero-free homomorphism Φ:S onto S(M'). We first show that if Φ(b) is prime, then b is. For suppose b is not prime; since 0 is not prime and Φ(0) = 0, b ≠ 0. Then b = b1 b2, so that Φ(b) = Φ(b1)Φ(b2) is also not prime.

Next, we show that {i(b) | b ∈ S} ⊆ {f(b) | b ∈ S}. Let b = (t,x,s) and Φ(b) = (t',x',s'). Since every state of M' is reachable, there is a d' = (r',y',t'). Since d'Φ(b) ≠ 0 we can find d such that Φ(d) = d' and db ≠ 0. Thus f(d) = i(b), and {i(b) | b ∈ S} ⊆ {f(b) | b ∈ S}. The other direction follows similarly, except that there we rely not on the fact that every state of M' is reachable, but rather only on the fact that M' is a (complete) machine.

Let Q1 = {f(b) | b ∈ S}. Since for each s ∈ Q1 there are triples b1 and b2 such that i(b1) = s = f(b2), we have all the machinery needed to prove the natural analogue of Lemmas 4.3 and 4.4. The relations ≡_r and ≡_ℓ have finite index, equal to 1 + |Q1|; b1 ≡_ℓ b2 if and only if f(b1) = f(b2), b1 ≡_r b2 if and only if i(b1) = i(b2), and [0]_r = [0]_ℓ = {0}.

Exactly as in Theorem 4.4, we can now show that b1 ≡_r b2 implies Φ(b1) ≡_r Φ(b2) and b1 ≡_ℓ b2 implies Φ(b1) ≡_ℓ Φ(b2), so that there is a map φ:Q1 onto Q' such that Φ((s,x,t)) = (φ(s),x',φ(t)).

Let |I'| = n and suppose that for some s' ∈ Q' the elementary triples b'_j for which i(b'_j) = s' are b'_j = (s',x'_j,t'_j). Choose any s ∈ Q1 such that φ(s) = s'. We show that for each j there is a triple b_j with i(b_j) = s such that Φ(b_j) = b'_j. For, if d is any triple with f(d) = s then Φ(d)b'_j ≠ 0, so there is a b_j ∈ Φ^{-1}(b'_j) for which d b_j ≠ 0; certainly i(b_j) = s. Then, as we have shown, each such b_j is prime in S.

Now, for each s ∈ Q1, x' ∈ I', define h_s(x') to be that string x for which Φ((s,x,sx)) = (φ(s),x',φ(sx)). Then φ and h satisfy (4.1), so that M' divides M with feedback encoding, where the core of the division is Q1. Since Φ is a function, each h_s:I' → I⁺ must be one-to-one.

We now show that the map h which we have defined extends so that for each s ∈ Q1, h_s:I'⁺ → I⁺ is one-to-one. We have already shown that h_s:I' → I⁺ is one-to-one. That argument, which shows that the preimage of a prime is prime, can extend to show that for each n, h_s:I'^n → I⁺ is one-to-one, but this is not sufficient. What we need to prove, as in Lemma 4.2, is that for each x ∈ I⁺ such that (s,x,sx) ∈ S, there is a unique string x' ∈ I'⁺ such that h_s(x') = x. As noted, this is true if (s,x,sx) is prime.

We show that each triple b ∈ S which is not prime has a unique prime decomposition; i.e., if b = b1 ... bn = d1 ... dm, where each bi and dj is prime, then n = m and bi = di for i = 1,...,n. This is certainly true in S(M'). If b' = (s',x',t') is prime then ℓ(x') = 1. Thus if the b'_i = (p'_i,x'_i,q'_i) and the d'_i = (u'_i,y'_i,v'_i) are primes and (s',x',t') = b'_1 ... b'_n = d'_1 ... d'_m, then n = m = ℓ(x'). Then x' = x'_1 ... x'_n = y'_1 ... y'_n, so that for i = 1,...,n, x'_i = y'_i. Since b'_1 ... b'_n = d'_1 ... d'_n, we have p'_1 = u'_1 = s', q'_1 = p'_1 x'_1 = v'_1 = u'_2 = p'_2, etc.

With this at our disposal, suppose that b = b1 ... bn = d1 ... dm in S. Then Φ(b) = Φ(b1) ... Φ(bn) = Φ(d1) ... Φ(dm). Since each bi and di is prime, each Φ(bi) and Φ(dj) is, so that n = m and, for i = 1,...,n, Φ(bi) = Φ(di). Now i(d1) = i(b1) = i(b), and since h_s is one-to-one on I', this means that d1 = b1. It then follows that i(d2) = i(b2), so that d2 = b2, etc. Hence, each element of S has a unique prime decomposition.

Now, for s ∈ Q1 and x' = x'_1 ... x'_n ∈ I'⁺ define h_s(x') = h(s,x'_1) h(s_1,x'_2) ... h(s_{n-1},x'_n), where s_1 = s h(s,x'_1), s_2 = s_1 h(s_1,x'_2), etc. If h_s(x'_1 ... x'_n) = h_s(y'_1 ... y'_m) = x, then (s,x,sx) = (s,h_s(x'_1),s h_s(x'_1)) b1 = (s,h_s(y'_1),s h_s(y'_1)) b2 for appropriate b1, b2.

Since (s,h_s(x'_1),s h_s(x'_1)) and (s,h_s(y'_1),s h_s(y'_1)) are primes, they must be equal, since (s,x,sx) has a unique prime decomposition, and hence x'_1 = y'_1. It follows inductively that x'_2 = y'_2, ..., and finally that n = m and x'_1 ... x'_n = y'_1 ... y'_n, so that h_s:I'⁺ → I⁺ is one-to-one.

We should note that it is possible for M to simulate M' with feedback encoding without having each map h_s:I'⁺ → I⁺ be one-to-one, even if every state in each machine is reachable.

Example 4.2
Let M and M' be as in Figure 4.2. Let φ(p_i) = s_i and suppose that h has the following table:

  h      α    β
  p1     a    ab
  p2     a    bc
  p3     c    b

Figure 4.2. Another simulation with feedback encoding.

It is easy to verify that φ and h define a simulation with feedback encoding, and, in fact, that h_{p_i}:I' → I⁺ is one-to-one for each i. But h_{p_1}(αβ) = a h_{p_2}(β) = abc = h_{p_1}(β) h_{p_1 h_{p_1}(β)}(α), so that h_{p_1}(αβ) = h_{p_1}(βα).

We have thus shown that the semigroups S(M) reflect quite accurately the relationship of M to other machines as far as realization or simulation with feedback encoding is concerned. One important feature of the classical semigroup of a machine is that for every finite semigroup S there is a machine M such that S ≅ S(M). A similar result cannot be possible for the semigroups S(M); for example, it is clearly impossible for S(M) to ever be a group. We can, however, characterize those infinite semigroups S such that, for some machine, S ≅ S(M). To do this, we need some preliminary definitions.

Let P be a partially ordered set under the relation ≤. If a,b ∈ P we say that b covers a if a < b and there is no c ∈ P such that a < c < b; the cover of a, cov(a), is the set of all b which cover a. We will call a partially ordered set P an n-tree if it has the following properties:

i. P has a least element;
ii. for each a ∈ P, |cov(a)| = n;                              (4.3)
iii. if a ≠ b then cov(a) ∩ cov(b) = ∅.

For any semigroup S we can define a relation ≤ by defining a < b if there is an element c such that ac = b, and a ≤ b if either a = b or a < b. This relation may, but need not, be a partial order, since it is not necessarily antisymmetric.
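On the triples of a machine with n inputs, the divisibility order a < b iff ac = b makes the covers of (s,x,sx) exactly its n one-symbol extensions, which is the cover set claimed in the proof of Theorem 4.7 below. The sketch takes that description as given (it is not derived from first principles here) and checks properties ii and iii of (4.3) on a hypothetical two-input machine.

```python
# Hypothetical two-input machine, so n = 2 in the n-tree conditions (4.3).
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
INPUTS = 'ab'

def ext(s, x):
    """The state sx reached from s under the string x."""
    t = s
    for c in x:
        t = delta[(t, c)]
    return t

def cov(triple):
    """Covers of (s,x,sx) under a < b iff ac = b: the one-symbol extensions."""
    s, x, _ = triple
    return {(s, x + c, ext(s, x + c)) for c in INPUTS}

b = (0, 'a', ext(0, 'a'))
covers = cov(b)
```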

Theorem 4.7
Let S be an infinite semigroup. Then there is a machine M with |I| = n such that S ≅ S(M) if and only if

i. S has a zero, 0;
ii. the relations ≡_ℓ and ≡_r have the same finite index, and [0]_r = [0]_ℓ = {0};
iii. if st ≠ 0 then st ∈ [s]_r ∩ [t]_ℓ;
iv. each block of the equivalence relation ≡_r, except [0]_r, is a disjoint union of n n-trees under the relation ≤.

PROOF: We have already shown the necessity of each condition except iv. Notice first that ≤ must be antisymmetric, for if (s,x,t) < (u,y,v) then s = u and, more important, ℓ(y) > ℓ(x). For each state s ∈ Q we can find exactly n primes, the elementary triples b_i = (s,x_i,sx_i), i = 1,...,n. For each triple b_i, cov(b_i) = {(s,x_i x_j,sx_i x_j) | x_j ∈ I}, etc. Thus, under ≤, each elementary triple is the least element of an n-tree, and each block B of ≡_r is the union of the n n-trees generated by the elementary triples in B.

To prove sufficiency, let m+1 be the index of ≡_r. We will define a machine with state set Q = {q_1,...,q_m} and input set I = {x_1,...,x_n} such that S(M) ≅ S. Let the blocks of ≡_r be B_0 = [0]_r, B_1,...,B_m, and let the n-trees of B_i be T_{i,1},...,T_{i,n}. Let the blocks of ≡_ℓ be D_0 = [0]_ℓ, D_1,...,D_m. There is a natural correspondence between the B_i and the D_j: if ts ≠ 0 for some t ∈ D_j and s ∈ B_i, then ts ≠ 0 for every such pair, and each B_i is associated in this way with exactly one D_j. If necessary, renumber the D_j so that B_i is associated in this manner with D_i.

With each s ∈ S we will associate a triple. If s ∈ B_i ∩ D_j then s will be associated with (q_i,-,q_j); the second element of the triple is yet to be assigned. For each block B_i, let s_1,...,s_n be the least elements of the n-trees of B_i, and assign as the second element of the triple associated with s_j the symbol x_j. We proceed inductively; suppose that the string y has been assigned to element t, and let {t_1,...,t_n} = cov(t). Then assign as the second element of the triple for t_j the string yx_j. This association between triples and elements of S is an isomorphism between S and S(M), where M is the machine whose elementary triples are exactly the mn least elements of the n-trees of S.

Before proceeding we mention one interesting auxiliary property of the semigroups S(M). Suppose that the machine M is actually a finite-state acceptor; that is, there is a starting state q_0 and a set of final states F ⊆ Q, so that we can associate with M the event E(M) ⊆ I⁺ of strings x for which q_0 x ∈ F.

97 We now discuss briefly a new finite semigroup for a machine M. Let H(M) be the set of ordered pairs (s,t) e Q x Q such that, for some x, (s,x,t) C S(M), together with a zero element, 0, with the same multiplication rule us in S(M); i.e., (s,t)(u,v) (s,v) if t u, and (st)xuv) - 0 if t ~ u. We call H(M) the reaohability emigroup of M. Theorem 4.9 For any machine M, H(M) is a homomorphic image of S(M). PROOF: Let 0:S(M) + H(M) be defined by s((s,x,t)) = (s,t) and 0(0) = 0. Let bl,b2 be elements of S(M); if either is zero then blb2 = 0, and (bl) (b2) = 0 = C(blb2). Suppose that neither b1 nor b2.is zero. If their product is zero then f(bl) i i(b2), so that certainly 4(b1)'(b2) = 0- (b1lb2). Finally, if bl = (s,x,t), and b2 = (t,y,r), then 4(b) = (s,t),,(b2) (t,v), and',(blb-) = ((s,xy,r)) = (s,r) C(bl)O(b2). Thus 0 is a homomorphism. Note, incidentally, that since Ker()) = {0}, is also a zero-free homomorphism by Theorem 4.2b. Because the map i, as defined in the theorem, is so obviously important in studying the H-semigroups we find it convenient to also consider it as an operator a on S-semigroups, so that for any M, 3(S(M)) = 1(M). Not surprisingly, zero-free homomorphisms between S-semigroups tend to induce zero-free homomorphisms between the corresponding H-semigroups. We state this in the context of Theorem 4.3. Theorem 4.10 If M realizes M' with feedback encoding then there is a zero-free homomorphism from H(M) onto H(M').

Because the map Φ, as defined in the theorem, is so obviously important in studying the H-semigroups, we find it convenient to also consider it as an operator ∂ on S-semigroups, so that for any M, ∂(S(M)) = H(M). Not surprisingly, zero-free homomorphisms between S-semigroups tend to induce zero-free homomorphisms between the corresponding H-semigroups. We state this in the context of Theorem 4.3.

Theorem 4.10
If M realizes M' with feedback encoding then there is a zero-free homomorphism from H(M) onto H(M').

PROOF: Let Φ:S(M) → S(M') be the zero-free homomorphism guaranteed by Theorem 4.3. Note that we showed in the proof of Theorem 4.3, although we did not have the notation to express it, that if ∂(b1) = ∂(b2) then ∂(Φ(b1)) = ∂(Φ(b2)). Thus, for each (s,t) ∈ H(M) choose b ∈ S(M) such that ∂(b) = (s,t) and define Ψ((s,t)) = ∂Φ(b); by the preceding discussion the value of Ψ((s,t)) is independent of the choice of b ∈ ∂^{-1}((s,t)).

Certainly Ker(Ψ) = {0}. Suppose that d1 = (s,t) and d2 = (t,r) are elements of H(M); for any choice of b_i ∈ ∂^{-1}(d_i), b1 b2 ≠ 0 and ∂(b1 b2) = d1 d2. Since Φ is zero-free, Φ(b1)Φ(b2) = Φ(b1 b2), and so ∂Φ(b1) ∂Φ(b2) = ∂Φ(b1 b2) ≠ 0. But ∂Φ(b1 b2) is just Ψ(d1 d2), and so Ψ(d1)Ψ(d2) = ∂Φ(b1) ∂Φ(b2) = Ψ(d1 d2).

If Ψ(d1)d' ≠ 0, choose b1 ∈ ∂^{-1}(d1) and b' ∈ ∂^{-1}(d') in S(M'); then Φ(b1)b' ≠ 0. Since Φ is zero-free, there is a b2 ∈ Φ^{-1}(b') such that b1 b2 ≠ 0. Let d2 = ∂(b2); then d1 d2 ≠ 0 and, by definition, Ψ(d2) = d'. Similarly, if d'Ψ(d2) ≠ 0 then there is some d1 ∈ Ψ^{-1}(d') such that d1 d2 ≠ 0. Thus, Ψ is a zero-free homomorphism.

The converse of this theorem is not true. As an example, let M1 and M2 be two machines whose digraphs have the same transitive closure. Then H(M1) ≅ H(M2), but it is not necessarily true that M1 isomorphically realizes M2 with feedback encoding. For an example, see Figure 4.2; since the two digraphs D(M) and D(M') are strong, they have the same transitive closure. Thus the progressive relationships we found between the S-semigroups and the concepts of realization and simulation with feedback encoding do not hold for the H-semigroups. In fact, Theorem 4.10 is a special case of

Theorem 4.11
If M simulates M' with feedback encoding then there is a zero-free homomorphism between H(M) and H(M').

PROOF: Let the simulation be defined by maps φ and h. Define Φ((s,t)) to be (φ(s),φ(t)), and Φ(0) = 0. If we have (s,t),(t,r) ∈ H(M) then Φ((s,t))Φ((t,r)) = (φ(s),φ(t))(φ(t),φ(r)) = (φ(s),φ(r)) = Φ((s,t)(t,r)). The other properties of zero-free homomorphisms follow as in previous proofs, and will be omitted.

As with the S-semigroups we can define the relations ≡_r and ≡_ℓ and the functions f and i. We also need the notion of an elementary element of H(M); (s,t) is elementary if the arc st is in D(M).

Theorem 4.12
If M and M' are machines in which every state is reachable, and if H(M') is a zero-free homomorphic image of H(M), then M simulates M' with feedback encoding.

PROOF: Let Φ:H(M) → H(M') be a zero-free homomorphism. We must first show that Φ induces a map from Q to Q' as in Theorem 4.4. Since the relations ≡_ℓ and ≡_r on S-semigroups were shown to be directly dependent on the functions i and f, most of the results we obtained about them carry through directly for H-semigroups, and the proofs will be omitted. In particular, f(b1) = f(b2) implies f(Φ(b1)) = f(Φ(b2)), and i(b1) = i(b2) implies i(Φ(b1)) = i(Φ(b2)), so that Φ induces maps φ_i and φ_f, and Φ((s,t)) = (φ_i(s),φ_f(t)). If b1 = (s,t) and b2 = (t,r)

are elements of H(M), then Φ(b1)Φ(b2) ≠ 0, so that φ_i(t) = φ_f(t). Since every state of M is reachable, φ_i and φ_f agree on each t ∈ Q; call the function thus defined φ.

Let (s',t') be elementary in H(M'), and let x' be an input such that s'x' = t'. We claim, first of all, that for each s ∈ φ^{-1}(s') there is a t ∈ φ^{-1}(t') such that (s,t) ∈ H(M). To see this, we first show that if φ(s) = s' then there are elements b1 and b2 in H(M) such that s = i(b1) = f(b2). Certainly, there must either be a b1 for which i(b1) = s or a b2 for which f(b2) = s. Suppose the former. Since s' is reachable, there is an element (r',s') ∈ H(M'). Then (r',s')Φ(b1) ≠ 0, so there is some (r,s) ∈ H(M). A similar argument completes the demonstration. Now, to show that there is an element (s,t), where φ(t) = t', we need only choose some element (r,s) and note that since Φ((r,s))(s',t') ≠ 0 there is an element (s,t) such that Φ((s,t)) = (s',t'), by the properties of zero-free homomorphisms. But as we have shown, if Φ((s,t)) = (s',t') then φ(t) = t'.

Thus if s'x' = t', for each s such that φ(s) = s' there is a state t reachable from s such that φ(t) = t'. Choose any string x ∈ I⁺ for which sx = t and define h_s(x') = x. Then the pair (φ,h) defines a simulation with feedback encoding. Note, however, that h_s may not even be one-to-one on symbols.

We now turn our attention to the third and final algebraic structure which we will study. We will first discuss its relationship to realization with feedback encoding, and some of its shortcomings in that regard, and then go on to show how the concept of simulation with feedback encoding applies to a conjecture about these semigroups.

For any machine M, the S*-semigroup S*(M) has for its elements all finite sets of triples [we will write 0 for the empty set of triples], where if U = {b_i | i = 1,...,n} and V = {d_j | j = 1,...,m} are elements of S*(M) then UV = {b_i d_j | i = 1,...,n; j = 1,...,m}. We list some of the properties of S*(M) in the next theorem; for proofs see [33] and [20]. A state s is terminal if for all x ∈ I, sx = s.

Theorem 4.13
a) If M isomorphically realizes M' with feedback encoding then S*(M) ≅ S*(M').
b) If M and M' are machines in which each state is reachable and there is at least one nonterminal state, and if S*(M) ≅ S*(M'), then M isomorphically realizes M' with feedback encoding.

These results make it seem reasonable to expect that we can find generalizations in the sense of Theorems 4.3 - 4.6. Suppose, for example, we had M realizing M' homomorphically with feedback encoding, the realization being defined by maps φ and h. It would be natural, as would in fact be done in proving Theorem 4.13a, to define, for each triple b = (s,x,t) of M,

    Φ*(b) = (φ(s),h_s^{-1}(x),φ(t))

(Φ* is just the map Φ:S(M) → S(M') of Theorem 4.3.) We would then define Φ({b_i | i = 1,...,n}) to be {Φ*(b_i)}. Let S = {b_j} and T = {d_k}, S,T ∈ S*(M), and suppose ST ≠ 0. For each pair b_j and d_k for which b_j d_k ≠ 0, it will follow that Φ*(b_j d_k) = Φ*(b_j)Φ*(d_k), so that Φ(ST) ⊆ Φ(S)Φ(T). But, should there be a pair b_j,d_k such that b_j d_k = 0 but Φ*(b_j)Φ*(d_k) ≠ 0, then Φ(ST) ⊊ Φ(S)Φ(T), so that Φ would not even be a zero-free homomorphism. Such a pair b_j,d_k would certainly exist unless |Q| = |Q'|.

The preceding discussion, of course, only shows that one particular approach to the problem is infeasible. Recalling the proofs of previous theorems, as well as the proof of Theorem 4.13b, which provided much
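The product in S*(M) multiplies sets of triples elementwise. In the sketch below zero products are discarded, so the empty set plays the role of the zero 0; that reading is an assumption, though it is consistent with the text's identification of 0 with the empty set of triples. The sample triples are taken from an implicit hypothetical machine.

```python
def mult(b1, b2):
    """Triple product; None is the zero."""
    (s, x, t1), (t2, y, r) = b1, b2
    return (s, x + y, r) if t1 == t2 else None

def star_mult(U, V):
    """UV = {b d | b in U, d in V}, zero products dropped; frozenset() is 0.
    (Assumption: zero products are discarded, per the empty-set zero.)"""
    return frozenset(p for p in (mult(b, d) for b in U for d in V)
                     if p is not None)

# Sample elements of S*(M) for a hypothetical machine.
U = frozenset({(0, 'a', 1), (0, 'b', 0)})
V = frozenset({(1, 'b', 1)})
```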

of the motivation for the work in this chapter, where we showed that a zero-free homomorphism between two S-semigroups or two H-semigroups induces a map between the state-sets of the two machines, we can show

Theorem 4.14
Let M and M' be two machines where M is strong. If there is a zero-free homomorphism Φ:S*(M) → S*(M') and a map φ:Q → Q' such that for each singleton b = {(s,x,t)} ∈ S*(M), Φ(b) = {(φ(s), x', φ(t))}, then |Q| ≤ |Q'|.

PROOF: Suppose not, and let φ(q_1) = φ(q_2) = q'. Since q_1 is reachable, there is some b_1 = {(r,x,q_1)} ∈ S*(M). For y,z ∈ I⁺, let b_2 = {(q_1,y,q_1y), (q_2,z,q_2z)}. Since b_1b_2 ≠ 0,

Φ(b_1)Φ(b_2) = {(φ(r), x'y', φ(q_1y)), (φ(r), x'z', φ(q_2z))}
Φ(b_1b_2) = {(φ(r), (xy)', φ(q_1y))}.

Since Φ is a homomorphism, these two sets must be equal. Among other things, this would imply that if q_3 is reachable from q_1 and q_4 is reachable from q_2 then φ(q_3) = φ(q_4). Since M is strong, every state is reachable from every other, so φ would have to be constant, contradicting the assumption that Φ is zero-free. Hence φ must be one-to-one.

The hypothesis that M be strong is probably more than is needed to reach the conclusion of the theorem, but the theorem certainly does show that the S*-semigroups are not especially useful in a study of realization, with or without feedback encoding. On the other hand, Hedetniemi and Fleck [20] made the following conjecture, which we can use the concept of simulation with feedback encoding to settle in almost all cases:

Let M and M' be any two strong machines with the same number of states. Then S*(M) is isomorphic to a subsemigroup of S*(M'), and S*(M') is isomorphic to a subsemigroup of S*(M).

We first prove a crucial result, to which we alluded earlier. It gives the sufficient condition which we mentioned in connection with Theorem 4.5, and it also shows, as we promised in Chapter III, that simulation with feedback encoding is an essentially uninteresting concept.

Theorem 4.15
Let M be a strong machine with at least two inputs, and let M' be any machine with |Q'| ≤ |Q|. Then M' divides M with feedback encoding, and the feedback encoding can be chosen in such a way that the maps h_s:I'⁺ → I⁺ are one-to-one.

PROOF: Since M is strong, for any s,t ∈ Q there is a string w_st with ℓ(w_st) ≥ 1 such that s·w_st = t. Let Q̄ be any subset of Q with cardinality |Q'|, and let φ:Q̄ → Q', onto, be any map. Let I' = {x'_1,...,x'_a} and let η and η̄ be two elements of I. For each s ∈ Q̄ and x'_j ∈ I', choose t ∈ φ⁻¹(φ(s)x'_j) and define h_s(x'_j) = η^j η̄ w_qt, where q = s·η^j η̄. Then φ(s·h_s(x'_j)) = φ(s)x'_j, so that M' divides M with feedback encoding; if |Q| = |Q'| then M simulates M' with feedback encoding.

Each map h_s is certainly one-to-one on symbols. We show that the extended maps are also one-to-one. Let j_1...j_m and k_1...k_m' be two strings from (I')⁺, and let h_s(j_1...j_m) = h_s(k_1...k_m') = w. Then, by definition, there are states r,t ∈ Q̄ such that

w = h_s(j_1)h_r(j_2...j_m) = h_s(k_1)h_t(k_2...k_m').

But there is a unique positive integer n such that the prefix of w having length n+1 is the string η^n η̄. This uniquely determines

j_1 = k_1 = x'_n, so that r = t = s·h_s(x'_n). Then h_r(j_2...j_m) = h_r(k_2...k_m'), and we can repeat the above process until we arrive at m = m' and j_p = k_p, p = 1,2,...,m.

Corollary
If M is strong, with at least two inputs, and M' has |Q'| ≤ |Q|, then S(M') zero-free divides S(M).

Corollary
If M is strong, and M' has |Q'| ≤ |Q|, then M' divides M with feedback encoding.

PROOF: We need only cover the case in which |I| = 1. As in the theorem, we choose any Q̄ ⊆ Q with |Q̄| = |Q'| and any map φ:Q̄ → Q', onto. For s ∈ Q̄ and x'_j ∈ I', choose t ∈ φ⁻¹(φ(s)x'_j), and define h_s(x'_j) = w_st. Then φ(s)x'_j = φ(s·h_s(x'_j)), so M' divides M with feedback encoding.

Theorem 4.16
Let M be a strong automaton with n states and at least two inputs, and let M' be an automaton with n' ≤ n states. Then S*(M') is isomorphic to a subsemigroup of S*(M).

PROOF: We use the maps φ and h of Theorem 4.15 to define the isomorphism. Let b' = {(s',x',t')} be a singleton in S*(M') and set g(b') = {(s, h_s(x'), t) | φ(s) = s'}; since φ is one-to-one, g(b') is a singleton. Note that this implies that φ(t) = t'. Also, since h_s is one-to-one for each s ∈ Q̄, g(b'_1) = g(b'_2) if and only if b'_1 = b'_2; i.e., g is one-to-one. Let b'_1 = {(s'_1,x'_1,r')} and b'_2 = {(r',x'_2,t'_2)}, and let b_1 = {(s_1, h_{s_1}(x'_1), r)} = g(b'_1) and b_2 = {(r, h_r(x'_2), t_2)} = g(b'_2). Then b_1∘b_2 = {(s_1, h_{s_1}(x'_1)h_r(x'_2), t_2)} is a singleton of S*(M) and, as φ(t_2) = t'_2, b_1∘b_2 = g(b'_1∘b'_2). Thus

g(b'_1)∘g(b'_2) = g(b'_1∘b'_2). On the other hand, if b'_1 = {(s'_1,x'_1,t'_1)} and b'_2 = {(s'_2,x'_2,t'_2)} with g(b'_1) = {(s_1,x_1,t_1)}, g(b'_2) = {(s_2,x_2,t_2)}, and t'_1 ≠ s'_2, then t_1 ≠ s_2, so that b'_1∘b'_2 = 0 and g(b'_1)∘g(b'_2) = 0.

Now let V' be any element of S*(M'); V' is a finite set of triples of M'. Extend g to g* by g*(V') = ∪{g({b'}) | b' ∈ V'}, and let S*_M'(M) be {g*(V') | V' ∈ S*(M')}. Now, if V'_1, V'_2 ∈ S*(M'),

g*(V'_1)∘g*(V'_2) = [∪{g({b'_i}) | b'_i ∈ V'_1}]∘[∪{g({d'_j}) | d'_j ∈ V'_2}]
= ∪{g({b'_i})∘g({d'_j}) | b'_i ∈ V'_1, d'_j ∈ V'_2}
= ∪{g({b'_i∘d'_j}) | b'_i ∈ V'_1, d'_j ∈ V'_2}
= ∪{g({f'}) | f' ∈ V'_1∘V'_2}
= g*(V'_1∘V'_2).

Thus S*_M'(M) is a subsemigroup of S*(M), and g* is a homomorphism. We wish to show that g* is one-to-one. Suppose g*(V'_1) = g*(V'_2). Choose a triple b'_1 ∈ V'_1, and let {b} = g({b'_1}). Then there is a triple b'_2 ∈ V'_2 such that {b} = g({b'_2}). If b = (s,w,t), then b'_1 = (φ(s),x'_1,φ(t)) and b'_2 = (φ(s),x'_2,φ(t)), where w = h_s(x'_1) = h_s(x'_2). Then, since h_s is one-to-one, x'_1 = x'_2, so b'_1 = b'_2 and V'_1 ⊆ V'_2. The symmetric argument gives V'_1 = V'_2, so that g* is one-to-one, and hence g* is an isomorphism between S*(M') and S*_M'(M).

Corollary
Let M and M' be strong machines with |Q| ≥ |Q'|. Then, unless M is autonomous but M' is not, S*(M') is isomorphic to a subsemigroup of S*(M).

PROOF: If M is not autonomous then S*(M') ≅ S*_M'(M), by the theorem.

On the other hand, if both machines are autonomous then they are isomorphic, so the result follows from Theorem 4.13a.

This corollary settles the Hedetniemi-Fleck conjecture except that it does not decide the case in which M is the unique (up to isomorphism) autonomous, strong, n-state machine C_n and M' is not autonomous. Because of the dependence of Theorem 4.16 on the fact that M has at least two inputs, we prefer to make the following counter-conjecture:

Conjecture
There is a strong, n-state machine M' such that S*(M') is not isomorphic to any subsemigroup of S*(C_n).
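Before leaving the S*-semigroups, their product operation can be illustrated concretely. The sketch below (a minimal illustration; the tuple encoding of triples is an assumption, not from the text) composes two triples (s,x,t) and (s₂,x₂,t₂) into (s,xx₂,t₂) when t = s₂, treats a failed composition as the zero, and forms the set product of two elements of S*(M) as the set of all nonzero pairwise products.

```python
def triple_product(b, d):
    """Product of two triples; returns None for the zero product."""
    (s, x, t), (s2, x2, t2) = b, d
    # Triples compose only when the final state of b is the initial state of d.
    return (s, x + x2, t2) if t == s2 else None

def star_product(U, V):
    """Product in S*(M): all nonzero products b*d with b in U, d in V."""
    return {p for b in U for d in V if (p := triple_product(b, d)) is not None}
```

Note that the empty set (written 0 in the text) is absorbing: if no pair composes, the product is 0, which is exactly the behavior a zero-free homomorphism must preserve.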

CHAPTER V

DISTINGUISHING SEQUENCES

Having presented some of the theoretical properties of realizations with feedback encoding, we now turn our attention to a specific applications area. Actually, we have already discussed one application in Example 3.4, where we showed that the use of feedback encoding can lead to more efficient cascade realizations.

Given a behavior which is to be realized, one often places additional requirements on the realizing machine; perhaps the simplest such requirement is that the machine be reduced. Another requirement is that the machine have a distinguishing sequence: an input string whose output sequence uniquely identifies the machine's starting state. In this chapter we will develop techniques for realizing some machine M', with feedback encoding, by a machine M having a distinguishing sequence. It is important to note that the machine which has the distinguishing sequence is M, and not M together with the feedback encoder.

Up to now we have been concerned only with the realization of state behaviors, but in this chapter, where we will be studying distinguishing sequences, we will instead study the realization of input-output behaviors. We will need to define the realization with feedback encoding of a machine having outputs in two ways, one for the Moore case and one for the Mealy. If M = <Q,I,δ,λ,Y> and M' = <Q',I',δ',λ',Y'> are Moore machines, then M realizes M' with feedback encoding if there are maps φ:Q → Q' (onto), h:Q × I' → I, a type I feedback encoder, and g:Y → Y', such that

φ(q)x' = φ(q·h_q(x'))

and

λ'(φ(q)) = g(λ(q)).    (5.1)

Since in most of what follows the realizations will be isomorphic, the output decoder g may often be just a one-to-one correspondence, and it can be omitted if we assume, without loss of generality, that λ'(φ(q)) = λ(q), giving the alternate conditions

φ(q)x' = φ(q·h_q(x'))
λ'(φ(q)) = λ(q).    (5.2)

Similarly, if M and M' are Mealy machines, then M realizes M' with feedback encoding if there are maps φ:Q → Q' (onto), h:Q × I' → I, a type I feedback encoder, and g:Y → Y', such that

φ(q)x' = φ(q·h_q(x'))
λ'(φ(q),x') = g(λ(q,h_q(x')))    (5.3)

or, if we take Y = Y' and g to be the identity map,

φ(q)x' = φ(q·h_q(x'))
λ'(φ(q),x') = λ(q,h_q(x')).    (5.4)

Example 5.1
Let M1 and M2 be the Mealy machines in Figure 5.1. Then M2 realizes M1 with feedback encoding. For example, we

[Figure 5.1. Realization with feedback encoding.]

check (5.4) at state r2, where φ(r_i) = q_i. For x' = 0, h_r2(x') = 1, so that q2·0 = q2 = φ(r2·1), and λ1(q2,0) = b = λ2(r2,1); for x' = 1, h_r2(x') = 0, q2·1 = q3 = φ(r2·0), and λ1(q2,1) = a = λ2(r2,0).

From Example 5.1 we can see the relationship between the state transition graphs of two Mealy machines when one realizes the other isomorphically with feedback encoding. Not only are the digraphs isomorphic, but the isomorphism preserves the output labels, so that only the input labels are permuted.

As is well known, the concepts of Mealy and Moore machines are essentially interchangeable. We show in Theorem 5.1 that this interchangeability carries over to realizing machines.

Theorem 5.1
Let M and M' be Mealy machines, and M̃ and M̃' their Moore equivalents. If M realizes M' with feedback encoding then M̃ realizes M̃' with feedback encoding.

PROOF: We are given maps φ:Q → Q' (onto) and h:Q × I' → I such that φ(q)x' = φ(q·h_q(x')) and λ'(φ(q),x') = λ(q,h_q(x')). Now, M̃ has states q_{pr}, where q_p is a state of M and some transition into q_p has output w_r; λ̃(q_{pr}) = w_r. Similarly, M̃' has states q'_{st}. Let φ̃ be the map defined by φ̃(q_{pr}) = q'_{st} if and only if r = t and φ(q_p) = q'_s. Let h̃ be defined by h̃_{q_{pr}} = h_{q_p} for all states q_{pr} of M̃. Then, for any state q_{pr} and input x' ∈ I', q_{pr}·h̃_{q_{pr}}(x') = q_{st} if and only if q_p·h_{q_p}(x') = q_s and λ(q_p,h_{q_p}(x')) = w_t. Thus φ̃(q_{pr})x' = q'_{st}, where q'_s = φ(q_p)x' = φ(q_p·h_{q_p}(x')), so that φ̃(q_{pr}·h̃_{q_{pr}}(x')) = φ̃(q_{pr})x'; and also

λ̃(q_{pr}·h̃_{q_{pr}}(x')) = λ(q_p,h_{q_p}(x')) = w_t = λ̃'(q'_{st}) = λ̃'(φ̃(q_{pr})x').

Hence M̃ realizes M̃' with feedback encoding.

Recall that for a state q and string x_1...x_n,

β_q(x_1...x_n) = λ(q,x_1)λ(qx_1,x_2)...λ(qx_1...x_{n-1},x_n);

of course, in a Moore machine this reduces to

β_q(x_1...x_n) = λ(qx_1)λ(qx_1x_2)...λ(qx_1...x_n).

Defining β in this manner, we are omitting the output associated with the current state q, and therefore ignoring a potentially useful item of information. Since this information is usually available, especially in actual circuits, it is worthwhile to examine the effects of this omission on the work to follow. All the constructions which we give will be valid for either definition of β, but bounds involving the lengths of input sequences will be larger by 1 than they would be with the "usual" definition of β. The advantage of the form taken here, as we shall see, is that it makes it possible to treat certain behavioral characteristics of a machine as though they were strictly structural.

We say that a string x is a distinguishing sequence for a machine M if for any states q ≠ q', β_q(x) ≠ β_q'(x). Clearly [21] any machine which has a distinguishing sequence is reduced, but the converse does not hold. We will first present some examples of machines, taken from the literature, which do not have distinguishing sequences, and show how each can be isomorphically realized with feedback encoding by a machine having a distinguishing sequence.

Example 5.2 [21, p. 116]
The machine M1 of Figure 5.2 has no distinguishing sequence. However, machine M2 certainly realizes M1 with feedback encoding, since the only difference is a permutation of the input labels at q1. The string x = 11001 is a distinguishing

sequence for M2; in fact, the β-function has the values

β_q1(x) = a b a a b
β_q2(x) = b a b a a
β_q3(x) = b a b a b
β_q4(x) = a b b a b
β_q5(x) = a a b a b

[Figure 5.2. A diagnosable realization.]

Note that the action of x is to carry every state to q5, so that x is also a synchronizing sequence [21]; in general, however, a distinguishing sequence need not be synchronizing, and a synchronizing sequence certainly need not be distinguishing.

Example 5.3 [21, p. 120]
Consider Figure 5.3. Again M1 has no distinguishing sequence, while M2, which realizes M1 with feedback encoding, has one. In fact, if x = 00 then

β_q1(x) = a a
β_q2(x) = b a
β_q3(x) = a b
β_q4(x) = b b

[Figure 5.3. Another diagnosable realization.]

In the case that all the symbols in a distinguishing sequence x are identical, we say that x is a repeated symbol distinguishing sequence [27].

Example 5.4 [27, p. 325]
Referring to the machines in Figure 5.4, M1 has no distinguishing sequence, M2 realizes M1 with feedback encoding, and M2 has a repeated symbol distinguishing sequence; in fact, if x = 1111, then

β_q1(x) = b a a a
β_q2(x) = a a a a
β_q3(x) = a a a b
β_q4(x) = a a b b
β_q5(x) = a b b a
β_q6(x) = b b a a
β_q7(x) = b a a b
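The β-values in Examples 5.2-5.4 are easy to verify mechanically. A minimal sketch (the dictionary encodings of δ and λ are an assumption for illustration, not from the text) computes β_q for a Mealy machine and tests whether an input string is distinguishing, i.e., whether the β_q(x) are pairwise distinct:

```python
def beta(delta, lam, q, xs):
    """beta_q(x1...xn) = lam(q,x1) lam(q x1, x2) ... as a tuple of outputs."""
    out = []
    for x in xs:
        out.append(lam[(q, x)])  # output on the transition taken
        q = delta[(q, x)]        # follow the transition
    return tuple(out)

def is_distinguishing(states, delta, lam, xs):
    """x is a distinguishing sequence iff no two states share beta_q(x)."""
    seqs = [beta(delta, lam, q, xs) for q in states]
    return len(set(seqs)) == len(seqs)
```

On a machine whose two states produce different outputs under input 1, the single symbol "1" already distinguishes; if both transitions carry the same output label, no power of 1 can.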

[Figure 5.4. A repeated symbol diagnosable realization.]

Before we begin our study in detail, a word is in order about just what it is that we wish to accomplish. The existence of a distinguishing sequence is properly a behavioral, rather than a structural, property. Nevertheless, as the previous examples show, feedback encoding techniques can, for some machines, produce realizing machines with distinguishing sequences and yet cause no increase in state-set size, input-set size, or output-set size. We will develop some results which give techniques, in some cases, for finding realizing machines with distinguishing sequences. These techniques will not cover all the possibilities, however. For example, while they could be successfully applied to machine M1 of Figure 5.2, they would not produce the same realizing machine as is given in Example 5.2. Rather than simply giving techniques, we are more concerned with trying to demonstrate that feedback encoding techniques can be an effective tool. The usefulness of these techniques does not lie solely in the theorems

which we are presenting.

In a machine M, two states q1 and q2 are said to merge if, for some input x ∈ I, q1·x = q2·x; states q1 and q2 converge if, under some input x, they merge and if, also, λ(q1,x) = λ(q2,x). If q1 and q2 converge to q3 under x, we write (q1,q2)x = q3. Note that in a Moore machine two states converge if and only if they merge.

Lemma 5.1 [21]
If a reduced machine is convergence-free then it has a distinguishing sequence.

The converse is not true; a machine need not be convergence-free to have a distinguishing sequence (see, for example, machine M2 of Figure 5.2). However, if a machine is reduced and k states converge to a single state, then by state-splitting techniques, and by adding additional output symbols, the convergence can be eliminated; this requires adding on the order of k new states and output symbols [27]. Of the two, the addition of new states is the more costly, and, as we have noted above, the techniques we present will be geared towards producing no increase in state-set size, although some increase in output-set size will not always be avoidable.

We will first concentrate on the problem of designing realizing machines which have repeated symbol distinguishing sequences. Let <s_i> = <s_0,s_1,...,s_{m-1}> be a cyclic sequence of symbols, where it is understood that any reference to s_j is for j modulo m. For any integer n, we say that <s_i> has property P(n) if there is an integer i_n for which s_{i_n} ≠ s_{i_n + n}. Let M = <Q,{1},δ,λ,Y> be a strong autonomous machine with m states, where q_i·1 = q_{i+1}, and let λ(M) be <λ(q_0,1),...,λ(q_{m-1},1)>.
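Property P(n) for a cyclic sequence translates directly into code. The following sketch (function names hypothetical) checks P(n), and property P as the conjunction of P(n) over n = 1,...,⌊m/2⌋:

```python
def has_P_n(s, n):
    """P(n): some index i has s[i] != s[i+n], indices taken modulo m."""
    m = len(s)
    return any(s[i] != s[(i + n) % m] for i in range(m))

def has_P(s):
    """Property P: P(n) holds for every n = 1, ..., floor(m/2)."""
    return all(has_P_n(s, n) for n in range(1, len(s) // 2 + 1))
```

For instance, "abab" fails P(2) (it has the proper subperiod 2), while "aabb" has property P.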

Theorem 5.2
A strong autonomous machine M is reduced if and only if, for all n = 1,2,...,⌊m/2⌋, λ(M) has P(n).

PROOF: Suppose first that λ(M) has P(n) for each 1 ≤ n ≤ ⌊m/2⌋, but that states q_i and q_{i+k} are equivalent. Then for any x ∈ {1}⁺, λ(q_i,x) = λ(q_{i+k},x), so that for r = min{k, m-k}, 1 ≤ r ≤ ⌊m/2⌋, and for every state q_j, λ(q_j,1) = λ(q_{j+r},1), which would indicate that λ(M) does not have P(r), a contradiction. On the other hand, suppose that M is reduced, and choose 1 ≤ n ≤ ⌊m/2⌋. Since q_0 and q_n are not equivalent, there is some input sequence x, ℓ(x) ≥ 1, such that λ(q_0,x) ≠ λ(q_n,x). Thus if r = ℓ(x), then λ(q_{r-1},1) ≠ λ(q_{n+r-1},1), so that λ(M) has P(n).

Of course, Theorem 5.2 just says that λ(M) has no proper subperiods. If <s_i> has P(n) for each 1 ≤ n ≤ ⌊m/2⌋, we say that it has property P; conversely, if <s_i> does not have P(n) for some n, we say that P fails (for n).

Lemma 5.2
Let <s_i> = <s_0,...,s_{m-1}>.
a) If P fails for n, it fails for the g.c.d. (n,m).
b) If P fails for n_1 and n_2, it fails for (n_1,n_2).

PROOF: a) Let g = (n,m). It will be sufficient to show that s_0 = s_g. Since, by hypothesis, s_0 = s_n = s_{2n} = ..., we need only show that, for some k, kn ≡ g (mod m). By Lemma 3.3, this congruence has a solution when (n,m) divides g; but g = (n,m), so that s_0 = s_g.

b) Let g = (n_1,n_2). In this case we look for k_1 and k_2 which satisfy n_1k_1 + n_2k_2 ≡ g (mod m). Since (n_1,n_2,m) divides g, Lemma 3.4 guarantees a solution to the congruence, so that, again, s_0 = s_g.

Theorem 5.3
Let s̄_0 ≠ s_0. If <s_0,s_1,...,s_{m-1}> does not have P, then <s̄_0,s_1,...,s_{m-1}> has P.

PROOF: Let n* = min{n | P fails for n}. We show that for n < n*, <s̄_0,s_1,...,s_{m-1}> has P(n). If n* = 1 then the s_i are identical, and hence the new sequence has P. If n* = 2, then m ≥ 4. Thus s̄_0 ≠ s_0 = s_2, so P does not fail for n = 2. But since s_0 must have been different from s_1, as otherwise we would have had n* = 1, we still have s_2 = s_0 ≠ s_1, so the new sequence also has P(1). In general, we know m ≥ 2n*. Now suppose that for k < n*, s_{i_k} ≠ s_{i_k+k}. Then since P fails for n*, s_{i_k+n*} = s_{i_k} ≠ s_{i_k+k} = s_{i_k+k+n*}. Furthermore, i_k+n* ≢ i_k (mod m) and i_k+k+n* ≢ i_k+k (mod m), since m ≥ 2n*. Thus if we change s_0 to s̄_0 we cannot, for n < n*, cause P(n) to fail for the sequence <s̄_0,s_1,s_2,...,s_{m-1}>. But since s̄_0 ≠ s_0 = s_{n*}, the new sequence has P(n*).

Now suppose the new sequence also fails to have P, and let n*_1 = min{n | P(n) fails for <s_i>} and n*_2 = min{n | P(n) fails for <s̄_0,s_1,...,s_{m-1}>}. By the above, n*_2 > n*_1. But we could have started with <s̄_0,s_1,...,s_{m-1}> and changed s̄_0 to s_0, giving <s_i>; the same argument then gives n*_1 > n*_2. This is impossible, so <s̄_0,s_1,...,s_{m-1}> has P.

The next result holds for both Moore and Mealy machines.

Lemma 5.3
Let M be a machine and suppose that, in the labelled digraph consisting of D(M) together with state and output labels,

there is a spanning cycle whose output sequence has property P. Then M can be isomorphically realized with feedback encoding by a machine M' with a repeated symbol distinguishing sequence.

PROOF: We choose M' to have the same labelled digraph as M. Let the states, in their order along the cycle, be q_0,q_1,...,q_{m-1}. We will define an h map such that at each state q_i, one input x_i for which q_i·x_i = q_{i+1} is coded by the distinguished symbol 1' ∈ I', that is, h_{q_i}(1') = x_i, and assign the other values of h arbitrarily, preserving the type I property. Then in M' the input symbol 1' will define a strong autonomous submachine M'' of M', which, by hypothesis, will be reduced. But it is known [27] that a reduced autonomous machine has a (repeated symbol) distinguishing sequence, y. Since M'' has the same state-set as M', y is a repeated symbol distinguishing sequence for M'.

A 2-factor of a digraph is a spanning subdigraph in which each point u has id(u) = od(u) = 1; a 2-factor is a union of directed cycles. Suppose that a machine M has a 2-factor Z = Z_1 ∪ ... ∪ Z_t consisting of t directed cycles. As in Lemma 5.3, we can define a machine M' which realizes M isomorphically with feedback encoding, such that Z is the subdigraph induced by a single input symbol, x'. We now need only make sure that the autonomous submachine defined by x' is reduced. This can be done, in the worst case, by adding t new output symbols, w_1,...,w_t. By Theorem 5.3, if we change one output label in Z_i to w_i, then Z_i will be reduced. Furthermore, since w_i ≠ w_j, no state in Z_i can be equivalent to a state in Z_j. Of course, in general we could expect to need fewer than t new output symbols. In the next lemma

we summarize the known conditions for a digraph to have a 2-factor: if D(M) satisfies any of these conditions, then M can be isomorphically realized with feedback encoding by a machine with a repeated symbol distinguishing sequence.

Lemma 5.4
a) A digraph D has a 2-factor if and only if, for each set S of points, |S| ≤ |γS| [2].
b) If a strong digraph D has p points and, for each point v, id(v)+od(v) ≥ p, then D has a spanning cycle [9].

Of course, the fewer the cycles in a 2-factor, the fewer the output symbols which will have to be changed. On the other hand, the more cycles in the 2-factor, the shorter the distinguishing sequence will be. This trade-off is expressed in the next theorem; by [r > 0] in the theorem we mean the logical variable which takes the value 1 if r > 0 and 0 otherwise.

Theorem 5.4
Let M be a machine with p states and suppose that D(M) has a 2-factor which contains t cycles, of which r are 1-cycles (loops). Then M can be isomorphically realized with feedback encoding by a machine which has a distinguishing sequence. Furthermore, the length L of the distinguishing sequence and the number N of output symbols which must be changed satisfy

L ≤ p - 2t + r + 1
N ≤ t - [r > 0]
L + N ≤ p.

PROOF: We have already indicated how the existence of a 2-factor implies the existence of a realizing machine which has a distinguishing sequence. In the worst case, we need to change one output symbol on each of the t-r cycles of length greater than one, both to reduce the cycle (see Theorem 5.3) and to make the cycles inequivalent. In the worst case this requires adding t-r new output symbols, although this can undoubtedly be reduced in practice. Each of the r loops is already reduced, so we need add or change at most r-1 output symbols to make them inequivalent. This gives N ≤ (t-r)+(r-1) = t-1 if r > 0, and N ≤ t if r = 0; hence N ≤ t - [r > 0].

The length L of the distinguishing sequence is at most the length of a distinguishing sequence for the largest cycle, which is one less than the length of the cycle ([27], [30]). For given t and r, the longest cycle is attained when all but one of the t-r cycles of length greater than 1 are 2-cycles, using 2(t-r-1) of the p-r states. The remaining cycle then has length p - (2(t-r) - 2) - r = p - 2t + r + 2, so that L ≤ p - 2t + r + 1.

Now, if r ≥ 1, then L + N ≤ t - 1 + p - 2t + r + 1 = p - t + r, which takes its maximum when all cycles are 1-cycles, so that t = r. If r = 0, then L + N ≤ t + p - 2t + 1 = p - t + 1, which takes its maximum when t = 1, and the 2-factor is a spanning cycle. Thus L + N ≤ p.

The bound L ≤ p - 2t + r + 1 ≤ p compares quite favorably with the bound L ≤ (p-1)p^p given in [10], although it is not known whether the latter is a best possible upper bound. Notice that while the theorem describes a sufficient procedure, it is certainly not necessary. The machine in Example 5.4 has a spanning cycle, but we were able to find a shorter distinguishing sequence by inspection.
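For a quick check of the trade-off in Theorem 5.4, a small hypothetical helper (not part of the text's development) evaluates the worst-case bounds on L and N from p, t, and r:

```python
def diagnosability_bounds(p, t, r):
    """Worst-case bounds of Theorem 5.4: L <= p - 2t + r + 1, N <= t - [r > 0]."""
    L = p - 2 * t + r + 1          # longest distinguishing sequence needed
    N = t - (1 if r > 0 else 0)    # output symbols changed, [r > 0] notation
    return L, N
```

For a spanning cycle (t = 1, r = 0) this gives L ≤ p - 1 and N ≤ 1, consistent with the construction following Lemma 5.3; increasing t shortens L at the cost of a larger N.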

The simplest way for a machine to satisfy the conditions of the theorem is for each of its states to have the same indegree; such a machine is called homogeneous in [29], where it is shown that any machine whose reduced machine is strong is behaviorally equivalent to a homogeneous machine. Theorem 5.4 could then be applied to the homogeneous machine. Of course, the homogeneous machine has a much larger state set, but this disadvantage may be offset, as Miller and Winograd [29] note:

McNaughton and Booth [26], however, found that in the 2-input case a particularly uniform circuit structure (for the homogeneous machine) resulted for the state to state circuitry... the uniform structure may be quite advantageous in practice, and... can be readily seen to extend to the p-input case.

On the other hand, even if the condition of Lemma 5.4a does not hold, we can use the techniques outlined above to produce a realizing machine with a distinguishing sequence. Note first that any machine can be isomorphically realized by a machine with a repeated symbol distinguishing sequence, by choosing an autonomous submachine and adding output symbols so as to make it reduced [28]. With feedback encoding a similar technique applies, but there is more freedom in choosing the substructure to reduce, and consequently less output augmentation may be necessary.
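The recoding underlying these constructions, routing a distinguished input symbol 1' around a chosen cycle, can be sketched as follows. This is a minimal illustration under assumed encodings (δ as a dictionary, the cycle as a state list); h is built here only for the symbol 1', the remaining values being assigned arbitrarily subject to the type I property:

```python
def cycle_encoder(delta, inputs, cycle):
    """For each state q on the cycle, pick an input x carrying q to its
    successor on the cycle, and let h_q code the symbol 1' as that x."""
    h = {}
    m = len(cycle)
    for i, q in enumerate(cycle):
        succ = cycle[(i + 1) % m]
        x = next(x for x in inputs if delta[(q, x)] == succ)
        h[q] = {"1'": x}
    return h
```

Under this encoder the symbol 1' induces a strong autonomous submachine on the cycle's states, which is then reduced by relabelling outputs as in Theorem 5.3.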

Any autonomous machine, whether or not it is a union of directed cycles, is a functional digraph: a digraph in which each point has outdegree exactly 1. Given any functional subdigraph D of D(M), we can, with feedback encoding, make D the digraph of an autonomous submachine M̄, and then add output symbols to make M̄ reduced. Thus, for a machine M, we would choose a functional subdigraph D which is as "close to" being reduced as possible, and then add output symbols so as to make it reduced. The technique for doing this is first to make each cycle of D reduced, and to make the cycles inequivalent, as we would if D were a 2-factor. We then continue, making each component of D reduced. Two states can be equivalent only if they belong to the same component and there is a homomorphism which identifies them; homomorphisms of autonomous machines (as we noted in Chapter II, the notions of SP and admissible homomorphisms coincide for autonomous machines) have been studied in detail in [34]. We can therefore state:

Remark
Any machine can be isomorphically realized with feedback encoding (in the sense of (5.1) or (5.3)) by a machine with a repeated symbol distinguishing sequence.

Example 5.5
The machine M of Figure 5.5 is reduced, has no distinguishing sequence, and has no 2-factor, since |γ{q1,q2,q3,q4,q5}| = |{q2,q3,q4,q5}| = 4 < |{q1,q2,q3,q4,q5}|. Suppose that we choose the functional subdigraph D of Figure 5.6; the outputs, but not the inputs, associated with the arcs are shown.

[Figure 5.5. The machine M of Example 5.5.]

[Figure 5.6. The functional subdigraph D.]

If we change the output label on the arc q5q4 to a new symbol, c, then the lower component in Figure 5.6 will be reduced. To reduce the upper component, we need only change the symbol on one of the arcs q1q3 or q2q3; since we already know that c must be decoded to an a, we can relabel q1q3 with c. The resulting reduced autonomous machine D' is shown in Figure 5.7, and the machine M' which realizes M isomorphically with feedback encoding is given in Figure 5.8.

[Figure 5.7. A reduced autonomous machine.]

[Figure 5.8. A machine with a distinguishing sequence.]

In fact, x = 111 is a distinguishing sequence, for

β_q1(x) = c b b
β_q2(x) = a b b
β_q3(x) = b b b
β_q4(x) = a c a
β_q5(x) = c a c
β_q6(x) = a c b
β_q7(x) = a a c

A reduced machine will fail to have a distinguishing sequence only if two states converge under some input. Using classical techniques, as we noted above, once a convergence is found to interfere with the existence of a distinguishing sequence, state-splitting and/or augmentation of outputs must be employed. With feedback encoding, however, it is

often possible to eliminate the convergence without adding states or output symbols. We first state a theorem indicating when this can be done for a 2-input (Moore) machine.

Theorem 5.5
Let M be a Moore machine with two inputs, x1 and x2, and exactly one convergence: (q1¹,q1²)x1 = q1. Then, if there is no state q such that q·x2 = q1, M can be isomorphically realized with feedback encoding by a convergence-free machine with the same output function.

PROOF: Note first that since M has only one convergence, q1¹·x2 ≠ q1²·x2.

[Figure 5.9. A convergence.]

We can recode the inputs at q1¹, eliminating the convergence, unless there is a state q2¹ such that q2¹·x1 = q1¹·x2; similarly for q1². Suppose that we can recode neither at q1¹ nor at q1² (see Figure 5.10).

[Figure 5.10. A step in the proof of Theorem 5.5.]

If, for example, there is no state q4¹ such that q4¹·x1 = q3¹·x2, then we can recode the inputs at q3¹ without causing a new convergence; this, in turn, permits us to recode the inputs at q2¹, and hence eliminate the convergence. Continuing in this manner gives rise to sequences of states

..., q-1¹, q0¹, q1¹, q2¹, q3¹, q4¹, ...
..., q-1², q0², q1², q2², q3², ...

such that

qj¹·x1 = qj-1² and qj¹·x2 = qj² if j > 1,

and

qj¹·x1 = qj² and qj¹·x2 = qj-1² if j ≤ 1.

Since M is finite, the process of extending these sequences must produce repetitions. Suppose the first repetition is that two of the qj² coincide. If we are at the stage of choosing qj² and find that it is the same as some previously chosen qk², then qk² must be an x2-image of two distinct states, by virtue of the way qj² is chosen, unless k = 1. If k ≠ 1 this implies that M has two convergences, which is a contradiction. If k = 1 then assume, without loss of generality, that

j > 0 and continue the process at the "other end" of the sequence. It then becomes impossible for the process to fail for lack of a new q-n, for this would have to imply that M has a second convergence. The process therefore must stop because of an inability to choose a new qj¹ (or q-j¹). But then we can progressively relabel the inputs at qj¹, qj-1¹, ... (or at q-j¹, q-j+1¹, ...) until we finally relabel at q1¹ (or q1²), eliminating the convergence. At no time have we changed any output label; this proves the theorem.

The technique used in Theorem 5.5 is applicable to Moore or Mealy machines. More important, it may be used successfully with machines which have more convergences than are specified in the theorem. One straightforward generalization is given by the following corollary.

Corollary
If M has exactly n convergences (q1ⁱ,q2ⁱ)x1ⁱ = qⁱ, where the x1ⁱ, i = 1,...,n, are distinct, and if there are n additional inputs x2ⁱ such that there is no state q for which q·x2ⁱ = qⁱ, then the convergences can be eliminated.

PROOF: For each i, apply the theorem to the submachine of M defined by the two inputs x1ⁱ and x2ⁱ.

The corollary, of course, was phrased so as to insure that the n applications of the theorem would not conflict. Certainly, we can expect that the same techniques will apply to many machines which do not meet the strict condition of the corollary, by breaking up the convergences one at a time. Unfortunately, the extent to which the technique can be reapplied depends both on behavioral and on structural properties of the given machine. While useful as a heuristic, this

approach to eliminating convergences cannot easily be expressed in algorithmic form, with clear-cut rules for choosing, given at some stage a convergence (q1,q2,...,qn)x1 = q, which subconvergence (qi,qj)x1 = q to break up and which input x2 ≠ x1 to use.

In contrast to this essentially local attack on the problem of eliminating convergences, we will develop a global procedure for reducing, not the number of convergences, but rather the number of merges. With Moore machines as we have defined them, the concepts of "merge" and "converge" are, of course, identical; but this, as has been pointed out, is not always a useful identification to make. Thus the global technique we propose may often be inefficient, as it will reduce many merges which are not convergences. For machines which have a large number of convergences, however, we can expect the technique to substantially decrease the number of additional states or outputs which are required for a diagnosable realization. Unfortunately, since the technique deals with merges and not convergences, exact results on the number of additional states or output symbols which will be saved are not available. In fact, it is possible to devise examples where the procedure, while reducing the number of merges, increases the number of convergences. This will become clearer as we get into the actual mechanics of the procedure.

Let D be a digraph, and number the points u1,...,up so that id(u1) ≤ id(u2) ≤ ... ≤ id(up). The indegree sequence of D is the sequence <id(u1), id(u2),...,id(up)>. Suppose that D is the digraph of a Mealy machine, D = D(M), and let M have t inputs. Let Ξ(D) be the quantity Σᵢ max{0, id(uᵢ) - t}. For any arc uv, let ξ(uv) be the input label on the arc. We will say that there are n merges at a state u if there are n+1 arcs v1u, v2u,...,vn+1u such that

ξ(v_1u) = ξ(v_2u) = ... = ξ(v_{n+1}u); note that if all these arcs had the same output label it would be necessary to add n new output symbols to eliminate all the convergences. Let E(u) be the total number of merges at u, and E(M) be the total number of merges in M.

Lemma 5.5 For any machine M with t inputs, E(M) ≥ E(D(M)).

PROOF: We will show that for each state u, E(u) ≥ max{0, id(u)-t}. Clearly, the inequality holds for each state u with id(u) ≤ t. If the number of different input labels on arcs leading to state u is m ≤ t, where id(u) > t, then E(u) = id(u) - m. Thus, if id(u) > t then E(u) = id(u) - m ≥ id(u) - t.

Theorem 5.6 Any machine M can be isomorphically realized with feedback encoding by a machine M' with E(M') = E(D(M)).

To prove this theorem we need to investigate those properties of the assignment function ξ which will guarantee that E(M') = E(D(M)). To this end, we introduce some additional notions from graph theory. A bigraph [12, p. 17] is a graph G whose point-set V can be partitioned into two sets V_1 and V_2 such that all lines of G join points of V_1 with points of V_2. A line-coloring of a graph is an assignment of colors to the lines in such a way that any two lines which are incident with the same point receive different colors. The smallest n such that G can be line-colored with n colors is the line-chromatic number χ'(G). Clearly, the line-chromatic number of a graph G is not less than Δ(G), the maximum of the degrees of the points of G. For a bigraph, a stronger statement can be made.

Lemma 5.6 ([23, p. 171]) For any bigraph G, χ'(G) = Δ(G).
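As a modern illustration of these definitions and of Lemma 5.5 (not part of the original text), the quantities E(D) and E(M) can be computed directly from an arc list. The machine below, and the representation of arcs as (source, target, input-label) triples, are assumptions of the sketch only.

```python
from collections import Counter

# Each arc of D(M) is a (source, target, input_label) triple; this machine
# is hypothetical, chosen so that state q3 has indegree 5 > t = 2.
arcs = [
    ("q1", "q2", "a"), ("q1", "q3", "b"),
    ("q2", "q3", "a"), ("q2", "q4", "b"),
    ("q3", "q3", "a"), ("q3", "q3", "b"),
    ("q4", "q3", "a"), ("q4", "q1", "b"),
]
t = 2  # number of inputs

def indegree_sequence(arcs):
    """Indegrees sorted so that id(u1) <= id(u2) <= ... <= id(up)."""
    return sorted(Counter(v for (_, v, _) in arcs).values())

def E_of_D(arcs, t):
    """E(D) = sum over points of max{0, id(u) - t}."""
    return sum(max(0, d - t) for d in indegree_sequence(arcs))

def merges(arcs):
    """E(M): n+1 arcs into u sharing one input label give n merges at u."""
    shared = Counter((v, x) for (_, v, x) in arcs)
    return sum(n - 1 for n in shared.values())
```

For this particular machine E(M) = E(D(M)) = 3, so the bound of Lemma 5.5 happens to be met with equality.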

Now, let M be a machine with states {v_1,...,v_p}, and form a bigraph G with points {v_1i, v_2i | i = 1,...,p} where v_1i is adjacent to v_2j if and only if v_iv_j ∈ D(M); these are the only lines in G. Note how the degrees of the points of G are related to those of the points of D(M): deg(v_1i) = od(v_i) and deg(v_2i) = id(v_i). If we "color" each line v_1iv_2j of G with the corresponding input symbol ξ(v_iv_j), then if two lines are incident with the same point v_1i they are colored differently, since M is a deterministic machine. If lines x and y with ξ(x) = ξ(y) are incident to a point v_2i there is a merge. As Lemma 5.5 shows, there must be merges at states v_i with id(v_i) > t, the number of inputs.

To prove Theorem 5.6 we will show how to assign inputs to the arcs of D(M) in such a way that the resulting machine is complete and deterministic, hence realizing M with feedback encoding, and the only merges occur at states v_i with id(v_i) > t. Since the line-chromatic number of the associated bigraph G is χ'(G) = Δ(G), we will color the lines of G from a set of colors {β_1,...,β_{Δ(G)}} in such a way that only colors from {β_1,...,β_t} are used to color lines incident with points v_2i with deg(v_2i) ≤ t. If we translate each color β_i in {β_1,...,β_t} to the input symbol x_i and assign input symbols from {x_1,...,x_t} to lines colored from {β_{t+1},...,β_Δ} in such a way as to give a complete deterministic machine M', then there will be no merges at states v_i with id(v_i) ≤ t. Furthermore, if id(v_j) > t then since in the coloring of G each color β_1,...,β_t appears once on a line incident with v_2j, the number of merges at v_j will be exactly id(v_j) - t. Thus we will have E(M') = E(D(M)).

To prove Theorem 5.6, it is then necessary only to demonstrate that a coloring of the prescribed type always exists.
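The construction of the associated bigraph can be sketched directly (an added illustration, not the thesis's notation; the arc-triple representation and the two-state machine are assumptions). Parallel arcs of D(M) give parallel lines, so G is kept as a list of lines with multiplicity.

```python
from collections import Counter

# Split each state v into a source copy (1, v) in V1 and a target copy
# (2, v) in V2; each arc v_i -> v_j of D(M) becomes a line joining
# (1, v_i) to (2, v_j), carrying the arc's input label as its "color".
def associated_bigraph(arcs):
    return [((1, u), (2, v), x) for (u, v, x) in arcs]

# A hypothetical two-state complete machine with inputs a and b.
arcs = [("p", "p", "a"), ("p", "q", "b"), ("q", "p", "a"), ("q", "p", "b")]
G = associated_bigraph(arcs)

# As observed in the text: deg((1, v)) = od(v) and deg((2, v)) = id(v).
deg = Counter()
for (x, y, _) in G:
    deg[x] += 1
    deg[y] += 1
```

Here id(p) = 3 exceeds t = 2, so by Lemma 5.5 any labeling of these arcs must produce a merge at p.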

Theorem 5.7 Let G be a bigraph in which max{deg u | u ∈ V_1} = n = d_0, and suppose that the degrees greater than n which are realized in V_2 are n < d_1 < d_2 < ... < d_r = Δ. Then there is a line-coloring of G from {β_1,...,β_Δ} such that all lines colored β_{d_i + 1},...,β_{d_{i+1}} are incident to points of degree greater than d_i, for i = 0,...,r-1.

To prove the theorem we first develop a sequence of lemmas.

Lemma 5.7 Let G be a bigraph such that all points of maximum degree are in V_2. Then G can be line-colored from {β_1,...,β_Δ} such that the only lines colored β_Δ are incident with points of maximum degree.

PROOF: We suppose the result to be true for bigraphs with q-1 lines. Let G have q lines and let all points of maximum degree be in V_2; let x = uv be incident with one such point v. If Δ(G-x) < Δ(G) = Δ then v was the only point of maximum degree in G. So any line-coloring of G-x from {β_1,...,β_{Δ-1}} extends to a line-coloring of G in which only x is colored β_Δ. If Δ(G-x) = Δ(G) = Δ then we can color G-x with Δ colors so that all lines colored β_Δ are incident with points of maximum degree; in particular, no line colored β_Δ is incident with v. If there is no β_Δ-line at u, then x can be colored β_Δ in G. Otherwise, there is a β_Δ-line uv_1 (where deg v_1 = Δ). Since deg u < Δ there is some color α which does not appear at u. Clearly, however, there is a line v_1u_1

colored α. Thus we get a sequence <u = u_0, v_1, u_1, v_2,...> such that each v_j has maximum degree, each u_jv_{j+1} is colored β_Δ and each v_ju_j is colored α.

At each step of this process we are choosing a new point. If we have just chosen v_j then u_j cannot be a previously chosen u_k: u_j ≠ u_0 as α does not appear at u_0, and u_j cannot be some other previously chosen u_k for otherwise there would be two lines colored α incident to u_k. Similarly, if u_j has just been chosen, v_{j+1} cannot be a previously chosen v_k or otherwise there would be two lines colored β_Δ at v_k. Of course, since G-x is a bigraph, no v_j can be equal to any u_k. Then, since G-x is finite, this process must terminate when we are unable to choose a new u_j or a new v_{j+1}. Since deg v_j = Δ, the process cannot stop with a v_j, so it must stop at some u_j, at which there is no β_Δ-line. We have thus defined a component of the subgraph of G-x spanned by the lines colored α and β_Δ, and can interchange the colors α and β_Δ in this component, preserving the validity of the coloring. But now β_Δ does not appear at u, so x can be colored β_Δ.

Lemma 5.8 In a bigraph G, suppose max{deg u | u ∈ V_1} = n and that there are at least two degrees greater than or equal to n realized by points of V_2, the two largest being n ≤ Δ' < Δ. Then there is a line-coloring of G from {β_1,...,β_Δ} such that all lines colored β_{Δ'+1},...,β_Δ are incident with points of maximum degree.

PROOF: Clearly the result is true whenever n = 1. Suppose it to be true for n-1, and suppose that in G, max{deg u | u ∈ V_1} = n. Note that if Δ-Δ' = 1 then the result holds by Lemma 5.7. Suppose now that the result is true for Δ-Δ' = t-1, and that in G, Δ-Δ' = t, where v_1,...,v_r are the points of degree Δ. We remove

an independent set X of r lines, one adjacent to each of v_1,...,v_r, to get G'; Δ(G') = Δ(G)-1. If, in G', max{deg u | u ∈ V_1} = n then G' can be colored with Δ(G)-1 colors in the prescribed manner: the only lines colored β_{Δ'+1},...,β_{Δ-1} are incident with the v_i. Now, the lines of X can be colored β_Δ. Otherwise, max{deg u | u ∈ V_1} = n-1. By induction on n, a line-coloring of G' with the desired properties can be achieved, and this coloring uses only Δ-1 colors, as above. Again, the lines of X can be colored β_Δ.

Lemma 5.9 Let G be a bigraph in which max{deg u | u ∈ V_1} = n, Δ(G) = Δ > n, and suppose there are no points of degree n+1,...,Δ-1. Then G has a line-coloring from {β_1,...,β_Δ} such that all lines colored β_{n+1},...,β_Δ are incident with points of maximum degree.

PROOF: By Lemma 5.7 we know the result is true, for any n, if Δ = n+1. Also, the result is trivially true whenever n = 1. Suppose that the result holds when max{deg u | u ∈ V_1} = n-1, and let G have max{deg u | u ∈ V_1} = n. Since we know the result holds for Δ = n+1, suppose that it holds for Δ = n+k-1, and let G have Δ(G) = Δ = n+k. Suppose the points of degree Δ are v_1,...,v_r. Remove an independent set X of lines which covers {v_1,...,v_r}, and let G-X = G'. If in G', max{deg u | u ∈ V_1} = n then the resulting graph satisfies the conditions of the lemma with Δ = n+k-1, so there is a line-coloring where all lines colored β_{n+1},...,β_{n+k-1} are incident with the v_i. Then the lines in X can be colored β_{n+k} and the result holds.

Otherwise, in G', max{deg u | u ∈ V_1} = n-1. Then the result holds for G' by induction unless there were points of degree n in V_2. If so, G' satisfies the conditions of Lemma 5.8 with Δ'(G') = n, Δ(G') = Δ-1, and so there is a line-coloring of G' from {β_1,...,β_{Δ-1}} such that all lines colored β_{n+1},...,β_{Δ-1} are incident with the v_i. By coloring the lines of X with β_Δ the result holds. If V_2 had no points of degree n, then by the inductive hypothesis on n, in the line-coloring of G' all lines colored β_{n+1},...,β_{Δ-1} are incident with the v_i. We can again color the lines of X with β_Δ, proving the lemma.

PROOF OF THEOREM 5.7: The result is trivial for n = 1. Also, by Lemma 5.9 it holds whenever r = 1, so we can assume it true for bigraphs in which max{deg u | u ∈ V_1} ≤ n-1 and also for bigraphs in which max{deg u | u ∈ V_1} = n and r-1 degrees greater than n are realized in V_2. Let G have max{deg u | u ∈ V_1} = n and let degrees n < d_1 < ... < d_r = Δ be realized in V_2.

We first prove the result in the case d_r - d_{r-1} = 1. Suppose we remove an independent set X covering the points of degree d_r, to get a graph G'. If the maximum degree of the points in V_1 is reduced to n-1, then G' has a line-coloring of the desired type; in particular, all lines colored β_{d_{r-2}+1},...,β_{d_{r-1}} are incident with points of degree d_{r-1} in G', those being the points of degree d_{r-1} or d_r in G. Then by coloring the lines of X with β_Δ, the desired coloring results. If in G' the maximum degree of the points in V_1 is n, then the inductive hypothesis on r guarantees the desired coloring for G', and again we can color the lines of X with β_Δ.

Now, suppose the result holds for d_r - d_{r-1} = t-1 and suppose that,

in G, d_r - d_{r-1} = t. Let v_1,...,v_s be the points of degree d_r = Δ. We can remove an independent set of lines X which covers {v_1,...,v_s}, giving a graph G' with maximum degree Δ-1, and next largest degree d_{r-1}; note that (Δ-1) - d_{r-1} = t-1. If, in G', max{deg u | u ∈ V_1} = n we get a line-coloring of G' from {β_1,...,β_{Δ-1}} with the desired properties by the inductive hypothesis on t. If not, we get a line-coloring by the inductive hypothesis on n. Either way, we can color the lines of X with β_Δ.

We state, without proof as it bears no direct relationship to our study, the following generalization of Theorem 5.7.

Theorem 5.8 Let G be a bigraph which has points of degree d_1 < d_2 < ... < d_r = Δ. Then G has a line-coloring from {β_1,...,β_Δ} such that all lines colored β_{d_i + 1},...,β_{d_{i+1}} are incident to points of degree at least d_{i+1}.

PROOF: See [6].

Example 5.6 Consider the machine M in Figure 5.11. There are convergences at states q_3, q_4 and q_5, and M certainly does not have a distinguishing sequence. The associated bigraph G has Δ(G) = 3, and so can be line-colored with three colors. Two such colorings are given in Figures 5.12 and 5.13. In Figures 5.14 and 5.15 we show the deterministic incomplete machines with three inputs which the colorings induce.
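The α/β_Δ interchange used in Lemma 5.7 is in effect an algorithm, and Lemma 5.6 can be realized constructively: processing the lines one at a time and flipping an alternating path whenever two free colors disagree yields a line-coloring of any bigraph with Δ(G) colors. The following Python sketch is an added illustration under assumed conventions (edges as vertex pairs, colors as integers 0, 1, ...), not the thesis's own construction.

```python
def bipartite_edge_coloring(edges, n_colors):
    """Line-color a bigraph with n_colors >= Delta(G) colors, by the
    alternating-path interchange used in the proof of Lemma 5.7."""
    at = {}                       # (vertex, color) -> index of edge with that color
    col = [None] * len(edges)

    def free(x):                  # smallest color not yet used at vertex x
        return next(c for c in range(n_colors) if (x, c) not in at)

    def set_color(i, c):
        u, v = edges[i]
        col[i] = c
        at[(u, c)] = i
        at[(v, c)] = i

    def unset(i):
        u, v = edges[i]
        del at[(u, col[i])], at[(v, col[i])]

    for i, (u, v) in enumerate(edges):
        a, b = free(u), free(v)
        if a != b and (v, a) in at:
            # Walk the maximal path from v whose lines alternate colors a, b;
            # by bipartiteness it never reaches u (cf. the proof of Lemma 5.7).
            path, x, c = [], v, a
            while (x, c) in at:
                j = at[(x, c)]
                path.append(j)
                p, q = edges[j]
                x = q if p == x else p
                c = b if c == a else a
            # Interchange colors a and b along the path.
            new = [(j, b if col[j] == a else a) for j in path]
            for j in path:
                unset(j)
            for j, c2 in new:
                set_color(j, c2)
        set_color(i, a)           # a is now free at both ends
    return col

# Example: K_{2,2} has Delta = 2 and is line-colorable with 2 colors.
edges = [("u1", "w1"), ("u1", "w2"), ("u2", "w1"), ("u2", "w2")]
coloring = bipartite_edge_coloring(edges, 2)
```

This sketch does not enforce the extra condition of Theorem 5.7 on where the high-numbered colors fall; it only witnesses χ'(G) = Δ(G).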

Figure 5.11. A machine with convergences at three states.

Figure 5.12. A line coloring of a bigraph.

Figure 5.13. A line coloring satisfying the conditions of Theorem 5.7.

Figure 5.14. An incomplete machine derived from Figure 5.12.

Figure 5.15. An incomplete machine derived from Figure 5.13.

In each machine we would change the labels c to an a or b so as to make the resulting machine both complete and deterministic. For the machine of Figure 5.14 there would still be three convergences. However, as the coloring in Figure 5.13 satisfies the conditions of Theorem 5.7, the only arcs labelled c are incident to one of the states q_4 or q_5 having maximum indegree; when the labels on the machine in Figure 5.15 are changed, the resulting machine, shown in Figure 5.16, has only two convergences.

As we have noted, the procedure which we have outlined reduces merges rather than convergences, and for machines M in which E(D(M)) is small but nonzero may actually increase the number of convergences. Nevertheless there are two cases in which the procedure can be shown to be of definite advantage. We state these as corollaries to Theorem 5.6.

Figure 5.16. A machine with E(D(M)) convergences.

Corollary If M has t inputs then M can be isomorphically realized with feedback encoding by a convergence-free machine if and only if every state q has id(q) = t.

Such machines, of course, also satisfy Theorem 5.4; this corollary, therefore, provides an alternate method of attack. Any machine M can be isomorphically realized with feedback encoding by a machine which has at most E(D(M)) convergences, since every convergence is a merge. Of special interest is the case when M has more than E(D(M)) convergences.

Corollary If M has more than E(D(M)) convergences then M can be isomorphically realized with feedback encoding by a machine with fewer convergences.
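The final relabeling step of the procedure, replacing each surplus color at a state by an input not already used on that state's outgoing arcs (as the labels c were replaced in Example 5.6), can be sketched as follows. This is an added illustration; the arc-triple representation and all names are assumptions.

```python
def colors_to_inputs(colored_arcs, inputs):
    """colored_arcs: (source, target, color) triples with colors 0, 1, ...;
    inputs: the t input symbols. Colors 0..t-1 translate directly to inputs;
    each higher color is replaced, state by state, with an input symbol not
    yet used on that state's outgoing arcs, keeping the result deterministic."""
    t = len(inputs)
    by_state = {}
    for (u, v, c) in colored_arcs:
        by_state.setdefault(u, []).append((v, c))
    machine = []
    for u, out in by_state.items():
        used = {c for (_, c) in out if c < t}
        spare = iter(sorted(set(range(t)) - used))
        for (v, c) in out:
            machine.append((u, v, inputs[c] if c < t else inputs[next(spare)]))
    return machine

# Example: a state whose two outgoing arcs were colored 0 and 2, with t = 2
# inputs a, b; the surplus color 2 is replaced by the unused input b.
relabeled = colors_to_inputs([("A", "X", 0), ("A", "Y", 2)], ["a", "b"])
```

Because a proper line-coloring gives each state's outgoing arcs distinct colors, the spare inputs always suffice, and a coloring satisfying Theorem 5.7 confines the resulting merges to states of indegree greater than t.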

CHAPTER VI

SUMMARY

We have studied both properties and applications of realization with feedback encoding, and, at this point, we should look back to see what we did, and did not, do.

Most of the study of the properties of these realizations was motivated by the classical theory of automata. Thus, for example, admissible homomorphisms play the same role for realizations with feedback encoding that SP homomorphisms play for realizations. There are subtle differences between these two classes of mappings, however. On one hand, as we pointed out in Chapter II, admissible homomorphisms do not exhibit all the lattice properties of SP homomorphisms. Thus, the full power of the Hartmanis-Stearns techniques [17] cannot be applied to admissible homomorphisms. On the other hand, admissible homomorphisms, being defined on digraphs rather than on machines, are somewhat easier to manipulate and study. In this regard, an added advantage is that, as mappings between digraphs or graphs, they have interest independent of their applications to machines; Theorem 2.15 is one example of this. Another advantage of admissible maps is the procedure for checking admissibility which is embodied in Theorems 2.7 and 2.13.

Of course, we have actually only scratched the surface in studying admissible homomorphisms. One major question which is still outstanding is, which digraphs have no admissible homomorphic images; related to this is the problem of finding those digraphs which have no walkwise images. We could also ask, what properties of digraphs

are preserved by admissible homomorphisms, in the sense that the property "m(D) ≥ k" is preserved?

There appears to be little more that can be said about the basic properties of realizations with feedback encoding. One problem upon which we did not touch is the meaning of realization with feedback encoding when applied to logical nets. For example, Zeigler [36] shows that for any integer r there is a machine M such that any logical net which isomorphically realizes M has a strong component S which contains a point whose indegree, in S, is greater than r. While we strongly suspect that a similar result would hold for isomorphic realization with feedback encoding, it is not clear how to attack the problem. Zeigler's proof techniques depend heavily on behavioral properties, which of course are blurred by realization with feedback encoding, and so resolution of the problem would probably depend on a better understanding of the relationship between net structure and transition graph structure.

The semigroup of a machine is quite important to the theory of realization. While we studied the S-semigroups in great detail in Chapter IV, our motivation was to be able to develop decomposition properties, and we must conclude that this goal is very likely unattainable. For, we showed that zero-free homomorphisms or divisions between S-semigroups are both necessary and sufficient for realizations or divisions with feedback encoding, which makes it difficult to suppose that a finite algebraic structure with similar properties could be found. And certainly, even if decomposition properties could be related to the S-semigroups, there is no real hope of usefully applying these infinite structures to the problems of finite automata theory.

We also discussed two applications of realizations with feedback encoding. In Chapter III we showed that cascade realizations with feedback encoding can be more economical, in terms of the sizes of the state sets of the component machines, than realizations without feedback encoding, and in Chapter V we showed how to apply feedback encoding to the design of realizing machines with distinguishing sequences. Perhaps other applications could have been developed, but these are sufficient to show the value, and limits, of realization with feedback encoding. For, and this cannot be stressed too strongly, there are no clear rules on when to use the techniques we have developed. Even when, as with the distinguishing sequence problem, we can guarantee the applicability of the techniques to any machine, whether or not they are of any value will depend quite strongly on the specific problem. We have shown the potential power of realization with feedback encoding, but in any application it will be just one of a number of tools which can be tried, and will work better in some cases than in others.

BIBLIOGRAPHY

1. Arbib, M.A. Theories of Abstract Automata, Prentice-Hall, Englewood Cliffs, N.J. 1969.

2. Berge, C. The Theory of Graphs, Methuen, London. 1964.

3. Burks, A.W. and Wang, H. "The Logic of Automata: Parts I and II," J.A.C.M., 4, pp. 193-297, 1957.

4. Chartrand, G. and Geller, D. "On Uniquely Colorable Planar Graphs," J. Combinatorial Theory, 6, pp. 271-278, 1969.

5. Eggan, L.C. "Transition Graphs and the Star-Height of Regular Events," Michigan Math. J., 10, pp. 385-397, 1963.

6. Geller, D. "How to Color the Lines of a Bigraph," Technical Report 032960-12-T, Department of Computer and Communication Sciences, The University of Michigan, April 1971.

7. _____, "Minimally Strong Digraphs," Proc. Edinburgh Math. Soc., 17, pp. 15-22, 1970.

8. _____, "The Square Root of a Digraph," J. Combinatorial Theory, 5, pp. 320-321, 1968.

9. Ghouila-Houri, A. "Une Condition Suffisante d'Existence d'un Circuit Hamiltonien," C.R. Acad. Sci. Paris, 251, pp. 495-497, 1960.

10. Gill, A. Introduction to the Theory of Finite-State Machines, McGraw-Hill, New York. 1962.

11. Ginzburg, A. Algebraic Theory of Automata, Academic Press, New York. 1968.

12. Harary, F. Graph Theory, Addison-Wesley, Reading. 1969.

13. Harary, F.; Hedetniemi, S.T.; and Prins, G. "An Interpolation Theorem for Graphical Homomorphisms," Port. Math., 26, pp. 453-462, 1967.

14. Harary, F.; Hedetniemi, S.T.; and Robinson, R.W. "Uniquely Colorable Graphs," J. Combinatorial Theory, 6, pp. 264-270, 1969.

15. Harary, F.; Norman, R.Z.; and Cartwright, D. Structural Models, John Wiley & Sons, New York. 1965.

16. Harary, F. and Trauth, C.A., Jr. "Connectedness of Products of Two Directed Graphs," J. SIAM Applied Math., 14, pp. 250-254, 1966.

17. Hartmanis, J. and Stearns, R.E. Algebraic Structure Theory of Sequential Machines, Prentice-Hall, Englewood Cliffs, N.J. 1966.

18. Hedetniemi, S.T. "Homomorphisms of Graphs," Technical Report 03105-42-T, Department of Computer and Communication Sciences, The University of Michigan, December 1965.

19. _____, "Homomorphisms of Graphs and Automata," Technical Report 03105-44-T, Department of Computer and Communication Sciences, The University of Michigan, July 1966.

20. Hedetniemi, S.T. and Fleck, A.C. "S-Semigroups of Automata," Technical Report 6, THEMIS PROJECT, University of Iowa, 1970.

21. Hennie, F.C. Finite State Models for Logical Machines, John Wiley & Sons, New York. 1968.

22. Holland, J.H. "Cycles in Logical Nets," Technical Report 2722, 2794-4-7, Logic of Computers Group, The University of Michigan, 1959.

23. König, D. Theorie der Endlichen und Unendlichen Graphen, Chelsea, New York. 1950.

24. Krohn, K. and Rhodes, J. "Algebraic Theory of Machines I: Prime Decomposition Theorem for Finite Semigroups and Machines," Trans. AMS, 116, pp. 450-464, 1965.

25. McNaughton, R. "The Loop Complexity of Pure-Group Events," Information and Control, 11, pp. 167-176, 1967.

26. McNaughton, R. and Booth, T.M. "General Switching Theory," Technical Documentary Report ASD-TDR-62-599, Moore School of Electrical Engineering, University of Pennsylvania, 1962.

27. Meyer, J.F. "Theory and Design of Reliable Spacecraft Data Systems," Systems Engineering Lab, The University of Michigan, September 1970.

28. Meyer, J.F. and Yeh, K. "Diagnosable Machine Realizations of Sequential Behavior," Digest 1971 International Symposium on Fault Tolerant Computing, pp. 22-25, March 1971.

29. Miller, R.E. and Winograd, S. "On the Number of Transitions Entering the States of a Finite Automaton," IBM Research Note NC-320, 1963.

30. Moore, E.F. "Gedanken-Experiments on Sequential Machines," in Automata Studies (C.E. Shannon and J. McCarthy, eds.), Princeton University Press, Princeton, 1956.

31. Mukhopadhyay, A. "The Square Root of a Graph," J. Combinatorial Theory, pp. 290-295, 1967.

32. Niven, I. and Zuckerman, H. An Introduction to the Theory of Numbers, John Wiley & Sons, New York. 1960.

33. Oehmke, R.H. "On S*-Semigroups of Automata," Technical Report 8, THEMIS PROJECT, University of Iowa, 1969.

34. Yoeli, M. and Ginzburg, A. "On Homomorphic Images of Transition Graphs," J. Franklin Inst., 278, pp. 291-296, 1964.

35. Zeiger, H.P. "Loop-Free Synthesis of Finite-State Machines," Ph.D. Thesis, Department of Electrical Engineering, M.I.T. 1964.

36. Zeigler, B.P. "On the Feedback Complexity of Automata," Technical Report 08226-6-T, Department of Computer and Communication Sciences, The University of Michigan. Also Ph.D. Thesis. 1969.
