ENGINEERING RESEARCH INSTITUTE
UNIVERSITY OF MICHIGAN
ANN ARBOR

ON THE INDECOMPOSABLE REPRESENTATIONS OF ALGEBRAS

by
JAMES P. JANS

Project 2200
DETROIT ORDNANCE DISTRICT, ORDNANCE CORPS, U.S. ARMY
CONTRACT DA-20-018-ORD-13281, DA Project No. 599-01-004
ORD Project No. TB2-001-(1040), OOR Project No. 31-124

July, 1954

PREFACE

I wish to acknowledge my indebtedness to Professor R. M. Thrall, who introduced me to the area of study covered in this dissertation. His encouragement and guidance helped to make it a reality. I would like to thank Professor R. C. Lyndon for his careful reading of the manuscript and for his suggestions, which improved the clarity of this paper.

This dissertation was written under the sponsorship of the Office of Ordnance Research, U. S. Army, Contract DA-20-018-ORD-13281.

TABLE OF CONTENTS

INTRODUCTION

CHAPTER I
1. Initial Definitions
2. History
3. Necessary and Sufficient Conditions for Strongly Unbounded Type

CHAPTER II
1. Structure Theory
2. Basic Algebras
3. Representations of Algebras and Basic Algebras
4. Two-Sided Ideals in A and Ā
5. Representations of Large Degree

CHAPTER III
1. Two-Sided Ideal Lattices
2. The Two-Sided Ideal Lattice and Strongly Unbounded Type
3. Commutative Algebras
4. Other Consequences of a Finite Two-Sided Ideal Lattice

CHAPTER IV
1. Right and Left Ideals

CHAPTER V
1. Graphs
2. Graph with Cycle
3. Graph with Chain Which Branches at Each End
4. Graph with a Vertex of Order Four

BIBLIOGRAPHY

INTRODUCTION

If A is an associative algebra, any representation of A by matrices can be decomposed into a direct sum of indecomposable representations, and such a decomposition is unique up to similarity of the direct summands (Krull-Schmidt Theorem). If the algebra A is semisimple, the indecomposable representations are actually irreducible. Counting similar representations as equal, A has only a finite number of such irreducible representations. Thus, every representation of a semisimple algebra can be formed as a direct sum of representations taken from this finite set.

For nonsemisimple algebras this is no longer true. There are algebras with radical which have an infinite number of inequivalent indecomposable representations of the same degree, for each of an infinite number of degrees. Such algebras are said to be of strongly unbounded representation type.

The object of this paper is to classify and study algebras with unity over an algebraically closed field according to the number of inequivalent indecomposable representations they have. In Chapters III, IV, and V, four independent conditions are given, each of which implies that an algebra over an algebraically closed field is of strongly unbounded representation type.

Although some of the results obtained can be extended to the case where the field is not algebraically closed, the assumption of algebraic closure greatly simplifies the proofs of the main theorems and gives a clearer statement of the structure of the algebras involved.

Chapter I contains a precise definition of the classes of algebras to be considered. Also included in Chapter I are several conjectures concerning these classes and a brief history of the work of others on this problem. The theorems stated in Chapter I are not proved there, because they are implied by theorems appearing later in the paper.

Chapter II is concerned with establishing fundamental concepts used in later chapters. Basic algebras are introduced, and the one to one correspondence between representation theory for algebras and representation theory for their basic algebras is set forth. The assumption of algebraic closure of the field is needed for the introduction of basic algebras. A lattice isomorphism between the two-sided ideal lattice of an algebra and that of its basic algebra is established. Also in Chapter II, a method of building representations is given which is used extensively in later chapters. The only new result in Chapter II is Lemma 2.5.A, which gives a criterion for a representation to contain an indecomposable direct summand of at least a certain degree.

In Chapter III the structure of the lattice LA of two-sided ideals is investigated. It is shown that finiteness and distributivity of LA are equivalent and imply that every two-sided ideal is principal. The main result of

Chapter III, Theorem 3.2.A, states that if LA is infinite then the algebra A is of strongly unbounded representation type. In Theorem 3.3.B it is shown that, for a commutative algebra, finiteness of the two-sided ideal lattice LA implies that the algebra has only a finite number of inequivalent indecomposable representations. Such algebras are shown to be a direct sum of polynomial algebras in Corollary 3.3.C. Finally, two lemmas on basic algebras show that if the basic algebra has a finite two-sided ideal lattice, it is generated in the subalgebra sense by two elements. All the results in Chapter III are new.

The two-sided ideal lattice is assumed to be finite and distributive in Chapters IV and V, and the algebras under consideration are assumed to be basic algebras. In Chapter IV a second condition for an algebra to be of strongly unbounded type is given. The lattice LN of two-sided ideals contained in the radical N of A is mapped lattice homomorphically into the left and right ideal lattices in the radical. If the image of LN has a sublattice that is a Boolean algebra with more than 2³ elements, then A is of strongly unbounded representation type.

In Chapter V a graph is associated with each two-sided ideal contained in the radical. Theorems 5.2.A, 5.4.A, and 5.3.A state that if any such graph has a cycle, a vertex of order four, or a chain which branches at each end, then the algebra is of strongly unbounded representation type. The results in Chapters IV and V are extensions of previous results.

CHAPTER I

1. Initial Definitions

A precise definition of the classes of algebras under consideration is given in terms of the following function:

Definition 1.1.A: If A is an algebra and d is a positive integer, let gA(d) be the number of inequivalent indecomposable representations of A of degree d. gA(d) is integer valued or infinite.

Definition 1.1.B: A is said to be of bounded representation type if there exists an integer d0 such that gA(d) = 0 for all d > d0. If not of bounded type, A is said to be of unbounded representation type.

Definition 1.1.C: A is said to be of finite representation type if Σ_{d=1}^∞ gA(d) is finite.

Clearly, if A is semisimple, it is of finite representation type. The class of algebras of unbounded representation type can be further divided into subclasses according to the number of integers d for which gA(d) = ∞. Of particular interest in this paper is the subclass defined as follows.

Definition 1.1.D: A is said to be of strongly unbounded representation type if gA(d) = ∞ for an infinite number of integers d.
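The trichotomy of Definitions 1.1.B-1.1.D can be phrased mechanically. The sketch below is purely illustrative and not from the text: the function name and the truncated tables of values gA(d) are invented, a genuine gA ranges over all degrees d, and more than one infinite value stands in for "infinitely many."

```python
import math

def classify(g, d_max):
    """Toy classifier: g[d] tabulates g_A(d) for d = 1, ..., d_max."""
    inf_degrees = [d for d in range(1, d_max + 1) if g[d] == math.inf]
    if len(inf_degrees) > 1:
        # stand-in for "g_A(d) = infinity for infinitely many d" (Definition 1.1.D)
        return "strongly unbounded"
    if inf_degrees:
        # g_A(d) = infinity somewhere, but not "infinitely often" in this truncated view
        return "unbounded"
    # no infinite values, so the (truncated) sum of g_A(d) is finite (Definition 1.1.C)
    return "finite"

print(classify({1: 2, 2: 1, 3: 0, 4: 0}, 4))                    # finite
print(classify({1: 0, 2: math.inf, 3: 0, 4: math.inf}, 4))      # strongly unbounded
```

The point of the sketch is only that the three classes are distinguished by where and how often gA(d) becomes infinite, which is exactly how the later chapters attack the problem.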

The main theorems proved in this paper are concerned with showing that algebras over an algebraically closed field are of this type. Henceforth, where it is clear that it is the representation type that is being referred to, the terms defined above are shortened to bounded type, unbounded type, etc.

It is possible to define two additional subclasses of the class of algebras of unbounded type. One is the subclass of algebras of unbounded type for which gA(d) is finite for all integers d, and the other, the subclass for which gA(d) = ∞ for only a finite number of integers d. R. Brauer and R. M. Thrall have conjectured that the latter subclass is empty, and that the former is also empty provided that the underlying field is infinite. Concerning the classes of algebras of bounded type and finite type, Brauer and Thrall conjectured that these two classes are identical.

2. History

It was first noted by Nakayama [5] that some nonsemisimple algebras could have indecomposable representations of arbitrarily high degree. He remarks that if N is the radical, e a primitive idempotent, and N^{i-1}e/N^i e, considered as a left A space, contains the direct sum of two isomorphic subspaces, then A is of unbounded type. In the subsequent development of the theory, considerable attention has been given to stating sufficient conditions that an algebra be of strongly unbounded type. (The methods of showing that an algebra over an infinite field has unbounded type also show it has strongly unbounded type. This gives support to the above mentioned

conjectures of Brauer and Thrall that over infinite fields algebras of unbounded type are also of strongly unbounded type.) In a paper, as yet unpublished, R. Brauer [3] stated Nakayama's condition and the following two sufficient conditions that an algebra be of strongly unbounded type.

Theorem 1.2.A: If N is the radical, e a primitive idempotent, and Ne/N²e considered as a left A space is the direct sum of more than three subspaces, then A is of strongly unbounded type.

The condition in the hypothesis of Theorem 1.2.A is generalized in Chapter IV, and the proof of this theorem is implied by the proof of Theorem 4.1.C in the case the underlying field is algebraically closed. The second condition given by Brauer is contained in the following theorem.

Theorem 1.2.B: If A has distinct irreducible representations F_1, ..., F_h, distinct irreducible representations F_1*, ..., F_h*, and representations W_ij of the matrix form

W_ij = ( F_j                )
       ( P_ij   Q_ij        )
       ( Y_ij   S_ij   F_i* )

for i = j = 1, ..., h and for i = j + 1 = 1, ..., h (F_0* being read as F_h*), where F_j is the top Loewy constituent, F_i* is the bottom Loewy constituent, and Y_ij is completely independent of the P_st and S_pr, then A is of strongly unbounded type. (For a discussion of Loewy series and constituents, see Artin, Nesbitt and Thrall [1].)

In Chapter V a condition on the algebra is introduced which implies the hypotheses of Theorem 1.2.B. The proof of Theorem 5.2.A is

similar to the proof of Theorem 1.2.B, so the proof of the latter is omitted here.

R. M. Thrall in an unpublished paper [7] generalized Nakayama's condition that an algebra be of strongly unbounded type. A statement of the condition requires the following concepts concerning representation theory for algebras. The left ideal Ae, where A is the algebra and e is a primitive idempotent, is a vector space over the field k. Multiplication on the left of Ae by α in A is defined by multiplication in the algebra A. In this way Ae can be considered as the space of a representation W(α) of A. Ae is then said to induce the representation W(α). Let a representation W(α) of A be divided into submatrices,

W(α) = (W_ij(α)).

The submatrix W_ij(α) is said to have power s if W_ij(α) = 0 for all α in N^s, but there exists α_0 in N^{s-1} such that W_ij(α_0) ≠ 0. Thrall's generalization of Nakayama's condition is now given in the following theorem.

Theorem 1.2.C: If e is a primitive idempotent, if Ae is the left ideal it generates, if

W = ( F_i                  )
    ( P     Q              )
    ( Y_1   S_1   F_j      )
    ( Y_2   S_2   0    F_j )

where F_i is a top constituent of the representation induced by the left ideal Ae, where Y_1 has power s_1, where Y_2 has power s_2 with s_2 > s_1, and where no F_i appears in the top s_2 - s_1 constituents of the

upper Loewy series for Q, then A is of strongly unbounded type.

Since it is shown in Lemma 3.4.C that the hypotheses of Theorem 1.2.C imply that the algebra has an infinite number of two-sided ideals, the proof of 1.2.C is contained in the proof of Theorem 3.2.A (in the case the underlying field is algebraically closed).

In the same paper [7], Thrall also introduced a method for illustrating graphically the sufficient conditions that an algebra be of strongly unbounded type in the case where the square of the radical is zero. In this paper a graph will mean a set P_1, ..., P_n of vertices and a binary relation ρ on some pairs of vertices, P_i ρ P_j. P_i ρ P_j means that the vertices P_i and P_j are connected by an (oriented) edge. A vertex P_{i0} is said to have right order r (left order r) if there exist distinct vertices P_{i1}, ..., P_{ir} such that P_{i0} ρ P_{iv} (P_{iv} ρ P_{i0}) for v = 1, ..., r. The order of a vertex is the larger of these two orders. A chain C is a set of vertices and edges

(P_{i1}, P_{i2}), (P_{i3}, P_{i2}), (P_{i3}, P_{i4}), ..., (P_{ir-1}, P_{ir})

such that successive edges are distinct. The parentheses indicate that the first and last edges of a chain may have either orientation. Note that, going from one vertex to the next in a chain, the orientation of successive edges alternates. A chain C_2 extends a chain C_1 at the right end (or C_1 extends C_2 at the left end) if the first vertex of C_2

is the same as the last vertex of C_1 and identifying these vertices makes C_1 followed by C_2 a chain. The chain C_1 is said to be a cycle if it extends itself. A chain branches at one end if it can be extended by at least two distinct edges at that end.

Let A = A' + N be a decomposition of A into the vector space direct sum of its radical and a semisimple subalgebra A'. Let A' = Σ_{i=1}^n A_i be the ring direct sum of simple two-sided ideals A_i, each with a unity element e_i. Let P_1, ..., P_n be vertices and let P_i ρ P_j if e_i N e_j ≠ 0. N² is assumed to be zero here. Let M be the relation matrix of ρ. It should be noted here that if I is the identity matrix of degree n, then I + M is a matrix with a nonzero entry only in positions where C, the Cartan matrix of A, has a nonzero entry. For the definition and properties of the Cartan matrix of an algebra see [1].

In the case that the radical squared is zero, sufficient conditions that an algebra be of strongly unbounded type can now be described in terms of the graph defined above. If there exists a vertex of order 4 or more, then the algebra satisfies the hypothesis of Theorem 1.2.A. If the graph has a cycle, then the algebra satisfies the hypothesis of Theorem 1.2.B. The fourth sufficient condition that an algebra be of strongly unbounded type is given by Thrall in [7] in terms of the graph. That condition appears in the following theorem.

Theorem 1.2.D: If the graph G as defined above has a chain which branches at each end, then A is of strongly unbounded type.

In Chapter V, the definition and use of the graph is extended to the case where N² is not necessarily zero. Theorem 5.3.A then implies the proof of Theorem 1.2.D. Hence, the proof of Theorem 1.2.D is omitted here.

3. Necessary and Sufficient Conditions for Strongly Unbounded Type

If certain other conditions are imposed on an algebra, necessary and sufficient conditions that it be of strongly unbounded type can be given. In each such case, if the underlying field is infinite, the algebra is either of strongly unbounded type or of finite type. D. G. Higman [4] has shown, using only group representation theory, that the group algebra over a field of characteristic p is of unbounded type if and only if it has a noncyclic Sylow p-subgroup. R. M. Thrall [7] has shown that the four conditions of Nakayama, Brauer and Thrall are necessary and sufficient conditions that an algebra be of strongly unbounded type if the underlying field is algebraically closed and the square of the radical is zero. In Theorem 3.3.B the author shows that if the algebra is commutative, the necessary and sufficient condition that it be of strongly unbounded type is that its two-sided ideal lattice be infinite. If a commutative algebra is not of strongly unbounded type, it is of finite type.

The underlying field here is assumed to be algebraically closed.

CHAPTER II

It is necessary, before attempting the proofs of the main results, to consider certain preliminary concepts. Use is made here of the assumption that the underlying field is algebraically closed to investigate the properties of basic algebras and their representations. Theorem 2.4.B exhibits the relationship between the two-sided ideal structures of an algebra and its basic algebra. In this chapter a fundamental tool, Lemma 2.5.A, is provided which is used for showing the existence of indecomposable representations of large degree. In Lemma 2.5.C a method for constructing representations of large degree is given.

1. Structure Theory

Considering the Wedderburn structure theory for algebras as given in Artin, Nesbitt and Thrall [1], an algebra A with unity element, over k, an algebraically closed field, can be decomposed into the vector space direct sum

(2.1) A = A' + N,

where A' is a semisimple subalgebra and N is the radical of A. A' can be further decomposed into the ring direct sum

(2.2) A' = Σ_{i=1}^n A_i,

where each A_i is a simple ideal of A'. Since the underlying

field k is algebraically closed, each A_i is a total matrix algebra over k. A basis of matrix units C_{iμν}, μ, ν = 1, ..., f(i), can be chosen for each A_i. The set C_{iμν}, i = 1, ..., n, μ, ν = 1, ..., f(i), where f(i) is the degree of the matrix set for A_i, forms a basis for A'. Since (2.2) is a direct sum in the ring sense and the C_{iμν} multiply like matrix entries, they have the following multiplication formula:

(2.3) C_{iμν} C_{jρσ} = δ_{ij} δ_{νρ} C_{iμσ}    (δ_{ij} the Kronecker delta symbol).

Also, the unity element of A is written

(2.4) 1 = Σ_{i=1}^n Σ_{μ=1}^{f(i)} C_{iμμ}.

Now let 1̄ be defined as

(2.5) 1̄ = Σ_{i=1}^n C_{i11}.

2. Basic Algebras

Definition 2.2.A: The set 1̄A1̄ = Ā is called the basic algebra of A.

Clearly, Ā is a subalgebra of A. Basic algebras were first introduced by Nesbitt and Scott [6] and were also treated by Wall [9]. It is shown by Wall that, although the basic algebra Ā depends on the choice of a semisimple subalgebra A' and on the choice of matrix units, a different choice yields an isomorphic basic algebra. The unity element of Ā is obviously 1̄. The radical of Ā is 1̄N1̄ = N̄. Thus, the decomposition (2.1) implies a decomposition of Ā.
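The multiplication rule (2.3) is the familiar rule for matrix units, and within a single block A_i it can be checked directly with explicit matrices. A minimal sketch (the helper names are ours, not the text's; units from different blocks i ≠ j multiply to zero by the δ_{ij} factor):

```python
def unit(f, mu, nu):
    """The f-by-f matrix unit C_{mu nu}: 1 in row mu, column nu (1-indexed)."""
    return [[1 if (r == mu - 1 and c == nu - 1) else 0 for c in range(f)]
            for r in range(f)]

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(len(b)))
             for c in range(len(b[0]))] for r in range(len(a))]

f = 3
zero = [[0] * f for _ in range(f)]
for mu in range(1, f + 1):
    for nu in range(1, f + 1):
        for rho in range(1, f + 1):
            for sigma in range(1, f + 1):
                # (2.3) within one block: C_{mu nu} C_{rho sigma} = delta_{nu rho} C_{mu sigma}
                prod = matmul(unit(f, mu, nu), unit(f, rho, sigma))
                assert prod == (unit(f, mu, sigma) if nu == rho else zero)

# e_i = C_{i11} of (2.5) is an idempotent: it survives multiplication with itself
e = unit(f, 1, 1)
assert matmul(e, e) == e
print("matrix unit relations (2.3) hold for f =", f)
```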

(2.6) Ā = Σ_{i=1}^n k C_{i11} + N̄.

The C_{i11} in Ā will be labeled e_i. They are orthogonal idempotents by (2.3). The semisimple subalgebra Ā' of Ā corresponding to A' in (2.1) is the ring direct sum of the one dimensional two-sided ideals k e_i of Ā'. Since irreducible representations of Ā have maximal kernels containing N̄, it is clear that irreducible representations of Ā are one dimensional over k. Using the unity element 1̄ of Ā, a decomposition of the radical N̄ is possible:

(2.7) N̄ = Σ_{i,j=1}^n e_i N̄ e_j.

Since the idempotents are orthogonal, (2.7) is a vector space direct sum. (2.6) and (2.7) imply that any element α in Ā can be written

(2.8) α = Σ_{i=1}^n x_i(α) e_i + Σ_{i,j=1}^n e_i ν e_j,

where x_i(α) is in k and e_i ν e_j is in e_i N̄ e_j. Now let Q be a representation space for Ā and choose in Q a composition series Q = M_1 ⊃ M_2 ⊃ ... ⊃ M_{t+1} = 0 of subspaces of Q. Each M_i/M_{i+1} is irreducible and hence one dimensional. Pick v_i in M_i but not in M_{i+1}. Let 1̄ act like the identity on Q; then 1̄v_i = v_i. There must exist e_{j(i)} such that e_{j(i)}v_i is not in M_{i+1}, because if all e_j v_i were in M_{i+1}, then Σ_{j=1}^n e_j v_i = v_i would be there too. Let v_i' = e_{j(i)}v_i. Clearly {v_i' | i = 1, ..., t} is a basis for Q.
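Decomposition (2.8) can be seen in miniature in the algebra of lower triangular 2 × 2 matrices, a basic algebra with two orthogonal diagonal idempotents; the particular algebra and the numbers below are illustrative choices, not taken from the text:

```python
def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(len(b)))
             for c in range(len(b[0]))] for r in range(len(a))]

e1 = [[1, 0], [0, 0]]          # the orthogonal idempotents e_1, e_2
e2 = [[0, 0], [0, 1]]
alpha = [[5, 0], [7, 9]]       # a generic lower triangular element

assert matmul(e1, e2) == [[0, 0], [0, 0]]      # orthogonality

# coefficients x_i(alpha) of (2.8), and the radical piece e_2 N e_1:
x1, x2 = alpha[0][0], alpha[1][1]
nu21 = matmul(matmul(e2, alpha), e1)           # the only nonzero e_i N e_j here
assert nu21 == [[0, 0], [7, 0]]

rebuilt = [[x1 * e1[r][c] + x2 * e2[r][c] + nu21[r][c] for c in range(2)]
           for r in range(2)]
assert rebuilt == alpha      # alpha = x_1(alpha) e_1 + x_2(alpha) e_2 + (radical part)
print("decomposition (2.8) recovers alpha")
```

Sandwiching by the idempotents, e_i α e_j, is what cuts out the blocks of the matrix form (2.10) below.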

Note that orthogonality of the idempotents e_i implies e_p v_i' = 0 unless p = j(i), and then e_{j(i)}v_i' = v_i'. Let α be an element of Ā. Using expression (2.8),

(2.9) α v_i' = x_{j(i)}(α) v_i' + Σ_{p=1}^n e_p ν e_{j(i)} v_i',

where each e_p ν e_{j(i)} v_i' is in M_{i+1}. With respect to this basis v_i', the representation of Ā has the lower triangular matrix form

(2.10) R(α) =
( x_{j(1)}(α)                              0       )
(     *        x_{j(2)}(α)                         )
(     ...                     ...                  )
(     *             *              x_{j(t)}(α)     )

It is evident that if ν is an element in the radical, each x_i(ν) = 0, so R(ν) has zeros on and above the diagonal. Also, by the choice of the basis, if e_p is one of the previously defined idempotents, R(e_p) has zeros off the diagonal and δ_{p j(i)} in the ith diagonal position. Let γ be an element of e_p N̄ e_r; then

(2.11) R(e_p) R(γ) R(e_r) = R(γ).

By the description of R(e_p) and R(e_r) given above, it follows that R(γ) can be non-zero only in entries directly below x_{j(i)} where j(i) = r and directly to the left of x_{j(i)} where j(i) = p. When considering the matrix form of a representation of Ā, it will always be assumed to be in the form (2.10), and the above mentioned facts will hold for it.

3. Representations of Algebras and Basic Algebras

The reason for considering representations of basic

algebras is seen in the following development. Let V be a representation space for the algebra A. Define

(2.12) V̄ = 1̄V.

Since elements of Ā are of the form 1̄α1̄, it is clear that V̄ is an Ā space. If 1̄V = 0, then 0 = C_{i11}V for each i. It follows that C_{iμ1}C_{i11}C_{i1μ}V = C_{iμμ}V = 0, implying 1V = 0. Hence if 1 acts like the identity on V, V̄ is not zero, and 1̄ acts like the identity on V̄. Also, if V = V_1 + V_2 (A direct), then V̄ = 1̄(V_1 + V_2) = V̄_1 + V̄_2 (Ā direct). Thus, if V is decomposable so is V̄.

An important fact concerning this process of going from A spaces to Ā spaces is that it has an inverse. Let V̄ be an Ā space, and let {v_i'} be a basis chosen so that the matrix form of the representation of Ā is as in (2.10). Recall that e_{j(i)}v_i' = v_i' = C_{j(i)11}v_i', where C_{j(i)11} is equal to e_{j(i)}. Now adjoin to V̄ the additional basis vectors C_{j(i)μ1}v_i', μ = 2, ..., f(j(i)). Call the new space V and define a representation of A on V in the following manner for the given basis of V. Let α be in A:

(2.13) α(C_{jμ1}v_i') = Σ_{t=1}^n Σ_{ρ=1}^{f(t)} C_{tρ1} (C_{t1ρ} α C_{jμ1} v_i').

The element C_{t1ρ} α C_{jμ1} is in Ā, so the expression in parentheses is a linear combination of basis vectors in V̄. When this is multiplied on the left by C_{tρ1}, the resulting element is well defined in V. If V̄ is an Ā direct

sum, V is an A direct sum. Clearly, these two processes are the inverses of each other. Wall [9] has proved the following theorem highlighting the value of the preceding development.

Theorem 2.3.A: If V and V' are A-spaces, V̄ and V̄' their corresponding Ā-spaces, then V is A-isomorphic to V' if and only if V̄ is Ā-isomorphic to V̄'.

With the exception of Theorem 2.3.A, all the above results are in Nesbitt and Scott [6]; they are presented here in condensed form. In addition, Nesbitt and Scott showed that the composition length of V as an A space is the same as the composition length of V̄ as an Ā space. Thus, in studying algebras over an algebraically closed field with respect to their representation theory, it is sufficient merely to study representations of basic algebras. Every indecomposable representation of A leads to a corresponding indecomposable representation of Ā and conversely. Since the factor space of two successive steps in a composition series is irreducible, its dimension is equal to the degree of that irreducible representation. There are only a finite number of such irreducible representations for any given algebra. Hence, there exist an infinite number of inequivalent indecomposable representations of a certain degree of A if and only if there exist an infinite number of inequivalent indecomposable representations of Ā of some smaller degree. The following theorem sums up the above results.

Theorem 2.3.B: If A is an algebra over k, an algebraically closed field, and Ā is its basic algebra, then A is of strongly unbounded, bounded or finite type if and only if Ā is of the same type.

4. Two-Sided Ideals in A and Ā

In addition to using the correspondence between the representation theory for A and for Ā, use will be made of the structures of their two-sided ideals. Let L_A be the lattice of two-sided ideals in A and let L_Ā be the lattice of two-sided ideals in Ā. (For a discussion of the lattice concepts used here see Birkhoff [2].) Let A_0 be a two-sided ideal in A. Define the function θ from L_A to L_Ā,

(2.14) θ: L_A → L_Ā by θ(A_0) = 1̄A_0 1̄.

Lemma 2.4.A: θ is a lattice homomorphism of L_A into L_Ā.

Proof: Since A_0 is a two-sided ideal, 1̄A_0 1̄ ⊆ A_0 ∩ Ā. Also, 1̄ is the unity element in Ā, so A_0 ∩ Ā ⊆ 1̄A_0 1̄. Therefore 1̄A_0 1̄ = A_0 ∩ Ā. Then form θ(A_1) + θ(A_2) = 1̄A_1 1̄ + 1̄A_2 1̄. Distributivity of multiplication implies this equals 1̄(A_1 + A_2)1̄ = θ(A_1 + A_2). Next, θ(A_1) ∩ θ(A_2) = (A_1 ∩ Ā) ∩ (A_2 ∩ Ā). But this is (A_1 ∩ A_2) ∩ Ā because set intersection is associative, commutative, and Ā ∩ Ā = Ā. Therefore θ(A_1) ∩ θ(A_2) = θ(A_1 ∩ A_2). This completes the proof of the lemma.

Let Ā_0 be a two-sided ideal in Ā and define the function φ from L_Ā to L_A,

(2.15) φ: L_Ā → L_A by φ(Ā_0) = {Ā_0}_A,

where {Ā_0}_A is the two-sided ideal in A generated by the elements in Ā_0. The following theorem gives the desired relation between L_A and L_Ā.

Theorem 2.4.B: Both θφ and φθ are identity functions, and L_A and L_Ā are lattice isomorphic under θ and φ.

Proof: Since θ(A_0) ⊆ A_0, φθ(A_0) ⊆ A_0 by the definition of φ. Let α be in A_0. A_0 is a two-sided ideal, so C_{i1μ} α C_{jρ1} is also in A_0, and C_{i1μ} α C_{jρ1} is in θ(A_0) because 1̄ leaves it invariant on both sides. It follows that C_{iμ1}(C_{i1μ} α C_{jρ1})C_{j1ρ} is in φθ(A_0). But then α is in φθ(A_0), because

α = 1·α·1 = Σ_{i,μ,j,ρ} C_{iμ1}(C_{i1μ} α C_{jρ1})C_{j1ρ}.

Hence φθ is the identity. Certainly θφ(Ā_0) ⊇ Ā_0, because Ā_0 ⊆ φ(Ā_0) and 1̄Ā_0 1̄ = Ā_0. Let β be in θφ(Ā_0); then β = 1̄γ1̄, where γ = Σ_ν a_ν ᾱ_ν b_ν with a_ν, b_ν in A and the ᾱ_ν = 1̄ᾱ_ν1̄ in Ā_0. Then β = Σ_ν (1̄a_ν 1̄) ᾱ_ν (1̄b_ν 1̄) is in Ā_0, because Ā_0 is a two-sided ideal in Ā. Therefore θφ is the identity. θ is one to one, onto, and a lattice homomorphism; hence θ is a lattice isomorphism and φ is its inverse.

Corollary 2.4.C: If a two-sided ideal in Ā is principal, then its image under φ is principal in A.

Proof: If Ā_0 = {ᾱ}_Ā then, by the definition of φ, φ(Ā_0) = {ᾱ}_A is the two-sided ideal generated by ᾱ in A and is therefore principal.

5. Representations of Large Degree

In the proofs of the main theorems in Chapters III, IV, and V, certain algebras are shown to have indecomposable representations of arbitrarily high degree. Those proofs

depend heavily on the following lemma, which can be used to show the existence of indecomposable representations of a certain degree without actually exhibiting the representations themselves.

Lemma 2.5.A: If A is an algebra over an algebraically closed field, V is a representation space for A, L is the commutator algebra of the representation (the set of all A-homomorphisms of V into itself), and if every B in L has more than d equal eigenvalues, then V has an indecomposable direct summand V_0 of dimension greater than d.

Proof: Suppose the contrary. Let V be decomposed into V = V_1 + ... + V_t (A direct sum), where each V_i has dimension d_i ≤ d. Let y_1, ..., y_t be distinct elements in the field k. Let B be the homomorphism of V into V which maps vectors v_i in V_i onto y_i v_i. On each direct summand V_i, B acts like y_i times the identity matrix. This commutes with every homomorphism of V_i into V_i, including those caused by elements in A. Thus B is in L. But this contradicts the hypotheses of the lemma, because the eigenvalues of B are the y_i, each appearing as often as the dimension d_i of V_i, and d_i ≤ d. This contradiction establishes the proof of Lemma 2.5.A.

A result on commuting matrices which is used extensively in Chapters III, IV, and V is given in the following development. Let

(2.16) P_c =
( c               )
( 1   c           )
(     1   c       )
(         .   .   )
(             1  c )

be a square matrix with a single eigenvalue c and 1's just below the diagonal. P_c is called a primary matrix. Let V be the space on which P_c acts. Since V is generated by powers of P_c acting on a single vector, V is indecomposable. In [1] it is shown that V is indecomposable if and only if the commutator algebra of V is completely primary. If the underlying field is algebraically closed, the following lemma is a corollary to that result.

Lemma 2.5.B: If B is a matrix which commutes with P_c, P_c B = B P_c, then B has exactly one eigenvalue.

In the proofs of the main results, it is necessary to construct representations of large degree. One method is to form direct sums of representations. Another method, used by Thrall in [7], is given in the following. Let R(α) be a representation of A by matrices. Let

(2.17) R(α) = (C_ij(α))

be the matrix cut into submatrices C_ij(α), so that the C_ii(α) are all square and C_ij(α) has the same number of rows as C_ii(α) and the same number of columns as C_jj(α). Then

(2.18) R(α)R(β) = R(αβ) implies Σ_{v=1}^t C_iv(α) C_vj(β) = C_ij(αβ).
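Lemma 2.5.B can be illustrated on a small example. Matrices commuting with P_c include all polynomials in P_c, and such a matrix is lower triangular with a constant diagonal, so its only eigenvalue is that diagonal entry. A sketch (the size and coefficients are arbitrary illustrative choices; this exhibits the phenomenon, it does not prove the lemma):

```python
def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(len(b)))
             for c in range(len(b[0]))] for r in range(len(a))]

def primary(n, c):
    """P_c of (2.16): c on the diagonal, 1's just below it."""
    return [[c if r == s else (1 if r == s + 1 else 0) for s in range(n)]
            for r in range(n)]

n, c = 4, 2
P = primary(n, c)
I = [[1 if r == s else 0 for s in range(n)] for r in range(n)]
S = [[P[r][s] - c * I[r][s] for s in range(n)] for r in range(n)]  # nilpotent part of P_c

# B = 5 I + 3 S + 7 S^2 is a polynomial in P_c, hence commutes with P_c.
S2 = matmul(S, S)
B = [[5 * I[r][s] + 3 * S[r][s] + 7 * S2[r][s] for s in range(n)] for r in range(n)]
assert matmul(P, B) == matmul(B, P)

# B is lower triangular with constant diagonal 5: one eigenvalue, as Lemma 2.5.B asserts.
assert all(B[r][s] == 0 for r in range(n) for s in range(n) if s > r)
assert all(B[r][r] == 5 for r in range(n))
print("a matrix commuting with P_c has the single eigenvalue 5")
```

Combined with Lemma 2.5.A, this is how the later chapters force large indecomposable summands: if a primary matrix of size greater than d sits inside every commuting B, every B has more than d equal eigenvalues.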

Let D = (D_ij) also be a matrix whose coefficients are matrices, such that (D_ij) has the same number of rows and columns of blocks as (C_ij(α)) and the diagonal blocks D_ii are square. Let D_ij × C_ij(α) be the Kronecker product of the two matrices D_ij and C_ij(α). (For the definition and properties of the Kronecker product of two matrices see van der Waerden [8].)

Lemma 2.5.C: If D_ij D_jk = D_ik holds for the positions where C_ij(α), C_jk(α), C_ik(α) are not identically zero, then Q(α) = (D_ij × C_ij(α)) is a representation of A. Further, D_ii = identity implies Q(1) = identity.

Proof: The properties of Kronecker products that are used here are A × (B + C) = (A × B) + (A × C) and (A × B)(C × D) = (AC) × (BD) if all products are defined. First,

Q(α) + Q(β) = (D_ij × C_ij(α)) + (D_ij × C_ij(β)) = (D_ij × [C_ij(α) + C_ij(β)]) = (D_ij × C_ij(α + β)) = Q(α + β).

Also,

Q(α)Q(β) = (Σ_v (D_iv × C_iv(α))(D_vj × C_vj(β))) = (Σ_v (D_iv D_vj) × (C_iv(α) C_vj(β))) = (D_ij × Σ_v C_iv(α) C_vj(β)) = (D_ij × C_ij(αβ)) = Q(αβ).

Then Q(α) is clearly a representation of A.
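Lemma 2.5.C can be verified on the smallest interesting case: the representation of the algebra of upper triangular 2 × 2 matrices cut into four 1 × 1 blocks, with D_11 = D_22 = I and D_12 an arbitrary 2 × 2 block. Since C_21 is identically zero, this choice of D satisfies D_ij D_jk = D_ik wherever it is required. The choice of algebra and of D_12 is ours, purely for illustration:

```python
def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(len(b)))
             for c in range(len(b[0]))] for r in range(len(a))]

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

I2 = [[1, 0], [0, 1]]
D12 = [[0, 5], [2, 3]]   # arbitrary; D11 = D22 = I makes D_ij D_jk = D_ik where needed

def Q(alpha):
    """Q(alpha) = (D_ij x C_ij(alpha)) for alpha = [[a, b], [0, d]] upper triangular."""
    a, b, d = alpha[0][0], alpha[0][1], alpha[1][1]
    A11, A12, A22 = kron(I2, [[a]]), kron(D12, [[b]]), kron(I2, [[d]])
    top = [A11[r] + A12[r] for r in range(2)]
    bottom = [[0, 0] + A22[r] for r in range(2)]
    return top + bottom

def umul(x, y):   # product in the algebra of upper triangular 2x2 matrices
    return [[x[0][0] * y[0][0], x[0][0] * y[0][1] + x[0][1] * y[1][1]],
            [0, x[1][1] * y[1][1]]]

alpha, beta = [[2, 3], [0, 5]], [[7, 1], [0, 4]]
assert matmul(Q(alpha), Q(beta)) == Q(umul(alpha, beta))   # Q(alpha)Q(beta) = Q(alpha beta)
assert Q([[1, 0], [0, 1]]) == [[1 if r == s else 0 for s in range(4)] for r in range(4)]
print("Q is a degree 4 representation built from one of degree 2, as in Lemma 2.5.C")
```

Replacing D_12 by larger and larger blocks is the mechanism by which this construction inflates a fixed representation into representations of arbitrarily high degree.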

CHAPTER III

In this chapter it is shown that every algebra over an algebraically closed field with an infinite two-sided ideal lattice is of strongly unbounded type. Finiteness of the two-sided ideal lattice is equivalent to distributivity of that lattice; this is proved by Corollary 3.1.G. If the algebra is commutative and has only a finite number of two-sided ideals, it is shown to be of finite representation type. The proof of this fact also establishes Corollary 3.3.C, which states that every commutative algebra with a finite two-sided ideal lattice is the direct sum of polynomial algebras over the field k. Before proving these results it is necessary to establish certain results about lattices, and in particular about the two-sided ideal lattice of a basic algebra.

1. Two-Sided Ideal Lattices

The following facts concerning lattices will be used in this and later chapters. A modular lattice is one which has the property that D ⊇ B implies D ∩ (B + C) = (D ∩ B) + (D ∩ C). Lattices generated by subspaces of a vector space, where D ∩ B means set intersection and D + B means the subspace generated by D and B, are modular lattices. Since left, right, or two-sided ideals are subspaces of an algebra, and D ∩ B, D + B are again left, right or two-sided ideals

if both D and B are, the lattice of left ideals, the lattice of right ideals and the lattice of two-sided ideals are all modular lattices. The dimension of a modular lattice is the number of elements in a chain from the least element to the greatest. Modularity of the lattice is needed to show that the dimension is independent of the choice of the chain. A lattice is distributive if D ∩ (B + C) = (D ∩ B) + (D ∩ C) always holds. Clearly, a distributive lattice is modular. Proofs of the following two lemmas can be found in [2].

Lemma 3.1.A: A finite dimensional distributive lattice is necessarily finite.

Lemma 3.1.B: A modular lattice is distributive if and only if it fails to contain a sublattice of the following form: a greatest element covering three incomparable elements, each of which covers a least element.

The sublattice appearing in the previous lemma is called a projective root. A lattice homomorphism of a projective root either maps it onto a single element or maps it isomorphically. If U is the greatest element in a lattice and 0 the smallest, an element D is said to have a complement D' if D + D' = U and D ∩ D' = 0. If every element is complemented, the lattice is called a complemented lattice. A complemented distributive lattice is called a Boolean algebra. An element D covers B if D ⊃ B and D ⊃ C ⊇ B implies C = B. The following lemma gives a fact that will be used in Chapter V.

Lemma 3.1.C: In a distributive lattice, the covers of a single element generate a Boolean algebra.

Now let A be an algebra over an algebraically closed field k and consider L_A, its lattice of two-sided ideals. By the lattice isomorphism of Theorem 2.4.B it is sufficient to consider only basic algebras. Throughout the remainder of this section A will represent a basic algebra. Using the theory developed in Chapter II for basic algebras, the algebra A can be written

A = Σ_{i,j=1}^n e_i A e_j,

where e_i A e_j is an additive subspace of A. The e_i are the n orthogonal idempotents forming a basis for the semisimple subalgebra A' of A.

Definition 3.1.D: Let θ_ij be the function from L_A to subspaces of e_i A e_j defined by θ_ij(A_0) = A_0 ∩ e_i A e_j = e_i A_0 e_j.

Lemma 3.1.E: For each pair i, j, θ_ij is a lattice homomorphism.

Proof: θ_ij(A_1 + A_2) = e_i(A_1 + A_2)e_j = e_i A_1 e_j + e_i A_2 e_j = θ_ij(A_1) + θ_ij(A_2) because of the distributive law for multiplication in A. Secondly, θ_ij(A_1 ∩ A_2) = A_1 ∩ A_2 ∩ e_i A e_j = (A_1 ∩ e_i A e_j) ∩ (A_2 ∩ e_i A e_j) = θ_ij(A_1) ∩ θ_ij(A_2). Thus, θ_ij is a lattice homomorphism.

Distributivity of the two-sided ideal lattice L_A can now be described in terms of these n² lattice homomorphisms θ_ij.
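The projective root of Lemma 3.1.B already occurs among the subspaces of a two dimensional space: three distinct lines through the origin, together with 0 and the whole plane, form a modular but nondistributive sublattice. A quick check over the field of two elements (the choice of field and vectors is an illustrative assumption, not from the text):

```python
from itertools import product

zero = frozenset({(0, 0)})
whole = frozenset(product((0, 1), repeat=2))        # all of (F_2)^2

def line(v):
    """The one dimensional subspace of (F_2)^2 spanned by v."""
    return frozenset({(0, 0), v})

def join(X, Y):
    """Subspace sum X + Y: all sums of a vector in X and a vector in Y."""
    return frozenset(((x0 + y0) % 2, (x1 + y1) % 2)
                     for (x0, x1) in X for (y0, y1) in Y)

D, B, C = line((1, 0)), line((0, 1)), line((1, 1))  # three distinct lines

# D, B, C with 0 and the whole plane form a projective root:
assert join(D, B) == join(D, C) == join(B, C) == whole
assert (D & B) == (D & C) == (B & C) == zero

# the distributive law fails:
assert D & join(B, C) == D                 # left side D n (B + C) is all of D ...
assert join(D & B, D & C) == zero          # ... right side (D n B) + (D n C) is 0
print("the subspace lattice of a plane contains a projective root")
```

This is exactly the configuration that the proof of Lemma 3.1.F below manufactures inside the two-sided ideal lattice whenever some θ_ij(L_A) fails to be a chain.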

Lemma 3.1.F: φij(LA) is a chain for each pair i,j if and only if LA is distributive.

Proof: Suppose LA is not distributive. Then, since LA is a modular lattice, LA contains a projective root R, which is either collapsed entirely or mapped isomorphically by a lattice homomorphism. Let

(3.1)    A1, A2

be two of the incomparable middle elements of R. There exists a pair i,j such that φij(A1) ≠ φij(A2); for if φij(A1) = φij(A2) for all i,j, then summing over all i,j, A1 = A2. Hence for that pair i,j, φij(LA) contains an isomorphic image of the projective root and is therefore not a chain.

To prove the "if" part, let φij(LA) not be a chain. Starting up from 0 in eiAej, let φij(A0) be the smallest element in φij(LA) which has at least two covers in φij(LA). Let A0' be the sum of all two-sided ideals Aσ of LA for which φij(Aσ) ⊆ φij(A0). Since φij is a lattice homomorphism, φij(A0') = φij(A0). Let φij(A1) and φij(A2) be two distinct covers of φij(A0). Choose an element α1 in φij(A1) but not in φij(A0') or in φij(A2), and form its principal ideal (α1) = Aα1A. Since α1 is chosen in eiAej, etα1 = 0 unless t = i. For the same reason, α1er = 0 unless r = j. Using expression (2.6) for A, it is clear that (α1) can be written as

(3.2)    (α1) = kα1 + α1N + Nα1 + Nα1N.

The first + in the above expression is direct

because the summands after it lie in a power of the radical one higher than the power in which α1 lies. Clearly, the expression α1N + Nα1 + Nα1N is a two-sided ideal, because N is a two-sided ideal. By the choice of α1 in eiAej it is clear that

(3.3)    φij(α1N + Nα1 + Nα1N) ⊂ φij((α1)) ⊆ φij(A1),

where the first is properly contained in the last. By the choice of A0', α1N + Nα1 + Nα1N is contained in A0'. Therefore A1* = kα1 + A0' is a two-sided ideal, and the sum is direct because α1 was chosen not in A0'. Since α1 was chosen not in φij(A2), φij(A1*) and φij(A2) are two distinct covers of φij(A0) in eiAej. Running through an identical argument with the subscript 2 replacing 1, A2* = kα2 + A0' is a two-sided ideal, and the sum is direct. A1* and A2* both cover A0', for their quotients are one dimensional. The sum A1* + A2* is kα1 + kα2 + A0'. Let α3 = α1 + α2; then A3* = kα3 + A0' is a third cover of A0', and

(3.4)            A1* + A2*
               /     |     \
            A1*     A2*     A3*
               \     |     /
                    A0'

is clearly a projective root. Hence LA is not distributive. This completes the proof of the lemma.

Note that the previous proof implies that k(k1α1 + k2α2) + A0' is a different two-sided ideal for every distinct ratio k1/k2. This leads directly to the following corollary.

Corollary 3.1.G: LA is finite if and only if LA is distributive.

Proof: If LA is not distributive, the previous lemma implies that φij(LA) is not a chain for some pair i,j, and there exist the two-sided ideal A0' and the elements α1, α2 of the previous lemma. By the comment above, k(k1α1 + k2α2) + A0' is a distinct two-sided ideal for each ratio k1/k2. Since the field is infinite, the lattice of two-sided ideals is necessarily infinite. Conversely, a finite dimensional distributive lattice is necessarily finite by Lemma 3.1.A.

2. The Two-Sided Ideal Lattice and Strongly Unbounded Type

It can now be shown that infiniteness of the two-sided ideal lattice implies that the algebra is of strongly unbounded type. There is a pattern that is repeated in the proofs of each of the theorems in which an algebra is shown to be of strongly unbounded type. This pattern is outlined in the following.

In the first part of the proof, the hypotheses of the theorem are used to show the existence of certain special elements α1, ..., αr in the algebra and to construct a representation Rcs, s an integer, c a parameter in k. Rcs has a degree which is a fixed integral multiple of s. An element B in the commutator algebra of Rcs must satisfy the commutator equation Rcs(α)B = BRcs(α) for all α in A. This equation is examined for α equal to each of the special elements. Among the conditions this imposes on

B is that B have at least s equal eigenvalues. Lemma 2.5.A then implies that Rcs has an indecomposable direct summand Tcs of degree at least s. At this point the conclusion can be drawn that A is of unbounded type.

In the second part of the proof, it is shown that for distinct values of the parameter, c ≠ d, Tcs is not similar to Tds. Since the field is infinite, there are an infinite number of such inequivalent indecomposable representations with degrees between s and the degree of Rcs. So for some integer ds between, there must be an infinite number of inequivalent indecomposable representations with degree ds. Clearly the algebra A is of strongly unbounded type.

The method for showing that Tcs and Tds are not similar is to assume they are and show that this produces a contradiction. If Tcs and Tds were similar, there would exist an intertwining matrix P satisfying PRcs(α) = Rds(α)P, and P, when cut down to the space VT of Tcs, would represent an isomorphism. By letting α equal each of the special elements in the intertwining equation, certain conditions are imposed on P. Among these conditions is that P11Pcs = PdsP11, where P11 is a certain block cut out of P and Pcs, Pds are primary matrices with eigenvalues c and d respectively. It is then seen that P11 represents P cut down to a certain subspace V0 contained in VT. Then P11 represents an isomorphism and has an inverse P11⁻¹. The above equation is then impossible. This contradiction establishes that Tcs and Tds cannot be equivalent.

The main theorem of this chapter is now proved according to the above scheme.
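The two matrix facts this scheme turns on can be illustrated numerically. The sketch below is an illustration only (the degree s = 4, the eigenvalues c = 2, d = 5, and the particular polynomial in Pcs are arbitrary choices, not taken from the text): a matrix commuting with a primary matrix has a single eigenvalue (Lemma 2.5.B), and the intertwining relation X·Pcs = Pds·X admits only X = 0 when c ≠ d.

```python
import numpy as np

def primary(c, s):
    # Primary matrix of degree s: single repeated eigenvalue c,
    # with ones immediately below the diagonal.
    return c * np.eye(s) + np.eye(s, k=-1)

s, c, d = 4, 2.0, 5.0
Pcs, Pds = primary(c, s), primary(d, s)

# A polynomial in Pcs commutes with Pcs, and all of its eigenvalues equal
# the polynomial evaluated at c: here 3 + 2 + 0.5*4 = 7.
B = 3.0 * np.eye(s) + Pcs + 0.5 * Pcs @ Pcs
assert np.allclose(B @ Pcs, Pcs @ B)
assert np.allclose(np.linalg.eigvals(B), 7.0, atol=1e-2)

# For c != d, X Pcs = Pds X forces X = 0: with column-major vectorization,
# vec(X Pcs - Pds X) = (Pcs.T kron I - I kron Pds) vec(X), and this operator
# is nonsingular because the spectra {c} and {d} are disjoint.
K = np.kron(Pcs.T, np.eye(s)) - np.kron(np.eye(s), Pds)
assert np.linalg.matrix_rank(K) == s * s
```

In particular a nonsingular P11 with P11Pcs = PdsP11 cannot exist, which is the contradiction used in the second part of each proof.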

Theorem 3.2.A: If A is an algebra over an algebraically closed field k, and if LA, its two-sided ideal lattice, is infinite, then A is of strongly unbounded representation type.

Proof: It is sufficient to consider only basic algebras, for an algebra satisfies the above conditions if and only if its basic algebra does. Hence let A be a basic algebra. By 3.1.F and 3.1.G, infiniteness of the two-sided ideal lattice is equivalent to the existence of a pair i,j such that φij(LA) is not a chain. If i ≠ j, then eiNej = eiAej. If i = j, then eiAei is a subalgebra of A with a single idempotent ei. If A0 is a two-sided ideal of A, φii(A0) is a two-sided ideal of eiAei. If φii(A0) contains ei, it is all of eiAei; if not, it is nilpotent and is contained in eiNei, the radical of eiAei. Hence in the case i = j the part of φii(LA) which is not a chain must be contained in eiNei. Thus, in any case, there exist two two-sided ideals A1, A2 such that φij(A1) and φij(A2) are incomparable and both are contained in eiNej. Hence there exist α1, α2 in eiNej such that α1 is in A1 and not in A2, and α2 is in A2 and not in A1. These two ideals A1, A2 and these two special elements are used in proving A is of strongly unbounded type.

Let R1 and R2 be representations of A with kernels A1 and A2 respectively. (There always exist such representations; for instance, let Rt be the regular representation of A/At.) By the choice of the elements α1 and α2 relative to the kernels A1 and A2 of R1 and R2,

(3.5)    Rt(αt) = 0,   Rt(αr) ≠ 0,   t ≠ r,   t,r = 1,2.

Assume R1 and R2 are in the diagonal form (2.10). α2 is in N, so R1(α2) is nonzero only below the diagonal. Since R1(α2) ≠ 0, there exists a top nonzero row of R1(α2), the r-th, r ≥ 2, and in that r-th row there exists a nonzero entry brt, t < r, farthest to the right. Let R1' be the representation induced from R1 by taking the square diagonal block of R1 having brt in the lower left hand corner. Since R1 is in the diagonal form (2.10), R1' is the representation induced by Vt/Vr+1 in the composition series for R1. Note that R1'(α1) is still zero. Induce a representation R2' from R2 in an analogous manner. R2'(α1) and R1'(α2) are nonzero only in the lower left corner. Now, by multiplying α1 and α2 by appropriate scalars, the nonzero entries may be assumed to be 1 in k. Both α1 and α2 were chosen in eiNej, so that by the development of Chapter II, R1' and R2' may be assumed to be in the diagonal form (2.10):

(3.6)    R1'(α) = | xj(α)                 |     R2'(α) = | xj(α)                 |
                  | P1(α)  Q1(α)          |              | P2(α)  Q2(α)          |
                  | Y1(α)  S1(α)  xi(α)   |              | Y2(α)  S2(α)  xi(α)   |

By the choice of the representations R1' and R2' and the choice of the special elements α1, α2, the following relations hold:

(3.7)    xi(αt), xj(αt), Pr(αt), Qr(αt), Sr(αt) are all zero for r,t = 1 or 2;
         Yr(αt) = 0 when t = r, 1 when t ≠ r.

Using the two representations R1' and R2', form their direct sum R1' + R2' = R'. Clearly,

(3.8)
       | xj  0   0   0   0   0  |     | xj : 0   0   0   0   0  |
       | 0   xj  0   0   0   0  |     | 0  : xj  0   0   0   0  |
R' =   | P1  0   Q1  0   0   0  |  ~  | P1 : P1  Q1  0   0   0  |
       | 0   P2  0   Q2  0   0  |     | 0  : P2  0   Q2  0   0  |
       | Y1  0   S1  0   xi  0  |     | Y1 : Y1  S1  0   xi  0  |
       | 0   Y2  0   S2  0   xi |     | 0  : Y2  0   S2  0   xi |

where ~ indicates similarity. Let R be the representation induced below and to the right of the dotted lines. Let s be an arbitrary positive integer, I the unit matrix of degree s, and Pcs a primary matrix of degree s with the single eigenvalue c. Then

(3.9)    Dcs = | I    0    0    0    0 |
               | I    I    0    0    0 |
               | I    0    I    0    0 |
               | I    I    0    I    0 |
               | Pcs  0    Pcs  0    I |

is seen to satisfy the hypotheses of Lemma 2.5.C; hence

(3.10)   RIs = | I⊗xj     0      0       0      0    |
               | I⊗P1     I⊗Q1   0       0      0    |
               | I⊗P2     0      I⊗Q2    0      0    |
               | I⊗Y1     I⊗S1   0       I⊗xi   0    |
               | Pcs⊗Y2   0      Pcs⊗S2  0      I⊗xi |

is a representation. RIs is seen to be similar to

(3.11)   | I⊗xj            0      0        0      : 0    |
         | I⊗P1            I⊗Q1   0        0      : 0    |
         | I⊗P2            0      I⊗Q2     0      : 0    |
         | I⊗Y1 + Pcs⊗Y2   I⊗S1   Pcs⊗S2   I⊗xi   : 0    |
         | ············································· |
         | Pcs⊗Y2          0      Pcs⊗S2   0      : I⊗xi |

Let Rcs be the representation induced above and to the left of the dotted lines. Rcs is now shown to contain an indecomposable direct summand of degree at least 2s. Rcs is first evaluated at the special elements α1 and α2 by means of equations (3.7):

(3.12)   Rcs(α1) = | 0    0   0 |     Rcs(α2) = | 0   0   0 |
                   | 0    0   0 |               | 0   0   0 |
                   | Pcs  0   0 |               | I   0   0 |

Let B be in the commuting algebra of Rcs. B is broken up into submatrices corresponding to the divisions of Rcs given by the solid lines in (3.11):

(3.13)   B = | B11  B12  B13 |
             | B21  B22  B23 |
             | B31  B32  B33 |

B must satisfy the commutator equation

(3.14)   Rcs(α)B = BRcs(α)   for all α in A.

In particular it must satisfy (3.14) for α = α2 and α = α1. Combining (3.12) and (3.14), it follows that

(3.15)   B11 = B33;   B12, B13, B23 are all zero;   B11Pcs = PcsB33.

By Lemma 2.5.B, matrices commuting with a primary

matrix Pcs have only one eigenvalue. Since B11 = B33, B11Pcs = PcsB33 implies that B11 has only one eigenvalue. Considering (3.15) and the form (3.13) of B, it is clear that B must have 2s equal eigenvalues. Lemma 2.5.A implies that Rcs must have an indecomposable direct summand Tcs of degree at least 2s. Since s is an arbitrary integer, A is clearly of unbounded type. This completes the first part of the proof.

It is now shown that A is of strongly unbounded type. For c ≠ d let the two representations Rcs and Rds be as in the first part of the proof. Let Tcs and Tds be the two indecomposable direct summands of degree at least 2s shown above to be in Rcs and Rds respectively. It is shown that Tcs and Tds cannot be similar. Suppose they were similar. Let V be the space of Rcs and let VT be the space of the summand Tcs. If Tcs and Tds were similar, there would exist a matrix P intertwining Rcs and Rds which, when restricted to the space VT, represents an isomorphism. It is also an isomorphism when restricted to any subspace contained in VT. A particular subspace V0 is shown to be contained in VT. Let Rcs(α2)V = V0. Equations (3.12) imply that V0 has for a basis the last s basis vectors of V, when Rcs is in the form indicated by (3.11). Let BT be the commuting matrix of Rcs which is the identity on VT and 0 on its complement in V. Then

(3.16)   BT = | B11  0    0   |
              | B21  B22  0   |
              | B31  B32  B11 |

because BT is a commuting matrix and must therefore satisfy

equations (3.15). By the choice of BT, B11 has unit eigenvalues and is nonsingular. Hence BTV0 = V0. Now apply the commutator equation Rcs(α2)BT = BTRcs(α2) to the whole space V. Clearly,

VT ⊇ Rcs(α2)VT = Rcs(α2)BTV = BTRcs(α2)V = BTV0 = V0,

so that V0 is contained in VT. Let P be the intertwining matrix mentioned above; P satisfies the equation

(3.17)   PRcs(α) = Rds(α)P.

Note that Rcs(α2) = Rds(α2), so that for α = α2, (3.17) looks like the commutator equation (3.14). Let P be divided up as B is. Then P11 = P33, and P12, P13, P23 are all zero. Now let α = α1 in (3.17); then, using equations (3.12), P33Pcs = PdsP11, or P11Pcs = PdsP11 because P11 = P33. But P cut down to V0 is represented by P11, which must be an isomorphism. So P11 is nonsingular. But then P11Pcs = PdsP11 is impossible, because Pcs and Pds have the unequal eigenvalues c and d respectively. This shows that for c ≠ d, Tcs and Tds cannot be similar.

For every c in the infinite field k there is such an indecomposable representation Tcs with degree between 2s and the degree of Rcs. Thus there exists an integer ds between 2s and the degree of Rcs such that A has an infinite number of inequivalent indecomposable representations of degree ds. Hence A is of strongly unbounded type and, by Theorem 2.3.B, so is any algebra with A for a basic algebra. This completes the proof of Theorem 3.2.A.

An interesting consequence of finiteness of the two-sided ideal lattice is given in the following theorem.

Theorem 3.2.B: If A is an algebra over an algebraically closed field, and if LA, its two-sided ideal lattice, is finite, then every two-sided ideal is principal.

Proof: If every two-sided ideal in the basic algebra is principal, then every two-sided ideal in the algebra is also principal. This follows from 2.4.C. It therefore is sufficient to consider only basic algebras. If LA is finite, 3.1.F and 3.1.G imply that φij(LA) is a chain for each of the n² lattice homomorphisms φij. Let A0 be a two-sided ideal in A. Pick αij^1 in φij(A0), and let (αij^1) be the principal two-sided ideal generated by αij^1. Either φij((αij^1)) = φij(A0), or φij((αij^1)) is properly contained in φij(A0). If the latter, pick αij^2 in φij(A0) but not in φij((αij^1)). Since φij(LA) is a chain, φij(A0) ⊇ φij((αij^2)) ⊃ φij((αij^1)). Since eiAej is finite dimensional, there exists an element αij such that

(3.18)   φij((αij)) = φij(A0).

For each pair i,j obtain such an element and let α0 be the sum of all of these. Since each αij is in φij(A0) ⊆ A0, the principal ideal (α0) is contained in A0. The n idempotents ei are orthogonal and the elements αij were picked in eiAej, so eiα0ej = αij. Hence the principal ideal (α0) contains each of the principal ideals (αij), and

(3.19)   φij((α0)) ⊇ φij((αij)) = φij(A0)

by equation (3.18). Summing equation (3.19) over all i,j implies (α0) ⊇ A0. Hence A0 is the principal ideal generated by α0.

The results of this section are referred to throughout the rest of this paper. The purpose of this paper is to classify algebras with respect to the number of inequivalent indecomposable representations; a part of that task is accomplished.

3. Commutative Algebras

The following lemma provides a tool used in the proof of Theorem 3.3.B, the main result of this section.

Lemma 3.3.A: If A is a basic algebra with a finite two-sided ideal lattice LA, then the subalgebra eiAei is the homomorphic image of a polynomial algebra over k.

Proof: eiAei has ei for its unity and only idempotent, so it can be written

(3.20)   eiAei = kei + eiNei.

By 3.1.F and 3.1.G, finiteness of the two-sided ideal lattice is equivalent to φij(LA) being a chain for every pair i,j. By the proof of Theorem 3.2.B this implies that there exists an element αi in eiNei such that the principal ideal (αi) = AαiA has the same image under φii as does N,

(3.21)   eiNei = eiAei·αi·eiAei.

Use equation (3.20) in (3.21) to obtain (3.21'):

(3.21')  eiNei = kαi + eiNei·αi + αi·eiNei + eiNei·αi·eiNei = kαi + (eiNei)².

Now use (3.21') recursively,

(3.21'') eiNei = kαi + kαi² + (eiNei)³.

Continue in this manner until (eiNei)^ti = 0. Since eiNei

is the radical of eiAei, such a ti exists. Then eiNei can be written

(3.21''') eiNei = kαi + kαi² + ··· + kαi^(ti−1).

Clearly eiNei has a basis of powers of the single element αi. Equation (3.20) then becomes

(3.20')  eiAei = kei + kαi + kαi² + ··· + kαi^(ti−1).

ei is the unity element of eiAei, and powers of αi commute. Let k[x] be the polynomial algebra over k in an indeterminate x. Let θ(x^t) = αi^t and θ(λ) = λei for λ in k, and extend linearly. θ is a ring homomorphism of k[x] onto eiAei with (x^ti) for a kernel. This completes the proof of the lemma.

The main theorem of this section can now be proved.

Theorem 3.3.B: If A is a commutative algebra, A is of strongly unbounded type if and only if LA is infinite. If LA is finite, A is of finite representation type.

Proof: Theorem 3.2.A establishes that if LA is infinite then A is of strongly unbounded type. If A is commutative, it is its own basic algebra, for any subalgebra of A isomorphic to a total matrix algebra must already be of degree one. Let ei and ej be distinct orthogonal idempotents; then eiAej = eiejA = 0. Hence A can be written

(3.22)   A = Σ (i = 1, ..., n) eiAei,

where the sum is direct in the ring sense, and the eiAei are two-sided ideals here. If R is an indecomposable

representation of A, then the kernel of R contains all the eiAei but one. For if not, R could be decomposed into a direct sum. Assume LA is finite; then by Lemma 3.3.A each eiAei is the homomorphic image of a polynomial algebra, and the indecomposable representation R can be considered as a representation of k[x]. Such representations are studied in elementary matrix theory. R(x) has a characteristic function f(x) which must equal m(x), the minimum function of the representation of x, if R is to be indecomposable. But R(x^ti) = 0, so m(x) must divide x^ti, and the degree of f(x) equals the degree of R; hence the degree of R ≤ ti. A is therefore of bounded type. Two such indecomposable representations of k[x] are known to be similar if and only if they have the same minimum function. But x^ti has only ti distinct nontrivial divisors, so eiAei has only ti distinct indecomposable representations. Hence A has only Σ (i = 1, ..., n) ti inequivalent indecomposable representations. A is therefore of finite representation type.

Contained in the above proof is the following structure theorem for commutative algebras.

Corollary 3.3.C: A commutative algebra over an algebraically closed field k is the direct sum of polynomial algebras over k if and only if it has a finite two-sided ideal lattice.

Proof: If A has an infinite two-sided ideal lattice, it is of strongly unbounded type. But polynomial algebras and direct sums of them are of finite type.
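Both halves of the preceding argument can be made concrete. The sketch below is an illustration only (the truncation index ti = 3 is an arbitrary choice): eiAei is modeled as the truncated polynomial algebra k[x]/(x^ti) of Lemma 3.3.A, and the indecomposable representations of k[x] are realized as nilpotent Jordan blocks, one for each nontrivial divisor of x^ti.

```python
import numpy as np

ti = 3   # illustrative truncation index: alpha_i^ti = 0

# Model of e_i A e_i as k[x]/(x^ti), per Lemma 3.3.A: coefficient lists
# [a0, a1, ..., a_{ti-1}], with products truncated because x^ti = 0.
def mult(p, q):
    r = [0.0] * ti
    for dp, a in enumerate(p):
        for dq, b in enumerate(q):
            if dp + dq < ti:
                r[dp + dq] += a * b
    return r

e = [1.0, 0.0, 0.0]        # the unity e_i
alpha = [0.0, 1.0, 0.0]    # the generator alpha_i of e_i N e_i
assert mult(e, alpha) == alpha
assert mult(mult(alpha, alpha), alpha) == [0.0] * ti   # alpha_i^ti = 0

# The indecomposable representations of k[x] with x^ti = 0 are the nilpotent
# Jordan blocks of degrees 1, ..., ti.  Each has minimum function x^deg equal
# to its characteristic function, so distinct degrees are inequivalent.
for deg in range(1, ti + 1):
    N = np.eye(deg, k=-1)                      # R(x) for the degree-deg block
    assert np.allclose(np.linalg.matrix_power(N, deg), 0)
    if deg > 1:
        assert not np.allclose(np.linalg.matrix_power(N, deg - 1), 0)
```

Since x^ti has exactly ti nontrivial divisors x, x², ..., x^ti, each eiAei contributes ti indecomposables, which is the count Σ ti appearing in the proof.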

4. Other Consequences of a Finite Two-Sided Ideal Lattice

A fruitful approach to the representation theory of algebras is to consider subalgebras generated by one or a few elements. The commutator algebra of a representation of the subalgebra is given by the set of matrices which commute with each of the generators of the subalgebra. This is because a matrix commuting with two matrices also commutes with all linear combinations of products of the two, that is, with the subalgebra they generate. In the previous section, it was the fact that eiNei was a subalgebra generated by a single element that led to the proof of Theorem 3.3.B. The following two lemmas show that finiteness of the two-sided ideal lattice allows the choice of a certain few elements of the basic algebra A which generate A in the subalgebra sense. Then, to find the commutator algebra of a representation of A, it is necessary only to look at the commutator algebra of the images of these few elements.

Lemma 3.4.A: If A is a basic algebra and if LA is finite, then there exists α0 in N, the radical of A, such that the subalgebra generated by the idempotents e1, ..., en and α0 is all of A.

Proof: Let αi be chosen as in Lemma 3.3.A such that powers of αi generate eiNei. Since N is a two-sided ideal in A, there exists αij in eiNej for which φij((αij)) equals eiNej. αij is chosen as in Theorem 3.2.B. Then

(3.23)   eiNej = eiAei·αij·ejAej.

But by the choice of the αi, eiAei has a basis consisting of ei and powers of αi, and similarly for ejAej. Then

(3.23) can be written

(3.23')  eiNej = Σ (t,r) k·αi^t·αij·αj^r,   where αi^0 = ei,

and the sum on the right is not necessarily direct. Let α0 = Σ (i ≠ j) αij + Σ (i = 1, ..., n) αi, and let S be the subalgebra generated by α0 and the n orthogonal idempotents ei. Orthogonality of the idempotents implies eiα0ej = αij if i ≠ j, and eiα0ei = αi. Hence the αij and the αi are in S. By the choice of the αi, eiAei is in S, and by (3.23'), eiNej, which equals eiAej for i ≠ j, is also in S. But the sum of all of these is A, so S = A.

A further refinement of this is given in the following lemma.

Lemma 3.4.B: If LA is finite, then there exist two elements α0, α1 such that the subalgebra generated by them is all of A.

Proof: Let α0 be chosen as in the previous lemma. Let c1, ..., cn be distinct elements of the field k, and let α1 = Σ (i = 1, ..., n) ciei. Let f(x) be a polynomial in an indeterminate x with coefficients in the field k. Since the idempotents are orthogonal,

f(α1) = Σ (i = 1, ..., n) f(ci)ei.

Then let

fi(x) = Π (j ≠ i) (x − cj)/(ci − cj).
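This interpolation step can be checked numerically. The sketch below is an illustration only (n = 3 and the field elements ci = 1, 2, 3 are arbitrary choices): α1 is realized as a diagonal matrix with the orthogonal idempotents ei as diagonal unit blocks, and fi(α1) is evaluated by the product formula just given.

```python
import numpy as np

c = [1.0, 2.0, 3.0]          # distinct field elements c_1, ..., c_n
n = len(c)
alpha1 = np.diag(c)          # alpha_1 = sum of c_i e_i

def f(i, M):
    # f_i(M) = product over j != i of (M - c_j I) / (c_i - c_j)
    out = np.eye(n)
    for j in range(n):
        if j != i:
            out = out @ (M - c[j] * np.eye(n)) / (c[i] - c[j])
    return out

for i in range(n):
    e_i = np.zeros((n, n))
    e_i[i, i] = 1.0
    assert np.allclose(f(i, alpha1), e_i)    # f_i(alpha_1) = e_i
```

Each fi kills every eigenvalue but ci and normalizes that one to 1, which is exactly why the subalgebra generated by α1 contains all the idempotents.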

It follows that fi(α1) = ei. So the subalgebra generated by α0, α1 contains each of the idempotents ei. Lemma 3.4.A then gives the result.

This chapter ends with a lemma which indicates that infiniteness of the two-sided ideal lattice is a generalization, in the case the field is algebraically closed, of the hypothesis of Theorem 1.2.C.

Lemma 3.4.C: If A has a representation

R = | Fj              |
    | P    Q          |
    | Y1   S1   Fi    |
    | Y2   S2   0   Fi |

a top constituent of a representation induced by a left ideal Ae, where e is a primitive idempotent, and Y1 has power s1, Y2 has power s2, s2 > s1, and Q has no Fj appearing in the top s2 − s1 Loewy constituents of the upper Loewy series for Q, then A has an infinite two-sided ideal lattice.

Proof: Starting with the primitive idempotent e, it is possible to pick a set of matrix units for a semisimple subalgebra A' of A such that e11^(j) = e. Going over to the basic algebra Ā of A, e corresponds to ej. The representation of A induced by Ae goes over into a representation of Ā induced by Āej. The top constituent R' corresponding to R can be taken to be

(3.24)   R' = | xj                |
              | P'   Q'           |
              | Y1'  S1'  xi      |
              | Y2'  S2'  0   xi  |

R has composition factors that induce irreducible representations Ft, and R' has corresponding composition factors xt. That no Fj appears in the top s2 − s1 constituents of the upper Loewy series for Q means that no xj appears in the top s2 − s1 constituents of the corresponding series for Q'. Then ejNej = ejN^(s2−s1+1)ej, for if ejNej/ejN^(s2−s1+1)ej were not zero, there would exist an xj in the top s2 − s1 Loewy blocks of Q'. By the definition of "power", Y1' having power s1 means Y1'(α) = 0 for all α in N^(s1+1), but there exists α1 in eiN^(s1)ej such that Y1'(α1) ≠ 0. Since R' was induced by Āej, the first column of R' is completely independent. That is, α1 can be chosen so that Y2'(α1), P'(α1), xj(α1) are all zero. Similarly, Y2' has power s2, so α2 can be chosen in eiN^(s2)ej such that Y2'(α2) ≠ 0 but xj(α2), P'(α2), Y1'(α2) are all zero.

Define A1 = Aα1A, A2 = kα2 + N^(s2+1). By the choice of α2, A2 is a two-sided ideal. A1 obviously is too. Y1'(α) = 0 for all α in A2, because Y1'(α2) = 0 and Y1'(α) = 0 for all α in N^(s2+1) ⊆ N^(s1+1). But Y1'(α1) ≠ 0. Hence

(3.25)   φij(A2) does not contain φij(A1),

for if φij(A2) contained φij(A1), Y1'(α1) would have to be zero.

Form φij(A1) = eiAei·α1·ejAej. By the remark above, ejAej = kej + ejN^(s2−s1+1)ej. Use this in the above equation:

(3.26)   φij(A1) = eiAei·α1 + eiAei·α1·ejN^(s2−s1+1)ej.

The right-hand summand is in N^(s2+1), because α1 is in N^(s1); so Y2'(α) = 0 for α in the right-hand summand. Also,

(3.27)   R'(α1) = | 0             |
                  | 0   0         |
                  | *   0   0     |
                  | 0   0   0   0 |

where only the Y1' entry, marked *, can be nonzero. By looking at R' in (3.24) it is clear that

(3.28)   R'(α)R'(α1) = | 0             |
                       | 0   0         |
                       | *   0   0     |
                       | 0   0   0   0 |

for all α in A; in particular, the entry in the Y2' position is zero. Therefore Y2'(α) = 0 for all elements α in eiAei·α1, the left summand of (3.26). But then Y2'(α) = 0 for all α in φij(A1), and Y2'(α2) ≠ 0. Hence

(3.29)   φij(A1) does not contain φij(A2),

for if φij(A1) contained φij(A2), Y2'(α2) would also be zero. Expressions (3.25) and (3.29) imply that the image of LA under φij is not a chain. Then 3.1.F and 3.1.G imply that the basic algebra Ā has an infinite two-sided ideal lattice LĀ. Theorem 2.4.B implies that A also has an infinite two-sided ideal lattice.

CHAPTER IV

1. Left and Right Ideals

Although infiniteness of the two-sided ideal lattice is a sufficient condition for an algebra to be of strongly unbounded type, it is not, in general, a necessary one. In this chapter another condition is given which implies that an algebra is of strongly unbounded type. The condition given in this chapter is a generalization of Brauer's second condition stated in Chapter I. As before, the underlying field k of the algebra A is assumed to be algebraically closed. It is then possible to center attention on basic algebras, and the results in this chapter are stated in terms of basic algebras. In this and the following chapter, consideration is restricted to ideals in the radical N of A. Designate by LN the lattice of two-sided ideals of A that are contained in N.

Definition 4.1.A: For a two-sided ideal A0 in the radical N of a basic algebra A, let φi(A0) = A0ei = A0 ∩ Nei. (Let iφ(A0) = eiA0 = A0 ∩ eiN.)

Lemma 4.1.B: φi (iφ) is a lattice homomorphism of LN into the lattice of left (right) ideals in Nei (eiN).

Proof: (A1 + A2)ei = A1ei + A2ei, and (A1 ∩ A2) ∩ Nei = (A1 ∩ Nei) ∩ (A2 ∩ Nei), so φi preserves both + and ∩. (Similarly for iφ.)

Theorem 4.1.C: If, for some i and some two-sided ideal A0 in N, φi(A0) [iφ(A0)] has more than three covers in φi(LN) [iφ(LN)], then A is of strongly unbounded type.

Proof: The proof given here is in terms of the lattice homomorphism φi; a similar proof holds for iφ. Let φi(At) cover φi(A0) in Nei for t = 1,2,3,4. Assume LN is distributive, for if not, A is already of strongly unbounded type. The sublattice L0 of φi(LN) generated by these elements is complemented (every element in it is a unique sum of covers of φi(A0)), so L0 is a Boolean algebra. Let At' = Σ (m ≠ t) Am; then φi(At') = Σ (m ≠ t) φi(Am) is the complement of φi(At) in L0. Let αt be picked in φi(At) but not in φi(A0). If ejαt were in φi(A0) for all j, then αt = Σ (j = 1, ..., n) ejαt would be there also. So there exists an idempotent ei(t) such that αt* = ei(t)αt is in φi(At) but not in φi(A0). Also ei(t)αt* = αt*, because ei(t) is an idempotent. By the choice of αt* and the definition of At', αt* is not in At', but αt* is in Am' for m ≠ t.

Let Rt be a representation with kernel At'. By the choice of the special elements αm* with respect to the kernels At', it is clear that Rm(αt*) = 0 for t ≠ m. Moreover, for each t, Rt(αt*) ≠ 0. From Rt induce a representation Rt' in which Rt'(αt*) is nonzero only in the lower left hand corner. Since each αt* is in the radical, this can always be done. Multiply αt* by a suitable scalar in k so that the nonzero entry in Rt'(αt*) is 1 in k.

By the choice of αt* in ei(t)Nei, each Rt' is in the form

(4.1)    Rt'(α) = | xi(α)                      |
                  | Pt(α)    Qt(α)             |
                  | Yt(α)    St(α)    xi(t)(α) |

The special elements αm* were selected in such a way that

(4.2)    Pt(αm*), Qt(αm*), St(αm*), xi(αm*), xi(t)(αm*) are all zero for all m,t,
         but Yt(αm*) = δtm.

Now form the representation R, which is seen to be similar to the direct sum R1' + R2' + R3' + R4'. In R the four top constituents xi are brought together, and the dotted lines cut off all but one copy of xi. The representation R' induced below and to the right of the dotted lines is

R' = | xi                                            |
     | P1   Q1                                       |
     | P2   0    Q2                                  |
     | P3   0    0    Q3                             |
     | P4   0    0    0    Q4                        |
     | Y1   S1   0    0    0    xi(1)                |
     | Y2   0    S2   0    0    0     xi(2)          |
     | Y3   0    0    S3   0    0     0     xi(3)    |
     | Y4   0    0    0    S4   0     0     0   xi(4) |

Let s be an integer, c an element of k. Let I2s be the unit matrix of degree 2s, I the unit matrix

of degree s. Denote by Pcs a primary matrix of degree s with the single repeated eigenvalue c. Form from these matrices the matrix

Dcs = | I2s                                              |
      | (I,0)    I                                       |
      | (0,I)    0    I                                  |
      | (I,I)    0    0    I                             |
      | (I,Pcs)  0    0    0    I                        |
      | (I,0)    I    0    0    0    I                   |
      | (0,I)    0    I    0    0    0    I              |
      | (I,I)    0    0    I    0    0    0    I         |
      | (I,Pcs)  0    0    0    I    0    0    0    I    |

This is seen to satisfy the hypothesis of Lemma 2.5.C. Then, using the construction given in that lemma, form the representation Rcs from the representation R' and the matrix Dcs:

(4.3)  Rcs = | I2s⊗xi                                                           |
             | (I,0)⊗P1    I⊗Q1                                                 |
             | (0,I)⊗P2    0      I⊗Q2                                          |
             | (I,I)⊗P3    0      0      I⊗Q3                                   |
             | (I,Pcs)⊗P4  0      0      0      I⊗Q4                            |
             | (I,0)⊗Y1    I⊗S1   0      0      0      I⊗xi(1)                  |
             | (0,I)⊗Y2    0      I⊗S2   0      0      0       I⊗xi(2)          |
             | (I,I)⊗Y3    0      0      I⊗S3   0      0       0       I⊗xi(3)  |
             | (I,Pcs)⊗Y4  0      0      0      I⊗S4   0       0       0  I⊗xi(4) |

It is now shown that every commuting matrix B of Rcs has at least 6s equal eigenvalues. Let B be divided up as follows.

(4.4)    B = | B11    B12    B13   ...   B1,10  |
             | B21    B22    B23   ...   B2,10  |
             | B31    B32    B33   ...   B3,10  |
             |  ...                             |
             | B10,1  B10,2  ...         B10,10 |

where the pair of rows (B11 B12; B21 B22) corresponds to I2s⊗xi, the row (B31 B32) corresponds to (I,0)⊗P1, etc.; below and to the left of the dotted lines, the Bμν correspond to the similarly placed entries below and to the left of the dotted lines in Rcs. If B is to be a commuting matrix, B must satisfy

(4.5)    Rcs(α)B = BRcs(α)   for all α in A.

Consider equation (4.5) when α = α1*. Using equations (4.1) and (4.2) to evaluate (4.3) for α = α1* and substituting in (4.5), it follows that

(4.6)    B11 = B77, and B1ν, Bμ7 are zero for ν ≠ 1, μ ≠ 7.

Now let α = α2* and again evaluate Rcs(α) by means of relations (4.2). Equation (4.5) then implies

(4.7)    B22 = B88, and B2ν, Bμ8 are zero for ν ≠ 2, μ ≠ 8.

Let α = α3*, evaluate (4.3), and substitute in (4.5). By equations (4.6) and (4.7), the only nonzero entries in the first two rows of B are B11 and B22. It then follows that

(4.8)    B11 = B99, B22 = B99, and Bμ9 are zero for μ ≠ 9.

Finally, let α = α4* and consider (4.5) again to obtain

(4.9)    B11 = B10,10, PcsB22 = B10,10Pcs, and Bμ,10 are zero for μ ≠ 10.

Consolidating equations (4.6) through (4.9), B must have the form

(4.10)   B = | B11  0    0   0   0   0   0    0    0    0   |
             | 0    B11  0   0   0   0   0    0    0    0   |
             | *    *    *   *   *   *   0    0    0    0   |
             | *    *    *   *   *   *   0    0    0    0   |
             | *    *    *   *   *   *   0    0    0    0   |
             | *    *    *   *   *   *   0    0    0    0   |
             | *    *    *   *   *   *   B11  0    0    0   |
             | *    *    *   *   *   *   0    B11  0    0   |
             | *    *    *   *   *   *   0    0    B11  0   |
             | *    *    *   *   *   *   0    0    0    B11 |

and equation (4.9) also implies that B11 has a single eigenvalue. Thus B must have 6s equal eigenvalues and, by Lemma 2.5.A, Rcs must have an indecomposable direct summand Tcs of degree at least 6s. A is of unbounded type, and so is any algebra that has A for a basic algebra.

Continuing with the proof according to the scheme given in Chapter III, it is now shown that two such indecomposable direct summands Tcs and Tds cannot be equivalent for d ≠ c. Let V be the space of Rcs and let VT be the subspace corresponding to Tcs. Let Rcs(α4*)V be defined to be V0. From the form of Rcs(α4*), V0 is an A-subspace of V, and V0 has for a basis the last s basis vectors of V when Rcs is in the form (4.3). It must be shown that V0 is a subspace of VT.
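The chain of constraints (4.6) through (4.9) can be replayed by direct computation. The sketch below is a numerical illustration only (s = 2 is an arbitrary choice): it forms the four matrices Rcs(αt*) in the ten-block scheme of (4.4), computes every B commuting with all four, and confirms that the six diagonal blocks B11, B22, B77, B88, B99, B10,10 coincide and commute with Pcs.

```python
import numpy as np

s = 2
I, Pcs = np.eye(s), 2.0 * np.eye(s) + np.eye(s, k=-1)   # primary matrix
m = 10 * s                                              # ten s-by-s block rows

def E(p, q, X):
    # m-by-m matrix with block X in block position (p, q), 1-indexed.
    M = np.zeros((m, m))
    M[(p - 1) * s:p * s, (q - 1) * s:q * s] = X
    return M

# Rcs at the four special elements, per (4.2) and (4.3):
Ms = [E(7, 1, I),                      # alpha_1*: (I,0) in the Y1 row
      E(8, 2, I),                      # alpha_2*: (0,I) in the Y2 row
      E(9, 1, I) + E(9, 2, I),         # alpha_3*: (I,I) in the Y3 row
      E(10, 1, I) + E(10, 2, Pcs)]     # alpha_4*: (I,Pcs) in the Y4 row

# Null space of the stacked operators vec(MB - BM), column-major vec.
K = np.vstack([np.kron(np.eye(m), M) - np.kron(M.T, np.eye(m)) for M in Ms])
_, sv, Vt = np.linalg.svd(K)
null = Vt[np.sum(sv > 1e-10):]

def blk(B, p, q):
    return B[(p - 1) * s:p * s, (q - 1) * s:q * s]

assert len(null) > 0
for v in null:
    B = v.reshape(m, m, order='F')
    for p in (2, 7, 8, 9, 10):                 # six equal diagonal blocks
        assert np.allclose(blk(B, 1, 1), blk(B, p, p))
    assert np.allclose(blk(B, 1, 1) @ Pcs, Pcs @ blk(B, 1, 1))
```

Since B11 commutes with Pcs it has a single eigenvalue, so every commuting matrix has 6s equal eigenvalues, as claimed.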

Let BT be the matrix of the linear transformation of V which is the identity on VT and zero on its complement in V, BTV = VT. BT commutes with Rcs(α) for all α. Then BT has the form (4.10), and the part of BT corresponding to B11 in (4.10) has unit eigenvalues. This means BT cut down to V0 is an isomorphism of V0 onto itself, BTV0 = V0. Now apply the commutator equation BTRcs(α4*) = Rcs(α4*)BT to V itself:

VT ⊇ Rcs(α4*)VT = Rcs(α4*)BTV = BTRcs(α4*)V = BTV0 = V0;

hence VT ⊇ V0.

Using this subspace V0, it is shown that Tcs and Tds cannot be equivalent. Suppose Tcs and Tds were similar. Then there would exist a matrix P intertwining Rcs and Rds which, when cut down to VT, is an isomorphism. P is also an isomorphism when cut down to the subspace V0 in VT. P satisfies the intertwining equation

(4.11)   PRcs(α) = Rds(α)P   for all α in A.

Using equations (4.1) and (4.2) in (4.3), it is clear that Rcs(αt*) = Rds(αt*) for t = 1,2,3. Hence for α = αt*, t = 1,2,3, equation (4.11) is identical with equation (4.5), with P replacing B. Let P be divided up according to the same scheme as B. Then equations (4.6), (4.7), (4.8) hold with Pμν replacing Bμν. From these it follows that

(4.12)   P11 = P22 = P77 = P88 = P99.

Finally, let α = α4* in (4.11); using (4.1) and (4.2) it is clear that Pμ,10 are zero for μ ≠ 10 and

(4.13)   P11 = P10,10,   P11Pcs = PdsP11.

It follows that P has the form of B in (4.10). But P cut down to V0 is represented by P10,10 = P11, which must be an isomorphism, so P11 is nonsingular. However, the equation P11Pcs = PdsP11 is then seen to be impossible, for Pcs and Pds have the distinct eigenvalues c and d respectively. Hence for c ≠ d, Tcs cannot be similar to Tds.

Since the field is infinite, there exist an infinite number of inequivalent indecomposable representations with degrees between 6s and the degree of Rcs. It follows that A is of strongly unbounded type. By the development in Chapter II, any algebra which has A for a basic algebra is also of strongly unbounded type.

The following is a corollary to Theorem 4.1.C.

Corollary 4.1.D: If φi(LN) [iφ(LN)] contains a sublattice which is a Boolean algebra with more than 2³ elements, then A is of strongly unbounded type.

Proof: The Stone Representation Theorem, as given in Birkhoff [2], implies that Boolean algebras are fields of subsets of a certain set. Thus the orders of finite Boolean algebras are 2^n, n an integer. If the hypothesis of this corollary holds, then φi(LN) [iφ(LN)] has a sublattice which is a Boolean algebra with 2⁴ elements. The smallest element

in that sublattice has four covers in φj(LN) (or iφ(LN)), so by Theorem 4.1.C, A is of strongly unbounded type, and so is any algebra which has A for a basic algebra.

Still another statement of this type of condition is given in terms of the graphs developed in Chapter V.
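The counting step in Corollary 4.1.D can be illustrated concretely. By the Stone theorem a Boolean algebra with 2^4 elements may be modeled as the power set of a four-element set; the sketch below (Python, with hypothetical data, not part of the original text) checks that the smallest element of that lattice has exactly four covers, which is what Theorem 4.1.C requires.

```python
from itertools import combinations

# Model of a Boolean algebra with 2^4 elements: the power set of a
# 4-element set, ordered by inclusion (Stone representation).
base = (1, 2, 3, 4)
elements = [frozenset(c) for r in range(len(base) + 1)
            for c in combinations(base, r)]
assert len(elements) == 2 ** 4  # orders of finite Boolean algebras are 2^n

def covers(lower, lattice):
    """Elements properly above `lower` with nothing strictly between."""
    above = [x for x in lattice if lower < x]
    return [x for x in above if not any(lower < y < x for y in above)]

# The smallest element has exactly four covers: the four singletons.
bottom_covers = covers(frozenset(), elements)
print(sorted(sorted(c) for c in bottom_covers))  # prints [[1], [2], [3], [4]]
```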

CHAPTER V

1. Graphs

In this chapter two additional sufficient conditions are given which imply that an algebra A is of strongly unbounded type. These conditions are given in terms of a graph which is defined in the following development. Throughout this chapter only basic algebras are considered, although Lemma 5.1.A is true in general.

Let A be a basic algebra and let A1 and A2 be two two-sided ideals of A. Let φij and LA be defined as in Chapter III.

Lemma 5.1.A: If A1 covers A2, then NA1 ⊆ A2 and A1N ⊆ A2.

Proof: NA1 is a two-sided ideal contained in A1. Then NA1 + A2 ⊆ A1. Suppose equality holds in the previous expression. Applying N and adding A2, N^2A1 + NA2 + A2 = NA1 + A2 = A1; since NA2 ⊆ A2, this gives N^2A1 + A2 = A1. Continuing in this manner, N^rA1 + A2 = A1 implies N^2rA1 + A2 = A1. But for some integer r0, N^r0 = 0. This would imply A2 = A1, contrary to the initial assumption. Hence NA1 + A2 ⊂ A1, proper inclusion. Therefore NA1 ⊆ A2, because A1 covers A2. The same argument, with right multiplication by N replacing left, shows A1N ⊆ A2.

Lemma 5.1.B: If A1 covers A2, then there exists exactly one pair i,j such that φij(A1) covers φij(A2).

Proof: There exists at least one pair i,j for which φij(A1) properly contains φij(A2). For if φij(A1) equals

φij(A2) for all i,j, the sums over all i,j of these are also equal, so A1 = A2. Let αij be chosen in φij(A1) not in φij(A2). Let As be kαij + A2. Clearly etAs ⊆ As and Aset ⊆ As for all idempotents et in A. By Lemma 5.1.A, Nαij ⊆ A2 and αijN ⊆ A2, because αij was chosen in A1, a cover of A2. Hence As is a two-sided ideal covering A2 and contained in A1. The hypothesis implies that As = A1. Clearly φij(A1) covers φij(A2), since their quotient is one dimensional. Also, from the form of As = A1, it is clear that φpr(A1) = φpr(A2) for all pairs p,r not equal to i,j. This completes the proof of the lemma.

Throughout the remainder of this chapter, only basic algebras with a finite two-sided ideal lattice LA will be considered. According to 3.1.F and 3.1.G, LA is distributive and φij(LA) is a chain for every pair i,j. Also, by 3.2.B, every two-sided ideal in A is principal. Of primary importance in this chapter is the sublattice LN of LA consisting of the two-sided ideals of A which are contained in the radical N. φij(LN) is also a chain for each pair i,j.

Let A0 be a two-sided ideal in LN and let A1,...,Aq be all the covers of A0 in LN. Corresponding to the two-sided ideal A0, construct the oriented graph G(A0) as follows. Let P1,...,Pn be n vertices and let Pi → Pj, a binary relation, hold if for some cover Ap of A0, φij(Ap) covers φij(A0). Recall the definitions of the terms used in describing graphs in Chapter I. Lemma 5.1.B insures that there exists exactly one edge for each cover of A0.
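The construction of G(A0) can be sketched in code. The cover data below are hypothetical (the text works no numerical example): each cover Ap is tagged with the unique pair (i,j) supplied by Lemma 5.1.B, and that pair contributes one edge Pi → Pj.

```python
# Hypothetical cover data: cover name -> the unique pair (i, j) such that
# phi_ij(A_p) covers phi_ij(A_0)  (Lemma 5.1.B).
cover_pairs = {"A1": (1, 2), "A2": (3, 2), "A3": (2, 4)}

def build_graph(cover_pairs):
    """Oriented graph G(A_0): exactly one edge P_i -> P_j per cover of A_0."""
    edges = [("P%d" % i, "P%d" % j) for (i, j) in cover_pairs.values()]
    vertices = sorted({v for edge in edges for v in edge})
    return vertices, edges

def order(vertex, edges):
    """Order of a vertex: the number of edges incident to it (cf. Theorem 5.4.A)."""
    return sum(vertex in edge for edge in edges)

vertices, edges = build_graph(cover_pairs)
print(edges)               # [('P1', 'P2'), ('P3', 'P2'), ('P2', 'P4')]
print(order("P2", edges))  # 3
```

With this encoding, a vertex of order four or more in `edges` is exactly the situation of Theorem 5.4.A below.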

For each edge in the graph G(A0) a special element in the algebra can be selected and a special representation constructed, the properties of which are described in the following lemma.

Lemma 5.1.C: For the graph G(A0) there exist special elements α1,...,αq in A and representations R1,...,Rq of A, one special element and one representation for each edge in the graph, such that:

a. If Pi → Pj represents the pth edge in G(A0), then

             ( xj(α)    0       0    )
    Rp(α) =  ( Pp(α)   Qp(α)    0    )
             ( yp(α)   Sp(α)   xi(α) )

is the representation Rp.

b. The special element αp for that edge is chosen in eiNej.

c. Rp(αr) = 0 if r ≠ p; Rp(αp) has only a 1 in the lower left corner, the rest is zero.

Proof: LN, a sublattice of a distributive lattice, is distributive, so the covers of A0 generate a sublattice L that is a Boolean algebra. The complement Āt of At in L is given by Σ(r≠t) Ar. Let αt be picked in φij(At) not in φij(A0). By the proof of Lemma 5.1.B, At = kαt + A0. Clearly αt is not in Āt, the complement of At in L, for if it were then At would be in Āt. Let R̄t be a representation with kernel Āt. R̄t(αt) ≠ 0 because αt is not in Āt, but αr, for r ≠ t, is in Ar and hence in Āt, so R̄t(αr) = 0.

As in Chapters III and IV, induce a representation Rt out of R̄t such that Rt(αt) is zero except in the lower left corner, the entry there being bt in k. Note that Rt(αr) for r ≠ t, being part of R̄t(αr), is still zero. Now replace αt by (1/bt)αt. Clearly, the Rt and αt thus defined satisfy the conclusions of the lemma.

A fact concerning cycles which is used later in this chapter is given in the following lemma.

Lemma 5.1.D: If the graph G(A0) has a chain C with a repeated edge, then G(A0) contains a cycle.

Proof: If Pi → Pj appears in a chain, Pi must appear on one side of it and Pj on the other. There are two cases to consider, depending on whether or not Pi is on the same side of Pi → Pj each time Pi → Pj appears.

In the first case the chain C is ..., Pi, Pi → Pj, Pj, ..., Pi, Pi → Pj, Pj, .... Let C0 be the chain obtained from C by cutting off C before the first Pj and after the second Pj. Examine C0 followed by C0 with the Pj's identified. It is clear that the orientation of successive edges alternates. That the edges Pi → Pj and Pk → Pj (the edge following the first Pj) are distinct is true because they appear in succession in the chain C0. Hence C0 is a cycle.

In the second case the chain C is ..., Pi, Pi → Pj, Pj, Pk → Pj, Pk, ..., Pr, Pr → Pj, Pj, Pi → Pj, Pi, .... Let Pi → Pj be the repeated edges closest together; then Pk → Pj can be assumed not equal to Pr → Pj. Let C0 be the chain C cut off before the first Pj and after the second Pj.

Examine C0 followed by C0 with the Pj's identified. The orientation of successive edges alternates and, by the remark above, the successive edges Pk → Pj and Pr → Pj are distinct. C0 is therefore a cycle. This completes the proof of 5.1.D.

2. Graph with Cycle

The following theorem gives a third sufficient condition that a basic algebra be of strongly unbounded type. This condition is described in terms of the graphs investigated in the previous section.

Theorem 5.2.A: If the graph G(A0) associated with any two-sided ideal A0 ⊆ N has a cycle, then A is of strongly unbounded type.

Proof: Let C be a cycle:

    C = Pi1, Pi1 → Pj1, Pj1, Pi2 → Pj1, Pi2, Pi2 → Pj2, Pj2, ..., Pir, Pir → Pjr, Pjr, Pi1 → Pjr, Pi1.

The proof of Lemma 5.1.D insures that all the edges may be taken to be distinct. Let R11, R21, ..., Rrr, R1r be the representations associated with the 2r distinct edges of C by Lemma 5.1.C,

                ( xjν(α)     0        0      )
(5.1)  Rμν(α) = ( Pμν(α)   Qμν(α)     0      ),    (μ,ν) = (1,1), (2,1), ..., (r,r), (1,r).
                ( yμν(α)   Sμν(α)   xiμ(α)  )

From the submatrices of these Rμν construct a matrix function Rcs of A,

             ( XT   0   0  )
(5.2)  Rcs = ( P    Q   0  )
             ( Y    S   XB )

as follows.

Let XT(α) be the direct sum of Is ⊗ xjν(α) for ν = 1,...,r and let XB(α) be the direct sum of Is ⊗ xiμ(α) for μ = 1,...,r, where ⊗ means Kronecker product and Is is an identity matrix of degree s. Let Q(α) be the direct sum of Is ⊗ Qμν(α) for the 2r Q's. By Lemma 2.5.C, XT(α), XB(α) and Q(α) are all representations of A.

Let P(α) have Is ⊗ Pμν(α) directly below Is ⊗ xjν in XT and directly to the left of Is ⊗ Qμν in Q for (μ,ν) = (1,1), (2,1), ..., (r,r). Directly below Is ⊗ xjr and directly to the left of Is ⊗ Q1r put Pcs ⊗ P1r(α), where Pcs is the primary matrix with eigenvalue c. Fill out the rest of P with zeros.

Let S(α) have Is ⊗ Sμν(α) directly below Is ⊗ Qμν in Q and directly to the left of Is ⊗ xiμ in XB for (μ,ν) = (1,1), (2,1), ..., (r,r), (1,r). Fill out the rest of S(α) with zeros.

Let Y(α) have Is ⊗ yμν(α) directly below Is ⊗ xjν in XT and directly to the left of Is ⊗ xiμ in XB for (μ,ν) = (1,1), (2,1), ..., (r,r). Directly below Is ⊗ xjr and directly to the left of Is ⊗ xi1 put Pcs ⊗ y1r(α). Fill out the rest of Y(α) with zeros. The form of the block Y(α) will play an important part in the proof of the theorem:

              ( Is ⊗ y11                                      Pcs ⊗ y1r )
              ( Is ⊗ y21   Is ⊗ y22                                     )
(5.3)  Y(α) = (               ⋱          ⋱                              )
              (                    Is ⊗ yr,r-1    Is ⊗ yrr              )
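Two matrix facts drive this construction and can be checked numerically: a primary matrix Pcs has the single eigenvalue c (modeled below, as an assumption of this sketch, by a Jordan block), and Kronecker products obey the multiplication rule (M ⊗ A)(N ⊗ B) = MN ⊗ AB that is used block by block in the proof. The blocks A and B stand in for values of the xj's and are hypothetical.

```python
import numpy as np

s, c = 3, 2.0
I_s = np.eye(s)
# Primary matrix P_cs with eigenvalue c, modeled as an s x s Jordan block.
P_cs = c * I_s + np.eye(s, k=1)
assert np.allclose(np.linalg.eigvals(P_cs), c)  # s equal eigenvalues

# Stand-in blocks for x_j(alpha) and x_j(beta); the values are arbitrary.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

plain = np.kron(I_s, A)     # I_s (x) A: s diagonal copies of A
twisted = np.kron(P_cs, A)  # P_cs (x) A: the corner block carrying c

# Multiplication rule for Kronecker products: (M (x) A)(N (x) B) = MN (x) AB.
lhs = np.kron(P_cs, A) @ np.kron(I_s, B)
rhs = np.kron(P_cs @ I_s, A @ B)
assert np.allclose(lhs, rhs)
```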

It is now shown that Rcs is a representation of A with an indecomposable direct summand of degree at least 2rs. It must first be shown that Rcs is indeed a representation of A. It certainly is an additive matrix function of A, because all of the blocks that went into its construction are. Examine the expression

(5.4)    Rcs(αβ) − Rcs(α)Rcs(β)

block by block. As was noted before, the diagonal blocks of Rcs are themselves representations, so (5.4) can be nonzero only in the positions P, S, and Y. The expression

(5.5)    P(α)XT(β) + Q(α)P(β)

appears in the position P of Rcs(α)Rcs(β). It has

    [Is ⊗ Pμν(α)][Is ⊗ xjν(β)] + [Is ⊗ Qμν(α)][Is ⊗ Pμν(β)],

which equals Is ⊗ [Pμν(α)xjν(β) + Qμν(α)Pμν(β)], directly below Is ⊗ xjν and directly to the left of Is ⊗ Qμν in Q for (μ,ν) = (1,1), ..., (r,r). But this expression is Is ⊗ Pμν(αβ), from the rule for Pμν(αβ) given by the form of Rμν in (5.1). Below Is ⊗ xjr and to the left of Is ⊗ Q1r in Rcs(α)Rcs(β) is

    [Pcs ⊗ P1r(α)][Is ⊗ xjr(β)] + [Is ⊗ Q1r(α)][Pcs ⊗ P1r(β)],

which equals Pcs ⊗ [P1r(α)xjr(β) + Q1r(α)P1r(β)] by the multiplication rule for Kronecker products. But this last expression is seen to be the corresponding entry in P(αβ). Thus Rcs(αβ) − Rcs(α)Rcs(β) is zero in all the positions corresponding to P(α).
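The "rule for Pμν(αβ)" invoked here is nothing more than the lower-left block of a product of block lower triangular matrices. A small numpy check with arbitrary stand-in blocks (not data from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
X1, P1, Q1, X2, P2, Q2 = (rng.standard_normal((n, n)) for _ in range(6))

def lower_triangular(X, P, Q):
    """Block matrix of the shape of R in (5.1), with the bottom row omitted."""
    Z = np.zeros((n, n))
    return np.block([[X, Z], [P, Q]])

product = lower_triangular(X1, P1, Q1) @ lower_triangular(X2, P2, Q2)

# Lower-left block of the product is P1 X2 + Q1 P2 -- so if R is
# multiplicative, P(alpha beta) = P(alpha) x(beta) + Q(alpha) P(beta).
assert np.allclose(product[n:, :n], P1 @ X2 + Q1 @ P2)
assert np.allclose(product[:n, :n], X1 @ X2)
```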

By a repetition of the same methods, it is shown that S(α)Q(β) + XB(α)S(β) − S(αβ) = 0 and that Y(α)XT(β) + S(α)P(β) + XB(α)Y(β) − Y(αβ) = 0. Then the expression (5.4) is zero and Rcs is a representation of A.

Recall that Rcs was constructed out of the representations Rμν, (μ,ν) = (1,1), ..., (1,r), associated with the 2r edges of the cycle C by Lemma 5.1.C. This same lemma also showed the existence of 2r special elements in N associated with those edges. Evaluate Rcs at the special element αμν associated with Piμ → Pjν in C. Rcs(αμν) is zero everywhere except in Y, directly below Is ⊗ xjν in XT and directly to the left of Is ⊗ xiμ in XB. In that nonzero position is Is if (μ,ν) = (1,1), ..., (r,r), or Pcs if (μ,ν) = (1,r). All of this follows from the properties of the Rμν of Lemma 5.1.C and the construction of Rcs.

Now let B be in the commutator algebra of Rcs,

            ( B11  B12  B13 )
(5.6)   B = ( B21  B22  B23 )
            ( B31  B32  B33 )

where B is divided up to correspond to Rcs in (5.2). B satisfies the commutator equation BRcs(α) = Rcs(α)B for all α in A. Evaluate Rcs(α) at the 2r special elements. Since Y(α) is then the only nonzero part of Rcs(α), the commutator equation implies the following equations:

(5.7)    Y(α)B11 = B33Y(α),    Y(α)B12 = 0,    Y(α)B13 = 0,    B23Y(α) = 0,

where α is one of the special elements. According to the form of Y in (5.3) and the description of Y evaluated at the special elements, (5.7) implies that B12, B13, B23 are all zero and that B11 and B33 are direct sums of an s×s block B0, repeated r times in each. For α = α1r, the commutator equation implies B0Pcs = PcsB0, so that by Lemma 2.5.B, B0 has s equal eigenvalues. Then B has 2rs equal eigenvalues and, by Lemma 2.5.A, Rcs has an indecomposable direct summand Tcs of degree at least 2rs. Clearly, A is of unbounded type.

It is now shown that A is of strongly unbounded type. Let V be the space of Rcs and let VT be the space of Tcs. Let Rcs(αrr)V = V0. It is clear that V0 has for a basis the last s basis vectors of V when Rcs is in the form (5.2). Let BT be a commuting matrix of Rcs that is the identity on VT and zero on its complement in V. BT must have the previously described form for commuting matrices; B33 of BT is the direct sum of an s×s matrix B0 which must have unit eigenvalues and is therefore nonsingular. Then it follows that BTV0 = V0. Now apply the commutator equation Rcs(αrr)BT = BTRcs(αrr) to all of V:

    VT ⊇ Rcs(αrr)VT = Rcs(αrr)BTV = BTRcs(αrr)V = BTV0 = V0,

hence VT ⊇ V0.

The space V0 is used in proving that Tcs and Tds cannot be equivalent when d ≠ c. Suppose that they were similar; then there would exist a matrix P intertwining Rcs and Rds which, when cut down to VT or any subspace of it, represents an isomorphism. P satisfies the intertwining equation

(5.8)    Rds(α)P = PRcs(α)

for all α in A. But for α = α11, ..., αrr, Rcs(α) equals Rds(α). Therefore, for those α, (5.8) looks like the commutator equation. Let P be divided into submatrices,

        ( P11  P12  P13 )
    P = ( P21  P22  P23 )
        ( P31  P32  P33 )

corresponding to the divisions of B. It is clear from the previous argument concerning the commutator equation that P12, P13, P23 are all zero and that P11 and P33 are direct sums of an s×s matrix P0. Finally, let α = α1r in the intertwining equation; this implies

(5.9)    PdsP0 = P0Pcs.

But P cut down to V0 is P0, so that P0 is nonsingular. Then the above equation (5.9) is impossible, because Pcs and Pds are primary matrices with distinct eigenvalues d ≠ c. Therefore no such P can exist, and Tcs and Tds must be inequivalent. Since the field k is infinite, there exists an infinite number of such inequivalent indecomposable representations Tcs with degrees between 2rs and the degree of Rcs. A is of strongly unbounded type and, by Theorem 2.3.B, so is any algebra with A for a basic algebra. This completes the proof of Theorem 5.2.A.

3. Graph with a Chain that Branches at Each End

The fourth sufficient condition that an algebra be of strongly unbounded type is described in the following theorem.

Theorem 5.3.A: If the graph G(A0) associated with any two-sided ideal A0 ⊆ N contains a chain which branches at each end, then A is of strongly unbounded type.

Proof: Let C and its branches be as follows:

(5.10)    C: Pj1, Pi1 → Pj1, Pi1, Pi1 → Pj2, Pj2, ..., Pir-1, Pir-1 → Pjr, Pjr,
          with the branch edges Pk1 → Pj1 and Pk2 → Pj1 at the end Pj1,
          and the branch edges Pk3 → Pjr and Pk4 → Pjr at the end Pjr.

Note that the vertices Pj1 and Pjr at the ends of C each have order three, so that there are four cases to consider, depending on whether these two vertices have left or right order three. It is assumed here that both have left order three. The other three cases are proved by analogous methods. It is also assumed that all of the edges appearing in (5.10) are distinct; for if not, Lemma 5.1.D implies that the graph has a cycle, and the algebra A, by Theorem 5.2.A, is already of strongly unbounded type.

Let Rμν be the representations associated with the edges in the chain C by Lemma 5.1.C,

                ( xjν(α)     0        0      )
(5.11) Rμν(α) = ( Pμν(α)   Qμν(α)     0      ),    (μ,ν) = (1,1), (1,2), ..., (r-1,r).
                ( yμν(α)   Sμν(α)   xiμ(α)  )

Let R′β be the representations associated with the branches in (5.10) by Lemma 5.1.C,

                ( xjν(α)     0        0      )
(5.12) R′β(α) = ( P′β(α)   Q′β(α)     0      ),    (β,ν) = (1,1), (2,1), (3,r), (4,r).
                ( y′β(α)   S′β(α)   xkβ(α)  )

From the submatrices of these representations form the matrix function Rcs,

              ( XT   0   0  )
(5.13)  Rcs = ( P    Q   0  )
              ( Y    S   XB )

as follows. Let XT(α) be the direct sum of I2s ⊗ xjν(α) for ν = 1,...,r, and let Q(α) be the direct sum of I2s ⊗ Qμν(α), I2s ⊗ Q′β(α) for (μ,ν) equal to (1,1), ..., (r-1,r) and for (β,ν) equal to (1,1), (2,1), (3,r), (4,r). Let XB(α) have the direct sum Is ⊗ xk1(α) + Is ⊗ xk2(α) as the top diagonal blocks, Is ⊗ xk3(α) + Is ⊗ xk4(α) as the bottom diagonal blocks, and the direct sum of I2s ⊗ xiμ(α), μ = 1, ..., r-1, in the middle diagonal blocks. XT(α), Q(α), and XB(α) are all representations of A, by Lemma 2.5.C.

Let P(α) have I2s ⊗ Pμν(α) directly below I2s ⊗ xjν and directly to the left of I2s ⊗ Qμν for (μ,ν) = (1,1), ..., (r-1,r), and I2s ⊗ P′β(α) directly below I2s ⊗ xjν and directly to the left of I2s ⊗ Q′β for (β,ν) = (1,1), (2,1), (3,r), (4,r). Fill out the rest of P(α) with zeros.

Let S(α) have I2s ⊗ Sμν(α) directly below I2s ⊗ Qμν in Q and directly to the left of I2s ⊗ xiμ in XB for (μ,ν) = (1,1), ..., (r-1,r). Put (Is,0) ⊗ S′1(α) below I2s ⊗ Q′1 to the left of Is ⊗ xk1, (0,Is) ⊗ S′2(α) below I2s ⊗ Q′2 to the left of Is ⊗ xk2, (Is,Is) ⊗ S′3(α) below I2s ⊗ Q′3 to the left of Is ⊗ xk3, and (Is,Pcs) ⊗ S′4(α) below I2s ⊗ Q′4 to the left of Is ⊗ xk4, where Pcs is the primary matrix of degree s with eigenvalue c. Fill out the rest of S(α) with zeros.

Define Y(α) similarly. Let I2s ⊗ yμν appear directly below I2s ⊗ xjν and directly to the left of I2s ⊗ xiμ for (μ,ν) = (1,1), ..., (r-1,r). Put (Is,0) ⊗ y′1 below I2s ⊗ xj1 to the

left of Is ⊗ xk1, (0,Is) ⊗ y′2 below I2s ⊗ xj1 and to the left of Is ⊗ xk2, (Is,Is) ⊗ y′3 below I2s ⊗ xjr and to the left of Is ⊗ xk3, and (Is,Pcs) ⊗ y′4 below I2s ⊗ xjr and to the left of Is ⊗ xk4. Fill out the rest of Y with zeros. Of particular importance in the proof of this theorem is the form of Y(α):

               ( (Is,0) ⊗ y′1                                                    )
               ( (0,Is) ⊗ y′2                                                    )
               ( I2s ⊗ y11    I2s ⊗ y12                                          )
(5.14)  Y(α) = (              I2s ⊗ y22      ⋱                                   )
               (                    ⋱    I2s ⊗ yr-1,r-1    I2s ⊗ yr-1,r          )
               (                                           (Is,Is) ⊗ y′3         )
               (                                           (Is,Pcs) ⊗ y′4        )

It is now shown that Rcs is a representation of A with an indecomposable direct summand whose degree is an integral multiple of s. It is necessary first to show that Rcs is a representation of A. It is an additive function of A because all of the blocks that went into its construction are. Examine the expression

(5.15)    Rcs(α)Rcs(β) − Rcs(αβ)

block by block. The diagonal blocks of Rcs are representations themselves, so (5.15) can be nonzero only in the positions corresponding to P, S, and Y. Examine the position corresponding to (Is,0) ⊗ y′1 in Rcs(α)Rcs(β). Appearing in that position is

    [(Is,0) ⊗ y′1(α)][I2s ⊗ xj1(β)] + [(Is,0) ⊗ S′1(α)][I2s ⊗ P′1(β)] + [Is ⊗ xk1(α)][(Is,0) ⊗ y′1(β)],

which equals

    (Is,0) ⊗ [y′1(α)xj1(β) + S′1(α)P′1(β) + xk1(α)y′1(β)]

by the rules for operation with Kronecker products. But this last expression is (Is,0) ⊗ y′1(αβ), by the rule for y′1(αβ) given by the form of the representation R′1 in (5.12). Hence (5.15) is zero in the position corresponding to (Is,0) ⊗ y′1. A repetition of these same methods implies that all of the positions in (5.15) are zero and that Rcs is a representation of A.

Recall that Rcs was constructed out of the representations associated with the edges of the figure (5.10) by Lemma 5.1.C. The same lemma associated a special element in A with each of these representations. Consider Rcs evaluated at the special element αμν associated with Rμν by Lemma 5.1.C. Rcs(αμν) is zero everywhere except in Y, directly below I2s ⊗ xjν and directly to the left of I2s ⊗ xiμ; I2s appears in that position. The above fact is true for (μ,ν) = (1,1), ..., (r-1,r). All of this follows from the properties of the Rμν and αμν of Lemma 5.1.C and the construction of Rcs.

Now consider Rcs evaluated at the special elements α′1, α′2, α′3, α′4 associated with Pk1 → Pj1, Pk2 → Pj1, Pk3 → Pjr, Pk4 → Pjr respectively by Lemma 5.1.C. Rcs(α′β) is zero except in the positions corresponding to Y. The only nonzero entries of the Rcs(α′β) are the following: Rcs(α′1) has (Is,0) below I2s ⊗ xj1 to the left of Is ⊗ xk1; Rcs(α′2) has (0,Is) below I2s ⊗ xj1 to the left of Is ⊗ xk2; Rcs(α′3) has (Is,Is) below I2s ⊗ xjr to the left of Is ⊗ xk3; Rcs(α′4) has (Is,Pcs) below I2s ⊗ xjr to the left of Is ⊗ xk4.

Now let B be in the commutator algebra of Rcs. Let B be divided into submatrices to correspond to the division

(5.13) of Rcs,

            ( B11  B12  B13 )
(5.16)  B = ( B21  B22  B23 )
            ( B31  B32  B33 )

B must satisfy the commutator equation Rcs(α)B = BRcs(α) for all α in A. Consider this equation for α equal to the special elements αμν and α′β. Since Y(α) is then the only nonzero part of Rcs(α), the commutator equation implies the following equations:

(5.17)    Y(α)B11 = B33Y(α),    Y(α)B12 = 0,    Y(α)B13 = 0,    B23Y(α) = 0,

where α is one of the special elements. From the form of Y in (5.14) and the description of Y evaluated at the special elements, (5.17) implies that B12, B13, and B23 are all zero. By letting α equal α′1, α′2, α11, ..., αr-1,r in (5.17), it follows that B11 is a direct sum of a 2s × 2s block B0,

(5.18)    B0 = ( B1   0  )
               ( 0    B2 )

and B33 is the direct sum of the block B0 down to the last 2s × 2s diagonal block B′0,

(5.19)    B′0 = ( B3   0  )
                ( 0    B4 )

Letting α = α′3 in (5.17), and using (5.18) and (5.19) in (5.17), implies B1 = B3 = B2. Finally, letting α = α′4 in

(5.17) implies that B1 = B4 and B1Pcs = PcsB1. Since Pcs is a primary matrix, Lemma 2.5.B implies that B1 has a single eigenvalue. Then, from the above description of B, B has at least (4r + 2)s equal eigenvalues. By Lemma 2.5.A, Rcs has an indecomposable direct summand Tcs of degree at least (4r + 2)s, and A is clearly of unbounded type.

To show A is of strongly unbounded type, proceed as in the previous theorems. Let V be the space of Rcs, let VT be the space of Tcs, and let V0 be the space Rcs(α′4)V. V0 has for a basis the last s basis vectors of V when Rcs is in the form (5.13). Let BT be the commuting matrix which is the identity on VT and zero on its complement in V. The bottom s × s diagonal block B1 in BT has unit eigenvalues and is nonsingular. By the form of such commuting matrices, BT has only zeros above B1. Then BTV0 = V0. Now apply Rcs(α′4)BT = BTRcs(α′4) to all of V:

    VT ⊇ Rcs(α′4)VT = Rcs(α′4)BTV = BTRcs(α′4)V = BTV0 = V0,

hence VT ⊇ V0.

Now it can be shown that Tcs and Tds are not equivalent for c ≠ d. Suppose they were equivalent; then there would exist P, intertwining Rcs and Rds, such that P, when cut down to VT or V0, represents an isomorphism. P satisfies the intertwining equation

(5.20)    PRcs(α) = Rds(α)P

for all α in A. Note that for all the special elements except α′4, Rcs(α) = Rds(α); for these α the intertwining equation looks exactly like the commutator equation. Let P be divided into submatrices corresponding to those of B,

        ( P11   0    0  )
    P = ( P21  P22   0  )
        ( P31  P32  P33 )

The blocks in P above the diagonal may be taken to be 0. Also, P11 is the direct sum of an s × s matrix P0, and P33 is the direct sum of the same P0 down to the last s × s diagonal block, which is P1. All of this follows from the previous arguments concerning commuting matrices and the above comment about (5.20). Now let α = α′4 in (5.20). This implies that P1 = P0 and P0Pcs = PdsP0. Note that P cut down to V0 is P0, which is therefore nonsingular. Under these circumstances the above equation is impossible, because Pcs and Pds have distinct eigenvalues d ≠ c. The conclusion can then be drawn that Tcs and Tds are inequivalent for c ≠ d. Since the field k is infinite, A has an infinite number of inequivalent indecomposable representations Tcs with degrees between 4rs and the degree of Rcs. A is therefore of strongly unbounded representation type.

4. Graph with a Vertex of Order Four

A restatement of the condition in Theorem 4.1.C of Chapter IV can now be made in terms of the graph.

Theorem 5.4.A: If the graph G(A0) associated with any two-sided ideal A0 ⊆ N has a vertex of order four or more, then A is of strongly unbounded type.

Proof: If G(A0) has a vertex of order four, there exist four covers A1, A2, A3, A4 of A0 such that φirj(Ar) covers

φirj(A0) (or φijr(Ar) covers φijr(A0)) for r = 1,2,3,4. The first case occurs when the vertex of order four has left order four; the other case occurs when it has right order four. φij is the lattice homomorphism of LA into the subspaces of eiAej. By Lemma 5.1.B, Ar can be written Ar = kαr + A0, where αr is chosen in eirNej (or in eiNejr). Recalling the lattice homomorphism φj (and iφ) of LA into the lattice of left ideals (or right ideals), φj(Ar) = kαr + A0ej (or iφ(Ar) = kαr + eiA0). But then φj(A0) has four covers in φj(LA) (or iφ(A0) has four covers in iφ(LA)) and, by Theorem 4.1.C, A is of strongly unbounded type.
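The two facts about primary matrices used repeatedly in Chapters IV and V — that a matrix commuting with Pcs has all its eigenvalues equal (Lemma 2.5.B), and that PdsP0 = P0Pcs forces P0 = 0 when c ≠ d — can be verified numerically. The sketch below models a primary matrix by a Jordan block; this model, the numerical tolerances, and the helper `solutions` are assumptions of the sketch, not of the text.

```python
import numpy as np

s, c, d = 3, 1.0, 2.0
Jc = c * np.eye(s) + np.eye(s, k=1)  # stand-in for the primary matrix P_cs
Jd = d * np.eye(s) + np.eye(s, k=1)  # stand-in for P_ds

def solutions(A, B):
    """Basis of {X : A X = X B}, via the row-major vec identity
    vec(A X - X B) = (kron(A, I) - kron(I, B.T)) vec(X)."""
    M = np.kron(A, np.eye(s)) - np.kron(np.eye(s), B.T)
    _, sv, Vt = np.linalg.svd(M)
    return [v.reshape(s, s) for v, w in zip(Vt, sv) if w < 1e-10]

# Lemma 2.5.B in miniature: anything commuting with Jc has s equal
# eigenvalues, i.e., B - (tr B / s) I is nilpotent.
comm = solutions(Jc, Jc)
assert len(comm) == s  # the commutant of an s x s Jordan block has dimension s
for B in comm:
    Nil = B - (np.trace(B) / s) * np.eye(s)
    assert np.allclose(np.linalg.matrix_power(Nil, s), 0, atol=1e-6)

# The contradiction ending each proof: for c != d the equation
# P_ds P0 = P0 P_cs has only the solution P0 = 0, so no nonsingular
# intertwining block P0 can exist.
assert len(solutions(Jd, Jc)) == 0
```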

BIBLIOGRAPHY

1. Artin, E., Nesbitt, C., and Thrall, R. M., Rings with Minimum Condition, University of Michigan Publications in Mathematics, No. 1, 1944.

2. Birkhoff, G., Lattice Theory, American Mathematical Society Colloquium Publications, Volume XXV.

3. Brauer, R., Unpublished notes and correspondence on representation theory, 1942.

4. Higman, D. G., Indecomposable Representations at Characteristic p, unpublished, 1954.

5. Nakayama, T., Note on Uni-serial and Generalized Uni-serial Rings, Proceedings of the Imperial Academy of Tokyo, Volume 16, pp. 285-289.

6. Nesbitt, C., and Scott, W. M., Some Remarks on Algebras over an Algebraically Closed Field, Annals of Mathematics, Volume 44 (1943), pp. 534-553.

7. Thrall, R. M., Unpublished notes on representation theory for algebras, 1946.

8. van der Waerden, B. L., Modern Algebra, Volume II, New York: Frederick Ungar, 1950.

9. Wall, D., Some Results in the Theory of Algebras with Radical, Dissertation, University of Michigan, 1953.

DISTRIBUTION LIST

No. of Copies    Agency

10    Office of Ordnance Research, Box CM, Duke Station, Durham, North Carolina
2     Office, Chief of Ordnance, Washington 25, D. C. Attention: ORDTB PS
1     Commanding General, Aberdeen Proving Grounds, Md. Attention: Technical Information Division
1     Commanding Officer, Redstone Arsenal, Huntsville, Alabama
1     Commanding Officer, Rock Island Arsenal, Rock Island, Illinois
1     Commanding General, Research and Engineering Command, Army Chemical Center, Maryland
1     Chief, Ordnance Development Division, National Bureau of Standards, Washington 25, D. C.
1     Commanding Officer, Watertown Arsenal, Watertown 72, Massachusetts
1     Technical Reports Library, SCEL, Evans Signal Corps Laboratory, Belmar, New Jersey
1     Commanding Officer, Engineer Research and Development Laboratories, Fort Belvoir, Virginia
1     Commander, U. S. Naval Proving Ground, Dahlgren, Virginia
1     Chief, Bureau of Ordnance (AD3), Department of the Navy, Washington 25, D. C.
1     U. S. Naval Ordnance Laboratory, White Oak, Silver Spring 19, Maryland. Attention: Library Division

DISTRIBUTION LIST (continued)

No. of Copies    Agency

1     Director, National Bureau of Standards, Washington 25, D. C.
1     Corona Laboratories, National Bureau of Standards, Corona, California
1     Commanding Officer, Frankford Arsenal, Bridesburg Station, Philadelphia 37, Pennsylvania
1     Technical Information Service, P. O. Box 62, Oak Ridge, Tennessee. Attention: Reference Branch
1     Commanding Officer, Signal Corps Engineering Laboratory, Fort Monmouth, New Jersey. Attention: Director of Research
1     The Director, Naval Research Laboratory, Washington 25, D. C. Attention: Code 2021
1     Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena 3, California
1     Director, Applied Physics Laboratory, Johns Hopkins University, 8621 Georgia Avenue, Silver Spring 19, Maryland
2     Chief, Detroit Ordnance District, 574 East Woodbridge, Detroit 31, Michigan. Attention: ORDEF-IM
1     Commanding General, Air University, Maxwell Air Force Base, Alabama. Attention: Air University Library
5     Document Service Center, U. B. Building, Dayton 2, Ohio. Attention: DSC-SD

DISTRIBUTION LIST (concluded)

No. of Copies    Agency

1     Commanding General, Air Research and Development Command, P. O. Box 1395, Baltimore 3, Maryland. Attention: RDD
1     Commanding General, Air Research and Development Command, P. O. Box 1595, Baltimore 3, Maryland. Attention: RDR
1     Commander, U. S. Naval Ordnance Test Station, Inyokern, China Lake, California. Attention: Technical Library
1     U. S. Atomic Energy Commission, Document Library, 19th and Constitution Avenue, Washington 25, D. C.
1     Scientific Information Section, Research Branch, Research and Development Division, Office, Assistant Chief of Staff, G-4, Department of the Army, Washington 25, D. C.
1     Office of Naval Research, Washington 25, D. C.
1     Canadian Joint Staff, 1700 Massachusetts Avenue, N.W., Washington 6, D. C. Through: ORDGU-SE
1     Commanding General, White Sands Proving Grounds, Las Cruces, New Mexico