
Technical Report No. 142
4853-22-T

A TRANSFORM TECHNIQUE FOR MULTIVARIABLE, TIME-VARYING, DISCRETE-TIME, LINEAR SYSTEMS

by A. W. Naylor

Approved by: B. F. Barton

COOLEY ELECTRONICS LABORATORY
Department of Electrical Engineering
The University of Michigan, Ann Arbor

Contract No. DA-36-039-sc-89227
U. S. Army Electronics Materiel Agency, Department of the Army
Project No. 3A99-06-001-01, Fort Monmouth, New Jersey

January 1964

TABLE OF CONTENTS

List of Illustrations
1. Introduction
2. Matrix Representation for Multivariable Systems
3. Background to Transform Methods
4. Transform Technique
5. Extension of Single-Variable Results to Multivariable Systems
6. Example
7. Alternate Input and Output Norms
8. Conclusions
References
Distribution List

LIST OF ILLUSTRATIONS

Figure 1. Block diagram of multivariable system under consideration
Figure 2. Form of the decomposition
Figure 3. Frequency domain representation of an input vector
Figure 4. Generalized frequency response for example system

1. INTRODUCTION

In the last several years the use of matrices to characterize time-varying, discrete-time, linear systems has been growing. For example, Friedland (Ref. 1) has done much important work in this area. Cruz (Ref. 2) has shown how this idea can be employed in the design of control systems. Recently, the author showed (Ref. 3) that many of the frequency response concepts of time-invariant systems could be generalized so that they were meaningful for time-varying systems. So far attention has been centered on single-input, single-output systems. It is the purpose of this article to show that the previously developed methods can be applied, after certain changes and re-interpretations, to multivariable systems.

2. MATRIX REPRESENTATION FOR MULTIVARIABLE SYSTEMS

Consider the multivariable system shown in Fig. 1. The input on the kth input channel, where k = 1,..., m, is represented by the sequence {x^1(t_1), x^k(t_2),..., x^k(t_N)}. Correspondingly, the output from the jth output channel, where j = 1,..., n, is represented by the sequence {y^j(t_1), y^j(t_2),..., y^j(t_N)}. Note that it has been assumed that the sampling times are the same for all channels. Although this assumption is not required, it is employed here to simplify the equations which follow.

[Fig. 1. Block diagram of multivariable system under consideration: a linear, multivariable, time-varying, discrete-time system with inputs x^1,..., x^m and outputs y^1,..., y^n.]

The mathematical model assumed for this system is

    y^j(t_r) = Σ_{k=1}^{m} Σ_{s=1}^{N} g_jk[t_r, t_s] x^k(t_s),   j = 1,..., n;  r = 1,..., N,   (1)

where g_jk[t_r, t_s] is the output of the jth channel at time t_r caused by a unit input in the kth channel at time t_s. It is assumed here that the system is in its zero state before t_1; that is, all initial conditions are assumed to be zero.¹ Equation 1 can be recast in the following matrix formulation:

¹In the case where this assumption is not possible, the initial state can be incorporated into the input vector and a development similar to the one presented here can be carried out.

    [ y^1(t_1) ]   [ g_11[t_1,t_1] ... g_1m[t_1,t_1]   ...   g_11[t_1,t_N] ... g_1m[t_1,t_N] ] [ x^1(t_1) ]
    [     :    ]   [       :                :                       :               :        ] [     :    ]
    [ y^n(t_1) ]   [ g_n1[t_1,t_1] ... g_nm[t_1,t_1]   ...   g_n1[t_1,t_N] ... g_nm[t_1,t_N] ] [ x^m(t_1) ]
    [     :    ] = [       :                :                       :               :        ] [     :    ]   (2)
    [ y^1(t_N) ]   [ g_11[t_N,t_1] ... g_1m[t_N,t_1]   ...   g_11[t_N,t_N] ... g_1m[t_N,t_N] ] [ x^1(t_N) ]
    [     :    ]   [       :                :                       :               :        ] [     :    ]
    [ y^n(t_N) ]   [ g_n1[t_N,t_1] ... g_nm[t_N,t_1]   ...   g_n1[t_N,t_N] ... g_nm[t_N,t_N] ] [ x^m(t_N) ]

Equation 2 can be more simply written

    [ y(t_1) ]   [ G[t_1,t_1] ... G[t_1,t_N] ] [ x(t_1) ]
    [ y(t_2) ]   [ G[t_2,t_1] ... G[t_2,t_N] ] [ x(t_2) ]
    [   :    ] = [     :              :      ] [   :    ]   (3)
    [ y(t_N) ]   [ G[t_N,t_1] ... G[t_N,t_N] ] [ x(t_N) ]

where

    y(t_r) = [ y^1(t_r) ]
             [ y^2(t_r) ]      (r = 1,..., N),   (4)
             [    :     ]
             [ y^n(t_r) ]

    G[t_r,t_s] = [ g_11[t_r,t_s] ... g_1m[t_r,t_s] ]
                 [       :               :         ]      (r, s = 1,..., N),   (5)
                 [ g_n1[t_r,t_s] ... g_nm[t_r,t_s] ]

and

    x(t_s) = [ x^1(t_s) ]
             [ x^2(t_s) ]      (s = 1,..., N).   (6)
             [    :     ]
             [ x^m(t_s) ]

It is clear that y(t_r) characterizes the output on all channels at time t_r; x(t_s) characterizes the input on all channels at time t_s. G[t_r,t_s] characterizes the output on all channels at time t_r caused by a unit input on all channels at time t_s and can be looked upon as the "multivariable unit response." In order to simplify the expressions which appear below, Eq. 2 is further condensed by writing it in the form

    y = Gx,   (7)

where

    y = [ y(t_1) ]      G = [ G[t_1,t_1] ... G[t_1,t_N] ]      x = [ x(t_1) ]
        [ y(t_2) ]          [     :              :      ]          [ x(t_2) ]   (8)
        [   :    ]          [ G[t_N,t_1] ... G[t_N,t_N] ]          [   :    ]
        [ y(t_N) ]                                                 [ x(t_N) ]

and G is referred to as the system matrix. It should be noted at this point that G is an nN x mN matrix (i.e., with nN rows and mN columns). Therefore the matrix G is not necessarily square, which leads to an interesting and obvious consequence. The matrix G can be rectangular either in the tall form (for n > m), with more rows than columns,

or in the wide form (for m > n), with more columns than rows. In the first case, it follows from the rectangularity of G that not all output vectors in nN-dimensional space are realizable (i.e., for some y there does not exist an x such that y = Gx). In fact, all realizable y's must lie in a subspace of dimension less than or equal to mN (the equality holds if G is of maximal rank). In the second case, it follows from the rectangularity of G that, regardless of the explicit form of G, x-vectors in a subspace of dimension (m-n)N cause zero output [i.e., the null space of G has a dimension of at least (m-n)N]. Using the vocabulary of filter theory, one could say that such a system would have a nontrivial stop band.
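The "stop band" remark is easy to verify numerically. The following minimal Python sketch (not from the report; it uses numpy with arbitrary stand-in sizes and entries) exhibits an input in the null space of a wide G:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, N = 1, 2, 3                           # hypothetical sizes with m > n, so G is wide
    G = rng.standard_normal((n * N, m * N))     # stand-in system matrix

    # The rows of Vt beyond rank(G) form an orthonormal basis for the null space of G.
    U, s, Vt = np.linalg.svd(G)
    null_basis = Vt[np.linalg.matrix_rank(G):]  # at least (m - n)N basis vectors
    x_null = null_basis[0]
    print(np.allclose(G @ x_null, 0))           # True: a nonzero input causing zero output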

3. BACKGROUND TO TRANSFORM METHODS

At this point it is worthwhile to reconsider the basic goal behind transform techniques. Simply stated, the goal is to transform, where possible, a given operation into the operation of multiplication by a function (e.g., the transfer function). Let L denote some given linear operation such that

    y = Lx,   (9)

where x is an element of the input space or domain and y is an element of the output space or range. L is transformed to a multiplication in the desired sense if an invertible transformation T and a function φ(λ) can be found such that

    L = T⁻¹ φ(λ) T.   (10)

Thus, if L is a linear transformation of a space H_1 into itself, T is a linear transformation of H_1 onto a space H_2, and multiplication by φ(λ) is a linear transformation of H_2 into itself, we have the following situation:

    H_1 ----L----> H_1
     |              ^
     T             T⁻¹
     v              |
    H_2 ---φ(λ)--> H_2

A classic example of such a transformation to a multiplication is offered by the well-known use of the Laplace transform with linear, time-invariant systems. There T is the direct Laplace transform; φ(λ) is the transfer function, with the complex numbers λ taken along the Wagner-Bromwich contour; and T⁻¹ is the inverse Laplace transform. The element x might be a function or a sequence, for example; the corresponding spaces would be function or sequence spaces. In the case of the systems considered in this article the element x is a sequence and the space is a sequence space.

Another well-known example in the same spirit is the diagonalization of a square matrix by means of a similarity transformation. There L is the matrix to be diagonalized; T is a nonsingular (i.e., invertible) matrix; and φ(λ) is a diagonal matrix. This diagonal matrix can be viewed as equivalent to a function defined on the integers from 1 through N (L being an N x N matrix). Thus, φ(1) = λ_1, the first entry on the diagonal; φ(2) = λ_2, the second entry; and so on through φ(N) = λ_N. If, in a similar manner, an arbitrary vector upon which the diagonal matrix operates is viewed as a function, say x(λ), defined on the integers 1 to N, then operation with the diagonal matrix can be viewed as a multiplication of the function x(λ) by the function φ(λ). This latter example leads one to refer to the general process as a "diagonalization of L" whether L is a matrix or not. A detailed discussion of the philosophy behind such diagonalization applied to continuous-time systems is given by Zadeh (Ref. 4). Finally, it must be emphasized that such a diagonalization is not always possible.

In addition to the obvious fact that such a "diagonalization" simplifies the representation of the operator L (assuming L is "diagonalizable"), several specific aspects should be noted. Perhaps the most important of these relates to the tandem operation of two operators, say L_1 and L_2, which can be diagonalized by the same transform T. In this case,

    L_1 = T⁻¹ φ_1(λ) T  and  L_2 = T⁻¹ φ_2(λ) T.   (11)

It immediately follows that

    L_1 L_2 = T⁻¹ φ_1(λ) T T⁻¹ φ_2(λ) T
            = T⁻¹ φ_1(λ) φ_2(λ) T
            = T⁻¹ φ_2(λ) φ_1(λ) T
            = L_2 L_1.   (12)

Thus, T also diagonalizes the operators L_1 L_2 and L_2 L_1 as well as L_1 and L_2. Carrying Eq. 12 further, if L_1, L_2,..., L_n form a set of operators which can be diagonalized by

T, then any polynomial function³ of these operators and their inverses, where they exist, can be diagonalized by T. (For example, consider the familiar case of polynomials of the derivative operator, d/dt, and the Laplace transform as T.)

³This statement is valid for a larger class of functions than polynomials.

Finally, it should be noted in Eq. 11 and Eq. 12 that a necessary condition for L_1 and L_2 to be diagonalized by the same T is L_1 L_2 = L_2 L_1. Since not all linear operators commute, one cannot expect to find a T which will diagonalize all operators. Thus, an all-purpose transform, in the sense of Eq. 10, is not possible. On the other hand, it is possible to find T's which diagonalize all members of large classes of linear operators. One such class is made up of time-invariant linear differential operators. An important result of the above statement about polynomials is that the diagonalized identity operator is the identity operator in the transform domain regardless of the transform used. This fact is easily appreciated, since

    I = L⁰, or T⁻¹ I T = I,   (13)

where I is used to denote the identity operator before as well as after transformation. The importance of this property arises in the diagonalization of polynomials of the form (I + L), for if T diagonalizes L, it also diagonalizes (I + L). For example, if L_1 and L_2 can both be diagonalized by T, then both sides of

    (I + L_1) y = L_2 x   (14)

can also be diagonalized by T; and the importance of equations in the form of Eq. 14 for feedback systems is well known. It is convenient for the magnitude |φ(λ)| to have some significance. In particular, Parseval's (Plancherel's) theorem or its analog is desirable. Returning to Eqs. 9 and 10,

    y = Lx = T⁻¹ φ(λ) T x.

Assume that L is a bounded linear transformation from a Hilbert space H_1 into itself and

T is an invertible linear transformation of H_1 onto H_2. Let Tx = X(λ) and Ty = Y(λ), and denote the inner products on H_1 and H_2 by (y, x)_{H_1} and (Y, X)_{H_2}, respectively. In the case of square-integrable functions these become

    (y, x)_{H_1} = ∫ y(t) x̄(t) dt

and

    (Y, X)_{H_2} = ∫ Y(λ) X̄(λ) dλ,   (15)

where the bar denotes the complex conjugate. The goal is to relate (y, y)_{H_1} to φ(λ) and X(λ). It follows from Eqs. 9 and 10 that

    (y, y)_{H_1} = (Lx, Lx)_{H_1} = [T⁻¹ φ(λ) T x, T⁻¹ φ(λ) T x]_{H_1}.   (16)

If the adjoint⁴ of T⁻¹ is designated by (T⁻¹)* and (T⁻¹)* T⁻¹ is replaced by Q, then Eq. 16 is equivalent to

    (y, y)_{H_1} = [φ(λ) X(λ), Q φ(λ) X(λ)]_{H_2}.   (17)

The usefulness of the above expression depends on the nature of the transformation Q. The desirable situation is for Q φ(λ) X(λ) to be easily expressible in terms of φ(λ) X(λ). For example, if T is a unitary transformation,⁵ then T⁻¹ = T*; therefore Q = I, the identity transformation, and Eq. 17 becomes

    (y, y)_{H_1} = [φ(λ) X(λ), φ(λ) X(λ)]_{H_2}.   (18)

⁴Recall that the adjoint of an operator A which maps H_1 into H_2 is that operator A*, mapping H_2 into H_1, for which (Au, v)_{H_2} = (u, A*v)_{H_1} for all u in H_1 and all v in H_2.

⁵Recall that L can be "diagonalized" by a unitary transformation if and only if it is normal, i.e., commutes with its adjoint, L*L = LL*. It is not true that all "diagonalizable" operators are normal.

Thus, in the case of the Fourier transform pair⁶

    X(λ) = ∫_{-∞}^{∞} x(t) e^{-j2πλt} dt

and

    x(t) = ∫_{-∞}^{∞} X(λ) e^{j2πλt} dλ,

where the implied T is unitary and, thus, Q = I, Eq. 18 becomes

    ∫_{-∞}^{∞} y²(t) dt = ∫_{-∞}^{∞} φ(λ) φ̄(λ) X(λ) X̄(λ) dλ.   (19)

Thus, |φ(λ)|² indicates the energy transfer capabilities of the system. On the other hand, the transformation Q in Eq. 17 may not lead to a simple interpretation of the magnitude of φ(λ). In fact, there is then no reason to expect a simple correlation between |φ(λ)| and the energy transfer capabilities of the system. Thus, much of the insight and many of the analytic techniques associated with the use of transfer functions based on the Fourier (or Laplace) transforms may not carry over to the general case. Clearly, it is unfortunate when they do not.

In summary, then, the diagonalization discussed above has certain advantages and certain disadvantages. The advantages are as follows:

A1. If the operators L_1,..., L_n can be diagonalized by a transformation T, then any polynomial function of these operators and their inverses (where they exist) can be diagonalized by T. Among other things this property allows an operational calculus based on the multiplication and addition of transfer functions to be developed. In particular, if the transform T diagonalizes L, then it also diagonalizes I + L, where I is the identity operator.

⁶Here x(t) is restricted to the intersection of square-integrable and absolutely integrable functions. In order to consider all square-integrable functions the Fourier-Plancherel transform must be used.

A2. In certain cases, for example when L is normal, L can be diagonalized in a way that leads to a meaningful generalization of Parseval's (Plancherel's) theorem.

The disadvantages are as follows:

D1. Not all linear operators can be diagonalized in the above way. For example, not all matrices can be so diagonalized.

D2. Given any transformation T, only a relatively small class of linear operators will be diagonalizable with it. In other words, there does not exist one transformation T which will diagonalize all, or even a relatively large segment of, the set of all linear operators.

D3. Parseval's (Plancherel's) theorem can be meaningfully generalized only in special cases.

In the next section, a transform technique is introduced which overcomes some of the above difficulties at the cost of sacrificing some of the advantages. Moreover, it adds an advantage which not even the Laplace transform, as usually applied to time-invariant systems, has: for a multivariable system, it yields one transfer function instead of a matrix of transfer functions.
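Before turning to the new technique, the diagonalization idea of this section can be made concrete with a short numpy sketch (illustrative only, with arbitrary numbers): a normal operator L is diagonalized by a unitary T, the same T diagonalizes I + L, and Parseval's relation holds because T is unitary.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    L = A + A.T                          # symmetric, hence normal

    lam, Q = np.linalg.eigh(L)           # L = Q diag(lam) Q^T with Q unitary
    T = Q.T                              # direct transform; here T⁻¹ = Q

    x = rng.standard_normal(4)
    # L acts as multiplication by the "transfer function" lam in the transform domain:
    print(np.allclose(L @ x, Q @ (lam * (T @ x))))                       # True
    # The same T diagonalizes I + L, whose transfer function is 1 + lam:
    print(np.allclose((np.eye(4) + L) @ x, Q @ ((1 + lam) * (T @ x))))   # True
    # Parseval's theorem holds since T is unitary:
    print(np.allclose(x @ x, (T @ x) @ (T @ x)))                         # True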

4. TRANSFORM TECHNIQUE

Lanczos (Ref. 5) has shown that all system matrices, G, can be decomposed as follows:

    G = (Y Δf) Λ X^T,   (20)

where

(i) Y is a matrix whose columns are pairwise orthogonal to one another, each of norm √N. At this point the norm employed is the familiar Euclidean norm; this decomposition is generalized subsequently so that norms based on arbitrary inner products can be employed. If n, the number of output channels, is greater than or equal to m, the number of input channels, then Y is an (nN x mN)-matrix. If n < m, then Y is an (nN x nN)-matrix.

(ii) Δf = 1/N and is referred to as an increment of generalized frequency.

(iii) Λ is a diagonal matrix with all non-negative entries. If n ≥ m, Λ is an (mN x mN)-matrix. If n < m, Λ is an (nN x nN)-matrix.

(iv) X is a matrix (X^T is the transpose of X) whose columns are pairwise orthogonal to one another, each of norm √N. If n ≥ m, X is an (mN x mN)-matrix. If n < m, X^T is an (nN x mN)-matrix.

[Fig. 2. Form of the decomposition: the relative shapes of Y, Λ, and X^T for the cases n > m and n < m.]

The relation between the sizes of the above matrices is illustrated in Fig. 2. This decomposition is essentially the same as the one used with single-input, single-output systems (Ref. 3). The fact that it is valid for nonsquare matrices allows the present extension to multivariable systems. As in the case of single-input, single-output systems, it is shown below that X^T acts as a direct transform, Λ acts as a transfer function, and (Y Δf) acts as an inverse transform. Given a system matrix G, the decomposition indicated in Eq. 20 can be carried out in the following manner. Since

    G^T = (X Δf) Λ Y^T

and

    Y^T Y Δf = I,

even for rectangular Y's, it follows that

    G^T G = (X Δf) Λ² X^T.

Thus, the columns of X are eigenvectors of G^T G, and the main diagonal entries in Λ are the positive square roots of the eigenvalues of G^T G. Note that Λ is uniquely determined but X is not. If the entries in Λ are distinct, this lack of uniqueness for X arises from the fact that (-1) times an eigenvector is also an eigenvector. If the entries in Λ are not distinct, the lack of uniqueness is evidenced in a less trivial manner. For example, if

    G^T G = I or 0,

any X-matrix satisfying (iv) is suitable. Given X and Λ, the Y-matrix is determined from the relation

    Y Λ = G X.

This relation uniquely determines Y if Λ is nonsingular; otherwise, Y is only uniquely determined on the range of G.

Case I: n > m. Assume for the moment that there are more output than input channels, that is, n > m. The matrices in the decomposition are then of the following structure. The Y-matrix is

    Y = [ y_1^1(t_1)   y_2^1(t_1)   ...   y_mN^1(t_1) ]
        [ y_1^2(t_1)   y_2^2(t_1)   ...   y_mN^2(t_1) ]
        [     :             :                  :      ]
        [ y_1^n(t_1)   y_2^n(t_1)   ...   y_mN^n(t_1) ]   (21)
        [ y_1^1(t_2)   y_2^1(t_2)   ...   y_mN^1(t_2) ]
        [     :             :                  :      ]
        [ y_1^n(t_N)   y_2^n(t_N)   ...   y_mN^n(t_N) ]

An equivalent, simplified form of this matrix is

    Y = [ y_1(t_1)   y_2(t_1)   ...   y_mN(t_1) ]
        [ y_1(t_2)   y_2(t_2)   ...   y_mN(t_2) ]
        [    :           :                :     ]   (22)
        [ y_1(t_N)   y_2(t_N)   ...   y_mN(t_N) ]

where

    y_j(t_k) = [ y_j^1(t_k) ]
               [ y_j^2(t_k) ]      j = 1,..., mN;  k = 1,..., N,   (23)
               [     :      ]
               [ y_j^n(t_k) ]

and

    y_j = [ y_j(t_1) ]
          [ y_j(t_2) ]      j = 1,..., mN.   (24)
          [    :     ]
          [ y_j(t_N) ]

Thus, the vectors y_j, j = 1, 2,..., mN, are mutually orthogonal and of norm √N, that is,

    (y_j, y_k) = δ_jk/Δf = δ_jk N,   (25)

where δ_jk is the Kronecker delta symbol. Here

    (y_j, y_k) = y_j^1(t_1) y_k^1(t_1) + y_j^2(t_1) y_k^2(t_1) + ... + y_j^n(t_N) y_k^n(t_N).   (26)

The matrix Λ is an (mN x mN) diagonal matrix with nonnegative main diagonal entries, that is,

    Λ = [ λ_1                     ]
        [     λ_2           0     ]
        [         λ_3             ]   (27)
        [             ⋱           ]
        [   0            λ_mN    ]

where λ_i ≥ 0, i = 1, 2,..., mN. The matrix X^T is an (mN x mN)-matrix with mutually orthogonal rows of the form

    x_j^T = [x_j^1(t_1), x_j^2(t_1),..., x_j^m(t_1), x_j^1(t_2),..., x_j^m(t_2),..., x_j^1(t_N),..., x_j^m(t_N)],
            j = 1, 2,..., mN.   (28)

This matrix can be simplified to

    X = [ x_1(t_1)   x_2(t_1)   ...   x_mN(t_1) ]
        [ x_1(t_2)   x_2(t_2)   ...   x_mN(t_2) ]
        [    :           :                :     ]   (29)
        [ x_1(t_N)   x_2(t_N)   ...   x_mN(t_N) ]

or

    X = [x_1, x_2,..., x_mN],   (30)

where substitutions analogous to those used with the Y-matrix are employed. Moreover, the vectors x_j, j = 1, 2,..., mN, are mutually orthogonal and (x_j, x_k) = δ_jk/Δf = δ_jk N. It is important to note that

    G x_i = λ_i y_i,   i = 1, 2,..., mN.

This relation and the orthogonality of the x_i-vectors and the y_i-vectors are the key to what follows. It can be seen from Eq. 20 that the first step in the operation of a decomposed G on an arbitrary input, x, is X^T x.⁷

⁷Note that X^T x and not X x is written.

It will now be shown that this first step can be interpreted as taking the direct transform of x to obtain its generalized frequency domain representation. First note that the orthogonal set of vectors x_1,..., x_mN spans the linear space of all possible inputs. Thus, an arbitrary input, x, can be uniquely represented in the form

    x = Σ_{j=1}^{mN} r_j x_j Δf,   (31)

where the r_j's are constants. Taking the inner product of both sides of Eq. 31 with x_i yields

    r_i = (x, x_i),   i = 1, 2,..., mN,   (32)

for the determination of the r_i's. In a manner similar to that employed in the single-input, single-output case (Ref. 3), the sequence made up of the r_i's may be considered to be a "frequency domain representation" or transform of x. Moreover, a piecewise constant function, R_X(f), of generalized frequency can be introduced as the frequency domain representation of x. This function is obtained by associating an increment Δf of generalized frequency with each x_i-vector. Thus, R_X(f) is defined as follows:

    R_X(f) = { r_i  for (i-1)Δf < f < iΔf,  i = 1, 2,..., mN,
             { 0    for m < f < n,   (33)

where, again, Δf = 1/N. An illustration is shown in Fig. 3. The frequency domain is defined to include the interval m < f < n, and R_X(f) is defined to be equal to zero on this interval, so that the input and output frequency domain representations can be compatible. In general, the generalized frequency domain is defined to be 0 < f < max[m, n]. Since it can be seen from Eq. 30 and Eq. 32 that

    [ r_1  ]
    [ r_2  ]  =  X^T x,   (34)
    [  :   ]
    [ r_mN ]

it follows that X^T x can indeed be viewed as the direct transform; that is, X^T x yields R_X(f). Finally, it is easily shown that this generalized frequency domain representation leads to a meaningful generalization of Parseval's (Plancherel's) theorem. In fact, trivial calculations show that

    (x, x) = Σ_{j=1}^{mN} r_j² Δf = ∫_0^n R_X²(f) df,   (35)

[Fig. 3. Frequency domain representation of an input vector: the staircase function R_X(f), constant on each interval (i-1)Δf < f < iΔf up to f = m = mNΔf and zero from m to n = nNΔf.]

that is, R_X²(f) can be viewed as "energy" per unit bandwidth. In a similar manner, the generalized frequency domain representation of an arbitrary output, y, is given by

    y = Σ_{j=1}^{mN} c_j y_j Δf,   (36)

and it follows that a function C_Y(f), analogous to R_X(f), can be defined in the generalized frequency domain, 0 < f < n, by

    C_Y(f) = { c_i  for (i-1)Δf < f < iΔf,  i = 1, 2,..., mN,
             { 0    for m < f < n,   (37)

so that

    (y, y) = ∫_0^n C_Y²(f) df   (38)

and C_Y(f) can be viewed as the transform of y. It should be noted that arbitrary vectors in the nN-dimensional vector space which contains the output vectors, y, cannot be repre-

sented as a linear combination of the y_j-vectors, as in Eq. 36, because these vectors span only the mN-dimensional subspace which is the range of the G-matrix under consideration. Therefore, this transform is not, as presented here, applicable to the whole nN-dimensional vector space. C_Y(f) has been defined equal to zero for m < f < n in order to emphasize the fact that the range of G is a proper subspace (recall that here we are considering the case m < n). The functions R_X(f) and C_Y(f), then, are the frequency domain representations of the input and output, respectively. The subscripts X and Y indicate that the transforms are with respect to the x_i- and y_i-vectors, respectively. It is now easy to introduce a generalized transfer function relating R_X(f) and C_Y(f). It is clear from the decomposition (Eq. 20) that the output corresponding to an input x_i is λ_i y_i. This follows from the fact that Λ is a diagonal matrix. Thus, in the spirit of the definitions of R_X(f) and C_Y(f), the transfer function for the system can be defined by

    Λ(f) = { λ_i  for (i-1)Δf < f < iΔf,  i = 1, 2,..., mN,
           { 0    for m < f < n.   (39)

It follows that the generalized frequency domain representation for the system operation is given by

    C_Y(f) = Λ(f) R_X(f),   0 < f < n,   (40)

and from Eq. 38 that

    (y, y) = ∫_0^n Λ²(f) R_X²(f) df.

Moreover, it follows from Eq. 24 and Eq. 36 that the matrix multiplication of Λ X^T x by (Y Δf) is equivalent to transforming C_Y(f) to the time domain by the "inverse transform" (Y Δf).

Case II: m > n. So far it has been assumed that n ≥ m; if m > n it is merely necessary to interchange the roles of m and n. The frequency domain becomes 0 < f < m. The matrix Y becomes an (nN x nN)- instead of an (nN x mN)-matrix, Λ becomes an (nN x nN)- instead

of an (mN x mN)-matrix, and X^T becomes an (nN x mN)- instead of an (mN x mN)-matrix. Thus the original operation has been transformed to a multiplication by a function Λ(f) defined on the interval 0 < f < n (or 0 < f < m, depending on Case I or Case II). However, this has not been accomplished by means of the similarity transformation discussed in Section 3, for the inverse transform implied by (Y Δf) is not necessarily the inverse of the direct transform implied by X^T. The question is how the advantages and disadvantages of this transform method relate to those listed at the end of Section 3. First, what about tandem operation of the two systems G_1 and G_2? Given

    G_1 = (Y_1 Δf) Λ_1 X_1^T  and  G_2 = (Y_2 Δf) Λ_2 X_2^T,

the desirable situation is to have

    G_1 G_2 = (Y_1 Δf) Λ_1 Λ_2 X_2^T.

A sufficient condition for this reduction to take place is Y_2 = X_1, for then X_1^T (Y_2 Δf) = I. When G_1 and G_2 are invertible this condition is necessary, for then

    (Y_1 Δf) Λ_1 Λ_2 X_2^T = (Y_1 Δf) Λ_1 X_1^T (Y_2 Δf) Λ_2 X_2^T,

which yields

    Λ_1 Λ_2 = Λ_1 X_1^T (Y_2 Δf) Λ_2

and

    X_1^T (Y_2 Δf) = I,⁸

which implies that Y_2 = X_1.

⁸Here it is assumed that the indicated matrix multiplication makes sense.

In case G_1 or G_2 or both are not invertible, it again follows that

    Λ_1 Λ_2 = Λ_1 X_1^T (Y_2 Δf) Λ_2.

When Λ_1 and Λ_2 are both invertible (this can happen in spite of the G_i not being invertible), Y_2 = X_1 is still a necessary and sufficient condition. If either Λ_1 or Λ_2 or both are not invertible, that is, have zero entries on the main diagonal, then Y_2 = X_1 is no longer a necessary condition, for some of the rows in X_1^T and some of the columns of Y_2 may be multiplied by the possible zero entries in Λ_1 and Λ_2, respectively. It follows that the necessary condition becomes that Y_2 = X_1 except for the columns multiplied by zero. However, since these latter columns are not uniquely determined and can be chosen so that the corresponding ones in Y_2 and X_1 are equal to one another, it can be said that the desired reduction takes place if and only if Y_2 and X_1 can be selected so that they equal one another. Finally, note that this is a general statement that applies in all cases where the decomposition of G_1 or G_2 is not unique. It follows from the foregoing remarks that, given G_1 = (Y_1 Δf) Λ_1 X_1^T, one representation for the set of all G_2's for which G_1 G_2 = (Y_1 Δf) Λ_1 Λ_2 X_2^T is

    G_2 = (X_1 Δf) Λ_2 X_2^T,

where Λ_2 and X_2 are arbitrary or constrained by the requirements of physical realizability. In any event, this class of G_2's is not empty; therefore, the concept of the product of two transfer functions representing tandem operation does carry over in a certain sense. On the other hand, it is true, for example, that the transfer function of G_2 is not necessarily Λ_2(f); and this would be true if a similarity transformation were used. In summary, when the decomposition presented in this article is employed, a multiplication of transfer functions results for certain classes of tandemly connected systems, just as multiplication of transfer functions results for certain, presumably other, classes of tandemly connected systems when a decomposition based on a similarity transformation is used.
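The tandem condition can be exercised numerically. A minimal numpy sketch follows (arbitrary entries; the square case n = m and invertible Λ's are assumed for simplicity): when G_2 is built with Y_2 = X_1, the product G_1 G_2 has transfer function Λ_1 Λ_2.

    import numpy as np

    N, m = 3, 2                       # hypothetical sizes
    dim, df = m * N, 1.0 / N
    rng = np.random.default_rng(2)

    def orth_sqrtN():
        # Random matrix with pairwise orthogonal columns of norm sqrt(N), per (i) and (iv).
        Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
        return Q * np.sqrt(N)

    Y1, X1, X2 = orth_sqrtN(), orth_sqrtN(), orth_sqrtN()
    Lam1 = np.diag(rng.uniform(0.5, 2.0, dim))
    Lam2 = np.diag(rng.uniform(0.5, 2.0, dim))

    G1 = (Y1 * df) @ Lam1 @ X1.T
    G2 = (X1 * df) @ Lam2 @ X2.T      # Y_2 chosen equal to X_1

    # Transfer functions multiply: G1 G2 = (Y1 Δf) Λ1 Λ2 X2^T.
    print(np.allclose(G1 @ G2, (Y1 * df) @ (Lam1 @ Lam2) @ X2.T))   # True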

Next consider the question of the decomposition of I + G. If the decomposition of G is given by⁹

    G = (Y Δf) Λ X^T

and Y ≠ X, it follows that (Y Δf) I X^T ≠ I. Then

    I + G ≠ (Y Δf)(I + Λ) X^T,

which means that, except for the special case Y = X, transformation of I + G and of G cannot be carried out by the same Y and X transformations. This is not the case, of course, when similarity transformations are used. In regard to a generalization of Parseval's (Plancherel's) theorem, it is clear from the foregoing discussion that a meaningful generalization is always possible. This is an extremely important property, and one which is not present when similarity transformations are employed. Another advantageous property of the present decomposition, and one not present with a similarity transformation, is that it can be applied to an arbitrary system matrix, G. So much for advantages and disadvantages.

⁹Here G is assumed to be square.

One last point: the nature of the transform implied by the matrix X^T and its constituent x_i-vectors should be carefully appreciated. Since arbitrary inputs, x, are represented as linear combinations of the x_i-vectors, the x_i's can be viewed as basic or fundamental inputs, and the response to these fundamental inputs completely characterizes the system. The key point is that these fundamental inputs can, and probably do, involve simultaneous inputs on more than one channel, which is in contrast, for example, to the approach to time-invariant multivariable systems implied when the final system characterization is a matrix made up of transfer functions. There each column of the transfer function matrix is the Laplace transform of the output when a unit impulse is applied to one input channel while the other input channels have zero input. Thus, the present approach might be characterized

as treating all input channels simultaneously rather than one at a time. Related remarks can be made regarding the (Y Δf)-matrix and its constituent y_i-vectors.
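For experimentation, the whole development of this section can be compressed into a few lines of numpy, since with the normalizations above the decomposition of Eq. 20 amounts to a rescaled singular value decomposition. The sketch below (arbitrary stand-in G, not a matrix from the report) checks Eqs. 34, 35 and 40:

    import numpy as np

    n, m, N = 3, 2, 4                      # hypothetical Case I sizes (n > m)
    df = 1.0 / N
    rng = np.random.default_rng(3)
    G = rng.standard_normal((n * N, m * N))

    # numpy gives G = U diag(s) V^T with orthonormal columns; rescale so the
    # columns of Y and X have norm sqrt(N), as required by (i) and (iv).
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    Y = U * np.sqrt(N)
    X = Vt.T * np.sqrt(N)
    Lam = np.diag(s)                       # the gains λ_i

    print(np.allclose(G, (Y * df) @ Lam @ X.T))          # Eq. 20

    x = rng.standard_normal(m * N)
    r = X.T @ x                                          # direct transform, Eq. 34
    print(np.allclose(x @ x, np.sum(r**2) * df))         # Parseval, Eq. 35

    c = Lam @ r                                          # C_Y = Λ R_X, Eq. 40
    y = G @ x
    print(np.allclose(y, (Y * df) @ c))                  # inverse transform
    print(np.allclose(y @ y, np.sum(c**2) * df))         # output Parseval, Eq. 38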

5. EXTENSION OF SINGLE-VARIABLE RESULTS TO MULTIVARIABLE SYSTEMS

It has been shown in Ref. 3 that many important results follow from the matrix decomposition discussed in the foregoing section when it is applied to matrices representing two-port systems (single-input, single-output channel systems). Since the same decomposition has been applied here to matrices representing multivariable systems, it is not too surprising that many of the results pertaining to two-port systems also pertain to multivariable systems. Some of these extensions are outlined below.

Gain. The entry λ_i on the main diagonal of the Λ-matrix is referred to as the gain over the frequency interval (i-1)Δf < f < iΔf.

Gain-Squared Bandwidth Product. The gain-squared bandwidth product, Φ(G), is defined by

    Φ(G) = ∫_0^{max[m,n]} Λ²(f) df = Σ_{i=1}^{min[m,n]N} λ_i² Δf.   (41)

Moreover, it should be noted that

    Φ(G) = Σ_{i,j,k,ℓ} g_ij²[t_k, t_ℓ] Δf,

where the g_ij[t_k, t_ℓ]'s are the elements of G.

Norm of G. The norm of G, ||G||, is defined by

    ||G|| = max_{||x|| = 1} ||Gx||.

It can be shown that

    ||G|| = max_i λ_i.   (42)
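Continuing the numerical sketch of the previous section (same assumptions: numpy, arbitrary stand-in G), the gains, the gain-squared bandwidth product, and the norm are immediate:

    import numpy as np

    n, m, N = 3, 2, 4
    df = 1.0 / N
    G = np.random.default_rng(4).standard_normal((n * N, m * N))

    lam = np.linalg.svd(G, compute_uv=False)   # the gains λ_i
    phi = np.sum(lam**2) * df                  # Φ(G), Eq. 41
    normG = lam.max()                          # ||G||, Eq. 42

    # Eq. 41's second form: Φ(G) is Δf times the sum of the squared elements of G.
    print(np.allclose(phi, np.sum(G**2) * df))        # True
    print(np.isclose(normG, np.linalg.norm(G, 2)))    # True
    print(phi / normG**2)                             # the bandwidth B_w of Eq. 43, below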

Assuming for the moment that the λ_i's are distinct and that λ_1 > λ_2 > ... > λ_nN (or λ_mN), it follows that, of all x-vectors of a fixed norm, say √N, the one causing an output vector with maximum norm is x_1, and the corresponding output vector is λ_1 y_1. Considering all input vectors of norm √N which are orthogonal to x_1, the one which causes an output vector with maximum norm is x_2, and the corresponding output vector is λ_2 y_2. This pattern continues through x_nN (or x_mN) and λ_nN y_nN (or λ_mN y_mN). This is an important property of the decomposition presented; it shows that the x_i- and y_i-vectors characterize the "extremal" inputs and outputs of the system. In case the λ_i's are not all distinct, as assumed above, any fixed-norm linear combination (say of norm √N) of x_i-vectors associated with equal λ_i's yields an output vector whose norm is both independent of the linear combination used and the maximum output norm possible over the appropriate subspace of inputs. For example, if λ_1 > λ_2 > ... > λ_j = λ_{j+1} > λ_{j+2} > ... > λ_nN (or λ_mN), and the input vectors of norm √N which are orthogonal to x_1, x_2,..., x_{j-1} are considered, the x's associated with the maximum output are all linear combinations of the form a x_j + b x_{j+1}, where a² + b² = 1, and the corresponding outputs are a λ_j y_j + b λ_{j+1} y_{j+1}.

Bandwidth. As in the case of two-port systems, a meaningful generalization of the concept of bandwidth is given by

    B_w = Φ(G)/||G||².   (43)

Physical Realizability. Let the columns of the matrix X^T be designated by u_j(t_k), where

    u_j(t_k) = [ x_1^j(t_k)  ]
               [ x_2^j(t_k)  ]      j = 1,..., m;  k = 1, 2,..., N.   (44)
               [     :       ]
               [ x_mN^j(t_k) ]

The u_j(t_k)'s are referred to here as the input ensemble vectors. Let the rows of the matrix (Y Δf) Λ be designated by v_j(t_k):

    v_j(t_k) = [ λ_1 y_1^j(t_k)   ]
               [ λ_2 y_2^j(t_k)   ]      j = 1,..., n;  k = 1, 2,..., N.   (45)
               [       :          ]
               [ λ_mN y_mN^j(t_k) ]

The v_j(t_k)'s are referred to as the output ensemble vectors. Referring to Eq. 3, it can be appreciated that the matrix G corresponds to a physically realizable system if

    G[t_k, t_r] = 0  for r > k.   (46)

This is easily shown to be the case if and only if

    [v_j(t_k), u_ℓ(t_r)] = 0  for r > k.   (47)

That is, the output ensemble vector at time t_k must be orthogonal to all input ensemble vectors occurring later in time. Roughly speaking, the output at any time is "independent" of all future inputs. In the case of single-input, single-output systems the G-matrix for a physically realizable system is lower triangular. For multivariable systems this is not the case; however, it can be seen from Eq. 2, Eq. 46 and Eq. 47 that matrices representing physically realizable systems are what might be called "lower staircase," that is, the matrix in Eq. 3 is lower triangular.

Pseudo-Inverse of G. The system matrix G is not necessarily invertible; in fact, if G is rectangular it will not be invertible regardless of its structure. On the other hand, given a desired output y it is often necessary to find an input x, if one exists, which causes y. That is, given G and y, an x must be found such that

    Gx = y.   (48)

Assuming that

    G = (Y Δf) Λ X^T,

it follows that Eq. 48 has a solution if y is an element of the space spanned by the columns of Y (i.e., the y_i-vectors) associated with nonzero λ_i's. Simply stated, y must be in the range of G. If y is in the range of G, then it follows that a solution to Eq. 48 is given by

    x_0 = (X Δf) Λ⁺ Y^T y,   (49)

where Λ⁺ is a diagonal matrix for which λ_i⁺ = 1/λ_i if λ_i ≠ 0 and λ_i⁺ = 0 if λ_i = 0. If G has a nontrivial null space, then the general solution to Eq. 48 is

    x = x_0 + x_N,

where x_N is any element in the null space of G and, owing to the selection of x_0, x_0 is orthogonal to x_N. The validity of this latter statement follows from the fact that the null space of G is orthogonal to the subspace spanned by the columns in X associated with nonzero λ_i's. Thus, ||x_0|| ≤ ||x_0 + x_N|| for all x_N; that is, if a solution exists, x_0 is the "smallest" one. Now assume that y is not in the range of G, that is, G x_0 = y_0 ≠ y, where x_0 is determined from Eq. 49. Since

    y_0 = G x_0 = (Y Δf) Λ X^T (X Δf) Λ⁺ Y^T y
                = (Y Δf) Λ Λ⁺ Y^T y,

it follows that y_0 is the orthogonal projection of y onto the range of G. Thus, y_0 is the vector in the range of G "closest" to y; that is, ||y - Gx|| is minimized by x_0. Obviously,

    y_0 = G x_0 = G(x_0 + x_N);

thus, again, ||x_0|| ≤ ||x_0 + x_N||. Note that when G⁻¹ exists it equals (X Δf) Λ⁺ Y^T. Whether G⁻¹ exists or not, (X Δf) Λ⁺ Y^T is referred to as the pseudo-inverse (Refs. 5, 6 and 7) and is designated by G⁺. Finally, the physical realizability of G does not necessarily imply that G⁺ is physically realizable. For example, consider the following matrix representation of a physically realizable single-input, single-output system. If

"0 0 0 G = 1 0 0 0 1 0 its pseudo-inverse is 0 1 0 G+ = 0 0 1 0 0 O which does not correspond to a physically realizable system. 28

6. EXAMPLE

Consider the two-input, two-output channel system characterized by the following difference equation:

    [ y^1(k) ]   [ (k-1)/N      (N-k+1)/N ] [ y^1(k-1) ]   [ x^1(k) ]
    [ y^2(k) ] = [ (N-k+1)/N    (k-1)/N   ] [ y^2(k-1) ] + [ x^2(k) ],   k = 1, 2,..., N,

    y^1(0) = 0,   y^2(0) = 0.   (50)

It follows from Eq. 50 that the inverse, G⁻¹, of the system matrix for N = 10 is given by

    G⁻¹ = [   I                       ]
          [ -A_2    I                 ]
          [        -A_3    I          ]   (51)
          [               ⋱     ⋱     ]
          [                 -A_10   I ]

where I is the 2 x 2 identity matrix, all blocks above the first subdiagonal are zero, and

    A_k = [ (k-1)/10     (11-k)/10 ]
          [ (11-k)/10    (k-1)/10  ],   k = 2, 3,..., 10;

for example, A_2 has rows (1/10, 9/10) and (9/10, 1/10), while A_10 has rows (9/10, 1/10) and (1/10, 9/10). Note that the nonsingularity of this matrix follows from its lower triangularity and from the fact that all its main diagonal elements are nonzero. Since the decomposition of G is given by

    G = (Y Δf) Λ X^T,

it follows that

    G⁻¹ = (X Δf) Λ⁻¹ Y^T.

Therefore, the decomposition of G⁻¹ as given in Eq. 51 is equivalent to a decomposition of G itself. It was found with the aid of a digital computer that the parts of this decomposition are as follows:

    Y = [20 x 20 matrix of computed entries, omitted]   (52)

    Λ = [20 x 20 diagonal matrix of computed gains; the diagonal entries decrease from λ_1 = 6.691 and λ_2 = 2.247 down to values near 0.5]   (53)

    X^T = [20 x 20 matrix of computed entries, omitted]   (54)

[Fig. 4. Generalized frequency response for example system: the staircase function Λ(f), dominated by λ_1.]

A plot of the generalized frequency response, Λ(f), is shown in Fig. 4. Note in Fig. 4 that λ_1 is considerably larger than the remaining λ_i's. If the number of sampling times, N, were increased from 10, it would be found that λ_1 approaches infinity as N → ∞. This is because the coefficient matrix in Eq. 50 has the eigenvalue 1, with associated eigenvector [1, 1]^T, independently of k. Finally, the insight into system behavior given by Eq. 53 or Fig. 4 is direct and obvious. On the other hand, the mass of numbers present in the Y- and X-matrices, even for this relatively simple example, can be overwhelming. Clearly, there is a need for some limited, perhaps not unique, characterization of the x_i- and y_i-vectors which make up X and Y. For example, the number of sign changes or sign changes per channel might be used. Many other possibilities suggest themselves. As more experience is obtained in the use of these decompositions it should be possible to discover which characterizations are the more suitable.
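The example is easy to reproduce. The following numpy sketch builds G⁻¹ from the block pattern of Eq. 51 for N = 10, inverts it, and computes the gains of Fig. 4 (the dominant gain should come out near the reported λ_1 = 6.691):

    import numpy as np

    N = 10
    Ginv = np.eye(2 * N)
    for k in range(2, N + 1):              # place the subdiagonal blocks -A_k
        A_k = np.array([[(k - 1) / N, (N - k + 1) / N],
                        [(N - k + 1) / N, (k - 1) / N]])
        r = 2 * (k - 1)
        Ginv[r:r + 2, r - 2:r] = -A_k

    G = np.linalg.inv(Ginv)
    lam = np.linalg.svd(G, compute_uv=False)   # gains in decreasing order
    print(np.round(lam, 3))                    # λ_1 ≈ 6.691 dominates, per Fig. 4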

7. ALTERNATE INPUT AND OUTPUT NORMS

So far the decomposition of the system matrix, G, has been based on the following inner product:

    (y, z) = y^1(t_1) z^1(t_1) + ... + y^n(t_1) z^n(t_1) + ... + y^1(t_N) z^1(t_N) + ... + y^n(t_N) z^n(t_N).   (55)

Moreover, this inner product is used to characterize the range as well as the domain of G. There are many situations in which this characterization is not desirable. For example, it may not be realistic to weight the inputs and outputs on all the channels exactly the same, yet this uniform weighting is implied by Eq. 55; likewise, it may not be desirable to weight all the sampling times exactly the same, which is also implied by Eq. 55. Therefore, the decomposition of G is generalized here so that an arbitrary input and an arbitrary output inner product can be employed. An arbitrary inner product on a finite-dimensional vector space can be represented by

    (x_1, x_2)_P = x_2^T P² x_1,

where¹⁰ P is a symmetric, positive definite matrix. Let the desired input and output inner products be characterized by P_i and P_o, respectively. Given

    y = Gx,   (56)

let

    x = P_i⁻¹ u

¹⁰The matrix P² is used instead of P in order to avoid the notation √P. Since every positive definite symmetric matrix has a positive definite symmetric square root, this procedure results in no loss of generality.

and

    y = P_o⁻¹ v.

Then

    v = P_o G P_i⁻¹ u = Hu.

But the norm in the v- and u-spaces is again given by Eq. 55; therefore H can be decomposed as before. Thus

    H = (Y Δf) Λ X^T.

Then

    G = (P_o⁻¹ Y Δf) Λ X^T P_i
      = (P_o⁻¹ Y Δf) Λ (P_i⁻¹ X)^T P_i²
      = (Y_{P_o} Δf) Λ_{P_o P_i} X_{P_i}^T P_i²,   (57)

where Y_{P_o} = P_o⁻¹ Y, Λ_{P_o P_i} = Λ, and X_{P_i} = P_i⁻¹ X. Thus, the decomposition of G takes on a slightly altered form. Nevertheless, the analogies discussed previously carry over. It is easily seen that operation on an arbitrary input vector, x, with X_{P_i}^T P_i² is equivalent to taking the direct transform. Let the rows of X_{P_i}^T be denoted x_j. These vectors are pairwise orthogonal and ||x_j|| = 1/√Δf = √N relative to the input norm determined by P_i, for

    X_{P_i}^T P_i² X_{P_i} = X^T P_i⁻¹ P_i² P_i⁻¹ X = X^T X = N I.

If x is expanded in terms of the x_j-vectors,

    x = Σ_{k=1}^{mN} r_k x_k Δf,

then

    r_k = (x, x_k)_{P_i},   k = 1, 2,..., mN,

but

    X_{P_i}^T P_i² x = [ (x, x_1)_{P_i}  ]
                       [       :         ]
                       [ (x, x_mN)_{P_i} ].

Therefore the analogy does indeed carry over. Clearly, Λ_{P_o P_i} in Eq. 57 acts as a generalized frequency response, and (Y_{P_o} Δf) acts as the inverse transform. It should be noted that the columns of Y_{P_o} are pairwise orthogonal and of norm √N relative to the output norm determined by P_o. A discussion of the properties of the decomposition in Eq. 57 would parallel the discussion given for the Euclidean norm, Eq. 55, which is now a special case of Eq. 57. However, it is of interest to consider the method whereby Eq. 57 is generated. Since

    H^T H = P_i⁻¹ G^T P_o² G P_i⁻¹ = (X Δf) Λ² X^T,

it follows that the x_i-vectors are the eigenvectors of P_i⁻² G^T P_o² G and that the λ_i²'s are its eigenvalues. The y_i-vectors can be determined from the relation

    G x_i = λ_i y_i

under the assumption that λ_i ≠ 0. If λ_i = 0, then, as before, the corresponding y_i-vectors are not uniquely determined.
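A closing numpy sketch of this section's generalization (arbitrary weights and a stand-in G; diagonal P_i and P_o are chosen for simplicity, though any symmetric positive definite matrices would do):

    import numpy as np

    rng = np.random.default_rng(5)
    N = 4                                       # single-channel case for brevity
    df = 1.0 / N
    G = np.tril(rng.standard_normal((N, N)))    # a stand-in realizable system matrix

    Pi = np.diag(rng.uniform(0.5, 2.0, N))      # P_i: weights the sampling times
    Po = np.diag(rng.uniform(0.5, 2.0, N))      # P_o

    H = Po @ G @ np.linalg.inv(Pi)              # H = P_o G P_i⁻¹
    U, s, Vt = np.linalg.svd(H)
    Y, X = U * np.sqrt(N), Vt.T * np.sqrt(N)    # Euclidean decomposition of H

    Ypo = np.linalg.inv(Po) @ Y                 # Y_{P_o} = P_o⁻¹ Y
    Xpi = np.linalg.inv(Pi) @ X                 # X_{P_i} = P_i⁻¹ X
    Lam = np.diag(s)

    # Eq. 57: G = (Y_{P_o} Δf) Λ X_{P_i}^T P_i².
    print(np.allclose(G, (Ypo * df) @ Lam @ Xpi.T @ (Pi @ Pi)))     # True
    # Columns of X_{P_i} are orthogonal with norm sqrt(N) in the P_i inner product:
    print(np.allclose(Xpi.T @ (Pi @ Pi) @ Xpi, N * np.eye(N)))      # True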

8. CONCLUSIONS

It has been shown that multivariable, time-varying, discrete-time, linear systems can be handled in a manner equivalent to single-input, single-output systems. A transform technique for the latter systems is extended to multivariable systems, and frequency response concepts are shown to carry over in a straightforward way to multivariable systems. The key point of the present development has been a de-emphasis of the channelized character of the input or output and the treatment of an arbitrary input or output as a single vector in a linear vector space. Thus, much of the insight associated with single-input, single-output systems has validity for multivariable systems.

REFERENCES

1. B. Friedland, "A Technique for the Analysis of Time-Varying Sampled-Data Systems," Transactions of the AIEE, Vol. 75, January 1957, pp. 407-412.

2. J. B. Cruz, "Sensitivity Considerations for Time-Varying Sampled-Data Feedback Systems," IRE Transactions on Automatic Control, May 1961, pp. 228-236.

3. A. W. Naylor, "Generalized Frequency Response Concepts for Time-Varying, Discrete-Time Linear Systems," IEEE Transactions on Circuit Theory, September 1963.

4. L. Zadeh, "A General Theory of Linear Signal Transmission Systems," J. Franklin Inst., April 1952, pp. 293-312.

5. C. Lanczos, "Linear Systems in Self-Adjoint Form," Amer. Math. Monthly, Vol. 65, 1958, pp. 665-679.

6. T. N. E. Greville, "The Pseudoinverse of a Rectangular or Singular Matrix and its Application to the Solution of Systems of Linear Equations," SIAM Review, Vol. 1, No. 1, January 1959.

7. R. Penrose, "A Generalized Inverse for Matrices," Proc. Cambridge Philos. Soc., Vol. 51, 1955, p. 406.

DISTRIBUTION LIST

Copy No.
1-2  Commanding Officer, U. S. Army Electronics Research and Development Laboratory, Fort Monmouth, New Jersey, ATTN: Senior Scientist, Electronic Warfare Division
3  Commanding General, U. S. Army Electronic Proving Ground, Fort Huachuca, Arizona, ATTN: Director, Electronic Warfare Department
4  Chief, Research and Development Division, Office of the Chief Signal Officer, Department of the Army, Washington 25, D. C., ATTN: SIGEB
5  Commanding Officer, Signal Corps Electronic Research Unit, 9560th USASRU, P. O. Box 205, Mountain View, California
6  U. S. Atomic Energy Commission, 1901 Constitution Avenue, N. W., Washington 25, D. C., ATTN: Chief Librarian
7  Director, Central Intelligence Agency, 2430 E Street, N. W., Washington 25, D. C., ATTN: OCD
8  U. S. Army Research Liaison Officer, MIT-Lincoln Laboratory, Lexington 73, Massachusetts
9-18  Defense Documentation Center, Cameron Station, Alexandria, Virginia
19  Commander, Air Research and Development Command, Andrews Air Force Base, Washington 25, D. C., ATTN: SCEC, Hq.
20  Directorate of Research and Development, USAF, Washington 25, D. C., ATTN: Electronic Division
21-22  Hqs., Aeronautical Systems Division, Air Force Command, Wright-Patterson Air Force Base, Ohio, ATTN: WWAD
23  Hqs., Aeronautical Systems Division, Air Force Command, Wright-Patterson Air Force Base, Ohio, ATTN: WCLGL-7
24  Air Force Liaison Office, Hexagon, Fort Monmouth, New Jersey - For retransmittal to - Packard Bell Electronics, P. O. Box 337, Newbury Park, California
25  Commander, Air Force Cambridge Research Center, L. G. Hanscom Field, Bedford, Massachusetts, ATTN: CROTLR-2
26-27  Commander, Rome Air Development Center, Griffiss Air Force Base, New York, ATTN: RCSSLD - For retransmittal to - Ohio State University Research Foundation
28  Commander, Air Proving Ground Center, ATTN: Adj/Technical Report Branch, Eglin Air Force Base, Florida
29  Chief, Bureau of Naval Weapons, Code RRR-E, Department of the Navy, Washington 25, D. C.

30  Chief of Naval Operations, EW Systems Branch, OP-35, Department of the Navy, Washington 25, D. C.
31  Chief, Bureau of Ships, Code 691C, Department of the Navy, Washington 25, D. C.
32  Chief, Bureau of Ships, Code 684, Department of the Navy, Washington 25, D. C.
33  Chief, Bureau of Naval Weapons, Code RAAV-33, Department of the Navy, Washington 25, D. C.
34  Commander, Naval Ordnance Test Station, Inyokern, China Lake, California, ATTN: Test Director - Code 30
35  Director, Naval Research Laboratory, Countermeasures Branch, Code 5430, Washington 25, D. C.
36  Director, Naval Research Laboratory, Washington 25, D. C., ATTN: Code 2021
37  Director, Air University Library, Maxwell Air Force Base, Alabama, ATTN: CR-4987
38  Commanding Officer - Director, U. S. Naval Electronics Laboratory, San Diego 52, California
39  Office of the Chief of Ordnance, Department of the Army, Washington 25, D. C., ATTN: ORDTU
40  Commanding Officer, U. S. Naval Ordnance Laboratory, Silver Spring 19, Maryland
41-42  Chief, U. S. Army Security Agency, Arlington Hall Station, Arlington 12, Virginia, ATTN: IADEV
43  President, U. S. Army Defense Board, Headquarters, Fort Bliss, Texas
44  President, U. S. Army Airborne and Electronics Board, Fort Bragg, North Carolina
45  U. S. Army Antiaircraft Artillery and Guided Missile School, Fort Bliss, Texas
46  Commander, USAF Security Service, San Antonio, Texas, ATTN: CLR
47  Chief, Naval Research, Department of the Navy, Washington 25, D. C., ATTN: Code 931
48  Commanding Officer, 52d U. S. Army Security Agency, Special Operations Command, Fort Huachuca, Arizona

49  President, U. S. Army Security Agency Board, Arlington Hall Station, Arlington 12, Virginia
50  The Research Analysis Corporation, 6935 Arlington Rd., Bethesda 14, Maryland, ATTN: Librarian
51  Carlyle Barton Laboratory, The Johns Hopkins University, Charles and 34th Streets, Baltimore 18, Maryland
52  Stanford Electronics Laboratories, Stanford University, Stanford, California, ATTN: Applied Electronics Laboratory Document Library
53  HRB - Singer, Inc., Science Park, State College, Pennsylvania, ATTN: R. A. Evans, Manager, Technical Information Center
54  ITT Laboratories, 500 Washington Avenue, Nutley 10, New Jersey, ATTN: Mr. L. A. DeRosa, Div. R-15 Lab.
55  Director, USAF Project Rand, via Air Force Liaison Office, The Rand Corporation, 1700 Main Street, Santa Monica, California
56  Stanford Electronics Laboratories, Stanford University, Stanford, California, ATTN: Dr. R. C. Cumming
57  Stanford Research Institute, Menlo Park, California
58-59  Commanding Officer, U. S. Army Signal Missile Support Agency, White Sands Missile Range, New Mexico, ATTN: SIGWS-EW and SIGWS-FC
60  Commanding Officer, U. S. Naval Air Development Center, Johnsville, Pennsylvania, ATTN: Naval Air Development Center Library
61  Commanding Officer, U. S. Army Electronics Research and Development Laboratory, Fort Monmouth, New Jersey, ATTN: U. S. Marine Corps Liaison Office, Code AO-C
62  Director, Fort Monmouth Office, Communications-Electronics Combat Development Agency, Bldg. 410, Fort Monmouth, New Jersey
63-71  Commanding Officer, U. S. Army Electronics Research and Development Laboratory, Fort Monmouth, New Jersey, ATTN:
    1 Copy - Director of Research
    1 Copy - Technical Documents Center ADT/E
    1 Copy - Chief, Special Devices Branch, Electronic Warfare Div.
    1 Copy - Chief, Advanced Techniques Branch, Electronic Warfare Div.
    1 Copy - Chief, Jamming and Deception Branch, Electronic Warfare Div.
    1 Copy - File Unit No. 2, Mail and Records, Electronic Warfare Div.
    3 Copies - Chief, Security Division (For retransmittal to - EJSM)

72  Director, National Security Agency, Fort George G. Meade, Maryland, ATTN: TEC
73  Dr. B. F. Barton, Director, Cooley Electronics Laboratory, The University of Michigan, Ann Arbor, Michigan
74-97  Cooley Electronics Laboratory Project File, The University of Michigan, Ann Arbor, Michigan
98  Project File, The University of Michigan Office of Research Administration, Ann Arbor, Michigan
99  Bureau of Naval Weapons Representative, Lockheed Missiles and Space Co., P. O. Box 504, Sunnyvale, California - For forwarding to - Lockheed Aircraft Corp.
100  Lockheed Aircraft Corp., Technical Information Center, 3251 Hanover Street, Palo Alto, California

Above distribution is effected by Electronic Warfare Division, Surveillance Department, USAELRDL, Evans Area, Belmar, New Jersey. For further information contact Mr. I. O. Myers, Senior Scientist, Telephone 5961262.
