RSD-TR-16-84

SIGNAL ESTIMATION FOR SECOND-ORDER VECTOR DIFFERENCE EQUATIONS*

A. I. Iskanderani
N. Harris McClamroch

Program in Computer Information and Control Engineering
The University of Michigan
Ann Arbor, Michigan 48109-1109

September 1984

CENTER FOR ROBOTICS AND INTEGRATED MANUFACTURING
Robot Systems Division
COLLEGE OF ENGINEERING
THE UNIVERSITY OF MICHIGAN
ANN ARBOR, MICHIGAN 48109-1109

*This work was supported by the Air Force Office of Scientific Research/AFSC, United States Air Force, under AFOSR contract number F49620-82-C-0089.

TABLE OF CONTENTS

1. INTRODUCTION
2. SIGNAL MODEL AND BASIC ASSUMPTIONS
3. EVOLUTION OF THE CONDITIONAL MEAN
4. GAIN COMPUTATIONS
5. EVOLUTION OF COVARIANCE MATRICES
6. INNOVATIONS REPRESENTATIONS
7. MAIN RESULTS
8. CONCLUSIONS
9. REFERENCES

ABSTRACT

This paper considers a linear estimation problem for a stochastic process viewed as the output signal of a linear second-order vector difference equation (VDE) driven by a white-noise input. An innovations approach is applied directly to develop the one-stage prediction estimator and the associated error covariances. It is shown that the estimator can be expressed as a second-order recursion that preserves the mathematical structure of the given signal model, with innovations feedback loops. It is also shown that the innovations can be computed through a first-order recursion in terms of the one-stage prediction estimates and the measurements.

1. INTRODUCTION

Discrete-time stochastic processes can be represented in various ways, for example using state-space realizations, transfer functions, or vector difference equations. There is an extensive theory of recursive estimation based on state-space realizations [2,5]. Our objective here is to develop a theory of recursive estimation when the stochastic process is characterized in terms of linear second-order vector difference equations (VDE). Such representations have been used in [4] in an image processing context. Under certain sampling assumptions, models of elastic mechanical systems can be described in terms of second-order vector difference equations [6]. Other applications where the signal dynamics are inherently second-order can, no doubt, be given.

A recursive equation for the one-stage prediction estimate is developed here, based on a linear second-order VDE signal model. The innovations approach [1,3] is applied directly to the second-order model, thereby producing a recursion for the one-stage prediction estimate in second-order form. The resulting estimator is shown to have the same mathematical structure as the given signal model, with innovations feedback loops.

2. SIGNAL MODEL AND BASIC ASSUMPTIONS

The signal model considered here is a linear second-order VDE given by

    x_{k+1} = A_k x_k + D_k x_{k-1} + \Gamma_k w_k                              (1)

    y_k = C_k x_k + E_k x_{k-1} + v_k                                           (2)

where k = 1, 2, 3, ..., x_0 and x_1 are the initial vectors, and {x_k} is an n-vector stochastic process. A_k and D_k are real n x n index-varying matrices; {w_k} is an r-vector zero-mean gaussian white-noise process with covariance

    E[w_k w_l^T] = Q_k \delta_{kl}                                              (3)

where Q_k is an r x r index-varying matrix, \delta_{kl} = 1 for k = l, \delta_{kl} = 0 for k \neq l, and \Gamma_k is an n x r index-varying matrix. {y_k} is an m-vector output measurement process, C_k and E_k are real m x n measurement matrices, and {v_k} is an m-vector zero-mean gaussian white-noise process with covariance

    E[v_k v_l^T] = R_k \delta_{kl}                                              (4)

Assume also the following.

(1) The initial vectors x_0 and x_1 are jointly gaussian random vectors with means

    E[x_0] = \bar{x}_0  and  E[x_1] = \bar{x}_1                                 (5)

and covariances

    E[(x_0 - \bar{x}_0)(x_0 - \bar{x}_0)^T] = \Sigma_{0|0}                      (6)

    E[(x_1 - \bar{x}_1)(x_1 - \bar{x}_1)^T] = \Sigma_{1|0}                      (7)

    E[(x_0 - \bar{x}_0)(x_1 - \bar{x}_1)^T] = \Pi_{1|0}                         (8)

(2) The vectors x_0, x_1, {v_k}, and {w_k} are mutually independent.

(3) R_k is an m x m positive-definite matrix for each k.

The one-stage prediction estimator could be developed by converting the given second-order VDE to a state-variable form and applying Kalman filtering techniques. Our approach is to develop the estimator directly using an innovations approach [1,3].
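As a concrete illustration of (1)-(2), the scalar case n = m = r = 1 can be simulated directly. The following sketch assumes index-invariant coefficients; all numerical values are illustrative choices, not taken from the paper.

```python
import random

# Scalar (n = m = r = 1) instance of the signal model (1)-(2).
# A, D, GAMMA, C, E, Q, R play the roles of A_k, D_k, Gamma_k, C_k, E_k,
# Q_k, R_k; they are held constant here purely for illustration.
A, D, GAMMA = 0.5, 0.2, 1.0
C, E = 1.0, 0.3
Q, R = 0.1, 0.05

def simulate(N, x0=0.0, x1=0.0, seed=0):
    """Generate {x_k}, k = 0, ..., N+1, and {y_k}, k = 1, ..., N."""
    rng = random.Random(seed)
    x = [x0, x1]
    y = [None]  # y_0 is unused; measurements start at k = 1
    for k in range(1, N + 1):
        v = rng.gauss(0.0, R ** 0.5)
        y.append(C * x[k] + E * x[k - 1] + v)            # measurement (2)
        w = rng.gauss(0.0, Q ** 0.5)
        x.append(A * x[k] + D * x[k - 1] + GAMMA * w)    # signal (1)
    return x, y

xs, ys = simulate(500)
print(len(xs), len(ys))  # 502 501
```

With these coefficients the characteristic roots of z^2 - 0.5 z - 0.2 lie inside the unit circle, so the simulated signal is stable.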

3. EVOLUTION OF THE CONDITIONAL MEAN

It is desired to find a recursive equation for the conditional mean

    \hat{x}_{k+1|k} = E[x_{k+1} | Y_k]                                          (9)

where Y_k = {y_1, y_2, ..., y_k}. The function \hat{x}_{k+1|k} is referred to as the one-stage prediction estimate of x_{k+1} given Y_k. Define the set \tilde{Y}_k = {\tilde{y}_1, \tilde{y}_2, ..., \tilde{y}_k}, where {\tilde{y}_k} is called the innovations sequence of {y_k}, defined by

    \tilde{y}_k = y_k - E[y_k | Y_{k-1}] = y_k - C_k \hat{x}_{k|k-1} - E_k \hat{x}_{k-1|k-1}    (10)

where E[y_k | Y_{k-1}] is the conditional mean of y_k given Y_{k-1} = {y_1, y_2, ..., y_{k-1}}. The innovations sequence defined by (10) has three properties that are exploited in our subsequent development: first, it has zero mean; second, the set \tilde{Y}_k constitutes an independent set; and third, the sets Y_k and \tilde{Y}_k span the same space. The proofs of these properties can be found in [2]. For simplicity in the subsequent formulas, define \tilde{y}_0 = 0. Without loss of generality, assume \bar{x}_0 = \bar{x}_1 = 0 in the subsequent development.

Our approach is to make use of the properties of the innovations to write

    \hat{x}_{k+1|k} = E[x_{k+1} | \tilde{Y}_{k-2}, \tilde{y}_{k-1}, \tilde{y}_k]
                    = E[x_{k+1} | \tilde{y}_k] + E[x_{k+1} | \tilde{y}_{k-1}] + E[x_{k+1} | \tilde{Y}_{k-2}]    (11)

Next, each term of (11) is evaluated separately. The first term of (11) is evaluated as

    E[x_{k+1} | \tilde{y}_k] = G_k^2 \tilde{y}_k                                (12)

where the n x m gain matrix G_k^2 is defined by

    G_k^2 = cov(x_{k+1}, \tilde{y}_k) [cov(\tilde{y}_k, \tilde{y}_k)]^{-1}      (13)
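The gain definition (13) is the usual orthogonal-projection formula for zero-mean, jointly gaussian variables: the conditional mean given an innovation is a gain times that innovation. A quick Monte Carlo check of the scalar case, using an illustrative joint distribution of my own choosing:

```python
import random

# For zero-mean jointly gaussian scalars, E[x | yt] = (cov(x, yt) / var(yt)) yt,
# which is the scalar form of the gain definitions (13) and (15).
# Here x = u + 0.5 v and yt = u + v for independent standard gaussians
# u, v, so cov(x, yt) = 1.5, var(yt) = 2, and the exact gain is 0.75.
rng = random.Random(0)
n = 200_000
sxy = syy = 0.0
for _ in range(n):
    u, v = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    x, yt = u + 0.5 * v, u + v
    sxy += x * yt
    syy += yt * yt

gain = sxy / syy  # empirical cov(x, yt) / var(yt)
print(gain)       # close to the exact value 0.75
```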

It can be shown that w_k and \tilde{y}_{k-1} are independent, so that the second term of (11) is

    E[x_{k+1} | \tilde{y}_{k-1}] = E[A_k x_k + D_k x_{k-1} + \Gamma_k w_k | \tilde{y}_{k-1}]
                                 = A_k E[x_k | \tilde{y}_{k-1}] + D_k G_{k-1}^1 \tilde{y}_{k-1}    (14)

where the n x m gain matrix G_k^1 is defined by

    G_k^1 = cov(x_k, \tilde{y}_k) [cov(\tilde{y}_k, \tilde{y}_k)]^{-1}          (15)

Since w_k and \tilde{Y}_{k-2} are independent, the third term of (11) is evaluated as

    E[x_{k+1} | \tilde{Y}_{k-2}] = E[A_k x_k + D_k x_{k-1} + \Gamma_k w_k | \tilde{Y}_{k-2}]
        = A_k E[x_k | \tilde{Y}_{k-2}] + D_k E[x_{k-1} | \tilde{Y}_{k-2}]
        = A_k E[x_k | \tilde{Y}_{k-2}] + D_k \hat{x}_{k-1|k-2}
          + A_k E[x_k | \tilde{y}_{k-1}] - A_k E[x_k | \tilde{y}_{k-1}]
        = A_k \hat{x}_{k|k-1} + D_k \hat{x}_{k-1|k-2} - A_k E[x_k | \tilde{y}_{k-1}]    (16)

Hence, substituting from (12), (14), and (16) into (11), the one-stage prediction estimator can be written as

    \hat{x}_{k+1|k} = A_k \hat{x}_{k|k-1} + D_k \hat{x}_{k-1|k-2} + G_k^2 \tilde{y}_k + D_k G_{k-1}^1 \tilde{y}_{k-1}    (17)

where G_k^1 and G_k^2 are as defined by (15) and (13), and {\tilde{y}_k} is given by (10). In order to express \tilde{y}_k in terms of one-stage prediction estimates, a first-order recurrence relation is developed for \tilde{y}_k in a later section. Equations (13), (15), and (17) yield a recursive one-stage prediction estimator. It is interesting to note that (17) is a second-order recursion that preserves the form of the second-order VDE signal model, with innovations feedback loops. A similar observation has been made in [4].

Given the one-stage prediction estimate \hat{x}_{k|k-1}, the filtered estimate defined by

    \hat{x}_{k|k} = E[x_k | Y_k]                                                (18)

can be determined in terms of the one-stage prediction estimate as follows:

    \hat{x}_{k|k} = E[x_k | \tilde{Y}_{k-1}, \tilde{y}_k] = \hat{x}_{k|k-1} + E[x_k | \tilde{y}_k]
                  = \hat{x}_{k|k-1} + G_k^1 \tilde{y}_k                         (19)

where G_k^1 is given by (15).

4. GAIN COMPUTATIONS

The estimator that has been developed in the preceding section involves two gain matrices, G_k^1 and G_k^2, expressed in terms of the indicated covariances. These gain matrices are computed in this section. The prediction and filtered error vectors are defined respectively by

    \tilde{x}_{k|k-1} = x_k - \hat{x}_{k|k-1}                                   (20)

    \tilde{x}_{k|k} = x_k - \hat{x}_{k|k}                                       (21)

Next observe that the innovations sequence {\tilde{y}_k} can be written as

    \tilde{y}_k = C_k \tilde{x}_{k|k-1} + E_k \tilde{x}_{k-1|k-1} + v_k         (22)

Define the covariance matrices by

    \Sigma_{k|k-1} = E[\tilde{x}_{k|k-1} \tilde{x}_{k|k-1}^T]                   (23)

    \Sigma_{k|k} = E[\tilde{x}_{k|k} \tilde{x}_{k|k}^T]                         (24)

    \Pi_{k|k-1} = E[\tilde{x}_{k-1|k-1} \tilde{x}_{k|k-1}^T]                    (25)

Now cov(\tilde{y}_k, \tilde{y}_k) is evaluated as

    cov(\tilde{y}_k, \tilde{y}_k) = C_k \Sigma_{k|k-1} C_k^T + C_k \Pi_{k|k-1}^T E_k^T + E_k \Pi_{k|k-1} C_k^T
                                    + E_k \Sigma_{k-1|k-1} E_k^T + R_k = K_k    (26)

Clearly K_k is an m x m positive-definite matrix. Next, G_k^1 can be evaluated as

    G_k^1 = [\Sigma_{k|k-1} C_k^T + \Pi_{k|k-1}^T E_k^T] K_k^{-1}               (27)

G_k^2 is computed in a similar way as

    G_k^2 = A_k G_k^1 + D_k [\Pi_{k|k-1} C_k^T + \Sigma_{k-1|k-1} E_k^T] K_k^{-1}    (28)

The gain matrices G_k^1 and G_k^2 thus depend on the covariance matrices \Sigma_{k|k-1}, \Pi_{k|k-1}, and \Sigma_{k-1|k-1}. Recursive formulas for these covariance matrices are developed in the next section.

5. EVOLUTION OF COVARIANCE MATRICES

In this section, recursive formulas for the previously mentioned covariance matrices are developed. First note that if (17) is subtracted from (1),

    \tilde{x}_{k+1|k} = A_k \tilde{x}_{k|k-1} + D_k \tilde{x}_{k-1|k-2} + \Gamma_k w_k
                        - G_k^2 \tilde{y}_k - D_k G_{k-1}^1 \tilde{y}_{k-1}     (29)

Also, from (19),

    \tilde{x}_{k+1|k+1} = \tilde{x}_{k+1|k} - G_{k+1}^1 \tilde{y}_{k+1}         (30)

and from (29) and (30),

    \tilde{x}_{k+1|k} = A_k \tilde{x}_{k|k} + D_k \tilde{x}_{k-1|k-1} + \Gamma_k w_k
                        - (G_k^2 - A_k G_k^1) \tilde{y}_k                       (31)

Then from (24) and (30), the evolution of \Sigma_{k|k} can be shown to be given by

    \Sigma_{k|k} = \Sigma_{k|k-1} - G_k^1 C_k \Sigma_{k|k-1} - G_k^1 E_k \Pi_{k|k-1}    (32)

Next, from (23), (31), and some algebraic manipulations,

    \Sigma_{k+1|k} = (D_k - G_k^2 E_k)(\Sigma_{k-1|k-1} D_k^T + \Pi_{k|k-1} A_k^T)
                     + (A_k - G_k^2 C_k)(\Pi_{k|k-1}^T D_k^T + \Sigma_{k|k-1} A_k^T)
                     + \Gamma_k Q_k \Gamma_k^T                                  (33)

Finally, a recursive formula for \Pi_{k+1|k} is obtained in a similar way as

    \Pi_{k+1|k} = \Sigma_{k|k} A_k^T + (\Pi_{k|k-1}^T - G_k^1 C_k \Pi_{k|k-1}^T - G_k^1 E_k \Sigma_{k-1|k-1}) D_k^T    (34)

The recursive equations (32)-(34) are initialized by the initial matrices (6)-(8).

6. INNOVATIONS REPRESENTATIONS

A useful characterization of the innovations in terms of the one-stage prediction estimates and the measurements is now developed. The innovations sequence {\tilde{y}_k}, defined by (10), can be written as

    \tilde{y}_k = y_k - C_k \hat{x}_{k|k-1} - E_k \hat{x}_{k-1|k-1}             (35)

Hence \tilde{y}_k is a linear combination of both the one-stage prediction and the filtered estimates. Since our estimator (17) is expressed entirely in terms of one-stage prediction estimates, it is desirable to characterize \tilde{y}_k through one-stage prediction estimates only. This can be done by substituting from (19) into (35), which yields

    \tilde{y}_k + E_k G_{k-1}^1 \tilde{y}_{k-1} = y_k - C_k \hat{x}_{k|k-1} - E_k \hat{x}_{k-1|k-2}    (36)

with

    \tilde{y}_1 = y_1 - C_1 \bar{x}_1 - E_1 \bar{x}_0,    \tilde{y}_0 = 0       (37)

which shows that the innovations satisfy a first-order recursion driven by the one-stage prediction estimates and the measurements.
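At this point all of the pieces of the estimator are available: the predictor (17), the gains (27)-(28), the covariance recursions (32)-(34), and the innovations recursion (36)-(37). The following sketch assembles them for the scalar case, where all transposes drop out, and runs the filter on simulated data. The coefficient values and the deterministic initial condition are illustrative choices, not taken from the paper.

```python
import random

# End-to-end scalar (n = m = r = 1) sketch: simulate the model (1)-(2),
# then run the predictor (17) with gains (27)-(28), covariance
# recursions (32)-(34), and the innovations recursion (36)-(37).
A, D, GAMMA, C, E, Q, R = 0.5, 0.2, 1.0, 1.0, 0.3, 0.1, 0.05

rng = random.Random(1)
N = 2000
x = [0.0, 0.0]          # x_0, x_1 known exactly here, so xbar_0 = xbar_1 = 0
y = [0.0]               # y_0 unused
for k in range(1, N + 1):
    y.append(C * x[k] + E * x[k - 1] + rng.gauss(0.0, R ** 0.5))          # (2)
    x.append(A * x[k] + D * x[k - 1] + GAMMA * rng.gauss(0.0, Q ** 0.5))  # (1)

xh_prev, xh = 0.0, 0.0        # xhat_{0|-1} and xhat_{1|0}
yt_prev, g1_prev = 0.0, 0.0   # ytilde_0 = 0, cf. (37)
Sp, Sf, Pi = 0.0, 0.0, 0.0    # Sigma_{1|0}, Sigma_{0|0}, Pi_{1|0}: zero,
                              # since x_0, x_1 are known exactly
err2 = sig2 = 0.0
for k in range(1, N + 1):
    K = C * Sp * C + 2 * C * Pi * E + E * Sf * E + R                  # (26)
    g1 = (Sp * C + Pi * E) / K                                        # (27)
    g2 = A * g1 + D * (Pi * C + Sf * E) / K                           # (28)
    yt = y[k] - C * xh - E * xh_prev - E * g1_prev * yt_prev          # (36)-(37)
    xh_next = A * xh + D * xh_prev + g2 * yt + D * g1_prev * yt_prev  # (17)
    Sf_new = Sp - g1 * C * Sp - g1 * E * Pi                           # (32)
    Sp_new = ((D - g2 * E) * (Sf * D + Pi * A)
              + (A - g2 * C) * (Pi * D + Sp * A) + GAMMA * Q * GAMMA) # (33)
    Pi = Sf_new * A + (Pi - g1 * C * Pi - g1 * E * Sf) * D            # (34)
    Sp, Sf = Sp_new, Sf_new
    err2 += (x[k + 1] - xh_next) ** 2
    sig2 += x[k + 1] ** 2
    xh_prev, xh = xh, xh_next
    yt_prev, g1_prev = yt, g1

print(err2 / N, sig2 / N)   # one-step prediction MSE vs. raw signal power
assert err2 < sig2          # the estimator beats the trivial zero predictor
```

For this stable model the realized one-step prediction error settles near the steady-state value of \Sigma_{k|k-1}, well below the signal variance.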

7. MAIN RESULTS

The results can be summarized in the following theorem.

Theorem 1. The one-stage prediction estimator for the system (1) and (2), with the stated assumptions, is of the form

    \hat{x}_{k+1|k} = A_k \hat{x}_{k|k-1} + D_k \hat{x}_{k-1|k-2} + G_k^2 \tilde{y}_k + D_k G_{k-1}^1 \tilde{y}_{k-1}    (38)

for k = 1, 2, ..., with initial vectors

    \hat{x}_{0|-1} = \bar{x}_0  and  \hat{x}_{1|0} = \bar{x}_1                  (39)

The innovations satisfy

    \tilde{y}_k + E_k G_{k-1}^1 \tilde{y}_{k-1} = y_k - C_k \hat{x}_{k|k-1} - E_k \hat{x}_{k-1|k-2},    k = 2, 3, ...    (40)

    \tilde{y}_1 = y_1 - C_1 \bar{x}_1 - E_1 \bar{x}_0,    \tilde{y}_0 = 0       (41)

The gains G_k^1 and G_k^2 are given by (27) and (28), and the associated covariances are given by (32)-(34), initialized by (6)-(8). In addition, the filtered estimate is given by

    \hat{x}_{k|k} = \hat{x}_{k|k-1} + G_k^1 \tilde{y}_k                         (42)

8. CONCLUSIONS

It has been demonstrated that the innovations approach can be applied directly to estimate signals described by linear second-order vector difference equations. This yields a recursive one-stage prediction estimator in second-order form that preserves the structure of the signal model with innovations feedback. It has also been shown that the innovations can be computed through a recurrence relation based on the knowledge of

one-stage prediction estimates and the measurements. This approach can be easily extended to obtain similar recursions for the filtered estimates, and for the innovations expressed in terms of the filtered estimates and the measurements. In addition, the approach can be extended to the estimation of signals that are described in terms of higher-order vector difference equations.

9. REFERENCES

(1) Aasnaes, H. B., and Kailath, T., "An Innovations Approach to Least-squares Estimation - Part VII: Some Applications of Vector Autoregressive-Moving Average Models," IEEE Trans. on Autom. Control, Vol. AC-18, No. 6, pp. 601-607, Dec. 1973.

(2) Anderson, B. D. O., and Moore, J. B., Optimal Filtering, Prentice-Hall, 1979.

(3) Gevers, M. R., and Kailath, T., "An Innovations Approach to Least-squares Estimation - Part VI: Discrete-time Innovations Representations and Recursive Estimation," IEEE Trans. on Autom. Control, Vol. AC-18, No. 6, pp. 588-600, Dec. 1973.

(4) Jain, A. K., and Angel, E., "Image Restoration, Modeling, and Reduction of Dimensionality," IEEE Trans. on Computers, Vol. C-23, pp. 470-476, May 1974.

(5) Kalman, R. E., "A New Approach to Linear Filtering and Prediction Problems," Journal of Basic Eng., Vol. 82, pp. 34-45, March 1960.

(6) McClamroch, N. H., "Stabilization of Elastic Systems Using Sampled Data Feedback," Proceedings of the 23rd Conference on Decision and Control, Las Vegas, Nov. 1984.