PARAMETRIC SPECTRAL ESTIMATION: A FAST RATIONAL MODEL APPROACH

Spilios D. Fassois*

Report UM-MEAM-89-04

(c) 1989 by Spilios D. Fassois. All rights reserved.

* Assistant Professor, Mechanical Engineering and Applied Mechanics


ABSTRACT

A novel approach to fast ARMA spectral estimation is proposed. The proposed approach is based on a correlation version of the recently introduced Suboptimum Maximum Likelihood technique, and significantly reduces the computational and memory storage requirements while maintaining high estimation accuracy. The consistency of the proposed estimator is established, and two modified versions are developed which overcome the algorithmic instability problem associated with most available techniques and guarantee appropriate estimation even for spectra characterized by sharp spectral zeros. The effectiveness and accuracy of the proposed approach, with both long and short data records, is finally established through numerical simulations and comparisons with the Blackman-Tukey and the AR spectral analysis methods.

1 INTRODUCTION

In characterizing a second-order stationary stochastic process, the power spectrum is an often preferred descriptive statistic, since it conveniently conveys meaningful information, reveals hidden periodicities, and may be efficiently used for discrimination and classification of time series data. As a consequence, a reliable and computationally efficient spectral estimator is important in many applications (Wu et al, 1980; Braun, 1986). Because of the well-known limitations of the classical Blackman-Tukey and periodogram approaches (Kay and Marple, 1981), modern, parametric techniques, such as the autoregressive (AR), moving-average (MA), and autoregressive moving-average (ARMA) spectral analysis methods, have been more recently proposed (Kay and Marple, 1981; Kay, 1988). Among these, and mainly because of its associated computational efficiency, AR spectral estimation is the one that has been most extensively studied and applied (Makhoul, 1975; Kay and Marple, 1981). Nevertheless, for a variety of reasons, which include improved performance and parsimony, a strong interest in ARMA (or rational model) spectral estimation has been generated (Gutowski et al, 1978; Kaveh, 1979; Kay, 1980; Kay and Marple, 1981; Friedlander and Sharman, 1985; Kay, 1988). The difficulties associated with ARMA spectral estimation stem from the fact that the Maximum Likelihood (ML) method, which is known to yield statistically optimal (consistent, efficient, and asymptotically Gaussian) estimates, is rather complicated, and leads to a highly nonlinear likelihood function that needs to be maximized through cumbersome, iterative optimization techniques (Box and Jenkins, 1970; Makhoul, 1974). In addition to the high computational complexity of the problem, the likelihood function may also be characterized by several local maxima; a fact that may lead to completely erroneous results (Åström and Söderström, 1974).
In order to overcome these difficulties, several suboptimum methods have been introduced.

A rather popular family of such methods is based on the extended Yule-Walker equations for the estimation of the AR parameters, and subsequent spectral factorization, or related techniques, for the MA parameter evaluation (Kaveh, 1979; Kay, 1980; Cadzow, 1980, 1982; Friedlander, 1983; Moses et al, 1985). The performance of the extended Yule-Walker equations estimator is, however, known to be poor (Kay and Marple, 1981; Marple, 1987), and special procedures, such as the use of an overdetermined set of equations (Cadzow, 1980, 1982; Friedlander and Sharman, 1985), are typically necessary. Additional difficulties are also encountered with the available MA parameter estimation techniques in the important case of signals with MA roots close to the unit circle (spectra characterized by sharp spectral zeros) (Friedlander, 1983). An alternative, simple approach has been proposed by Graupe et al (1975), and is also used by Pandit and Wu (1983) for the estimation of initial parameter values which are subsequently refined by a nonlinear optimization scheme. Gutowski et al (1978) proposed a technique which is based on an iterative estimation scheme. Convergence may, however, be quite slow, and relatively long data records are required for effective estimation (Cadzow, 1980). Finally, a linear, multi-stage approach has been proposed by Mayne and Firoozan (1982). This method tends to be computationally more involved, and, similarly to most available techniques, it has no guaranteed stability, and is thus inappropriate for the estimation of spectra characterized by sharp spectral zeros (Kay and Marple, 1981). In this paper, a new, fast, ARMA spectral estimation method, which can handle all types of spectra while also offering high accuracy at a minimal computational cost, is proposed.
The proposed method is based on a correlation version of the recently introduced Suboptimum Maximum Likelihood approach; an estimation technique that utilizes a modified version of the asymptotic likelihood function, and has been shown to offer high accuracy at a computational complexity which is only a small fraction of that required by the ML method (Fassois et al, 1988). The fast correlation version discussed in this paper further reduces the computational

load to an absolute minimum, and makes the proposed spectral analysis algorithm an ideal candidate for fast, microprocessor implementation. The consistency of the algorithm is also established, and modified versions, characterized by theoretically guaranteed stability, are developed. The presentation is organized as follows: Certain preliminary observations on ARMA spectral estimation are discussed in subsection 2.1, and the Suboptimum ML approach is briefly reviewed in subsection 2.2. The fast spectral estimator is developed in Section 3, and its consistency established in Section 4. Two modified versions, suitable for the estimation of spectra characterized by sharp spectral zeros, are developed in Section 5, and simulation results and comparisons with the Blackman-Tukey and the AR spectral estimation schemes are presented in Section 6.

2 ARMA SPECTRAL ESTIMATION

2.1 Preliminaries

The ARMA spectral estimation problem may be defined as follows: Given a set of time series data {x[t]}, t = 1, ..., L, generated by a second-order stationary stochastic process characterized by the power spectrum Sxx(z), estimate a rational ARMA spectral model of the form:

    Sxx(z) = σe² · [θ(z)·θ(z⁻¹)] / [φ(z)·φ(z⁻¹)]    (1)

that "best" fits Sxx(z). In (1), z represents the z-transform variable, σe² a positive constant, and φ(z), θ(z) the monic AR and MA polynomials:

    φ(z) = 1 + φ1·z + .... + φn·zⁿ    (2a)
    θ(z) = 1 + θ1·z + .... + θm·zᵐ    (2b)

respectively. The theoretical justification for the validity of ARMA spectral estimation stems from the fact that any continuous spectral density function may be approximated arbitrarily

closely by an ARMA spectral model of suitable orders (Koopmans, 1974). Based on the representation theorem (Åström, 1970), it may then be concluded that any stochastic process {x[t]} with continuous spectral density Sxx°(z) admits an ARMA representation of the form¹:

    S:  φ°(B) · x[t] = θ°(B) · w[t]    (3)

where B represents the backshift operator (B·x[t] = x[t-1]), φ°(B), θ°(B) are of the form (2) and relatively prime, and {w[t]} is a zero-mean, white noise (innovations) process with variance σw². Since the spectral model (1) also admits such a representation, the ARMA spectral estimation problem is essentially reduced to the problem of estimating a model F(ψ), out of the model set:

    M = { F(ψ) : φ(B, ψ)·x[t] = θ(B, ψ)·e[t/ψ],  ψ = [σe, coef φ, coef θ] ∈ R+ × Φ(φ) × Θ(θ) }    (4)

that "best" describes the given time series. In expression (4), e[t/ψ] represents the one-step-ahead prediction error of the model, and Φ(φ), Θ(θ) the subsets of Rⁿ, Rᵐ, respectively, for which the polynomials φ*(z) ≜ zⁿ·φ(z⁻¹) and θ*(z) ≜ zᵐ·θ(z⁻¹) are minimum phase. The importance of ARMA spectral estimation may be further emphasized by noting that most discrete signals in the physical sciences and engineering are of the ARMA type. Indeed, the discrete covariance-invariant equivalent of a continuous AR process of order p is known to be an ARMA(p, p-1) model (Bartlett, 1946), and discrete AR sequences are transformed into ARMA processes in the presence of measurement noise. This may easily be seen by considering the superposition of the AR process:

    x[t] = (1/φ(B)) · w[t]    (5)

where E{w[t]·w[s]} = δt,s·σw², with δt,s representing the Kronecker delta and E{·} expectation, and measurement noise {n[t]}, with E{n[t]} = 0, E{n[t]·n[s]} = δt,s·σn², E{n[t]·w[s]} = 0 for all (t, s). The signal thus generated has power spectrum:

    S(z) = [σw² + σn² · φ(z)·φ(z⁻¹)] / [φ(z)·φ(z⁻¹)]    (6)

¹ The superscript ° is used to designate the actual process.

which is of the form (1), and clearly indicates an ARMA(n,n) process. When compared to the popular AR spectral analysis method, the ARMA approach offers not only parsimony, but also optimal spectral fitting. Indeed, despite the fact that any ARMA process may be approximated arbitrarily closely by an infinitely long AR model (Pandit and Wu, 1983), such an approximation is impractical, and a truncated AR model would require a large number of parameters, while still leading to an error (residual term) in the asymptotic (L → ∞) likelihood function, which may be shown to be:

    εAR(φ) = (1/2πj) ∮ { |φ̄(z) + Ḡ°(z)|² + |φ̄(z)|²·|Ḡ°(z)|² + φ̄(z)·Ḡ°(z)·[φ̄(z⁻¹) + Ḡ°(z⁻¹)] + φ̄(z⁻¹)·Ḡ°(z⁻¹)·[φ̄(z) + Ḡ°(z)] } · dz/z  >  0    (7)

where φ represents the AR model parameter vector, j the imaginary unit, G°(z) ≜ θ°(z)/φ°(z) the Green's function of the process, Ḡ°(z) ≜ G°(z) - 1, φ̄(z) ≜ φ(z) - 1, |f(z)|² ≜ f(z)·f(z⁻¹), and the integral is evaluated on the unit circle.

2.2 Suboptimum ML ARMA Estimation

In this subsection the Suboptimum ML method for ARMA process estimation is briefly reviewed. A more detailed discussion may be found in Fassois et al (1988). The ML estimation of an ARMA model F(ψ) within the set M may be shown (Hannan, 1960) to lead to the problem of minimizing the functional:

    VL(ψ1) = (1/L)·||e[t/ψ1]||²ℓ2 = (1/2πjL) ∮ [φ(z)·φ(z⁻¹)] / [θ(z)·θ(z⁻¹)] · |X(z)|² · dz/z    (8)

where the ℓ2-norm is defined as ||e[t]||ℓ2 ≜ (Σt e²[t])^(1/2), ψ1 ≜ [coef φ, coef θ], and X(z) represents the z-transform of the observed time series {x[t]}, t = 1, ..., L. From this expression the nonquadratic dependence of VL(ψ1) on θ(z), and hence the highly nonlinear nature of the optimization problem, is evident. The Suboptimum ML method overcomes this difficulty by modifying VL(ψ1) as:

    VLi(φi, θ̂i-1) = (1/2πjL) ∮ [φi(z)·φi(z⁻¹)] / [θ̂i-1(z)·θ̂i-1(z⁻¹)] · |X(z)|² · dz/z = (1/L)·||ei[t/φi, θ̂i-1]||²ℓ2    (9)

where i represents an iteration index, θ̂i-1(z) the estimate of θ°(z) at iteration i-1, and {ei[t/·,·]} a modified prediction error sequence. From expressions (8) and (9) it is evident that:

    θ̂i-1(z) → θ(z)  ⟹  VLi(φi, θ̂i-1) → VL(ψ1)    (10)

so that minimization of (9) leads to an approximate (suboptimum) ML estimator. Furthermore, {ei[t/·,·]} may be viewed as a prediction error sequence corresponding to the model set:

    M1 = { F(φi) : φi(B)·x̃i-1[t, θ̂i-1] = ei[t/φi, θ̂i-1],  φi ∈ Rⁿ }    (11)

where:

    x̃i-1[t, θ̂i-1] ≜ x̃i-1[t] = x[t] / θ̂i-1(B)    (12)

Since VLi(φi, θ̂i-1) is quadratic in φi, the AR parameter estimator:

    φ̂i ≜ arg min VLi(φi, θ̂i-1)    (13)

where arg min denotes the minimizing argument, may be evaluated through linear least squares (LLS). The estimation of the MA parameters is subsequently achieved through the expressions:

    θ̂ji = -Îj - Σ_{k=1}^{j-1} θ̂ki·Îj-k + φ̂ji,   j = 1, 2, ..., m    (14)

where {Îj} represents the inverse function of the ARMA model, defined as (Pandit and Wu, 1983):

    I(z⁻¹) ≜ φ(z⁻¹) / θ(z⁻¹) = 1 + I1·z⁻¹ + I2·z⁻² + ....    (15)

The required inverse function estimates may be obtained by observing that the ARMA process (3) can be equivalently expressed as:

    S:  I°(B)·x[t] = w[t]    (16)

where:

    I°(B) ≜ φ°(B) / θ°(B)    (17)

with {Ij} ∈ ℓ2. Truncated versions of the inverse function may thus be estimated within the model set:

    M2 = { F(Ī) : I(B, Ī)·x[t] = ep[t/Ī],  Ī ∈ Rᵖ }    (18)

where:

    Ī ≜ [I1  I2  ....  Ip]ᵀ    (19)

and {ep[t/Ī]} denotes the one-step-ahead prediction error sequence associated with the model in (18). This is achieved through the linear least squares estimator:

    Î = arg min (1/L)·||ep[t/Ī]||²ℓ2    (20)

After ψ̂1i = [φ̂iᵀ  θ̂iᵀ]ᵀ has been estimated, the prediction error (residual) sequence {e[t/ψ̂1i]} is evaluated as:

    e[t/ψ̂1i] = θ̂⁻¹(B, ψ̂1i)·φ̂(B, ψ̂1i)·x[t]    (21)

and the innovations variance estimated as:

    σ̂wi² = (1/L)·||e[t/ψ̂1i]||²ℓ2    (22)

Finally, for the initialization of the algorithm, an initial MA parameter estimate θ̂0 is required, and may be obtained through the expressions:

    Îj + Σ_{k=1}^{m} θ̂k0·Îj-k = 0,   j = max(n,m)+1, ...., max(n,m)+m    (23)

3 THE FAST SPECTRAL ESTIMATION ALGORITHM

The proposed fast ARMA spectral estimation algorithm is based on a correlation-equivalent version of the Suboptimum ML method and uses the autocovariance function estimates as a sufficient statistic. The computational and memory storage requirements are therefore significantly reduced, and the savings become even more dramatic in the case that the algorithm is iterated, or the process orders are unknown and a sequence of models needs to be fitted.

The Algorithm

For the estimation of a truncated version of the process inverse function, the model

form in (18) is expressed in the time domain as:

    Σ_{j=0}^{p} Ij·x[t-j] = ep[t/Ī]   (I0 = 1)    (24)

and the estimator expression (20) rewritten as:

    Î = arg min (1/L)·Σt ( Σ_{j=0}^{p} Ij·x'[t-j] )² = arg min Σ_{i=0}^{p} Σ_{j=0}^{p} Ii·Ij · (1/L)·Σt x'[t-i]·x'[t-j]    (25)

where t now varies from -∞ to +∞ and:

    x'[t] = x[t]  (1 ≤ t ≤ L);  0 elsewhere    (26)

By observing that:

    Σt x'[t-i]·x'[t-j] = Σ_{t=1}^{L-|i-j|} x'[t]·x'[t+|i-j|]   (i, j = 0, 1, 2, ...., p)    (27)

and defining the sample autocovariance function as:

    r̂x'x'[i, j] = r̂x'x'[|i-j|] ≜ (1/L)·Σ_{t=1}^{L-|i-j|} x'[t]·x'[t+|i-j|]   (i, j = 0, 1, 2, ...., p)    (28)

(25) may be expressed as:

    Î = arg min V̄L(Ī) = arg min Σ_{i=0}^{p} Σ_{j=0}^{p} Ii·Ij·r̂x'x'[|i-j|]    (29)

The conditions for minimization of this quadratic criterion are:

    ∂V̄L(Ī)/∂Ik = 0  ⟹  Σ_{j=1}^{p} Îj·r̂x'x'[|k-j|] + r̂x'x'[k] = 0   (k = 1, 2, ...., p)    (30)

and the resulting "normal equations" may be put in matrix form as:

    | r̂x'x'[0]     r̂x'x'[1]     ...  r̂x'x'[p-1] |   | Î1 |     | r̂x'x'[1] |
    | r̂x'x'[1]     r̂x'x'[0]     ...  r̂x'x'[p-2] | · | Î2 |  = -| r̂x'x'[2] |   ⟺   R̂·Î = -r̂    (31)
    |    ...          ...        ...      ...     |   | .. |     |    ...    |
    | r̂x'x'[p-1]   r̂x'x'[p-2]   ...  r̂x'x'[0]   |   | Îp |     | r̂x'x'[p] |

Since the system (31) is Hermitian Toeplitz, Durbin's algorithm (Makhoul, 1975) can be used for the efficient evaluation of Î as follows:

    E0 = r̂x'x'[0]    (32a)
    Ki = -( r̂x'x'[i] + Σ_{j=1}^{i-1} Îj^(i-1)·r̂x'x'[i-j] ) / Ei-1    (32b)
    Îi^(i) = Ki    (32c)
    Îj^(i) = Îj^(i-1) + Ki·Îi-j^(i-1)   (1 ≤ j ≤ i-1)    (32d)
    Ei = (1 - Ki²)·Ei-1    (32e)

where equations (32a)-(32e) are solved recursively for i = 1, 2, ..., p, and the final estimates are:

    Îj = Îj^(p)   (j = 1, 2, ...., p)    (33)

The computational load of this solution is p² + 2p multiplication/division operations, and is considerably lower than the p³/3 + 3p²/2 + p/6 operations required by standard techniques, such as Gaussian elimination. Once the truncated inverse function has been estimated, the initial MA parameters are obtained through expressions (23), exactly as in the original version of the Suboptimum ML method. The resulting system of equations is non-Hermitian Toeplitz and may be efficiently solved by using Zohar's algorithm, an approach that requires only 3m² multiplication operations (see Fassois et al, 1988). For the AR parameter estimation, the model form in (11) is rewritten as:

    Σ_{j=0}^{n} φji·x̃i-1[t-j, θ̂i-1] = ei[t/φi, θ̂i-1]   (φ0i = 1)    (34)

and the estimator (13) expressed as:

    φ̂i = arg min (1/L)·Σt ( Σ_{j=0}^{n} φji·x̃'i-1[t-j, θ̂i-1] )²    (35)

where t varies from -∞ to +∞, and:

    x̃'i-1[t] = x̃i-1[t] = x[t]/θ̂i-1(B)  (1 ≤ t ≤ L);  0 elsewhere    (36)

Similarly to the inverse function case, the estimator (35) may be rewritten as:

    φ̂i = arg min Σ_{j=0}^{n} Σ_{k=0}^{n} φji·φki·r̂x̃'x̃'[|j-k|]    (37)

where:

    r̂x̃'x̃'[j, k] = r̂x̃'x̃'[|j-k|] ≜ (1/L)·Σ_{t=1}^{L-|j-k|} x̃'i-1[t]·x̃'i-1[t+|j-k|]   (j, k = 0, 1, 2, ..., n)    (38)

and the normal equations obtained as:

    Σ_{j=1}^{n} φ̂ji·r̂x̃'x̃'[|k-j|] + r̂x̃'x̃'[k] = 0   (k = 1, 2, ..., n)    (39)

which form a Hermitian Toeplitz system that can also be solved via Durbin's recursion. In order for the AR parameter estimator to be complete, however, the autocovariance function estimates r̂x̃'x̃'[j] (j = 0, 1, ..., n) are also needed. According to the proposed algorithm, this is accomplished without operating on the original time series {x[t]}; a characteristic that yields significantly reduced computational and memory storage requirements. Indeed, based on (36), the autocovariance {rx'x'[k]} of the time series {x'[t]} may be expressed as:

    rx'x'[k] = E{ x'[t]·x'[t+k] } = Σ_{j=0}^{m} Σ_{ℓ=0}^{m} θ̂j^(i-1)·θ̂ℓ^(i-1)·rx̃'x̃'[k-ℓ+j] = Σ_{j=-m}^{m} dj·rx̃'x̃'[k-j]    (40)

where:

    dj ≜ Σ_{ℓ=0}^{m-|j|} θ̂ℓ^(i-1)·θ̂ℓ+|j|^(i-1)   (θ̂0 ≜ 1)    (41)

This expression relates the autocovariances {rx'x'[k]} and {rx̃'x̃'[k]}, and may be used for the estimation of the samples r̂x̃'x̃'[k] (0 ≤ k ≤ n) of the latter from the available samples r̂x'x'[k] (0 ≤ k ≤ r) of the former.
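Both Toeplitz systems above, (31) for the inverse function and (39) for the AR parameters, are solved through Durbin's recursion. As a minimal illustration (our own Python sketch with hypothetical variable names, not the report's implementation), the recursion may be coded and checked against a direct solve of the normal equations:

```python
import numpy as np

def durbin(r, p):
    """Levinson-Durbin recursion (cf. eqs. (32a)-(32e)): solves the Toeplitz
    system sum_j I_j r[|k-j|] = -r[k], k = 1..p, in O(p^2) operations."""
    I = np.zeros(p + 1)
    I[0] = 1.0                                   # I_0 = 1 by convention
    E = r[0]                                     # E_0 = r[0], cf. (32a)
    for i in range(1, p + 1):
        K = -(r[i] + I[1:i] @ r[i-1:0:-1]) / E   # reflection coefficient, cf. (32b)
        I[1:i] = I[1:i] + K * I[i-1:0:-1]        # order update, cf. (32d)
        I[i] = K                                 # cf. (32c)
        E *= 1.0 - K * K                         # prediction error update, cf. (32e)
    return I[1:], E

# Example: autocovariance r[k] = 0.8**k, for which the exact solution of the
# order-5 normal equations is I = (-0.8, 0, 0, 0, 0).
r = 0.8 ** np.arange(6)
I_hat, E = durbin(r, 5)
```

The p² + 2p operation count quoted in the text is reflected in the single inner dot product and single vector update performed per order i.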

Nevertheless, because of the recursive form of the dependence of r̂x̃'x̃'[k] on r̂x'x'[k], and the fact that the initial conditions are not known, some approximation is necessary. The following two methods are thus appropriate:

Approximate Method I: The main idea in this method is to evaluate r̂x̃'x̃'[k] for |k| ≤ r (r > n) by assuming:

    r̂x̃'x̃'[k] ≈ 0   (|k| > r)    (42)

which is reasonable for stationary processes and "large" values of r, and then discarding the unnecessary estimates while keeping those in the interval |k| ≤ n. For this purpose expression (40) may be rewritten as:

    r̂x'x'[k] = Σ_{j=1}^{m} dj·r̂x̃'x̃'[k-j] + d0·r̂x̃'x̃'[k] + Σ_{j=1}^{m} dj·r̂x̃'x̃'[k+j]    (43)

and, by using the symmetry of the autocovariance function and (42), the following set of equations is obtained:

    | A(0,0)  A(0,1)  ...  A(0,r) |   | r̂x̃'x̃'[0] |   | r̂x'x'[0] |
    | A(1,0)  A(1,1)  ...  A(1,r) | · | r̂x̃'x̃'[1] | = | r̂x'x'[1] |    (44)
    |   ...     ...   ...    ...  |   |    ...     |   |    ...    |
    | A(r,0)  A(r,1)  ...  A(r,r) |   | r̂x̃'x̃'[r] |   | r̂x'x'[r] |

where:

    A(i, j) = di+j + d|i-j|  (j ≠ 0);   A(i, j) = di+j  (j = 0)   (0 ≤ i, j ≤ r)    (45)

with di ≜ 0 for i < 0 or i > m. For given {θ̂j^(i-1)}, j = 1, ..., m, the elements A(i, j) may be evaluated from (45) and (41), and the autocovariances r̂x̃'x̃'[k] (0 ≤ k ≤ r) subsequently estimated from (44). The accuracy of this method apparently increases as r-n increases, while the computational load is equivalent to (m+1)(m+2)/2 + (r+1)³/3 + 3(r+1)²/2 + (r+1)/6 multiplication/division operations.
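The autocovariance relation (40)-(41) underlying both approximate methods is easy to verify numerically. The following sketch (our own, using an arbitrary example polynomial, with sample autocovariances computed over zero-padded records as in (26)) confirms that the autocovariance of x'[t] = θ(B)·x̃'[t] is the d-weighted combination of the autocovariances of x̃'[t]:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, -0.6, 0.3])       # hypothetical MA polynomial, theta_0 = 1
m = len(theta) - 1
L = 500

x_tilde = rng.standard_normal(L)         # stands in for x-tilde'[t]
x_prime = np.convolve(theta, x_tilde)    # x'[t] = theta(B) x-tilde'[t], zero-padded

def autocov(x, kmax, L):
    # sample autocovariance as in eq. (28), with zeros outside the record
    return np.array([x[:len(x) - k] @ x[k:] for k in range(kmax + 1)]) / L

d = np.correlate(theta, theta, mode="full")   # d_j = sum_l theta_l theta_{l+|j|}, cf. (41)
r_xt = autocov(x_tilde, 10 + m, L)
r_xp = autocov(x_prime, 10, L)

# relation (40): r_x'x'[k] = sum_{j=-m}^{m} d_j r_xtxt[k-j], using r[-k] = r[k]
rhs = np.array([sum(d[j + m] * r_xt[abs(k - j)] for j in range(-m, m + 1))
                for k in range(11)])
```

For zero-padded finite records the relation holds exactly, since the autocorrelation of a convolution is the convolution of the autocorrelations.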

Approximate Method II: The idea behind this method is simple, and amounts to substituting the infinite impulse response (IIR) filter 1/θ̂i-1(B) in (36) by an approximately equivalent finite impulse response (FIR) filter Q(B) of order q:

    Q(B) ≜ Σ_{k=0}^{q} qk·Bᵏ ≈ 1 / θ̂i-1(B)    (46)

with coefficients calculated as:

    q0 = 1/θ̂0^(i-1)  (= 1);    qk = -(1/θ̂0^(i-1))·Σ_{ℓ=1}^{m} θ̂ℓ^(i-1)·qk-ℓ   (k = 1, 2, ..., q)    (47)

where qk-ℓ ≜ 0 for k < ℓ. Expression (12) then becomes:

    x̃'i-1[t] = Q(B)·x'[t]    (48)

and the relationship between {r̂x'x'[k]} and {r̂x̃'x̃'[k]} may be shown to be:

    r̂x̃'x̃'[k] = Σ_{j=-q}^{q} ( Σ_{ℓ=0}^{q-|j|} qℓ·qℓ+|j| ) · r̂x'x'[k-j]    (49)

This expression is then used for the evaluation of the autocovariance samples r̂x̃'x̃'[k] (0 ≤ k ≤ n) from the available samples of {r̂x'x'[k]}. The accuracy of this method depends upon the equivalent FIR filter order q, which needs to be selected sufficiently high. On the other hand, its computational load is q·(m+1) + (q+1)(q+2)/2 + (n+1)(2q+n+2)/2 + 1 multiplication/division operations (assuming p ≥ q + n), which is generally lower than that of Method I.

After the AR parameters have been estimated, the final MA parameter estimates are obtained through expressions (14), the innovations variance estimate through (21), (22), and the power spectrum through the expression:

    Ŝxx(ω) = σ̂wi² · [θ̂i(z)·θ̂i(z⁻¹)] / [φ̂i(z)·φ̂i(z⁻¹)] |_(z = e^jωT)    (50)

where ω represents frequency in rads/sec and T the sampling period. The proposed fast spectral estimation algorithm may thus be summarized as follows:

Step 1: Estimate the autocovariance sequence r̂x'x'[k] for k = 0, 1, ..., r through expression (28). The parameter r has to be selected such that:

    L >> r >> p >> m    (51)

Step 2: Estimate the truncated inverse function Îj (j = 1, 2, ..., p) by using Durbin's recursion (32).
Step 3: Obtain an initial estimate θ̂0 of the MA parameter vector by using expressions (23).
Step 4: Evaluate the autocovariance samples r̂x̃'x̃'[k] for k = 0, 1, ..., n by using Approximate Method I (eqs. (41), (44), (45)) or Approximate Method II (eqs. (47), (49)).
Step 5: Estimate the AR parameter vector φ̂ by using Durbin's recursion to solve the normal equations (39).
Step 6: Estimate the MA parameter vector θ̂ by using expressions (14).
Step 7: Estimate σ̂w² through expressions (21), (22), and the power spectrum through (50).

The following brief remarks are finally in order: (a) The required order p of the inverse function model in (18) may be determined through appropriate order determination techniques, such as the Akaike Information Criterion (AIC) (Marple, 1987). It has been observed, however, that the spectral estimation results are not sensitive to p, provided that a value sufficiently higher than the MA order of the model is selected. (b) Steps 4 through 6 of the proposed algorithm may be iterated until convergence in the AR/MA parameter estimates is achieved. The improvement thus obtained is, however, typically marginal, and two to three iterations are normally sufficient. (c) The major part of the computational effort of the spectral estimation algorithm is that associated with direct operations on the time series data. The corresponding savings offered by the proposed fast algorithm (in comparison with the original Suboptimum ML version) are approximately equal to (p² + n² + n + m)·(p² + n² + n + m + p)⁻¹·100 (%). Notice that the savings increase even further if the algorithm is iterated, or the process orders are unknown

and a number of models must be fitted (in this case only Steps 3 through 7 need to be implemented for each model). (d) Finally, for relatively long data records and large values of r, the computational effort required for the estimation of the autocovariance sequence r̂x'x'[k] (k = 0, 1, ..., r) may be significantly reduced by using the fact that:

    r̂x'x'[k] = (1/L) · ( x'[t] * x'[-t] ) |_(t = k)    (52)

where the asterisk denotes discrete convolution. Indeed, {r̂x'x'[k]} may then be evaluated by computing the T-point FFT {X'[k]} of {x'[t]} and evaluating the inverse FFT of {(1/L)·|X'[k]|²}. In this case T must be selected as:

    T ≥ L + r    (53)

and the computational load is reduced from (r+1)·(L - r/2) to approximately T·log2 T multiplication operations.

4 ESTIMATOR CONSISTENCY

All estimators associated with the proposed algorithm are implicit functions of two variables: the number of data L, and the inverse function order p. In this Section their consistency, in the sense ψ̂(L, p) → ψ(p) with probability one (w.p.1) as L → ∞, and ψ(p) → ψ° as p → ∞ (p << L), is established under the following usual assumptions:

(A1) The time series {x[t]} is generated by a second-order stationary, ergodic process.
(A2) The polynomials φ°*(z) ≜ zⁿ·φ°(z⁻¹), θ°*(z) ≜ zᵐ·θ°(z⁻¹) of the process have zeros smaller than γ (0 < γ < 1) in magnitude, and are relatively prime.

These consistency results are very important, since they imply that the asymptotic (as L → ∞) bias error of the proposed spectral estimation algorithm may be made arbitrarily small by appropriately selecting the inverse function order p. In order to examine the consistency of the inverse function estimator Î = Î(L, p), the vector Ī(p) ∈ Rᵖ is defined through the expressions:

    | rxx[0]     rxx[1]     ...  rxx[p-1] |   | I1(p) |     | rxx[1] |
    | rxx[1]     rxx[0]     ...  rxx[p-2] | · | I2(p) |  = -| rxx[2] |   ⟺   R·Ī(p) = -r̄    (54)
    |   ...        ...      ...     ...    |   |  ...  |     |  ...   |
    | rxx[p-1]   rxx[p-2]   ...  rxx[0]   |   | Ip(p) |     | rxx[p] |

where {rxx[k]} represents the true autocovariance of {x[t]}, and the infinite-dimensional vectors I(p) and I° are defined as:

    I(p) ≜ [Īᵀ(p)  0ᵀ]ᵀ ∈ ℓ1    (55)
    I° ≜ [I1°  I2°  I3°  ....] ∈ ℓ1    (56)

where ℓ1, or more precisely ℓ1(0, ∞), represents the Banach space of scalar, unilateral sequences x = (x1, x2, ....) with norm ||x||ℓ1 ≜ Σ_{i=1}^{∞} |xi| < ∞. The following result then holds:

Proposition 1: The estimator Î = Î(L, p) of eq. (20) is consistent, that is:
    i)  Î = Î(L, p) → Ī(p)  w.p.1 as L → ∞    (57a)
    ii) I(p) → I°  as p → ∞    (57b)
Proof: See Appendix. □

Regarding the θ̂0 estimator, it is first observed that expressions (23) may be rewritten in matrix form as:

    H(I)·θ0(I) = h(I)    (58)

and, because of (17):

    H(I°)·θ0° = h(I°)    (59)

By denoting θ̂0 = θ̂0(L, p) = θ0(Î), θ0(p) ≜ θ0(I(p)), the following result may be proven:

Proposition 2: The estimator θ̂0 of eqs. (23) is consistent, that is:
    i)  θ̂0 = θ̂0(L, p) → θ0(p)  w.p.1 as L → ∞    (60a)
    ii) θ0(p) → θ0°  as p → ∞    (60b)
Proof: See Appendix. □

Regarding the estimator φ̂i, notice that expressions (39) may be rewritten as:

    R(θ̂0)·φ(θ̂0) = r(θ̂0)    (61)

and φ̂ = φ̂(L, p) = φ(θ̂0), where the subscript i has been dropped for the sake of simplicity. φ(p) is then similarly defined through the expression:

    R(θ0(p))·φ(θ0(p)) = r(θ0(p))    (62)

which is formed from the scalar equations (compare with (39)):

    Σ_{j=1}^{n} φj(p)·rx̃x̃[|k-j|] + rx̃x̃[k] = 0   (k = 1, 2, ..., n)    (63)

with {rx̃x̃[k]} representing the true autocovariance of {x̃i-1[t]}. Hence φ(p) = φ(θ0(p)), and, since the model set M1 (eq. (11)) includes the actual process for θ̂i-1 = θ°, the expression:

    R(θ°)·φ° = r(θ°)    (64)

is obtained, and the following result holds:

Proposition 3: The estimator φ̂ of eqs. (39) is consistent, that is:
    i)  φ̂ = φ̂(L, p) → φ(p)  w.p.1 as L → ∞    (65a)
    ii) φ(p) → φ°  as p → ∞    (65b)
Proof: See Appendix. □

Finally, the estimator θ̂ represented by eqs. (14) may be compactly written as:

    X(Î, φ̂)·θ(Î, φ̂) = x(Î, φ̂)    (66)

so that θ̂ = θ̂(L, p) = θ(Î, φ̂) and θ(p) = θ(I(p), φ(p)). In addition, (17) implies:

    X(I°, φ°)·θ° = x(I°, φ°)    (67)

and the following result holds:

Proposition 4: The estimator θ̂ of eqs. (14) is consistent, that is:
    i)  θ̂ = θ̂(L, p) → θ(p)  w.p.1 as L → ∞    (68a)
    ii) θ(p) → θ°  as p → ∞    (68b)
Proof: See Appendix. □
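As a simple numerical illustration of Proposition 1 (our own experiment with a hypothetical minimum-phase ARMA(1,1) process, not part of the report), the inverse-function estimator built from the sample autocovariances of eq. (28) and the normal equations (31) may be seen to approach the true power-series coefficients of φ°(B)/θ°(B) for a long record:

```python
import numpy as np

rng = np.random.default_rng(1)
L, p = 50_000, 12

# Simulate (1 - 0.5 B) x[t] = (1 - 0.3 B) w[t]  (an arbitrary test process)
w = rng.standard_normal(L + 1)
x = np.zeros(L + 1)
for t in range(1, L + 1):
    x[t] = 0.5 * x[t - 1] + w[t] - 0.3 * w[t - 1]
x = x[1:]

# Sample autocovariances, cf. eq. (28), and normal equations, cf. eq. (31)
r = np.array([x[:L - k] @ x[k:] for k in range(p + 1)]) / L
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
I_hat = np.linalg.solve(R, -r[1:p + 1])

# True inverse function: I(B) = (1 - 0.5B)/(1 - 0.3B) = 1 - 0.2B - 0.06B^2 - ...
```

With L = 50000 the sampling error of the leading coefficients is of the order of a few thousandths, so Î1 ≈ -0.2 and Î2 ≈ -0.06, in line with (57a)-(57b).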

5 THE ESTIMATION OF SPECTRA CHARACTERIZED BY SHARP SPECTRAL ZEROS

Despite the well-known fact that ARMA spectral estimation is most important for the accurate modeling of spectra characterized by sharp spectral zeros (MA roots close to the unit circle), most existing techniques are inappropriate for this case because of algorithmic instability problems (Kay, 1988). The same may be claimed for the algorithm proposed in Section 3, since there is no guarantee that the initial MA parameter estimate θ̂0 ∈ Θ(θ), and hence no guarantee for the estimator stability (see eq. (36)). Although this may not be a serious problem for the estimation of spectra with MA roots well within the unit circle, it will apparently be a potential problem in cases where spectra of processes having MA roots close to the unit circle are to be modeled. In order to circumvent this difficulty, two alternative versions of the proposed spectral analysis algorithm, that modify Step 3 so that θ̂0 ∈ Θ(θ), are proposed in this Section.

Alternative Version A

This version is based on the expressions:

    Σ_{k=0}^{m} (θk°/φn°)·I°j+n-k = δj,0   (j = 0, 1, 2, ....)    (69)

with I°j ≜ 0 (j < 0), I°0 = 1, which may be derived from (17). From these, the following equation may be formed for j = 0, 1, 2, ..., p+m-n:

    | In      In-1    ...  In-m   |   | θ0e |   | 1 |
    | In+1    In      ...  In-m+1 |   | θ1e |   | 0 |
    |  ...     ...    ...   ...   | · | ... | = |...|    (70)
    | Ip+m    Ip+m-1  ...  Ip     |   | θme |   | 0 |
      (p+m-n+1) × (m+1)             (m+1)     (p+m-n+1)

By substituting Ij (n-m ≤ j ≤ p+m) by:

    Ĩj = Îj  (n ≤ j ≤ p);  0 elsewhere    (71)

in (70), the expression:

    H̃·θe = δ + e    (72)

where δ ≜ [1 0 .... 0]ᵀ and e represents a (p+m-n+1)-dimensional error vector, is obtained. The parameter vector θe may thus be estimated through the LS estimator:

    θ̂e = arg min eᵀe = (H̃ᵀ·H̃)⁻¹·H̃ᵀ·δ    (73)

where H̃ᵀ·H̃ is the (m+1) × (m+1) Toeplitz matrix with elements r̂II[|i-j|], with:

    r̂II[k] = Σ_{i=n+k}^{p} Îi·Îi-k   (k = 0, 1, 2, ..., m)    (74)

and the initial MA parameter estimator obtained as:

    (θ̂0)A ≜ [θ̂1e/θ̂0e  θ̂2e/θ̂0e  ....  θ̂me/θ̂0e]ᵀ    (75)

where θ̂ie represents the i-th element of θ̂e. This estimator then satisfies (θ̂0)A ∈ Θ(θ). Indeed, expression (70) under the assumption (71) implies that:

    {θ̂0e, θ̂1e, ...., θ̂me} * {În, În+1, ...., Îp} ≈ {1, 0, ...., 0}    (76)

where the asterisk denotes discrete convolution. Hence θ̂e of eq. (73) is the zero-lag LS inverse of {Îj}, j = n, ..., p, and hence minimum phase (Robinson, 1967). The fact that (θ̂0)A ∈ Θ(θ) then follows.
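The minimum-phase property invoked above (Robinson, 1967) can be illustrated directly: the zero-lag LS inverse of an arbitrary, even non-minimum-phase, sequence has all its zeros inside the unit circle. A small sketch (our own; the example sequence and inverse order are hypothetical):

```python
import numpy as np

def zero_lag_ls_inverse(g, order):
    """LS inverse f (length order+1) minimizing ||g * f - delta_0||^2."""
    n = len(g) + order
    G = np.zeros((n, order + 1))
    for j in range(order + 1):
        G[j:j + len(g), j] = g          # Toeplitz convolution matrix of g
    delta = np.zeros(n)
    delta[0] = 1.0                      # zero-lag target: the unit spike
    f, *_ = np.linalg.lstsq(G, delta, rcond=None)
    return f

g = np.array([1.0, -2.5, 1.0])          # z-plane zeros at 2.0 and 0.5: NOT minimum phase
f = zero_lag_ls_inverse(g, 8)
zeros = np.roots(f)                      # zeros of z^8 f(1/z); minimum phase <=> all |z| < 1
```

The normal-equation matrix GᵀG is the positive definite Toeplitz autocorrelation matrix of g, so f is (up to scale) a prediction error filter, which is strictly minimum phase.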

As a final remark, notice that, because of the assumptions implied by (71), the modified estimator (θ̂0)A may lead to a slight loss of accuracy in the initial MA parameter estimates. Nevertheless, no serious effects on the final spectral estimates were observed in practice.

Alternative Version B

In this version the estimator of eqs. (23) is first used to yield an initial estimate θ̂0, and the s-th order zero-lag LS inverse {α̂i}, i = 0, ..., s, of {θ̂i0}, i = 0, ..., m, is then obtained through the expression:

    α̂ = (Θᵀ·Θ)⁻¹·Θᵀ·δ    (77)

where δ ≜ [1 0 .... 0]ᵀ and Θ is the (m+s+1) × (s+1) Toeplitz (convolution) matrix:

    |  1               |
    | θ̂10   1          |
    | ...   θ̂10  ...   |
    | θ̂m0   ...    1   |    (78)
    |       θ̂m0  ...   |
    |            θ̂m0  |

After α̂ has been obtained, the m-th order zero-lag LS inverse of {α̂i}, i = 0, ..., s, is estimated as:

    θ̂B = (Aᵀ·A)⁻¹·Aᵀ·δ    (79)

where A is an (m+s+1) × (m+1) Toeplitz matrix of structure similar to Θ, formed from {α̂i}. The initial MA parameter estimator then is:

    (θ̂0)B ≜ [θ̂1B/θ̂0B  ....  θ̂mB/θ̂0B]ᵀ    (80)

and is necessarily minimum phase. The advantage of this version is that no assumptions, such as (71), are imposed, but the computational load is increased.
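Version B's double zero-lag LS inversion may likewise be sketched end-to-end (our own illustration; the polynomial, which has one z-plane root well outside the unit circle, and the order s = 30 are hypothetical):

```python
import numpy as np

def ls_inverse(g, order):
    # zero-lag least-squares inverse: f minimizing ||g * f - delta_0||^2
    n = len(g) + order
    G = np.zeros((n, order + 1))
    for j in range(order + 1):
        G[j:j + len(g), j] = g          # Toeplitz convolution matrix, cf. (78)
    d = np.zeros(n)
    d[0] = 1.0
    f, *_ = np.linalg.lstsq(G, d, rcond=None)
    return f

theta0 = np.array([1.0, -2.2, 0.4])     # z-plane zeros 2.0 and 0.2: not minimum phase
alpha = ls_inverse(theta0, 30)          # cf. eq. (77): s-th order LS inverse, s = 30
theta_b = ls_inverse(alpha, 2)          # cf. eq. (79): m-th order LS inverse of alpha
theta_b = theta_b / theta_b[0]          # normalize to a monic MA polynomial, cf. (80)
zeros = np.roots(theta_b)               # minimum phase <=> all |zeros| < 1
```

The outside zero at 2.0 is approximately reflected to 1/2.0 = 0.5, so the result is minimum phase while the magnitude spectrum of θ̂0 is preserved up to a constant.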

6 SIMULATION RESULTS AND DISCUSSION

In this Section the performance of the proposed ARMA spectral estimator is evaluated through numerical simulations and comparisons with the classical Blackman-Tukey procedure combined with a Hamming spectral window (Blackman and Tukey, 1959), and the popular AR spectral analysis method (Makhoul, 1975). The data were generated by driving a signal with approximately flat spectrum through discrete linear filters, and the convergence of the algorithm was monitored through the index:

    δi ≜ || ψ̂1i - ψ̂1(i-1) || / || ψ̂1(i-1) ||    (81)

where ||·|| denotes the octahedric (ℓ1) vector norm. The best model was finally selected as the one offering the lowest estimated prediction error variance σ̂wi². Several spectral estimation results have been thus obtained, and five typical cases are presented in the sequel.

Test Case I

In this experiment the estimation of the power spectrum of the narrow-band AR(4) process:

    x[t] + .250·x[t-1] + 1.994·x[t-2] + .249·x[t-3] + .994·x[t-4] = w[t]

characterized by closely-spaced spectral peaks, from data corrupted by white measurement noise at a 20% N/S ratio, is considered. In such cases overdetermination is often necessary (Kay, 1980), and the power spectrum estimated by fitting an ARMA(8,8) model on a record of 1000 data points is depicted in Figure 1a. The estimated spectrum is very smooth, and nicely indicates the two spectral peaks and the background white noise. A close approximation of the uncorrupted AR(4) process spectrum may be obtained by discarding the MA part of the estimated model, which is due to the noise, and plotting the power spectrum corresponding to the AR part (Figure 1b). Figures 1c and 1d finally depict the AR(47) and Blackman-Tukey spectra, respectively. In both cases the spectral peaks are clearly indicated, but the estimated spectra have a "noisy" appearance, despite the fact that the actual spectrum is smooth.

Test Case II

In this case the estimation of the power spectrum of the ARMA(4,2) process:

    x[t] - 1.007·x[t-1] + 1.132·x[t-2] - .277·x[t-3] + .323·x[t-4] = w[t] - 1.786·w[t-1] + .974·w[t-2]

based on a 1000-point data record, is considered. The estimated ARMA(4,2) spectrum is compared to the theoretical one in Figure 2a, and an almost perfect fit is observed. On the other hand, the results of the AR spectral analysis (coupled with the Akaike Information Criterion for order selection) and the Blackman-Tukey procedure are depicted in Figures 2b and 2c, respectively, where poor match of the spectral zero, accompanied by an overall "noisy" appearance, is observed. Regarding the Blackman-Tukey spectrum, it is further remarked that negative spectral estimates were obtained at several frequencies and were arbitrarily set equal to a small positive number.

Test Case III

The case of the ARMA(4,2) process:

    x[t] - 2.155·x[t-1] + 2.752·x[t-2] - 1.952·x[t-3] + .839·x[t-4] = w[t] - 1.143·w[t-1] + .945·w[t-2]

characterized by two spectral peaks and one zero is now considered. An overdetermined ARMA(8,6) model was estimated by using the proposed approach and a record of 1000 data points, and very good spectral fit is observed (Figure 3a). AR spectral analysis was also performed for different model orders, and the AR(29) spectrum (the order indicated as adequate through the AIC (Figure 3b)) is depicted in Figure 3c. This spectrum is not very smooth, and, despite the fact that the spectral peaks are matched, this is certainly not the case with the zero. The use of overdetermination was also examined, and the estimated AR(64) spectrum is depicted in Figure 3d. It is observed that the spectral zero match is improved, but the spectrum becomes "noisier", and spurious spectral peaks are introduced. This behaviour is typical in AR spectral estimation and indicates that overdetermination, which is often useful in the ARMA case, should be used with caution in AR spectral analysis.
Finally, in evaluating the

Blackman-Tukey spectrum, negative spectral values were, once again, estimated at certain frequencies, and treated similarly to the previous case. The spectrum thus obtained is compared to the theoretical spectrum in Figure 3e.

Test Case IV

In all cases examined so far, the spectral estimation was based on 1000 data points. In many applications, however, a significantly shorter record may be available. In order to examine the performance of the proposed method in such cases, the spectrum of the previously discussed process is now estimated based on 100 data points. The result obtained by the proposed method is depicted in Figure 4a and is still very good, considerably better than that obtained through AR spectral analysis (Figure 4b), which is characterized by a poor fit and spurious spectral peaks, or the Blackman-Tukey procedure (Figure 4c).

Test Case V

In this final example the estimation of the spectrum of the ARMA(2,2) process:

x[t] - .950 x[t-1] + .903 x[t-2] = w[t] - 1.715 w[t-1] + .980 w[t-2]

based on 100 data points, is considered. Similarly to the previous case, a superior spectral fit is obtained through the proposed algorithm (Figure 5a), whereas considerably poorer results are obtained through the AR (Figure 5b) and Blackman-Tukey (Figure 5c) procedures.

7 CONCLUSION

In this paper a novel, fast ARMA spectral estimation method was presented. The proposed method is based on a correlation version of the recently introduced Suboptimum ML estimation procedure, and is an ideal candidate for microcomputer implementation, since it offers high quality estimates at remarkably low computational and memory storage requirements. The savings associated with these requirements are even higher in the practically important case where the model orders need to be estimated as well.

The consistency of the proposed estimator was also established, and two alternative versions, which, in contrast to most available techniques, guarantee algorithmic stability and are thus appropriate for the estimation of spectra characterized by sharp spectral zeros, were developed. The effectiveness and accuracy of the method were finally evaluated through numerical simulations and comparisons with the well-known Blackman-Tukey and AR spectral estimation techniques. The results obtained indicated the superior performance of the proposed approach with both long and short data records.
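For reference, the Blackman-Tukey estimator used throughout the comparisons, including the clamping of negative spectral estimates to a small positive number noted in Test Cases II and III, can be sketched as follows. This is an illustrative reconstruction only; the lag-window length, frequency grid, and function names are assumptions, not taken from the report.

```python
import numpy as np

def blackman_tukey(x, M, nfft=512, eps=1e-6):
    """Blackman-Tukey spectral estimate: window the sample autocovariance
    with a Hamming lag window of (one-sided) length M, then take the
    cosine transform. Negative estimates, which are possible because the
    Hamming lag window is not nonnegative-definite, are clamped to a
    small positive number, as described in the text."""
    N = len(x)
    x = x - x.mean()
    # biased sample autocovariance, lags 0..M
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(M + 1)])
    w = np.hamming(2 * M + 1)[M:]          # one-sided Hamming lag window
    rw = r * w
    # spectrum on a grid of frequencies in [0, 0.5) cycles/sample
    f = np.arange(nfft) / (2.0 * nfft)
    S = rw[0] + 2.0 * np.cos(2 * np.pi * np.outer(f, np.arange(1, M + 1))) @ rw[1:]
    return f, np.maximum(S, eps)           # clamp negative values
```

Driving this routine with a sinusoidal or simulated ARMA record yields a nonnegative spectrum estimate whose resolution is governed by the window length M, which is the trade-off underlying the "noisy" Blackman-Tukey spectra in the figures.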

REFERENCES

Åström, K. J., 1970, "Introduction to Stochastic Control Theory," Academic Press.

Åström, K. J., Söderström, T., 1974, "Uniqueness of the Maximum Likelihood Estimates of the Parameters of an ARMA Model," IEEE Transactions on Automatic Control, Vol. AC-19, pp. 769-773.

Bartlett, M. S., 1946, "On the Theoretical Specification and Sampling Properties of Autocorrelated Time Series," J. Roy. Statist. Soc., Vol. B8, pp. 27-41.

Baxter, G., 1962, "An Asymptotic Result for the Finite Predictor," Mathematica Scandinavica, Vol. 10, pp. 137-144.

Berk, K., 1974, "Consistent Autoregressive Spectral Estimates," The Annals of Statistics, Vol. 2, pp. 489-502.

Blackman, R. B., Tukey, J. W., 1959, "The Measurement of Power Spectra From the Point of View of Communications Engineering," Dover, New York.

Box, G. E. P., Jenkins, G. M., 1970, "Time Series Analysis: Forecasting and Control," Holden-Day.

Braun, S. (ed.), 1986, "Mechanical Signature Analysis: Theory and Application," Academic Press.

Cadzow, J. A., 1980, "High Performance Spectral Estimation - A New ARMA Method," IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-28, pp. 524-529.

Cadzow, J. A., 1982, "Spectral Estimation: An Overdetermined Rational Model Equation Approach," Proceedings of the IEEE, Vol. 70, pp. 907-939.

Caines, P. E., 1976, "Prediction Error Identification Methods for Stationary Stochastic Processes," IEEE Trans. Autom. Control, Vol. 21, p. 500.

Fassois, S. D., Eman, K. F., Wu, S. M., 1988, "A Suboptimum Maximum Likelihood Approach to Parametric Signal Analysis," ASME Journal of Dynamic Systems, Measurement, and Control, to appear.

Friedlander, B., 1983, "Instrumental Variable Methods for ARMA Spectral Estimation," IEEE Trans. Acoustics, Speech, and Signal Processing, Vol. ASSP-31, pp. 404-415.

Friedlander, B., Sharman, K. C., 1985, "Performance Evaluation of the Modified Yule-Walker Estimator," IEEE Trans. Acoustics, Speech, and Signal Processing, Vol. ASSP-33, pp. 719-725.
Graupe, D., Krause, D. J., Moore, J. B., 1975, "Identification of Autoregressive Moving-Average Parameters of Time Series," IEEE Trans. Automatic Control, Vol. AC-20, pp. 104-107.

Grenander, U., Szegö, G., 1958, "Toeplitz Forms and Their Applications," University of California Press.

Gutowski, P. R., Robinson, E. A., Treitel, S., 1978, "Spectral Estimation: Fact or Fiction," IEEE Trans. Geoscience Electronics, Vol. GE-16, pp. 80-84.

Hannan, E. J., 1960, "Time Series Analysis," Methuen, London.

Kaveh, M., 1979, "High Resolution Spectral Estimation for Noisy Signals," IEEE Trans. Acoustics, Speech, and Signal Processing, Vol. ASSP-27, pp. 286-287.

Kay, S. M., 1980, "A New ARMA Spectral Estimator," IEEE Trans. Acoustics, Speech, and Signal Processing, Vol. ASSP-28, pp. 585-588.

Kay, S. M., 1988, "Modern Spectral Estimation: Theory and Application," Prentice-Hall.

Kay, S. M., Marple, S. L., Jr., 1981, "Spectrum Analysis: A Modern Perspective," Proceedings of the IEEE, Vol. 69, pp. 1380-1419.

Koopmans, L. H., 1974, "The Spectral Analysis of Time Series," Academic Press, New York.

Makhoul, J., 1975, "Linear Prediction: A Tutorial Review," Proceedings of the IEEE, Vol. 63, pp. 561-580.

Marple, S. L., Jr., 1987, "Digital Spectral Analysis with Applications," Prentice-Hall.

Mayne, D. Q., Firoozan, F., 1982, "Linear Identification of ARMA Processes," Automatica, Vol. 18, pp. 461-466.

Moses, R. L., Cadzow, J. A., Beex, A. A., 1985, "A Recursive Procedure for ARMA Modeling," IEEE Trans. Acoustics, Speech, and Signal Processing, Vol. ASSP-33, pp. 1188-1196.

Pandit, S. M., Wu, S. M., 1983, "Time Series and System Analysis with Applications," John Wiley and Sons.

Robinson, E. A., 1967, "Statistical Communication and Detection," Hafner, New York.

Söderström, T., 1975, "Ergodicity Results for Sample Covariances," Prob. Control and Inf. Theory, Vol. 4, p. 131.

Wu, S. M., Tobin, T. H., Chow, M. C., 1980, "Signature Analysis for Mechanical Systems via Dynamic Data System (DDS) Monitoring Technique," ASME Journal of Mechanical Design, Vol. 102, pp. 217-221.

APPENDIX

Proof of Proposition 1:

i) Assumption (A2) implies that there exists a real, positive number m' < ∞ such that: |θj| ≤ m'·γ^j (∀ j). The ergodicity assumption (A1) then implies that (Söderström, 1975):

R̂I → RI,  r̂I → rI  w.p.1 as L → ∞

By considering (31), (54), and the positive definiteness of the covariance matrix, (57a) follows.

ii) From Theorem 2.2 of Baxter (1962) (also see Berk, 1974) with θ0 = θ0(p) = 1, λ = 0, the following expression is obtained:

Σ_{j=0}^{p} | θj(p)·σp² - θj·σ² | ≤ m · Σ_{j=p+1}^{∞} |θj|   (A.1)

where σp² ≜ E{ep²[t/t-1]} is the variance of the prediction error corresponding to the p-th order model in (18), and m a positive constant. (A.1) may be rewritten as:

Σ_{j=0}^{p} | θj(p) - θj | ≤ J1 + J2   (A.2)

where:

J1 ≜ (1/σ²) · Σ_{j=0}^{p} | θj(p)·σp² - θj·σ² |   (A.3)

J2 ≜ (|σp² - σ²| / σ²) · Σ_{j=0}^{p} | θj(p) |   (A.4)

By using assumption (A2), which implies that Σj |θj| ≤ m1 (m1 < ∞), (A.1) and (A.3) give:

lim_{p→∞} J1 ≤ (m/σ²) · lim_{p→∞} Σ_{j=p+1}^{∞} |θj| = 0   (A.5)

Finally, the fact that (Grenander and Szegö, 1958):

lim_{p→∞} σp² = σ²   (A.6)

combined with (A.5), implies:

lim_{p→∞} Σ_{j=0}^{p} | θj(p) - θj | = 0   (A.7)

and (57b) follows.

Proof of Proposition 2:

i) Property (57a) clearly implies:

H(Î) → H(I(p))  w.p.1 as L → ∞   (A.8)

h(Î) → h(I(p))  w.p.1 as L → ∞   (A.9)

Hence, (58) with det H ≠ 0 implies (60a).

ii) Since the maps:

I → H⁻¹(I)   (A.10)

I → h(I)   (A.11)

are continuous, (57b), combined with (58), (59), yields:

θ°(p) = H⁻¹(I(p)) · h(I(p)) → H⁻¹(I) · h(I) = θ°  (p → ∞)   (A.12)

Proof of Proposition 3:

i) It is known (Caines, 1976) that:

R̂(θ°) → R(θ°) > 0,  r̂(θ°) → r(θ°)  w.p.1 as L → ∞,  ∀ θ° ∈ Θ   (A.13)

The combination of (61), (62), and (A.13) then yields:

θ̂*(θ°) = R̂⁻¹(θ°) · r̂(θ°) → R⁻¹(θ°) · r(θ°) = θ*(θ°)  w.p.1 as L → ∞,  ∀ θ° ∈ Θ   (A.14)

Also, since the maps:

θ° → R⁻¹(θ°)   (A.15)

θ° → r(θ°)   (A.16)

are continuous, the fact R(θ°) > 0 implies that the map:

θ° → θ*(θ°)   (A.17)

is also continuous. Property (60a) then yields:

θ*(θ̂°(L,p)) → θ*(θ°(p))  w.p.1 as L → ∞   (A.18)

and (A.14) implies:

θ̂*(θ̂°(p)) → θ*(θ°(p)) ≜ θ(p)  w.p.1 as L → ∞   (A.19)

Hence:

θ̂ = θ̂(L,p) = θ̂*(θ̂°) → θ(p)  w.p.1 as L → ∞

ii) Because of the continuity of the maps (A.15), (A.16), the fact R(θ°) > 0, and (62), (64), the convergence property (60b) implies that:

θ(p) = θ*(θ°(p)) = R⁻¹(θ°(p)) · r(θ°(p)) → R⁻¹(θ°) · r(θ°) = θ°  as p → ∞   (A.20)

Proof of Proposition 4:

i) The convergence properties (57a) and (65a) imply that:

X(Î, θ̂) → X(I(p), θ(p))  w.p.1 as L → ∞   (A.21)

x(Î, θ̂) → x(I(p), θ(p))  w.p.1 as L → ∞   (A.22)

Since:

det X(I, θ) ≠ 0  ∀ (I, θ)   (A.23)

(66) yields:

φ̂(Î, θ̂) → φ(I(p), θ(p)) ≜ φ(p)  w.p.1 as L → ∞   (A.24)

ii) The maps:

(I, θ) → X⁻¹(I, θ)   (A.25)

(I, θ) → x(I, θ)   (A.26)

are continuous. Hence the convergence properties (57b) and (65b), combined with (66), (67), (A.23), yield:

φ(p) = X⁻¹(I(p), θ(p)) · x(I(p), θ(p)) → X⁻¹(I°, θ°) · x(I°, θ°) = φ°  as p → ∞   (A.27)
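The convergence σp² → σ² of the prediction-error variance of increasing-order AR approximants, used in (A.6), can be exhibited numerically. The following sketch is an illustration only; the simulated process, the use of the Levinson-Durbin recursion, and all variable names are assumptions, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# a simulated mixed AR/MA process: x[t] = 0.8 x[t-1] + w[t] + 0.5 w[t-1],
# with unit-variance white innovations w[t]
N = 2000
w = rng.standard_normal(N)
x = np.zeros(N)
for t in range(1, N):
    x[t] = 0.8 * x[t - 1] + w[t] + 0.5 * w[t - 1]

def pe_variances(x, pmax):
    """Prediction-error variance of the best AR(p) predictor for
    p = 0..pmax, computed via the Levinson-Durbin recursion on the
    biased sample autocorrelation estimates."""
    N = len(x)
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(pmax + 1)])
    a = np.zeros(pmax + 1)
    a[0] = 1.0
    s2 = [r[0]]                               # order-0 variance = r[0]
    for k in range(1, pmax + 1):
        kappa = -np.dot(a[:k], r[k:0:-1]) / s2[-1]
        a[:k + 1] = a[:k + 1] + kappa * a[k::-1]
        s2.append(s2[-1] * (1.0 - kappa ** 2))
    return np.array(s2)

s2 = pe_variances(x, 20)
```

The sequence s2 is nonincreasing by construction and, for a record of this length, settles near the innovations variance of the simulated process as the model order grows, in agreement with the limiting behaviour invoked in the proof.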

LIST OF FIGURES

Figure 1: Test Case I: (a) The ARMA(8,8) spectrum (b) The AR part of the ARMA(8,8) spectrum (c) The AR(47) spectrum (d) The Blackman-Tukey spectrum.

Figure 2: Test Case II: (a) The ARMA(4,2) spectrum (b) The AR(16) spectrum (c) The Blackman-Tukey spectrum.

Figure 3: Test Case III: (a) The ARMA(8,6) spectrum (b) The AIC values (c) The AR(29) spectrum (d) The AR(64) spectrum (e) The Blackman-Tukey spectrum.

Figure 4: Test Case IV: (a) The ARMA(4,2) spectrum (b) The AR(16) spectrum (c) The Blackman-Tukey spectrum.

Figure 5: Test Case V: (a) The ARMA(2,2) spectrum (b) The AR(15) spectrum (c) The Blackman-Tukey spectrum.

[Plot panels not reproduced.] Figure 1: Test Case I: (a) The ARMA(8,8) spectrum (b) The AR part of the ARMA(8,8) spectrum (c) The AR(47) spectrum (d) The Blackman-Tukey spectrum. All panels plot the estimated spectrum against the actual AR(4) system spectrum, over FREQUENCY (Hz) from 0.0 to 0.5.

[Plot panels not reproduced.] Figure 2: Test Case II: (a) The ARMA(4,2) spectrum (b) The AR(16) spectrum (c) The Blackman-Tukey spectrum. All panels plot the estimated spectrum against the actual ARMA(4,2) system spectrum, over FREQUENCY (Hz) from 0.0 to 0.5.

[Plot panels not reproduced.] Figure 3: Test Case III: (a) The ARMA(8,6) spectrum (b) The AIC values versus AR order (c) The AR(29) spectrum (d) The AR(64) spectrum (e) The Blackman-Tukey spectrum. The spectral panels plot the estimates against the actual ARMA(4,2) system spectrum, over FREQUENCY (Hz) from 0.0 to 0.5.

[Plot panels not reproduced.] Figure 4: Test Case IV: (a) The ARMA(4,2) spectrum (b) The AR(16) spectrum (c) The Blackman-Tukey spectrum. All panels plot the estimated spectrum against the actual ARMA(4,2) system spectrum, over FREQUENCY (Hz) from 0.0 to 0.5.

[Plot panels not reproduced.] Figure 5: Test Case V: (a) The ARMA(2,2) spectrum (b) The AR(15) spectrum (c) The Blackman-Tukey spectrum. All panels plot the estimated spectrum against the actual system spectrum, over FREQUENCY (Hz) from 0.0 to 0.5.
