RL 770

A Method of Resolving Data into Two Maximally Smooth Components

David A. Ksienski
The Radiation Laboratory
Department of Electrical and Computer Engineering
The University of Michigan
Ann Arbor, Michigan 48109

ABSTRACT

An algorithm is described which resolves data into maximally smooth components given some knowledge of the relative behavior of these components. The algorithm is used to analyze the specific problem of resolving frequency domain scattering data from a complex target into two components associated with different areas of the target. The resulting matrix problem is not only well conditioned but also amenable to extremely rapid solution.

1. INTRODUCTION

Frequency domain data are often strongly periodic. This is true, for example, in the case of the backscattered field of a target, where the periodicity is attributable to the interference between the signals produced by localized scattering centers on the target, and the determination of the components due to each is then important for diagnostic and other purposes. The simplest situation is that of a single dominant period indicative of two such centers whose physical separation can be deduced from the period or from an examination of the geometry of the target. The task of extracting these components becomes well posed when they are assumed to be in some sense maximally smooth, and this is the problem treated.

2. FORMULATION

Assume p(\omega) to be strongly periodic, frequency domain data. The periodicity is assumed to arise from interference resulting from a difference, d, in path lengths associated with the two primary scatterers. To estimate what portion of the data p(\omega) is associated with each of the two scatterers, p_1(\omega) and p_2(\omega), the periodic properties of p(\omega) must be exploited. Thus, writing the phase relationship of p(\omega), p_1(\omega), and p_2(\omega) explicitly,

    p(\omega) = p_1(\omega) + e^{i 2 \omega d / c} \, p_2(\omega)                                (1)

where p(\omega) and p_1(\omega) are phase referenced to the forward scatterer and p_2(\omega) is phase referenced to the rear scatterer. Requiring p_1(\omega) and p_2(\omega) to be smooth in addition to satisfying (1) permits a unique decomposition which is easy to implement. For the purpose of defining the decomposition, smoothness is measured using a weighted Euclidean norm of the second derivatives of p_1(\omega) and p_2(\omega), and the maximally smooth solution will have the smallest norm. The weighting of the norm may be employed to specify the smoothness of different portions of p_1(\omega) and p_2(\omega).
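The interference mechanism expressed by (1) is simple to reproduce numerically. The following short sketch (NumPy, purely illustrative; the two component functions are invented for the example and are not measured data) combines two smooth components through the phase factor exp(i2ωd/c) and confirms that the magnitude of the sum oscillates in ω with the period πc/d set by the path-length difference.

```python
import numpy as np

# Minimal illustration of eq. (1): two smooth (invented) components combined
# through the phase factor exp(i 2 w d / c) yield strongly periodic |p(w)|.
c, d = 1.0, 1.0
w = np.linspace(2.0, 12.0, 400)

p1 = 1.0 / (1.0 + 0.2 * w)                    # smooth "forward scatterer" component
p2 = 0.6 * np.exp(-0.05 * w)                  # smooth "rear scatterer" component
p = p1 + np.exp(1j * 2.0 * w * d / c) * p2    # eq. (1)

mag = np.abs(p)
d1 = np.diff(mag)
peaks = np.where((d1[:-1] > 0) & (d1[1:] <= 0))[0] + 1   # local maxima of |p|
print("period implied by d:  ", np.pi * c / d)
print("observed peak spacing:", np.diff(w[peaks]).mean())
```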

To effect a rapid solution of (1) subject to the constraint of maximally smooth components, it is necessary to linearize the problem. Linear problems, in turn, are most easily discussed using matrix terminology. To this end, let p represent an N element vector containing p(\omega) sampled over N frequencies, where the ith element of p is p(\omega_i). Similarly, p_1(\omega) and p_2(\omega) are replaced by p_1 and p_2, respectively. Thus,

    p_1 = \left[ p_1(\omega_1),\; p_1(\omega_2),\; p_1(\omega_3),\; \ldots,\; p_1(\omega_N) \right]^T                        (2)

and

    \ddot{p}_1 = \left[ \frac{p_1(\omega_3) - 2 p_1(\omega_2) + p_1(\omega_1)}{(\Delta\omega)^2},\; \frac{p_1(\omega_4) - 2 p_1(\omega_3) + p_1(\omega_2)}{(\Delta\omega)^2},\; \ldots \right]^T        (3)

where p_1 is an N element vector, \ddot{p}_1 is an N-2 element vector representing the second derivative of p_1, and p_2 and \ddot{p}_2 are defined similarly. The precise sampling rate is unimportant subject to two constraints. First,

    N \geq 4                                                                                     (4)

since using fewer than 4 data points makes the problem singular. Secondly,

    \Delta\omega \neq m \, (\pi c / d), \qquad m = 1, 2, 3, \ldots                               (5)

since sampling at a multiple of \pi c / d would cause the exponential in (1) to repeat identically from sample to sample and eliminate the only observable difference between p_1 and p_2. The data are assumed to have been sampled uniformly, and thus the (\Delta\omega)^2 is factored out and ignored. Proceeding, then, to define the matrix problem, (1) becomes

    p = p_1 + U p_2                                                                              (6)

where

    U = \mathrm{diag}\left[\, e^{i 2 \omega_1 d / c},\; e^{i 2 \omega_2 d / c},\; \ldots,\; e^{i 2 \omega_N d / c} \,\right].                    (7)
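As a concrete companion to (4)-(7), the sketch below (illustrative; the frequency grid is merely patterned on the strip example of Section 3) forms the diagonal of U and rejects sampling schemes that violate either constraint.

```python
import numpy as np

def exponential_diag(omega, d, c=1.0):
    """Diagonal of U in eq. (7): exp(i 2 w_k d / c) at each sample frequency."""
    return np.exp(1j * 2.0 * np.asarray(omega, dtype=float) * d / c)

def check_sampling(omega, d, c=1.0, tol=1e-9):
    """Constraints (4) and (5): at least 4 uniformly spaced samples, with a
    frequency step that is not a multiple of pi*c/d (such a step would make the
    exponential factor repeat identically from sample to sample)."""
    omega = np.asarray(omega, dtype=float)
    if omega.size < 4:
        raise ValueError("need N >= 4 samples, eq. (4)")
    dw = np.diff(omega)
    if not np.allclose(dw, dw[0]):
        raise ValueError("sampling is assumed uniform")
    ratio = dw[0] / (np.pi * c / d)
    if abs(ratio - round(ratio)) < tol:
        raise ValueError("step must not be a multiple of pi*c/d, eq. (5)")

# Example grid patterned on the strip example of Section 3: w_n = n*pi*c/(8d).
d, c = 1.0, 1.0
omega = np.arange(8, 49) * np.pi * c / (8.0 * d)
check_sampling(omega, d, c)
u = exponential_diag(omega, d, c)        # entries exp(i n pi / 4) on this grid
print(u[:4])
```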

The second-derivative vectors of (3) may then be written compactly as

    \ddot{p}_1 = A p_1                                                                           (8)

and

    \ddot{p}_2 = A p_2                                                                           (9)

with

    A = \begin{bmatrix}
        -1 &  2 & -1 &        &        &    \\
           & -1 &  2 & -1     &        &    \\
           &    &    & \ddots &        &    \\
           &    &    & -1     &  2     & -1
        \end{bmatrix}.                                                                           (10)

From (6) and (9),

    \ddot{p}_2 = A U^{-1} p - A U^{-1} p_1.                                                      (11)

To minimize \ddot{p}_1 and \ddot{p}_2, and thence obtain the optimal p_1 and p_2, define the error vector

    e = \begin{bmatrix} \ddot{p}_1 \\ \ddot{p}_2 \end{bmatrix}                                   (12)

(i.e., the concatenation of \ddot{p}_1 with \ddot{p}_2) and minimize e. As mentioned above, this minimization is with respect to a weighted Euclidean norm, which is now defined as

    \| e \|^2 = \bar{e}^T M e                                                                    (13)

where M is a diagonal matrix of order 2(N-2) whose elements are positive real numbers which specify the weighting at each point of \ddot{p}_1 and \ddot{p}_2. Since the weighting of each point may be specified independently, there is considerable freedom in defining how the components should be smoothed. From (8), (11), and (12),

    e = \begin{bmatrix} A p_1 \\ A U^{-1} p - A U^{-1} p_1 \end{bmatrix} = K p_1 - g.            (14)
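The weighted error of (12)-(14) is equally direct to evaluate in code. The sketch below is an illustration only: the weights are taken uniform for brevity, and the small trailing demonstration uses made-up constant components rather than any measured data.

```python
import numpy as np

def error_norm(p, p1, omega, d, c=1.0, w1=1.0, w2=1.0):
    """Weighted error of eqs. (12)-(14): e = [A p1 ; A U^-1 p - A U^-1 p1],
    ||e||^2 = conj(e)^T M e, with M = diag(w1,...,w1, w2,...,w2)."""
    N = len(p)
    A = -np.diff(np.eye(N), n=2, axis=0)               # rows (-1, 2, -1), eq. (10)
    u = np.exp(1j * 2.0 * np.asarray(omega) * d / c)   # diagonal of U, eq. (7)
    AUinv = A / u                                      # A @ diag(1/u) via broadcasting
    e = np.concatenate([A @ p1, AUinv @ p - AUinv @ p1])          # eq. (14)
    m = np.concatenate([np.full(N - 2, w1), np.full(N - 2, w2)])  # diagonal of M
    return float(np.real(np.vdot(e, m * e)))           # eq. (13)

# Tiny demonstration with made-up data (d = c = 1): a constant split is smooth.
omega = np.arange(8, 14) * np.pi / 8.0
p1_true = np.ones(6)
p = p1_true + np.exp(1j * 2.0 * omega) * 0.5
print(error_norm(p, p1_true, omega, d=1.0))   # -> 0.0 (both components constant)
print(error_norm(p, p, omega, d=1.0))         # -> nonzero (oscillation forced into p1)
```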

In (14),

    K = \begin{bmatrix} A \\ -A U^{-1} \end{bmatrix}                                             (15)

and

    g = \begin{bmatrix} 0 \\ -A U^{-1} p \end{bmatrix}.                                          (16)

By the projection theorem [1], the minimum e, call it e_0, will be perpendicular to the range of K, where perpendicular is in the sense of the inner product space associated with the norm defined in (13):

    \bar{e}_0^T M K p_1 = 0, \qquad \text{for all } p_1                                          (17)

    \bar{K}^T M (K p_{1o} - g) = 0                                                               (18)

where p_{1o} is the p_1 associated with the maximally smooth solution. All that remains is to solve

    H p_{1o} = \bar{K}^T M g                                                                     (19)

for p_{1o}, where

    H = \bar{K}^T M K,                                                                           (20)

and then obtain p_2 from (6). As may be seen from (7), (10), and (15), H is a Hermitian banded matrix of half-bandwidth 2. It may be shown that H is also nonsingular and positive definite if M^{1/2} K has independent columns. This is guaranteed when sampling is constrained by (4) and (5). Since H is Hermitian and positive definite with a half-bandwidth of 2, highly efficient algorithms such as the Cholesky decomposition [2] may be employed. Further, the number of matrix elements which must be computed to solve for p_1 and p_2 grows linearly with N, instead of as N^2. Thus, to decompose 50 data points, only 3 × 50 elements need be computed and stored, as opposed to the usual 50 × 50 which result from an unbanded matrix problem.
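One concrete way of carrying out (15)-(20) with the banded storage just described is sketched below (NumPy/SciPy, as an illustration rather than the original program). The dense H is assembled for clarity, but only its three upper diagonals are passed to the banded Cholesky solver, which is precisely the 3 × N storage noted above; the short self-check at the end uses a made-up six-sample signal.

```python
import numpy as np
from scipy.linalg import solveh_banded

def decompose(p, omega, d, c=1.0, w1=1.0, w2=1.0):
    """Maximally smooth decomposition p = p1 + U p2 of eqs. (6)-(20).
    w1, w2 weight the smoothness of p1 and p2; returns (p1, p2)."""
    p = np.asarray(p, dtype=complex)
    omega = np.asarray(omega, dtype=float)
    N = p.size
    A = -np.diff(np.eye(N), n=2, axis=0)                  # eq. (10)
    u = np.exp(1j * 2.0 * omega * d / c)                  # diagonal of U, eq. (7)
    K = np.vstack([A, -A / u])                            # eq. (15); A/u == A @ diag(1/u)
    g = np.concatenate([np.zeros(N - 2), -(A / u) @ p])   # eq. (16)
    m = np.concatenate([np.full(N - 2, w1), np.full(N - 2, w2)])  # diagonal of M

    H = K.conj().T @ (m[:, None] * K)                     # eq. (20): Hermitian, half-bandwidth 2
    rhs = K.conj().T @ (m * g)                            # right-hand side of eq. (19)

    # Pack the three upper diagonals of H (3 x N storage) and solve by the
    # banded Cholesky factorization (LAPACK ?pbsv via solveh_banded).
    ab = np.zeros((3, N), dtype=complex)
    for k in range(3):                                    # k-th superdiagonal
        ab[2 - k, k:] = np.diagonal(H, offset=k)
    p1 = solveh_banded(ab, rhs)
    p2 = (p - p1) / u                                     # back-substitute via eq. (6)
    return p1, p2

# Quick self-check on a six-sample toy problem (constant components are already
# maximally smooth and satisfy (6) exactly, so they should be returned unchanged).
omega = np.arange(8, 14) * np.pi / 8.0
p = 1.0 + np.exp(1j * 2.0 * omega) * 0.5
p1, p2 = decompose(p, omega, d=1.0)
print(np.max(np.abs(p1 - 1.0)), np.max(np.abs(p2 - 0.5)))
```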

3. RESULTS

The algorithm was programmed and tested on data resulting from backscattering from a resistive strip under edge-on illumination. The strip was of width d, and 41 data points were obtained with \omega = n \pi c / (8d), n = 8, 9, 10, \ldots, 48. It was desired to determine what portion of the backscattering was due to the leading edge and what portion was due to the trailing edge. The periodic nature of the data (see Figure 1) indicated that the data might be amenable to the decomposition described in the previous section and, in fact, was the motivation for the development of the current method. The data were assumed to have the form

    p(n) = p_1(n) + e^{i n \pi / 4} \, p_2(n)                                                    (21)

and the program was run on the 41 data points associated with n varying from 8 to 48. Although the approximation of two independent scatterers deteriorates for wavelengths greater than the strip width, d, (n less than 16), it was desired to test the robustness of the algorithm. As can be seen from Figure 2, the periodicity which prompted the decomposition does not appear in the components. Consideration of the radiative interaction between the two scatterers indicated that p_1 should be smoother than p_2. Thus the problem was solved with the smoothness of p_1 weighted more heavily than that of p_2. Several weights were tried, and the results for the extremal weights of 1 and 100 are presented in Figure 2. An interesting result is that although the 100 weighting case did produce a very smooth p_1, p_2 did not deteriorate significantly. For the example presented, the matrix problem was formulated and solved using only 0.01 seconds of cpu time on an Amdahl 5860. Finally, it should be noted that the matrix is well conditioned, with estimated condition numbers of 10^3 and 10^4 for the matrices associated with the uniform weighting and the 100 to 1 weighting, respectively. These condition numbers, for a matrix of order 40, are quite good and indicate a highly stable matrix which is relatively insensitive to numerical round-off.
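The resistive-strip data are not reproduced here, but the qualitative behavior described above is easy to mimic. The sketch below (illustrative; the two smooth component functions are invented stand-ins for the edge contributions) generates synthetic data in the same format, 41 samples with n = 8 to 48 and phase factor exp(inπ/4), runs the decomposition for the uniform and 100-to-1 weightings, and prints the second-difference energies, showing that the oscillation present in the raw data is essentially absent from both components.

```python
import numpy as np

def decompose_dense(p, u, w1=1.0, w2=1.0):
    """Compact dense-matrix form of the Section 2 decomposition; for large N the
    banded Cholesky version sketched earlier is preferable."""
    N = p.size
    A = -np.diff(np.eye(N), n=2, axis=0)                  # second-difference operator
    K = np.vstack([A, -A / u])                            # eq. (15)
    g = np.concatenate([np.zeros(N - 2), -(A / u) @ p])   # eq. (16)
    m = np.concatenate([np.full(N - 2, w1), np.full(N - 2, w2)])
    p1 = np.linalg.solve(K.conj().T @ (m[:, None] * K), K.conj().T @ (m * g))
    return p1, (p - p1) / u

def roughness(x):
    """Sum of squared second differences, the smoothness measure of eq. (13)."""
    return float(np.sum(np.abs(np.diff(x, n=2)) ** 2))

# Synthetic stand-in for the strip example's format: 41 samples, n = 8..48,
# phase factor exp(i n pi / 4) as in eq. (21).  The two component functions
# below are invented smooth curves, NOT the measured resistive-strip data.
n = np.arange(8, 49)
u = np.exp(1j * n * np.pi / 4.0)
p1_true = (1.0 + 0.02 * n) * np.exp(1j * 0.02 * n)        # smooth leading-edge stand-in
p2_true = 0.5 * np.exp(-0.03 * n)                         # smooth trailing-edge stand-in
p = p1_true + u * p2_true

print("roughness of raw data:", roughness(p))
for w1 in (1.0, 100.0):                                   # uniform and 100-to-1 weighting
    p1, p2 = decompose_dense(p, u, w1=w1)
    print(f"w1={w1:5.0f}: roughness(p1)={roughness(p1):.2e}, "
          f"roughness(p2)={roughness(p2):.2e}, "
          f"max|p1 + U p2 - p|={np.max(np.abs(p1 + u * p2 - p)):.1e}")
```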

latively insensitive to numerical round-off. 4. DISCUSSION The example treated in this communication was solved by using the periodic nature of p (ca) and knowledge of the physical distance separating the two scatterers. If phase information is not available, then the present algorithm is inadequate due to the extra degree of freedom in the relationship betweenp, pI, and P2 Conversely, additional information may be incorporated into the maximally smooth decomposition so as to produce a decomposition of the form p(wc) = a( w) pI(c) + b ()2) p() (22) where a(co) and b(co) are completely arbitrary and may be chosen to explicitly factor out of p i(w) and P2(), respectively, any behavior which is known a priori. The decomposition into p 1(w) and P2()) may be obtained by following the analysis of section 2. The derivation of the maximally smooth decomposition resulting from (22) proceeds much as before with U still a diagonal matrix whose ith elenient is now b (wi)/ a(c). Since the algorithm presented in this communication is not only efficient but also well conditioned, a generalization to more than two components should be feasible. Although such additions would complicate the K matrix and increase the bandwidth of H, the matrix H should still be sufficiently ordered to permit an efficient solution. Finally, an additional use of the algorithm is to interpolate the original data. Since the original data is rather sparse and oscillatory it is difficult to gauge the locations of minima and maxima. Conversely, working with the components, each of which is very smooth, interpolation is easy; the interpolated components may then be recombined to yield a highly accurate interpolation of the original data. April 25, 1984 6

REFERENCES

[1] D. G. Luenberger, Optimization by Vector Space Methods. New York: Wiley, 1969.

[2] G. Dahlquist and A. Björck, Numerical Methods. Englewood Cliffs, New Jersey: Prentice-Hall, 1974.

FIGURE CAPTIONS

Figure 1. Original data (ooo).

Figure 2. Maximally smooth decomposition: solid line - decomposition with uniform weighting; dashed line - decomposition with weighting of p_1 multiplied by 100.

[Figure 1: plot of the original data]

[Figure 2: plot of the maximally smooth decomposition]