THE UNIVERSITY OF MICHIGAN RESEARCH INSTITUTE, ANN ARBOR

SYNTHESIS OF R-L-C NETWORKS BY DISCRETE TSCHEBYSCHEFF APPROXIMATIONS IN THE TIME DOMAIN

Technical Report No. 107
Electronic Defense Group, Department of Electrical Engineering

By H. Ruston

Approved by: Charles B. Sharpe

Project 2899, Task Order No. EDG-1, Contract No. DA-36-039 sc-78283
Signal Corps, Department of the Army, Department of Army Project No. 3-99-04-106

Submitted in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in The University of Michigan, April 1960

ACKNOWLEDGEMENTS The author wishes to express his sincere appreciation to the Chairman of his Doctoral Committee, Professor C. B. Sharpe for his advice and counsel during this investigation. He is also grateful to the other members of the Doctoral Committee for their helpful comments concerning the material. He feels especially indebted to Professor N. R. Scott for his guidance and suggestions. Further, the author wishes to acknowledge his gratitude to Professor A. B. Macnee, whose lectures on Network Synthesis aroused the author's interest in the subject. Thanks are also expressed to Dr. B. F. Barton for his assistance. Finally, the author wishes to thank Mr. R. E. Graham and Mrs. P. E. Stohrer for their preparation of this paper for publication. The author's wife typed the complete rough draft of the manuscript and offered many valuable suggestions. For this, and for her continued encouragement the author is deeply grateful. A part of the research reported in this paper was supported by the U. S. Army Signal Corps. ii

TABLE OF CONTENTS

ACKNOWLEDGEMENTS  ii
LIST OF TABLES  v
LIST OF FIGURES  vi
LIST OF SYMBOLS  vii
ABSTRACT  xii
CHAPTER I. INTRODUCTION  1
CHAPTER II. STATEMENT OF THE PROBLEM  6
CHAPTER III. STATUS OF THE ART  9
  3.1 Fourier Series Approach  9
  3.2 Impulse Method of Approximation  11
  3.3 Numerical Calculations by Time Series  12
  3.4 Approximation by Means of Least-Square Criterion  14
  3.5 Synthesis Through Matching of Time Moments  16
  3.6 Continued-Fraction Expansion and Padé Approximants  18
CHAPTER IV. THE IMPULSE RESPONSE APPROXIMATION PROBLEM  22
  4.1 Introduction  22
  4.2 The Determination of Pole Locations  24
    4.2.1 Reduction of the Problem to an Overdetermined System of Linear Equations  24
    4.2.2 Solution of Overdetermined Systems of Equations by Means of Discrete Tschebyscheff Approximations  33
      4.2.2.1 The Theory of Tschebyscheff Approximations to Overdetermined Systems of Equations  34
      4.2.2.2 Application to Solution of Overdetermined Systems of Equations  51
    4.2.3 The Formulas for Optimum Poles  52
  4.3 The Determination of Residues  59
  4.4 The Discussion of Errors in the Approximation Process  61
  4.5 Applications  66
    4.5.1 Determination of a Network with Impulse Response 1/(1+t)^2  67
    4.5.2 Determination of a Network with Impulse Response te^(-t)  92
  4.6 Conclusions and Summary  124

TABLE OF CONTENTS —Continued Page CHAPTER V. CONCLUSIONS 134 APPENDIX 1. STEP INPUT RESPONSE PROBLEM 137 APPENDIX 2. THE ARBITRARY INPUT PROBLEM 141 LIST OF REFERENCES 143 iv

LIST OF TABLES

TABLE I     Padé Table for H(s)  20
TABLE II    Rules for the Hyper-plane to Be Replaced  47
TABLE III   Formulas for Poles  55
TABLE IV    Values of h_m at Interval Points  69
TABLE V     Values of b_1m at Interval Points  73
TABLE VI    Values of b_1m and b_2m at Interval Points  84
TABLE VII   Values of h_m at Interval Points  92
TABLE VIII  Values of b_1m, a'_2m and b'_2m at Interval Points  111

LIST OF FIGURES

2.1  Time-Domain Synthesis Requirements  6
2.2  The Approximation Process  8
3.1  Desired Impulse Response of N  9
3.2  Periodic Repetition of h(t)  10
3.3  Approximation of h(t) by h*(t)  11
3.4  Approximation of h(t) by Rectangles  12
3.5  Impulse Response with Oscillatory Terms  17
3.6  Block Diagram Showing the Process of Obtaining the "Indirect Padé Approximant"  21
4.1  h(t) and Corresponding Ordinates for Equal Time Increments  25
4.2  Geometrical Interpretation of Hyper-planes and Normal Vectors  39
4.3  Transformation of the Interior of the Unit Circle Into the Left-Hand Plane  57
4.4  Impulse Response h(t) = 1/(1+t)^2  68
4.5  Network Realizing the h*(t) of Eq. (4.103)  77
4.6  Network Realizing the h*(t) of Eq. (4.156)  91
4.7  h(t) and h*(t) for One-term and Two-term Approximations (Example of Sec. 4.5.1)  93
4.8  [h*(t) - h(t)] for One-term and Two-term Approximations (Example of Sec. 4.5.1)  94
4.9  Impulse Response h(t) = te^(-t)  95
4.10 Network Realizing the h*(t) of Eq. (4.247)  123
4.11 Plot of h(t) and h*(t) (Example of Sec. 4.5.2)  125
4.12 Plot of [h*(t) - h(t)] (Example of Sec. 4.5.2)  126

LIST OF SYMBOLS

Symbol      Description                                                        First Used on Page

A_k         Residue of H*(s) at the pole s = s_k                               7
B           Constant term in Eq. (A2)                                          137
B_k         Coefficient of e^{s_k t} in Eq. (A2)                               137
C_k         Coefficient of e^{j k omega_0 t} in Eq. (3.1); coefficient of phi_k(t) in Eq. (3.11)   10
D(s)        Denominator polynomial of H*(s) in Eq. (3.10)                      13
[E_alpha]   Reference composed of n+1 hyperplanes selected in the n-dimensional Euclidean space R_n   36
E_1, E_2, ..., E_p   Hyperplanes in the n-dimensional Euclidean space R_n      36
G*(s)       An approximation to H(s) defined in Eq. (3.5)                      13
H_mn        mth-row, nth-column Padé-table approximant to H(s)                 20
H(s)        Prescribed system function                                         6
H*(s)       System function of the network N                                   6
I_1         Quantity defined in Eq. (4.18)                                     30
I_2         Quantity defined in Eq. (4.19)                                     30
I           Quantity defined in Eq. (A15)                                      140
J(z)        Function defined by Eq. (3.23)                                     18
K(s)        Laplace transform of k(t)                                          18
N           Network to be synthesized                                          6
N(s)        Numerator polynomial of H*(s) in Eq. (3.10)                        13

LIST OF SYMBOLS —Continued

P(r_1, ..., r_n)   A point with coordinates r_1, r_2, ..., r_n in the n-dimensional space R_n   35
P_m(y)      P_m(y) = \sum_{k=0}^{m} r_k y^{m-k}                                28
Q_1(s)      Laplace transform of an approximation to a pulse                   11
Q_2(s)      Laplace transform of an approximation to a delayed pulse           12
R_n         n-dimensional Euclidean space                                      36
T           Period of h_p(t) in Eq. (3.1)                                      10
T-point     Center of a reference                                              41
T_k-point   Center of the kth reference                                        50
T'-point    A point with a maximum error equal to that of the T-point          41
a_l         Real part of A_l, where A_l is complex                             60
a'_2m       Coefficient of a_2 in Eq. (4.71)                                   60
b_l         Imaginary part of A_l, where A_l is complex                        60
b_km        b_km = e^{s_k t_m}                                                 60
b'_2m       Coefficient of b_2 in Eq. (4.71)                                   60
C_k         Coefficient of delta(t - kd) in Eq. (B4)                           142
c_k         Coefficient of r_k in Eq. (4.47)                                   58
d           Uniform time increment between given points of data                25
d_k         Coefficient of s^k in D(s) in Eq. (3.10)                           13
e           Base of the natural logarithm system: e = 2.71828182...            6

LIST OF SYMBOLS —Continued

e_i(t)      Arbitrary input function                                           8
e_k(t)      Quantity defined in Eqs. (3.12)                                    15
e_o(t)      Response of N to e_i(t)                                            8
e_o*(t)     Approximation to e_o(t)                                            142
f_0         f_0 = lim_{s -> infinity} K(s)/s                                   18
f_k (k = 1, 2, ...)   Coefficients in the continued fraction of Eq. (3.24)     19
f_kj        Coefficient of e_j(t) in the kth sequence of differential sums in Eqs. (3.12)   15
g_k         Coefficient of s^k in G*(s)                                        13
h_m         h_m = h(t_m)                                                       25
h(t)        Prescribed response to the unit impulse function delta(t)          7
h_p(t)      Periodic repetition of h(t)                                        10
h*(t)       Response of N to the unit impulse function delta(t)                6
j           j = sqrt(-1)                                                       10
k_m         k_m = k(t_m)                                                       138
k(t)        Prescribed response to the unit step function                      137
k*(t)       Response of N to the unit step function                            137
l*(t)       l*(t) = k*(t) - B                                                  138
m_k         kth moment of h(t) around the origin                               16
n_k         Coefficient of s^k in N(s) in Eq. (3.10)                           13

LIST OF SYMBOLS —Continued

r_m         r_m = (-1)^m \sum_{k_m > ... > k_2 > k_1} y_{k_1} y_{k_2} \cdots y_{k_m}   27
s_k         kth pole of H*(s)                                                  7
t           Independent variable; time in seconds                              6
t_m         Abscissa of the mth point of data                                  25
w_k         Mapping of y_k in the w-plane                                      57
x_v         Vector normal to the hyperplane E_v in the Euclidean space R_n     36
y_k         y_k = e^{s_k d}                                                    26
z_kv        z_kv = A_k e^{s_k t_v}  (1 <= v <= q)                              26
alpha_l     Re(s_l), where s_l is complex                                      60
beta_l      Im(s_l), where s_l is complex                                      60
gamma_l     Im(s_l), where s_l is complex                                      55
delta_k     Re(y_k) for complex y_k                                            53
gamma_k     Im(y_k) for complex y_k                                            53
delta(t)    Unit impulse function                                              6
epsilon     The error for the center of a reference                            40
epsilon_v   (v = 1, 2, ..., p), the error of the vth equation                  35
epsilon_v   The error at the vth pole location                                 61
epsilon_k   Error for the center of the kth reference                          49
epsilon'    Error of the T'-point relative to epsilon                          41

LIST OF SYMBOLS —Continued

lambda_alpha   Coefficient of x_alpha in Eq. (4.29)                            36
mu_k        Coefficient of x_k in Eq. (4.37)                                   44
pi          3.14159265...                                                      10
alpha       As a subscript, denotes a choice of n+1 numbers from the series 1, 2, ..., p   36
phi_k(t)    (k = 1, 2, ..., n) kth orthonormal function in Eq. (3.11)          14
Gamma(x)    Function defined in Eq. (3.19)                                     18
omega_0     omega_0 = 2 pi / T                                                 10
Delta_{v+m} (v + m = 1, 2, ..., q)  Delta_{v+m} = k_{v+m+1} - k_{v+m}          139
Delta t     Width of an approximating rectangle in Fig. 3.4                    12
\sum_{(n+1)}   Summation containing only the selected (n+1) terms of the reference   36

ABSTRACT The purpose of this thesis is to develop a new method for synthesizing an R-L-C network, which when excited by a prescribed unit impulse input will have a prescribed output function as its response. A synthesis problem generally requires the solution of two problems: (1) the approximation problem; (2) the realization problem. In this study attention is focused on the approximation problem. In particular, a method is developed for approximating a prescribed impulse response by a function which can be an impulse response of a realizable R-L-C network. The more general problem of obtaining a network with a prescribed response to an arbitrary input (i.e., input different from the unit impulse) can be reduced to an equivalent one with a "prescribed" impulse response by known approximation techniques, and can thus be solved by the method presented. The method proposed here is a numerical approximation process, which yields an impulse response function approximating the prescribed one. The error of the approximation, which is defined as the difference between the approximate impulse response and the prescribed one, is minimized in the Tschebyscheff sense. The approximating impulse response is such that its Laplace transform is an R-L-C network function. Thus one can find a network having an impulse response approximating the prescribed one. In this work the approximate impulse response function is represented as a sum of exponential functions of a form Akeskt (k = 1, 2,..., n) where sk [Re (Sk) < 0, k = 1, 2,..., n] is the position of the kth pole of the approximating function, and the coefficient Ak (k = 1, 2,..., n) is the residue of the approximating function at the pole sk. The number n denotes the number of terms in the approximating impulse response function. Since an efficient approximation process requires optimizations of both the pole positions and the residues, two such optimizations are made here. Both these optimizations are made through the application of the discrete Tschebyscheff approximation theory, and yield, as stated above, an error of approximation which is a minimum in the Tschebyscheff sense. The final part of this investigation consists of presentation of two examples illustrating the approximation process. xii

CHAPTER I INTRODUCTION The advent of radar, automatic control mechanisms, electronic computers, and other new devices has made it necessary to provide means for the synthesis of networks meeting prescribed time requirements. This type of synthesis is commonly designated as "time-domain synthesis" or "transient-response synthesis." The problem of synthesis in the time domain consists of prescribing an electrical input (usually voltage or current input) and an electrical output (commonly voltage or current output), where both these quantities are functions of time. The solution to the problem consists of finding a network which, when excited by the prescribed input, will yield the prescribed output. A solution to a problem in network synthesis is rarely an exact one. The limitations of physical realizability, and of the modest number of elements which may be employed in a practical network, stringently limit the results which can be obtained using synthesis. A given synthesis problem may reduce to the selection of the simplest network meeting prescribed requirements, or, perhaps to the determination of the network of restricted complexity which would best fulfill the requirements. This means that, in general, the network which is found will, when excited by the prescribed input, yield an output different from the prescribed one. The difference between the two outputs constitutes an error. In the normal case it is desired to find a network which will minimize the error between these two outputs in some sense. In network synthesis, the three most commonly employed approximations are Taylor, least-mean-square error, and Tschebyscheff. The 1

2 Taylor approximation provides the best approximation to some given function at a single point. At that point the error and as many of its derivatives as possible equal zero. An error which is zero at this point and which increases slowly in the immediate vicinity results. Generally the price paid for this desirable behavior is a much larger error away from the zero-error point. The least-mean-square-error approximation minimizes the mean-square error. The parameters in the approximating function are varied, and that set which minimizes the mean-square error is chosen as the solution. The Tschebyscheff approximation minimizes the magnitude of the maximum error. The Tschebyscheff approximation is one of the most desired types of approximation in the area of network synthesis. This approximation makes a highly efficient use of circuit elements. No approximation procedure minimizing the error in the time domain has been offered in the past, probably because this type of approximation gives rise to more complicated mathematical expressions than the morecommonly-employed least-mean-square-error type of approximation. The problem of synthesis of networks from prescribed time requirements is a relatively important one in the general area of network synthesis. Consequently, a considerable effort was expended on this problem in the last decade, resulting in several excellent contributions. In particular, the application of orthogonal functions, computers, Fourier series theory, numerical methods, and time-moment matching have provided useful approaches to this problem. A number of useful methods have been proposed for the solution of the problem of synthesis in the time domain. However, each method that has been proposed suffers from either (1) restriction on the classes

3 of functions that can be approximated, (2) nonphysical realizability of the resulting function (in general), (3) unsatisfactory control of the approximation error, or (4) inefficient use of circuit elements. It is the aim of this investigation to propose a general approximation procedure which will yield a system function resulting in a network realizable with R-L-C elements. The method of approximation is a numerical one, employing discrete Tschebyscheff approximations. The error between the desired time response and the obtained time response is a minimum in the Tschebyscheff sense. It is felt that the proposed method largely overcomes the outlined four limitations. The proposed method also provides an approximation procedure minimizing the error in the time domain in the Tschebyscheff sense, thus providing a desirable type of error control. The theoretical development and the practical application of this approximation method are the purposes of this dissertation. In the method proposed in this investigation, attention is focused on the problem of finding a network from the prescribed response to unit impulse. It is shown (Appendix B) that a problem in which an arbitrary input and a prescribed response are given, can be reduced to one of prescribed impulse response. The Laplace transform of the impulse response is the system function. It follows from network theory considerations that the system function will result in a realizable R-L-C network, if the impulse response is a sum of n exponential functions, of a form skt Ake (k =1, 2,..., n). sk (k = 1, 2..., n) is the location of the kth pole of the system function, and the coefficient Ak (k = 1, 2,..., n) is the residue of the system function at the pole sk. The number n denotes the number of terms in the impulse response and in the system function.

4 n is also directly proportional to the minimum number of elements in the network realizing the system function. Hence, an efficient approximation process is one which yields a tolerable approximation error with a minimum number of terms for the impulse response. To obtain such an approximation, two optimizations must be made, namely, an optimization of pole positions and an optimization of residues. In the proposed method two such optimizations are made. The solution for optimum pole positions is developed. Once these pole positions are determined, an optimization of residues takes place. Both these optimizations are made in the Tschebyscheff sense through the application of discrete Tschebyscheff approximation theory. Thus, the approximate impulse response obtained results in a network which yields a tolerable approximation error with a minimum number of elements. To say this in other words, it is believed that the errors obtained using the method to be presented will for a given complexity of approximating function, generally be smaller than those obtained by previous approaches. The proposed method also largely overcomes the drawback of restriction on the classes of functions that can be approximated. The numerical process culminates in the system function and places no restrictions on types of function which can be approximated, provided, of course, that one does not demand properties not obtainable with R-L-C networks. Also, the drawback of non-physical realizability is overcome. The approximation error, being Tschebyecheff, is rather satisfactorily controlled. The author believes the contributions of this dissertation as providing a satisfactory solution to the problem of synthesis of networks for prescribed time requirements. A summary of the more important results includes: (1) application of the discrete Tschebyscheff approximation

5 theory to network problems; (2) development of a general solution to the problem of approximation of networks in the time domain; (3) development of a general numerical method of approximation, optimizing both pole locations and residues; and (4) detailed investigation of the errors of approximation both in the pole determination and in the residue determination. In the second chapter of this work the problem of synthesis of networks in the time domain is stated in detail. In the succeeding chapter the state of the art is reviewed, and some of the contributions to the problem are outlined. In the fourth chapter, the impulse-response approximation problem, i.e., synthesis of a network from the prescribed impulse response, is stated and solved. Two examples illustrating the approximation process are worked out in detail. In Appendix A, the problem of synthesis from.prescribed step response is solved, and it is shown that the methods of Chapter IV can be applied to yield the desired network. The arbitrary input problem is treated in Appendix B, where it is shown that this problem can be reduced to the one treated in Chapter IV, i.e., synthesis of networks from prescribed impulse response requirements.

CHAPTER II
STATEMENT OF THE PROBLEM

The synthesis problem is essentially an input-output problem; i.e., it is in general desired to find a network that will produce a prescribed response to a specified excitation. The specifications of the input and of the output yield the system function H(s). In general, there may exist no R-L-C network with H(s) as the system function. A network N can be found, however, which when excited by the specified input will produce an output approximating the prescribed one. For present purposes the network N which is to be found will be characterized by its system function H*(s) such that:

H*(s) = L[h*(t)] = \int_0^{\infty} h*(t) e^{-st} dt,    (2.1)

where h*(t) is the response of N to the unit impulse function delta(t), as shown in Fig. 2.1.

FIG. 2.1  TIME-DOMAIN SYNTHESIS REQUIREMENTS [the network N, driven by the unit impulse delta(t), produces the response h*(t)]

It is assumed here that N is a finite, linear, passive, lumped-parameter, bilateral electrical network. Then H*(s) can be written:

H*(s) = \sum_{k=1}^{n} \frac{A_k}{s - s_k},    (2.2)

where s_k (k = 1, 2, ..., n) are the poles of H*(s) and A_k is the residue at the pole s = s_k. Complex poles occur in conjugate pairs. If it is assumed that there are no coincident poles, h*(t), the inverse Laplace transform of H*(s), is then

h*(t) = \sum_{k=1}^{n} A_k e^{s_k t}.    (2.3)

Stability requires that

Re(s_k) < 0    (k = 1, 2, ..., n).    (2.4)

Since coincident poles give rise to t e^{s_k t}, t^2 e^{s_k t}, ..., t^{k-1} e^{s_k t} for a kth-order pole, they must not occur with zero real part.

The statement of an approximation problem consists of prescribing the impulse response h(t) with tolerances on the allowable error. One is to find a system function H*(s), as given in Eq. (2.2), with the constraint of Eq. (2.4). It is also expected that the distance* from h(t) to h*(t) is minimized in some sense (i.e., the error is minimized in some sense).

* Distance between x and y = d(x,y) = a number which provides a measure of the disparity between x and y (Fréchet, 1906).

In Fig. 2.2 the above requirement is portrayed in the form of a block diagram. In the following chapters a method will be developed for the synthesis of an approximated system N when h(t) is prescribed. The system N is restricted to be an interconnection of R-L-C elements

(i.e., an R-L-C network). The error (distance) between h(t) and h*(t) will be defined as h*(t) - h(t). A network N will be found such that max |h*(t) - h(t)| will be a minimum. The error, hence, will be minimized in the Tschebyscheff sense.

FIG. 2.2  THE APPROXIMATION PROCESS [the prescribed system, with system function H(s) and desired response h(t), and the approximated system N, with system function H*(s) and response h*(t), feed an error (distance) evaluator; the error is the difference between the desired response h(t) and the approximated response h*(t)]

It is to be noted that most problems for prescribed time response include specifications of a particular input e_i(t) and of the corresponding response e_o(t) rather than the impulse response h(t). However, a number of techniques are available for reduction of such input conditions to an equivalent prescribed impulse response h(t) [31]. One such technique is discussed in Appendix B.
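As a minimal numerical sketch of the error criterion just stated (not part of the report; the target h(t) and the trial poles and residues below are assumed for illustration), the Tschebyscheff error max |h*(t) - h(t)| of a candidate exponential sum can be evaluated on a time grid:

```python
# Illustrative sketch: evaluate h*(t) = sum_k A_k exp(s_k t) for trial poles
# and residues and measure the Tschebyscheff (maximum-magnitude) error
# against a prescribed h(t) sampled on a grid.
import numpy as np

def h_star(t, poles, residues):
    """Approximate impulse response h*(t) = sum_k A_k * exp(s_k * t)."""
    t = np.asarray(t, dtype=float)
    terms = residues[None, :] * np.exp(np.outer(t, poles))
    return terms.sum(axis=1).real      # conjugate pole pairs give a real sum

def tschebyscheff_error(h, t, poles, residues):
    """max over the grid t of |h*(t) - h(t)|."""
    return np.max(np.abs(h_star(t, poles, residues) - h(t)))

# Assumed example: target h(t) = 1/(1+t)^2 and a guessed one-term fit.
t = np.linspace(0.0, 5.0, 101)
h = lambda t: 1.0 / (1.0 + t) ** 2
print(tschebyscheff_error(h, t, np.array([-1.5]), np.array([1.0])))
```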

CHAPTER III
STATUS OF THE ART

Recent contributions to the time-domain synthesis problem have resulted from exploiting several ideas, of which the following are noteworthy:

3.1 Fourier Series Approach [8, 24]

This method is based upon the following idea: if h(t), the prescribed impulse response, were to repeat periodically, it could then be approximated by a finite trigonometric polynomial to any required error tolerance. A corresponding Laplace transform could then be obtained at once, and since the approximation takes place in the time domain, time-domain error is controlled. Also, the methods of Fourier series are well known and understood. In effect, if the function shown in Fig. 3.1 is the desired impulse response of the sought network N,

FIG. 3.1  DESIRED IMPULSE RESPONSE OF N [h(t) = 0 when t > T/2]

and if the periodic function h_p(t) is as shown in Fig. 3.2,

FIG. 3.2  PERIODIC REPETITION OF h(t)

then evidently

h_p(t) = \sum_{k} C_k e^{j k \omega_0 t},  with  \omega_0 = \frac{2\pi}{T}.    (3.1)

If h_p(t) = 0 for t < 0, then h(t) = [h_p(t) - h_p(t-T)]. h_p(t) can then be approximated by trigonometric functions, resulting in an approximate network. This method yields excellent results for certain types of waveforms. Among its drawbacks is the fact that synthesis by this technique often requires a non-positive-real admittance (and hence it is not realizable with passive elements only), and a modification has to take place. It is also believed that this technique is not very efficient in terms of degree of approximation for a utilized number of elements.
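A small numerical sketch of the Fourier-series idea (not from the report; the period T and the shape of h(t) below are assumed for illustration) computes the coefficients C_k of the periodic repetition by integration over one period and truncates the series:

```python
# Illustrative sketch: approximate the coefficients C_k of
# h_p(t) = sum_k C_k exp(j k w0 t) by numerical integration over one period.
import numpy as np

def fourier_coefficients(h, T, n_terms, n_samples=2048):
    """C_k = (1/T) * integral over one period of h_p(t) exp(-j k w0 t) dt."""
    t = np.linspace(0.0, T, n_samples, endpoint=False)
    hp = h(t)                        # one period of the periodic repetition
    w0 = 2.0 * np.pi / T
    ks = np.arange(-n_terms, n_terms + 1)
    return ks, np.array([np.trapz(hp * np.exp(-1j * k * w0 * t), t) / T
                         for k in ks])

# Assumed example: h(t) nonzero only on 0 < t < T/2, as in Fig. 3.1.
T = 4.0
h = lambda t: np.where(t < T / 2, np.exp(-t), 0.0)
ks, C = fourier_coefficients(h, T, n_terms=5)
```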

3.2 Impulse Method of Approximation [7,8,26]

This method is based upon the idea that an arbitrary function can be related to a train of impulses. If the desired impulse response h(t) is given, an approximate response h*(t) is obtained as a sequence of v-1 curves, each of which is given by an (m-1)-degree polynomial. This concept is illustrated in Fig. 3.3.

FIG. 3.3  APPROXIMATION OF h(t) BY h*(t)

If h*(t) is then differentiated m times, it will yield a sequence of v impulses which, in turn, are approximated by some reasonable facsimile. A suggestive approximation for a delayed impulse is a delayed pulse. A good approximation, evidently, requires narrow pulse width, and if the Laplace transform of one such pulse is approximated by an expression of the form

Q_1(s) = \sum_{v=1}^{n} \frac{A_v}{s - s_v},    (3.2)

then a pulse occurring tau seconds earlier would have a Laplace transform*:

Q_2(s) = \sum_{v=1}^{n} \frac{A_v e^{s_v \tau}}{s - s_v}.    (3.3)

* Note that if L^{-1}[Q_1(s)] = \sum_{v=1}^{n} A_v e^{s_v t} = q_1(t), then q_2(t) = q_1(t + \tau) = \sum_{v=1}^{n} (A_v e^{s_v \tau}) e^{s_v t}; hence Q_2(s) follows as given in Eq. (3.3).

This method yields good results in many cases; however, one of its drawbacks lies in the fact that two approximations are required (one in the approximation of h(t) by a sequence of curves, and the other in the approximation of the delayed impulse by Q_2(s)), thus increasing the final error.

3.3 Numerical Calculations by Time Series [1,14,15]

A representative method in this class is the one due to Ba Hli [1]. If h(t) is approximated by rectangles as shown in Fig. 3.4,

FIG. 3.4  APPROXIMATION OF h(t) BY RECTANGLES

then, since

H(s) = \int_0^{\infty} h(t) e^{-st} dt    (3.4)

can be approximated by G*(s) given by:

G*(s) = h_1 \Delta t \, e^{-s \Delta t} + h_2 \Delta t \, e^{-2 s \Delta t} + \cdots,    (3.5)

consequently,

G*(s) = \sum_{v} h_v \Delta t \, e^{-v s \Delta t}.    (3.6)

Expansion of e^{-s \Delta t} in a power series yields

e^{-s \Delta t} = 1 - \frac{s \Delta t}{1!} + \frac{(s \Delta t)^2}{2!} - \cdots,    (3.7)

an expansion valid for all finite s. Now,

G*(s) = h_1 \Delta t \left( 1 - \frac{s \Delta t}{1!} + \frac{(s \Delta t)^2}{2!} - \cdots \right) + h_2 \Delta t \left( 1 - \frac{2 s \Delta t}{1!} + \frac{(2 s \Delta t)^2}{2!} - \cdots \right) + \cdots + h_n \Delta t \left( 1 - \frac{n s \Delta t}{1!} + \frac{(n s \Delta t)^2}{2!} - \cdots \right).    (3.8)

Collecting terms,

G*(s) = \sum_{v=1}^{n} h_v \Delta t - s \Delta t \sum_{v=1}^{n} v h_v \Delta t + \cdots,    (3.9)

or

G*(s) = g_0 + g_1 s + g_2 s^2 + \cdots.

This last expression is then approximated at the origin as a ratio of two polynomials; i.e.,

G*(s) \approx \frac{N(s)}{D(s)} = \frac{n_0 + n_1 s + \cdots + n_m s^m}{d_0 + d_1 s + \cdots + d_n s^n}.    (3.10)

d_0 is usually chosen to be one, and the coefficients n_k and d_k are found through solution of a system of equations.
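The power-series coefficients g_k in the expansion just given follow directly from the samples. A brief sketch (not from the report; the sample spacing and the assumed h(t) are illustrative):

```python
# Illustrative sketch: from uniform samples h_v of h(t), form the coefficients
# g_k of G*(s) = sum_v h_v*dt*exp(-v*s*dt) = g_0 + g_1*s + g_2*s^2 + ...
import numpy as np
from math import factorial

def g_coefficients(h_samples, dt, n_coeffs):
    """g_k = ((-dt)^k / k!) * sum_v v^k * h_v * dt, from expanding exp(-v s dt)."""
    v = np.arange(1, len(h_samples) + 1)
    return np.array([(-dt) ** k / factorial(k) * np.sum(v ** k * h_samples * dt)
                     for k in range(n_coeffs)])

# Assumed example: samples of h(t) = exp(-t) taken every dt = 0.1 s.
dt = 0.1
h_samples = np.exp(-dt * np.arange(1, 101))
g = g_coefficients(h_samples, dt, n_coeffs=4)
# g would then be matched at the origin by a rational N(s)/D(s) as in Eq. (3.10).
```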

This method yields adequate results for many applications not requiring a great degree of accuracy. It is seen that a number of approximations has taken place, thus adding to the overall error. In particular, experience reveals that functions which are not monotonically rising and then monotonically falling are not well approximated by this procedure.

3.4 Approximation by Means of Least-Square Criteria [6,11]

This approach consists of approximating the impulse response h(t) as the sum of orthogonal functions; i.e.,

h*(t) = \sum_{k=1}^{n} C_k \varphi_k(t),    (3.11)

where [\varphi_1(t), \varphi_2(t), ..., \varphi_n(t)] form an orthonormal set, and C_k is chosen so as to minimize the least-square error between h(t) and h*(t). In particular, \varphi_k(t) is a sum of decaying exponentials and exponentially-damped sinusoids. This method, advanced by W. H. Kautz [11], has also been studied by E. G. Gilbert [6]. Gilbert has shown that analog-computer circuits can be implemented to yield the desired constants C_k. However, in this method the approximation is essentially in terms of the coefficients C_k (which form the residues) only, and by and large the poles are assumed arbitrarily. Even though two methods are suggested by Kautz for locating pole positions, these methods leave something to be desired.

In the first method H(s) = L[h(t)] is found and expanded in a power series, which is then expanded into a continued fraction. Termination of this continued fraction after several divisions will yield a rational fraction. The roots of the denominator polynomial are suggested as pole positions.

This method, even though straightforward enough, requires that:

1. h(t) be given in an analytical form;
2. h(t) be L-transformable;
3. The roots of the denominator polynomial be in the left-hand plane.

These conditions are not met in general.

The second method described by Kautz for locating the poles is the Prony method. It consists of finding a linear, constant-coefficient differential equation which is nearly satisfied by h(t). Such an equation can be solved through substitution of h(t) = e^{st} into it, and it leads to a characteristic polynomial in s, the roots of which are the "natural resonances" of its solution, hence related approximately to the "natural resonances" of h(t). This polynomial is obtained by forming the sequence of differential sums:

e_0(t) = h(t)
e_1(t) = f_10 e_0(t) + h'(t)
. . . . . . . . . . . . . . . . . . .    (3.12)
e_n(t) = f_n0 e_0(t) + f_n1 e_1(t) + ... + f_n,n-1 e_{n-1}(t) + h^{(n)}(t)

The solution of the characteristic equation for e_n is used to determine the n poles. The coefficients f_kj are computed from the integrals

f_kj = - \frac{\int h^{(k)}(t) e_j(t) \, dt}{\int e_j^2(t) \, dt}.    (3.13)

Thus, this method requires the availability of the numerator and denominator integrals in addition to a considerable amount of computational work. A more detailed treatment of Prony's method is found in the classical work of Prony [10].
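A small sketch of the Prony idea in its sampled-data form follows (an assumption for illustration; it is not the integral formulation of Eqs. (3.12)-(3.13) above): a linear recurrence is fitted to equally spaced samples of h(t), and the roots of its characteristic polynomial estimate the natural modes.

```python
# Illustrative sampled-data Prony sketch: fit h_{v+n} + a_1 h_{v+n-1} + ...
# + a_n h_v ~ 0 by least squares, then take roots of the characteristic
# polynomial as z_k = exp(s_k d) and recover the poles s_k.
import numpy as np

def prony_poles(h_samples, n, d):
    """Estimate n poles from samples h_m = h(m*d)."""
    h = np.asarray(h_samples, dtype=float)
    A = np.column_stack([h[n - j:len(h) - j] for j in range(1, n + 1)])
    a, *_ = np.linalg.lstsq(A, -h[n:], rcond=None)
    roots = np.roots(np.concatenate(([1.0], a)))   # zeros of z^n + a_1 z^(n-1) + ...
    return np.log(roots.astype(complex)) / d       # since z_k = exp(s_k d)

# Assumed data: h(t) = 3 exp(-2t) + exp(-5t), sampled every 0.05 s.
d = 0.05
t = d * np.arange(200)
print(prony_poles(3 * np.exp(-2 * t) + np.exp(-5 * t), n=2, d=d))  # about -2 and -5
```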

3.5 Synthesis Through Matching of Time Moments [10,22]

This approach is based upon the expansion of the impulse response into time moments. This method has the advantage that moments are easily obtained from a graphical presentation of the impulse response and that the moment coefficients are simply related to the transfer function of the network. Since

H(s) = \int_0^{\infty} h(t) e^{-st} dt,    (3.14)

expansion of e^{-st} into a power series yields

H(s) = \int_0^{\infty} h(t) dt - \frac{s}{1!} \int_0^{\infty} t h(t) dt + \frac{s^2}{2!} \int_0^{\infty} t^2 h(t) dt - \cdots    (3.15)

     = m_0 - m_1 s + m_2 s^2 - \cdots,    (3.16)

where

m_k = \frac{1}{k!} \int_0^{\infty} t^k h(t) dt    (3.17)

is the kth moment of the impulse-response function h(t) around the origin. Then, in particular:

m_0 = area under the impulse response,
m_1 / m_0 = center of gravity,
2 m_2 / m_0 = moment of inertia about the line t = 0,

and so on. Therefore, the coefficients can be identified with time moments.
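A brief numerical sketch of Eq. (3.17) (not from the report; the h(t) below is assumed for illustration) computes the first few moments from a sampled response:

```python
# Illustrative sketch: numerical time moments m_k = (1/k!) * integral t^k h(t) dt.
import numpy as np
from math import factorial

def time_moments(h, t_max, n_moments, n_samples=4001):
    t = np.linspace(0.0, t_max, n_samples)
    return np.array([np.trapz(t ** k * h(t), t) / factorial(k)
                     for k in range(n_moments)])

# Assumed example h(t) = t*exp(-t): the exact moments are m_k = k + 1.
m = time_moments(lambda t: t * np.exp(-t), t_max=40.0, n_moments=4)
print(m)   # approximately [1, 2, 3, 4]
```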

The power series is then approximated by a continued-fraction expansion, as described in the previous section*. In effect, H(s) is equated to a rational function, and the fraction is cleared by multiplying both sides by the denominator polynomial. Matching coefficients of equal powers of s yields the coefficients of the numerator and denominator polynomials.

Some of the advantages of this method were stated earlier. Its drawbacks are: (1) the error of the approximation is not predictable in advance; (2) moments exist only for certain classes of functions which are sufficiently bounded in amplitude and time; (3) there is no guarantee that the zeros of the denominator will lie in the left-hand plane. In general, the method does not work well for functions with oscillatory terms, such as the one illustrated in Fig. 3.5.

FIG. 3.5  IMPULSE RESPONSE WITH OSCILLATORY TERMS

* This method of approximation of a power series by a rational function is described in detail in [20].

3.6 Continued-Fraction Expansion [13] and Padé [9,15,19,25,27] Approximants

The method of continued-fraction expansion has been described by Nadler [17]. It is not a time-domain synthesis method in the sense of the statement of the problem, as stated in Chapter II. This approach consists of finding the L-transform of the step response, approximating it in the s-domain, and obtaining the inverse L-transform of the approximated function. The details of this method follow:

Let K(s) be the Laplace transform of the prescribed response of the sought network N to the unit step function. Then,

K(s) = \frac{1}{s} H(s).    (3.18)

Cauer [2] has shown that if K(s) is positive-real and regular, then

K(s) = s f_0 + \int_0^{\infty} \frac{d\Gamma(x)}{s + x},    (3.19)

where

f_0 = \lim_{s \to \infty} \frac{K(s)}{s}    (3.20)

and

d\Gamma(x) = Re[K(j\sqrt{x})].    (3.21)

Stieltjes [2] has shown that if \Gamma(x) is an increasing function with infinitely many poles, and if the integrals

\int_0^{\infty} (-x)^{k-1} \, d\Gamma(x)    (k = 1, 2, 3, ...)    (3.22)

all exist, then one can represent the integral

J(z) = \int_0^{\infty} \frac{d\Gamma(x)}{z + x}    (3.23)

by a continued fraction of the form:

J(z) = \cfrac{1}{f_1 z + \cfrac{1}{f_2 + \cfrac{1}{f_3 z + \cfrac{1}{f_4 + \cdots}}}}    (3.24)

with an infinite number of terms. It can be seen that the right-hand side of Eq. (3.19) can be expanded into a continued fraction similar to Eq. (3.24). Then

K(s) = s f_0 + 1 / (f_1 s + 1 / (f_2 + 1 / (f_3 s + 1 / (f_4 + \cdots)))),    (3.25)

or

K(s) = s f_0 + \cfrac{1}{f_1 s + \cfrac{1}{f_2 + \cfrac{1}{f_3 s + \cfrac{1}{f_4 + \cdots}}}}.    (3.26)

One can now terminate the continued fraction in Eq. (3.26) and equate the coefficients f_k to those in the power series expansion for K(s). The terminated continued fraction, in addition to the s f_0 term, represents, then, a rational function approximation to K(s).

The Padé [17] method of approximation consists of listing various rational fraction approximations in a double-entry table. Hence, a function H(s) is approximated as a ratio of two polynomials, N(s) and D(s) [i.e., H*(s) = N(s)/D(s)]. Now, if N(s) is an mth-degree polynomial and D(s) an nth-degree polynomial, then N(s) has m+1 coefficients and D(s) has n+1 coefficients. The rational function N(s)/D(s) has, however, only m+n+1 independent coefficients. Hence, if one equates N(s)/D(s) to the power series of H(s) he can determine the

coefficients of N(s) and D(s) so that H(s) D(s) - N(s) has s^{m+n+1} as the lowest power of s [i.e., the coefficients of s^k (k = 0, 1, ..., m+n) are all zero]. These various rational-function approximants to H(s) can then be tabulated in a double-entry table, as shown in Table I, where

H_mn = \frac{N(s)}{D(s)} = \frac{\sum_{k=0}^{m} a_k s^{m-k}}{\sum_{k=0}^{n} b_k s^{n-k}}.    (3.27)

TABLE I  PADÉ TABLE FOR H(s)

          n = 0    n = 1    n = 2
m = 0     H_00     H_01     H_02
m = 1     H_10     H_11     H_12
m = 2     H_20     H_21     H_22

Teasdale [25] suggests, for better approximation in the time domain, the employment of an "indirect Padé approximant." A method for obtaining this approximant is shown in the block diagram of Fig. 3.6. This approach, excellent for s-plane approximations, suffers in the time domain from the shortcomings mentioned in Section 3.4.

In this chapter, several contributions have been reviewed. Other approaches have made use of Laguerre's functions [13,18] and of analog computers [6,12,18]. These will not be discussed here.

FIG. 3.6  BLOCK DIAGRAM SHOWING THE PROCESS OF OBTAINING THE "INDIRECT PADÉ APPROXIMANT" [the diagram chains: L-transform of h(t); a bilinear substitution relating s and z to form W(z); removal of the zeros of W(z) at z = -1 to form F_1(z); expansion of F_1(z) in a power series about the origin; a direct Padé approximant; transformation back to the s-plane, with the constant K chosen so that H*(0) = H(0); inverse L-transform]
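To make the Padé construction of Table I concrete, here is a short sketch (not from the report) that determines an [m/n] entry from given power-series coefficients; it uses ascending powers of s and normalizes the leading denominator coefficient to one, both of which are assumptions of this sketch rather than the convention of Eq. (3.27):

```python
# Illustrative sketch: the [m/n] Pade approximant of H(s) = c0 + c1*s + ...
# obtained by requiring H(s)*D(s) - N(s) to start at s^(m+n+1).
import numpy as np

def pade(c, m, n):
    """Return numerator and denominator coefficients (ascending powers of s)."""
    c = np.asarray(c, dtype=float)
    # Denominator d (d0 = 1): for k = m+1 .. m+n require sum_j d_j c_{k-j} = 0.
    A = np.array([[c[k - j] if 0 <= k - j < len(c) else 0.0
                   for j in range(1, n + 1)] for k in range(m + 1, m + n + 1)])
    rhs = -np.array([c[k] for k in range(m + 1, m + n + 1)])
    d = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator: a_k = sum_j d_j c_{k-j}, for k = 0 .. m.
    a = np.array([sum(d[j] * c[k - j] for j in range(0, min(k, n) + 1))
                  for k in range(m + 1)])
    return a, d

# Assumed example: H(s) = 1/(1+s) = 1 - s + s^2 - ...; the [1/1] entry
# reproduces it exactly: numerator [1, 0], denominator [1, 1].
a, d = pade([1.0, -1.0, 1.0, -1.0, 1.0], m=1, n=1)
```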

CHAPTER IV
THE IMPULSE-RESPONSE-APPROXIMATION PROBLEM

4.1 Introduction

In the second chapter the approximation problem has been stated, and in the third chapter methods of attack have been reviewed. In this chapter an alternative method of attack is developed, employing the concept of discrete Tschebyscheff approximations. The problem considered here is one of determining an R-L-C network with an impulse response h*(t) which approximates a prescribed impulse response h(t). Since one knows how to synthesize a network function of the form

H*(s) = \sum_{k=1}^{n} \frac{A_k}{s - s_k}    (4.1)

with

Re(s_k) < 0    (k = 1, 2, ..., n),    (4.2)

the problem will be solved if the approximate impulse response h*(t) can be found which is given by

h*(t) = \sum_{k=1}^{n} A_k e^{s_k t}    (4.3)

with*

Re(s_k) < 0    (k = 1, 2, ..., n).    (4.4)

The problem can be considered to be two-fold, its parts being:

(a) Determination of pole locations, i.e., the determination of the exponents of the approximating function in Eq. (4.3), and
(b) Determination of residues, i.e., the determination of the coefficients A_k in Eq. (4.3).

* Note that this expansion assumes no coincident poles.
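Part (b) is linear once the poles of part (a) are fixed. As a rough numerical sketch of that structure (not the report's procedure, which minimizes the error in the Tschebyscheff sense; the data and poles below are assumed), a least-squares fit can stand in for the residue determination:

```python
# Illustrative sketch of part (b): with trial poles s_k fixed, the residues A_k
# enter the samples linearly, h(t_m) ~ sum_k A_k exp(s_k t_m), so the
# overdetermined system B A = h can be solved (here by least squares).
import numpy as np

def fit_residues(t_samples, h_samples, poles):
    """Solve B A ~ h, where B[m, k] = exp(s_k t_m)."""
    B = np.exp(np.outer(t_samples, poles))
    A, *_ = np.linalg.lstsq(B, h_samples, rcond=None)
    return A

# Assumed data: h(t) = 2 exp(-t) - exp(-3t) sampled at 20 points.
t = np.linspace(0.1, 4.0, 20)
h = 2.0 * np.exp(-t) - np.exp(-3.0 * t)
print(fit_residues(t, h, poles=np.array([-1.0, -3.0])))   # about [2, -1]
```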

23 Both determinations are to be made so as to minimize the approximation error in some sense. In this chapter the parts (a) and (b) of the problem are solved. The approximation error is minimized in the Tschebyscheff sense. A brief summary of the contents of each section is given below. In Section 4.2 of this chapter the problem of determination of pole locations is solved. This is accomplished by a reduction of the problem to an overdetermined system of linear equations. It is shown that the solution of the overdetermined system will yield a set of coefficients from which the desired poles can be obtained. In the same section the theory of discrete Tschebyscheff approximations is reviewed. It is shown that this theory can be applied to solve the overdetermined system of equations. The formulas for optimum poles (in Tschebyscheff sense) are developed, and are listed in Table III. In Section 4.3 the problem of determination of residues is solved. It is shown that this problem can likewise be reduced to an overdetermined system of linear equations, the solution of which yields the residues. In Section 4.4 the errors of the approximation are discussed. A relationship between the Tschebyscheff error in the pole determination and the final approximation is derived. In Section 4.5 two examples are worked out to illustrate the developed method. The results lead to realizable networks which are shown at the end of each example. Also a comparison is made between the desired impulse response, and the obtained one for each example.

24 In section 4.6 discussion of the suggested method is made, and the drawn conclusions are stated. A summary of the process is offered at the end of the section. 4.2 The Determination of Pole Locations In this section the problem of determination of pole locations is solved. It is shown that this problem can be reduced to an overdetermined system of equations. The theory of discrete Tschebyscheff approximations is reviewed, and it is shown that with this theory, formulas for optimum poles (in Tschebyscheff sense) can be developed. 4.2.1 Reduction of the Problem to an Overdetermined System of Linear Equations. In this section the problem of determination of pole locations is reduced to the problem of an overdetermined system of linear equations. The prescribed impulse response is assumed to be given in form of q ordinates of h(t), denoted as hm (m = 1, 2,..., q), at uniform time intervals. A set of n coefficients rl, r2,..., rn is introduced from which the desired poles sl, s2,..., sn can be obtained. The relationship between rk (k = 1, 2,..., n) and hm (m = 1, 2,..., q) is stated in Theorem 1. Theorem 1 in essence, is found in the literature [21,29,50], the proof of the theorem, however, is the author's. By Theorem 1 one obtains a set of linear equations for rk (k = 1, 2,..., n) which are in general overdetermined (i.e., the number of equations exceeds the number of unknowns). The method developed for the determination of pole locations is similar to the Prony method [10]. However, the discrete Tschebyscheff approximation concept is employed to solve the overdetermined system of equations.

In general, h(t) will be given in the form of an equation, a graph, or a set of data. From any of these three forms one can obtain data for equal increments of time. If there are q points of data, if the first point of data is given at t_1 [i.e., h(t_1) is given], and if the increment is d, then the mth ordinate of h(t) is given by

h_m = h(t_m)    (m = 1, 2, ..., q).    (4.5)

When v is an integer less than or equal to q, t_v is given by

t_v = t_1 + (v-1) d    (v <= q).    (4.6)

Equating h_v to h*(t_v), one obtains

h_v = \sum_{k=1}^{n} A_k e^{s_k t_v}.    (4.7)

A sketch of h(t) and the corresponding ordinates of h(t) for equal time increments is shown in Fig. 4.1.

FIG. 4.1  h(t) AND CORRESPONDING ORDINATES FOR EQUAL TIME INCREMENTS [t_m = t_1 + (m-1)d]
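A minimal sketch of this sampling step (not from the report; the prescribed h(t), t_1, d, and q below are assumed) also assembles the shifted sample groups (h_v, h_{v+1}, ..., h_{v+n}) that the development on the following pages turns into linear equations for the coefficients r_k:

```python
# Illustrative sketch: uniform ordinates h_m = h(t_1 + (m-1)d), Eqs. (4.5)-(4.6),
# and the shifted rows used later to set up the overdetermined system.
import numpy as np

def sample_uniform(h, t1, d, q):
    """Return t_m = t1 + (m-1)d and h_m = h(t_m) for m = 1..q."""
    t = t1 + d * np.arange(q)
    return t, h(t)

def shifted_rows(h_samples, n):
    """Rows (h_v, h_{v+1}, ..., h_{v+n}) for v = 1 .. q-n (0-based internally)."""
    q = len(h_samples)
    return np.array([h_samples[v:v + n + 1] for v in range(q - n)])

# Assumed example with the prescribed response h(t) = 1/(1+t)^2.
t, h_m = sample_uniform(lambda t: 1.0 / (1.0 + t) ** 2, t1=0.0, d=0.25, q=21)
rows = shifted_rows(h_m, n=2)     # here p = q - n = 19 rows
```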

Let

e^{s_k d} = y_k    (k = 1, 2, ..., n)    (4.8)

and

A_k e^{s_k t_v} = z_kv    (k = 1, 2, ..., n;  1 <= v <= q).    (4.9)

Then

h_v = \sum_{k=1}^{n} z_kv
h_{v+1} = \sum_{k=1}^{n} z_kv y_k
h_{v+2} = \sum_{k=1}^{n} z_kv y_k^2
. . . . . . . . . . . . . . . .
h_{v+i} = \sum_{k=1}^{n} z_kv y_k^i    (4.10)

Since only q points of data are given, then v+i <= q in Eq. (4.10). Assume that n+1 equations of the same type as Eq. (4.10) for the n unknowns z_kv (k = 1, 2, ..., n; 1 <= v <= q) were used. The n+1 equations for the n unknowns z_kv can be satisfied simultaneously if and only if the determinant of coefficients of z_kv is zero. This will yield a relationship between y_k (k = 1, 2, ..., n) and h_{v+i} (i = 0, 1, ..., n). This relationship can alternatively be obtained in the following manner:

Let

r_1 = -\sum_{k=1}^{n} y_k
r_2 = \sum_{n \ge k_2 > k_1 \ge 1} y_{k_1} y_{k_2}
r_3 = -\sum_{n \ge k_3 > k_2 > k_1 \ge 1} y_{k_1} y_{k_2} y_{k_3}
. . . . . . . . . . . . . . . .
r_n = (-1)^n \prod_{k=1}^{n} y_k    (4.11)

Hence r_m is the coefficient of y^{n-m} in an algebraic equation of nth degree whose roots are y_k (k = 1, 2, ..., n). The coefficient of the leading term (i.e., the coefficient of y^n) is r_0 (i.e., unity, since r_0 = 1). If one multiplies the first n+1 equations in Eqs. (4.10) and (4.11) by one another so that h_{v+i} is multiplied by r_{n-i} (i = 0, 1, ..., n), then

r_n h_v = (-1)^n \prod_{k=1}^{n} y_k \sum_{k=1}^{n} z_kv
r_{n-1} h_{v+1} = (-1)^{n-1} \sum_{n \ge k_{n-1} > \cdots > k_1 \ge 1} y_{k_1} \cdots y_{k_{n-1}} \sum_{k=1}^{n} z_kv y_k
. . . . . . . . . . . . . . . .
r_1 h_{v+n-1} = -\sum_{k=1}^{n} y_k \sum_{k=1}^{n} z_kv y_k^{n-1}
r_0 h_{v+n} = \sum_{k=1}^{n} z_kv y_k^n    (4.12)

From Eqs. (4.12) one can obtain the following theorem.

Theorem 1. If Eqs. (4.12) are valid for all sets z_kv (k = 1, 2, ..., n; 1 <= v <= q-n), then

\sum_{k=0}^{n} r_{n-k} h_{v+k} = 0    (1 <= v <= q-n).    (4.13)

Proof: Assume that Eq. (4.13) is valid for sets z_kv (k = 1, 2, ..., m; 1 <= v <= q-n) for a particular number of poles, say m. Then,

\sum_{k=0}^{m} r_{m-k} h_{v+k} = 0    (1 <= v <= q-n),

where r_0 = 1, and r_k (k = 1, 2, ..., m) are functions of y_k (k = 1, 2, ..., m) as defined in Eqs. (4.11). It will be shown that if Eq. (4.13) is valid for n = m, then also

\sum_{k=0}^{m+1} r'_{m+1-k} h_{v+k} = 0    (1 <= v <= q-n),

where r'_0 = 1 and r'_k (k = 1, 2, ..., m+1) are functions of y_k (k = 1, 2, ..., m+1) as defined in Eqs. (4.11). It will be shown, moreover, that Eq. (4.13) is satisfied for n = 1. This will then complete the proof of Theorem 1.

The coefficients r_k (k = 1, 2, ..., m) are functions of y_k (k = 1, 2, ..., m) defined in Eqs. (4.11), with r_0 = 1. The m values y_k (i.e., y_1, y_2, ..., y_m) are the zeros of P_m(y), an algebraic equation of mth degree.

29 If p(y) = y + rl 1 +... +rr (4.14) let P+ (y) be defined as Pm (y) P (y)(Y-Yml r+lm m+l Then yl, Y2,..., Ym+ are the zeros of P +(y). Let r denote the functions in Eqs. (4.11) of m roots (y1, Y2,''., Ym) and r' denote the functions in Eqs. (4.11) of m+l roots (Y1' Y2' *, Ym+l) Then, r' = 1 = r O O m -r =1 k Ym+l = r-Ym+l k=l m m 2 = kk Ym+l ~Yk =' mS" L11 k1=1 1 2 k=1 n>k >k2 1 m m r - y y y -y Z y y r -Y,+lr2 k=l Ykkk 2 + k=l k1k2 = n>k3>k2>k k2>kl (4.15)'-. r- r im = Ym+l rm-1 rm+l= -Ym+lrm t This last relationship is not apparent but follows if one considers m m both cases, m even and m odd. For m even: rm = fl Yik hence r 1 = -Y r. For m oddr = I- Yk and r -m++lr m+1 m+1n imn m n k1 k m+l. m m~~~~~~~~~~~~~~~~~~~~~~'l -'m. r

30 Then m+l kO (m+1+) -k hv+k r' h + r' +h + + r h +h (4.16) m+l v m v+ 1 v+m v+m ++l Substitution of Eqs. (4.15) into Eq.(4.16) yields, m+l Z rt h = k= (m+l)-k v+k (-Ym+lrm) h + (rm-ymlrml ) hv +... + (r2 - Ym+lrl) hv+m-1 + (rl-Ym+l) hv+m + hv+m+ (-Ym+l)(rmh +r +. rh+l + * l hv+-l + hm) + rmhv+... + r h + rh +rh h (4.17) m v+1 2 v+m-1 1 v+mn o v+m+l' or m+l kZ (m+l)-k hv+k = I1 + 2 = m m -Ym+l rm-khv+k+ Z -k h(v+l)+k (4.18) k=O k=O Now I1, the first expression on the right, side of Eq. (4.18), is zero by the assumption. 12, the second expression, is equal to: m m 12 =rm kr (zkvYk) +rm-1 k= ( kvyk m m + r Z (Zkyk)y k + (ZkvYk k (4.19) k=l k=l Let Zkvyk = Zk m and h' = Z z' v kv ~_ k v'

31 Then m = r h (420) 2 k=0 rm-k v+k (.20 is zero, since by the assumption Eq. (4.14) is valid for all sets zk (k = 1, 2,..., n; 1 < v < q-n), hence also for z' = ZVyk (k = 1, 2,.., n; 1< v < q-n). Consequently, it was shown that if: m rmkh+k =, (1 < v < q-n) (4.21) k=O then also m+l r +l)k h, = 0 (1 < v < q-n) (4.22) (m+l)-k v+k - _ k=O where r = rk — Ym+lrk (k = 0, 1..., m+l)(4.23) (r = r+1 0). Therefore, it was proved that if Eq. (4.13), is valid for the m roots of P (y), then it is also valid for the m+l roots of Pm+l(y). It will be shown now that Eq. (4.13) is valid for n = 1, which will complete the proof of Theorem 1. Expanding Eq. (4.10) for n = 1 yields rhv = -YZlv hv+1 = yllv (4.24) Hence 1 Z rlkhv+k =. (1 < v < q-n) (4.25) k- 0 This completes the proof of Theorem 1. Equation (4.13) gives a relation between (hv, h +l,... hv+n) and (r, rl,..., r ). Through an increase in the index v, from

32 v to v+1, Eq. (4.13) will relate (hv+l, h v2,., +n+1 with (r0, rl,..., rn). If one has q equally-spaced values of h(t) (e.g., hl, h2,..., hq), then Eq. (4.13) will give rise to q-n equations. If q-n = p, one can write in general n rn khv+k = 0 (v = 1, 2, *... p) (4.26) k=O This allows one to differentiate among three cases. Case 1: n > p - undetermined system. Case 2: n = p - determined system. Case 3: n < p - overdetermined system. Case 1 implies that there are more terms chosen for the approximating function, Eq. (4.3) than justified by the available points of data. If n-p = m, then there are m conditions that can be fulfilled arbitrarily. Since, in general, an economy of elements is desired, and these are directly related to the number of terms in Eq. (4.3), Case 1 is of little practical interest, and will be commonly reduced to Case 2 [which can be simply accomplished by requiring the Eq. (4.3) to have p terms only]. Case 2 will theoretically occur whenever essentially no approximation error can be tolerated at the given points. Case 2 provides the theoretical optimum, or best approximation, for a given number of points. Since the elements used for synthesis are not ideal, it is of little value to talk about zero synthesis error.t One may wonder whether with fewer elements [i.e., fewer terms in Eq. (4.3)] one may not have at times a smaller synthesis error than one would t By synthesis error the overall error is meant, i.e., the error due to approximation in addition to the error caused through use of physical elements.

33 obtain with more elements (even though the approximation error is reduced). Also, in general, for the sake of economy, one would like to have the maximum error that can be tolerated, since, by and large, a design allowing a greater error is less costly than one allowing a smaller error. All this reduces to the fact that one would like to design with as few elements as possible, and therefore, in the typical case, Case 3 is of most importance and interest. Case 3 gives rise to an overdetermined system of equations. The characteristics of such a system are such that, in general, no equation will be solved exactly [i.e., the right side of Eqs. (4.26) will be different from zero]. Since this case is of most practical interest, a theory treating it will be developed in detail in the succeeding section of this work. 4.2.2 Solution of Overdetermined Systems of Equations by Means of Discrete Tschebyscheff Approximation. In this section the overdetermined system of equations obtained in Eqs. (4.26), will be solved by means of discrete Tschebyscheff approximations. The theory of discrete Tschebyscheff approximations to overdetermined systems will be reviewed. It will be shown that this theory provides a solution to Eqs. (4.26). The right side of Eqs. (4.26) will, in general, be different from zero, and comprises an error. This error will be minimized in Tschebyscheff sense. The mathematical theory of the applications of discrete Tschebyscheff approximations to overdetermined systems has been treated by Vallee-Poussin [28]. More recent works in this area are due to Collatz [5] and Stiefel [23]. The following review of the theory of discrete Tschebyscheff approximations to overdetermined systems follows

34 closely along lines of Stiefel [23]. The development is in a form suitable for solution of the problem stated. 4.2.2.1 The Theory of Tschebyscheff Approximations to Overdetermined Systems of Equations. The theory of discrete Tschebyscheff approximations offers a solution to an overdetermined system of equations. If there are p equations such as Eqs. (4.26), in n unknowns rl, r2,.., r and if p > n, then, in general, one can not satisfy all p equations simultaneously. Hence, any choice of values for rl, r2,..., rn' will cause an error different from zero on the right side of Eqs. (4.26). The discrete Tschebyscheff approximation theory provides a means for finding those values for the unknowns rl, r2,..., rn, which will minimize the magnitude of the maximum error on the right side of Eqs. (4.26). The process of finding the desired values for rl, r2,... rn consists of a number of cycles. Each cycle is composed of the following four steps: (1) A set of n+l equations for the n unknowns rl, r2,..., r is selected out of the p given equations. This set is called a reference. (2) The Tschebyscheff error for the selected reference is computed. (3) A set of values r, r2,..., rn is obtained corresponding to the reference. (4) Errors for the p equations are obtained. These four steps complete the cycle. If the error for any of the p equations does not exceed the reference error, the set of rl, r2,..., r computed in (3) is the

desired one. If there is an equation which has an error larger than the reference error, then this equation must replace one of the equations of the reference. A replacement process is discussed which provides definite rules determining the equation to be replaced. The n remaining equations of the old reference and the new equation form a new reference, thus providing step (1) for a new cycle. It is shown that the process is convergent and that, regardless of the initial choice of reference, the process terminates always with the same values for r_1, r_2, ..., r_n. The details of the theory will follow below.

Case 3 gives rise to an overdetermined system of equations for the n unknowns r_1, r_2, ..., r_n (r_0 = 1). Hence,

h_v r_n + h_{v+1} r_{n-1} + \cdots + h_{v+n-1} r_1 + h_{v+n} = 0    (v = 1, 2, ..., p),    (4.27)

where p > n. One can interpret the system of equations in Eqs. (4.27) geometrically by considering every point P(r_1, r_2, ..., r_n) to be a point in the n-dimensional Euclidean space R_n. Since p > n, there is, in general, no point in R_n with coordinates which will satisfy Eqs. (4.27) for all v (v = 1, 2, ..., p). If the coordinates of an arbitrary point P are substituted into Eqs. (4.27), then, in general, there will be an error on the right side of some equations of Eqs. (4.27), rather than zero. If this error is denoted \epsilon_v, then

\epsilon_v = h_v r_n + h_{v+1} r_{n-1} + \cdots + h_{v+n-1} r_1 + h_{v+n}    (v = 1, 2, ..., p).    (4.28)
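The following is a compact numerical sketch of one cycle of the process described above, under simplifying assumptions (the helper names are illustrative, the replacement rule developed below in Table II is not reproduced, and the principal branch of the logarithm is taken when the poles are finally recovered through Eq. (4.8) and the polynomial of Eqs. (4.11)):

```python
# Illustrative sketch of one cycle: rows of `eqs` are (h_v, ..., h_{v+n}),
# so each equation of Eqs. (4.27) reads row[:n].(r_n, ..., r_1) + row[n] = 0.
import numpy as np

def one_cycle(eqs, reference):
    n = eqs.shape[1] - 1
    ref = eqs[reference]                        # step (1): n+1 chosen equations
    normals = ref[:, :n]                        # x_alpha
    lam = np.linalg.svd(normals.T)[2][-1]       # Eq. (4.29): sum lam_a x_a = 0
    # Steps (2)-(3): center of the reference; its errors are eps*sgn(lam_a),
    # so solve [x_alpha, -sgn(lam_a)] [rho; eps] = -h_{alpha+n}.
    A = np.hstack([normals, -np.sign(lam)[:, None]])
    sol = np.linalg.solve(A, -ref[:, n])
    rho, eps = sol[:n], sol[n]                  # rho = (r_n, ..., r_1); |eps| = reference error
    # Step (4): errors of all p equations at this point, Eq. (4.28).
    errors = eqs[:, :n] @ rho + eqs[:, n]
    return rho, eps, errors
    # In use one iterates: if max |errors| exceeds |eps|, exchange one reference
    # equation for the worst offender (Table II below) and repeat.

def poles_from_r(rho, d):
    """Roots y_k of y^n + r_1 y^(n-1) + ... + r_n, then s_k = ln(y_k)/d, Eq. (4.8)."""
    r = rho[::-1]                               # (r_1, ..., r_n)
    y = np.roots(np.concatenate(([1.0], r)))
    return np.log(y.astype(complex)) / d
```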

The Tschebyscheff approximation problem consists then of finding a point P such that

Max |\epsilon_v| is a minimum    (v = 1, 2, ..., p).

The point P of the best approximation is named the T-point (the Tschebyscheff point).

Let us introduce the concept of a reference. The term reference shall denote the choice of (n+1) hyper-planes from the p given hyper-planes E_1, E_2, ..., E_p of the Euclidean space R_n [i.e., a choice of (n+1) equations from the p equations in Eqs. (4.27)]. The Greek letter index \alpha will denote a choice of n+1 numbers from the series 1, 2, ..., p. The reference will be denoted as [E_\alpha]. Let x_v = (h_v, h_{v+1}, ..., h_{v+n-1}) (v = 1, 2, ..., p) be the normal vectors of the hyper-planes. Since the space is an n-dimensional one, there are only n independent vectors. Hence, n+1 vectors are linearly dependent, and there exists a set of numbers \lambda_\alpha such that:

\sum_{(n+1)} \lambda_\alpha x_\alpha = 0.    (4.29)

The sign \sum_{(n+1)} denotes that only the selected (n+1) terms are summed. Equation (4.29) gives the dependence condition between the normal vectors. In addition, \lambda_\alpha \ne 0 for all values of \alpha, since otherwise the Euclidean space could not be n-dimensional. A point P is denoted a reference point if for its residues \epsilon_\alpha either

sgn \epsilon_\alpha = sgn \lambda_\alpha for all \alpha,  or  sgn \epsilon_\alpha = -sgn \lambda_\alpha for all \alpha.    (4.30)

This condition can be interpreted geometrically. In three-dimensional Euclidean space R_3 it characterizes the points inside the volume formed by the reference planes. In the general case, Eq. (4.30) limits the magnitude of the error |\epsilon|. Thus a reference point in R_n can be considered to be located "inside" the volume formed by the hyper-planes.

An example will be worked out to illustrate the concepts introduced above. Let the space be two-dimensional (n = 2), hence the hyper-planes are straight lines. Let the following five equations be given (p = 5):

E_1:  0.5 r_2 + 3 r_1 + 5 = 0
E_2:  3 r_2 + 1.5 r_1 + 0.5 = 0
E_3:  1.5 r_2 + 0.5 r_1 - 0.5 = 0
E_4:  0.5 r_2 - 0.5 r_1 + 2 = 0
E_5:  -0.5 r_2 + 2 r_1 + 4 = 0.

If the reference is composed of E_3, E_4, and E_5, then the vectors normal to the reference are:

x_3 = (1.5, 0.5)
x_4 = (0.5, -0.5)
x_5 = (-0.5, 2)

By Eq. (4.29), there exists a set of numbers \lambda_\alpha such that

\sum_{(3)} \lambda_\alpha x_\alpha = 0.

Hence,

1.5 \lambda_3 + 0.5 \lambda_4 - 0.5 \lambda_5 = 0

and

0.5 \lambda_3 - 0.5 \lambda_4 + 2 \lambda_5 = 0.

Let \lambda_3 = 1; then \lambda_4 = -13/3 and \lambda_5 = -4/3. These concepts are illustrated in Fig. 4.2. The vectors x_v (v = 1, 2, ..., 5) are orthogonal to their corresponding planes (i.e., x_v is orthogonal to E_v). \sum_{(3)} \lambda_\alpha x_\alpha is a sum of three vectors. Since \sum_{(3)} \lambda_\alpha x_\alpha = 0, these vectors form a triangle.

From Fig. 4.2, one notices that the point P(-2,-2) is located inside the triangle formed by the reference planes E_3, E_4, E_5. Hence, P is a reference point, and its errors must satisfy one of the two conditions of Eq. (4.30). Substitution of r_2 = -2 and r_1 = -2 into E_3, E_4, and E_5 yields

\epsilon_3 = -4.5,  \epsilon_4 = 2,  \epsilon_5 = 1.

Hence, the condition sgn \epsilon_\alpha = -sgn \lambda_\alpha is satisfied for all \alpha (\alpha = 3, 4, 5).

By Eq. (4.28),

\epsilon_\alpha = h_\alpha r_n + h_{\alpha+1} r_{n-1} + \cdots + h_{\alpha+n-1} r_1 + h_{\alpha+n}.

Therefore,

\lambda_\alpha \epsilon_\alpha = \lambda_\alpha (h_\alpha r_n + h_{\alpha+1} r_{n-1} + \cdots + h_{\alpha+n-1} r_1) + \lambda_\alpha h_{\alpha+n}.

Since x_\alpha = (h_\alpha, h_{\alpha+1}, ..., h_{\alpha+n-1}), the scalar product of x_\alpha with the coordinates of P is

x_\alpha \cdot P = h_\alpha r_n + h_{\alpha+1} r_{n-1} + \cdots + h_{\alpha+n-1} r_1,

hence

\lambda_\alpha \epsilon_\alpha = \lambda_\alpha (x_\alpha \cdot P) + \lambda_\alpha h_{\alpha+n}.

Due to Eq. (4.29),

\sum_{(n+1)} \lambda_\alpha \epsilon_\alpha = \sum_{(n+1)} \lambda_\alpha h_{\alpha+n}.    (4.31)

FIG. 4.2  GEOMETRICAL INTERPRETATION OF HYPER-PLANES AND NORMAL VECTORS
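The numbers in the two-dimensional example above can be verified directly; the following short check is a sketch of my own (not from the report), with the rows entered from E_3, E_4, E_5:

```python
# Numerical check of the example: reference E3, E4, E5 and point P(-2, -2).
import numpy as np

# Rows are (coefficient of r2, coefficient of r1, constant) for E3, E4, E5.
E = np.array([[1.5, 0.5, -0.5],
              [0.5, -0.5, 2.0],
              [-0.5, 2.0, 4.0]])

# lambda from the dependence condition of Eq. (4.29): a null vector of the
# 2x3 matrix whose columns are the normal vectors x_alpha.
normals = E[:, :2]                       # x3, x4, x5
lam = np.linalg.svd(normals.T)[2][-1]    # last right-singular vector
lam = lam / lam[0]                       # scale so that lambda_3 = 1
print(lam)                               # -> [1, -13/3, -4/3]

# Errors of P(-2, -2) relative to E3, E4, E5, Eq. (4.28).
P = np.array([-2.0, -2.0])               # coordinates (r2, r1)
eps = normals @ P + E[:, 2]
print(eps)                               # -> [-4.5, 2.0, 1.0]
print(np.all(np.sign(eps) == -np.sign(lam)))   # sign condition of Eq. (4.30)
```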

40 Because of the condition Eq. (4.30), either all A e are positive or all are negative. Therefore, Z x IHlEL = + L Xo = + Z h (4.32) (n+l) a a (n+l) a (n+l) ( a+n One shall denote by the term center of a reference that reference point all of which errors ca have the same magnitude Icl~ Hence, a = c(sgnk). (4.33) One can interpret |el to be a measure of distance necessary to bring the hyper-planes Ea to a mutual intersection. One can compute the error at the center from Eq. (4.31), Z x = Z h, (n+l) a (n+l) a a+n or e Z Xo(sgn Ha) = Z h (n+l) a (n+l) a a+n But Xa(sgn A) = | A Hence, -(n ) A +n ( 4.34) (n+1) L -- 1x.1 (n+l) a By Eq. (4.32) X h = + 1X L I (n+l) a "+n (n+ l) l a (n+l)

41 From Eq. (4.35) it follows that IE Z l 1 = Z I Hal (n+l) a (n+1)i' a i Let Min.Iel = |eki Then (n+l) (n+l) Hence le1 2 Min I| | Similarly Eil| Max | al. (4.36) This result is valid at every reference point and leads to the following theorem. Theorem 2 The center of a reference has the property that its error satisfies Eq. (4.36) and is the T-point of the (n+l) reference hyper-planes. One can prove the uniqueness of the T-point for a system in a general position (no hyper-planes are parallel to one another) by assuming that there exists another T-point, T', which has errors e'. By Eq. (4.36), lel= Max |le|. The two T-points must satisfy the requirement that their maximum errors are equal. By Eq. (4.36) lei = Max Ie' 1 Then'1~ ~ cil

42 Let e denote the error of the T point relative to e a S Therefore, whenever, e > 0 then e - E' > 0; a - a a - and whenever e < O0 then e - e' < 0. a - a a - If the first condition of Eq. (4.30) is satisfied, i.e., if sgn a = sgn X, then, whenever e >0, X > 0 and X (e - 6') > 0. If < 0, a- a-a a C - - then X < 0, and X (e - e') > 0. Hence, in both cases all a (E -') a7 - a a a a a a will be positive. If, sgn e = - sgn Xa, whenever e > 0, then X < 0 and (E - E) < 0. If e < O, then X > 0, and X (e - El) < 0. Thus, for the second sign condition a - o a - (i.e., sgn' = - sgn X:), all (E - El ) will be negative. (i.e., sgn ea - sgn Xa);al a a a The above shows that for any one of these four cases all the expressions (e a- e') will have same sign. From Eq. (4.31), A e - ~= A h = Z A.c' (n+l) aa (n+l) a+n (n+l) a a Therefore, Z e - Z xe' = 0 (n+l) a (n+l) a a and Z~ X(eX') = o. (n+l) a a Since X 9 0, and all terms in the summation have the same sign,

43 it follows that e = c'.This shows that T and T' are the same a point. One sees, therefore, that the (n+l) hyper-planes in the Rn space which are in general position (no two planes are parallel to one another) have only one T-point, which is in the center. In the general case of (n+l) planes in Rn one could expect a convex polyhedron consisting of T-points. It has been shown that, given a reference [E ] consisting of (n+l) equations (i.e., hyper-planes), there exists a unique T-point (if the hyper-planes are in a general position) with its corresponding error e. If the coordinates rl, r2,..., rn of the T point are substituted into the remaining equations (other than the reference), one of the equations, Ei, may have an error whose magnitude exceeds e|l. Now, a new reference can be obtained consisting of E. and n of the hyper-planes in the old reference. Hence, Ei "replaces" one of the hyper-planes of the reference. These ideas are expressed in the following theorem. Theorem 3 Given a reference [E ] and a corresponding reference point P, let Ei be an additional hyper-plane which is not in the reference. Then one can replace one of the (n+l) hyper-planes of [E ] by Ei and obtain a new reference for which P is also a reference point. Proof: Assume that the given reference is [E ] (a = 1, 2,..., n+l), and let En+ be the additional hyper-plane. Equation (4.29) yields

\[ \lambda_1 x_1 + \lambda_2 x_2 + \cdots + \lambda_{n+1} x_{n+1} = 0. \]

Since P is a reference point in the reference [E_α], the sign rules hold for its errors ε_α. Consequently, either sgn ε_α = sgn λ_α (α = 1, 2, ..., n+1), or sgn ε_α = -sgn λ_α. Consider in the following the first of the two cases.

Since the system is n-dimensional, the normal vector x_{n+2} is linearly dependent upon the normal vectors in [E_α]. Consequently, a relationship exists between the (n+1) normal vectors in [E_α] and x_{n+2}; hence numbers μ_α exist such that the relationship

\[ \mu_1 x_1 + \mu_2 x_2 + \cdots + \mu_{n+1} x_{n+1} + x_{n+2} = 0 \tag{4.37} \]

can be assumed. But

\[ \lambda_1 x_1 + \lambda_2 x_2 + \cdots + \lambda_{n+1} x_{n+1} = 0. \]

Therefore,

\[ \begin{aligned} \lambda_1 x_{n+2} + (\lambda_1\mu_2 - \lambda_2\mu_1)x_2 + \cdots + (\lambda_1\mu_{n+1} - \lambda_{n+1}\mu_1)x_{n+1} &= 0\\ \lambda_2 x_{n+2} + (\lambda_2\mu_1 - \lambda_1\mu_2)x_1 + \cdots + (\lambda_2\mu_{n+1} - \lambda_{n+1}\mu_2)x_{n+1} &= 0\\ \vdots\qquad\qquad\qquad & \\ \lambda_{n+1} x_{n+2} + (\lambda_{n+1}\mu_1 - \lambda_1\mu_{n+1})x_1 + \cdots + (\lambda_{n+1}\mu_n - \lambda_n\mu_{n+1})x_n &= 0. \end{aligned} \tag{4.38} \]

Case 1: The error ε_{n+2} of P relative to E_{n+2} is positive. Then one shall replace that hyper-plane which is designated by the number given by

\[ \operatorname{Min}_\alpha \frac{\mu_\alpha}{\lambda_\alpha} \qquad (\alpha = 1, 2, \ldots, n+1). \]

One can note that no two quotients are equal. If this were not the case, i.e., if, say, μ_l/λ_l = μ_m/λ_m, then λ_l μ_m - λ_m μ_l = 0, and one coefficient in the m-th equation of Eq. (4.38), the coefficient of x_l, would vanish. But this would contradict the assumption that the system is n-dimensional (since, if

\[ \lambda_m x_{n+2} + a_1 x_1 + \cdots + a_{m-1}x_{m-1} + a_{m+1}x_{m+1} + \cdots + a_{l-1}x_{l-1} + a_{l+1}x_{l+1} + \cdots + a_{n+1}x_{n+1} = 0, \]

then these n vectors would not be linearly independent).

Assume that λ_l yields the minimum. Then

\[ \frac{\mu_l}{\lambda_l} \le \frac{\mu_k}{\lambda_k} \qquad (k = 1, 2, \ldots, n+1). \]

Assume λ_l > 0, λ_k > 0. Then μ_l λ_k ≤ μ_k λ_l, so that λ_l μ_k - λ_k μ_l ≥ 0 and λ_l λ_k > 0. Hence

sgn(λ_l μ_k - λ_k μ_l) = sgn(λ_l λ_k).

If λ_l < 0, λ_k > 0, then μ_l λ_k ≥ μ_k λ_l, so that λ_l μ_k - λ_k μ_l ≤ 0 and λ_l λ_k < 0.

Hence, again,

sgn(λ_l μ_k - λ_k μ_l) = sgn(λ_l λ_k).

It is obvious that the same is true for λ_l > 0, λ_k < 0 and for λ_l < 0, λ_k < 0. Therefore

\[ \operatorname{sgn}(\lambda_l \mu_k - \lambda_k \mu_l) = \operatorname{sgn}(\lambda_l \lambda_k) \]

for every k. Consider the l-th equation in the set of Eqs. (4.38). This equation has the form

\[ \lambda_l x_{n+2} + (\lambda_l\mu_1 - \lambda_1\mu_l)x_1 + \cdots + (\lambda_l\mu_{l-1} - \lambda_{l-1}\mu_l)x_{l-1} + (\lambda_l\mu_{l+1} - \lambda_{l+1}\mu_l)x_{l+1} + \cdots + (\lambda_l\mu_{n+1} - \lambda_{n+1}\mu_l)x_{n+1} = 0. \]

Division of this equation by sgn λ_l yields the following signs for the coefficients:

+, sgn λ_1, ..., sgn λ_{l-1}, sgn λ_{l+1}, ..., sgn λ_{n+1}.

But, by assumption, sgn λ_α = sgn ε_α, and ε_{n+2}, the error relative to E_{n+2}, is positive. Hence the errors ε_{n+2}, ε_1, ..., ε_{l-1}, ε_{l+1}, ..., ε_{n+1} of P relative to the hyper-planes of the new reference have the same signs as the corresponding coefficients of the divided equation: the coefficient of x_{n+2} is positive and ε_{n+2} > 0, while the remaining coefficients have the signs of the corresponding ε_α. The divided equation therefore plays the role of Eq. (4.29) for the new reference, the sign rule is satisfied, P is a reference point, and Theorem 3 is proved for this case.

Case 2: The error ε_{n+2} relative to E_{n+2} is negative. A discussion similar to the above shows that the plane with the number given by Max_α (μ_α/λ_α) must be replaced.

In the case that ε_{n+2} = 0, P lies on E_{n+2}, and one can consider it to be a special case of either case. A consideration of the second sign rule (sgn ε_α = -sgn λ_α) causes an exchange of max with min and vice versa. The above gives rise to the following table for the number of the plane to be replaced.

TABLE II
RULES FOR THE HYPER-PLANE TO BE REPLACED

    sgn ε_α        ε_i        Hyper-plane to be replaced
    sgn λ_α        > 0        Min_α  μ_α/λ_α
    sgn λ_α        < 0        Max_α  μ_α/λ_α
   -sgn λ_α        > 0        Max_α  μ_α/λ_α
   -sgn λ_α        < 0        Min_α  μ_α/λ_α

(ε_i is the error of P relative to the additional hyper-plane E_i.)

An example will now be worked out to illustrate the replacement process. Let the space be 3-dimensional, hence n = 3. Let the following four equations be given:

E_1:  2x + 3y + z = 1      (I)
E_2:  x - 2y + 3z = -1     (II)
E_3:  x + y - z = 5        (III)
E_4:  x + 3y - 2z = 12     (IV)

Let the reference point be (2, 1, -3) and let E_5 be given by

-3x - 10y + 8z = -20.

Then, for [E_α], the normal vectors are

x_1 = (2, 3, 1)
x_2 = (1, -2, 3)
x_3 = (1, 1, -1)
x_4 = (1, 3, -2).

From λ_1 x_1 + λ_2 x_2 + λ_3 x_3 + λ_4 x_4 = 0, one obtains

2λ_1 + λ_2 + λ_3 + λ_4 = 0
3λ_1 - 2λ_2 + λ_3 + 3λ_4 = 0
λ_1 + 3λ_2 - λ_3 - 2λ_4 = 0.

Let λ_1 = 5; then λ_2 = -7, λ_3 = 10, λ_4 = -13. Now

ε_1 = 2(2) + 3(1) + 1(-3) - 1 = 3
ε_2 = 1(2) - 2(1) + 3(-3) + 1 = -8
ε_3 = 1(2) + 1(1) - 1(-3) - 5 = 1
ε_4 = 1(2) + 3(1) - 2(-3) - 12 = -1.

Therefore sgn λ_α = sgn ε_α.

Since x_5 = (-3, -10, 8), by Eq. (4.37),

μ_1 x_1 + μ_2 x_2 + μ_3 x_3 + μ_4 x_4 + x_5 = 0;

therefore,

2μ_1 + μ_2 + μ_3 + μ_4 - 3 = 0
3μ_1 - 2μ_2 + μ_3 + 3μ_4 - 10 = 0
μ_1 + 3μ_2 - μ_3 - 2μ_4 + 8 = 0.

The last system of equations is underdetermined. Since any exact solution will be satisfactory, one unknown can be assumed arbitrarily. Let μ_4 = 0; then μ_1 = 1, μ_2 = -2, μ_3 = 3. The error ε_5 is given by

ε_5 = (-3)(2) - 10(1) + 8(-3) + 20 = -20.

Since ε_5 < 0 and sgn λ_α = sgn ε_α, one must replace that equation which is designated by the number given by Max_α μ_α/λ_α. The quotients μ_α/λ_α are

μ_1/λ_1 = 1/5,    μ_2/λ_2 = -2/-7 = 2/7,    μ_3/λ_3 = 3/10,    μ_4/λ_4 = 0/-13 = 0.

Since Max_α μ_α/λ_α = μ_3/λ_3 = 3/10, E_3 is to be replaced.

The above example illustrates the process of finding the equation to be replaced. With the aid of this process the problem of an overdetermined system of equations can be solved. If p equations in n unknowns are given, one chooses an arbitrary reference [E_α] and computes its center T_1 and its error ε̂_1 (note that the error

of T_1 relative to any equation in [E_α] is ε̂_1). Then all the errors ε_i of T_1 relative to the remaining equations are calculated. If all these errors satisfy the condition |ε_i| ≤ |ε̂_1|, one has the solution of the problem and the process terminates. If there exists one equation for which |ε_i| > |ε̂_1|, the replacement process illustrated above is applied: one equation of the reference is replaced by the new one, thereby forming a new reference. The new center T_2 and the new error ε̂_2 are then computed. It is to be noted that for the second reference the absolute value of the error is larger than previously (i.e., |ε̂_2| > |ε̂_1|). The process is now repeated. It is to be observed that the error increases monotonically (i.e., |ε̂_g| > |ε̂_{g-1}|); this insures that the process terminates after a finite number of steps, since, under this condition of monotonic growth of the error, the same reference cannot be used twice. When the process terminates, one has the final reference center T and the final error ε. Then, obviously,

\[ |\varepsilon| \ge |\varepsilon_v| \qquad (v = 1, 2, \ldots, p), \tag{4.39} \]

where ε_v is taken relative to its corresponding hyper-plane. Any point P has, by Eq. (4.36), errors ε'_α relative to the last reference such that |ε| ≤ Max |ε'_α|. But

Max_α |ε'_α| ≤ Max_v |ε'_v|    (v = 1, 2, ..., p).

Therefore,

\[ |\varepsilon| \le \operatorname{Max}_v |\varepsilon'_v| \qquad (v = 1, 2, \ldots, p). \tag{4.40} \]

This can be stated as a theorem.

Theorem 4

The last reference center is the T-point of the p equations.

It is to be noted that the last center has the property that the magnitude of any of its p errors is at most equal to the magnitude of ε, the error of the last reference. The uniqueness of this T-point can be shown by assuming the existence of another T-point, T', with errors ε'_k. Then

Max |ε'_k| = |ε|    (k = 1, 2, ..., p).

But, for any point P, by Eq. (4.36),

Max |ε'_α| ≥ |ε|,

where α denotes the equations of the last reference (with center at T), and

Max |ε'_k| ≥ Max |ε'_α|    (k = 1, 2, ..., p).

Therefore Max |ε'_α| = |ε|, and T' is also a T-point of the last reference. But the uniqueness of the T-point in a reference was proved already; hence T' and T are the same point.

The above shows that the replacement process may be initiated with any reference and always yields the same T-point. It also shows that the last error |ε| is the sought approximation error.

4.2.2.2 Application to Solution of Overdetermined Systems of Equations. The theory of the preceding section can be applied to find those values of the unknowns r_1, r_2, ..., r_n which minimize the magnitude of the maximum error on the right side of Eqs. (4.26). The magnitude of the maximum error corresponding to r_1, r_2, ..., r_n can also be found.
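The bookkeeping of the replacement rule can be checked numerically. The short Python sketch below (the notation and variable names are mine, not the report's) reproduces the 3-dimensional example of the preceding subsection: it recomputes the λ_α, the errors of the point (2, 1, -3), the μ_α with μ_4 = 0, and the quotients μ_α/λ_α, after which Table II selects E_3 for replacement.

```python
# Numerical check of the replacement step for the 3-D example (E1..E5).
import numpy as np

# Normal vectors and right-hand sides of E1..E4 (the reference) and of E5.
normals = np.array([[2.0, 3.0, 1.0],    # E1
                    [1.0, -2.0, 3.0],   # E2
                    [1.0, 1.0, -1.0],   # E3
                    [1.0, 3.0, -2.0]])  # E4
rhs = np.array([1.0, -1.0, 5.0, 12.0])
x5, rhs5 = np.array([-3.0, -10.0, 8.0]), -20.0
P = np.array([2.0, 1.0, -3.0])          # the given reference point

# lambda: nontrivial solution of sum(lam_a * x_a) = 0, scaled so lam_1 = 5.
lam = np.linalg.svd(normals.T)[2][-1]
lam = lam / lam[0] * 5.0                 # -> [5, -7, 10, -13]

# Errors of P relative to E1..E4 and to E5 (error = left side minus right side).
eps = normals @ P - rhs                  # -> [3, -8, 1, -1]; sgn eps = sgn lam
eps5 = x5 @ P - rhs5                     # negative

# mu: solution of sum(mu_a * x_a) + x5 = 0 with mu_4 taken as zero.
mu = np.zeros(4)
mu[:3] = np.linalg.solve(normals[:3].T, -x5)   # -> [1, -2, 3]

# Table II: sgn eps = sgn lam and eps5 < 0, so replace the equation with
# the largest quotient mu_a / lam_a.
ratios = mu / lam                        # [1/5, 2/7, 3/10, 0]
print("replace E%d" % (np.argmax(ratios) + 1))   # prints: replace E3
```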

Apparently, the coordinates of the T-point in the Euclidean space R_n have the desired property. Hence the coordinates of the T-point are the desired values of r_1, r_2, ..., r_n; they can be determined from the last reference. By Eqs. (4.28),

\[ \varepsilon_v = h_v r_n + h_{v+1} r_{n-1} + \cdots + h_{v+n-1} r_1 + h_{v+n} \qquad (v = 1, 2, \ldots, p). \]

For the last reference the error ε can be determined from Eq. (4.34) as

\[ \varepsilon = \frac{\sum_{(n+1)} \lambda_\alpha h_{\alpha+n}}{\sum_{(n+1)} |\lambda_\alpha|}, \]

where α runs over the last reference. The error of any equation of the reference is given by Eq. (4.33) as ε_α = ε (sgn λ_α). Therefore,

\[ h_\alpha r_n + h_{\alpha+1} r_{n-1} + \cdots + h_{\alpha+n-1} r_1 + h_{\alpha+n} - \varepsilon\,\operatorname{sgn}\lambda_\alpha = 0, \tag{4.41} \]

where α runs over the (n+1) equations of the last reference. n of these (n+1) equations are used to find r_1, r_2, ..., r_n. The magnitude of the maximum error in Eqs. (4.28) is |ε|. Hence one has found the desired values of |ε| and of r_1, r_2, ..., r_n, thus solving the overdetermined system of equations.

4.2.3 Formulas for Optimum Poles. By the method of the previous section, approximate values of the functions of the roots, r_1, r_2, ..., r_n, were found. By the fundamental theorem of algebra, the

roots of

\[ y^n + r_1 y^{n-1} + \cdots + r_n = 0 \tag{4.42} \]

are y_1, y_2, ..., y_n. From these values one can find the exponents of the approximating function, which are the pole positions. By Eq. (4.8),

\[ e^{s_k d} = y_k \qquad (k = 1, 2, \ldots, n). \]

Hence s_k d = ln y_k, or

\[ s_k = \frac{1}{d}\,\ln y_k, \tag{4.43} \]

which yields the desired pole positions.

If y_k is a complex root then, since the algebraic equation has real coefficients, there must also exist the conjugate complex root. Consequently, if

\[ y_k = \gamma_k + j\delta_k = \sqrt{\gamma_k^2 + \delta_k^2}\;e^{\,j\arctan(\delta_k/\gamma_k)} \]

and

\[ y_{k+1} = \gamma_k - j\delta_k = \sqrt{\gamma_k^2 + \delta_k^2}\;e^{-j\arctan(\delta_k/\gamma_k)}, \]

then

\[ s_k = \frac{1}{d}\Bigl(\ln\sqrt{\gamma_k^2+\delta_k^2} + j\arctan\frac{\delta_k}{\gamma_k}\Bigr) = \frac{1}{2d}\ln(\gamma_k^2+\delta_k^2) + \frac{j}{d}\arctan\frac{\delta_k}{\gamma_k}. \tag{4.44} \]

Similarly,

\[ s_{k+1} = \frac{1}{2d}\ln(\gamma_k^2+\delta_k^2) - \frac{j}{d}\arctan\frac{\delta_k}{\gamma_k}. \]
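The conversion of Eqs. (4.43)-(4.44) from the roots y_k to the poles s_k is a one-line computation with the complex logarithm. The sketch below is my own (Python's cmath, not anything from the report) and takes the principal value, exactly as in Table III.

```python
# Minimal helper for Eqs. (4.43)-(4.44): s_k = (1/d) ln y_k, principal value.
import cmath

def poles_from_roots(roots, d):
    """Return s_k = ln(y_k)/d for every nonzero root y_k.

    A root y_k = 0 would correspond to s_k = minus infinity, i.e. the term is
    simply dropped from the approximating function (see Table III).
    """
    return [cmath.log(y) / d for y in roots if y != 0]

# Example: the conjugate pair found later in Section 4.5.2 (d = 0.2 second);
# the result agrees with Eq. (4.208), s = -1.3866 +/- j1.9896, to about three
# significant figures (the original values were computed by hand).
print(poles_from_roots([0.698607 + 0.293651j, 0.698607 - 0.293651j], 0.2))
```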

It is to be noted that the principal value of the arc tangent has been used in Eqs. (4.44). One could also use for the imaginary part of Eq. (4.44) the expression (1/d)[arctan(δ_k/γ_k) + 2πl], where l = 0, ±1, ±2, .... The approximation apparently is unaffected when the conjugate complex poles are moved vertically by a multiple of 2π/d.

If y_k is a negative root, then the pole is again complex, as is shown below:

y_k = -|y_k| = |y_k| e^{jπ},    s_k = (1/d)(ln|y_k| + jπ).

Since also -|y_k| = |y_k| e^{-jπ}, one can equally well use the representation

s_k = (1/d)(ln|y_k| - jπ).

Therefore, in the case of a negative y_k one can use the conjugate complex pair in the approximating function.†

If y_k = 0, then s_k = -∞ and A_k e^{s_k t} = 0. Hence, in this case the term containing this pole can be eliminated from the approximating function. These results are listed in Table III.

† It is to be noted that here, too, the principal value has been used. In general, s_k = (1/d) ln|y_k| + j(2l+1)π/d (l = 0, ±1, ±2, ...).

TABLE III
FORMULAS FOR POLES

    y_k                  s_k                                                Comments
    y_k > 0              (1/d) ln y_k
    y_k < 0              (1/d) ln|y_k| + jπ/d                               introduce an additional pole at (1/d) ln|y_k| - jπ/d
    y_k = 0              -∞                                                 eliminate the term containing this pole from the approximating function
    y_k = γ_k + jδ_k     (1/2d) ln(γ_k² + δ_k²) + (j/d) arctan(δ_k/γ_k)

The requirement of stability demands that Re(s_k) ≤ 0. This requirement restricts the y_k to the unit circle. In general, if the ordinates h_m were taken so as to include the decaying part of h(t), this requirement should reflect itself in decaying exponentials.† If this is not the case, one

† The requirement of stability demands that ∫₀^∞ |h(τ)| dτ be finite. This follows from |e_o(t)| = |∫₀^t h(τ) e_i(t-τ) dτ| [where e_o(t) is the response and e_i(t) is the input]; then |e_o(t)| ≤ ∫₀^t |h(τ)| |e_i(t-τ)| dτ ≤ M ∫₀^∞ |h(τ)| dτ, where M is the maximum of |e_i(t)|. Now, if e_o(t) is to be bounded for every bounded e_i(t), then ∫₀^∞ |h(τ)| dτ < ∞.

can always go back and include more decaying terms (more equations for decaying ordinates). Also, in many instances it has been found that if there was a pole with a positive real part, its residue proved to be very small, thus providing a negligible contribution to the approximating function and enabling one to neglect this pole altogether.

It is of interest to know whether any poles in the right-hand plane are to be expected prior to the solution of the algebraic equation for the y_k (k = 1, 2, ..., n), which is probably the most tedious step in the process. Since all poles will be in the left-hand plane whenever all y_k are in the unit circle, it is sufficient to determine whether all y_k are inside the unit circle. This can be done by transforming the interior of the unit circle in the y-plane into the left-hand w-plane and applying the Routh criterion [3] to the transformed polynomial, to determine whether it is a Hurwitz polynomial. The Routh criterion will also reveal which constants r_k (k = 1, 2, ..., n) contribute to the instability, and by how much one has to adjust them in order to bring the poles into the left-hand plane (although the algebraic expressions are difficult to handle).

A transformation which maps the interior of the unit circle into the left-hand plane is the bilinear fractional transformation [4]. In particular, one can map the unit circle in the y-plane into the imaginary axis in the w-plane, as shown in Fig. 4.3.

[Fig. 4.3. Transformation of the interior of the unit circle (y-plane) into the left-hand plane (w-plane).]

If one requires that when y_1 = j, w_1 = j; y_2 = -j, w_2 = -j; and y_3 = 1, w_3 = 0, then from

\[ \frac{(w - w_1)(w_2 - w_3)}{(w - w_3)(w_2 - w_1)} = \frac{(y - y_1)(y_2 - y_3)}{(y - y_3)(y_2 - y_1)} \]

one obtains

\[ \frac{(w - j)(-j - 0)}{(w - 0)(-j - j)} = \frac{(y - j)(-j - 1)}{(y - 1)(-j - j)}. \]

Hence

\[ w = \frac{y - 1}{y + 1} \tag{4.45} \]

is the desired transformation. That this transformation maps the interior of the unit circle into the left-hand plane is seen from

the fact that the point y = 0 maps into w = -1, i.e., into the left-hand plane. Solving for y, one obtains

\[ y = \frac{1 + w}{1 - w}. \tag{4.46} \]

If Eq. (4.46) is substituted into Eq. (4.42), the result is

\[ (1 + w)^n + r_1 (1 + w)^{n-1}(1 - w) + \cdots + r_n (1 - w)^n = 0. \tag{4.47} \]

If one denotes the coefficient of r_k by c_k, then

\[ c_k = (1 + w)^{n-k}(1 - w)^k = \Bigl[\sum_{j=0}^{n-k}\binom{n-k}{j}w^{\,n-k-j}\Bigr]\Bigl[\sum_{i=0}^{k}\binom{k}{i}w^{\,k-i}(-1)^{i}\Bigr](-1)^k. \]

Therefore the coefficient of w^{n-j} in c_k is

\[ (-1)^k \sum_{m=0}^{j}\binom{k}{m}\binom{n-k}{j-m}(-1)^m, \]

and, consequently, the coefficient of w^{n-j} in Eq. (4.47) is

\[ \sum_{k=0}^{n}(-1)^k r_k\Bigl[\sum_{m=0}^{j}\binom{k}{m}\binom{n-k}{j-m}(-1)^m\Bigr], \]

where r_0 = 1. Therefore Eq. (4.47) can be written as

\[ \sum_{j=0}^{n} w^{\,n-j}\Bigl\{\sum_{k=0}^{n}(-1)^k r_k\Bigl[\sum_{m=0}^{j}\binom{k}{m}\binom{n-k}{j-m}(-1)^m\Bigr]\Bigr\} = 0. \tag{4.48} \]
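As an illustration, the coefficients of Eq. (4.48) can be generated mechanically from the r_k. The sketch below is mine, not the report's; it also examines the roots of the transformed polynomial numerically in place of writing out a Routh array. Applied to the r_k obtained later in Section 4.5.2, it reproduces the polynomial of Eq. (4.206).

```python
# Build the polynomial in w of Eq. (4.48) and check for left-hand-plane roots.
from math import comb
import numpy as np

def w_polynomial(r):
    """Coefficients of Eq. (4.48), descending powers of w, for r = [r1..rn], r0 = 1."""
    n = len(r)
    rk = [1.0] + list(r)
    return [sum((-1) ** k * rk[k] *
                sum((-1) ** m * comb(k, m) * comb(n - k, j - m)
                    for m in range(j + 1))
                for k in range(n + 1))
            for j in range(n + 1)]        # j = 0 gives the w^n coefficient

def all_roots_inside_unit_circle(r):
    """True if every root y_k of Eq. (4.42) lies inside the unit circle."""
    return all(root.real < 0 for root in np.roots(w_polynomial(r)))

# For the r_k of Section 4.5.2, r = (-2.080391, 1.528826, -0.392337), this
# reproduces Eq. (4.206): 5.0016 w^3 + 2.3746 w^2 + 0.5678 w + 0.0561,
# whose roots are all in the left-hand plane.
print(w_polynomial([-2.080391, 1.528826, -0.392337]))
print(all_roots_inside_unit_circle([-2.080391, 1.528826, -0.392337]))
```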

If the Routh test is applied to Eq. (4.48), one determines whether any y_k (k = 1, 2, ..., n) lie outside the unit circle.

4.3 The Determination of Residues

In the previous section the poles were determined by the method of discrete Tschebyscheff approximations to an overdetermined system of equations. It will be shown here that the same method can be applied to determine the residues. By Eq. (4.5) and Eq. (4.7),

\[ h_m = h(t_m) \approx \sum_{k=1}^{n} A_k e^{s_k t_m}. \]

Let

\[ b_{km} = e^{s_k t_m}; \tag{4.49} \]

then

\[ h_m = \sum_{k=1}^{n} b_{km} A_k \qquad (m = 1, 2, \ldots, q). \tag{4.50} \]

Equation (4.50) represents a system of q equations. Since q > n, the system is overdetermined and can be solved by means of the replacement process. The resulting residues will then minimize the error in h(t) in the Tschebyscheff sense at the points t_m (m = 1, 2, ..., q) of the time intervals. Since the approximating functions have exponentially decaying envelopes, it can be argued that, for sufficiently small intervals, a good approximation at the interval points will yield a good approximation between the interval points. The application of this method shows that in general good approximations are obtained, and that the Tschebyscheff error |ε| (which is the error of the last reference) is a meaningful indicator of the overall maximum error to be expected.
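The Tschebyscheff fit of Eq. (4.50) can also be posed, in present-day terms, as a small linear program: minimize ε subject to -ε ≤ h_m - Σ_k b_km A_k ≤ ε. The sketch below is offered only as an independent check on the replacement process, not as the report's method; the pole values and sample data in the demonstration are illustrative assumptions, not results from the report.

```python
# Minimax (Tschebyscheff) fit of the residues as a linear program.
import numpy as np
from scipy.optimize import linprog

def tschebyscheff_fit(B, h):
    """Minimize max_m |h_m - (B A)_m| over A; returns (A, minimax error)."""
    q, n = B.shape
    # Unknowns: [A_1 .. A_n, e]; objective: minimize e.
    c = np.r_[np.zeros(n), 1.0]
    A_ub = np.block([[ B, -np.ones((q, 1))],     #  B A - e <=  h
                     [-B, -np.ones((q, 1))]])    # -B A - e <= -h
    b_ub = np.r_[h, -h]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)], method="highs")
    return res.x[:n], res.x[-1]

# Hypothetical demonstration: two assumed real poles, samples of a target h(t).
d, q = 0.5, 9
t = d * np.arange(q)
s = np.array([-0.6, -2.6])                       # assumed pole positions
B = np.exp(np.outer(t, s))                       # b_km = exp(s_k t_m), Eq. (4.49)
h = 1.0 / (1.0 + t) ** 2                         # sampled target response
A, err = tschebyscheff_fit(B, h)
print(A, err)
```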

If s_k = α + jβ is a complex pole, then let s_{k+1} = α - jβ = s_k*. Hence b_{k+1,m} = b_{km}* and A_{k+1} = A_k*. If A_k = a + jb, then

\[ b_{km}A_k + b_{k+1,m}A_{k+1} = (a + jb)e^{s_k t_m} + (a - jb)e^{s_k^* t_m} = 2e^{\alpha t_m}(a\cos\beta t_m - b\sin\beta t_m) = a\,a'_{km} + b\,b'_{km}, \tag{4.51} \]

where

\[ a'_{km} = 2e^{\alpha t_m}\cos\beta t_m \]

and

\[ b'_{km} = -2e^{\alpha t_m}\sin\beta t_m. \tag{4.52} \]

It is somewhat simpler to solve equations for a and b than for A_k and A_{k+1}. Hence, if there are n poles, 2w of which are complex, one may use the form

\[ h_m = \sum_{k=1}^{n-2w} b_{km}A_k + \sum_{\lambda=1}^{w}\bigl(a_\lambda a'_{\lambda m} + b_\lambda b'_{\lambda m}\bigr), \tag{4.53} \]

where

\[ b_{km} = e^{s_k t_m}, \qquad a'_{\lambda m} = 2e^{\alpha_\lambda t_m}\cos\beta_\lambda t_m, \qquad b'_{\lambda m} = -2e^{\alpha_\lambda t_m}\sin\beta_\lambda t_m. \tag{4.54} \]

It should be noted that, if desired, the residues may also be determined by means of the least-square-error criterion [6, 11]. The poles can be determined as before.
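A compact helper for Eqs. (4.53)-(4.54), assembling the real coefficient columns when some poles occur in conjugate pairs, is sketched below; the naming is mine. With the pole set found later in Section 4.5.2 it agrees with the first rows of Table VIII to within a unit in the last tabulated digit.

```python
# Real basis columns per Eqs. (4.53)-(4.54).
import numpy as np

def real_basis(t, real_poles, complex_poles):
    """Columns: e^{s_k t} for each real pole, then 2 e^{a t} cos(b t) and
    -2 e^{a t} sin(b t) for each complex pole a + jb (one member of each
    conjugate pair is given)."""
    cols = [np.exp(s * t) for s in real_poles]
    for s in complex_poles:
        a, b = s.real, s.imag
        cols.append(2.0 * np.exp(a * t) * np.cos(b * t))    # a'_{lambda m}
        cols.append(-2.0 * np.exp(a * t) * np.sin(b * t))   # b'_{lambda m}
    return np.column_stack(cols)

# The pole set of Section 4.5.2 (one real pole and one conjugate pair):
# the first rows (t = 0, 0.2, 0.4 s) agree with Table VIII.
t = 0.2 * np.arange(16)
print(real_basis(t, [-1.905], [-1.3866 + 1.98959j])[:3].round(4))
```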

4.4 The Discussion of Errors in the Approximation Process

The purpose of this section is to discuss the errors of the approximation process. A comparison will be made between an approximation optimizing residues only and an approximation optimizing both poles and residues. A rough relationship will also be developed between the errors in the determination of the pole locations and the errors in the residues. It is to be noted that the final Tschebyscheff residue error is the error of the approximation.

From Eqs. (4.7),

\[ h_m \approx \sum_{k=1}^{n} A_k e^{s_k t_m} \qquad (m = 1, 2, \ldots, q). \tag{4.55} \]

Equations (4.55) form an overdetermined set and are, therefore, not satisfied exactly. The error of the approximation at the q points is

\[ \varepsilon_m = h_m - \sum_{k=1}^{n} A_k e^{s_k t_m} \qquad (m = 1, 2, \ldots, q). \tag{4.56} \]

Hence

\[ h_m - \varepsilon_m = \sum_{k=1}^{n} A_k e^{s_k t_m} \qquad (m = 1, 2, \ldots, q). \tag{4.57} \]

The relationship between the r_k (k = 0, 1, ..., n) and the h_v (v = 1, 2, ..., p) was derived earlier in this chapter as Theorem 1:

\[ \sum_{k=0}^{n} r_{n-k}\,h_{v+k} = 0 \qquad (v = 1, 2, \ldots, p). \tag{4.58} \]

The right side of Eqs. (4.58) is, in general, different from zero, since Eqs. (4.55) are not satisfied exactly. If the error on the right side of Eqs. (4.58) is denoted by ε̄_v, then

\[ \bar\varepsilon_v = \sum_{k=0}^{n} r_{n-k}\,h_{v+k} \qquad (v = 1, 2, \ldots, p). \tag{4.59} \]

Equations (4.57) are satisfied exactly. Hence, if (h_i - ε_i) is substituted for h_i in Eqs. (4.58), the right sides of Eqs. (4.58) become exactly zero. Therefore,

\[ \sum_{k=0}^{n} r_{n-k}\,(h_{v+k} - \varepsilon_{v+k}) = 0 \qquad (v = 1, 2, \ldots, p). \tag{4.60} \]

From Eqs. (4.60) it follows that

\[ \sum_{k=0}^{n} r_{n-k}\,h_{v+k} = \sum_{k=0}^{n} r_{n-k}\,\varepsilon_{v+k} \qquad (v = 1, 2, \ldots, p). \tag{4.61} \]

But, by Eq. (4.59), the left side of Eqs. (4.61) is ε̄_v. Hence

\[ \bar\varepsilon_v = \sum_{k=0}^{n} r_{n-k}\,\varepsilon_{v+k} \qquad (v = 1, 2, \ldots, p). \tag{4.62} \]

Expansion of Eqs. (4.62) yields (r_0 = 1)

\[ \begin{aligned} \bar\varepsilon_1 &= r_n\varepsilon_1 + r_{n-1}\varepsilon_2 + \cdots + \varepsilon_{n+1}\\ \bar\varepsilon_2 &= r_n\varepsilon_2 + r_{n-1}\varepsilon_3 + \cdots + \varepsilon_{n+2}\\ &\;\vdots\\ \bar\varepsilon_p &= r_n\varepsilon_p + r_{n-1}\varepsilon_{p+1} + \cdots + \varepsilon_{p+n}. \end{aligned} \tag{4.63} \]

The above equations relate the errors in the pole locations (ε̄_v) to the final errors (ε_m). It is seen from Eqs. (4.63) that, if the ε̄_v (v = 1, 2, ..., p) and the r_k (k = 1, 2, ..., n) are known, Eqs. (4.63) form an underdetermined set for the q = p + n unknowns ε_m (m = 1, 2, ..., p+n). Thus there are infinitely many sets of values of ε_m (m = 1, 2, ..., p+n) which satisfy Eqs. (4.63). The approximation

process will permit one to find that set of values of ε_m (m = 1, 2, ..., p+n) which is minimal in the Tschebyscheff sense. However, sets of values of ε̄_v (v = 1, 2, ..., p) and r_k (k = 1, 2, ..., n) different from the previous ones may produce a different Tschebyscheff minimum set ε_m (m = 1, 2, ..., p+n). It is desired, of course, to determine those values of r_k and ε̄_v which yield the minimal set ε_m. However, the ε̄_v (v = 1, 2, ..., p) are not independent, but are given by Eq. (4.59). Hence one has freedom only in the choice of values of the r_k (k = 1, 2, ..., n).

It follows from Eqs. (4.63) that an optimum choice of the r_k is one which causes all ε̄_v to be zero. If all ε̄_v are zero, all ε_m (m = 1, 2, ..., p+n) can be zero too, thus yielding zero error. But whenever p > n there does not exist, in general, a set of r_k (k = 1, 2, ..., n) which causes all ε̄_v (v = 1, 2, ..., p) to be zero. Hence the best choice is to select the r_k (k = 1, 2, ..., n) so as to minimize the ε̄_v (v = 1, 2, ..., p). This has been done: in the process developed in Section 4.2, pole locations were selected which minimize the ε̄_v in the Tschebyscheff sense.

It is of interest to compare an approximation procedure which optimizes residues only with one optimizing both pole locations and residues. One may inquire, for example, how many terms the approximating function must have (i.e., how large n must be) in each case in order to produce zero error at the interval points t_m (m = 1, 2, ..., q). The difference between the number of terms in the approximating function (which is proportional to the number of elements in the network) required in the two cases provides a measure of comparison.

It follows from Eqs. (4.56) that for arbitrary pole positions n = q is needed to cause all the ε_m (m = 1, 2, ..., q) to be zero. However, if both pole positions and residues are optimized, Eqs. (4.63) indicate that n = p is sufficient for all ε̄_v (v = 1, 2, ..., p) to be zero, and then all ε_m (m = 1, 2, ..., q) can be made zero also. Since p = q - n, this gives n = q/2 in the latter case, thus requiring only half as many terms as an approximation optimizing the residues only.

A rough relationship will now be developed between the maximum |ε̄_v| (v = 1, 2, ..., p) and the maximum |ε_m| (m = 1, 2, ..., q). This relationship enables one to estimate the expected maximum error in the early stages of the computational process. If the l-th equation of Eqs. (4.63) is squared, one obtains

\[ \bar\varepsilon_l^{\,2} = r_n^2\varepsilon_l^2 + r_{n-1}^2\varepsilon_{l+1}^2 + \cdots + \varepsilon_{l+n}^2 + 2r_n\varepsilon_l\bigl(r_{n-1}\varepsilon_{l+1} + \cdots + \varepsilon_{l+n}\bigr) + 2r_{n-1}\varepsilon_{l+1}\bigl(r_{n-2}\varepsilon_{l+2} + \cdots + \varepsilon_{l+n}\bigr) + \cdots + 2r_1\varepsilon_{l+n-1}\varepsilon_{l+n}. \tag{4.64} \]

If all the equations of Eqs. (4.63) are squared and added, then

\[ \sum_{v=1}^{p}\bar\varepsilon_v^{\,2} = r_n^2\sum_{v=1}^{p}\varepsilon_v^2 + r_{n-1}^2\sum_{v=1}^{p}\varepsilon_{v+1}^2 + \cdots + \sum_{v=1}^{p}\varepsilon_{v+n}^2 + (\text{cross-product terms}). \tag{4.65} \]

Since, in the cross-product terms, each ε_m (m = 1, 2, ..., q) is as likely to be positive as negative (the ε_m being points on the Tschebyscheff error curve), it can be argued that the cross-product terms contribute little to the right side of Eq. (4.65). Then,

\[ \sum_{v=1}^{p}\bar\varepsilon_v^{\,2} \approx r_n^2\sum_{v=1}^{p}\varepsilon_v^2 + r_{n-1}^2\sum_{v=1}^{p}\varepsilon_{v+1}^2 + \cdots + \sum_{v=1}^{p}\varepsilon_{v+n}^2. \tag{4.66} \]

Similarly, it can be argued that for sufficiently large p

\[ \sum_{v=1}^{p}\varepsilon_v^2 \approx \sum_{v=1}^{p}\varepsilon_{v+k}^2 \qquad (k = 1, 2, \ldots, n). \tag{4.67} \]

The above means that it does not matter too much whether the squares of the p errors are summed from ε_1 to ε_p or from ε_{k+1} to ε_{p+k}. If Eqs. (4.67) are substituted into Eq. (4.66), one obtains

\[ \sum_{v=1}^{p}\bar\varepsilon_v^{\,2} \approx \bigl(r_n^2 + r_{n-1}^2 + \cdots + 1\bigr)\sum_{v=1}^{p}\varepsilon_v^2. \tag{4.68} \]

Taking square roots of both sides of Eq. (4.68) yields

\[ \Bigl(\sum_{v=1}^{p}\bar\varepsilon_v^{\,2}\Bigr)^{1/2} \approx \bigl(r_n^2 + r_{n-1}^2 + \cdots + 1\bigr)^{1/2}\Bigl(\sum_{v=1}^{p}\varepsilon_v^2\Bigr)^{1/2}. \tag{4.69} \]

It can be observed that if ε̄_v is plotted against v, and a curve is drawn between the points, such a curve is similar to the curve obtained from a plot of ε_v against v. Each curve has n+1 ripples, but the curves differ in amplitude: the amplitude of the ε̄_v curve is ε̄_max, and the amplitude of the ε_v curve is ε_max. Thus, if (Σ_v ε_v²)^{1/2} is equal to ε_max times some constant k, then (Σ_v ε̄_v²)^{1/2} is approximately equal to ε̄_max times the same constant k. This result can be expressed as

\[ \bar\varepsilon_{\max} \approx \varepsilon_{\max}\sqrt{r_n^2 + r_{n-1}^2 + \cdots + 1}. \tag{4.70} \]

Therefore,

\[ \varepsilon_{\max} \approx \frac{\bar\varepsilon_{\max}}{\sqrt{r_n^2 + r_{n-1}^2 + \cdots + 1}}. \tag{4.71} \]

Equation (4.71) relates the error in the pole positions (ε̄_max) to the final error (ε_max). It can be observed that there is a degree of proportionality between the maximum error in the pole locations and the final maximum error. Since, at the end of the first cycle of computation, the maximum error obtained is already larger than ε̄_max, one has an upper-bound estimate of the expected approximation error quite early in the process. It should be noted, however, that Eq. (4.71) provides a rough estimate only, since several assumptions were made in its derivation. In a particular case these assumptions may not be met; hence the final error may differ substantially from the one predicted by Eq. (4.71).

In summary, it has been shown that there are definite relationships between the errors in the pole locations and the approximation errors (residue errors). It has also been shown (as was expected) that an optimization of both pole locations and residues produces better results (i.e., smaller error) than an optimization of residues only. Finally, a rough relationship was developed between the maximum error in the pole locations and the final maximum error.

4.5 Applications

In the preceding sections of this chapter the impulse-response approximation problem has been stated and solved. The proposed method consists first of the determination of the poles and then of the determination of the residues. In this section two examples are worked out to illustrate the suggested process.
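Before turning to the examples, it may be noted that Eq. (4.71) reduces to a one-line computation once the r_k and the pole-stage Tschebyscheff error are known; a sketch of my own follows, to be read only as the rule of thumb the derivation above describes.

```python
# Rough estimate of the final residue-stage error from Eq. (4.71).
import math

def estimated_final_error(pole_stage_error, r):
    """epsilon_max ~ epsilon_bar_max / sqrt(r_n^2 + ... + r_1^2 + 1)."""
    return abs(pole_stage_error) / math.sqrt(1.0 + sum(rk * rk for rk in r))
```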

4.5.1 Determination of a Network with Impulse Response 1/(1+t)². In the first example considered, h(t) is given as

\[ h(t) = \frac{1}{(1+t)^2}. \tag{4.72} \]

It is desired to find an R-L-C network, N, having an output voltage h(t) given by Eq. (4.72) when excited by an impulse-voltage input δ(t). A plot of h(t) vs. t is presented in Fig. 4.4.

A choice of the interval spacing d and of the number of interval points q must be made. The choice of d is dictated by the behavior of h(t). Since h*(t), the approximation to h(t), is a sum of decaying exponentials and decaying sinusoids, d must be selected in such a way as to prevent the error between the interval points from exceeding the error at the interval points. As a simple guide for selecting d, one may imagine h(t) as being replaced by a series of straight-line segments whose end-points are d seconds apart and coincide with h(t) (see Fig. 3.3). The time interval of approximation (i.e., the time necessary for |h(t)| to decay to a small fraction of its maximum magnitude) is equal to (q-1)d. Hence both d and q are obtained through an examination of the plot of h(t). An examination of Fig. 4.4 indicates that a choice of 0.5 second for the interval spacing d and a choice of q = 9 interval points are reasonable. These choices result in the values of h_m (m = 1, 2, ..., 9) at the interval points listed in Table IV.

[Fig. 4.4. Impulse response h(t) = 1/(1+t)².]

TABLE IV
VALUES OF h_m AT INTERVAL POINTS

    m     t_m     h_m
    1     0      1.0000
    2      .5     .4450
    3     1.0     .2500
    4     1.5     .1600
    5     2.0     .1110
    6     2.5     .0817
    7     3.0     .0625
    8     3.5     .0494
    9     4.0     .0400

Let one first consider an approximation with one approximating term. Hence n = 1, and from Eq. (4.27) one obtains the relationship

\[ h_v r_1 + h_{v+1} = 0 \qquad (v = 1, 2, \ldots, 8). \tag{4.73} \]

Substitution of the values of h_m from Table IV yields

r_1 + .445 = 0
.445 r_1 + .25 = 0
.25 r_1 + .16 = 0
.16 r_1 + .111 = 0
.111 r_1 + .0817 = 0
.0817 r_1 + .0625 = 0
.0625 r_1 + .0494 = 0
.0494 r_1 + .04 = 0.    (4.74)

Choosing the first two equations in Eqs. (4.74) as reference, one obtains

r_1 + .445 = 0

70 and ~~and ~.445rl +.25 = 0.(4.75) Then, x = (1) and X2 = (.445) (4.76) From Eq. (4.29), lX1x + k2x2 =. Hence, 1 +.4452 = 0. (4.77) Let k2 = 1, then 1 = -.445. Consequently, Z' x| | = 1.445 (2) and Z' kxh+ =.052 (2) a a+J From Eq. (4.34) CXh a a-+l e = - - =.036. (4.78) li (2) By Eq. (4.33) e = E (sgn ). Therefore El = -.036 and e2 =.036 From Eq. (4.28) r1 +.445 +.036 = 0. (4.79) Solving for rl yields, r- = -.481. (4.80)

71 Now one can compute the errors in Eqs. (4.74). These are: EL = -.036 62 =.036 E3 =.04 64 =.034 C5 =.0282 66 =.0232 C7 =.01935 e8 =.01624 Since 1e31 > ei6, the replacement process must be undertaken. Now, x = (.25), (4.82) and by Eq. (4.37) A!Xl + 42x2 + x3 = 0 Therefore, 1 +.445 2 +.250 = 0. (4.83) Let p2 = 0, then 1 = -.25. Since sgn e = sgn y, and Ei > 0, from Table II, the a equation which is designated by the number given by Min A must be replaced. ~1 -.25 02 0 Now, 1 -.44' x2 1 Since Min - - the second equation in Eqs. (4.75) must xa x2 be replaced. The new equations are r1 +.445 = 0 and.25rl +.16 = 0. (4.84) The equation for,a is given by

72 1 +.25 3 = 0. Let 3 = 1, then 1 = -.25 Consequently, Z | xa = 1.25 (2) and Z h =.04875 (2) a C+l1 Hence, X Cah e ) = Y — =.039. (4.85) (2) Then 1 = -.039 and ~~and.^C =.039 By Eq. (4.28) ri +.445 +.039 = 0 Hence, r 1 = -.484. (4.86) The errors for all the Eqs. (4.74) are now: eI = -.039 2 =.035 E3 =.039 E4 =.0336 e5 =.0241 c6 =.0230 c7 =.0192 E8 =.0162 Since |vi < e|, (v = 1,..., 8) r, = -.484 is the desired solution.
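A quick numerical check of the pole stage just completed (the sketch is mine, not the report's): with r_1 = -.484 the residuals h_v r_1 + h_{v+1} of Eqs. (4.74), formed from the Table IV ordinates, all stay within about .039 in magnitude, the reference error found above.

```python
# Verify the one-term pole-stage solution r1 = -.484 against Eqs. (4.74).
import numpy as np

h = np.array([1.0000, .4450, .2500, .1600, .1110, .0817, .0625, .0494, .0400])
r1 = -0.484
residuals = h[:-1] * r1 + h[1:]          # Eqs. (4.74), v = 1..8
print(residuals.round(4), np.abs(residuals).max())   # maximum is about .039
```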

73 From Eq. (4.42), y -.4-4 = 0. (4.88) Therefore, Thereforey =.484 (4.89) From Table III, s _ = in Y1 = 2 In (.484) = 2 (-.725) = -1.45. (4.90) The residue A can be found from Eqs. (4.50). This equation requires the knowledge of the coefficients bkm, which are defined in km skt Eq. (4.49) as bk = e. The computed values of bm are listed in Table V. TABLE V VALUES OF bm AT INTERVAL POINTS m tm bkm. km 1 0 1.0000 2.5.4840 3 1.0.2346 4 1.5.1132 5 2.0.0550 6 2.5.0265 7 3.0.0129 8 3.5.0062 9 4.0.0030 From Eqs. (4.50) one has the following relationship: m = blmA (m = 1, 2,..., 9) (4.91) Substituting values for h from Table IV, and values for blm from Table V, one obtains: 1 = A1.445 =.484 A.25 =.2346 A1

74.16 =.1132 A1.111 =.055 A1.0817 =.0265 A1.0625 =.0129 A1.0494 =.0062 A1.04 =.003 A1. (4.92) Considering the first two equations in Eqs. (4.92), A -1 = 0.484 A -.445 = 0. (4.93) Then, x = (1) x2 = (.484) and Xlx1+ X2x2 = 0 Hence., k1 +.484N2 = 0; let 2 = 1, then k1 = -.484 Consequently, Z |Xat = 1.484 (2) and Z X (-h ) = (-.484)(-1) + (1)(-.445) =.039 (2) Hence Z (-h a) =(2) " — =.0263. (4.94) zjx IxoI (2) Then, e1 = e (sgn X1) = -.0263 2 = e (sgn %2) =.0263

75 Therefore, A - 1 +.0263 = 0 or 1 =.9737 * (4.95) The errors in Eqs. (4.92) are: E1 = -.0263 ~2 =.0263 3 = -.022 4 = -.0498 = -.0575 C6 = -.0559 ~E = -.04994 e8 = -.0432:9 = -.037 ~ (4.96) Since emax = 1E51 > |E|, the replacement process must be undertaken. Now, x5 = (.055) (4.97) and, as before, AlXl + 42X2 + x) = o Therefore, "l +.484 ~2 +.055 = 0 (4.9) Let 2 = O, then 1 = -.055 Since sgn c = sgn?, and Ei < 0, the equation which is aa designated by the number given by IMax - must be replaced. Now, 1 = -.055'2 0 bi ra-.484 1 Since Max - =, the first equation in Eqs. (4.92) must a 1 be replaced.

76 The new equations are:.484 A -.445 = 0 and.055 A -.111 = 0. (4.99) Now,.484 x2 +.~55% = 0 Let 5 =.484, then X2 = -.055. Consequently, Z!x =-.539 (2) and X C(-h ) = -.0291 (2) Then = -.0291 -.0o4,.539 E2 = e(sgn x2) =.054, and E3 = e(sgn X5) = -.054. Therefore,.484 A1 -.445 -.054 = 0, and ~and A1 = 1.03. (4.101) The errors in Eqs. (4.92) are: e1 =.03 c2 =.054 e3 = -.008 E4 = -.043 c5 = -.054 6 = -.053 e7 = -.049 c8 = -.043 ~g = -.037. (4.102)

Since |ε_m| ≤ |ε| (m = 1, ..., 9), A_1 is the desired residue, and the one-term approximation is

\[ h^*(t) = 1.03\,e^{-1.45t}. \tag{4.103} \]

The Tschebyscheff error is .054. The Laplace transform of Eq. (4.103) is

\[ H^*(s) = \frac{1.03}{s + 1.45}. \tag{4.104} \]

A network with the voltage-transfer function of this equation is shown in Fig. 4.5.

[Fig. 4.5. Network realizing the h*(t) of Eq. (4.103); H*(s) = E_2/E_1 = 1.03/(s + 1.45), with R_1 = .97 Ω, R_2 = 2.38 Ω, C = 1 fd.]

The example will now be repeated, using the same interval spacing (d = 0.5) and the same number of interval points (q = 9) as before, but now a two-term approximation is sought. Hence n = 2, and

78 one obtains hr2 + h + h = o (4.105) hvr2 + hv+rl + hv+2 (v = 1, 2,..., 7). Substitution of values for h from Table IV yields: r2 +.445r1 +.25 = 0.445r2 +.25r1 +.16 = 0.25r +.16rl +.111 = 0.16r2 +.lllr +.0817 = 0.11lr2 +.0817rl +.0625 = 0.0817r2 +.0625r1 +.0494 = 0.0625r2 +.0494rl +.04 = 0. (4.106) Choosing the first three equations in Eqs. (4.106) as reference, one obtains r2 +.445r1 +.25 = 0.445r2 +.25r, +.16 = 0 25r2 +.16rl +.111 = 0. (4.107) Then, x. = (1,.445), x2 = (.445,.25), x3 = (.25,.16). (4.o18) From Eq. (4.29), k1 +.445A2 +.25X3 = 0 and.4451,+ 25 +.25 +.163 = 0 Lett 3 k3 = 1. Then 1 =.17, Xa = -.942

79 Hence, Hence | = 2.112 (3) and Xahc =.0025 (3) a c+2 Consequently, ~ X h e = (3), - =.001183. (4.109) Z |io (3) By Eq. (4.33), eo = e (sgn ). Therefore, ~1 =.001183 ~2 = -.001183 E3 =.001183 By Eq. (4.28), r2 +.445r1 +.25 -.001183 = 0.445r3 +.25r +.16 +.001183 = 0 Hence, r, = -.968 r2 =.1822. (4.110) Now one can compute the errors in Eqs. (4.106). These are: 61 =.0012 C2 = -.0012 E3 =.0012 C4 =.0034 e5 =.0037 e6 =.0008 ~7 =.0036. (4.111) 7-=

80 Since 1 e1 > jI1, the replacement process must be undertaken. Now, x = (.111,.0817) (4.112) and by Eq. (4.37), B1X~ + 42x2 + 43x3 + x = 0 Therefore, 1- +.445p2 +.2513 +.111 = 0 *44541+.2542 +.1643 +.0817 =. (4.113) Let L3 = 0, then K1 =.165, 42 = -.621. Since sgn e =sgn X and Ei > 0, hence from Table II, the equation which is designated by the number given by Min - must be a replaced. Now, 41.165 42 -.621 43 0 X2 3 *.17' A -.942' = 1 Min m = h; hence, the third equation in Eqs. (4.106) must be ca 3 replaced. The new equations are: r2 +.445r +.25 = = 0.445r2 +.25r1 +.16 = 0.1lr2 +.0817r1 +.0625 = 0 The equations for a are: Al +.4452 +.111 = 0.4451 +.25X2 +.0817x5 = 0 (4.115)

81 Let 5 = 1, then 1 =.1654 and X2 = -.621. Consequently, Z |Ix = 1.7864 (3) and X h =.00449 (3) a +2 Hence, h (3) a+2 = a(3) +2 =.00251. (4.116) Z'|\ (3) a Then 61 =.00251 2 = -.00251 E =.00251 By Eq. (4.28), r2 + 445r1 +.25 -.0025 = 0 and.445r2 +.25r1 +.16 +.0025 = 0. (4.117) Hence, r = -1.007, r2 =.2009.(4.118) The errors in Eqs. (4.106) are: 61 =.0025 2 = -.0025 3 = 0 E4 =.0020 E6 =.0025 66 =.0028 E =.0028. (4.119) Since, the eplacement process must take place Since 661 > 161I, the replacement process must take place.

82 Now, x6 = (.0817,.0625), (4.120) and by Eq. (4.37), 41 +.445p2 +.11145 +.0817 = 0.445L1 +.25p2 +.061745 +.0625 = 0. (4.121) Let 5 = 0, then 41 =.1420, 42 = -.0526. Since sgn a = sgn Xa and e. > 0, hence from Table II, the equation designated by the number given by Min CY must be replaced. Now, 41.1420 P2 -.5026 45 0 2 55 A =.1654 A2 -.621' A5 1 Min- =; hence the fifth equation in Eqs. (4.106) must be replaced. a 5 The new equations are: r2 +.445r, +.25 = 0.445r2 +.25rl +.16 = 0.0817r2 +.0625rI +.0494 = 0. (4.122) X are determined from: 1 +.4452 +.0817x6 = 0.445 +.25X2 +.0625x6 = 0. (4.123) Let A6 = 1, then ki =.1420, 2 = -.5026. Consequently, Z X|a| = 1.6446 (3) and Zx h =.004484 (3) a +2 Hence,

83 L h +2 E (3) =.00273 (4.124) Z'IXIal (3) Then, E1 =.00273 62 = -.00273 E6 =.00273 By Eq. (4.28): r2 +.445r1 +.25 -.0027 = 0.445r2 +.25r1 +.16 +.0027 = 0. (4.125) Therefore, rl = -1.00128, r2 =.2033.(4.126) The errors in Eqs. (4.106) are: E, =.0027 2 = -.0027 3 = -.0002 E4 =.0018 E~C =.0023 E6 =.0027 67 =.0026. (4.127) 7 Since |6kl < 6el (k = 1, 2,..., 7) r1 = -1.0128, and r2 =.2033 are the desired solutions. From Eq. (4.42): y2 - 1.0128y +.2033 = 0 (4.128) Hence, Y1 =.7369

y_2 = .2759.    (4.129)

From Table III,

s_1 = (1/d) ln y_1 = 2 ln(.7369) = -.6106
s_2 = (1/d) ln y_2 = 2 ln(.2759) = -2.5754.    (4.130)

The residues A_1 and A_2 can be found from Eq. (4.50). The coefficients b_1m and b_2m are found from

b_1m = e^{s_1 t_m},    b_2m = e^{s_2 t_m}    (m = 1, 2, ..., 9).    (4.131)

The computed values of b_1m and b_2m are listed in Table VI.

TABLE VI
VALUES OF b_1m AND b_2m AT INTERVAL POINTS

    m     t_m     b_1m       b_2m
    1     0      1.00000    1.00000
    2      .5     .73690     .27590
    3     1.0     .54302     .07612
    4     1.5     .40015     .02100
    5     2.0     .29487     .00579
    6     2.5     .21729     .00160
    7     3.0     .16012     .00044
    8     3.5     .11799     .00012
    9     4.0     .08695     .00003

From Eq. (4.50) one obtains the relationship

\[ h_m = b_{1m}A_1 + b_{2m}A_2 \qquad (m = 1, 2, \ldots, 9). \tag{4.132} \]

Substituting values for hm from Table IV, and values for blm and b2m from Table VI, one obtains: 1.0000 = A1 + A2.4450 =.73690A1 +.27590A2.2500 =.54302A1 +.07612A2.1600 =.40015A1 +.02100A.1110 =.29487A1 +.00579A2.0817 =.21729A1 +.00160A2.0625 =.16012A1 +.00044A2.0494 =.11799A1 +.00012A2.0400 =.08695A1 +.00003A2. (4.133) The first, sixth and ninth equations in Eqs. (4.133) are: A1 + A2 = 1.21729A1 +.00160A =.0817.08695A1 +.00003A2 = 0400. (4.134) The equations for k are: 1 +.21729X6 +.08695x9 = 0 1 +.001606 +.000003\ = 0. (4.135) Let Xg = 1, then N1 =.00061, k6 = -.40299. Consequently, Z' |lx = 1.40360 (3) and

86 Z X (-h) = -.0076857 ~ (3) Hence, a(-h) e= (3) 0 = -.o0048. (4.136) Z Ix I (3) Then, = e sgn1 = -.00548 E6 = e sgn X6 =.00548 ~C = E sgn 9 = -.00548 A1 and A2 are determined from: A1 + A - 1 +.00548 = 0.21729A1 +.00160A2 -.0817 -.0048 = 0. (4.137) Hence, A1 =.39681, A2 =.59771 ~ (4.138) 2 The errors in Eqs. (4.133) are:., = -.00oo48 c2 =.01232 3 =.01097 E4 =.01134 6e =.00947 E6 =.00548 ~7 =.00130 C8 = -.00251., = -.00548. (4.139) Since 1 cl > |I, the replacement process must be undertaken.

87 Now, X2 = (.73690,.27590), (4.140) and by Eq. (4.37), 11 +.21729e6 +.08695p9 +.73690 = 0 41 +.00160o6 +.0000349 +.27590 = 0. (4.141) Let 49 = 0, then 41 = -.27248, 46 = -2.13733. Since sgn e = - sgn X and c. > 0, hence from Table II, the equation which is designated by the number given by Max - must be a replaced. Now, 41 -.27248 46 _ -2.13733 - 0 1.00061' A6 -.40299' 1 ca [16 Max - -, hence the sixth equation in Eqs. (4.133) must be %a 6 replaced. The new equations are: AI + A2 = 1 *73690A1 +.27590A2 =.4450.08695A1 +.0003A2 =.04000. (4.142) The equations for k are: 1 +.73690X2 +.08695X9 = 0 k1 +.27590X2 +.00003\9 = 0. (4.143) Let k9 = 1, then k1 =.05199, 2 = -.18855. Consequently, ~ Ix1 = 1.24054 (3) and Z XA,(-h ) = -.00808525 (3) Hence,

88 x(-h ) ~ (3) = -.00652. (4.144) Z I| i (3) Then, e = E sgn X1 = -.00652 E2 = E sgn 2 =.00652 e = E sgn X9 = -.00652 The equations for Al and A2 are: Al + A - 1 +.00652 = 0.73690A1 +.27590A2 -.4450 -.00652 = 0. (4.145) Hence, A =.38485, A2 =.60863. (4.146) The errors in Eqs. (4.133) are: 1 = -.00652 2 =.00652 E3 =.00531 C4 =.00678 6) =.00600 ~6 =.00290 e7 = -.00061 C8 = -.00392 E9 = -.00652. (4.147) Since |E41 > je|, the replacement process must take place. From Eqs. (4.133), X4 = (.40015,.02100), (4.148)

89 and by Eq. (4.37), 41 +.7369012 +.0869549 +.40015 = 0 11 +.27590A2 +.0000349 +.02100 = 0. (4.149) Let 19 = 0, then 41 =.20591,'2 = -.82245. Since sgn ca = -sgn X and e. > 0, hence from Table II, the equation which is designated by the number given by Max - must be replaced. Now, 1 _.20591._2 -.82245 0 x.*05199' x -.18855' Max - =, hence the second equation in Eqs. (4.133) must be replaced. Aa 2 The new equations are: A1 + A = 1.40015A1 +.02100A2 =.1600.08695A1 +.00003A2 =.04000 (4.150) The equations for a are: 1 +.40015x4 +.08695x, = 0 k1 +.02100X4 +.00003X9 = 0. (4.151) Let k9 = 1, then \1 =.00478, k4 = -.22925 Therefore, Z I = 1.23403 (3) and Z X (-h) = -.00810 (3)a a

90 Hence, ~ A(-ha) ~ =' (3 h = -.00656. (4.152) Z I|ol (3) Then, 1 = e sgn X1 = -.00656 c4 = e sgn x4 =.00656 E9 = e sgn X9 = -.00656. A1 and A2 are determined from: A1 + A2 - 1 +.00656 = 0.40015A1 +.02100A2 -.1600 -.00656 = o. (4.153) Hence, A1 =.38427, A.60917 * (4.154) The errors in Eqs. (4.133) are: C = -.00656 ~2 =.00424 3 =.00504 =.00656 e _ =.oo584 6 =.00277 E7 = -.00070 c8 = -.00399 c9 = -.00656 (4.155) Since me l <K E\ (m = 1, 2,..., 9), A1 and A2 are the desired residues, and the two-term approximation is: h*(t) =.3843e -6106t +.6092 e-25754t. (4.156)

The Tschebyscheff error is only .00656, as compared with .054 for the one-term approximation. The Laplace transform of Eq. (4.156) is

\[ H^*(s) = \frac{.3843}{s + .6106} + \frac{.6092}{s + 2.5754}. \tag{4.157} \]

A network realizing H*(s) as a voltage ratio is shown in Fig. 4.6.

[Fig. 4.6. Network realizing the h*(t) of Eq. (4.156); H*(s) = E_2/E_1 = .9935(s + 1.3706) / [(s + .6106)(s + 2.5754)], with R_1 = .5000 Ω, R_2 = .2631 Ω, R_3 = 4.9297 Ω, C_2 = 2.7731 fd, C_3 = 2.0132 fd.]
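As a small consistency check (my own), recombining the two partial fractions of Eq. (4.157) reproduces the single-quotient numerator quoted with Fig. 4.6.

```python
# Recombine the partial fractions of Eq. (4.157) over a common denominator.
import numpy as np

num = 0.3843 * np.poly1d([1.0, 2.5754]) + 0.6092 * np.poly1d([1.0, 0.6106])
print(num)          # 0.9935 s + 1.3617  =  0.9935 (s + 1.3706)
```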

Plots of h(t) and h*(t) vs. t for the one-term and two-term approximations are shown in Fig. 4.7. The plot of h*(t) for the two-term approximation agrees so closely with h(t) that in Fig. 4.7 it appears to coincide with h(t). The plots of [h*(t) - h(t)] vs. t for these two cases appear in Fig. 4.8.

4.5.2 Determination of a Network with Impulse Response te^{-t²}. As the second example, let

\[ h(t) = t\,e^{-t^2}. \tag{4.158} \]

It is desired to find an R-L-C network, N, having an output voltage h(t) given by Eq. (4.158) when excited by an impulse-current input δ(t). A plot of h(t) vs. t is shown in Fig. 4.9. An examination of this figure indicates that a choice of 0.2 second for the interval spacing d and a choice of q = 16 points are reasonable. These choices result in the values of h_m at the interval points listed in Table VII.

TABLE VII
VALUES OF h_m AT INTERVAL POINTS

    m     t_m     h_m
    1     0       0
    2      .2     .1922
    3      .4     .3408
    4      .6     .4187
    5      .8     .4219
    6     1.0     .3679
    7     1.2     .2843
    8     1.4     .1973
    9     1.6     .1237
   10     1.8     .0706
   11     2.0     .0366
   12     2.2     .0158
   13     2.4     .0051
   14     2.6     .0030
   15     2.8     .0011
   16     3.0     .0003

[Fig. 4.7. h(t) and h*(t) for the one-term and two-term approximations (example of Sec. 4.5.1).]

[Fig. 4.8. [h*(t) - h(t)] for the one-term and two-term approximations (example of Sec. 4.5.1); the peak deviations are .054 for n = 1 and .00656 for n = 2.]

[Fig. 4.9. Impulse response h(t) = te^{-t²}.]

96 Let one consider an approximation with three approximating terms. Hence n = 3, and from Eq. (4.27) one obtains the relationship: hr3 + + hv+r + h 3 = 0 (4.159) v 3 v+l2 v+21 v+3 (v = 1, 2,..., 13). Substitution of values for h from Table VII yields:.1922r2 +.5408rl +.4187 = 0.1922r3 +.3408r2 +.4187r +.4219 = 0.3408r3 +.4187r +.4219r +.3679 = 0.4187r +.421r2 + 3679r +.2843 = 0.4219r3 +.3679r2 +.2843r1 +.1973 = 0.3769r3 +.2843r2 +.1973r1 +.1237 = 0.2843r3 +.1973r2 +.1237r1 +.0706 = 0.1973r +.1237r2 + 06r +.0366 = 0.1237r3 +.0706r2 +.0366r1 +.0158 = 0.0706r3 +.0366r2 +.0158r1 +.0051 = 0.0366r3 +.0158r2 +.0051r +.003 = 0.0158r +.0051r2 +.003r +.0011 = 0.0051r3 +.003r2 +.Ollr +.0003 = 0. (4.160) Choosing the first, fourth, ninth, and last equations as reference, one obtains:.1922r2 +.3408r1 +.4187 = 0.4187r3 +.4219r +.3679r +.2843 = 0.1237r3 +.0706r2 +.0366r1 +.0158 = 0.0051r +.003r +.1OOllr +.0003 = 0. (4.161)

97 Then, x1 = (0,.1922,.3408) x4 = (.4187,.4219,.3679) x9 = (.1237,.0706,.0366) x13 = (.0051,.003,.0011). (4.162) From Eq. (4.29) Xlx1 + 4x4 + 9x9 + 13l3 = 0 Hence,.4187x4 +.1237x9 +.005113 = 0.1922X1 +.4219X4 +.0706x9 +.003 13 == 0.3408x1 +.3679x4 +.0366X9 +.001113 = 0. (4.163) Let 3 = 1 then XI =.006253, k4 = -.007057, Xg = -.017345. Consequently, Z|I I = 1.030655 (4) and Zk h =.00637775. (4) a a+3 Then Z' h L k Ch+3 = (4) =.000619. (4.164) (4) By Eq. (4.33), c = E (sgn X) a ao Therefore,.000619 1=.000619 C4 = -.000619 9 = -.000619 613 =.000619.

98 By Eq. (4.28),.1922r2 +.3408r1 +.4187 -.000619 = 0.4187r3 +.4219r2 +.3679r1 +.2843 +.000619 = 0.0051r3 +.003r2 +.00llr +.0003-.000619 = 0. (4.165) Hence, r 1 = -2.097,922, r2 = 1.544,698, r3 = -.393,604. (4.166) Now one can compute the errors in Eqs. (4.160). These are: 61 =.000619 2 = -.005718 e3 = -.oo4588 4 = -.000619 5 =.003094 E6 =.004131 7 =.003954 e8 =.001908 e9 = -.000617 e10 =.000700 11 =.002301 ~12 = -.003535 13 =.000619. (4.167) Since 1| > |e|, the replacement process must be undertaken. 2 Now, x2 = (.1922,.3408,.4187), (4.168) and by Eq. (4.37), Xl + l4X + 4.X9 + 1rX13- + 1X + = 0

99 Therefore,.418744 +.12379 +.0051O13 +.1922 = 0.192241 +.421944 +.070649 +.003113 +.3408 = 0.34081 +.367944 +.036649 +.0011413 +.4187 = 0. (4.169) Let 4 = 0, then 1 = -1.1638, 49 = 1.8791, 13= -83.3232. Since sgn Ea = sgn \, and Ei < 0, from Table II, the equation which is designated by the number given by Max - must be replaced. Now, 1 -1.1638 44 _ 0 x.006253' \4 ~ -.007057 49 1.8791 413 - -83.3232 ~ -.017345' 1 4 Max = -, hence the fourth equation in Eqs. (4.160) must be replaced. ka 4 The new equations are:.1922r2 +.3408rl +.4187 = 0.1922r3 +.3408r2 +.4187r1 +.4219 = 0.1237r3 + 0706r2 +.0366r1 +.0158 = 0.0051r3 +.003r2 +.11OOllr +.0003 = 0. (4.170) Then, x1 = (0,.1922,.3408) x2 = (.1922,.3408,.4187) 9 = (.1237,.0706,.0366) x3 = (.0051,.003,.0011). (4.171) The equations for X are:

100.1922X2 +.1237k9 +.0051X13 = 0.1922X1 +.3408X2 +.0706x +. 003A13 = 0.3408X1 +.4187X2 +.0366X9 +.001113 = 0. (4.172) Let 13 = 1, then X1 =.0137585, X2 = -.0118285, ~ = -.0228503. Consequently, ~ I X = 1.048437 (4) and t X + h.000709205. (4) a a+3 Then X' ah E= - a a+3 =.000676 (4.173) ~ 1x1 (4) a The errors for the reference are: ~1 =.000676 2 = -.000676 E = -.000676 13=.000676 By Eq. (4.28),.1922r2 +.3408r1 +.4187 -.000676 = 0.1922r3 +.3408r2 +.4187r1 +.4219 +.000676 = 0.1237r3 +.0706r2 +.0366r1 +.0158 +.000676 = 0. (4.174) Hence, r1 = -2.203555; r2 = 1,732297; r3 = -.469897. (4.175) The errors in Eqs. (4.160) are now computed and are listed below:

101 e1 =.000676 62 = -.000676 c3 =.003392 E4 =.007722 E6 =.009892 ~6 =.008556 c7 =.006211 e8 =.002603 c9 = -.000676 10 =.000511 11 =.001934 12 - -.004100 612 = oo4oo 13 =.000676. (4.176) Since 1E5r- > EI, the replacement process must again be undertaken. Now, x5 = (.4219,.3679,.2843). (4.177) Hence, by Eq. (4.37),.1922L2 +.123749 +.0051413 +.4219 = 0.192241 +.340842 +.070649 +.0030413 +.3679 = 0.340811 +.418712 +.036619 +.0011413 +.2843 = 0. (4.178) Let = 0, then p1 = -.607373, 49 = 1.380154, 13 = -116.200940 Since sgn e = sgn X and ei > 0, from Table II, the equation awhich is d which is designated by the number given by Min ~-must be replaced.

102 Now, K1 _ -.607373 2 0 1 ".0137585' 2 -.0118285' 49 1.380154 K13 -116.200940 k9 - -.0228503' A13 1 9 13 Since Min13 the 13th equation in Eqs. (4.160) must Sa 13 be replaced. The new equations are:.1922r2 +.3408r1 +.4187 = 0.1922r3 +.3408r2 +.4187r1 +.4219 = 0.4219r3 +.3679r2 +.2843r1 +.1973 = 0.1237r3 +.0706r2 +.0366r1 +.0158 = 0. (4.179) Then xl = (0,.1922,.3408) X2 = (.1922,.3408,.4187) x5 = (.4219,.3679,.2843) Xg = (.1237,.0706,.0366). (4.180) The equations for X are:.1922k2 +.4219X5 +.1237\9 = 0.1922X1 +.3408X2 +.3679X5 +.0706k9 = 0.3408k1 +.4187x2 +.2843X5 +.0366X9 = 0. (4.181) Let 9 = 1, then X1 = -.777570, X2 = 1.078037, k5 = -.784306. Consequently, Z | | = 3.639913 (4) and Z'k h = -.009688 (4) a 0+3

103 Then L h+ e = mu - = -.002662 (4.182) Et |A| (4) a and 1 =.002662 2 = -.002662 5 =.002662 c9 = -.002662. By Eq. (4.28),.1922r2 +.3408rl +.4187 -.002662 = 0.1922r3 +.3408r2 +.4187r1 +.4219 +.002662 = 0.4219r3 +.3679r2 +.2843r1 +.1973 -.002662 = 0. (4.183) Hence, r = -2.176675, r2 = 1.694968, r3 = -.472597 ~ (4.184) The errors in Eqs. (4.160) are: ~, =.002662 2 = -.002662 3 = -.001817 e4 =.000732 e6 =.002661 E6 =.002253 E7 =.001403 C8 = -.000649 69 = -.002662 0 = - -.000621 e11 =.001382 612 = -.004253 c13 =.000580. (4.185)

104 Since 1 121 > I e, one of the equations in Eqs. (4.179) must be replaced. Now, x,2 = (.0158,.0051,.0030). (4.186) Hence, by Eq. (4.37),.1922[2 +.421945 +.123719 +.0158 = 0.192241 +.340842 +.376915 +.070649 +.0051 = 0.3408,1 +.4187g2 +.2843)5 +.0366 9 +.0030 = 0. (4.187) Let g9 = O, then 41 = -.343950, 12 =.431783, and i5 = -.234152. Since sgn = -sgn a and. E < 0, from Table II, the equation which is designated by the number given by Min - must be replaced. Now, 1 _ -.343950 42.431783:I -.777570' k2 1.078037 5 _ -.243152 49 0 \ - -.784306' = 1 Since Min, the ninth equation in Eqs. (4.160) must a k9 be replaced. The new equations are:.1922r2 +.3408r1 +.4187 = 0.1922r3 +.3408r2 +.4187r1 +.4219 = 0.4219r3 +.3679r2 +.2843r1 +.1973 = 0.0158r3 +.0051r2 +.003r1 +.0011 = 0. (4.188) Then X1 = (0,.1922,.3408) X2 = (.1922, 3408,.4187) x5 = (.4219,.3679,.2843) 12 = (.0158,.0051,.0030). (4.189)

105 The equations for X are, therefore:.1922X2 +.4219X5 +.0158X12 = 0.1922x1 +.3408X2 +.3679X5 +.005112 = 0.3408x1 +.4187k2 +.2843X5 +.0030o12 = 0. (4.190) Let k12 = 1, then A1 = -.343950, k2 =.431783, k5 = -.234152 Consequently, ~ IXa = 2.009885 (4) and X h h +3= -.006940807 (4) a a+3 Hence, (=(4) 0+3 - (4) I - = -.003453 (4.191) (4) and ~ =.003453 2 = -.003453 e =.003453 12 = -.003453. By Eq. (4.28),.1922r2 +.3408r1 +.4187 -.003453 = 0.1922r3 +.3408r2 +.4187r1 +.4219 +.003453 = 0.4219r3 +.3679r2 +.2843r1 +.1973 -.003453 = 0. (4.192) Hence, r1 = -2.072670, r2 = 1.514667, r3 = -.383583.(4.193)

106 The errors in Eqs. (4.160) are found to be: 1 =.003453 2= -.003453 E3 = -.003093 E4 =.000197 c5 =.003452 E6 =.004262 7 =.004002 c8 =.001953 E9 = -.000573 10 =.000708 11 =.002322 E1 = -.003454 e3 =.000608. (4.194) Since 1~61 > I||, the replacement process must again be undertaken. Now, x6 = (.3679,.2843,.1973) ~ (4.195) Hence, by Eq. (4.37),,.192242 +.421945 +.0158412 +.3679 = 0.192241 +.340812 +.367945 +.0051412 +.2843 = 0.340841 +.418742 +.284345 +.0030112 +.1973 = 0.(4.196) Let 2 = 0, then 1 =.107547, 15 = -.803644, 12 = -1.825459. Since sgn a= -sgn X and i. > 0, from Table II, the a a. equation which is designated by the number given by Max A- must be replaced.

107 Now, A1.107547 42 0 X ~ -.343950" X2 431783' 45 -.803644 ^12 _ -1.825459 5 ~ -.234152' X12 1 5 Max -, hence the fifth equation in Eqs. (4.160) must be replaced. oa 5 The new equations are:.1922r2 +.3408rI +.4187 = 0.1922r3 +.3408r2 +.4187rl +.4219 = 0.3679r +.2843r +.1973r +.1237 = 0.0158r3 +.0051r2 +.0030r1 +.0011 = 0. (4.197) then Xi = (0,.1922,.3408) x2 = (.1922,.3408,.4187) X6 = (.3679,.2843,.1973) X12 = (.0158,.0051,.0030) (4.198) The equations for x are, therefore:..1922X2 +.3679X6 +.015812 = 0.1922X1 +.3408X2 +.2843X6 +.0051x12 = 0.3408k1 +.4187X2 +.1973x6 +.003012 = 0. (4.199) Let X12 = 1, then Al = -.244985, X2 =.281867, k6 = -.190201. Consequently, Z aI = 1.717053 (4) and ZX h3 = -.006083 (4) a a+3

108 Hence LZx h (4 ) a +3 e (4) 3 = -.003543. (4.200) (4) a The errors for the reference are: 1 =.003543 62 = -.003543 E6 =.003543 E12 = -.003543 By Eq. (4.28),.1922r2 +.3408r1 +.4187 -.003543 = 0.1922r3 +.3408r2 +.4187r1 +.4219 +.003543 = 0.3679r3 +.2843r2 +.1973r1 +.1237 -.003543 = 0 (4.201) Hence, r = -2.080391 r2 = 1.528826 r3 = -.392337. (4.202) The errors in Eqs. (4.160) are now: 1 =.003543 2 = -.003543 E3 = -.003406 ~4 = -.000336 e =.002773 5 e6 =.003543 7 =.003352 e8 =.001432 E9 = -.000939 e10 =.000486 e -..002186 ell1

ε_12 = -.003543
ε_13 = .000597.    (4.203)

Since |ε_v| ≤ |ε| (v = 1, ..., 13), r_1 = -2.080391, r_2 = 1.528826, r_3 = -.392337 are the desired solutions. From Eq. (4.42),

\[ P(y) = y^3 - 2.080391\,y^2 + 1.528826\,y - .392337 = 0 \tag{4.204} \]

is the equation for the y_k (k = 1, 2, 3). Prior to solving Eq. (4.204) it may be of value to test it by means of Eq. (4.48), to determine whether all y_k lie inside the unit circle. By Eq. (4.48),

\[ \sum_{j=0}^{3} w^{\,3-j}\Bigl\{\sum_{k=0}^{3}(-1)^k r_k\Bigl[\sum_{m=0}^{j}\binom{k}{m}\binom{3-k}{j-m}(-1)^m\Bigr]\Bigr\} = 0. \tag{4.205} \]

Expanding,

\[ w^3(1 - r_1 + r_2 - r_3) + w^2(3 - r_1 - r_2 + 3r_3) + w(3 + r_1 - r_2 - 3r_3) + (1 + r_1 + r_2 + r_3) = 0. \]

Therefore,

\[ P(w) = 5.001554\,w^3 + 2.374554\,w^2 + .567794\,w + .056098 = 0. \tag{4.206} \]

Application of the Routh test to P(w) shows that its roots are in the left-hand plane; hence the roots of P(y) are inside the unit circle, as required. Solving for the roots of P(y), one obtains

y_1 = .683177
y_2 = .698607 + j.293651
y_3 = .698607 - j.293651.    (4.207)

The poles, from Table III, are

s_1 = (1/d) ln(.683177) = -1.905
s_2 = (1/2d) ln(.574283) + (j/d) arctan(.293651/.698607) = -1.3866 + j1.98959
s_3 = -1.3866 - j1.98959.    (4.208)

With the poles determined, one can now determine the residues. From Eq. (4.54) one obtains the coefficients b_1m, a'_2m and b'_2m from

b_1m = e^{s_1 t_m}
a'_2m = 2e^{α_2 t_m} cos β_2 t_m
b'_2m = -2e^{α_2 t_m} sin β_2 t_m    (m = 1, 2, ..., 16).    (4.209)

The values of b_1m, a'_2m and b'_2m are computed and are listed in Table VIII.

TABLE VIII
VALUES OF b_1m, a'_2m AND b'_2m AT INTERVAL POINTS

    m     t_m     b_1m      a'_2m      b'_2m
    1     0      1.0000    2.0000      0
    2      .2     .6832    1.3972     -.5873
    3      .4     .4667     .8036     -.8206
    4      .6     .3189     .3205     -.8093
    5      .8     .2178    -.0138     -.6594
    6     1.0     .1488    -.2033     -.4566
    7     1.2     .1017    -.2761     -.2593
    8     1.4     .0695    -.2784     -.1036
    9     1.6     .0475    -.2173      .0091
   10     1.8     .0324    -.1492      .0702
   11     2.0     .0221    -.0836      .0928
   12     2.2     .0151    -.0311      .0894
   13     2.4     .0103     .0045      .0716
   14     2.6     .0071     .0242      .0487
   15     2.8     .0048     .0312      .0269
   16     3.0     .0033     .0297      .0097

From Eq. (4.53) one obtains the relationship

\[ h_m = b_{1m}A_1 + a'_{2m}a + b'_{2m}b \qquad (m = 1, 2, \ldots, 16). \tag{4.210} \]

Substituting the values of h_m from Table VII and the values of b_1m, a'_2m and b'_2m from Table VIII, one obtains:

0 = A_1 + 2a
.1922 = .6832A_1 + 1.3972a - .5873b
.3408 = .4667A_1 + .8036a - .8206b
.4187 = .3189A_1 + .3205a - .8093b
.4219 = .2178A_1 - .0138a - .6594b
.3679 = .1488A_1 - .2033a - .4566b
.2843 = .1017A_1 - .2761a - .2593b

112.1973 =.0695A1 -.2784a -.1036b.1237 =.0475A1 -.2173a +.0091b.0706 =.0324A1 -.1492a +.0702b.0366 =.0221A1 -.0836a +.0928b.0158 =.0151A1 -.0311a +.0894b.0051 =.0103A1 +.0045a +.0716b.0030 =.0071A1 +.0242a +.0487b.0011 =.0048A +.0312a +.0269b.0003 =.0033A1 +.0297a +.0097b. (4.211) Choosing the first, fourth, sixth, and eleventh equations as reference, one obtains: A1 + 2a = 0.3189A1 +.3205a -.8093b -.4187 = 0.1488A1 -.2033a -.4566b -.3679 = 0.0221A -.0836a -.0928b -.0366 = 0. (4.212) Then, x. = (1, 2, 0) X4 = (.3189,.3205, -.8093) x6 = (.1488, -.2033, -.4566) ll = (.0221, -.0836,.0928). (4.213) The equations for X are: +.3189X4 +.1488X6 +.0221xll = 0 2N1 +.3205X4 -.2033X6 +.0836x11 = 0 -.8093x4 -.4566x6 +.09281l = 0 ~ (4.214) Let ll = 1, then l = -.074541, X4 =.402446, 6 = -.510074.

113 Then, Z' x | = 1.987061 (4) a and E' (-h ) = -.0174479. (4) a Hence, E,' (-h ) (4) = = -.008781. (4.215) (4) a Since 6e = (sgn x), therefore, c, =.008781 E4 = -.008781 E6 =.008781 11 = -.008781 Consequently, A1 + 2a -.008781 = 0.3189A1 +.3205a -.8093b -.4187 +.008781 = 0.1488A1 -.2033a -.4566b -.3679 -.008781 = 0, (4.216) yielding A1 =.913966, a = -.452592, b = -.325604. (4.217) The errors in Eqs. (4.211) are: c1 =.008782 c2 = -.008913 c3 = -.010764 64 = -.008781 E5 = -.001889 E6 =.008781

114 E7 =.018040 ~8 =.025955 E9 =.015099 O10 =.003682 ~ = -.008781 12 = - 017032 13 = -.021036 14 = -.023320 E1 = - 019593 E16 = -.013884. (4.218) Since 1i81 > je1, the replacement process must be undertaken. Now, X8 = (.0695, -.2784, -.1036), (4.219) and, by Eq. (4.37), the equations for 4 are: +.318944 +.148846 +.0221411 +. 0 695 = 0 2 1 +.320544 -.203346 -.083611.2784 = 0 -.809344 -.456646 +.092811 -.1036 = O. (4.220) Let,11 = O, then 41 = -.065105, ~4 =.532406, 46 = -1.170558. Since sgn c = -sgn X and Ei > 0, hence from Table II, the equation which is designated by the number given by Max - must be replaced. Now, 1 -. 065105 44.532406 1i -.074541' 4.402446

115 ~6 -1.170558 411 0 A6 -.510074' 1 Max = 6, thus the sixth equation in Eq. (4.211) must be replaced. \a 116 The new equations are: A1 + 2a = 0.3189A1 +.3205a -.8093b -.4187 = 0.0695A1 -.2784a-.1036b-.1973 = 0.0221A1-.0836a +.0928b -.0366 = 0. (4.221) Then, x1 = (1, 2, 0) x4 = (.3189,.3205, -.8093) x8 = (.0695, -.2784, -.1036) Xll = (.0221, -.0836,.0928). (4.222) The X are determined from 1 +.3189X4 +.0695X8 +.022111 = 0 2X1 +.3205A4 -.2784X8 -.836N11 = 0 -.8093x4 -.1036Q8 +.0928Nll = 0. (4.223) Let ]l = 1, then A1 = -.046171, X4 =.170449, A8 = -.435753. Then ETn I = 1.652373 (4) a and ZX'(-h ) = -.0219929 (4) Consequently, ~ A,(-h ) e ( = =-4) -.... -.013310. (4.224) (4) a

116 Since e = C (sgn x), therefore, 1.013310 c4 = -.013310 e8 =.013310 11 = -.013310 By Eq. (4.28), A1 + 2a -.013310 = 0.3189A1 +.3205a -.8093b -.4187 +.013310 = 0.0695A1 -.2784a -.1036b -.1973-.013310 = 0. (4.225) Hence, A =.853762, a = -.420226, b = -.330914. (4.226) The errors in Eq. (4.211) are: 1 =.013310 2 = -.001704 E = -.00o8495 E4 = -.013309 5 = -.011947 E6 = -.004333 E7 =.004358 e8.013310 C. =.005157 O10 = -.003471 e11 = -.013309 612 = -.019423

117 e13 = -.021891 e14 = -.023223 e15 = -.019015 16 = -.013173. (4.227) Since 1e141 > |e|, the replacement process must take place. Now, Now, x = (.0071,.0242,.0487), (4.228) and, by Eq. (4.37), the i are determined from l1 +.3189v4 +.069598 +.0221411 +.0071 = 0 24l +.3205g4 -.2784g8 -.0836411 +.0242 = 0 -.809344 -.103618 +.0928411 +.0487 = 0. (4.229) Let 11 = 0, then l = -.027440, 4 =.055043, 8 =.040093 Since sgn E = -sgn X and e < 0, hence from Table II, the.aa[ equation which is designated by the number given by Min - must be replaced. Now, 1 _ -.027440 44.055043 - -.046171' 4.0170449' 48 o.040093 1ll 8 = -.435753' 1 i Min - = 8, hence, the eighth equation in Eqs. (4.211) must be %a X8 replaced. The new equations are:

118 A + 2a - 0.3189A1 +.3205a -.8093b -.4187 = 0.0221A -.0836a +.0928b -.0366 = 0.0071A +.0242a +.0487b -.0030 = 0. (4.230) Then, x1 = (1, 2, 0) x4 = (.3189,.3205, -.8093) xll = (.0221, -.0636,.092o) 14 = (.0071,.0242,.0487). (4.231) The equations for A are: h +.3189X4 +.0221Xll +.007 =L 0 2X1 +.3205x4 -.0836!11 +.0242\14 = 0 -.8093x4 +.0928ll. +.048714 = 0. (4.232) t 14 = 1, then = -.023041, h =.053825, l -.055387. Then, ZI I = 1.132253 (4) and ZX (-h ) = -.0235094 (4) a Hence, Z'% (-h) = =(4) - -.020763. (4.233) Z' |\ol (4) Since E f= e (sgn ha), therefore,

119 l =.020763 E4 = -.020763 11 =.020763 E14 = -.020763 By Eq. (4.28), A + 2a = 0.3189A1 +.3205a -.8093b -.4187 +.020763 = 0.0221A -.0836a +.0928b -.0366 -.020763 = 0. (4.234) Hence, A = 1.260533, a = -.619885, b = -.240487. (4.235) The errors in Eqs. (4.211) are: 1 =.020763 2 = - 055869 = -.053305 E4 = -.020763 em =.019776 E6 =.055496 7 =.077405 eg =.087797 8 eg =.068688 E10 =..o45846 11 =.020763 C12 =.001013 13 = -.012125 14 = -.020763

120 E - = -.o020859 16 = -.016884. (4.236) Since |c8 > jeJ, the replacement process must proceed. Now, x8 = (.0695, -.2784, -.1036), (4.237) and, by Eq. (4.37), the equations for 4 are: 1l +.3189 4 +.0221411 +.007114 +.0695 = 0 2 +.320 -1.36 +.320 - 36 + 024214-.2784 = 0 -.809344 +.0928l11 +.0487414 -.1036 = 0. (4.238) Let 14 = 0, then 1 =.105957, 4 = -.391159, 11 = -2.294879. Since sgn E = -sgn X and e. > 0, hence, from Table II, the [i equation which is designated by the number given by Max - must be a replaced. Now, 1l.105957 g4 -.391159 \ -.023041.05325 11 -2.294879 414 0 11 -.055387 14 1 11 (4. 14 4a 11 Since Max - =, the eleventh equation in Eqs. (4.211) a 11 must be replaced. The new equations for the reference are: A1 + 2a = 0.3189A1 +.3205a -.8093b -.4187 = 0.0695A1 -.2784a -.1036b -.1973 = 0.0071A1 +.0242a +.0487b -.0030 = 0. (4.239)

121 Then, x1 = (1, 2, 0) X4 = (.3189,.3205, -.8093) X8 = (.0695, -.2784, -.1036) x = (.0071,.0242,.0 487). (4.240) The equations for x are: k1 +.3189x4 +.o695X8 +.0071k14 = 0 2x1 +.3205X4 -.2784x8 +.0242x14 = 0 -.80934 -.1036x8 +.o487x14 = 0. (4.241) Let 14 = 1, then k1 = -.025598, k4 =.063265, k8 = -.024135. Then, E' |xa = 1.112998 (4) and E (-h ) = -.0247272 (4) a Consequently, Z k (-h ) e = -(4) = -.022217. (4.242) L' I I% (4) Since e = e (sgn %H), therefore, ~1 =.022217 E4 = -.022217 c8 =.022217 ~14 = -.022217 By Eq. (4.28):

A_1 + 2a - .022217 = 0
.3189A_1 + .3205a - .8093b - .4187 + .022217 = 0
.0695A_1 - .2784a - .1036b - .1973 - .022217 = 0.    (4.243)

Hence,

A_1 = .914645,  a = -.446214,  b = -.306209.    (4.244)

The errors in Eqs. (4.211) are:

ε_1 = .022217
ε_2 = -.010928
ε_3 = -.021238
ε_4 = -.022216
ε_5 = -.014618
ε_6 = -.001270
ε_7 = .011319
ε_8 = .022217
ε_9 = .013921
ε_10 = .004114
ε_11 = -.007499
ε_12 = -.015487
ε_13 = -.019612
ε_14 = -.022217
ε_15 = -.018869
ε_16 = -.013504.    (4.245)

Since |ε_m| ≤ |e| (m = 1, ..., 16), A_1, a, b are the desired values, and the three-term approximation is:

h*(t) = .914645 e^{-1.905t} + (-.446214 - j.306209) e^{(-1.3866 + j1.9859)t}
                            + (-.446214 + j.306209) e^{(-1.3866 - j1.9859)t}    (4.246)

or

h*(t) = .914645 e^{-1.905t} + (-.446214)(2e^{-1.3866t} cos 1.9859t)
                            + (-.306209)(-2e^{-1.3866t} sin 1.9859t).    (4.247)

The Tschebyscheff error is .022217.

The Laplace transform of Eq. (4.247) is:

H*(s) = (.022217)(s^2 + 36.7901s + 239.8808) / [(s + 1.905)(s^2 + 2.7732s + 5.866458)].    (4.248)

A network realizing H*(s) as a transfer impedance is shown in Fig. 4.10.

[Figure: FIG. 4.10 NETWORK REALIZING THE h*(t) OF EQ. (4.247); E_2/I_1 = H*(s) of Eq. (4.248)]
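The final cycle above can likewise be verified numerically. The sketch below is an editorial illustration, not part of the report's procedure; only the five rows of Eqs. (4.211) that are quoted explicitly in the text are re-checked, and the variable names are assumptions. It reproduces λ, the Tschebyscheff error e, the residues A_1, a, b, and the corresponding errors.

```python
import numpy as np

# Rows of Eqs. (4.211) that appear explicitly in the text: (coefficients, h_m)
rows = {1: ([1, 2, 0], 0.0),
        4: ([.3189, .3205, -.8093], .4187),
        8: ([.0695, -.2784, -.1036], .1973),
        11: ([.0221, -.0836, .0928], .0366),
        14: ([.0071, .0242, .0487], .0030)}

# Final reference {1, 4, 8, 14}: solve Eq. (4.241) with lambda_14 = 1
X = np.array([rows[s][0] for s in (1, 4, 8)]).T          # columns x_1, x_4, x_8
x14 = np.array(rows[14][0])
lam = np.append(np.linalg.solve(X, -x14), 1.0)           # lambda_1, _4, _8, _14
h_ref = np.array([rows[s][1] for s in (1, 4, 8, 14)])
e = lam @ (-h_ref) / np.abs(lam).sum()                   # Eq. (4.242): about -.022217

# Residues A_1, a, b from three reference equations with errors e*sgn(lambda), Eq. (4.243)
C = np.array([rows[s][0] for s in (1, 4, 8)])
d = np.array([rows[s][1] for s in (1, 4, 8)]) + e * np.sign(lam[:3])
A1, a, b = np.linalg.solve(C, d)
print(round(e, 6), round(A1, 6), round(a, 6), round(b, 6))

# Errors of the listed rows of Eqs. (4.211); none of these rows exceeds |e|
errs = {m: float(np.dot(rows[m][0], [A1, a, b]) - rows[m][1]) for m in rows}
print({m: round(v, 6) for m, v in errs.items()})
```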

Plots of h(t) and h*(t) vs. t are shown in Fig. 4.11. The Tschebyscheff error is, in accordance with Eq. (4.242), .022217. The plot of [h*(t) - h(t)] vs. t is shown in Fig. 4.12.

4.6 Conclusions and Summary

In the preceding sections of this chapter, the problem of approximating the impulse response h(t) by a function h*(t), whose Laplace transform is a network function, has been solved. The process developed yields a function such that Max |h(t) - h*(t)| is minimized at the interval points. Since the terms of the approximating function are well-behaved functions themselves, with exponentially decaying envelopes, it can be argued that for sufficiently small intervals a good approximation at the interval points will yield a good approximation between the interval points. The application of this method has shown that good approximations are to be expected and that the Tschebyscheff error for the residues is a meaningful indicator of the overall maximum error to be expected.

The amount of numerical work involved in obtaining h*(t) is governed by two numbers, q and n. Here q is the number of equispaced points [at which h_m (m = 1, 2, ..., q) is known], and n is the number of terms in h*(t). It has been observed that the amount of computational work varies roughly linearly with q, but goes up roughly with the square of n (i.e., for a given q, the amount of work for n = 4 is roughly 16/9 times the amount for n = 3). This rough estimate of the computational work to be expected should enable one to decide at which point it is advisable to utilize automatic computers for the calculation of h*(t).

[Figure: FIG. 4.11 PLOT OF h*(t) AND h(t) (EXAMPLE OF SEC. 4.5.2)]

[Figure: FIG. 4.12 PLOT OF [h*(t) - h(t)] (EXAMPLE OF SEC. 4.5.2); the error lies between -.022217 and +.022217]

The choice of a number for q is dictated by two considerations. These are: (1) the length of the time interval of interest, and (2) the behavior of h(t). If h(t) decays slowly, many points must be considered and q will be large. Similarly, if h(t) displays wild variations (i.e., the derivative of h(t) changes sign often), then q will also be large. Of course, if h(t) varies rather wildly, one may "smooth" h(t) first, and then approximate. Fortunately, the impulse responses demanded in practice from R-L-C networks are relatively well behaved. However, it may be of theoretical interest to consider an approximation to a wildly varying function by the proposed method. Such a function will demand a choice of a large number for q. The number chosen for q should be no less than the time interval in which the approximation takes place divided by the smallest time interval between a relative maximum and a relative minimum of h(t). A good choice for q is a matter of judgment.

The choice of a number for n is dictated by two conflicting requirements. These are: (1) the magnitude of the approximation error and (2) the complexity of the resulting network. An increase in n reduces the error but increases the complexity of the network (i.e., increases the number of network elements). Therefore, the choice of n will be determined either by the maximum error that can be tolerated, or by the maximum allowable complexity of the desired network, or both. Hence, the choice of n is a matter of engineering judgment and should not be prescribed without knowledge of the specific requirements. In many applications, it is desired to solve one of two possible problems: (1) the maximum allowable error is prescribed,

and one wants the simplest network function which will satisfy the requirements on the error, and (2) the maximum number of elements in the network is prescribed, and one desires a network which minimizes the error.

The first problem requires the determination of the minimum n which will satisfy the error requirements. This requires a selection of n, and by the methods of Section 4.4, after some computational work, one can determine the magnitude of the expected final error. However, the precise value of the final error is not known until the computation is nearly complete. Hence, it is certainly conceivable that at the end of the computational process one may discover that either (1) the allowable error has been exceeded, or (2) the final error is sufficiently below the allowable error to question whether a choice of a smaller number for n would have been more appropriate. In either case one may have to repeat the computational process with a different choice for n. A remedy for these two possibilities would be a straightforward relationship between the final error and n. Unfortunately, such a relationship is not apparent. Fortunately, however, rarely if ever does one have to carry the process to near completion before discovering that a better choice for n was indicated. The Tschebyscheff error of every cycle conveys a more precise knowledge of the expected final error than the knowledge available at a previous cycle.

The second problem can be solved in a straightforward manner. n is approximately equal to twice the number of elements. Hence, n is prescribed and one can solve the approximation problem.
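A rough computational aid to both choices is sketched below as an editorial illustration; the sample data, the tolerance, and the stand-in fit() routine are assumptions, not the report's procedure. It evaluates the lower bound on q suggested above and carries out the outer loop that raises n until a prescribed error tolerance is met.

```python
import numpy as np

def q_lower_bound(t, h):
    """Lower bound on q: approximation interval divided by the smallest
    spacing between adjacent relative extrema of the tabulated h(t)."""
    dh = np.diff(h)
    ext = [i for i in range(1, len(dh)) if dh[i - 1] * dh[i] < 0]   # relative maxima/minima
    if len(ext) < 2:
        return 1                      # h is essentially monotone on the interval
    gaps = np.diff(t[ext])
    return int(np.ceil((t[-1] - t[0]) / gaps.min()))

def smallest_n(fit, h_samples, tol, n_max=6):
    """Problem (1): smallest number of terms n whose Tschebyscheff error
    meets a prescribed tolerance.  fit(h_samples, n) stands for the whole
    approximation cycle of Section 4.5 and must return that error."""
    for n in range(1, n_max + 1):
        if abs(fit(h_samples, n)) <= tol:
            return n
    return None                       # tolerance not attainable with n <= n_max

if __name__ == "__main__":
    t = np.linspace(0.0, 4.0, 17)
    h = t * np.exp(-t)                                  # illustrative impulse response
    print("q should be at least", q_lower_bound(t, h))
    dummy_fit = lambda hs, n: 0.09 / n**2               # dummy error model, for illustration
    print("smallest n for |e| <= .022:", smallest_n(dummy_fit, h, 0.022))
```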

The approximation procedure will now be summarized. Prior to approximation, the prescribed impulse response h(t) should be subjected to a preliminary simplification. If h(t) has exponential terms, these terms can be subtracted from h(t) and the remainder approximated. The subtracted terms are then added to the approximating function obtained. Also, a replacement of t by a linear function of t [if t is replaced by a(t+b), then b represents a delay, and a represents a change of time scale] may give rise to simplifications. The approximation obtained is then modified accordingly. In some instances, it is simpler to approximate the derivative or the integral of h(t) rather than h(t) itself. (The system function obtained is then multiplied or divided by s.) Of course, then, h*(t) will not approximate h(t) in the Tschebyscheff sense.

As a result of the preliminary simplifications, the prescribed h(t) is at the start of the approximation process in its simplest form. The interval of approximation is now selected. After the choice of the numbers n and q has been made, p and d are computed and the values of h_m (m = 1, 2, ..., q) are determined. The remaining steps of the process are outlined below; a brief computational sketch follows the outline.

1. Obtain p equations from Σ_{k=0}^{n} h_{v+k} r_{n-k} = 0 (v = 1, 2, ..., p).

2. Select a reference [E_σ].

3. Find the n+1 x_σ from: x_σ = (h_σ, h_{σ+1}, ..., h_{σ+n-1}).

4. Find λ_σ from Σ_{(n+1)} λ_σ x_σ = 0 (one of the λ_σ is arbitrary, but λ_σ ≠ 0).

5. Compute e = Σ_{(n+1)} λ_σ h_{σ+n} / Σ_{(n+1)} |λ_σ|.

6. Find the n+1 ε_σ from ε_σ = e(sgn λ_σ).

7. Find r_k (k = 1, 2, ..., n) from the n equations

   Σ_{k=0}^{n} h_{σ+k} r_{n-k} - e(sgn λ_σ) = 0,  (r_0 = 1).

8. Compute ε_v (v = 1, 2, ..., p) from

   ε_v = Σ_{k=0}^{n} h_{v+k} r_{n-k}

   (the ε_v for v = σ were already computed in 6).

9. Is |ε_v| ≤ |ε_σ|? If yes, the next step is 17. If no, the next step is 10.

10. Find i from |ε_i| = Max |ε_v| (v = 1, 2, ..., p). (If there is more than one maximum, choose any one.)

11. Find x_i = (h_i, h_{i+1}, ..., h_{i+n-1}).

12. Find μ_σ from Σ_{(n+1)} μ_σ x_σ + x_i = 0 (one of the μ_σ is arbitrary).

13. Compute μ_σ/λ_σ.

14. Find from Table II the hyper-plane E_ℓ which is to be replaced by E_i.

15. Replace E_ℓ by E_i, forming a new reference.

16. Repeat from step 3.

17. Apply Routh's test to

    Σ_{k=0}^{n} (-1)^k r_k (1+w)^{n-k} (1-w)^k = 0,  (r_0 = 1),

    the polynomial obtained from Σ_{k=0}^{n} r_k y^{n-k} = 0 by the transformation y = (w+1)/(w-1) of Fig. 4.3. Are all roots in the left-hand plane? If yes, the next step is 20. If no, the next step is 18.

18. Increase the interval of approximation, choosing a new q (q_new > q_old; hence p_new > p_old).

19. Repeat from step 1.

20. Solve Σ_{k=0}^{n} r_k y^{n-k} = 0  (r_0 = 1).

21. Compute the poles from Table III.

22. Find b_km, a'_ℓm, b'_ℓm from

    b_km = e^{s_k t_m}  (k = 1, 2, ..., n-2w)
    a'_ℓm = 2e^{α_ℓ t_m} cos β_ℓ t_m  (ℓ = 1, 2, ..., w)
    b'_ℓm = -2e^{α_ℓ t_m} sin β_ℓ t_m  (ℓ = 1, 2, ..., w),

    where the s_k are the real poles, the s_ℓ = α_ℓ + jβ_ℓ are the complex poles, and 2w is the number of complex poles.

23. Obtain q equations from

    Σ_{k=1}^{n-2w} b_km A_k + Σ_{ℓ=1}^{w} (a'_ℓm a_ℓ + b'_ℓm b_ℓ) - h_m = 0  (m = 1, 2, ..., q).

24. Select a reference [E_σ].

25. Find the n+1 x_σ from

    x_σ = (b_1σ, b_2σ, ..., b_(n-2w)σ, a'_1σ, ..., a'_wσ, b'_1σ, ..., b'_wσ).

26. Find λ_σ from Σ_{(n+1)} λ_σ x_σ = 0 (one of the λ_σ is arbitrary, but λ_σ ≠ 0).

27. Compute e = Σ_{(n+1)} λ_σ(-h_σ) / Σ_{(n+1)} |λ_σ|.

28. Find the n+1 ε_σ from ε_σ = e(sgn λ_σ).

29. Find A_k (k = 1, 2, ..., n-2w), a_ℓ, b_ℓ (ℓ = 1, 2, ..., w) from the n equations

    Σ_{k=1}^{n-2w} b_kσ A_k + Σ_{ℓ=1}^{w} (a'_ℓσ a_ℓ + b'_ℓσ b_ℓ) - h_σ - e(sgn λ_σ) = 0.

30. Compute ε_m (m = 1, 2, ..., q) from

    ε_m = Σ_{k=1}^{n-2w} b_km A_k + Σ_{ℓ=1}^{w} (a'_ℓm a_ℓ + b'_ℓm b_ℓ) - h_m.

31. Is |ε_m| ≤ |ε_σ|? If yes, the next step is 39. If no, the next step is 32.

32. Find i from |ε_i| = Max |ε_m| (m = 1, 2, ..., q).

33. Find x_i = (b_1i, ..., b_(n-2w)i, a'_1i, ..., a'_wi, b'_1i, ..., b'_wi).

34. Find μ_σ from Σ_{(n+1)} μ_σ x_σ + x_i = 0 (one of the μ_σ is arbitrary).

35. Compute μ_σ/λ_σ.

36. Find from Table II the hyper-plane E_ℓ which is to be replaced by E_i.

37. Replace E_ℓ by E_i, forming a new reference [E_σ].

38. Repeat from step 25.

39. Form

    h*(t) = Σ_{k=1}^{n-2w} A_k e^{s_k t} + Σ_{ℓ=1}^{w} [a_ℓ(2e^{α_ℓ t} cos β_ℓ t) + b_ℓ(-2e^{α_ℓ t} sin β_ℓ t)].

40. Find H*(s) = L[h*(t)]. H*(s) is the system function of the desired network N.
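The exchange cycle of steps 23-38 was laid out for hand computation. When a digital computer is used, the same discrete Tschebyscheff problem can be posed directly as a linear program: minimize e subject to |Σ_k b_km A_k + Σ_ℓ (a'_ℓm a_ℓ + b'_ℓm b_ℓ) - h_m| ≤ e for every m. The sketch below is an editorial illustration of that alternative formulation (the function name, the solver, and the sample data are assumptions; it is not the exchange method itself).

```python
import numpy as np
from scipy.optimize import linprog

def tschebyscheff_fit(B, h):
    """Minimize  max_m | B[m,:] @ x - h[m] |  over x.
    Linear-programming stand-in for the exchange cycle of steps 24-38;
    returns the coefficient vector and the Tschebyscheff error e."""
    q, n = B.shape
    c = np.r_[np.zeros(n), 1.0]                       # variables: (x_1..x_n, e)
    A_ub = np.r_[np.c_[ B, -np.ones((q, 1))],         #  Bx - h <= e
                 np.c_[-B, -np.ones((q, 1))]]         # -(Bx - h) <= e
    b_ub = np.r_[h, -h]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1), method="highs")
    return res.x[:n], res.x[n]

if __name__ == "__main__":
    # Illustrative data only: fit A*e^{-t} + B*t*e^{-t} to perturbed samples.
    t = np.linspace(0.0, 4.0, 17)
    h = 0.9 * np.exp(-t) - 0.4 * t * np.exp(-t) + 0.01 * np.cos(7 * t)
    B = np.c_[np.exp(-t), t * np.exp(-t)]
    coeffs, e = tschebyscheff_fit(B, h)
    print("coefficients:", coeffs, " Tschebyscheff error:", e)
```

For a system of the size met in Sec. 4.5.2 (sixteen equations, three unknowns) such a program is solved essentially instantaneously, while the exchange outline above remains the natural hand procedure.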

This outline summarizes the process of approximation of h(t) by h*(t), from which H*(s) can be obtained. H*(s) can now be synthesized as the desired transfer function, thus yielding the desired network N. The last e (computed in step 27) is the Tschebyscheff error of the approximation.

CHAPTER V

CONCLUSIONS

The goal of the preceding chapters has been to develop a theory for synthesizing R-L-C networks that meet prescribed input-output requirements in the time domain to a Tschebyscheff approximation. The method developed is a numerical one and hence permits the input-output relationship to be prescribed either in the form of an equation or in the form of data.

The approximation process is discussed in detail for a prescribed impulse response. Consideration is given to the more general problem of obtaining a network with a prescribed response to an arbitrary input. It is shown in Appendix B that this problem can be reduced to an equivalent problem of obtaining a network having a prescribed impulse response.

The approximation process developed in this dissertation yields an impulse response function approximating the prescribed one. The approximating impulse response is the inverse Laplace transform of an R-L-C network function. In this way one may determine the impulse response of a realizable network which approximates the prescribed impulse response in the Tschebyscheff sense.

In the opinion of the author, the chief contributions of this investigation are as follows:

1. Application of the discrete Tschebyscheff approximation theory to network problems.

2. Development of a general solution to the problem of network approximation in the time domain.

3. Development of a general numerical method of approximation of a prescribed impulse response by the impulse response of a realizable R-L-C network. The error of the approximation of the realizable impulse response is minimized through optimization of both its pole locations and its residues.

4. A detailed investigation of the effect on the approximation of both the error in the pole determination and the error in the residue determination.

The above results are encouraging. However, it is noted that the calculations tend to become lengthy when the number of terms in the approximating function is large. In addition, as might be expected, a large number of points must be considered when the prescribed time response varies wildly (i.e., when the derivative of the time response changes sign often). Consequently, occasions may arise when the numerical calculations can advantageously be programmed on a digital computer.

A number of topics meriting further study have arisen during this investigation. In particular, it would be desirable to extend the method developed so as to permit a Tschebyscheff approximation with certain constraints. Such constraints might, for example, involve a restriction of the poles to the negative real axis (thus yielding an R-C network), or a restriction of the poles to the jω axis (thus yielding a lossless network). Many other constraints dictated by practical considerations might be considered.

The method of discrete Tschebyscheff approximations can be employed advantageously whenever a problem can be reduced to an overdetermined system of linear equations. It is believed that some network problems in the frequency domain also show promise of solution by this approach.

APPENDIX A

Step Input Problem

In many applications a network N is desired which will provide a prescribed response k(t) to a unit step input. This problem can be reduced to the one discussed in Chapter IV, i.e., the synthesis of a network with a prescribed impulse response, through differentiation of k(t) and equating it to h(t). Hence,

h(t) = k'(t).    (A1)

With h(t) determined, the methods of Chapter IV can be applied, producing the desired network N.

However, in cases where k(t) cannot be differentiated without error [for example, when k(t) is given as numerical data], it is more accurate to approximate k(t) by k*(t) and then to differentiate to obtain h*(t). Such an approach will be outlined in this appendix.

The requirement of stability demands that h(t) approach zero for sufficiently large t. This requires k(t) to approach a constant for large t. Hence, if k*(t), an approximation to k(t), is represented as

k*(t) = B_0 + Σ_{k=1}^{n} B_k e^{s_k t}    (Re s_k < 0),    (A2)

then

h*(t) = k*'(t) = Σ_{k=1}^{n} (s_k B_k) e^{s_k t} = Σ_{k=1}^{n} A_k e^{s_k t}    (Re s_k < 0),    (A3)

where A_k = s_k B_k, has the required form.

If B_0 is known (which is the usual case), one can form k̃*(t) = k*(t) - B_0; then

k̃*(t) = Σ_{k=1}^{n} B_k e^{s_k t}    (Re s_k < 0).    (A4)

k̃*(t) can now be determined by the methods of Chapter IV, yielding h*(t) given by

h*(t) = k̃*'(t) = Σ_{k=1}^{n} A_k e^{s_k t}    (Re s_k < 0),    (A5)

from which the desired network can be synthesized.

If B_0 is not known in advance to sufficient accuracy, and cannot be subtracted, one can form a set of equations at equally spaced intervals in a manner similar to Eq. (4.5) and Eq. (4.7). If the interval spacing is d, and if there are q points of data, then

k_m = k(t_m) = B_0 + Σ_{k=1}^{n} B_k e^{s_k t_m}    (m = 1, 2, ..., q).    (A6)

By Eq. (4.8),

e^{s_k d} = y_k,    (A7)

and by Eq. (4.9),

B_k e^{s_k t_v} = z_kv.    (A8)

Then,

k_v = B_0 + Σ_{k=1}^{n} z_kv

k_{v+1} = B_0 + Σ_{k=1}^{n} z_kv y_k

k_{v+2} = B_0 + Σ_{k=1}^{n} z_kv y_k^2
. . . . . . . . . . . .
k_{v+i} = B_0 + Σ_{k=1}^{n} z_kv y_k^i    (v+i = 1, ..., q).    (A9)

If one takes the difference of two successive equations, one obtains:

k_{v+1} - k_v = Σ_{k=1}^{n} z_kv (y_k - 1)
k_{v+2} - k_{v+1} = Σ_{k=1}^{n} z_kv (y_k - 1) y_k
k_{v+3} - k_{v+2} = Σ_{k=1}^{n} z_kv (y_k - 1) y_k^2
. . . . . . . . . . . .
k_{v+i} - k_{v+i-1} = Σ_{k=1}^{n} z_kv (y_k - 1) y_k^{i-1}.    (A10)

Let

Δ_{v+m} = k_{v+m+1} - k_{v+m};    (A11)

then,

Δ_v = Σ_{k=1}^{n} z_kv y_k - Σ_{k=1}^{n} z_kv
Δ_{v+1} = Σ_{k=1}^{n} z_kv y_k^2 - Σ_{k=1}^{n} z_kv y_k
. . . . . . . . . . . .
Δ_{v+i-1} = Σ_{k=1}^{n} z_kv y_k^i - Σ_{k=1}^{n} z_kv y_k^{i-1}    (v+i = 1, 2, ..., q-1).    (A12)

Introducing the functions r_k (k = 0, 1, ..., n) defined in Eqs. (4.11), one obtains

r_n Δ_v + r_{n-1} Δ_{v+1} + ... + r_0 Δ_{v+n} = I_3 - I_2,    (A13)

where

I_3 = r_n Σ_{k=1}^{n} (z_kv y_k) + r_{n-1} Σ_{k=1}^{n} (z_kv y_k) y_k + ... + r_0 Σ_{k=1}^{n} (z_kv y_k) y_k^n    (A14)

and

I_2 = r_n Σ_{k=1}^{n} z_kv + r_{n-1} Σ_{k=1}^{n} z_kv y_k + ... + r_0 Σ_{k=1}^{n} z_kv y_k^n.    (A15)

But it was shown in the proof of Theorem 1 that

I_3 = I_2 = 0.    (A16)

Hence,

Σ_{k=0}^{n} r_{n-k} Δ_{v+k} = 0    (v+k = 1, 2, ..., q-1).    (A17)

Therefore, Eqs. (A17) form an overdetermined system which can be solved by the methods of Chapter IV, yielding h*(t), and thus the desired network N. It should be noted that the system now has only p-1 equations rather than p (p = q-n) equations, since the number of equations was reduced by one through the difference-taking process.
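A minimal sketch of the difference-taking set-up is given below as an editorial illustration; the array names and the sample step response are assumptions, and a crude least-squares solve stands in for the Tschebyscheff exchange of Chapter IV, which is not repeated here.

```python
import numpy as np

def difference_system(k_samples, n):
    """Form the overdetermined system (A17) from equally spaced samples
    k_1, ..., k_q of the step response: each row pairs the differences
    Delta_v, ..., Delta_{v+n} with r_n, ..., r_0 (r_0 = 1)."""
    delta = np.diff(k_samples)            # Delta_m = k_{m+1} - k_m
    rows = len(delta) - n                 # p - 1 equations
    # unknowns r_n, ..., r_1; the r_0 * Delta_{v+n} term goes to the right-hand side
    A = np.array([delta[v:v + n] for v in range(rows)])
    b = -delta[n:n + rows]
    return A, b

if __name__ == "__main__":
    d = 0.25
    t = d * np.arange(1, 18)                       # q = 17 sample instants
    k = 1.0 - np.exp(-t) - t * np.exp(-t)          # illustrative step response
    A, b = difference_system(k, n=2)
    r, *_ = np.linalg.lstsq(A, b, rcond=None)      # crude stand-in for Chapter IV
    print("r_2, r_1 =", r)                         # roots of y^2 + r_1*y + r_2 give e^{s_k d}
```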

APPENDIX B

Arbitrary Input Problem

It was pointed out in Chapter II that most problems for a prescribed time response include specifications of a particular input e_i(t) and of the corresponding response e_0(t), rather than the impulse response h(t). A number of techniques are available, however, for the reduction of such input conditions to the equivalent h(t) desired. One technique, a slight modification of an approach advanced by E. A. Guillemin [8], will be presented in this appendix.

The general relationship between e_i(t), e_0(t) and h(t) is given by

e_0(t) = ∫_0^t e_i(x) h(t-x) dx = ∫_0^t e_i(t-x) h(x) dx.    (B1)

If e_i(t) or h(t) is replaced by its kth derivative, then e_0(t) becomes replaced by its kth derivative. If e_i(t) is differentiated k times and h(t) is integrated k times, then e_0(t) is unaffected. It follows that if e_i(t) is differentiated k times and h(t) is differentiated m times, then e_0(t) becomes replaced by its (k+m)th derivative. This relationship can be stated as

e_0^(k+m)(t) = ∫_0^t e_i^(k)(x) h^(m)(t-x) dx.    (B2)

Equation (B2) can be considered to be a generalization of Eq. (B1). In particular, if only e_i(t) is differentiated k times (i.e., m = 0),

e_0^(k)(t) = ∫_0^t e_i^(k)(x) h(t-x) dx.    (B3)
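Equation (B3) can be checked numerically. The sketch below is an editorial illustration only; the particular input, impulse response, and step size are assumptions, and derivatives are taken by simple finite differences. It convolves the differentiated input with h(t) and compares the result with the differentiated output.

```python
import numpy as np

dt = 0.001
t = np.arange(0, 6, dt)
ei = np.sin(t) * np.exp(-0.5 * t)          # assumed input e_i(t), with e_i(0) = 0
h = t * np.exp(-t)                         # assumed impulse response h(t)

conv = lambda f, g: np.convolve(f, g)[:len(t)] * dt   # causal convolution on [0, t]

e0 = conv(ei, h)                           # Eq. (B1)
lhs = np.gradient(e0, dt)                  # d/dt of the output
rhs = conv(np.gradient(ei, dt), h)         # Eq. (B3) with k = 1
print("max discrepancy:", np.max(np.abs(lhs - rhs)[10:-10]))   # small (finite-difference error)
```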

If e_i(t) is approximated by ẽ_i(t), which is a sequence of q curves, each of which is given by a (k-1)-degree polynomial, then ẽ_i^(k)(t) can be represented by a sequence of impulses. If a uniform time increment d is chosen between approximating curves, then all impulses are uniformly spaced. Then ẽ_i^(k)(t) can be represented as

ẽ_i^(k)(t) = Σ_{m=0}^{q} c_m δ(t - md),    (B4)

where δ(t) is the unit impulse. From Eq. (B3) one obtains:

e_0^(k)(t) = c_0 h(t) + c_1 h(t-d) + ... + c_q h(t-qd).    (B5)

Since h(t) = 0 for t < 0, therefore,

e_0^(k)(0) = c_0 h(0)
e_0^(k)(d) = c_0 h(d) + c_1 h(0)
. . . . . . . . . . . . . . . . . . . .
e_0^(k)(qd) = c_0 h(qd) + c_1 h[(q-1)d] + ... + c_q h(0).    (B6)

Since e_0(t) is known, e_0^(k)(md) (m = 0, 1, ..., q) can be determined. The coefficients c_m (m = 0, 1, ..., q) can be determined from Eq. (B4). Thus one has in Eq. (B6) a set of q+1 equations for the q+1 unknowns h(0), h(d), ..., h(qd). The solutions of Eqs. (B6) yield, therefore, the values of the impulse response at the interval points, thus providing a starting point for the impulse-response approximation problem.
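Since h(t) = 0 for t < 0, Eqs. (B6) form a lower-triangular system and can be solved by forward substitution. A minimal sketch follows (an editorial illustration; the sample data and the impulse strengths c_m are assumptions, not taken from the report).

```python
import numpy as np
from scipy.linalg import solve_triangular

def impulse_response_points(e0k, c):
    """Solve Eqs. (B6):  e0k[m] = sum_{j<=m} c[j] * h[(m-j)d]  for h(0), h(d), ..., h(qd)."""
    q1 = len(e0k)                                   # q + 1 points
    T = np.zeros((q1, q1))
    for m in range(q1):
        for j in range(m + 1):
            T[m, m - j] = c[j]                      # coefficient of h[(m-j)d]
    return solve_triangular(T, e0k, lower=True)

if __name__ == "__main__":
    d = 0.25
    h_true = (np.arange(8) * d) * np.exp(-np.arange(8) * d)   # h(md), illustrative
    c = np.array([1.0, -0.5, 0.25, 0.1, 0.0, 0.0, 0.0, 0.0])  # assumed impulse strengths
    e0k = np.array([sum(c[j] * h_true[m - j] for j in range(m + 1)) for m in range(8)])
    print(impulse_response_points(e0k, c))          # recovers h_true
```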

LIST OF REFERENCES

1. Ba Hli, Freddy, "A General Method for Time Domain Network Synthesis," Transactions of the Institute of Radio Engineers, Vol. CT-1, pp. 21-29, Sept. 1954.

2. Cauer, Wilhelm, "Das Poissonsche Integral und Seine Anwendungen auf die Theorie der Linearen Wechselstromschaltungen (Netzwerke)," Elek. Nach. Tech., p. 17, Jan. 1940.

3. Chestnut, H., and Mayer, R. W., Servomechanisms and Regulating System Design, John Wiley and Sons, Inc., New York, 1951, pp. 134-137.

4. Churchill, R. V., Complex Variables and Applications, McGraw-Hill Book Co., New York, 1948.

5. Collatz, L., "Approximation von Funktionen bei einer und bei mehreren unabhängigen Veränderlichen," Z. Angew. Math. und Mech., Vol. 36, pp. 198-221, 1956.

6. Gilbert, E. G., "Linear System Approximation by Mean Square Error Minimization in the Time Domain," Ph.D. Thesis, Dept. of Aeronautical Engineering, Univ. of Mich., Jan. 1957.

7. Guillemin, E. A., "Computational Techniques which Simplify the Correlation between Steady-State and Transient Responses of Filters and Other Networks," Proc. Nat. Electronics Conf., 1953, Vol. 9, 1954.

8. Guillemin, E. A., Synthesis of Passive Networks, J. Wiley and Sons, 1957, pp. 707-726.

9. Guillemin, E. A., "What is Network Synthesis," Transactions of the Institute of Radio Engineers, Vol. PGCT-1, pp. 4-19, December 1952.

10. Huggins, W. H., "Network Approximation in the Time Domain," Report E5048A, Air Force Research Laboratories, Cambridge, Mass., Oct. 1949.

11. Kautz, W. H., "Transient Synthesis in the Time Domain," Transactions of the Institute of Radio Engineers, Vol. CT-1, pp. 29-39, Sept. 1954.

12. Lanning, J. H., Jr., and Battin, R. H., Random Processes in Automatic Control, McGraw-Hill Book Company, New York, 1956.

13. Lee, Y. W., "Synthesis of Electrical Networks by Means of the Fourier Transforms of Laguerre's Functions," Journal of Mathematics and Physics, Vol. 11, pp. 83-113, June 1932.

14. Lewis, N. W., "Waveform Computations by the Time Series Method," Proceedings of the Institute of Electrical Engineers, Vol. 99, Part III, pp. 109-110, Sept. 1952.

15. Linvill, W. K., "Use of Sampled Functions for Time Domain Synthesis," Proc. of the National Electronics Conference, Vol. 9, pp. 533-542, 1953.

16. Mathers, G. W. C., "The Synthesis of Lumped-Element Circuits for Optimum Transient Response," Technical Report No. 28, Electronics Research Laboratories, Stanford University, Nov. 1951.

17. Nadler, M., "The Synthesis of Electrical Networks According to Prescribed Transient Conditions," Proceedings of the Institute of Radio Engineers, Vol. 37, pp. 627-629, June 1949.

18. Otterman, J., "Time Domain Synthesis for an Analog Computer Setup," Proceedings of the National Simulation Conference, pp. 24.1-24.5, Dallas, Texas, January 1956.

19. Padé, H., "Sur la représentation approchée d'une fonction par des fractions rationnelles," Annales de l'École Normale, (3) Vol. 9, pp. 1-93, 1892.

20. Perron, O., Die Lehre von den Kettenbrüchen, Teubner Verlag, Leipzig, 1929.

21. Prony, Journal de l'École Polytechnique, Cah. 2 (an IV), 1795, p. 29.

22. Spencer, R. C., "Network Synthesis and the Moment Problem," Transactions of the Institute of Radio Engineers, Vol. CT-1, pp. 32-33, June 1954.

23. Stiefel, E., "Über diskrete und lineare Tschebyscheff-Approximationen," Numerische Mathematik, Band 1, Heft 1, Springer Verlag, pp. 1-28, 1959.

24. Strieby, M., "A Fourier Method for Time Domain Synthesis," Proceedings of the Symposium on Modern Network Synthesis, pp. 197-211, New York, April 1955.

25. Teasdale, R. D., "Time Domain Approximation by Use of Padé Approximants," The Institute of Radio Engineers Convention Record, Part 5, pp. 89-94, March 1953.

26. Truxal, John G., Automatic Feedback Control System Synthesis, McGraw-Hill Book Company, Inc., New York, 1955.

27. Tuttle, D. F., Jr., Network Synthesis, Vol. 1, J. Wiley and Sons, Inc., New York, 1958.

28. Vallée-Poussin, Ch. J. de La, "Sur la méthode de l'approximation minimum," Annales de la Société Scientifique de Bruxelles, 2e partie, mémoires, Vol. 35, pp. 1-16, 1911.

29. Whittaker and Robinson, The Calculus of Observations, 4th Edition, Blackie and Son, Limited, London, 1952.

30. Willers, F. A., Methoden der praktischen Analysis, Walter de Gruyter Verlag, Berlin, 1950.

31. Zabusky, N. J., "A Numerical Method for Determining a System Impulse Response from the Transient Response to Arbitrary Inputs," Transactions of the Institute of Radio Engineers, Vol. PGAC-1, pp. 40-56.


