WORK SESSION IN LYAPUNOV'S SECOND METHOD Sponsored by the Nonlinear Control Subcommittee of the AIEE Feedback Control Systems Committee Edited by L. F. Kazda April, 1962 Second Printing

For additional copies write to: The University of Michigan Industry Program of the College of Engineering Ann Arbor, Michigan

PREFACE

This pamphlet contains papers and problems of a Workshop Session on the Second Method of Lyapunov, sponsored by the Nonlinear Control Theory Subcommittee of the American Institute of Electrical Engineers, held on September 6, preceding the Joint Automatic Control Conference for 1960, at the Massachusetts Institute of Technology, Cambridge, Massachusetts.

In 1892 A. M. Lyapunov, a Russian mathematician, postulated in his book, The General Problem of Motion Stability, a number of sufficient conditions for the stability or instability of undisturbed systems. By reducing the problem of the stability of an undisturbed system to the problem of the stability of the equilibrium position, Lyapunov connected the fact of stability or instability with the presence of a "V" function, the time derivative of which has certain properties. For a long time it was not clear whether the conditions postulated by him were also necessary; only much later were the conditions established which insure the existence of a Lyapunov function. In 1949 the work was translated into English, and since then it has received some attention in this country, but only within the past five years has it been given serious consideration by those interested in feedback control systems. At the present time it is considered to be the most general method of studying the stability of nonlinear systems.

The purpose of the Workshop Session as planned by the Nonlinear Control Theory Subcommittee was to organize a group of papers which would serve to introduce this subject to the uninformed, starting with introductory concepts and culminating in a group of home problems designed to enhance the reader's understanding of the subject. The workshop committee directly responsible for the success of this Session consisted of Professor I. Flugge-Lotz, Stanford University, Chairman; Dr. Kan Chen, Westinghouse Electric Corporation; Professor John E. Gibson, Purdue University; and Professor T. J. Higgins, University of Wisconsin; although all the members of the Nonlinear

Control Theory Subcommittee should be congratulated for their willingness to assist in every way possible. Thanks go to The University of Michigan Industry Program, which made this publication possible.

Louis F. Kazda
Chairman, Nonlinear Control Theory Subcommittee

TABLE OF CONTENTS

PREFACE

AN INTRODUCTION TO LYAPUNOV'S SECOND METHOD
    W. J. Cunningham

LYAPUNOV APPROACH TO STABILITY AND PERFORMANCE OF NONLINEAR CONTROL SYSTEMS
    Y. H. Ku and D. W. C. Shen

PRINCIPAL DEFINITIONS OF STABILITY
    D. R. Ingwerson

THE "DIRECT METHOD" OF LYAPUNOV IN THE ANALYSIS AND DESIGN OF DISCRETE-TIME CONTROL SYSTEMS
    J. E. Bertram

STABILITY ANALYSIS OF NUCLEAR REACTORS BY LIAPOUNOFF'S SECOND METHOD
    Thomas J. Higgins

A RESUME OF THE BASIC LITERATURE ON NONLINEAR SYSTEM THEORY (WITH PARTICULAR REFERENCE TO LYAPUNOV'S METHODS)
    Thomas J. Higgins

A PROBLEM IN STABILITY ANALYSIS BY DIRECT MANIPULATION OF THE EQUATION OF MOTION
    D. R. Ingwerson

APPLICATION OF LIAPUNOV'S SECOND METHOD TO CONTROL SYSTEMS WITH NONLINEAR GAIN
    J. E. Gibson and Z. V. Rekasius

AN INTRODUCTION TO LYAPUNOV'S SECOND METHOD W. J. Cunningham Yale University New Haven, Connecticut

1. INTRODUCTION

A fundamental problem associated with the study of dynamic systems is the determination of their stability. While various techniques are available for investigating stability, these techniques typically become difficult and tedious to apply if the system is of high order, or is nonlinear or time-varying. An approach to this problem, developed seventy years ago in Russia but almost unrecognized in this country until quite recently, is the so-called second method of Lyapunov. This is a method which has been much exploited in the Soviet Union, and which appears to have great power and flexibility. It does not provide a purely mechanical procedure applicable to all situations; it does require ingenuity to apply to other than standard situations. On the other hand, it may give information about systems that cannot be analyzed in other ways.

An introduction to this second method of Lyapunov is given in the following discussion. The purpose here is to describe the basic idea of the method, to show something of the range of problems to which it applies, and to provide simple illustrative examples. The discussion is presented with a minimum of mathematical niceties, and no attempt is made to justify the procedures which are employed. Most of the published Western literature about the method has been written by mathematicians for a mathematical audience. Those readers interested in this aspect of the subject are referred to the literature(1,2). The recent paper by Kalman and Bertram(3), in particular, provides an excellent survey of the method, together with appropriate mathematical proofs, and gives many bibliographic references.

2. MATHEMATICAL DESCRIPTION OF SYSTEM

Throughout the following discussion it is assumed that the description of the dynamic system under study has been reduced to a set of simultaneous first-order differential equations

    dx1/dt = ẋ1 = f1(x1,...,xn)
    ...                                                        (1)
    dxn/dt = ẋn = fn(x1,...,xn)

The independent variable in these equations is time t, and the various dependent variables are x1,...,xn. The functions f1(x1,...,xn),...,fn(x1,...,xn) may be nonlinear but are assumed to be differentiable. The order of the system is the integer n. The system described by Equation (1) is said to be stationary in the sense that the functions f1,...,fn do not depend upon time t. The system is said to be free in the sense that no explicit functions of time appear as forcing functions. A free stationary system is sometimes said to be autonomous. An equilibrium condition exists if the variables have such values x1e,...,xne that all the derivatives dx1/dt,...,dxn/dt are simultaneously zero. In a general way, an equilibrium condition is described as stable if the system tends to remain at that condition following any small disturbance away from the condition.

The dependent variables x1,...,xn must be chosen in such a way, and in sufficient number, to describe completely the system under study. For a physical system, these variables will generally be quantities which have certain physical dimensions. It is often possible

to make more than a single choice for the variables used in describing a particular system. Sometimes one choice has advantages over some other choice. It should be noted here that in many of the operations which appear in the following discussion it is essential that all the dependent variables be chosen so as to have the same physical dimensions. Either the choice must be made intentionally with this criterion in mind, or else the dimensions must be made the same by suitable conversion factors. Sometimes it is desirable to normalize all quantities into pure numerics having no dimensions. Such normalization can always be done. This necessity for having a common dimension comes about because of the way in which coefficients arising at several points in the equations must be combined in subsequent work.

It is evident that if the order of the system is other than quite small, the set of equations forming Equation (1) is going to be complicated and difficult to manipulate. For this reason it is essential to use matrix notation in the analysis. Rewritten in this notation, Equation (1) becomes

    dx/dt = ẋ = f(x)                                           (2)

where x is the column matrix, or vector, made up of the n dependent variables, and f(x) is a similar column matrix of the functions. An equilibrium condition for Equation (2) is x = xe, for which dx/dt = 0, where 0 is the zero column matrix. If the functions f(x) are linear, and f(0) = 0, the one equilibrium condition is xe = 0. If the functions f(x) are nonlinear, there may be more than one equilibrium condition.

The values of the n variables in the column matrix x at any instant describe the state of the system at that instant. It is

convenient to have a single number to represent, at least in part, the state of the system. Such a single number may be a norm, which can be defined in any of several ways. The norm is sometimes taken as the sum of the magnitudes of all the state variables. For the present discussion, it is more convenient to take the square of the Euclidean norm, written as follows,

    ‖x‖² = x'x = x1² + x2² + ... + xn²

The primed matrix x' is the transpose of the unprimed matrix x. This quantity ‖x‖² can readily be interpreted geometrically, at least for the cases of two or three variables. It is the square of the distance from the origin to the point representing the particular state x of the system, as plotted in rectangular coordinates. If the origin is an equilibrium point, xe = 0, the norm ‖x‖² provides a simple measure of the departure from this equilibrium point.

3. STABILITY OF SYSTEM

The precise definition of stability, particularly for a nonlinear or time-varying system, is not simple. This question will not be explored here. Rather, for the present discussion, only the concept of asymptotic stability will be employed. An equilibrium condition is asymptotically stable if the system ultimately returns to this condition following any slight disturbance away from it. Stated in another way, if an initial disturbance ‖x - xe‖ is small, asymptotic stability implies that ultimately ‖x - xe‖ → 0 as t → ∞. For a linear system the disturbance need not be limited in magnitude. Since

a nonlinear system may have more than just one equilibrium condition, the disturbance used to test its stability must be small enough so that the system remains near the point being investigated. The exact nature of the nonlinearity governs the required smallness here.

For a linear system with an equilibrium condition at x = xe = 0, Equation (2) may be written

    dx/dt = ẋ = A x                                            (3)

where A is a square matrix of constant coefficients. The solution for Equation (3) is known to be of the general form

    xi = Σ(j=1 to n) Cij exp(λj t)                             (4)

where the n characteristic exponents, or eigenvalues, are λj, and the constants Cij depend upon initial conditions at t = 0. The eigenvalues are determined by the coefficients A in Equation (3), and are roots of the characteristic equation

    |A - λI| = 0                                               (5)

where I is the unit matrix. For a real physical system, matrix A is real, and the eigenvalues must either be real or occur in complex conjugate pairs. The system is asymptotically stable only if every eigenvalue has a negative real part.
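This eigenvalue criterion is easy to check numerically. The sketch below (a hypothetical second-order matrix A, chosen here for illustration and not taken from the text) computes the roots of the characteristic equation (5) by the quadratic formula and tests that every eigenvalue has a negative real part:

```python
import cmath

# Hypothetical stable system matrix A (illustrative values only)
A = [[0.0, 1.0],
     [-2.0, -3.0]]

def eigenvalues_2x2(A):
    """Roots of |A - lambda*I| = 0 (Equation (5)) for a 2x2 matrix,
    via the characteristic polynomial lambda^2 - tr*lambda + det."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

lam1, lam2 = eigenvalues_2x2(A)
# Asymptotic stability: every eigenvalue has a negative real part
assert lam1.real < 0 and lam2.real < 0
```

For this choice of A the eigenvalues are real, -1 and -2, so the equilibrium at the origin is asymptotically stable.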

4. TESTS FOR STABILITY

a. Routh-Hurwitz Method

The determination of the characteristic equation, and its factorization to find the various eigenvalues, is a tedious process, particularly if the order of the system is large. If a complete solution is not needed, but information about stability is all that is required, it is sufficient only to test whether every eigenvalue has a negative real part. Such a test is provided by the well known Routh-Hurwitz criterion. This criterion requires expansion of the characteristic equation, Equation (5), into the polynomial form

    a0 λⁿ + a1 λⁿ⁻¹ + ... + a(n-1) λ + an = 0                  (6)

where coefficient a0 has been made positive. The following matrix is formed from the coefficients of Equation (6)

        | a1  a0   0   0  ...  0  |
        | a3  a2  a1  a0  ...  0  |
    M = | a5  a4  a3  a2  ...  0  |                            (7)
        |  .   .   .   .        . |
        |  0   0   0   0  ...  an |

All the eigenvalues, found as roots of Equation (6), can have only negative real parts if matrix M is positive definite. This is the case if each leading principal minor of M is positive. These minors are the following determinants

    |a1|,   | a1  a0 |,   | a1  a0   0 |,   ...
            | a3  a2 |    | a3  a2  a1 |
                          | a5  a4  a3 |

If all these determinants are positive, and a0 > 0 as already assumed, the system leading to Equation (6) is asymptotically stable. The Routh-Hurwitz criterion for stability requires the expansion of the determinant of Equation (5) into the form of Equation (6), and the subsequent evaluation of the determinants derived from the matrix of Equation (7). While this is all perfectly straightforward, it is tedious to carry out if the system is large.

b. First Method of Lyapunov

If the system described by Equation (2) is nonlinear, the process just described is not immediately applicable. An approach commonly used with a nonlinear system is based on what is known as the first method of Lyapunov. In this method each equilibrium point must be investigated in turn. The nonlinear functions f(x) of Equation (2) are expanded in Taylor series about the equilibrium point. It is convenient to introduce the new variable y = x - xe, and to write Equation (2) as

    dy/dt = F y + G(y)                                         (8)

where

        | ∂f1/∂x1  ∂f1/∂x2  ... |
    F = | ∂f2/∂x1  ∂f2/∂x2  ... |
        |    ...      ...       |

and F is the so-called Jacobian matrix, with all its partial derivatives evaluated at the equilibrium point, x = xe. The second matrix, G(y) of Equation (8), contains terms arising from the higher-order derivatives in the Taylor series expansions. This matrix must have its elements vanish at the equilibrium point, that is, ‖G(y)‖/‖y‖ → 0 as ‖y‖ → 0. The linearized equation

    dy/dt = F y                                                (9)

is the first approximation to the nonlinear equation, Equation (8). Lyapunov showed that if the real parts of the eigenvalues corresponding to the linearized equation are not zero, the stability of the nonlinear equation near the equilibrium point is the same as that of the linearized equation. Thus, the stability of a nonlinear system under some conditions can be investigated using the same techniques as are used with linear systems. The procedure here is similar to that which would be used in attempting to find an explicit solution for the system.
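The first method can be sketched numerically: estimate the Jacobian matrix F of Equation (8) by finite differences at the equilibrium point, then examine its eigenvalues. The system below is a hypothetical example chosen for illustration, not one from the text:

```python
import cmath

# Hypothetical 2nd-order nonlinear system: x1' = x2, x2' = -x1 - x2 + x1**3,
# with an equilibrium at the origin.
def f(x):
    x1, x2 = x
    return [x2, -x1 - x2 + x1 ** 3]

def jacobian(f, xe, h=1e-6):
    """Matrix F of Equation (8): partial derivatives of f at x = xe,
    estimated by central differences."""
    n = len(xe)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(xe), list(xe)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def eig2(J):
    """Eigenvalues of a 2x2 matrix from its characteristic equation (5)."""
    tr = J[0][0] + J[1][1]
    dt = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    d = cmath.sqrt(tr * tr - 4 * dt)
    return (tr + d) / 2, (tr - d) / 2

F = jacobian(f, [0.0, 0.0])
# Both eigenvalues have nonzero negative real parts, so by the first
# method the nonlinear equilibrium at the origin is asymptotically stable.
assert all(lam.real < 0 for lam in eig2(F))
```

Here the linearization at the origin is F ≈ [[0, 1], [-1, -1]], whose eigenvalues have real part -1/2; the cubic term drops out of the first approximation, exactly as Equation (9) indicates.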

5. SECOND METHOD OF LYAPUNOV

The second method of Lyapunov is based on a somewhat different idea, and one that is closely related to the concept of energy. The energy stored in any physical system is, of course, a scalar quantity represented by a single number, even though a complete description of the system may require many variables. In an asymptotically stable system, the stored energy decays with increasing time. Thus a stable system may be characterized by stored energy, which is itself a positive quantity, but which has a time derivative which is negative.

A simple electric circuit, consisting of a capacitance C and a conductance G in series, is described by the equation

    C de/dt + Ge = 0                                           (10)

where e is the voltage across the capacitor. Voltage e is given by the solution e = E exp(-Gt/C), where e = E at t = 0. The system is obviously stable. The instantaneous stored energy is W = (1/2)Ce² = (1/2)CE² exp(-2Gt/C), which is positive. The time derivative is dW/dt = Ẇ = -GE² exp(-2Gt/C), which is negative. The ratio -W/Ẇ = C/2G can be interpreted as a time constant for energy change. Its value is half the more usual time constant, C/G, applying to voltage change.

This concept of energy and its rate of change is extended in the second method. In this method, however, a more general "Lyapunov function" is used, rather than energy itself. If a system is asymptotically stable, a Lyapunov function can be determined for the system. This is a scalar function of time and of the state variables. It is positive, itself, and it has a negative time derivative. Conversely, the existence of such a function for a given system implies that the system is asymptotically stable. The Lyapunov function is given the symbol V(x), and the requirements for asymptotic stability are

    V(x) > 0             for x ≠ xe
    dV/dt = V̇(x) < 0     for x ≠ xe                            (11)
    V(x) = 0             for x = xe
    V(x) → ∞             for ‖x‖ → ∞

For simple systems, V(x) may be taken directly as the energy of the system. For more complicated systems, usually V(x) is better chosen to be something other than the energy. In fact there may be great flexibility in the choice of V(x), and this is one of the features of this method of analysis. At the same time, this flexibility requires ingenuity and experience on the part of the analyst.

The intent in applying the method to test stability is to determine a Lyapunov function for the system directly from the differential equations. It is hoped to avoid many of the steps needed in attempting to find an explicit solution for the equations. A Lyapunov function is known for a few simple sorts of equations. There are some indications of how such a function might be sought for more complicated equations. There is opportunity for further work in this area.

In addition to providing a test for stability, the Lyapunov function may also give information about the transient response of the system. This possibility is based on the observation that the function

gives a simple measure of the state of the system at any instant. A parameter η may be defined as

    η = (-V̇/V)min                                              (12)

in which case 1/η is the largest time constant relating to changes in the Lyapunov function V(x). Since V(x) is somewhat similar to energy, which generally depends upon the squares of the state variables, this time constant 1/η is half the more conventional time constant defined for the state variables themselves. Usually rapid response is desirable, so that parameter η can be considered as a kind of figure of merit. Larger values of η correspond to more rapid response.

The point of interest here is that it may be possible to determine a Lyapunov function V(x) for a system without going through the usual steps of finding a solution in a conventional manner. It may be possible to find a V(x) for a nonlinear system that could not be solved at all in the usual way. In either case, the figure of merit η can be found. For a nonlinear system, η will change as the state of the system changes. The nature of such changes may represent useful information itself. It should be noted, however, that the actual value of η will depend upon the particular Lyapunov function that is used. Since there may be several such functions for a given system, several alternate values of η may result. Presumably, the choice of V(x) should be made so as to make the resulting figure of merit η as large as possible. Just how to make this choice is usually not known in advance.
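For the series G-C circuit of Equation (10), the energy W itself serves as a Lyapunov function, and the ratio -Ẇ/W is constant in time, so the figure of merit of Equation (12) is exactly 2G/C. The short numerical check below uses illustrative element values assumed here (C = 1, G = 0.5, E = 2), not values from the text:

```python
import math

# Illustrative (assumed) element values for the circuit of Equation (10)
C, G, E = 1.0, 0.5, 2.0

def W(t):
    # Stored energy, W = (1/2) C e^2 with e = E exp(-Gt/C)
    e = E * math.exp(-G * t / C)
    return 0.5 * C * e * e

def Wdot(t):
    # Time derivative, dW/dt = -G E^2 exp(-2Gt/C)
    return -G * E * E * math.exp(-2 * G * t / C)

for t in [0.0, 0.5, 1.0, 2.0]:
    assert W(t) > 0 and Wdot(t) < 0            # positive, decaying
    # -Wdot/W is constant: the figure of merit 2G/C of Equation (12);
    # equivalently, the energy time constant -W/Wdot = C/(2G) is half
    # the voltage time constant C/G.
    assert abs(-Wdot(t) / W(t) - 2 * G / C) < 1e-12
```

Because the ratio is constant here, the minimum in Equation (12) is attained at every instant; for less trivial systems the ratio varies with the state and only its minimum is guaranteed.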

6. LYAPUNOV'S THEOREM FOR LINEAR SYSTEM

The Lyapunov function V(x) is known for a free, linear, stationary system as described by Equation (3)

    dx/dt = ẋ = A x                                            (3)

Here, A is a square matrix of constants and equilibrium exists for x = xe = 0. For this system, Lyapunov himself showed that one suitable function is

    V(x) = ‖x‖²_P                                              (13)

where the notation on the right side of the equation has the meaning ‖x‖²_P = x' P x. Matrix P is a symmetric positive definite matrix satisfying the equation

    A' P + P A = -I                                            (14)

where A' is the transpose of A and I is the unit matrix. If a matrix P that will satisfy this condition can be found, then the system described by Equation (3) is asymptotically stable. This requirement is both necessary and sufficient. Matrix P is symmetric if P' = P. It is positive definite if all its leading principal minors are positive. The elements of P can be found from the simultaneous algebraic equations that result from expanding the matrix equation, Equation (14). There are (n/2)(n + 1) such simultaneous equations. This process is similar to the Routh-Hurwitz test for stability. However, it does not require the explicit determination of the characteristic equation, Equation (6). This determination is a tedious process for systems of high order, and thus the Lyapunov approach may require less manipulation. This is typically the case if the order of the system exceeds four or five.

The figure of merit η for the transient response of the free, linear, stationary system can readily be found. The Lyapunov function for the system is V(x) = ‖x‖²_P, which is Equation (13). Its time derivative can be shown to be given by the relation

    dV/dt = V̇(x) = -‖x‖²                                       (15)

The figure of merit η is then

    η = (‖x‖² / ‖x‖²_P)min

By additional work, this can be shown to be the minimum eigenvalue of the inverse matrix P⁻¹, where P P⁻¹ = I. Thus, in two alternate forms,

    η = min λ(P⁻¹)                                             (16)
    1/η = max λ(P)

The determination of this eigenvalue requires either the expansion of a determinant similar to Equation (5), or the use of an appropriate numerical procedure directly with the matrix.

Example 1

A simple example of the use of this technique is provided by the electric circuit of Figure 1. The elements of this linear circuit

Figure 1. Stable second-order electric circuit for Examples 1 and 2. Element values: G1 = 4G, C1 = C, C2 = 2C, G2 = 8G.

have the values shown. The equations for the circuit may be written

    de1/dt = ė1 = -4k e1 + 4k e2                               (17)
    de2/dt = ė2 =  2k e1 - 6k e2

where e1 and e2 are the instantaneous voltages across the two capacitors and the definition has been made, k = G/C. Written in the form of Equation (3), these relations are

    de/dt = ė = A e                                            (18)

where

    A = | -4k   4k |
        |  2k  -6k |

It should be noted that the two dependent variables here, e1 and e2, are both voltages and thus have the same physical dimensions. This is necessary for the application of Equation (14). In a conventional solution, the characteristic equation is found first as

    |A - λI| = | (-4k - λ)     4k     | = 0
               |     2k    (-6k - λ)  |

Its roots yield the two eigenvalues, λ1 = -2k, λ2 = -8k. Since these are both real and negative, the system is stable. If initial conditions are chosen arbitrarily as e1 = -E, e2 = 2E at t = 0, the solution for the system is easily determined to be

Figure 2. Variation of voltages e1 and e2 with time t for Examples 1 and 2. Initial conditions are such that e1 undergoes a polarity reversal.

Figure 3. Functions relating to the stability for Examples 1 and 2. The energy, W from Equation (20), is plotted as 10W/CE². One Lyapunov function, V from Equation (22), found using Lyapunov's theorem, is plotted as 1000kV/E². A second Lyapunov function, V from Equation (27), found using Krasovskii's theorem, is plotted as V/k²E². All three functions are positive, with negative time derivatives.

    e1/E = 2/3 exp(-2kt) - 5/3 exp(-8kt)                       (19)
    e2/E = 1/3 exp(-2kt) + 5/3 exp(-8kt)

The two voltages are plotted against time in Figure 2. The initial conditions have been chosen so that the polarity of voltage e1 reverses as time progresses. Both voltages decay toward zero with increasing time, but their ratio becomes e2/e1 = 1/2. The energy stored in the circuit at any instant is given by

    W = 1/2 C1e1² + 1/2 C2e2²                                  (20)

This energy W is plotted logarithmically in Figure 3. It is clear that the energy is a positive quantity, but that it has a negative time derivative, as would be expected for a stable system. This is so, even when one voltage goes through zero and reverses sign, as is the case here.

A Lyapunov function for this same system can be found by applying Equation (14), which then appears as

    | -4k   2k | | q  r |   | q  r | | -4k   4k |   | -1   0 |
    |  4k  -6k | | r  s | + | r  s | |  2k  -6k | = |  0  -1 |

where the symmetric matrix P has been written with elements q, r, s that are to be determined. This matrix equation leads to the three simultaneous equations

    -8k q +  4k r        = -1
     4k q - 10k r + 2k s =  0
             8k r - 12k s = -1

Solution for the quantities q, r, s determines P as

    P = (1/40k) | 7  4 |                                       (21)
                | 4  6 |

Matrix P is positive definite, since both |7| > 0 and | 7 4; 4 6 | > 0, and thus the system is asymptotically stable. A Lyapunov function is, from Equation (13),

    V(e) = ‖e‖²_P = (1/40k)(7e1² + 8e1e2 + 6e2²)               (22)
         = (E²/72k)[10 exp(-4kt) - 8 exp(-10kt) + 25 exp(-16kt)]

where Equation (19), the exact solution, has been used. The time derivative is, from Equation (15),

    V̇(e) = -‖e‖² = -e1² - e2²                                  (23)
         = -(E²/9)[5 exp(-4kt) - 10 exp(-10kt) + 50 exp(-16kt)]

This Lyapunov function is plotted in Figure 3. It is, of course, positive, with a negative time derivative. The ratio -V̇/V is initially 16k, but becomes 4k as time increases indefinitely. Thus, the figure of merit is η = 4k, as found from -V̇/V making use of exact solutions for e1 and e2. This result corresponds to the more slowly-varying component of voltage, which varies

as exp(-2kt), so that the component of energy associated with it varies as exp(-4kt).

The figure of merit can also be found directly from Equation (16), which does not require the use of exact solutions for e. The eigenvalues for matrix P are found from the relation

    | (7/40k - λ)     4/40k     | = 0
    |    4/40k     (6/40k - λ)  |

The two values are λ1 = 0.263/k and λ2 = 0.062/k. The reciprocal of the larger of these is the figure of merit, η = 1/(0.263/k) = 3.8k. This result is similar to that found using exact solutions for e, although it has been obtained without the need of these solutions.

7. KRASOVSKII'S THEOREM FOR NONLINEAR SYSTEM

The Lyapunov function of Equation (13), and the test for stability employing Equation (14), apply only to linear systems. The stability of a nonlinear system near an equilibrium point may be investigated using this technique, provided the system is appropriately linearized. This linearization may be carried out by applying the first method of Lyapunov, as described in Equations (8) and (9). An alternate approach to a nonlinear system is based upon work by Krasovskii(4). A free, stationary, nonlinear system is described by Equation (2) as

    dx/dt = ẋ = f(x)                                           (2)

where f(x) is differentiable, but generally nonlinear, and it is assumed that f(0) = 0. The Jacobian matrix for the system is

           | ∂f1/∂x1  ∂f1/∂x2  ... |
    F(x) = | ∂f2/∂x1  ∂f2/∂x2  ... |
           |    ...      ...       |

A matrix F̂(x) is defined as

    F̂(x) = F(x) + F'(x)

where F'(x) is the transpose of F(x). Matrix F̂(x) is evidently symmetric. If -F̂(x) is positive definite for all values of x, the equilibrium point xe = 0 is asymptotically stable in the large, and a Lyapunov function for the system is

    V(x) = ‖f(x)‖²                                             (24)

In order that -F̂(x) be positive definite, all its leading principal minors must be positive. This criterion of stability is readily applied because the mathematical manipulations required are simple. Such simplicity is an evident necessity where a nonlinear system is involved. If the criterion holds for all values of the state variables x, the system is stable for any state. There is no limitation to small departures from the equilibrium condition, as is the case with the linearization used with the Lyapunov first method. On the other hand, it must be recognized that the criterion is a rather restrictive one. While it is sufficient to assure asymptotic stability, it may not be necessary for such stability.
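Krasovskii's test is mechanical enough to automate. The sketch below applies it to a hypothetical nonlinear system chosen here for illustration (not one from the text), f1 = -x1 + x2/2 and f2 = x1/2 - x2 - x2³, whose minors of -F̂(x) turn out to be positive for every state, so the origin is asymptotically stable in the large:

```python
def jac(x):
    # Jacobian F(x) of the assumed system
    # f1 = -x1 + x2/2,  f2 = x1/2 - x2 - x2**3
    x1, x2 = x
    return [[-1.0, 0.5],
            [0.5, -1.0 - 3.0 * x2 ** 2]]

def krasovskii_minors(x):
    """Leading principal minors of -Fhat(x) = -(F(x) + F'(x))."""
    F = jac(x)
    m11 = -(F[0][0] + F[0][0])
    m12 = -(F[0][1] + F[1][0])
    m22 = -(F[1][1] + F[1][1])
    return m11, m11 * m22 - m12 * m12

# For this system m11 = 2 and the determinant is
# 2*(2 + 6*x2**2) - 1 = 3 + 12*x2**2, positive for all x;
# sampling a few states confirms the criterion.
for x in [(0.0, 0.0), (5.0, -3.0), (-100.0, 100.0)]:
    d1, d2 = krasovskii_minors(x)
    assert d1 > 0 and d2 > 0
```

Note that sampling states can only refute the criterion; the global claim rests on the closed-form minors given in the comment, which here are positive independent of x.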

In other words, a particular state may actually be stable even though the criterion is not satisfied. The criterion does represent, however, one of the few known general criteria applicable to nonlinear systems.

The time derivative of V(x) of Equation (24) can be shown to be given by the relation

    V̇(x) = ‖f(x)‖²_F̂(x) = f' F̂ f                               (25)

A comparison of Equations (24) and (25) with Equations (13) and (15) indicates that the figure of merit η is

    η = min λ[-F̂(x)]                                           (26)

Example 2

This method of Krasovskii may be applied to the circuit of Figure 1, which is, of course, a linear system. Matrix F(e) is the same as A of Equation (18), and is

    F(e) = | -4k   4k |
           |  2k  -6k |

Matrix F̂(e) is then

    F̂(e) = F + F' = | -4k   4k | + | -4k   2k | = | -8k    6k |
                    |  2k  -6k |   |  4k  -6k |   |  6k  -12k |

    -F̂(e) = |  8k  -6k |
            | -6k  12k |

This matrix is positive definite since both |8k| > 0 and | 8k -6k; -6k 12k | > 0, and thus the system is asymptotically stable. A Lyapunov function is, from Equations (17) and (24),

    V(e) = ‖f(e)‖² = k²(20e1² - 56e1e2 + 52e2²)                (27)

Its time derivative is, from Equation (25),

    V̇(e) = f'(e) F̂ f(e) = -8k f1² + 12k f1f2 - 12k f2²         (28)

where f1 and f2 are the elements of f(e). This Lyapunov function, Equation (27), is plotted in Figure 3, where it may be compared with that found by the earlier method, applicable only to linear systems, and given by Equation (22). Both these functions, as well as the energy given by Equation (20), are positive and have negative time derivatives. Any one provides a valid test for stability. All three curves of Figure 3 have similar shapes in that they are asymptotic to two straight lines, with slopes dependent upon λ = 16k and λ = 4k.

An estimate for the figure of merit is obtainable from Equation (26). This leads to the relation

    | (8k - λ)    -6k     | = 0
    |   -6k    (12k - λ)  |

which gives λ1 = 16.3k, λ2 = 3.7k. The smaller of these is taken as η, giving η = 3.7k, which is similar to values found previously. The Krasovskii method is readily applied to this example and generally

verifies its stability and yields a value for the figure of merit.

Example 3

A somewhat different example is given by the circuit of Figure 4. The box of this figure contains a nonlinear, voltage-controlled, negative resistance. The current in this resistance is assumed to be related to the voltage across it as i = -ae1 + be1³, where a and b are positive constants. This relation is shown in Figure 5. Also shown in Figure 5 is a load line constructed to determine operating conditions if just a single positive resistance R were connected across the terminals of the negative resistance. The three intersection points of this line as drawn indicate three equilibrium points. The equations for the circuit of Figure 4 may be written

    de1/dt = ė1 =  (a/C)e1 - (b/C)e1³ + (1/RC)e2               (29)
    de2/dt = ė2 = -(R/L)e1 - (R/L)e2

where the two variables, e1 and e2, are both voltages with the same dimensions, as is required. The symbols have the meanings identified in Figure 4. There are generally three equilibrium conditions, e1e = ±[(a - 1/R)/b]^(1/2) with e2e = -e1e, or e = 0. If R < 1/a, two of these conditions are imaginary. Matrix f(e) is

    f(e) = | (a/C)e1 - (b/C)e1³ + (1/RC)e2 |
           |     -(R/L)e1 - (R/L)e2        |

and matrix F(e) is

Figure 4. Electric circuit for Example 3. The box contains a nonlinear, voltage-controlled negative resistance. The circuit may, or may not, be stable.

Figure 5. Nonlinear current-voltage characteristic, i = -ae1 + be1³, for the negative resistance in the circuit of Figure 4. Also shown is a load line, of slope Δi/Δe1 = -1/R, for a positive resistance R connected to the terminals of the negative resistance.

    F = | (a/C - 3be1²/C)   1/RC |
        |      -R/L         -R/L |

so that matrix -F̂(e) becomes

    -F̂ = | -2(a/C - 3be1²/C)   (R/L - 1/RC) |                  (30)
         |    (R/L - 1/RC)        2R/L      |

This matrix is never positive definite near e = 0, so that there is no assurance that the circuit can be stable near its rest condition. On the other hand, elimination of e2 in Equation (29) yields the equivalent single equation in e1

    ë1 + (R/L - a/C + 3be1²/C)ė1 + (1/LC)(1 - aR + bRe1²)e1 = 0    (31)

This equation does have a stable solution near e1 = 0, provided both a < RC/L and a < 1/R. These two conditions are equivalent to the requirements that both the d-c load line, governed by the resistance R, and the a-c load line, governed by the dynamic resistance L/RC, intersect the negative-resistance characteristic of Figure 5 only at the origin. Thus, the circuit may be stable under appropriate conditions, although this may not be predicted from the matrix -F̂ of Equation (30).

If the negative resistance element is removed entirely from the circuit of Figure 4, the resulting passive circuit clearly must be stable. Removal of the negative resistance causes both coefficients a and b of Equation (30) to vanish, leaving

    -F̂ = |      0          (R/L - 1/RC) |                      (32)
         | (R/L - 1/RC)        2R/L     |

This matrix can never be positive definite and again there is no assurance of stability.

The equations for the circuit of Figure 4 can be set up in a different way. If the definition is made, ω0² = 1/LC, a dimensionless time may be introduced as τ = ω0t. Derivatives may be written in terms of this dimensionless time as de1/dt = ω0 de1/dτ and d²e1/dt² = ω0² d²e1/dτ². Also, the definitions can be made, x1 = e1 and x2 = dx1/dτ. With these definitions, Equation (31) can be written

    dx1/dτ = x2                                                (33)
    dx2/dτ = -(1 - aR + bRx1²)x1 - (R/Lω0 - a/Cω0 + 3bx1²/Cω0)x2

Equations (33) are different from Equation (29) even though they describe the same physical system. Here, one variable, x2, is simply the time derivative of the other, x1. That is not the case with the first set of equations. Again, however, because of the use of dimensionless time, the dimensions of both variables are the same. Matrix -F̂ arising from Equation (33) is

    -F̂ = |              0                 (-aR + 3bRx1² + 6bx1x2/Cω0)  |    (34)
         | (-aR + 3bRx1² + 6bx1x2/Cω0)   2(R/Lω0 - a/Cω0 + 3bx1²/Cω0) |

This matrix, just as that of Equation (30) found previously, is never positive definite near x = 0, so once more there is no assurance that stability can exist.
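The failure of the test at the rest point is easy to see numerically: in Equation (30) the first leading principal minor of -F̂ is -2a/C at e1 = 0, which is negative whenever a > 0. The check below uses illustrative element values assumed here (R = L = C = a = 1), not values from the text:

```python
# Krasovskii's criterion for Example 3 at the rest point e = 0,
# using Equation (30) with illustrative (assumed) values.
R, L, C, a = 1.0, 1.0, 1.0, 1.0

# At e1 = 0 the cubic term of Equation (30) drops out:
mFhat = [[-2 * a / C, R / L - 1 / (R * C)],
         [R / L - 1 / (R * C), 2 * R / L]]

# The first principal minor is -2a/C < 0 for any a > 0, so -Fhat is
# not positive definite near the origin and the test gives no verdict,
# even though the circuit can in fact be stable for suitable values
# (a < RC/L and a < 1/R, from Equation (31)).
assert mFhat[0][0] < 0
```

This makes concrete the point of the example: Krasovskii's condition is sufficient but not necessary, so its failure here proves nothing about instability.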

-28

If the negative resistance is removed, Equation (34) becomes

-F = [ 0    0      ]      (35)
     [ 0    2R/Lω0 ]

This matrix is positive semidefinite, in that the principal minors are zero, and at least are not negative. The test for stability is almost satisfied. It should be evident from the results of this example that because the Krasovskii method gives only a sufficient condition for stability, it may lead to erroneous conclusions about the behavior of any given system. Furthermore, the conclusions that are reached by applying the method may depend upon just how the analysis is carried out.

8. BARBASHIN'S THEOREM FOR THIRD-ORDER NONLINEAR SYSTEM

Specific stability criteria have been obtained for one particular third-order system by Barbashin (5). The system is described by the equations

ẋ1 = x2
ẋ2 = x3      (36)
ẋ3 = -f(x1) - g(x2) - a2x3

where f(0) = 0 and g(0) = 0, and both f(x1) and g(x2) are differentiable. If written as a single third-order equation, this system is equivalent to

d³x1/dt³ + a2(d²x1/dt²) + g(dx1/dt) + f(x1) = 0      (37)

-29

The equilibrium point, xe = 0, is asymptotically stable in the large if

(i)   a2 > 0
(ii)  f(x1)/x1 > 0,  x1 ≠ 0      (38)
(iii) a2g(x2)/x2 - f'(x1) > 0,  x2 ≠ 0

where f'(x1) = d/dx1 [f(x1)]. While Equations (36) and (37) are written with dots indicating differentiation, the independent variable must be a dimensionless time in order that the x's be of the same dimensions and the criteria of Equation (38) be directly applicable. A Lyapunov function for the system is

V(x) = a2F(x1) + f(x1)x2 + G(x2) + (1/2)(a2x2 + x3)²      (39)

where F(x1) = ∫(0 to x1) f(x1) dx1 and G(x2) = ∫(0 to x2) g(x2) dx2. The time derivative is

V̇(x) = -[a2g(x2)/x2 - f'(x1)]x2²      (40)

Variable x3 does not appear in V̇(x). If the system is stable, V(x) > 0 and V̇(x) < 0, except at x = 0, and the conditions of Equation (38) apply. While Equation (37) is a nonlinear, third-order equation, with rather general nonlinearities allowed in both the dependent variable x1 and its first derivative ẋ1, it is necessary that there be no products of these two sorts of terms, and that the second and third derivatives appear only in linear terms. These requirements tend to limit somewhat the applicability of Equation (37) to third-order systems as they arise in physical situations.
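The step from Equation (39) to Equation (40) can be verified symbolically. A minimal sketch using the sympy library, with f and g left as undetermined functions as in the text:

```python
import sympy as sp

x1, x2, x3, a2 = sp.symbols('x1 x2 x3 a2')
f, g = sp.Function('f'), sp.Function('g')

# V of Equation (39); the unevaluated Integrals stand for F(x1) and G(x2)
V = (a2 * sp.Integral(f(x1), x1) + f(x1) * x2
     + sp.Integral(g(x2), x2) + sp.Rational(1, 2) * (a2 * x2 + x3)**2)

# Differentiate along the trajectories of Equations (36)
xdot = {x1: x2, x2: x3, x3: -f(x1) - g(x2) - a2 * x3}
Vdot = sum(sp.diff(V, v) * xdot[v] for v in (x1, x2, x3))

# Equation (40): Vdot = -[a2*g(x2)/x2 - f'(x1)] * x2**2
expected = -(a2 * g(x2) - sp.Derivative(f(x1), x1) * x2) * x2
print(sp.simplify(Vdot - expected))   # 0
```

The variable x3 drops out of the result, exactly as remarked above.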

-30

There have been efforts made toward extending the idea used in this case to systems of fourth (6), and higher, orders. The mathematical complications tend to increase very rapidly when this is done, however.

It is worth noting that the stability criteria of Equation (38) contain two different types of linearization, each of which is sometimes used with nonlinear functions. One of these is the derivative f'(x) = d/dx [f(x)], which corresponds geometrically to the slope of the tangent to the curve representing f(x), at the point in question. The other is the ratio f(x)/x, which corresponds geometrically to the slope of the chord from the origin to the point in question on the curve of f(x). These two slopes are shown in Figure 6 for a case in which they are evidently quite different. For a nonlinear electric resistance, for example, variable x might represent voltage and function f(x) would represent current. The derivative f'(x) would then be the variational, or a-c, conductance, while the ratio f(x)/x would be the steady, or d-c, conductance. Often an attempt is made to analyze a nonlinear system by introducing some kind of linearization, and often just which of these types of linearization should be used is not self-evident. In this case of Barbashin's theorem, both types appear.

Example 4

An example of the application of Barbashin's theorem is given by the electric circuit of Figure 7, which represents a kind of phase-shift oscillator. The phase-shifting network consists of a three-section resistance-capacitance network composed of elements as shown in the figure. Its differential equation may be shown to be

Figure 6. Two types of linearization of the nonlinear function f(x). One makes use of the derivative d/dx[f(x)], the other makes use of the ratio f(x)/x. The two methods here give widely different results.

Figure 7. Phase-shift oscillator circuit for Example 4 (an amplifier followed by a three-section R-C phase shifter). The amplifier has the nonlinear characteristic shown.

-32

e2 = R³C³(d³e3/dt³) + 5R²C²(d²e3/dt²) + 6RC(de3/dt) + e3      (41)

where derivatives are taken with respect to time, and voltages e2 and e3 have the meaning shown in the figure. An amplifier is used in conjunction with the phase-shift network. This amplifier is assumed to saturate as its input voltage increases, so that the following equation might apply

e2 = A(1 - be1²)e1      (42)

where A and b are positive constants, and A is the small-signal voltage amplification. This relation holds only moderately well for an actual amplifier, since it predicts that the output voltage first increases, and then decreases with an ultimate change in sign, as the input voltage increases indefinitely. As the circuit is commonly used, there is a polarity reversal, as represented by the relation

e1 = -eo      (43)

Finally, of course,

e3 = eo      (44)

If Equations (41) - (44) are combined, the result is

R³C³(d³eo/dt³) + 5R²C²(d²eo/dt²) + 6RC(deo/dt) + eo = -A(1 - beo²)eo      (45)

Again, the stability criteria of Equation (38) require the use of a dimensionless time variable. It is convenient here to define τ = t/RC, in which case Equation (45) becomes

-33

eo''' + 5eo'' + 6eo' + (1 + A)eo - bAeo³ = 0

where primes indicate differentiation with respect to the dimensionless time τ. Requirements that eo = 0 be stable are given by Equation (38) as

(i)   a2 = 5 > 0
(ii)  f(eo)/eo = 1 + A - bAeo² > 0      (46)
(iii) a2g(eo')/eo' - f'(eo) = 5(6) - (1 + A) + 3bAeo² > 0

Of these relations, (i) is obviously satisfied, while (ii) and (iii) can be written respectively as A(1 - beo²) > -1 and A(1 - 3beo²) < 29. If the circuit is initially at rest, eo is near zero, and stability is predicted if 29 > A > -1. These are well-known conditions for this circuit. If the circuit is to be used as an oscillator, A is chosen to exceed 29; for example, it might be assumed that A = 35. The important condition now is (iii), which becomes beo² > (1 - 29/35)/3. Thus, if eo is initially small, the circuit is unstable and any disturbance builds up, ultimately leading, of course, to an oscillation. However, if eo is initially large, the circuit is initially stable, and the large initial voltage first decays. This decay brings the circuit into an unstable condition, and oscillation again takes place.
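A short numerical sketch of the criteria above (Python; the amplitude enters only through the product b·eo², so that product is treated as a single variable):

```python
A, a2 = 35.0, 5.0                  # amplification chosen to exceed 29, as in the text
threshold = (1 - 29 / A) / 3       # criterion (iii) requires b*eo**2 to exceed this

def crit_iii(b_eo2):
    """Criterion (iii) of Equation (46): 5*6 - (1 + A) + 3*b*A*eo**2 > 0."""
    return 30 - (1 + A) + 3 * A * b_eo2 > 0

print(round(threshold, 4))   # 0.0571: the boundary quoted in the text
print(crit_iii(0.01))        # False: small amplitudes are unstable, so any disturbance builds up
print(crit_iii(0.10))        # True: large amplitudes are locally stable, so they first decay
```

The two prints of the criterion reproduce the qualitative oscillator behavior described above.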

9. CONCLUSION In conclusion, it should be pointed out that the second method of Lyapunov is valuable, in part, because it provides a useful concept for considering the stability of a system. The idea of the Lyapunov function is somewhat similar to that of energy, but it is more general and more widely applicable. In some cases, a specific expression is known for the appropriate Lyapunov function. When this is so, stability can be explored and information about the speed of transient response can be obtained. Ingenuity is required to apply the concept to systems of other than a few simple types, and there is wide opportunity for further work in this area.

REFERENCES

1. Liapounoff, A. M. "Probleme General de la Stabilite du Mouvement." Reprinted in Annals of Mathematics Studies, No. 17, Princeton University Press, 1949.

2. Antosiewicz, H. A. "A Survey of Lyapunov's Second Method," in Contributions to the Theory of Nonlinear Oscillations. Annals of Mathematics Studies, No. 41, Princeton University Press, 1958.

3. Kalman, R. E. and Bertram, J. E. "Control System Analysis and Design Via the Second Method of Lyapunov." Transactions of the ASME, Series D, 82 (1960), 371.

4. Krasovskii, N. N. "On the Stability in the Large of a System of Nonlinear Differential Equations" (in Russian). Prikladnaya Matematika i Mekhanika, 18 (1954), 735.

5. Barbashin, E. A. "On the Stability of Solutions of a Third-Order Nonlinear Differential Equation" (in Russian). Prikladnaya Matematika i Mekhanika, 16 (1952), 629.

6. Cartwright, M. L. "On the Stability of Solutions of Certain Differential Equations of the Fourth Order." Quarterly Journal of Mechanics and Applied Mathematics, 9 (1956), 185.

LYAPUNOV APPROACH TO STABILITY AND PERFORMANCE OF NONLINEAR CONTROL SYSTEMS Y. H. Ku University of Pennsylvania D. W. C. Shen University of Pennsylvania

LYAPUNOV APPROACH TO STABILITY AND PERFORMANCE OF NONLINEAR CONTROL SYSTEMS

In the theory of stability of dynamical systems, the second method of Lyapunov should be considered as a philosophy of approach rather than a systematic method. A unified approach to the whole theory of control systems is made possible by using the basic concept of a Lyapunov function. This relatively new point of view offers much promise for the further development of control theory, particularly as regards nonlinear systems. In the nonlinear case, intuition must be used to obtain suitable Lyapunov functions, because no straightforward methods are available at present for doing this. This paper presents an application of Lyapunov's second method to the study of stability and performance of control systems which may be described by the following nonlinear differential equation:

D^n e + p1 D^(n-1) e + ... + pn e = N(D^(n-1) e, ..., De, e, t)      (1)

where e represents the error signal, N represents a nonlinear function, D = d/dt, and the pj's are constant coefficients. A simple example is a second-order nonlinear control system of the type shown in Fig. 1. If G2(S) is a second-order transfer function of the form 1/(JS² + KS + L), as it will be for a simple motor and load, and G1(S) is first order, the differential equation will be of the form

D²e + k De + le + f(De, e) = D²r + k Dr + lr      (2)

where k = K/J, l = L/J. Equation (2) can be rewritten

D²e + k De + le = N(De, e, t)      (3)

for any given input r(t).

-39
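As a sketch of how Equation (3) can be used, the fragment below integrates it for a constant input r with an assumed nonlinearity f(De, e) = c·e³; all coefficient values are illustrative, not taken from the paper.

```python
# Assumed, purely illustrative values; f(De, e) = c*e**3 is a hypothetical nonlinearity
k, l, c, r = 2.0, 4.0, 0.5, 1.0

def N(de, e):
    # For a constant input r, Equation (2) gives N(De, e, t) = l*r - f(De, e)
    return l * r - c * e**3

e, de, h = 0.0, 0.0, 0.001
for _ in range(20_000):            # integrate D2e + k*De + l*e = N out to t = 20
    dde = N(de, e) - k * de - l * e
    e, de = e + h * de, de + h * dde

print(round(e, 3))                 # settles where l*e + c*e**3 = l*r (about 0.907 here)
```

The steady state satisfies the algebraic balance l·e + c·e³ = l·r, which is what the final assertion of the run confirms.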

Figure 1. A second-order nonlinear control system.

The Second Method of Lyapunov. It is usually not difficult to define what is meant by stability in a linear system. Because of the new types of phenomena which arise in a nonlinear system, it is not possible to use a single definition for stability which is meaningful in every case. Kalman and Bertram (1) stated that concepts of stability are closely related to concepts of convergence; since there are many of the latter, there are correspondingly many types of stability. Lyapunov (2) divided the methods which could be used to indicate the solution of the problem of stability into two categories. He included in the first category those methods which reduce to a direct consideration of the equation of motion, that is, to the explicit determination of the general or a particular solution of the equation. It is usually necessary to search for these solutions by successive approximations. Lyapunov calls the totality of all methods of this category the "first method". It is possible, however, to indicate other methods of solution of the stability problem which do not require the calculation of a solution of the equation, but which reduce to the search for certain functions, known as Lyapunov functions, that possess special properties. Lyapunov calls the totality of the methods belonging to this second category the "second method". Some authors, among whom are Lefschetz (3) and Hahn (4), call it the "direct" method of Lyapunov. For an autonomous system,

ẋ = f(x),  i.e.  ẋi = fi(x1, x2, ..., xn),  i = 1, 2, ..., n      (4)

where ẋ = dx/dt, x = (x1, ..., xn), and f = (f1, f2, ..., fn), Lyapunov's second method consists in finding a real, continuous scalar function V = V(x) = V(x1, x2, ..., xn) in the neighborhood U of a point of equilibrium (which may be assumed to be x = 0 without loss of generality), or in the whole phase space, satisfying the following two conditions: (i) V(0) = 0; (ii) V(x) is positive definite.
The first condition means that the function we are interested in vanishes only for xi = 0, i = 1, 2, ..., n. The second condition

requires that V(x) have continuous partial derivatives, that V(x) > 0 for all x ≠ 0, and that

V̇(x) = Σ(i=1,n) fi(x)(∂V/∂xi) < 0 for all x ≠ 0.

V(x) and its time derivative V̇(x) are opposite in sign. This function V(x) is called a Lyapunov function, and the Lyapunov theorem has shown that the existence of such a function implies the stability of the system. It may be noted that V̇ is actually the total derivative of V with respect to t. For nonautonomous systems, ẋ = f(t, x). Hence V̇ < 0 means that V is a decreasing function of t.

Transformation to Canonical Form. Consider the system described by Equation (1), and let the phase space variables be given by the matrix equation

[e1, e2, ..., en] = [e, De, ..., D^(n-1)e]      (5)

The equivalent system of first order is

[De1]   [  0    1    0   ...   0    0  ] [e1]   [0]
[De2]   [  0    0    1   ...   0    0  ] [e2]   [0]
[ ...] = [ ...                      ...] [...] + [...]      (6)
[Den]   [ -pn -p(n-1) -p(n-2) ... -p2 -p1] [en]  [N]

which may be written as

D[e] = [A][e] + [N]      (7)

where [A] is the square matrix in Equation (6). It is obvious that the latent roots of the square matrix [A] are the characteristic roots of the linear portion of the system. Let these roots be distinct and given by S1, S2, ..., Sm, S1*, S2*, ..., Sm*, and λ1, λ2, ..., where Sj and Sj* are complex conjugate pairs and the

-43

λ's are real roots. Equation (6) can be transformed to canonical form by the following Vandermonde matrix

      [ 1         1         ...  1         ]
      [ S1        S2        ...  Sn        ]
[T] = [ S1²       S2²       ...  Sn²       ]      (8)
      [ ...                                ]
      [ S1^(n-1)  S2^(n-1)  ...  Sn^(n-1)  ]

Thus, transforming the phase variables ei's to the phase variables xj's by the equation

[e] = [T][x]      (9)

we obtain

D[x] = [B][x] + [T]^(-1)[N]      (10)

where

[B] = [T]^(-1)[A][T]      (11)

is a diagonal matrix having the latent roots as its diagonal elements.

A Lyapunov Function. The problem of determining a Lyapunov function for systems for which the solution x = 0 is known to be stable is called the inverse problem. There are necessary and sufficient conditions for the existence of a Lyapunov function, but there is no general method of solution for the difficult inverse problem. Research has been done in this direction by a number of investigators (6, 7, 8, 9). The case of autonomous systems when the number of the unknown

-44

functions is equal to 2 has been extensively treated by Malkin and his followers. Actually, in his attack on the problem of stability, Lyapunov's second method was inspired by Dirichlet's proof of Lagrange's theorem on the stability of stationary states. This has been pointed out by Lefschetz. The geometrical interpretation of a Lyapunov function could be a measure of the "distance" of the state x from the origin in the state space. Suppose the distance between the origin and the instantaneous state is continually decreasing as t → ∞; then x(t) → 0. Since the Lagrangian function also has the property of being a measure of the swing of the energy content of the system away from the equilibrium point, it is natural to investigate whether it would satisfy the conditions of a Lyapunov function. In fact, in many cases Lyapunov functions are already available, though unrecognized, in standard results in control theory. Kalman and Bertram pointed out that a system whose energy E decreases on the average, but not necessarily at each instant, is stable, but E is not a Lyapunov function. A Lyapunov function has to be positive definite. Kang and Fett (10) suggested the use of the envelope to the Lagrangian function instead of the function itself as a distance function in case the Lagrangian function contains oscillatory terms. Thus, by introducing the following matrix, in which each complex conjugate pair of canonical coordinates is coupled by a 2 x 2 block and each real coordinate carries a unit diagonal entry,

      [ 0  1                  ]
      [ 1  0                  ]
[W] = [        ...            ]      (12)
      [            0  1       ]
      [            1  0       ]
      [                  1    ]
      [                   ... ]

-45

one establishes the function

V(x) = [x]ᵀ[W][x]      (13)

or

V(x) = 2 Σ(j=1,m) xj xj* + Σ(j=2m+1,n) xj²      (14)

Equation (14) satisfies the conditions V(x) > 0 when x ≠ 0 and V(0) = 0. Using Equation (10) and [W] = [W]ᵀ, the time derivative of V(x) can be written as

V̇(x) = 2[x]ᵀ[W][B][x] + 2[x]ᵀ[W][T]^(-1)[N]      (15)

The first term on the right-hand side of the above equation represents the force-free case of the linear system, while the second term represents the effect of the auxiliary forcing function and hence the effect of the nonlinearity. Denote [T]^(-1) by (cij) and N(e, De, ..., D^(n-1)e, t) by G(t); Equation (15) then becomes

V̇(x) = V̇L(x) + V̇n(x)      (16)

where

V̇L(x) = 2 Σ(j=1,m) (Sj + Sj*) xj xj* + 2 Σ(j=2m+1,n) λj xj²      (17)

V̇n(x) = 2[x]ᵀ[W][T]^(-1)[N] = G(t)H      (18)
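The chain of Equations (6) through (11) can be illustrated numerically. In this sketch the third-order coefficients p1, p2, p3 are assumed values chosen to give distinct real latent roots; the companion matrix [A] of Equation (6) is diagonalized by the Vandermonde matrix [T] of Equation (8):

```python
import numpy as np

p1, p2, p3 = 6.0, 11.0, 6.0     # assumed coefficients; s^3+6s^2+11s+6 has roots -1, -2, -3

# Companion matrix [A] of Equation (6)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-p3, -p2, -p1]])

roots = np.sort(np.linalg.eigvals(A).real)   # latent roots, approximately -3, -2, -1
T = np.vander(roots, increasing=True).T      # Vandermonde matrix [T] of Equation (8)
B = np.linalg.inv(T) @ A @ T                 # Equation (11)

print(np.allclose(B, np.diag(roots)))        # True: [B] is diagonal, as asserted
```

The diagonal entries of [B] are the latent roots, so Equation (10) decouples the linear part of the system into first-order modes.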

-46

Consideration of Stability and Performance. The necessary and sufficient condition that the basic linear system be stable is that the value of its Lyapunov function along its force-free trajectory tend to zero as time approaches infinity. In terms of the error coordinates e's, the Lyapunov function given in Equation (13) becomes

V(x) = [e]ᵀ[Y][e]      (19)

where

[Y] = ([T]^(-1))ᵀ[W][T]^(-1)      (20)

If the characteristic roots of the basic linear system are all real, then the characteristic roots of the matrix [Y] are all positive. It follows that V = constant is an ellipsoidal surface, and the e space is topologically Euclidean. It is clear from Equation (17) that if all the characteristic roots of the basic linear system have negative real parts, V̇L(x) is always negative and the linear system is stable. Since the behavior of the nonlinear system is expressed in terms of an auxiliary forcing function on a linear system, whether the effect of the nonlinear terms is to make V̇(x) more negative or not will depend on the sign of V̇n(x) = G(t)H. H is a linear function of the coordinates; hence H = 0 is a plane through the origin partitioning the phase space into two subspaces. The effect of nonlinearity can then be studied in the light of this criterion.

Among all the states in the error coordinates, the necessary condition that one state be better than another is that the distance of this state from the origin be smaller than those of the other states. Therefore, V(x) can be used as an ordering relation to define a preference among all states. One can also monitor the state of the system, or simply the sign of H in the expression for V̇n(x), by direct measurements, and then adjust the system properly to improve the performance. This is analogous to control systems with system-parameter adaptation, whereby the parameters are adjusted in accordance with input-signal characteristics or measurements of the system variables.
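For the case of real characteristic roots singled out above, the positive definiteness of [Y] is easy to confirm. In this sketch [W] is taken as the identity (an assumption: for distinct real roots only the squared terms of Equation (14) remain), so Equation (20) reduces to [Y] = ([T]^(-1))ᵀ[T]^(-1):

```python
import numpy as np

roots = np.array([-1.0, -2.0, -3.0])      # hypothetical real characteristic roots
T = np.vander(roots, increasing=True).T   # Vandermonde matrix of Equation (8)
Tinv = np.linalg.inv(T)
Y = Tinv.T @ Tinv                         # Equation (20) with [W] = I

eigs = np.linalg.eigvalsh(Y)
print(np.all(eigs > 0))   # True: [Y] is positive definite, so V = constant is an ellipsoid
```

Since [Y] is of the form MᵀM with M nonsingular, its positive definiteness follows for any choice of distinct real roots, which is the geometric statement made in the text.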
No actual design of specific systems is attempted in this paper. However, it is clear that the Lyapunov method is not merely an abstract tool for studying the stability of dynamical systems; it is also a concrete one to facilitate the study and design of a general control system on the basis of performance.

-47

Nonlinear Control Systems with Random Inputs. There is still a lack in the literature on the application of Lyapunov's method to nonlinear control systems with random inputs. Research in this direction has much promise for the development of a satisfactory theory concerning both the analysis and synthesis of such systems. Consider the simple second-order nonlinear system described by Equation (2) and assume random behavior for the input r(t). Equation (2) can be written

ë + F(e, ė) = r̈ + kṙ + lr      (21)

where the single and double dots refer to first and second time derivatives respectively. By one of various methods of construction (11), the phase-plane trajectories can be drawn for the case of zero input,

ë + F(e, ė) = 0      (22)

by writing it in the form

dė/de = -F(e, ė)/ė      (23)

The finite input r(t) will change the slope of the trajectory by (r̈ + kṙ + lr)/ė, or

dė/de = [-F(e, ė) + r̈ + kṙ + lr]/ė      (24)

If the system starts at the point P0(e0, ė0) corresponding to the input conditions r0, ṙ0, r̈0, this point will move in the direction P0P1 instead of along the trajectories of Equation (23). After a short time interval Δt, P1 will travel along another path that makes an angle with the previous path dependent upon the new values of r, ṙ, r̈. This continuous variation of the slope from that of the trajectories drawn for r(t) = 0 results in a drunkard's walk about one of the force-free trajectories; see Fig. 2.
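The drunkard's-walk behavior can be imitated numerically. In the sketch below, a second-order error equation of the form of Equation (21) is integrated with assumed coefficients, a hypothetical small nonlinearity, and a bounded random term standing in for the input contribution; the error excursions remain well bounded.

```python
import numpy as np

rng = np.random.default_rng(0)
k, l = 1.0, 4.0                                  # assumed linear coefficients
F = lambda e, de: k * de + l * e + 0.1 * e**3    # hypothetical F(e, e') as in Equation (22)

e, de, h = 1.0, 0.0, 0.001                       # start from a disturbed state
peak = 0.0
for step in range(200_000):
    u = rng.uniform(-0.5, 0.5)                   # bounded random stand-in for the input terms
    dde = -F(e, de) + u
    e, de = e + h * de, de + h * dde
    if step > 100_000:                           # after the free transient has died away
        peak = max(peak, abs(e))

print(peak < 0.5)    # True: excursions stay well inside the initial |e| = 1
```

Because the zero-input system is stable and the disturbance is bounded, the trajectory fluctuates about a force-free path without diverging, which is the situation sketched in Fig. 2.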

Figure 2. Error trajectory of system with random inputs.

In mathematical statistics the variates are mostly assumed to be unbounded, and given a sufficiently long time infinite values will be obtained. In actual physical control systems this is not the case, as both the input and output and their derivatives, and therefore the error function and all its derivatives, are bounded. If the control system is a useful one and stable, the actual trajectory, in spite of its fluctuations, will tend to the origin after a sufficiently long time. The upper and lower limits of r(t) then set a definite limit to the magnitude of excursions of the drunkard's walk from one of the force-free trajectories. As a result, it may be possible to put down limits on |e|max and |ė|max as shown in Fig. 2. In general, some small value |e|min may exist below which, no matter how r varies, ė can be just large enough to force the system into alignment so that |e| does not exceed |e|min. It is evident that the above discussion can be extended to systems of higher order. One may infer, therefore, that it is almost certain the space trajectory of error for zero input can be used to investigate the stability of nonlinear control systems with random inputs, provided the bounds of both input and output can be determined. In this sense, Lyapunov's approach to system stability and performance as presented in this paper would be useful in tackling problems of this nature.

Acknowledgments

Thanks are due to Drs. J. G. Brainerd and N. A. Finkelstein for their kind interest and support in the preparation of this paper. The first author would like to acknowledge the support of the National Science Foundation in his research on nonlinear analysis.

References

1. Probleme general de la stabilite du mouvement, A. M. Lyapunov, Ann. of Math. Studies, no. 17, Princeton University Press, 1949.

2. Control System Analysis and Design Via the "Second Method" of Lyapunov, R. E. Kalman and J. E. Bertram, ASME-AIEE-IRE-ISA National Automatic Control Conference, Dallas, Texas, Nov.
1959, paper No. 59-NAC-2. 3. Differential Equations: Geometric Theory, (book), S. Lefschetz, Interscience Publishers, Inc., New York, 1957.

4. Theorie und Anwendung der direkten Methode von Ljapunov (book), Wolfgang Hahn, Ergebnisse der Mathematik und ihrer Grenzgebiete, Neue Folge, Heft 22, Springer-Verlag, Berlin, 1959.

5. Theory of Stability of Motion (book), I. G. Malkin, AEC Translation 3352, Department of Commerce, USA, 1958.

6. Some Implications of Lyapunov's Conditions of Stability, H. A. Antosiewicz and P. Davis, J. Rat. Mech. and Anal., vol. 3, 1954, pp. 447-457.

7. Kronecker Products and the Second Method of Lyapunov, R. E. Bellman, Math. Nachrichten, 1959.

8. On the Application of Lyapunov's Methods to Stability Problems, B. S. Razumikhin, Prikl. Mat. Mekh., vol. 22, 1958, pp. 338-349.

9. A Method for the Construction of Lyapunov Functions for Linear Systems with Variable Coefficients, Ya. N. Roitenberg, Prikl. Mat. Mekh., vol. 22, 1958, pp. 167-172.

10. Metrization of Phase Space and Nonlinear Servo Systems, C. L. Kang and G. H. Fett, Journal of Applied Physics, vol. 24, no. 1, January 1953, pp. 38-41.

11. Analysis and Control of Nonlinear Systems; Nonlinear Vibrations and Oscillations in Physical Systems (book), Y. H. Ku, The Ronald Press, New York, 1958.

PRINCIPAL DEFINITIONS OF STABILITY D. R. Ingwerson Sperry Gyroscope Company

INTRODUCTION

To those familiar with linear control system design the concept of stability seems so obvious as to need no more than a casual definition. The situation is not quite the same in the field of rigid body dynamics. It is well known that a rigid body has two stable axes and one unstable axis. A few experiments at throwing a book in the air will verify this. Obviously one axis is unstable, for the book will not rotate about it. But what does the instability mean? Is it the same as for an unstable control system? In a sense it is, but a detailed discussion of the meaning of stability is required to see the connection.

The original ideas on the stability of dynamic systems were advanced by Poincare (1) and Lyapunov (2). These were extensions of the concepts of stability for static equilibrium under small disturbances. For many years they were deemed adequate for classical dynamics. In the development of linear feedback theory this early work was not used. Stability was based on the exponential decay of solutions of linear differential equations. Finally, with the revival of the Lyapunov method of analysis and its application to control theory, the general formulation of stability and its correlation among various disciplines was attempted.

Definitions of stability that are suitable for automatic control may be stated in many ways, not all of which are equivalent. The concept is somewhat arbitrary, depending on the particular requirements for the system. The definitions given here are those found useful by the author in applying the Lyapunov method to

-53

nonlinear control problems. Certain aspects that are more of a mathematical than an engineering nature are neglected. References 3, 4 and 5 contain discussions of various other definitions used in conjunction with the Lyapunov method. DESCRIPTION OF PHYSICAL SYSTEMS From an engineering standpoint the problem of stability begins with a physical system which is capable of changing its state, in some sense of the word, from time to time. This eliminates many of the questions of mathematical concern, such as existence and uniqueness of solutions, escape times etc. Specifically this discussion is concerned with a system of the type shown in Figure 1. It is a plant -- a dynamical system or otherwise -- whose outputs are to be regulated by means of a set of inputs which are compared to certain output variables. The differences are operated upon by a controller which in turn supplies signals to the plant. In addition, both the plant and controller may be subjected to the influence of various uncontrolled or free inputs. These represent such things as temperature variations, component aging or other parametric excitation. This is essentially a conventional control system. There is an equation, either differential or difference, which describes the nature of the changes that the variables of such a system undergo. One description might be the normal set of n first order differential equations;

-55

Figure 1. General control system (controller and plant, with state variables x1, ..., xn, reference inputs r1, ..., rp, and free inputs u1, ..., uq).

-56

dxi/dt = fi(xj, rk(t), us(t))      i = 1, 2, ..., n      (1)
                                   j = 1, 2, ..., n
                                   k = 1, 2, ..., p
                                   s = 1, 2, ..., q

These will be used as a specific reference. The explicit dependence of the equations on time is broken into two parts, the reference inputs rk(t) and the free inputs us(t). The xi represent the magnitudes of n different variables which completely specify the state of the plant and controller. They are called the state variables or, from a geometrical viewpoint, they form the components of the state vector. Any condition where all of the state variables are constant is called an equilibrium position. With regard to Equations (1) this is characterized by all of the functions fi being equal to zero. The dependence on time of a physical system may be divided into those factors which influence the equilibrium position and those which do not. In the set of reference equations it is assumed that the rk(t) affect the equilibrium position while the us(t) do not. An equilibrium position can exist only when the rk are constant.

The equation describing the motion of a system has a solution defined as any set of functions Φi(t) which, if they are substituted for the state variables xi, satisfy the equation. Solutions are dependent on time in the manner prescribed by the equation of motion and also upon an initial state xi(t0) and the time t0 when this state occurs. Thus the notation xi(t) is understood to imply a set of functions which have the value xi(t0) at a specified time t0: xi(t) = Φi(t0, xj(t0); t).
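A small sketch of these ideas, using a hypothetical damped-pendulum plant (all names and numbers here are illustrative, not from the text): the reference input r shifts the equilibrium position, while the free input u, which enters through the damping, does not.

```python
import numpy as np

def f(x, r, u):
    """Right-hand side of Equations (1) for a hypothetical damped pendulum."""
    x1, x2 = x                    # state variables: angle and angular rate
    damping = 0.5 + u             # free input u models parameter variation (e.g. aging)
    return np.array([x2, -np.sin(x1) - damping * x2 + r])

r = 0.3                                 # a constant reference input
x_eq = np.array([np.arcsin(r), 0.0])    # equilibrium position: all f_i vanish here

print(np.allclose(f(x_eq, r, 0.0), 0.0))   # True
print(np.allclose(f(x_eq, r, 0.7), 0.0))   # True: u does not move the equilibrium
```

Because u multiplies the rate x2, which is zero at equilibrium, only r determines where the equilibrium lies, just as assumed for the reference equations.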

-57

To interpret the implications associated with general concepts of stability the idea of an n-dimensional state space is used. Each of the state variables is imagined to represent a length along an axis in this hyperspace. If the variables are functionally independent no three axes lie in a plane. Under special circumstances this space reduces to the conventional phase space, in which case each succeeding axis represents the rate of change of the quantity measured along the one preceding it. Any point in state space is represented by a vector having the various state variables as components. The totality of all points in the space represents all possible states which the system may assume. With the passage of time the state vector traces a curve in the space, known as a system trajectory. Figure 2 is a possible trajectory in three-dimensional space. It may be thought of as the projection of a solution of the equations, plotted in a space containing the n state axes and a time axis, onto the state space. Such a space and the projection is shown in Figure 3.

STABILITY CONCEPTS AND DEFINITIONS

Solutions completely specify the motion of a system. When these are known any more general properties of the motion are determined. However, if the solutions are divided into general classes, certain properties of each class may be found without knowledge of the solutions. In some cases this is a simpler task than the

-58

Figure 2. System trajectory.    Figure 3. Projection of a solution.

-59

determination of solutions. One useful division is into the classes "stable" and "unstable." The primary purpose of the Lyapunov method is to furnish a criterion by which an investigator can decide into which of these classes the solutions of a particular system fall. To make the decision a definition of stability is required. The definition must supply both necessary and sufficient conditions for a system to have stable solutions. On the other hand a criterion for stability, based on the definition, may provide only a sufficient condition, as is the case in most applications of the Lyapunov method.

The concept of stability of a dynamic system is basically the question of whether or not it will return to a particular state after it has been disturbed in some way. Actually some state, either stationary or dynamic, is always stable, and the question is as to the stability of a specified state. Various definitions of stability are available depending on the nature of the state and the manner in which the system approaches or deviates from it.

Stability in the Sense of Lyapunov. The mathematical definition of the stability of Equation (1) as stated by Lyapunov is as follows. Let Xi(t) be a solution of the equation and define a new set of variables qi = xi - Xi. If these are substituted into (1), a new set of equations in qi results which has an equilibrium position at qi = 0. This is true whether the original set of equations possesses an equilibrium position or not.

-60

The equilibrium position is called stable in the sense of Lyapunov if for every ε > 0 there exists a δ(ε) > 0 such that |q(t)| < ε for all t > t0 whenever |q(t0)| < δ. In other words, the equilibrium position, qi = 0, is stable if the magnitude of the new state vector q can be made to remain permanently below an arbitrary upper bound by choosing its initial magnitude sufficiently small. If q approaches zero as t approaches infinity, the equilibrium position is called asymptotically stable. Extending this to (1), the solution Φi(t) is called stable in this sense if

|xi(t) - Φi(t)| < ε whenever |xi(t0) - Φi(t0)| < δ for t > t0.

Thus if one solution remains arbitrarily close to another when their initial values are sufficiently close together, that solution is called stable.

A brief consideration of the aspects of control system design makes it evident that these definitions are not completely satisfactory. First, the question arises of what happens when the initial disturbance cannot be made sufficiently small. For instance, the state of a stable system, by this definition, can go into a limit cycle or increase without limit if the initial disturbance is above a specified bound. Second, the fact that one solution remains close to another is of little value if both deviate greatly from the desired solution. Fortunately, in the case of linear systems the definitions above are coincident with broader aspects of stability.

More General Definitions of Stability. A more general set of stability classifications is defined in the following discussion. The basic definition is concerned with the stability of an equilibrium position. Let the system considered

be one that has an equilibrium position; for example, in the set of Equations (1) all of the forcing functions r_k(t) are constant. Let x_e be the vector representing the equilibrium position and let R be some region in state space, bounded by a hypersurface and containing the point x_e. It is desired to classify solutions for which the initial state vector represents a point in R at a time t_0 as stable or unstable. A state vector at some point in R at t_0 will be at some other point in state space at a later time t_1. Another state vector initially in R will be at another point at t_1. In terms of two such vectors an analytic definition of stability is:

Definition: 1) An equilibrium position x_e is stable in a region R at a time t_0 if for every ε > 0 there exists a δ(ε) > 0 such that |φ_1(t_0) - φ_2(t_0)| < δ implies |φ_1(t) - φ_2(t)| < ε for all φ_1(t_0) and φ_2(t_0) in R and all t > t_0.

This definition is illustrated in Figure 4. All trajectories originating in a sphere of radius δ arrive at some other set of points at t_1. These lie within a sphere of radius ε. Regardless of the size of ε it must be possible to choose δ small enough so that the trajectories remain within the sphere of radius ε for all time, and this must be true for all possible states originating in R. These requirements are very weak as far as the stability of control systems is concerned. They effectively eliminate the three unstable phenomena: unbounded variations, other equilibrium states, and limit cycles. By this definition no state originally in R can become infinite at infinite time, for if it were possible two adjacent initial

Figure 4. Stability of solutions

states could exist, one of which remains finite while the other becomes infinite. At least one state, x_e, remains finite. If a finite value of ε is chosen it is impossible to choose a δ small enough to keep the difference between these two states less than ε after a sufficiently long time. No trajectories initiating in R can go to other equilibrium positions, if they exist, for if this were possible two adjacent initial states could exist, one of which would approach another equilibrium state while the other would not. Again at least one trajectory exists which remains at a specified position. Then for a sufficiently small value of ε it is impossible to choose a δ small enough that the variation between the states will not exceed this value. A similar argument proves that if any trajectories initiating in R go to a limit cycle the requirements of the definition are violated.

It is noted that under this definition a stable system does not necessarily return to the equilibrium position. A typical example is a second order conservative system such as might be represented on an analog computer by two integrators, each of whose outputs feeds the input of the other. The poles of the transfer function are purely imaginary and any disturbance results in a continued oscillation of fixed amplitude. Unlike a limit cycle, however, the amplitude is dependent on the initial disturbance. A stronger kind of stability is given by the following statement.

Definition: 2) An equilibrium position x_e is asymptotically stable in a region R at a time t_0 if it is stable and if φ(t) → x_e as t → ∞ for all φ(t_0) in R.

In applying the Lyapunov method it is customary to distinguish stability for cases where (2) is not satisfied by the statement:

Definition: 3) An equilibrium position x_e is neutrally stable in a region R at a time t_0 if it is stable and not asymptotically stable.

Geometrically, asymptotic stability means that the region R, which becomes a region R_1 at time t_1 as shown in Figure 4, eventually becomes arbitrarily small. In other words, no matter what initial disturbance is given to the system within the limits specified at t_0, the system will return to its equilibrium position. As far as control systems are concerned this behavior is usually necessary to achieve their objectives. Neutral stability is primarily of interest in these applications for interpretation of analytical results. It represents the boundary between stability and instability. However, in other applications, such as the motion of rigid bodies or planets in orbit, it is the only kind of stability possible.

So far the definitions have been concerned with the behavior of solutions after a long time has elapsed since the initial disturbance. Trajectories on the boundary of R at t_0 are on the boundary of R_1 at t_1. Thus a system which is asymptotically stable in R at t_0 is also asymptotically stable in R_1 for all t_1 after t_0. However, R_1 can become considerably larger than R before the trajectories begin to converge.

In effect an asymptotically stable system can behave as an unstable one for an indefinite length of time. In practical problems not only the asymptotic character of the solutions but also the manner in which they exhibit this behavior is of importance. A convenient way to state a definition of stability having more desirable properties for automatic control is to use a construction similar to the formulation of the second method itself. Thus the Lyapunov method is especially suited for providing the desired information. It will be called monotonic stability because of the similarity to a monotonic sequence. Imagine the space inside of the region R_1, derived from R as above, to be divided into a continuous nest of non-intersecting hypersurfaces enclosing the point x_e. The innermost surface reduces to a point at x_e. This is shown for two dimensions in Figure 5. Designate one of the surfaces by S_1 and another inside of it by S_2.

Definition: 4) The equilibrium position is monotonically stable in R at t_0 if it is possible to choose a set of surfaces in R_1 such that, if φ(t_1) is a state vector representing a point on any surface S_1 at t_1, φ(t_2) represents a point on some surface S_2 inside of S_1 for all t_2 > t_1 and all t_1 > t_0.

This definition is concerned with the behavior of the system at all times rather than only after a long time since t_0. It necessarily implies asymptotic stability, since if the solutions go inside of every surface they must eventually converge to x_e. It is a stronger requirement than the "exponential stability" defined in reference 4; for even

Figure 5. Monotonic stability

though the solutions remain bounded by an exponential function they may increase at times, whereas by this definition they must always tend toward equilibrium. It is a property of autonomous systems that this is the only kind of asymptotic stability possible, but it is not necessarily restricted to autonomous systems.

Stability As a Function of Initial Conditions

Definitions 1 to 4 are concerned with stability for a given set of initial conditions, namely for an initial state in a region R at a time t_0. These are usually dependent upon the choice of R and t_0. It may be that the system is such that the definitions hold irrespective of the time t_0.

Definition: 5) The equilibrium position is called respectively uniformly stable, uniformly asymptotically stable, uniformly neutrally stable, or uniformly monotonically stable if the choice of the region R in definitions 1, 2, 3, or 4 is independent of the initial time t_0.

Specifically, the requirements of (5) are always satisfied for autonomous systems. For non-autonomous systems they may be fulfilled for certain regions and not for others. Uniform stability is a desirable feature for control systems. For example, if it is known that a control is unstable when the initial disturbance exceeds a specified bound, a limit stop may be added below the bound to prevent this occurrence. However, if the stability is not uniform it may be difficult to specify the bound or design the limit stop.

So far nothing has been said about the size of the region R. Sometimes it may be chosen as small as is necessary to satisfy the particular requirements. When this is all that is required the investigation of stability is a comparatively simple task. Usually a linearization about the equilibrium position is sufficient.

Definition: 6) The equilibrium position is called respectively A-stable, asymptotically A-stable, neutrally A-stable, or monotonically A-stable at t_0 if definitions 1, 2, 3, or 4 are fulfilled for initial states within an arbitrarily small region A about the equilibrium position.

This condition is referred to as "stability in the small" in Russian literature. The "stability in the sense of Lyapunov" defined previously is of this kind. It has considerable mathematical interest but is of little use for control system design since the possibility of rather large disturbances always exists. A more practical condition is given by:

Definition: 7) The equilibrium position is called respectively B-stable, asymptotically B-stable, neutrally B-stable, or monotonically B-stable at t_0 if definitions 1, 2, 3, or 4 are fulfilled for initial states within a specified and finite region B containing the equilibrium position.

This is usually called "stability in the large" by Russian authors. The major problem in nonlinear systems where a finite region of

stability exists is in determining the largest boundary within which the solutions tend toward equilibrium. This is usually very difficult, but if a boundary is determined which is so large that no disturbance is likely to exceed it the problem is solved for practical purposes. The Lyapunov method gives such information more or less precisely. A special case of the above occurs when the stability requirements are satisfied for all possible initial states.

Definition: 8) The equilibrium position is called respectively B̄-stable, asymptotically B̄-stable, neutrally B̄-stable, or monotonically B̄-stable at t_0 if definitions 1, 2, 3, or 4 are fulfilled for all possible initial states.

This clearly implies B-stability, as B-stability implies A-stability. It is alternatively called "stability in the whole" and "stability in the large" in various translations of Russian papers. This is often considered to be the only kind of stability of interest for control systems. With anything less there is always the risk of some random disturbance putting the system into an unwanted mode of operation. Of course in linear systems the nature of the stability is always independent of the magnitude of the initial disturbance. For these there is no distinction between A-, B-, or B̄-stability. The process of classification of solutions could be carried on to considerably greater length but the definitions given so far serve to outline the principal distinctions among the ways solutions can behave. The strongest classification is uniform monotonic B̄-stability while the weakest is A-stability. The rest fall at intervals

between these two. Finally, instability is defined by the statement:

Definition: 9) If the equilibrium position is not stable it is unstable.

Figure 6 is a flow diagram illustrating the various classifications. The restrictions on the system become progressively stronger in going from left to right on the diagram or from the bottom to the top. The implications of each class can be noted by following the flow back to the solution. Thus B-stability implies A-stability, but neutral B-stability does not imply asymptotic A-stability, nor does monotonic A-stability necessarily follow from asymptotic B-stability.

Stability of Control Systems with Forcing Functions

When the system is subjected to dynamic inputs which affect the equilibrium position, as when the r_k(t) in Equations (1) are not constant, the stability of an equilibrium position has no meaning. Then it is customary, in mathematical discussions, to follow the method of Lyapunov described previously. Some solution is called the "undisturbed motion" and all other solutions, called "disturbed motions", are classed according to whether or not they converge toward this one. The procedure followed here is similar in meaning but it is stated differently. A control system is designed to follow a particular set of inputs. The information that is desired is the stability for these inputs. For this purpose the description "reference inputs" is understood to include all functions of time which influence an equilibrium position whether they are specifically intended as references or not.
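The way a reference input fixes, and can move, an equilibrium position is easy to see in a minimal numerical sketch. The first-order system below and all of its numbers are invented for illustration: a constant reference r fixes an equilibrium x_e = r, and a slowly varying r drags the equilibrium, and the state with it, through state space.

```python
import math

# Hypothetical first-order system: dx/dt = -x + r(t).  For a constant
# reference r the equilibrium position is x_e = r, and it moves
# continuously as r changes.

def step(x, r, dt=0.01):
    return x + dt * (-x + r)

# Constant reference: the state settles at the equilibrium x_e = r = 3.
x_const = 0.0
for _ in range(2000):
    x_const = step(x_const, 3.0)
print(x_const)   # settles near 3.0

# Slowly varying reference: the state follows the moving equilibrium.
x, t, worst = 0.0, 0.0, 0.0
for k in range(200000):
    r = math.sin(0.001 * t)   # slow compared with the unit time constant
    x = step(x, r)
    t += 0.01
    if k > 5000:              # ignore the initial transient
        worst = max(worst, abs(x - r))
print(worst)   # small: the state tracks the moving equilibrium
```

If the reference varied quickly compared with the system's time constant, the tracking error would no longer be small; this is the "vary slowly enough" proviso of the discussion that follows.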

Figure 6. Stability Classifications

When the reference inputs are identically zero the system has some equilibrium position which may be classed as stable or unstable. Similarly, when they have a constant set of values there is another equilibrium position which may be considered. For physical systems this position depends continuously on the set of values assumed by the inputs. Thus at any time there can be said to be an equilibrium position corresponding to the instantaneous values of the references. A time varying set of references may be considered as an effective movement of an equilibrium state from point to point in state space. If they vary slowly enough the state of the system will follow a stable position. Of course the references may affect other aspects of the motion as well as the equilibrium state.

Definition: 10) Let x_e(t) be the equilibrium position corresponding to a set of reference inputs r_k(t) and let φ(t_0) = x_e(t_0). A system is stable at t_0 with respect to this set of inputs if there exists a T > t_0 such that φ(t_1) falls inside of a region for which the equilibrium position corresponding to r_k(t_1) is stable by definition 1 for all t_1 > T.

In effect this definition states that the motion is stable if there is some time after which the references may be held constant and the ensuing motion is stable. It differs from the usual one in that no initial disturbance is specified. This is taken into account in specifying the references themselves, and also the possibility of continuously acting disturbances is admitted. The solution may temporarily go into an unstable region provided that the inputs

are such that it returns to a stable region and remains there. This is somewhat risky when classes of inputs rather than specific inputs are considered. To avoid such behavior a stronger restriction is imposed.

Definition: 11) A system is called continuously stable with respect to a set of inputs if definition 10 is satisfied for all t_1 > t_0.

Here the inputs may be fixed at any time after t_0 and the resulting motion must be stable. Both definitions (10) and (11) may be extended to include asymptotic stability, uniform stability, etc., by inserting the appropriate one of definitions (2) to (8) instead of definition (1). Thus a set of classifications for systems with forcing function inputs is defined. Many more refined distinctions exist which could be used as a basis for further subdivision, but these describe the major courses that a state of motion may follow.

CONCLUSIONS

Various classifications of stability for equilibrium and dynamic states have been discussed. It is emphasized that these are not unique. The variation from the strongest kind of stability to instability is a continuous process which may be graduated by a variety of demarcations. For most applications in automatic control only the most restrictive categories are of value; however, there are circumstances where weaker classifications may be used.

In general it can be said that the strongest kinds of stability are also the easiest to investigate. For example, a Lyapunov function for a neutrally stable system is unique, whereas a variety are possible for a monotonically stable one. The investigation of anything but B-stability for dynamic states of a nonlinear system is particularly difficult since a set of solutions must be known or at least approximated by suitable upper bounds.

REFERENCES

1. Poincare, H., "Les methodes nouvelles de la mecanique celeste," Vols. I-III, Gauthier-Villars et Cie, Paris, 1892.

2. Lyapunov, A. M., "Probleme general de la stabilite du mouvement," Princeton University Press, 1949. Reprint of the original French translation.

3. Antosiewicz, H. A., "A Survey of Lyapunov's Second Method," Annals of Mathematics Studies, Number 41, Princeton University Press, 1959.

4. Kalman, R. E., and Bertram, J. E., "Control System Analysis and Design via the 'Second Method' of Lyapunov," ASME Journal of Basic Engineering, Paper No. 59-NAC-2, 1959.

5. Hahn, W., "Theorie und Anwendung der Direkten Methode von Ljapunov," Springer-Verlag, 1959.

THE DIRECT METHOD OF LYAPUNOV IN THE ANALYSIS AND DESIGN OF DISCRETE-TIME CONTROL SYSTEMS

J. E. Bertram
Research Center, Mohansic Laboratories
Yorktown Heights, New York

1. Introduction:

Lyapunov's Direct Method, often called the Second Method, is one of the most general methods known for the study of the stability of equilibrium solutions of dynamic systems described by ordinary differential or difference equations. The present paper is restricted to the study of dynamic systems in which the time variable changes discretely, i.e. those governed by ordinary difference equations. With the advent of digital components, pulsed radar, and analytical instruments employing sampling techniques, discrete-time systems have become quite common to the control engineer. The objective of the paper is to present methods rather than the rigorous development of the mathematical structure. Good sources for the statements and proofs of the mathematical theorems underlying the method can be found in [1], [2], [3], and [4]. These references also contain extensive bibliographies.

2. Description of Discrete-Time Dynamic Systems:

A large class of discrete-time dynamic systems may be described by the vector difference equation

    x(t_{k+1}) = f(x(t_k), u(t_k))    (2.1)

where the t_k (k an integer) indicate discrete values of time, ... < t_{k-1} < t_k < t_{k+1} < ..., with t_k → ∞ as k → ∞, at which the behavior of the system can be or is observed; t_k is regarded as an independent variable analogous to t in continuous-time systems. Equation (2.1) is equivalent to the set of n scalar difference equations

    x_i(t_{k+1}) = f_i(x_1(t_k), ..., x_n(t_k), u_1(t_k), ..., u_m(t_k)),  i = 1, ..., n    (2.2)

The vector x is the state of the system (2.1); its components x_i are the state variables. The vector u is the control input of the system; its components u_i are the control variables. The system is specified by the vector valued function f. The integer n is the order of the system. Usually the number of control inputs, m, is less than the order of the system, n (m < n).
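Equations of the form (2.1)-(2.2) are convenient computationally: given f and an input sequence, the state trajectory is produced by direct iteration. A minimal sketch (the particular second-order f and the zero-input sequence are invented for illustration):

```python
# Iterate x(t_{k+1}) = f(x(t_k), u(t_k)) for a hypothetical 2nd-order f.
def f(x, u):
    x1, x2 = x
    return (0.5 * x1 + 0.1 * x2 + u, 0.4 * x2 - u)

def trajectory(x0, inputs):
    """Return [x(t_0), x(t_1), ...] by direct iteration of (2.1)."""
    xs = [x0]
    for u in inputs:
        xs.append(f(xs[-1], u))
    return xs

xs = trajectory((1.0, -1.0), [0.0] * 20)
print(xs[-1])   # the unforced state decays toward the origin
```

The same loop serves for any f, linear or not, which is one reason difference-equation models are natural on digital computers.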

To illustrate how equations in the form of (2.1) are obtained from control problems, consider the block diagram of the sampled-data system in Figure 1. It consists of the following:

(a) The plant and the feedback instruments, which are governed by ordinary, linear, time-invariant, differential equations. As is customary in the control system literature, the plant and the instruments are described by rational transfer functions.

(b) The sample-and-hold element, which replaces the continuous error signal e(t) with a piecewise constant sampled signal e*(t) described by

    e*(t) = e(t_k),  t_k <= t < t_{k+1},  k = 0, 1, 2, ...    (2.3)

The sampling is periodic with period T.

(c) The amplifier, which as shown has a gain of K for |e| < e_0 and saturates for |e| > e_0. The transfer characteristic of the amplifier is given by the function f so that

    m(t) = f(e(t))    (2.4)

The first step in obtaining the equations (2.1) is to redraw the block diagram in such a way as to make the state variables accessible. This can be done by simulating the plant and the feedback instruments on an analog computer. One such simulation of the system is shown in the block diagram of Figure 2. The state variables are the outputs of the two integrators and are labeled x_1 and x_2. The values of the state variables at the sampling instant t = t_{k+1}, in terms of the state variables and the input at the sampling instant t = t_k, are given by

    x_1(t_{k+1}) = x_1(t_k) + T x_2(t_k) + (T²/2) m(t_k)    (2.5)
    x_2(t_{k+1}) = x_2(t_k) + T m(t_k)

where

    m(t_k) = f(e(t_k))    (2.6)

Figure 1. Sampled-data system

Figure 2. Simulation of sampled-data system

Figure 3. Examples of norms

and

    e(t_k) = u(t_k) - x_1(t_k) + a x_2(t_k)    (2.7)

Therefore, substituting (2.6) and (2.7) in (2.5), the system is described by the equations

    x_1(t_{k+1}) = x_1(t_k) + T x_2(t_k) + (T²/2) f(u(t_k) - x_1(t_k) + a x_2(t_k))    (2.8)
    x_2(t_{k+1}) = x_2(t_k) + T f(u(t_k) - x_1(t_k) + a x_2(t_k))

which are in the form of (2.1) and (2.2), where

    f_1(x_1, x_2, u) = x_1 + T x_2 + (T²/2) f(u - x_1 + a x_2)

and

    f_2(x_1, x_2, u) = x_2 + T f(u - x_1 + a x_2)

3. Concepts of Stability:

If the control input u(t_k) ≡ 0 for all t_k, we say that the system (2.1) is unforced:

    x(t_{k+1}) = f(x(t_k))    (3.1)

A state x_e of the unforced dynamic system (3.1) is an equilibrium state if

    x_e = f(x_e)    (3.2)

Thus if the unforced system (3.1) is started in the equilibrium state, x_e, it remains in this state for all t_k. This is, of course, a mathematical statement, and the actual physical behavior raises the problem of stability. It is never physically possible to start the system exactly in its equilibrium state and, in addition, the system is always subject to outside forces which are not considered in the mathematical description. Thus the system is constantly being disturbed and displaced from its equilibrium state. Roughly speaking, if it remains near the equilibrium state we say that the system is stable. If the system remains near the equilibrium state and, in addition, tends to return to equilibrium we say that the system is asymptotically stable.
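This return-to-equilibrium behavior can be observed by iterating equations of the form (2.8) directly. The sketch below uses illustrative values not specified in the text: a sampling period T, a unit-limit saturation for the amplifier f, and a feedback coefficient a taken negative here so that the particular loop is stable.

```python
# Simulate a sampled-data loop of the form (2.8) with u = 0 and a
# saturating amplifier f(e) = clip(K*e, -1, 1).  T, a, K are invented.
T, a, K = 0.1, -0.5, 1.0

def f_amp(e):
    return max(-1.0, min(1.0, K * e))

def step(x1, x2, u=0.0):
    m = f_amp(u - x1 + a * x2)
    return x1 + T * x2 + 0.5 * T**2 * m, x2 + T * m

x1, x2 = 1.0, 0.0            # initial displacement, no input
for _ in range(2000):
    x1, x2 = step(x1, x2)
print(x1, x2)                 # both decay toward the equilibrium (0, 0)
```

Starting instead from a very large displacement would drive the amplifier deep into saturation, which is exactly the kind of behavior the local stability concepts below are meant to separate from stability in the large.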

Now to make these notions more precise, let us assume that the equilibrium state being investigated is located at the origin: x_e = 0. (This can always be accomplished by a translation of coordinates.) Let ||x||_e be the Euclidean length of the vector x (||x||_e = (x_1² + ... + x_n²)^{1/2}). If S(R) denotes the spherical region of radius R about the origin, then S(R) consists of all points x such that ||x||_e < R.

The equilibrium state at the origin is said to be stable if corresponding to each S(R) there is a spherical region S(r) of smaller radius such that for an initial state x(t_0) starting in S(r) the solution x(t_k) does not leave S(R) for all t_k ≥ t_0. If, in addition, there exists a third spherical neighborhood of the origin S(R_0) such that every solution starting in S(R_0) approaches the origin as t_k → ∞, the system is said to be asymptotically stable.

Stability and asymptotic stability are local concepts. In practice some knowledge as to the size of the region in which stable behavior is to be expected is required. In many control system applications it is important to assure that no matter how large the perturbation, the system tends to return to its equilibrium state. This is asymptotic stability in the large.

4. Principal Result:

If within the neighborhood of the equilibrium state the total energy of the unforced system is always decreasing, we should expect that the equilibrium state is asymptotically stable. Lyapunov's direct method generalizes this idea. Suppose that within some neighborhood S(R) of the origin it is possible to construct a scalar function V(x), continuous in x, and such that V(0) = 0 and V(x) > 0 for all x ≠ 0.
Define, with reference to the unforced system (3.1),

    ΔV(x(t_k)) = V(x(t_{k+1})) - V(x(t_k))    (4.1)

Now, if x(t_k) is the solution of (3.1), the change of the Lyapunov function V(x) along the solution sequence {x(t_k)} is

    ΔV(x(t_k)) = V(f(x(t_k))) - V(x(t_k))    (4.2)

which is obtained without any knowledge of the solutions but directly from the structure of the difference equations. If such a Lyapunov function V(x) can be

constructed in a neighborhood S(R) of the equilibrium state, and if in that neighborhood V(x) > 0 for x ≠ 0, then the equilibrium state is asymptotically stable if ΔV(x) < 0. This is one form of Lyapunov's Asymptotic Stability Theorem for difference equations. The proof of this theorem is quite simple and can be found in [3]. A few additional conditions are necessary to assure asymptotic stability in the large. Because of its many applications in the control field I shall state it as a theorem.

Theorem 1. Consider the unforced, discrete-time, dynamic system x(t_{k+1}) = f(x(t_k)) where f(0) = 0. Suppose there exists a scalar function V(x) such that V(0) = 0 and

(i) V(x) > 0 when x ≠ 0;
(ii) ΔV(x) < 0 when x ≠ 0;
(iii) V(x) is continuous in x;
(iv) V(x) → ∞ when ||x|| → ∞.

Then the equilibrium state x_e = 0 is asymptotically stable in the large and V(x) is a Lyapunov function.

Since the major difficulty in applying the Second Method is the construction of a suitable Lyapunov function, it is desirable to weaken condition (ii) of Theorem 1 and thereby enrich the class of functions. Actually ΔV(x) need only be non-positive (ΔV(x) ≤ 0) as long as it does not vanish identically on any solution sequence of the difference equation. Thus (ii) can be replaced by the conditions

(ii') ΔV(x) ≤ 0 for all x;
(ii'') ΔV(x) does not vanish identically for any sequence {x(t_k)} satisfying the difference equation being studied.
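Condition (ii) can be probed numerically by evaluating ΔV(x) = V(f(x)) - V(x) of (4.2) at sample states. The sketch below uses an invented stable linear system and the Euclidean-length-squared candidate V(x) = x'x; a sampled check of this kind is evidence, not a proof.

```python
import random

# Hypothetical stable system x(t_{k+1}) = Phi x(t_k),
# Phi = [[0.5, 0.1], [0, 0.5]].
def f(x):
    x1, x2 = x
    return (0.5 * x1 + 0.1 * x2, 0.5 * x2)

def V(x):
    return x[0] ** 2 + x[1] ** 2    # candidate Lyapunov function

def dV(x):
    return V(f(x)) - V(x)           # Eq. (4.1)-(4.2): no solutions needed

random.seed(0)
samples = [(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(1000)]
# Condition (ii) of Theorem 1 holds at every sampled state.
print(all(dV(x) < 0 for x in samples))   # True
```

Note that ΔV is computed from f alone, which is the practical point of (4.2): the test requires no knowledge of the solution sequences themselves.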

5. Applications:

Traditionally, the study of automatic control systems proceeds along the following path. After an adequate mathematical model of the system is obtained, conditions for stability are sought. Once the system is stabilized, attention is usually focused on the transient behavior. Then the effects of arbitrary inputs and noise disturbances are considered. Finally, when the analysis problems related to a class of control problems are understood, effort is directed toward deriving a synthesis or design procedure which ensures a stable system whose transient behavior and response to inputs is optimal in some specified sense. In this section, with the aid of numerous examples, it will be shown how the "direct method" may be used for the study of discrete-time automatic control systems of the regulator type.

(a) The Mathematical Model. We are concerned with the analysis and design of automatic control systems in which the plant to be controlled is described by the ordinary difference equation

    x(t_{k+1}) = g_1(x(t_k), u(t_k), v(t_k))    (5.1)

where x is the state vector of the plant, u is the control vector, and v is an uncontrolled input or disturbance vector. The plant is described by the vector valued vector function g_1. It is assumed that g_1(0, 0, 0) = 0. The regulator type control problem is concerned with generating the control vector, u, by means of a feedback structure so that the state of the plant is near in some sense to the equilibrium state x = 0. In function notation

    u(t_k) = g_2(x(t_k))    (5.2)

Thus the closed-loop regulator system that we are considering is of the form

    x(t_{k+1}) = g_1(x(t_k), g_2(x(t_k)), v(t_k))    (5.3)
               = f(x(t_k), v(t_k))

Example 5.1. In many practical situations the dynamic behavior of the plant is adequately approximated by an ordinary, linear, time-invariant, difference equation

    x(t_{k+1}) = Φ x(t_k) + Δ u(t_k) + v(t_k)    (5.4)

(i.e. g_1 is a linear function of x, u, and v). With such plants the control input, u(t_k), is often made a linear function of the state, x(t_k):

    u(t_k) = B x(t_k)    (5.5)

Thus the overall system is described by the relation

    x(t_{k+1}) = Ψ x(t_k) + v(t_k)    (5.6)

where Ψ = Φ + Δ B.

Example 5.2. Even when the plant is described by a nonlinear difference equation it is often found that the dynamic behavior is close to that of a linear system. Thus it is convenient to describe such systems in the following way:

    x(t_{k+1}) = f(x(t_k), u(t_k), v(t_k))    (5.7)
               = Φ x(t_k) + Δ u(t_k) + g(x(t_k), u(t_k), v(t_k))

where the elements of Φ and Δ, Φ_ij and Δ_ij, are ∂f_i/∂x_j and ∂f_i/∂u_j evaluated at the equilibrium point, x = 0, u = 0. The vector valued vector function g is defined by (5.7). Because of its simplicity, such systems are often stabilized, as in the linear case, by linear feedback of the state x(t_k). That is,

    u(t_k) = B x(t_k)    (5.8)

In this case the overall system is governed by the difference equation

    x(t_{k+1}) = Ψ x(t_k) + g(x(t_k), v(t_k))    (5.9)

where, as before, Ψ = Φ + Δ B.

In Example 5.1 we are interested in the conditions on the matrix Ψ so that the overall system performance is satisfactory in the presence of the disturbance v. Example 5.2 is similar, except that the disturbance has become a nonlinear term.

(b) Conditions for Stability. As we previously pointed out, the major problem involved in applying the direct method is to find a Lyapunov function. For discrete-time systems, a class of functions which has proven extremely useful in a number of cases is just the norm of the state vector x. The norm of x, denoted by ||x||, may be thought of as a measure of the length of the vector. To see that a norm might make a useful Lyapunov function, let us define the concept more precisely. A norm is a function which assigns to every vector x in a given Euclidean space a real number denoted by ||x|| such that

(i) ||x|| = 0 if, and only if, x = 0;
(ii) ||x|| > 0 for all x ≠ 0;
(iii) ||αx|| = |α| ||x|| for all x and α a real constant;
(iv) ||x + y|| ≤ ||x|| + ||y|| for all x and y.

The best known norm is probably the Euclidean measure of length

    ||x||_e = (x'x)^{1/2} = (Σ_{i=1}^n x_i²)^{1/2}

Some other commonly used norms are

    ||x||_m = max_i {|x_i|}

    ||x||_s = Σ_{i=1}^n |x_i|

It is easily shown that all three satisfy the four conditions of a norm.
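That claim is easy to spot-check numerically. A minimal sketch verifying conditions (i)-(iv) for the three norms on random vectors follows; a sampled check of this kind illustrates the conditions rather than proving them.

```python
import random

def norm_e(x): return sum(xi * xi for xi in x) ** 0.5   # Euclidean
def norm_m(x): return max(abs(xi) for xi in x)          # max norm
def norm_s(x): return sum(abs(xi) for xi in x)          # sum norm

random.seed(1)
def rand_vec(n=3):
    return [random.uniform(-5, 5) for _ in range(n)]

for norm in (norm_e, norm_m, norm_s):
    assert norm([0.0, 0.0, 0.0]) == 0.0                              # (i)
    for _ in range(200):
        x, y, a = rand_vec(), rand_vec(), random.uniform(-3, 3)
        assert norm(x) > 0                                            # (ii)
        assert abs(norm([a * xi for xi in x]) - abs(a) * norm(x)) < 1e-9      # (iii)
        assert norm([xi + yi for xi, yi in zip(x, y)]) <= norm(x) + norm(y) + 1e-9  # (iv)
print("all norm axioms hold on the samples")
```

The small tolerances in (iii) and (iv) allow for floating-point rounding; the inequalities themselves are exact in real arithmetic.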

The use of the norm as a Lyapunov function is facilitated by the geometrical picture of the locus of x for a constant value of the norm. In Figure 3 the loci in the two dimensional case for the three norms set equal to unity are shown. Note that for ||x||_e = 1 the locus is a circle of radius one centered at the origin; for ||x||_m = 1 the locus is a square with sides of length two parallel to the coordinate axes and centered at the origin; and for ||x||_s = 1 the locus is again a square, but in this case the sides are of length √2 and the vertices lie on the coordinate axes one unit from the origin.

It is often desirable to emphasize certain elements of the vector x, so that in the cases discussed the circle becomes an ellipse, the first square a rectangle, and the second square a diamond. This is accomplished by weighting each of the elements x_i by positive, scalar constants. In this way we obtain the norms

    ||x||_{e,c} = (Σ_{i=1}^n c_i x_i²)^{1/2}

    ||x||_{m,c} = max_i {c_i |x_i|}

    ||x||_{s,c} = Σ_{i=1}^n c_i |x_i|

where the c_i are positive scalar constants. In the examples which follow it will be convenient to rotate the major and minor axes of the ellipse. For any rotation of the type described it is always possible to find a nonsingular, linear transformation T acting on the vector x which produces it. From this point of view we can define a generalized Euclidean norm

    ||x||_T = [(Tx)'(Tx)]^{1/2} = [x'T'Tx]^{1/2} = [x'Px]^{1/2} = (Σ_{i=1}^n Σ_{j=1}^n x_i p_ij x_j)^{1/2}

The matrix P = T'T generated in this way is always symmetric and has the property that x'Px > 0 for all x ≠ 0. A matrix with this property is said to be positive definite. Thus the generalized Euclidean norm is defined for any positive definite matrix P. For a complete discussion of the concept of norm and normed linear spaces see [5].
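The construction P = T'T can also be sketched numerically. The particular T below is an invented rotation-plus-scaling; any nonsingular T serves. The sketch checks that ||x||_T computed through P agrees with the Euclidean length of Tx, that P is symmetric, and that x'Px > 0 away from the origin (a sampled check, not a proof).

```python
import math, random

# An invented nonsingular T: rotate by 30 degrees, then scale the axes.
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
T = [[2 * c, -2 * s],
     [s, c]]

# P = T'T
P = [[sum(T[k][i] * T[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

def norm_T(x):
    # [x' P x]^{1/2}
    return sum(x[i] * P[i][j] * x[j] for i in range(2) for j in range(2)) ** 0.5

def euclid_Tx(x):
    # ||Tx||_e, the defining expression [(Tx)'(Tx)]^{1/2}
    Tx = [sum(T[i][j] * x[j] for j in range(2)) for i in range(2)]
    return (Tx[0] ** 2 + Tx[1] ** 2) ** 0.5

random.seed(2)
for _ in range(100):
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]
    assert abs(norm_T(x) - euclid_Tx(x)) < 1e-9   # the two forms agree
    assert norm_T(x) > 0                           # positive definiteness (sampled)
assert P[0][1] == P[1][0]                          # symmetry of P
print("generalized Euclidean norm checks pass")
```

The unit locus of such a norm is the rotated ellipse described in the text, since ||x||_T = 1 is exactly ||Tx||_e = 1.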

Now returning to the stability problem, we consider the unforced system

    x(t_{k+1}) = f(x(t_k))    (5.10)

where f(0) = 0. Setting the Lyapunov function equal to the norm of x,

    V(x(t_k)) = ||x(t_k)||    (5.11)

we satisfy all conditions of Theorem 1 except for condition (ii). In this case the difference is

    ΔV(x(t_k)) = ||x(t_{k+1})|| - ||x(t_k)||    (5.12)
               = ||f(x(t_k))|| - ||x(t_k)||

From (5.12) we see that condition (ii) is satisfied, and the equilibrium solution x_e = 0 of the system (5.10) is asymptotically stable in the large, if

    ||f(x)|| < ||x|| for all x ≠ 0

A function f which has this property for some norm is said to be a contraction. Unfortunately, considerable ingenuity is often required to find such a norm. The following examples show the type of results that may be obtained from the norms previously defined.

Example 5.3. For the linear, time-invariant, discrete-time, dynamic system

    x(t_{k+1}) = Φ x(t_k)    (5.13)

a convenient Lyapunov function is the square of the generalized Euclidean norm. Thus we set

    V(x) = x'Px = ||x||²_P    (5.14)

where, as previously noted, P is a symmetric, positive definite matrix. From (5.13) and (5.14) the difference is seen to be

    ΔV( x(t_k) ) = x(t_k)' Φ' P Φ x(t_k) - x(t_k)' P x(t_k) = - x(t_k)' Q x(t_k)   (5.15)

where Q = P - Φ' P Φ is a symmetric matrix. From (5.15) we see that ΔV < 0 for all x ≠ 0 if Q is positive definite. The positive definiteness of Q can be checked from Sylvester's inequalities, for it is necessary and sufficient that the n determinants

    D_k = det | q_11 . . . q_1k |
              |  .           .  |  > 0,   k = 1, 2, ..., n   (5.16)
              | q_k1 . . . q_kk |

all be positive in order that Q be positive definite. Since in the present example it is well known that the null solution is asymptotically stable in the large if, and only if, all roots of the characteristic equation

    det [ Φ - λI ] = 0   (5.17)

are less in magnitude than unity ( |λ_i| < 1, i = 1, 2, ..., n ), it is possible to obtain similar results which are both necessary and sufficient. Since this result is used so often it will be stated as Theorem 2.

Theorem 2. The discrete-time, unforced, linear, time-invariant dynamic system of (5.13) is asymptotically stable in the large if, and only if, given any symmetric, positive definite matrix Q there exists a symmetric, positive definite matrix P which is the unique solution of the linear equation

    -Q = Φ' P Φ - P   (5.18)

and

V(x) = x' P x is a Lyapunov function with ΔV(x) = - x' Q x.

The proof of Theorem 2 is analogous to the proof in the continuous-time case of reference [3]. Note carefully that the theorem does not say that if the system is stable, then given any symmetric, positive definite P the resulting Q is positive definite.

Example 5.4. Consider the nonlinear system

    x(t_{k+1}) = Φ( x(t_k) ) x(t_k)   (5.19)

where Φ( x(t_k) ) is an (n x n) matrix whose elements φ_ij are functions of x(t_k).

(a) If we let

    V(x) = ||x||_M = max_i { c_i |x_i| }   (5.20)

where c_1, c_2, ..., c_n are positive constants, then from the principle of a contraction the system (5.19) is asymptotically stable in the large when || Φ( x(t_k) ) x(t_k) || < || x(t_k) ||. For the norm chosen

    || Φ(x) x ||_M = max_i { c_i | Σ_{j=1}^n φ_ij(x) x_j | }

                   ≤ max_i { Σ_{j=1}^n (c_i/c_j) |φ_ij(x)| · c_j |x_j| }

                   ≤ max_i { Σ_{j=1}^n (c_i/c_j) |φ_ij(x)| } · max_j { c_j |x_j| }

Consequently, if

    max_i { Σ_{j=1}^n (c_i/c_j) |φ_ij(x)| } < 1   for all x   (5.21)

the system of (5.19) is asymptotically stable in the large.

(b) If we let the norm be V(x) = ||x||_S, then

    || Φ(x) x ||_S = Σ_{i=1}^n c_i | Σ_{j=1}^n φ_ij(x) x_j |   (5.22)

                   ≤ Σ_{j=1}^n [ Σ_{i=1}^n (c_i/c_j) |φ_ij(x)| ] c_j |x_j|

                   ≤ max_j { Σ_{i=1}^n (c_i/c_j) |φ_ij(x)| } · Σ_{j=1}^n c_j |x_j|

Consequently, for this norm, if

    max_j { Σ_{i=1}^n (c_i/c_j) |φ_ij(x)| } < 1   for all x   (5.23)

the system of (5.19) is asymptotically stable in the large. Therefore we conclude that the system of (5.19) is asymptotically stable in the large if it is possible to find n positive constants c_1, c_2, ..., c_n such that the maximum absolute sum of either the rows or the columns of the weighted matrix

    | |φ_11(x)|            (c_1/c_2)|φ_12(x)|   . . .   (c_1/c_n)|φ_1n(x)| |
    | (c_2/c_1)|φ_21(x)|   |φ_22(x)|            . . .                      |   (5.24)
    |    .                                                         .       |
    | (c_n/c_1)|φ_n1(x)|       . . .                    |φ_nn(x)|          |

-95 is less than unity. Example 5.4. b. To illustrate the application of this result, consider the sampled control system of Figure 4. In the sampled-data controller the storage element denoted by z-1 saturates as would be expected in any physical system. The saturating storage is represented by the function of Figure 5. From this figure it is seen that the instantaneous gain f(x) / x is such that 0 < f (x)< I x I < 1 (5.25) for 0 < x < oo. The problem is to determine if the system is asymptotically stable in the large with this saturating element. Since our methods only give sufficient conditions a negative result would be inconclusive. From an inspection of Figure 4 the state equation of the unforced system can be written as xl(tk+l) = l(tk) -. 2 x2(tk) + 0. 6F (-Xl(tk) + 0.3 2(tk) ) (5.26) x2(tk+l) = Fs (-Xl(tk) + 0. 3 x2(tk) ) It is desirable to make a transformation of variables in this case so that the argument of the function F is a single variable. One such transformation is to let Y(tk) = X(tk) (5.27) Y2(tk) = -x1(tk) + 0.3 x2(tk) With this transformation the system equations are easily put in the form of (5.13) Yl(tk+l Y2(tk+l) _ _ 1/3 (-2/3 + 3/5 -1/3 ( 2/3 - 3/10 F s(Y2(tk) ) Y2(tk) Fs(Y2(tk) ) Y2(tk) Yl(tk) Y2(tk) (5.28)

Figure 4. Sampled system with saturating storage

Figure 5. Saturation function F_s(y)

Figure 6. Input space, Example 5.10

Considering just the columns, we conclude that the system is asymptotically stable in the large if positive constants c_1 and c_2 can be found such that

    |1/3| + (c_2/c_1) |-1/3| < 1
                                                                                  (5.29)
    (c_1/c_2) | -2/3 + (3/5) F_s(y_2)/y_2 | + | 2/3 - (3/10) F_s(y_2)/y_2 | < 1

A study of these inequalities shows that for y_2/F_s(y_2) restricted to the range 0 < y_2/F_s(y_2) < 3.34, the inequalities are satisfied for c_1 = (2/3) c_2, where c_2 > 0. Thus the system is asymptotically stable in the large.

(c) Estimation of Transient Behavior. If the asymptotic stability in the large of a discrete-time dynamic system has been established by means of a Lyapunov function, it is possible to estimate the transient behavior. This depends upon regarding the value of the Lyapunov function for every point x as a measure of the distance from the equilibrium state x = 0. Consider the obvious equality for x ≠ 0

    ΔV( x(t_k) ) = V( x(t_{k+1}) ) - V( x(t_k) ) = [ ΔV( x(t_k) ) / V( x(t_k) ) ] V( x(t_k) )   (5.30)

from which we obtain the difference equation for V

    V( x(t_{k+1}) ) = [ 1 + ΔV( x(t_k) ) / V( x(t_k) ) ] V( x(t_k) )   (5.31)

Since the system is asymptotically stable in the large, the ratio satisfies -1 < ΔV( x(t_k) ) / V( x(t_k) ); and if condition (ii) of Theorem 1, rather than the alternate conditions (ii_1) and (ii_2), is satisfied, then -1 < ΔV( x(t_k) ) / V( x(t_k) ) < 0. Letting

    η_1 = max_x { -ΔV(x)/V(x) }
                                     (5.32)
    η_2 = min_x { -ΔV(x)/V(x) }

we see that

    (1 - η_1) V( x(t_k) ) ≤ V( x(t_{k+1}) ) ≤ (1 - η_2) V( x(t_k) )   (5.33)

Thus if the system starts in the initial state x(t_0), the "distance" from the origin, as measured by the Lyapunov function, decreases in the following way:

    (1 - η_1)^k V( x(t_0) ) ≤ V( x(t_k) ) ≤ (1 - η_2)^k V( x(t_0) )   (5.34)

so that (1 - η_2) gives the largest decay ratio and (1 - η_1) the smallest. Note however that η_1 and η_2 depend on the Lyapunov function chosen. In fact, if the function were chosen so as to satisfy only conditions (ii_1) and (ii_2) of Theorem 1, then η_2 = 0. In this case the transient behavior would have to be studied in regions of the state space and not the entire space.

Example 5.5. From Theorem 2 we know that if the linear system

    x(t_{k+1}) = Φ x(t_k)   (5.35)

is asymptotically stable in the large, then by choosing any symmetric, positive definite Q and solving the linear equation

    -Q = Φ' P Φ - P   (5.36)

for P, we obtain a satisfactory Lyapunov function

    V(x) = x' P x   (5.37)
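The construction in Example 5.5 can be checked numerically. The sketch below uses an illustrative stable matrix Φ (my choice, not from the text), solves (5.36) by the convergent series P = Σ_k (Φ')^k Q Φ^k, and verifies the one-step bound that follows from the characteristic-root expressions for η_1 and η_2 given in (5.38).

```python
import numpy as np

# A minimal numerical check of (5.36)-(5.38); the stable matrix Phi is
# chosen here for illustration only.
Phi = np.array([[0.5, 0.2],
                [0.0, 0.4]])
Q = np.eye(2)

# Solve -Q = Phi' P Phi - P via the convergent series P = sum_k (Phi')^k Q Phi^k.
P = np.zeros_like(Q)
term = Q.copy()
for _ in range(500):
    P += term
    term = Phi.T @ term @ Phi

assert np.allclose(Phi.T @ P @ Phi - P, -Q)   # equation (5.36) holds
assert np.all(np.linalg.eigvalsh(P) > 0)      # P is positive definite

# eta_1 and eta_2 from the characteristic roots of Q P^{-1}
roots = np.linalg.eigvals(Q @ np.linalg.inv(P))
eta1, eta2 = roots.real.max(), roots.real.min()

# One-step check of the bound (5.33) on a sample state
x = np.array([1.0, -2.0])
V0, V1 = x @ P @ x, (Phi @ x) @ P @ (Phi @ x)
assert (1 - eta1) * V0 - 1e-9 <= V1 <= (1 - eta2) * V0 + 1e-9
```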

The determination of η_1 and η_2 is well known [3] in this case:

    η_1 = max_x { x'Qx / x'Px } = λ_max( Q P^{-1} )
                                                      (5.38)
    η_2 = min_x { x'Qx / x'Px } = λ_min( Q P^{-1} )

where λ_max( Q P^{-1} ) and λ_min( Q P^{-1} ) denote the largest and smallest characteristic roots of the equation det [ Q P^{-1} - λI ] = 0.

Example 5.6. In Example 5.4.b, if the saturation effect were not present ( F_s(y_2)/y_2 = 1 ), the conditions for asymptotic stability in the large could have been achieved with c_1 = 2 and c_2 = 1. With these values we see that the Lyapunov function, in this case the norm of x, as a function of time is bounded by

    || x(t_k) || ≤ (1/2)^k || x(t_0) ||   (5.39)

This means that in each sampling period the norm is at least halved. Now let us consider the effect of the nonlinear saturating term. Restricting |y_2| ≤ 10 confines 0.1 ≤ | F_s(y_2)/y_2 | ≤ 1. In this case the least upper bound on the transient behavior is obtained with c_1 = 1 and c_2 = 1.86, which insures asymptotic stability in the large. The bound on the transient behavior is

    || x(t_k) || ≤ (29/30)^k || x(t_0) ||   (5.40)

Thus we see that the effect of the saturation is to slow down the transient behavior. Further, the slowing-down action increases with increased |y_2|. The same conclusion is easily obtained from physical reasoning in this example.

(d) Effect of Inputs. When the system stability has been determined by means of a Lyapunov function, it is also possible to estimate the effect of bounded input disturbances. The following example illustrates the method.

Example 5.7. Consider the nonlinear discrete-time regulator system

    x(t_{k+1}) = f( x(t_k) ) + v(t_k)   (5.41)

where f(0) = 0. The function f is a contraction with respect to a certain norm, so that

    || f(x) || ≤ c_1 || x ||,   0 < c_1 < 1   (5.42)

Therefore the system is asymptotically stable in the large. The disturbance v is bounded so that || v || ≤ c_0. The effect of this disturbance is to perturb the system away from the equilibrium point x = 0. We are interested in determining the extent of these perturbations. To do this, note that the difference ΔV is

    ΔV( x(t_k) ) = || x(t_{k+1}) || - || x(t_k) || = || f( x(t_k) ) + v(t_k) || - || x(t_k) ||   (5.43)
                 ≤ c_1 || x(t_k) || + c_0 - || x(t_k) ||

The difference ΔV < 0 as long as

    || x || > c_0 / (1 - c_1)   (5.44)

This implies that for any x such that || x || > c_0 / (1 - c_1) the effect is to force x into the region bounded by || x || = c_0 / (1 - c_1); but once x is in this region there is no restoring effect. Thus the perturbations resulting from the disturbance input are confined to this region. Note that this region becomes smaller as c_1 → 0 (as the transient behavior becomes faster).

Example 5.8. As a second and more complex example, consider the system

    x(t_{k+1}) = Φ x(t_k) + f( x(t_k) ) + v(t_k)   (5.45)

where the nonlinear term f represents parameter variations ( f(0) = 0 ). With f and v equal to zero the system is asymptotically stable in the large. It is assumed that the parameter variations and the disturbances are such that

    || f(x) || ≤ c_1 || x ||,   || v || ≤ c_0

As a Lyapunov function in this example we choose the form V = x' P x, where P is the solution of the linear matrix equation -Q = Φ' P Φ - P. Since the linear part of (5.45) is stable, for any positive definite Q the solution P is positive definite. The difference ΔV is

    ΔV(x) = - x' Q x + 2 f' P Φ x + 2 v' P Φ x + 2 f' P v + f' P f + v' P v   (5.46)

Obviously

    ΔV(x) ≤ - min { x' Q x } + 2 | f' P Φ x | + 2 | v' P Φ x | + 2 | f' P v | + | f' P f | + | v' P v |   (5.47)

Furthermore, from the Schwarz inequality and relation (5.38),

    | f' P Φ x | ≤ ( f' f )^{1/2} ( x' Φ' P² Φ x )^{1/2} ≤ c_1 λ_max^{1/2}( Φ' P² Φ ) ||x||²

    | v' P Φ x | ≤ ( v' v )^{1/2} ( x' Φ' P² Φ x )^{1/2} ≤ c_0 λ_max^{1/2}( Φ' P² Φ ) ||x||

    | f' P v | ≤ ( f' f )^{1/2} ( v' P² v )^{1/2} ≤ c_0 c_1 λ_max( P ) ||x||

    | f' P f | ≤ ( f' f ) λ_max( P ) ≤ c_1² λ_max( P ) ||x||²

    | v' P v | ≤ ( v' v ) λ_max( P ) ≤ c_0² λ_max( P )

while

    min { x' Q x } = λ_min( Q ) ||x||²

Consequently ΔV < 0 if

    -a ||x||² + b ||x|| + c < 0   (5.48)

where

    a = λ_min( Q ) - 2 c_1 λ_max^{1/2}( Φ' P² Φ ) - c_1² λ_max( P )

    b = 2 [ c_0 λ_max^{1/2}( Φ' P² Φ ) + c_0 c_1 λ_max( P ) ]

    c = c_0² λ_max( P )

If the system is to be stable, then a must be a positive number. Note that this condition is satisfied if the nonlinear effect is sufficiently small (i.e., c_1 sufficiently small). If this condition is satisfied, since b and c are always positive, it follows that ΔV is negative for

    ||x|| > [ b + ( b² + 4ac )^{1/2} ] / 2a   (5.49)

Thus in this rather complicated example the effect of the disturbance is confined to a sphere of radius [ b + ( b² + 4ac )^{1/2} ] / 2a.

(e) Design and Optimization. In complex systems it is almost a necessity that the design procedure be so devised that it ensures system stability. Procedures based on the direct method have this property. First consider the method of the next example.

Example 5.9. If a system is asymptotically stable in the large, then the sum

    Σ_{k=0}^∞ [ -ΔV( x(t_k) ) ] = V( x(t_0) )   (5.50)

Thus if -ΔV(x) is considered as the penalty for system deviation, and the sum of such penalties along a solution sequence as the performance index, then (5.50) gives a simple method for evaluation of this index. In the linear case x(t_{k+1}) = Φ x(t_k), if the penalty function is -ΔV( x(t_k) ) = x(t_k)' Q x(t_k), then

    I( x(t_0) ) = Σ_{k=0}^∞ x(t_k)' Q x(t_k) = x(t_0)' P x(t_0)   (5.51)

where P is the solution of -Q = Φ' P Φ - P. Thus if some parameters of Φ, denoted by a, are adjustable, we can select them so as to minimize the performance index I( x(t_0) ):

    min_a { I( x(t_0) ) } = min_a { x(t_0)' P x(t_0) }   (5.52)

Note that this minimum is a function of the initial state x(t_0), so that the minimizing parameters a are different for each initial state. The design objective might be to select the parameters a so as to minimize the maximum performance index for initial states x(t_0) in a certain region X of the state space:

    min_a { max_{x(t_0) ∈ X} I( x(t_0) ) } = min_a { max_{x(t_0) ∈ X} x(t_0)' P x(t_0) }   (5.53)

In this way a is made independent of the initial state x(t_0).

Example 5.10. As a final design example, consider the problem of designing a regulating input for the plant described by

    x(t_{k+1}) = Φ x(t_k) + A u(t_k)   (5.54)

where the control inputs are constrained such that

    | u_i(t_k) | ≤ c_i,   c_i > 0,   ( i = 1, 2, ..., m )   (5.55)

Assume that the unforced system is asymptotically stable in the large, so that given any symmetric, positive definite Q the linear equation -Q = Φ' P Φ - P has a unique solution, the positive definite matrix P. Then V(x) = x' P x is clearly a Lyapunov function for the unforced part of the system. For the forced system the difference is

    ΔV = - x' Q x + 2 u' A' P Φ x + u' A' P A u   (5.56)

The transient behavior relative to the Lyapunov function (norm) chosen is made as fast as possible by choosing the control input u so that ΔV is as negative as possible. If there are no constraints on the input magnitude, by ordinary calculus we find that the optimum u in this sense is

    u*(t_k) = - ( A' P A )^{-1} A' P Φ x(t_k)   (5.57)

This solution exists if A' P A is nonsingular. Actually A' P A is positive definite (and thus nonsingular) provided that the columns of A are linearly independent. Physically this condition requires that the effects of the control inputs be linearly independent. If this is not so, we can always consider a subset of the inputs which are independent. Therefore in what follows A' P A is assumed to be positive definite (nonsingular). When the input magnitudes are constrained, it is convenient to consider the space of the inputs. For example, with two inputs (m = 2) the constraints require that a permissible input lie in the shaded rectangle of Figure 6. If the optimum control input u*, calculated by (5.57), lies inside this rectangle, then it is the solution. If the optimum control input u* lies outside the rectangle, then the constrained solution lies somewhere on the boundary of the rectangle. We have previously noted that ΔV assumes its largest negative value when u = u*. The first term in ΔV is independent of the choice of u. The last two terms of ΔV become more positive as u moves away from u*. To study this effect let u = u* + δ.
Then

    ΔV = - x' Q x - u*' A' P A u* + δ' A' P A δ   (5.58)

where δ' A' P A δ > 0 for all δ ≠ 0 and - u*' A' P A u* < 0 for u* ≠ 0, since A' P A is positive definite. The locus of δ' A' P A δ = c (a constant) is a quadratic surface (in two dimensions an ellipse). The optimum control input

in the sense of making ΔV as negative as possible, and with constraints on the input magnitudes, is to set

    u = u* + δ*   (5.59)

where δ* is that value of δ which produces the smallest term δ' A' P A δ and is just tangent to the rectangle containing permissible inputs. This optimum solution requires extensive computation for each x(t_0). Note that it is always possible to insure stable operation with the constrained inputs (saturation) by setting

    u(t_k) = α u*(t_k)   (5.60)

where α is chosen such that

    α | u_i*(t_k) | ≤ c_i,   ( i = 1, 2, ..., m )   (5.61)

This solution is not optimal, but it has the advantage of being easy to calculate and insures stable operation. These two examples illustrate only a few of the ways in which the direct method may be used in the design of discrete-time systems. As more workers in the control field become familiar with the techniques, I expect that such design methods will become very numerous.

6. Acknowledgement. This lecture very closely follows reference [4] by Dr. R. E. Kalman and the author. Many of the methods were first suggested by Dr. Kalman.

7. Reference List.

1. W. Hahn, "On the Application of the Method of Lyapunov to Difference Equations", (German), Mathematische Annalen, Vol. 136, 1958, pp. 402-441.

2. W. Hahn, "Theory and Application of the Direct Method of Lyapunov", (German), Erg. der Math., Vol. 22, Springer, 1959.

3. R. E. Kalman and J. E. Bertram, "Control System Analysis and Design via the 'Second Method' of Lyapunov, I: Continuous-Time Systems", Journal of Basic Engineering, Vol. 82, Series D, No. 2, 1960, pp. 371-393.

4. R. E. Kalman and J. E. Bertram, "Control System Analysis and Design via the 'Second Method' of Lyapunov, II: Discrete-Time Systems", Journal of Basic Engineering, Vol. 82, Series D, No. 2, 1960, pp. 394-400.

5. M. M. Day, Normed Linear Spaces, Springer-Verlag, Berlin, 1958.

STABILITY ANALYSIS OF NUCLEAR REACTORS BY LIAPOUNOFF'S SECOND METHOD Thomas J. Higgins University of Wisconsin

I. INTRODUCTION

In the first several of a series of related interesting papers,( the nature of the inherent stability, and the power and temperature time-responses to a step-function input of excess reactivity, of several types of nuclear chain reactors (initially in an equilibrium state and operating under certain prescribed conditions as noted below), characterized by nonlinear differential equations of performance, are determined by classical procedures: primarily as based on use of the Hamiltonian function, on use of Green's function, etc. In a later paper(9) the natures of the inherent stability of the reactors so constructed and operated that the Hamiltonian analysis is applicable are redetermined somewhat more elegantly and concisely by use of Liapounoff's Direct Method (hereafter designated, for conciseness of expression, by LDM). And in most recently published papers,(10-17) consideration has been extended to obtain knowledge not merely of ordinary local stability (i.e., whether the equilibrium state is asymptotically or locally stable under disturbances resulting from the introduction of an excess reactivity of "sufficiently small" magnitude), but also of bounds on the range of reactivity under which the equilibrium point is yet stable.

A rather considerable interest and scope of application attends such use of LDM: both to the general student, teacher or worker in control systems engineering and to the specialist in nuclear reactor control systems theory and design. To the latter it is of obvious value to know the theory and use of this powerful phase of analysis as it applies in his particular field of work. To the former it provides an example of the use

of LDM whereof the pertinent physical phenomena require analysis, perspective and interpretation rather unlike those of the (electrical, mechanical, hydraulic, etc.) systems more familiar to him. It is in such thought that the present paper, exhibiting the nature and use of LDM in the kinetic analysis of nuclear reactors, is advanced.

As the paper is tutorial in purpose, it comprises in the large a connected exposition and interweaving of analysis and results to be found disseminated through the cited papers: thus, it is instructive, rather than original, in content. And in that prime interest re instruction centers on the application of LDM, rather than on an account of reactor kinetics, the latter is enfolded only as needed to make understandable the nature of the problem and the procedure attending investigation by LDM. For detail of reactor kinetics and of the investigation of more complex reactors than considered in this paper, the reader so particularly interested will have little difficulty reading the above-cited references after assimilating the content of the present paper (in a systematic reading of the cited references, it is advantageous to read them in the order published, because of the interlinking of their contexts).

II. THE GENERAL PROBLEM

If -- as in early prototypes -- a reactor is large in structure, low in power, and operates at equilibrium at a temperature well below that at which appreciable temperature-effect damage would result, the rate of temperature rise resulting from suddenly introduced excess reactivity (say by a step-function motion of control rods) is -- usually -- rather slow, a considerable safety margin of temperature exists, and manual or simple

automatic control may prove satisfactory. But as, with a view to increasing rating and efficiency, demands for minimizing size and for operating at temperatures close to the permissible maximum are conjoined with increase of power rating -- as is sought in modern power reactor design -- the associated control system becomes correspondingly more complex in construction and operation: to the end that knowledge of the inherent stability and of the character of the responses to suddenly-introduced changes in reactivity becomes increasingly important relative to effecting optimal design of the control system.(18-20)

Such knowledge is gained by solution of the nonlinear differential equations of performance characterizing reactor performance and the associated initial and boundary conditions.(1-30) In general, the equations of simple reactors are not formidable. Thus, a well-investigated class of homogeneous reactors, whereof in the first consideration delayed neutron effects are neglected (but can be enfolded by an elaboration of analysis), is characterized by the pair of nonlinear differential equations

    d(log P)/dt = -(α/τ) T   (1)

    Θ dT/dt = P - P_e   (2)

where

    P is the total power generated in the reactor (necessarily positive)
    P_e is the total power extracted from the reactor
    T is the reactor temperature, on a scale whereat T = 0 at equilibrium

    -α is the temperature coefficient of reactivity
    τ is the mean lifetime of the neutrons
    Θ is the thermal capacity

and the notation and terminology are as used in most of the cited references. Specified physical conditions of operation correspond to certain functional expressions for P_e. Thus, to consider two of interest in this paper:

    A. Constant power extraction, P_e = P_0

    B. Newton's law of cooling, P_e = λ(T - T_0)

wherein P_0 is necessarily positive, λ is a positive constant, and T_0 is the ambient temperature of the surrounding medium, necessarily negative since T = 0 is the reactor temperature at equilibrium, and this temperature must be greater than that of the ambient medium if an outflow of heat is to result.

With the reactor initially in a state of equilibrium, let a positive excess reactivity E be introduced suddenly at t = t_0, say by a step-function withdrawal of control rods, as characterized in strength by E = -αT at t = t_0. Hence,

    T = -E/α   at t = t_0   (3)

Then it is desired to know whether or not the resulting kinetic action of the reactor is "stable"; and, although this is not the major interest in this paper, the time responses of the temperature and power may be desired. An alternative statement of the problem, both somewhat more familiar in form to those not experienced in nuclear reactor control theory

and more directly linked to stability analysis by LDM as this is set out in textbooks, is to be gained as follows. Introducing new variables, defined by y = T and x = log P, whence P = exp x and P_e = exp x_e, in (1) and (2) yields

    dx/dt = -(α/τ) y   (4)

    dy/dt = (exp x - exp x_e)/Θ   (5)

For the stated conditions of operation, x_e is given respectively by exp x_e = P_0 and exp x_e = λ(y - y_0). Then (4) and (5) comprise a pair of first-order nonlinear differential equations, with the corresponding equations of first approximation

    dx/dt = -(α/τ) y   (6)

    dy/dt = (x - x_e)/Θ   (7)

The associated singular point in the finite plane, [y = 0, x = x_e for y = 0], and thus at [T = 0, P = P_e for T = 0] in the (P,T)-plane, corresponds to reactor equilibrium. Then it is desired to know the nature of the singular point, and thus the nature of the stability of the reactor relative to the stated equilibrium conditions enfolded in P_e. Such can be ascertained in Liapounoff's local sense of stability: i.e., examination of the nature of the response following an initial "sufficiently small" displacement at t = t_0 from the singular point (x_s, y_s) to (x = x_0, y = y_0), corresponding to, say, (y_0 = -E/α, x = x_e0 for y_0), resulting from introduction of the excess reactivity E into the reactor. The pertinent criterion relative to LDM is stated and its use is illustrated on, say, pages 491-497 of the text(31) by Gille, Pelegrin and Decaulne, to cite an easily-obtained control engineering text.
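The character of the singular point can be read off from the eigenvalues of the Jacobian of (4)-(5) at equilibrium. The sketch below does this for both cooling laws; all parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Linearization of (4)-(5) about the equilibrium, with illustrative
# parameter values (alpha, tau, theta, P0, lam are assumptions).
alpha, tau, theta = 0.01, 1e-3, 5.0
P0, lam = 10.0, 2.0

# Case A (constant extraction): exp(x) - P0 is linearized as P0*(x - x0),
# giving the Jacobian below.
J_A = np.array([[0.0, -alpha / tau],
                [P0 / theta, 0.0]])

# Case B (Newton cooling): the -lam*y term in the extracted power adds
# damping -lam/theta to the dy/dt row; Pe0 is the equilibrium power.
Pe0 = P0  # assumed equal to the case A value for comparison
J_B = np.array([[0.0, -alpha / tau],
                [Pe0 / theta, -lam / theta]])

eA = np.linalg.eigvals(J_A)
eB = np.linalg.eigvals(J_B)
# Case A: purely imaginary pair (a center: stable, not asymptotically so)
# Case B: eigenvalues with negative real parts (asymptotically stable)
```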

General investigation of stability in the large requires a more comprehensive exposition of LDM than is cited at present in control engineering texts (at least in other than the Russian language), which point is discussed in greater detail in Section IV.

III. SOLUTION BY LDM

In general, as emphasized on page 496 of Reference 31, no specific procedure exists for finding the required Liapounoff function for an arbitrarily-specified system. For a broad class of problems the Lurje transform and its generalization by Letov, as discussed in the recently-published text(32) by the above-named authors, can be used to obtain the desired function. Leaving aside these more sophisticated and as yet relatively unfamiliar (in the U.S.) techniques, the commonly-stated textbook approach is construction, where possible, of an appropriate quadratic form V(x,y), as exhibited on pages 493-497 of Reference 31. Now in two variables such a form defines a conic, through V(x,y) = const; and a necessary condition in the use of Liapounoff's criterion relative to the above-stated problem is that the Liapounoff function V(x,y) must be positive for all values of x and y except that it may be zero at x = 0, y = 0. This conjunction suggests that quadratic forms defining ellipses, through V(x,y) = const, may prove suitable choices -- which is the approach used on pages 495-496 of Reference 31. Now ellipses are closed curves in the (x,y)-plane; which suggests, in turn, that yet other closed curves may provide Liapounoff functions similarly; and again in turn, this suggests that the easily-written Hamiltonian function (expressing the sum of the kinetic and potential energies to within an arbitrary constant, say) pertinent to a particular problem may provide

the desired function; and that the corresponding function in terms of T and P provides a desired function for the nuclear reactor stability analysis; and such results for the two cases A and B cited in Section II.

The desired Hamiltonian is easiest constructed by noting that (1) and (2) can be interpreted as characterizing the motion of a sphere on a surface, whereof -log P is the horizontal coordinate, (α/τ)T the velocity, (α/τ)(dT/dt) the acceleration, τΘ/α the mass, and (P - P_e) the "general" forcing function. In turn, the "general" potential energy is, to within an arbitrary constant,

    ∫_{log P_e0}^{log P} (P - P_e) d(log P)

and the kinetic energy is (αΘ/2τ)T². Such interpretation provides for obtaining the equations of the trajectories in the (T,P)-phase plane in the usual fashion; i.e.,

    ∫_{log P_e0}^{log P} (P - P_e) d(log P) + (αΘ/2τ)T² = const., say C_0   (8)

whereof C_0 is determined by the initial conditions. For case A, P_e = P_0; and substituting in (8), noting that d(log P) = (1/P)dP, and integrating yields

    [ P - P_0 - P_0 log(P/P_0) + (αΘ/2τ)T² ] = C_0   (9)

Now (8) can also be obtained by reversing the members of (2), dividing by Θ, multiplying the corresponding members of the resulting equation and of (1), shifting all terms in the resulting equality to the left-hand side, multiplying through by dt, and integrating each side of the resulting expression. Accordingly, (9), as a particular case of (8), is pertinent to the system characterized by (1) and (2) from either of the two approaches yielding (8).
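That (9) is indeed constant along motions of (1)-(2) for case A can be checked by direct numerical integration; the parameter values below are assumptions for illustration.

```python
import numpy as np

# Check that (9) is conserved along solutions of (1)-(2), case A
# (P_e = P0). Parameter values are illustrative only.
alpha, tau, theta, P0 = 0.01, 1e-3, 5.0, 10.0

def deriv(state):
    P, T = state
    return np.array([-(alpha / tau) * T * P,   # dP/dt from d(log P)/dt = -(alpha/tau) T
                     (P - P0) / theta])        # dT/dt from (2)

def H(state):
    P, T = state
    # left-hand member of (9)
    return P - P0 - P0 * np.log(P / P0) + (alpha * theta / (2 * tau)) * T**2

state = np.array([P0, -0.5])   # disturbed initial temperature T = -E/alpha
H0, dt = H(state), 5e-4
for _ in range(4000):           # classical 4th-order Runge-Kutta steps
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    state += (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
# H(state) remains essentially equal to H0: the trajectories are closed curves
```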

Now (9) is positive definite for P > 0 and all T; and

    dH/dt = dP/dt - (P_0/P) dP/dt + (αΘ/τ) T dT/dt   (10)

which, on substituting for dP/dt and dT/dt from (1) and (2), equals zero. Hence the reactor is stable relative to the point of equilibrium (T = 0, P = P_0); but it is not asymptotically stable (in the local sense), since such requires that dH/dt be always negative for all P and T.

For case B, Newton's law of cooling, for which P_e = λ(T - T_0) and T_0 < 0, substituting in (8), using -λT d(log P) = -λT (d log P/dt) dt = -λT [-(α/τ)T] dt = (λα/τ)T² dt, and integrating gives

    P + λT_0 + λT_0 log( P/(-λT_0) ) + (αΘ/2τ)T² = -(λα/τ) ∫_{t=0}^t T² dt + C_0   (11)

Now the left-hand member of (11) is positive definite for P > 0 and all T; and, characterizing it by H(t),

    dH/dt = -(λα/τ) T²   (12)

which is always negative.* Hence the reactor is asymptotically stable relative to the point of equilibrium (T = 0, P = -λT_0).

Corroboratively, for case A it may be noted that (9) is the Hamiltonian of a conservative system; the phase-plane plots are closed curves in the (T,P)-plane; the value of C_0 is determined by the initial conditions; if these are, as remarked above, furnished by P = P_0, T = -E/α at t = 0, then the constant C_0 = (Θ/2ατ)E²; the maximum swings in T, corresponding to dT/dP = 0, occur at (P = P_0, T = ±E/α); the maximum swings in P, corresponding to dP/dT = 0, occur at [T = 0, P = P_1, P_2, the two roots of (9) with T = 0]; the point (T = 0, P = P_0) is the limit point of the closed curves as E approaches zero, and thus the singularity is a center; and, as is well known, this type of singularity is stable, but

* Provided T ≠ 0. Special investigation for points on the line T = 0 is easily effected.

not asymptotically so, in Liapounoff's sense (the interested reader will find corresponding phase-plane plots and time-responses of P and T, for specified numerical values, in References 1 and 2; it would seem that the temperature used in these plots, but not in the equations, is the temperature increase, thus T - E/α).

Again, for case B, it may be noted that the left-hand members of (9) and (11) are of the same form, -λT_0 in (11) playing the role of P_0 in (9), both being positive. But the additional integral term in the right-hand member of (11), equivalent to λ ∫_{log P_e0}^{log P} T d(log P), is interpretable as energy loss due to viscous friction (frictional force proportional to velocity). Hence the system is damped; and as t approaches infinity, the disturbed system (T = -E/α at t = 0) returns to equilibrium at (T = 0, P = -λT_0), the coordinates of the singular point: which in this case is a node or focus, hence asymptotically stable, depending on the assigned numerical values of the parameters of the reactor. (The corresponding plots are given in Reference 33.)

In similar fashion, the interested reader can determine the stability of reactors with more complex programs of operation and/or complex structures by LDM, and compare these with results obtained thusly or otherwise in the literature: say, for adiabatic operation(2) P_0 = 0, which involves the interesting concept of a singularity occurring at (x =) infinity; a heterogeneous reactor,(5,6) with two media, wherefore the equilibrium for P_e may be stable or unstable depending on the structure (but proves stable for, say, uranium and heavy water as in the ZOE reactor at Chatillon); the generalization(9) of the latter to the case of two, three and n media with heat generated in each medium; and other special cases.(12-16)
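The damped return to equilibrium under Newton's law of cooling can likewise be observed numerically; again the parameter values are illustrative assumptions.

```python
import numpy as np

# Case B (Newton's law of cooling): the disturbed reactor should return
# to the equilibrium (T = 0, P = -lam*T0). Parameter values are assumed.
alpha, tau, theta = 0.01, 1e-3, 5.0
lam, T0 = 2.0, -5.0          # ambient temperature T0 < 0
P_eq = -lam * T0             # equilibrium power

def deriv(state):
    P, T = state
    Pe = lam * (T - T0)      # extracted power, Newton's law of cooling
    return np.array([-(alpha / tau) * T * P,   # from (1)
                     (P - Pe) / theta])        # from (2)

state = np.array([P_eq, -0.5])   # step of excess reactivity: T = -E/alpha
dt = 5e-4
for _ in range(60000):            # classical 4th-order Runge-Kutta steps
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    state += (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
# state approaches (P_eq, 0): the singular point is asymptotically stable
```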

IV. OTHER STABILITY ASPECTS

The analysis outlined in Section III is suitable, in general, for investigation of ordinary "local" stability: i.e., as determined if the initial departures of P and T from equilibrium are "sufficiently small". Often, however, knowledge is desired of the extent (at least in some measure) of a domain about the equilibrium point in which the initial conditions may be arbitrarily established and the system yet be stable. This problem is examined re use of LDM in recent reports by LaSalle(4) and by Smets,(16,17) the latter being especially concerned with nuclear reactor stability investigation. His analysis is illustrated by consideration of several of the above-remarked types of reactors, of a boiling water reactor, and of xenon build-up. Stability in the large (i.e., for any initial location) is considered and is of especial interest. Also termed global stability, the problem has been investigated for a particular set of differential equations in a recent paper(13) in a very abstract manner, and application to nuclear reactors then advanced. Again, analysis of a continuous medium reactor, in which parameters are functions of a space variable, and thus the temperature is also, would not seem easily (if at all) analyzable by LDM. For such reactors an approach along lines used early by Welton(21) has been used with success.(11,12) This approach can also be applied to reactors treatable by LDM, hence comprises a somewhat more general mode of stability analysis. This method, involving Green's function analysis, has been used to investigate the stability of circulating-fuel reactors.(2-4)

In conclusion, attention may be directed to a Liapounoff function somewhat different from those used in earlier-cited references, which

affords somewhat more general results, as suggested by Popov.(10) A discussion is given by Smets(16) in a report which has somewhat restricted circulation. No doubt this discussion, and a very thorough and well-integrated account of the use of LDM (and of other approaches) for the stability analysis of nuclear reactors, will be found in his forthcoming monograph(17) thereon, to which the attention of the particularly interested reader is directed; as also to the exhaustive bibliographies on the kinetics and control of nuclear reactors in the large(35) (from which the references of this paper have been drawn) and on the literature of Liapounoff's theory in particular and of nonlinear system theory in general.(35-37)

REFERENCES

1. Weinberg, A. M. and Ergen, W. K. Some Aspects of Non-Linear Reactor Kinetics. Proceedings, Kjeller Conference on Heavy Water Reactors, JGNER, 1953.
2. Ergen, W. K. and Weinberg, A. M. "Some Aspects of Non-Linear Reactor Dynamics." Physica, 20, (1954) 413-26.
3. Brownell, F. H. and Ergen, W. K. "A Theorem on Rearrangements and Its Application to Certain Delay Differential Equations." Journal of Rational Mechanics and Analysis, 3, (1954) 565-79.
4. Ergen, W. K. "Kinetics of the Circulating-Fuel Nuclear Reactor." Journal of Applied Physics, 25, (1954) 702-11.
5. Lipkin, H. J. and Thieberger, R. Stability Conditions in the Non-Linear Dynamics of Heterogeneous Reactors. Proceedings, International Conference on the Peaceful Uses of Atomic Energy, 5, (1955) 364-66.
6. Lipkin, H. J. "A Study of the Non-Linear Kinetics of the Chatillon Reactor." Nuclear Energy, 1, (1955) 205-13.
7. Nohel, J. A. Stability of Solutions of the Reactor Equation. CP-54-9-248, September 1954, unpublished.
8. Nohel, J. A. and Ergen, W. K. Stability of Solution of a Nonlinear Functional Equation in Reactor Dynamics. Oak Ridge National Laboratory, Oak Ridge, Tenn., O.R.N.L. Memo, 1956.
9. Ergen, W. K., Lipkin, H. J., and Nohel, J. A. "Applications of Liapounov's Second Method in Reactor Dynamics." Journal of Mathematics and Physics, 36, (1957) 36-48.
10. Popov, V. M. Notes on the Inherent Stability of Nuclear Reactors. Proceedings, Second International Conference on Peaceful Uses of Atomic Energy, 11, (1958) 245-50.
11. Price, R. M., Jr. Numerical Solution of the Equations for a Continuous Medium Reactor. M.S. Thesis, Georgia Institute of Technology, Atlanta, Ga., 1958. Pertinent to Reference 12.
12. Ergen, W. K. and Nohel, J. A. "Stability of a Continuous-Medium Reactor." Journal of Nuclear Energy: Part A, Reactor Science, 10, (1959) 14-18.
13. Levin, J. J. and Nohel, J. A. "Global Asymptotic Stability for Nonlinear Systems of Differential Equations and Applications to Reactor Dynamics." Archive for Rational Mechanics and Analysis, 5, (1960) 194-211.

14. Levin, J. J. and Nohel, J. A. "On a System of Integrodifferential Equations Occurring in Reactor Dynamics." Journal of Mathematics and Mechanics, 9, (1960) 347-68.
15. Nohel, J. A. "A Class of Nonlinear Delay Differential Equations." Journal of Mathematics and Physics, 38, (1960) 295-311.
16. Smets, H. B. Stability in the Large of Heterogeneous Power Reactors. Report No. 1833, Centre d'Etude de l'Energie Nucleaire, C.E.N., Bruxelles, Belgium, (December 23, 1959) 29.
17. Smets, H. B. Contributions to Nuclear Power Reactor Stability. Presses Universitaires de Bruxelles, 1960, pub. pend.
18. Sandmeier, H. A. The Kinetics and Stability of Fast Reactors with Special Considerations of Nonlinearities. Report No. ANL-6014, Argonne National Laboratory, Lemont, Ill., (June, 1959) 83.
19. Sandmeier, H. A. The Kinetics and Stability of Fast Reactors with Special Considerations of Nonlinearities. Thesis, E.T.H., Zurich, Switzerland; Juris-Verlag, Zurich, (1959) 89.
20. Sandmeier, H. A. "Nonlinear Treatment of Large Perturbations in Power Reactor Stability." Nuclear Science and Engineering, 6, (August, 1959) 85-92.
21. Welton, T. A. Kinetics of Stationary Reactor Systems. Proceedings, First International Conference on Peaceful Uses of Atomic Energy, 5, (1956) 377-88; Report ORNL 1894, Oak Ridge National Laboratory, Oak Ridge, Tenn., 1955.
22. Howard, R. C. "Evaluation of the Non-Linear Kinetic Behavior of a Nuclear Power Reactor." Transactions, American Society of Mechanical Engineers, 78, (1956) 163-69.
23. de Figueiredo, R. P. On the Non-Linear Stability of a Nuclear Reactor with Delayed Neutrons. A/Conf. 15/P1815, Second International Conference on the Peaceful Uses of Atomic Energy, Geneva, Switzerland, (1958) 6.
24. Akcasu, Z. Derivation of the Criterion for Unconditional Non-Linear Stability. Memo, Argonne National Laboratory, Lemont, Ill., (May 7, 1959) 6, unpublished.
25. Akcasu, Z. A Note on the Non-Linear Stability of Reactors. Memo, Argonne National Laboratory, Lemont, Ill., (July 22, 1959) 6, unpublished.
26. Bryant, L. T. and Morehouse, N. F., Jr. Analogue Computer Solution of the Nonlinear Reactor Kinetic Equation. Report No. ANL-6027, Argonne National Laboratory, Lemont, Ill., (July, 1959) 31.

27. Smets, H. B. "A Non-Linear Stability Criterion for Nuclear Reactors" (in French). Bull. classe sci. acad. roy. Belg., 45, (1959) 102-07.
28. Smets, H. B. "On Welton's Stability Criterion for Nuclear Reactors." Journal of Applied Physics, 30, (1959) 1623.
29. Smets, H. B. "A General Property of Boundedness for the Power of Some Stable and Unstable Nuclear Reactors." Nukleonik, 2, (1960) 44-45.
30. Gyftopoulos, E. P. and Devoogt, J. "Boundedness and Stability in Nonlinear Reactor Dynamics." Nuclear Science and Engineering, 7, (1960) 533-40.
31. Gille, J. C., Pelegrin, M. J. and Decaulne, P. Feedback Control Systems: Analysis, Synthesis and Design. New York: McGraw-Hill Book Company, Inc., (1959) 793.
32. Gille, J. C., Pelegrin, M. J. and Decaulne, P. Methodes Modernes d'Etude des Systemes Asservis. Dunod, Paris, (1959) Chap. 14.
33. Chernick, J. Report No. BNL-173, Brookhaven National Laboratory, Brookhaven, L. I., circa 1954.
34. LaSalle, J. P. Some Extensions of Liapunov's Second Method. Report No. TR 60-5, RIAS, Baltimore, Md., (1960) 25.
35. Hill, R. and Higgins, T. J. "A Classified Bibliography of Feedback Control Systems: Part III." AIEE Conference Paper No. 59-366. An updated version is in preparation.
36. Higgins, T. J. "A Resume of the Development and Literature of Nonlinear Control-System Theory." Transactions, American Society of Mechanical Engineers, 79, (1957) 445-53.
37. Higgins, T. J. "A Resume of Basic Literature on Nonlinear Systems (with Particular Reference to Liapounoff's Methods)." Paper at Workshop on Liapounoff's Methods, M.I.T., September, 1960.


A RESUME OF THE BASIC LITERATURE ON NONLINEAR SYSTEM THEORY
(WITH PARTICULAR REFERENCE TO LYAPUNOV'S METHODS)

Thomas J. Higgins
University of Wisconsin

A RESUME OF THE BASIC LITERATURE ON NONLINEAR SYSTEM THEORY (WITH PARTICULAR REFERENCE TO LYAPUNOV'S METHODS)

In detailing the time stream of the development of nonlinear control systems theory in a 1957 paper,(46) the author noted that "the fifth and present stage of work originated about 1950. On one side, through the start of a considerable effort directed toward improving the performance of control systems by the deliberate inclusion of nonlinear elements and effects (as in MacDonald's work on multiple-mode switching); on the other side, through attempt to study inherent nonlinear effects in control systems in the 'large', rather than just in the small, wherefor linearization yields useful [but limited] solutions." A scan, now, several years later, of the proceedings of subsequently held conferences and symposia,(132-135) of recent review papers,(33,40,90,118,121) and of selective bibliographies and summary papers on adaptive,( ) sampled-data,(49,50,114,146,148) time-lag(25,125) and relay(129) types of specialized control systems and on continuous systems in general(29,49,62,70,90,95,123,137-139,141,149) both supports the validity and strengthens the tone of this two-fold statement. For in each of the several mentioned areas of control activity, a concentrated, fervid and accelerating interest now centers on enlarging the scope of nonlinear aspects and on broadening the domain over which analysis and synthesis can be effected with increased "exactness". To achieve the latter in the fullest possible sense requires consideration of the actual nonlinear differential equations of performance, with consequent attendant need of the fullest possible knowledge of available analytic methods of solution (and of supporting graphic, numeric and computer techniques). An obvious approach to gaining such knowledge is assimilation of theory as set out in the excellent recently-published texts wholly(76,106) or in part(8,26,58,92) devoted to nonlinear differential equation theory

and/or associated linear differential equation theory, and to more specialized texts relative to asymptotic behavior, particularly as linked with considerations of stability of solution.(12,15,20,98) In this connection a logical course of procedure, for one not already well-versed in these texts, is to read each of the first group of remarked books as language facility permits; then to take in turn the clearly written, easily-grasped book by Bellman,(12) which advances an admirable treatment of the bases of the vector approach to differential equations; to follow this with Cesari's(20) glowingly-reviewed, exhaustively-detailed four-chapter text enfolding: 1) various concepts of stability and linear systems with constant coefficients, 2) variable- and periodic-coefficient systems, with emphasis on second-order systems, 3) an account of the first and second methods of Lyapunov and of various analytic methods, well illustrated by solution of some of the better-known (by name) differential equations, and of conjoined analytical-topological methods, 4) the general theory of asymptotic development; the whole comprising a unique synthesis of content, enfolded in the 69-page, 700-odd-item bibliography, which comprises an exhaustive listing of the pertinent periodical literature, including numerous papers on control systems theory; and then to end with a reading of the rather abstract and well-complementing work by Bogolyubov and Mitropol'sky.(15) In a connected area of theory, attention is to be directed to several recent texts particularly pertinent to solution of nonlinear control systems containing distributed-parameter elements, which give rise to characterization by difference-differential equations.(86,88,99) Finally, in the thought that a powerful method of solving system problems characterized by differential or difference-differential equations is to

so recouch them (often by use of variational techniques) as system problems characterized by integral equations, attention is directed to the recent excellent text on nonlinear integral equations by Smirnov,(ll) complementing several recent texts on linear integral equation theory, including the pertinent content in the unparalleled text on approximate methods by Kantorovich and Kryloff,(151) which contains a most excellent account of the variational methods originated by Ritz, Galerkin, Trefftz, Grammel and others (collocation, least squares, etc.), and of yet other methods as discussed in the author's summary of these methods.( ) Finally, note may be made of several works especially useful with respect to the details of the theory of parametric oscillation.(35,75,112)

Background and basic knowledge so gained will prove most helpful to a rapid assimilation of the more advanced of recent control engineering texts devoted to treatment of nonlinear systems on other than the now-familiar describing-function and phase-plane approaches: either in part, through one or more chapters enfolding such content as an introduction to Lyapunov's second method;(2,31,32,37,38,101,136) or in whole, with content ranging from easy to medium-difficult,(2,27,34,41,61,73,113,129,130,131,153,154) up to the quite sophisticated texts by Letov(68,69) and Lurje,(72,73) to which especial attention is to be directed: in that they provide, at present, the most comprehensive rigorously-based account of nonlinear control systems theory, of such basic nature that knowledge of this content ought now be held, or rapidly gained, by all seriously interested in study and research in control engineering.

In a complementive sense, and in knowledge that much of the theory and method used to effect analysis of the operating performance of nonlinear control systems were earlier, and are now, used in other branches of technology, especially in nonlinear mechanics in the large, in nonlinear vibration theory in particular, and in nonlinear electric circuit analysis; that a great deal of the theory, and numerous powerful methods, developed for solution of problems therein are only now coming over into control engineering (as, for example, use of Lyapunov's second method in the English-language literature); and that much yet remains to be brought over (such as a more general use of abstract-space theory), the control engineer can well take up study of what has been developed to date in these associated fields. A rational program therein is to start with some of the well-written, simple to moderate-difficulty, shorter texts on general treatment,(28,46,61,114) then in the specialized areas of nonlinear electric systems(11,39,54,56,57,108,122) or nonlinear mechanical systems,(5,87,93,97,102,111,115,119,120,144,147) though some overlap occurs between the two groups. Subsequently, one would take up the recent exhaustively-detailed accounts enfolded in the second editions of the books by Minorsky(85) (this pending work ought be the most complete account in English); by Bulgakov(19) and by Andronov, Witt and Chaikin,(5) both of which are strong in illustrative numerical examples; by Kauderer,(53) which has a particularly well-detailed account of parametrically-produced oscillations; by Malkin,(80) especially strong in the more abstract phases of theory; by Mitropol'sky,(89) especially concerned with transient phenomena; and by Hayashi,(45) which advances perhaps the most detailed account of sub-harmonic oscillatory phenomena in forced nonlinear systems.

As evidenced in a number of the above-mentioned texts, a central problem in modern control systems analysis and design comprises investigation and determination of the nature of various aspects of stability, with respect both to specified desired driving inputs and to undesired disturbance inputs. To such end the considerable literature stemming from a lengthy history of work in other connections, as well detailed in the excellent history of the development of the stability of motion by Moisseev,(9) can be brought over and used for solution of the problems of control theory. Thus, the algebraic methods initiated in the work of Routh(104) (among the first of modern workers) and of Hurwitz, the topological and trajectorial techniques initiated by Poincare and advanced by Birkhoff,(13) and the analytic techniques founded by Liapounov(71) have in recent years been expanded by a host of workers, particularly in the USSR, to the end that there is now a very comprehensive and well-integrated body of available theory, set out in well-detailed texts by, especially, Soviet authors.(10,21,23,30,55,78,79,94,100,127)

Particular attention has centered in recent years on Lyapunov's methods. The control engineer desirous of gaining a firm foundation and an extensive knowledge of these approaches, and of their enlargement by a host of workers (particularly by Lyapunov's fellow countrymen), now has available to him two recent excellent texts written especially for those interested primarily in control engineering: by Malkin(79) and by Hahn.(44) The former is somewhat broader in scope and less concisely written; the latter is concerned principally with Liapounov's second, or direct, method.
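The interlinking of the algebraic (Routh-Hurwitz) and analytic (Lyapunov) approaches remarked above can be illustrated concretely for a linear system x' = Ax: A is a stability matrix exactly when the Lyapunov matrix equation AᵀP + PA = -I has a symmetric positive definite solution P, in which case V(x) = xᵀPx is a Lyapunov function for the system. The sketch below is an illustrative example, not drawn from Malkin or Hahn; `lyapunov_2x2` is an assumed helper name, and the solver handles only the 2x2 case.

```python
def lyapunov_2x2(A):
    """Solve A^T P + P A = -I for symmetric P = [[p, q], [q, r]]."""
    (a11, a12), (a21, a22) = A
    # The (1,1), (1,2) and (2,2) entries give three linear equations in p, q, r.
    M = [[2 * a11, 2 * a21, 0.0],
         [a12, a11 + a22, a21],
         [0.0, 2 * a12, 2 * a22]]
    b = [-1.0, 0.0, -1.0]
    # Tiny Gaussian elimination with partial pivoting.
    for i in range(3):
        piv = max(range(i, 3), key=lambda k: abs(M[k][i]))
        M[i], M[piv], b[i], b[piv] = M[piv], M[i], b[piv], b[i]
        for k in range(i + 1, 3):
            f = M[k][i] / M[i][i]
            M[k] = [mk - f * mi for mk, mi in zip(M[k], M[i])]
            b[k] -= f * b[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        x[i] = (b[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    p, q, r = x
    return [[p, q], [q, r]]

# Damped oscillator x'' + 3x' + 2x = 0 in first-order form.
A = [[0.0, 1.0], [-2.0, -3.0]]
P = lyapunov_2x2(A)

# Routh-Hurwitz for a second-order system: trace < 0 and determinant > 0 ...
assert A[0][0] + A[1][1] < 0 and A[0][0] * A[1][1] - A[0][1] * A[1][0] > 0
# ... agrees with P being positive definite (Sylvester's criterion),
# i.e. V(x) = x^T P x is a valid Lyapunov function with dV/dt = -x^T x.
assert P[0][0] > 0 and P[0][0] * P[1][1] - P[0][1] ** 2 > 0
```

This is the "equations of first approximation" link that Malkin's early chapters develop: linearize the nonlinear system at the equilibrium, and a quadratic Lyapunov function obtained this way certifies local asymptotic stability of the original nonlinear system whenever the Routh-Hurwitz conditions hold.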

To summarize: Malkin's book comprises a thorough, clearly-written treatment of Lyapunov theory, supported by numerous illustrative examples drawn from both the technical and the mathematical literature, and enfolding discussion and illumination of valuable work contained in difficult-to-obtain (at least in the U.S.) sources. The text comprises six chapters, divided into three sections of successively increasing complexity and abstractness. The first three chapters enfold a general account of the stability problem for equilibrium points, outline the basic theory of the direct method thereto, and account for its interlinking (with use of interesting geometric sketches) to the Routh-Hurwitz stability criteria as stemming from consideration of the equations of first approximation. Chapters IV and V, constituting the second section, are respectively concerned with certain critical cases of equilibrium motion and with the stability of periodic oscillations (limit cycles) as characterized by linear and nonlinear periodic-coefficient equations. Chapter VI is concerned principally with certain quite advanced aspects of general theory, with the theory of stability by first approximation, and with certain pertinent critical cases. Other writings by Malkin are enfolded in(77) or complement(78) this text.

Hahn's eight-chapter text,(44) concerned largely with analytic aspects, comprises an introductory account of basic theory and of sufficient conditions for stability or instability (I, II), exemplification by solution of various rather general problems drawn from technology, particularly control engineering theory (III), generalization and ramification of the basic theory, ranging from the inverse problem to critical cases (IV-VII), and extension to investigation of the stability of systems

characterized by partial differential, difference-differential, and difference equations. A lengthy bibliography of some 98 authors, many entailing from two to a dozen or so entries, evidences the degree of attainment of the author's avowed purpose of summarizing "all pertinent publications up to and including 1957", thus digesting the most significant work up to some three years ago. The subsequent literature, 1957-1960, is already large in extent; but the particularly interested reader can easily gather it from the pertinent abstracting journals, as desired.

These two exhaustive works are quite formidable in their entirety, though the first several chapters of each can be read without too great analytic demands. Accordingly, one interested in somewhat simpler introductions, yet ones somewhat more advanced than is to be found in the control engineering textbooks to date, might well look over the excellent survey by Antosiewicz,(7) read the interesting reports by Lefschetz(66,67) and LaSalle,(64) turn, as feasible, to the excellent accounts, well buttressed by numerical examples, in the writings by Hahn(41) (as editor), Reissig(103) and Zubov,(127,128) and, finally, as indeed is necessary to keep abreast in this rapidly developing area of control theory, take up a systematic reading of the periodical control literature.

The variety of applications, and the extent and complexity of use in control theory, which Lyapunov theory has already gained in countries in which little had been written thereon only several years ago is manifest in, say, the general programs of the recent IFAC Conference in Moscow and of the present Joint Automatic Control Conference at MIT, in certain of the survey papers which will appear in the Proceedings of these Conferences, and in the large body of already published

papers(124) on such diverse areas as electrical machines,(24) nuclear reactors,(42) general nonlinear control systems, continuous-time and discrete-time systems,(51) and pulsed sampled-data systems,(143) to cite only a few items which the author has recently read or worked on with interest.

In recapitulation of the foregoing, it is of interest to note the very considerable degree to which the methods affording exact treatment of nonlinear systems engineering, in the various aspects of mechanics, electric circuits and control engineering, were conceived in the USSR, and the rapidity with which they were developed and utilized in practice, a viewpoint well emphasized in LaSalle and Lefschetz's report.(63) An overall survey of this, in considerable depth, is afforded by the well-detailed review articles by Alekseeva,(3) Rytov,(10) and Mandelstam and others(81,82) on nonlinear mechanics and electric circuits in general, and by Hahn and others(40,43,74,107) on control engineering in particular; and in the accounts of the professional work of several of the more distinguished Soviet workers(6,16,36,83) whose names occur repeatedly above.

Finally, it may not be inappropriate to close with a rather apt quotation from the recent 48th Wilbur Wright Memorial Lecture, "Mathematics and Aeronautics", by the distinguished English mathematician Dr. M. J. Lighthill,(45) director of the Royal Aircraft Establishment at Farnborough, England: "The great scientific and engineering advances of the present day are coming from the bringing together of widely different departments of knowledge, for example, the way in which electron microscopy has been used to solve the chemical bonds of genetics or solid state quantum theory to transform electronic circuits"; and in such thought to consider the kindred way in

which quite abstract theory developed earlier in nonlinear mechanics and electric circuit theory has provided a reservoir that now affords means of solution of pressing modern-day problems in nonlinear control engineering theory and practice: a reservoir, moreover, which is both not fully tapped(2) and yet filling.(126)

REFERENCES

1. Lectures on the Theory of Automatic Control (in Russian), M. A. Aizerman. Gostekhizdat, Moscow, 1958, ed. 2, 519 pp.
2. Nonlinear Problems in the Theory of Automatic Control (in Russian), M. A. Aizerman. Fizmatgiz, Moscow, 1950, 24 pp.
3. Mathematics and Mechanics in Publications of the Academy of Sciences of the USSR. A Bibliography II, 1936-1947 (in Russian), V. P. Alekseeva. Izd. AN SSSR, Moscow, 1955, 515 pp.
4. Theory of Oscillations, A. A. Andronow and C. E. Chaikin (trans. by N. Goldowsky). Princeton University Press, Princeton, N. J., 1949, 358 pp.
5. Theory of Oscillations (in Russian), A. A. Andronow, A. Witt, C. E. Chaikin. Moscow, ed. 1, 1936, 518 pp.; ed. 2, 1959, 915 pp.
6. In Memory of Aleksandr Aleksandrovich Andronov (in Russian). Izd. AN SSSR, Moscow, 1955. A memorial volume of contributed papers.
7. "A Survey of Lyapunov's Second Method," H. Antosiewicz. In Contributions to the Theory of Nonlinear Oscillations, IV, Princeton University Press, Princeton, N. J., pp. 141-166. Vol. 41 of Annals of Mathematics Studies.
8. Foundations of the Qualitative Theory of Ordinary Differential Equations (in Russian), N. A. Artemiev. Izd. Leningrad University, 1941.
9. "A Survey of Adaptive Control Systems," J. A. Aseltine, A. R. Mancini, C. W. Sarture. IRE Transactions on Automatic Control, PGAC-6, 1958, pp. 102-108.
10. Behavior of Dynamic Systems Close to the Limits of the Region of Stability (in Russian), N. N. Bautin. OGIZ, Moscow, 1949.
11. Theorie des Circuits Non-Lineaires en Regime Alternatif: Redresseurs, Modulateurs, Oscillateurs, V. Belevitch. Gauthier-Villars, Paris, 1959, 293 pp.
12. Stability Theory of Differential Equations, R. Bellman. McGraw-Hill Book Company, Inc., New York, 1954, 166 pp.
13. Dynamical Systems, G. Birkhoff. American Mathematical Society Colloquium Publications, New York, 1927.
14. Mecanique Non-lineaire, les Oscillations a Regimes Quasi-sinusoidaux, A. Blaquiere. Memorial des Sciences Mathematiques, vol. CXLI, Gauthier-Villars, Paris, 1960, 157 pp.

15. Asymptotic Methods in the Theory of Nonlinear Oscillations (in Russian), N. Bogolyubov, Y. A. Mitropol'sky. Fizmatgiz, Moscow, 1955, 447 pp.; ed. 2, 1958, 408 pp.
16. "Academician Nikolai Nikolaevich Bogolyubov, Soviet Physicist (On the Occasion of his Fiftieth Birthday)." Journal of Technical Physics (English translation), vol. 37, 1960, pp. 235-238.
17. "Current Status of Dynamic Stability Theory," F. E. Bothwell. AIEE Transactions, vol. 71, part 1, 1952, pp. 232-8.
18. "Elements of the Theory of Resonance," E. W. Brown. Rice Institute Pamphlet 19, 1932, 60 pp.
19. Oscillations (in Russian), B. V. Bulgakov. GITTL, Moscow, 1949; ed. 2, 1954, 891 pp.
20. Asymptotic Behavior and Stability Problems in Ordinary Differential Equations, L. Cesari. (Erg. der Math. u. ihrer Grenz., no. 16), Springer-Verlag, Berlin, 1959, 271 pp.
21. Stability of Motion (in Russian), N. G. Cetaev. GITTL, Moscow, ed. 1, 1946, 204 pp.; ed. 2, 1955, 207 pp.
22. Research in the Dynamics of Non-Holonomic Systems (in Russian), S. A. Chaplygin. Gostekhizdat, Moscow, 1949, 111 pp.
23. Problems of Routh-Hurwitz for Polynomials and Entire Functions (in Russian), N. G. Chebotarev, N. N. Neiman. Trudy Math. Inst. Steklova, vol. 26, 1949, 332 pp.
24. "Objectives and Trends in Feedback Control System Progress," H. Chestnut. Electrical Engineering, vol. 77, 1958, pp. 58-63.
25. "Time Lag Systems: A Bibliography," N. H. Choksy. IRE Transactions on Automatic Control, vol. AC-5, 1960, pp. 66-71.
26. Theory of Ordinary Differential Equations, E. A. Coddington and N. Levinson. McGraw-Hill Book Company, Inc., New York, 1955, 429 pp.
27. Nonlinear Control Systems, R. L. Cosgriff. McGraw-Hill Book Company, Inc., New York, 1958, 328 pp.
28. Introduction to Nonlinear Analysis, W. J. Cunningham. McGraw-Hill Book Company, Inc., New York, 1958, 343 pp.
29. "Survey of the Methods Available for Analysis and Synthesis of Non-Linear Servomechanisms," S. Demczynski. Electrical Energy, vol. 1, 1957, pp. 279-84.
30. Foundations of the Theory of the Stability of Motion (in Russian), G. N. Dubosin. Izd. Moscow University, 1952, 318 pp.

31. Stability of Automatic Control Systems (in Russian), K. V. Egorov. Ucheb. posobie stud. spets. Avtomatika i Telemekhanika, Moscow, 1954, 79 pp.
32. Electric Automatic Control Systems (in Russian), A. A. Feldbaum. Oborongiz, Moscow, 1957, 808 pp.
33. "Research in Servomechanisms," H. Freeman. Sperry Engineering Review, vol. 12, 1959, pp. 23-9.
34. Discontinuous Automatic Control, I. Flugge-Lotz. Princeton University Press, Princeton, N. J., 1953, 168 pp.
35. Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems (in Russian), F. R. Gantmakher and M. Krein. Moscow, ed. 2, 1950, 359 pp.
36. Alexej Nikolajewitsch Krylow (1863 bis 1945), J. L. Geronimus. Verlag Technik, Berlin, 1953, 56 pp.
37. Theorie et Technique des Asservissements, J. C. Gille, M. Pelegrin, P. Decaulne. Dunod, Paris, 1956, 703 pp. English edition: Feedback Control Systems: Analysis, Synthesis and Design, McGraw-Hill Book Company, Inc., New York, 1959, 793 pp.
38. Methodes Modernes d'Etude des Systemes Asservis, J. C. Gille, P. Decaulne, M. Pelegrin. Dunod, Paris, 1960, 460 pp.
39. Oscillations and Waves (in Russian), G. S. Gorlik. Moscow, 1950.
40. "Neuere sowjetische Arbeiten zur Regelungsmathematik," W. Hahn. Regelungstechnik, vol. 12, 1954, pp. 292-296.
41. "Nichtlineare Regelungsvorgange," Beihefte zur Regelungstechnik, W. Hahn, ed. R. Oldenbourg-Verlag, Munich, 1956.
42. Probleme und Methoden der modernen Stabilitatstheorie, W. Hahn. M.T.W.-Mitt., 1957, 119 pp.
43. "Stabilitatsuntersuchungen in der neueren sowjetischen Literatur," W. Hahn. Regelungstechnik, vol. 3, 1955, pp. 229-231.
44. Theorie und Anwendung der direkten Methode von Ljapunov, W. Hahn. Erg. der Math. u. ihrer Grenz., no. 22, Springer-Verlag, Berlin, 1959, 142 pp.
45. Forced Oscillations in Nonlinear Systems, C. Hayashi. Nippon Publishing Company, Osaka, 1953, 164 pp.
46. "A Resume of the Development and Literature of Nonlinear Control-System Theory," T. J. Higgins. Transactions, ASME, vol. 79, 1957, pp. 445-53.

47. "Classified Bibliography of Feedback Control Systems: Part I. Sampled Data Systems," R. Greer and T. J. Higgins. AIEE Conference Paper No. CP-59-1269, 18 pp. An updating of this is now in preparation.
48. Nonlinear Electrical Networks, W. L. Hughes. Ronald Press Company, New York, 1960, 313 pp.
49. "On the Treatment of Nonlinear Control Systems (3)" (in Japanese), Y. I. K. Izawa. Automatic Control, vol. 7, 1960, pp. 142-149.
50. "Recent Advances in the Field of Sampled-Data and Digital Control Systems," E. I. Jury. University of California Institute of Engineering Research, Department of Electrical Engineering Series, No. 60, Oct. 20, 1959. AIEE Transactions, vol. 78, part 1, 1959, pp. 769-777, is pertinent.
51. "Control System Analysis and Design via the Second Method of Lyapunov: I. Continuous-Time Systems; II. Discrete-Time Systems," R. E. Kalman, J. E. Bertram. ASME Transactions, vol. 82, ser. D, Journal of Basic Engineering, 1960, pp. 371-93, 394-400.
52. "Stability Theory" (hectographed notes), W. Kaplan. Department of Mathematics, University of Michigan, Ann Arbor, 24 pp. In Reference 133 (1957).
53. Nichtlineare Mechanik, H. Kauderer. Springer-Verlag, Berlin, 1958, 684 pp.
54. The Methods of Oscillation Theory in Radio Engineering (in Russian), I. M. Kopchinskii. G.E.I., Moscow, 1954.
55. Some Problems in the Theory of Stability of Motion (in Russian), N. N. Krasovskii. Izd. F.M.L., Moscow, 1959, 211 pp.
56. Research in the Dynamic Stability of Synchronous Machines (in Russian), N. M. Krylov and N. N. Bogoliubov. Kharkov, 1932, 99 pp.
57. New Methods of Nonlinear Mechanics as Applied in the Study of Vacuum-Tube Oscillators (in Russian), N. M. Krylov, N. N. Bogoliubov. ONTI, Moscow, 1954, 243 pp.
58. Methods of Nonlinear Mechanics as Applied to the Theory of Stationary Oscillations (in Russian), N. Kryloff, N. Bogoliuboff. Izd. AN Ukr. SSR, Kiev, 1934, 111 pp.
59. Introduction to Nonlinear Mechanics (in Russian), N. Kryloff, N. Bogoliuboff. Ukr. Acad. Nauk., vols. 1-2, (1937) 365.
60. Introduction to Nonlinear Mechanics (free translation by S. Lefschetz), N. Kryloff and N. Bogoliuboff. Annals of Mathematics Studies, No. 11, Princeton University Press, Princeton, N. J., (1947) 106.

61. Analysis and Control of Nonlinear Systems: Nonlinear Vibrations and Oscillations in Physical Systems, Y. H. Ku. Ronald Press Company, Inc., New York, (1958) 360.
62. "Theory of Nonlinear Control," Y. H. Ku. Paper given at 1960 IFAC Conference, Moscow.
63. "Recent Soviet Contributions to Ordinary Differential Equations and Nonlinear Mechanics," J. P. LaSalle, S. Lefschetz. Report RIAS TR59-3, RIAS, Baltimore, April 1959, 47 pp.
64. "Some Extensions of Liapounov's Second Method," J. P. LaSalle. Report TR60-5, RIAS, Baltimore, 1960.
65. Differential Equations: Geometric Theory, S. Lefschetz. Interscience Publishers, Inc., New York, (1957) 364. See Chapter 6 especially.
66. "Liapounov and Stability in Dynamical Systems," S. Lefschetz. Report TR58-2, RIAS, Baltimore, 1958; Memorandum 58-5; Bol. Soc. Mat. Mex., 1958, pp. 25-39.
67. "Controls: An Application of the Direct Method of Liapounoff," S. Lefschetz. Report TR59-8, RIAS, Baltimore, 1959.
68. Stability of Nonlinear Automatic Control Systems (in Russian), A. M. Letov. G.I.T.L., Moscow, (1955) 312.
69. The Problem of Stability of Nonlinear Control Systems (translated by J. G. Adashko), A. M. Letov. Princeton University Press, Princeton, N. J., (1960) 350. Contains chapters added to the first Russian edition.
70. "Survey of Mathematical Methods for Nonlinear Control Systems," J. M. Loeb. ASME Transactions, vol. 80, (1958) 1439-50.
71. "Probleme General de la Stabilite du Mouvement," M. A. Liapounoff. Annals of Mathematics Studies, No. 17, Princeton University Press, (1949) 469. Reproduced from Ann. Fac. Sci. Univ. Toulouse, 9, (1907), 244-474. Originally in Comm. Soc. Math. Kharkov, 3, (1893), 265-472; ONTI, Moscow, 1935; Moscow, (1950) 471.
72. Some Nonlinear Problems in the Theory of Control (in Russian), A. I. Lurje. G.I.T.T.L., Moscow, (1951) 215.
73. Einige nichtlineare Probleme aus der Theorie der selbsttatigen Regelung (translated by H. Kindler and R. Reissig), A. I. Lurje. Akademie-Verlag, Berlin, (1957) 167. English translation: Some Nonlinear Problems in the Theory of Control, Her Majesty's Stationery Office, London, (1957) 165.

74. "Womit beschaftigen sich die Regelungstheoretiker in der Sowjetunion?," K. Magnus. Regelungstechnik, Vol. 4, 1960, pp. 113-114. 75. Theory and Application of Mathieu Functions, N. W. McLachlan. Oxford University Press, Oxford and New York, (1947), 401. 76. Ordinary Nonlinear Differential Equations in Engineering and Physical Sciences, N. W. McLachlan. Oxford University Press, Oxford and New York, Ed. 1, (1950), 201; Ed. 2, (1955), 270. 77. Some Questions in the Theory of the Stability of Motion in the Sense of Liapounov (in Russian), I. G. Malkin. Sborn. Nauk Trudov Kazansk Av. Inst., (1937), 103. 78. The Methods of Lyapunov and Poincare in the Nonlinear Theory of Oscillations (in Russian), I. G. Malkin. Gostekhizdat, Moscow, (1949), 244. 79. Theory of the Stability of Motion (in Russian), I. G. Malkin. Gostekhizdat, Moscow, (1952), 432. English Translation: U. S. Atomic Energy Commission, Technical Information Service Extension, Oak Ridge, Tennessee; Office of Technical Services, Washington, D. C., 456 pp. German translation by W. Hahn and R. Reissig, Theorie der Stabilitat einer Bewegung, R. Oldenbourg, Munich, (1959), 402. 80. Some Problems of the Theory of Nonlinear Oscillations (in Russian), I. G. Malkin. G.I.T.T.L., Moscow, (1956), 492. 81. "New Studies in Nonlinear Oscillations" (in Russian), L. Mandelstam, H. Papaleksi, 1936. In Mandelstam's Collected Works, Akad. Nauk. SSSR, Moscow, (1950), 3, 89. 82. "Expose des Recherches Recentes sur les Oscillations Nonlineaires," L. I. Mandelstam, A. Witt. Journal Technical Physics, SSSR, 2, (1935), 81-134. 83. L. I. Mandel'shtam and the Theory of Nonlinear Oscillations (in Russian). Akad. Nauk. SSSR, (1956), 441. 84. Methods of Integrating Ordinary Differential Equations (in Russian), N. M. Matveev. Izd. Leningrad University, (1955), 655. 85. Introduction to Nonlinear Mechanics, N. Minorsky. W. Edwards Publishing Co., Ann Arbor, Michigan, (1947), 464. Enlarged Ed. 2 pend. pub. by Van Nostrand. 86.
On Certain Applications of Difference-Differential Equations, N. Minorsky. Electrical Engineering Department, Stanford University, April 15, 1948, 58 pp.

87. The Theory of Oscillations, N. Minorsky. In Surveys in Applied Mathematics, Vol. II, Dynamics and Nonlinear Vibrations, John Wiley and Sons, Inc., New York, (1958), 111-197. 88. Linear Differential Equations with Delayed Argument (in Russian), A. D. Mishkes. Gos. Izdat. Tekh-Teor. Lit., Moscow, (1951), 255. German Translation, Lineare Differentialgleichungen mit nacheilendem Argument, Deutscher Verlag der Wissenschaften, Berlin, (1955), 180. 89. Transient Phenomena in Nonlinear Oscillating Systems (in Russian), Y. A. Mitropolskii. Izd. AN Nauk. Ukrain. SSSR, Kiev, (1955), 283. 90. "Review of the Nonlinear Control Theory" (in Japanese), T. Mitsumaki. Automatic Control, 6, (1959), 85-91. 91. Development of the Theory of Stability (in Russian), N. D. Moisseev. Gostekhizdat, Moscow, (1949), 663. 92. Qualitative Theory of Differential Equations (in Russian), V. V. Nemickii and V. V. Stepanov. Moscow, Ed. 1, (1949); Ed. 2, (1951), 550. English translation, Princeton University Press, Princeton, N. J., 1960. 93. Torsional Vibrations of Nonlinear Systems with Many Masses (in Russian), I. S. Meiman. Oborangiz, Moscow, 1947. 94. Stability of Linearized Systems, I. I. Neimark. LWA, Moscow, 1949. 95. "Some Nonlinear Problems in the Theory of Automatic Control Systems," P. J. Nowacki. Archiwum Automatyki i Telemechaniki, 4, (1959), 3-24. 96. "Die Behandlung von nichtlinearen Problemen in der Regelungstechnik," P. J. Nowacki. Regelungstechnik, 8, (1960), 47-50. 97. Bases for the Applied Theory of Elastic Vibrations (in Russian), Y. G. Panovko. Mashgiz, Moscow, (1957), 336. 98. Sur les Solutions Asymptotiques des Equations Differentielles, T. Peyovitch. Soc. Math. Phys. de Serbie, N. F., Belgrade, (1952), 52. 99. Ordinary Difference-Differential Equations, E. Pinney. University of California Press, Berkeley and Los Angeles, (1958) 262. 100. Some Problems of the Theory of the Stability of Motion (in Russian), V. A. Pliss. University of Leningrad, (195 ), 181. 101.
Dynamics of Systems of Automatic Control (in Russian), E. P. Popov. Gos., Moscow, (1954), 798. German Translation, Dynamik der automatischen Regelsysteme, Akademie Verlag, Berlin, (1958), 780.

102. Uvod do Teorie Nelinearnych a Quasiharmonickych Kmitov Mechanickych Sustav, L. Pust, A. Tondl. CSAV, Prague, (1956), 174. 103. Die direkte Methode zum Schwingungs-Stabilitatsnachweis, R. Reissig. Forschungen und Fortschritte in der nichtlinearen Mechanik. 104. A Treatise on the Stability of a Given State of Motion, Particularly Steady Motion, E. J. Routh. MacMillan, London, (1877), 108. 105. "Development of the Theory of Nonlinear Oscillations in the USSR," S. M. Rytov. Radio Engineering and Electronics, 2, (1957), 168-92. Translated from Radiotekhnika i Elektronika, 2, (1957), 1435-1450. 106. Equazioni Differenziali Nonlineari, G. Sansone, R. Conti. Cons. Naz. delle Ricerche, Rome, (1956), 647 pp. 107. "Review of the Research and Practical Application of Automatic Control Engineering in Japan," Y. Sawaragi. Japan Science Review, Mechanical and Electrical Engineering, 3, 1958, pp. 133-136. 108. Contributions to the Theory of Electrical Circuits with Nonlinear Elements, J. J. Schaffer. Van Gorcum and Co., Assen, (1956), 93 pp. 109. "Soviet Literature on Control Systems," P. L. Simmons, H. A. Pappo. IRE Transactions on Automatic Control, AC-5, (1960), 142-7. 110. Introduction to the Theory of Nonlinear Integral Equations (in Russian), N. S. Smirnov. ONTI, Moscow, (1936), 122 pp. 111. Static and Dynamic Stability of Structures (in Russian), A. F. Smirnov. Gos., Moscow, (1947), 307 pp. 112. "A Survey of the Scientific Work of Lyapunov" (in Russian), V. I. Smirnov. Prik. Mat. Mech. SSSR, 12, (1948), 479-560. 113. Grundlagen der selbsttatigen Regelung (translated under D. H. Kindler), W. W. Solodownikow. Band 1, Allgemeine Grundlagen der Theorie linearisierter selbsttatiger Regelungssysteme; Band 2, Einige Probleme aus der Theorie der nichtlinearen Regelungssysteme, R. Oldenbourg-Verlag, Munich, and VEB-Verlag Technik, Berlin, (1959), 1-728, 729-1180. 114. Nonlinear Vibrations in Mechanical and Electrical Systems, J. J.
Stoker, Interscience Publishers Inc., New York, (1950), 273 pp. 115. Introduction to the Theory of Vibration (in Russian), S. P. Strelkov. G.I.T.T.L., Moscow, 1950. 116. "A Selective Bibliography on Sampled Data Control Systems," P. R. Stromer. IRE Transactions PGAC-6, (1958), 112-114. "Bibliography of Sampled Data Control Systems," H. Freeman, O. Lowenschuss. IRE Transactions PGAC-4, (1958), 28-30.

117. Lamesche, Mathieusche und verwandte Funktionen in Physik und Technik, M. J. O. Strutt. Springer-Verlag, Munich, (1932), 116 pp. 118. "Der gegenwartige Stand der selbsttatigen Regelung in Japan," Y. Takahashi and Y. O. Oshima. Regelungstechnik, 3, (1955), pp. 161-166. 119. Self-Oscillating Systems (in Russian), K. F. Teodorchik. G.I.T.T.L., Moscow, Ed. 3, (1952), 271 pp. 120. "The Method of Continued Fractions Applied to the Investigation of Oscillations of Mechanical Systems. I. Simple Linear and Nonlinear Systems" (in Russian), V. P. Terskih. Leningrad, 1955. 121. "Automation and Automatic Control, A Survey of the Field," J. G. Truxal. Proceedings, Conference on Science and Technology for Deans of Engineering, Purdue University, (1958), 5-30. 122. Erzeugung von Schwingungen mit wesentlich nichtlinearen negativen Widerstanden, R. Urtel. Nachrichtentechnische Fachberichte, 13, 1958. 123. "Der zeitige Stand der Analyse und Synthese von Nichtlinearitaten in Regelungssystemen," B. P. Th. Veltman. Regelungstechnik, 5, (1957), 77-86. 124. "On Some Applications of Liapounoff's Theory of Stability in Electrical Machinery" (in Czech), Z. Vorel. Aplikace Matematiky, 1, 1956, pp. 59-75. 125. "Transportation Lag - An Annotated Bibliography," R. Weiss. IRE Transactions on Automatic Control, AC-4, (1959), 56-68. 126. Nonlinear Problems in Random Theory, N. Wiener. Technology Press, Cambridge, Mass., and John Wiley and Sons, Inc., (1958), 131 pp. 127. Mathematical Methods for Investigation of Systems of Automatic Control (in Russian), V. I. Zubov. Gos. Soyuz. Izd., Leningrad, (1959), 224 pp. 128. The Methods of A. M. Ljapunov and Their Application, V. I. Zubov. Leningrad, 1959. 129. Theory of Relay Systems of Automatic Control (in Russian), J. S. Zypkin. Gos., Moscow, (1955) 463 pp. German Translation by W. Hahn and R. Herschel, Theorie der Relaissysteme der automatischen Regelung, R. Oldenbourg, Munich, and VEB Verlag Technik, Berlin, (1958), 472 pp. 130.
Problems of the Theory of Nonlinear Systems of Automatic and Open-Loop Control (in Russian), J. Zypkin. Gos., Moscow, 1957. 131. Nonlinear Automatic Control Systems, by Y. Z. Zypkin. Cleaver-Hume Press Ltd., London, pub. pend.

132. Actes du Colloque International des Vibrations Nonlineaires, Ile de Porquerolles, 1951. Publ. Sci. et Tech. du Ministere de l'Air, No. 281, Paris, (1953), 296 pp. 133. Proceedings of the Symposium on Nonlinear Circuit Analysis (Ed. by J. Fox). Brooklyn Polytechnic Institute, New York, 1953, 411 pp. 134. Transactions of the All-Union Conference on Automatic Control Theory. Five (?) sets to date. 135. "Proceedings of the First Joint National Automatic Control Conference," Dallas, Nov. 4-6, 1959. In IRE Transactions on Automatic Control, AC-4, Dec. 1959, pp. 1-245. 136. Theory of Automatic Control of Motors: Equations of Motion and Stability, M. Ajzerman. Gostekhizdat, Moscow, (1952), 523 pp. 137. "The Behavior of Nonlinear Systems," F. H. Clauser. Journal of the Aeronautical Sciences, 23, (1956), pp. 409-32. 138. "An Introduction to the Study of Nonlinear Control Systems," J. F. Coales. Journal of Scientific Instruments, 34, (1957), pp. 41-7. 139. "Nonlinear Control Systems," J. F. Coales. Process Control and Automation, 7, (1960), 195-204. 140. "On the Application of the Method of Lyapunov to Difference Equations" (in German), W. Hahn. Mathematische Annalen, 136, (1958), 403-41. 141. Berechnung nichtlinearer Schwingungs- und Regelungs-Systeme, H. W. Hahnemann. VDI-Zeitschrift, 98, (1956), 46. 142. Stability Analysis of Nuclear Reactors by Lyapounoff's Second Method, T. J. Higgins. Paper presented at "Workshop on Lyapunov's Direct Method" at Joint Automatic Control Conference, M.I.T., Sept. 7, 1960. 143. "Asymptotic Stability of Some Nonlinear Feedback Systems," T. T. Kadota. Report, Electronics Research Laboratory, Department of Electrical Engineering, University of California, Berkeley, Institute of Engineering Research, Series No. 60, Issue No. 264, Jan. 4, 1960. 144. Oscillations in Nonlinear Systems, T. Khajori. Moscow, 1957. 145. "Mathematics and Aeronautics - The 48th Wilbur Wright Memorial Lecture," M. J.
Lighthill. Journal, Royal Aeronautical Society, 64, (July, 1960), pp. 375-94. 146. "Bemerkungen zur Theorie und Literatur der linearen Impuls-Regelsysteme," L. Prouza. Regelungstechnik, 5, (1960), 162-7.

147. "Application des Methodes Graphiques a la Mecanique non Lineaire," W. Masson. Memoire de Science, Universite de Bruxelles, Bruxelles, Belgium, 1955. 148. "A Survey of Techniques for the Analysis of Sampled-Data Control Systems," G. Murphy, R. Ormsky. IRE Transactions on Automatic Control, AC-2, (1957), pp. 79-90. 149. "Schrifttumszusammenstellung uber nichtlineare Regelvorgange," W. Oppelt. Fachausschuss Regelungs-Mathematik der GAMM, March, 1955, 15 mimeographed pages. 150. On the Application of Liapunov's Second Method to the Synthesis of Nonlinear Control Systems, A. Stubberud, C. T. Leondes, M. Margolis. Paper at NEREM Conference, Nov. 1959. 151. Approximate Methods of Higher Analysis, V. I. Krylov (translated by C. D. Benster). P. Noordhoff Ltd., Groningen, Netherlands, 1958, 681 pp. 152. "The Approximate Mathematical Methods of Applied Physics as Exemplified by Application to Saint-Venant's Torsion Problem," T. J. Higgins. Journal of Applied Physics, 14, (1943), pp. 469-480. 153. Analytical Techniques for Non-Linear Control Systems, J. C. West. English Universities Press, London, England, 1960, 224 pp. 154. Nonlinear Control Systems, J. E. Gibson. (In preparation.) 155. Dynamics of Nonlinear Servomechanisms (in Russian), N. S. Gorskaya, I. N. Krutova, V. Rutkovskii. Izdatel'stvo Akademii Nauk SSSR, Moscow, Russia, 1959, 319 pp.

A PROBLEM IN STABILITY ANALYSIS BY DIRECT MANIPULATION OF THE EQUATION OF MOTION

D. R. Ingwerson
Sperry Gyroscope Division of Sperry-Rand Corporation
Sunnyvale, California

A PROBLEM IN STABILITY ANALYSIS BY DIRECT MANIPULATION OF THE EQUATION OF MOTION

INTRODUCTION

Stability analysis by the Lyapunov method is simply the problem of finding a positive definite function, V, whose time derivative, taken in the direction of the motion of a system, is non-positive. If V also becomes infinite with infinite deviation from equilibrium, stability for all possible initial conditions is assured. The failure of a function to satisfy these conditions does not necessarily imply instability. Various formal methods for producing such functions, notably the one of Lur'e, are available. All of these reduce to certain fundamental manipulations of the equation of motion. The nature of the desired operations is not always evident from the equation alone and, therefore, the formal procedures are useful. The following problem is instructive in that it can be investigated by purely "brute force" techniques without any more knowledge of the Lyapunov method than the statement above. Such an approach is suggested. It can also be solved by the method of Lur'e and by the use of describing functions. The result from describing function analysis is given for comparison.

By carrying through the indicated operations, it is seen that the production of Lyapunov functions is just a matter of integrating parts of the differential equation. The question of which parts to integrate may be answered by trial and error and inspection of the results. Such an inductive process is useful only to gain an understanding of the principles involved. The Lyapunov function achieved in this way was originally produced by a more formal method of matrix integration, soon to be published, and it was observed that the mechanics of the method for this special case reduce to those indicated.

THE PROBLEM

The third order system shown below has a linear transfer function with three poles and no zeros, preceded by a nonlinear gain which increases with increasing error. It is compensated for stability by a derivative feedback which modifies the signal to the nonlinear element. It is desired to find the conditions under which this system is stable for all inputs.

[Block diagram: the input r(t) enters a summing junction; the error drives the nonlinear gain N(e), which drives the linear element to produce the output c(t); the output and a derivative feedback are returned to the summing junction. N(e) = e + c1e³]

It is assumed that K, T1, T2, c1 and c2 are all positive constants. The linear element is K/[s(1 + T1s)(1 + T2s)], and c2 is the derivative feedback constant. By making the substitutions

    b1 = (T1 + T2)/(T1T2),   b2 = 1/(T1T2),   b3 = K/(T1T2),

and, in accordance with definition (10) of the paper "Principal Definitions of Stability," noting that the equilibrium position must exist after a time t1 if r(t) is fixed at the instantaneous value it assumes at t1, the substitution y = r(t1) − c, where c is the system output, reduces the problem to a consideration of the following equation of motion:

    y''' + b1y'' + (b2 + b3c2)y' + b3y + b3c1(y + c2y')³ = 0.

In order that the system be stable for all input functions r(t), the equilibrium position of this equation must be stable for all possible initial disturbances.
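The equation of motion can be integrated numerically to make the stability question concrete. The sketch below is an illustration, not part of the original paper; the parameter values T1 = T2 = K = c1 = c2 = 1 (so that b1 = 2, b2 = b3 = 1, and b1c2 − 1 = 1 > 0) are assumed for the example. A fixed-step Runge-Kutta routine shows an initial disturbance dying out:

```python
def f(state):
    # Derivatives for y''' + b1*y'' + (b2 + b3*c2)*y' + b3*y + b3*c1*(y + c2*y')**3 = 0
    y, yd, ydd = state
    b1, b2, b3, c1, c2 = 2.0, 1.0, 1.0, 1.0, 1.0   # assumed illustrative constants
    yddd = -(b1 * ydd + (b2 + b3 * c2) * yd + b3 * y + b3 * c1 * (y + c2 * yd) ** 3)
    return (yd, ydd, yddd)

def rk4_step(state, h):
    # One classical fourth-order Runge-Kutta step of size h
    k1 = f(state)
    k2 = f(tuple(s + h / 2 * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + h / 2 * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 0.0, 0.0)       # initial disturbance y(0) = 1
for _ in range(4000):         # integrate to t = 40 with h = 0.01
    state = rk4_step(state, 0.01)
assert all(abs(s) < 1e-3 for s in state)   # the equilibrium is approached
print("disturbance decays:", state)
```

The decay is consistent with the stability condition obtained later in the paper.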

CONDITIONS FOR STABILITY FROM DESCRIBING FUNCTION ANALYSIS

The describing function for the nonlinear gain is

    N̄ = 1 + (3/4)c1ē²,

where ē is the peak magnitude of the fundamental component of e. If N(e) is replaced by a constant gain of this magnitude, the equation of the system, expressed in operational notation, is

    s³ + b1s² + (b2 + N̄b3c2)s + N̄b3 = 0.

The Routh-Hurwitz criterion predicts stability for this when

    b1b2 + b3(b1c2 − 1)[1 + (3/4)c1ē²] > 0.

In terms of the variables in the previous equation, this expression becomes

    b1b2 + b3(b1c2 − 1)[1 + (3/4)c1(ȳ² + c2²v̄²)] > 0,

where ȳ and v̄ are interpreted as the peak magnitudes of the fundamental components of the variables y and v = y'. The condition for stability is placed in this analytic form for comparison with the result derived by the Lyapunov method.

A MEANS FOR FINDING A LYAPUNOV FUNCTION

It is well known that integrals for certain differential equations can be found by multiplying the equation by an

appropriate integrating factor. The integrating factors are usually obtained by a combination of inspection and guesswork. A common practice in second order, conservative systems is to obtain an energy integral by multiplying by the derivative of the dependent variable. In higher order systems the integrating factor is more complicated. As an example of the way this may be applied to the problem, let the equation of motion be divided into linear and nonlinear parts:

    [L] = y''' + b1y'' + (b2 + b3c2)y' + b3y,
    [N] = b3c1(y³ + 3c2y²y' + 3c2²yy'² + c2³y'³).

First consider the equation [L] = 0. It may be observed that the first and third terms can be integrated by multiplying by y''. Therefore,

    y''[L] = d/dt[ y''²/2 + (b2 + b3c2)y'²/2 ] + b1y''² + b3yy'' = 0.

The term in the brackets might be selected as a candidate for a Lyapunov function for the linear equation except that it is only semi-definite. Another integration can also be accomplished by multiplication by y':

    y'[L] = d/dt[ y'y'' + b1y'²/2 + b3y²/2 ] − y''² + (b2 + b3c2)y'² = 0.

Here the bracketed term also has some of the desired characteristics, although it is also semi-definite. The next logical step is to try a combination of the equations above:

    (b1y' + y'')[L] = d/dt[ y''²/2 + b1y'y'' + (b1² + b2 + b3c2)y'²/2 + b1b3y²/2 ] + b3yy'' + b1(b2 + b3c2)y'² = 0.

The term in the bracket is positive definite but, since [L] = 0, its derivative is −(b3yy'' + b1(b2 + b3c2)y'²), which cannot be non-positive. However, the undesirable terms in the derivative can be converted to a more convenient form by an integration by parts, giving

    (b1y' + y'')[L] = d/dt[ y''²/2 + b1y'y'' + (b1² + b2 + b3c2)y'²/2 + b3yy' + b1b3y²/2 ] + (b1b2 + b1b3c2 − b3)y'² = 0.

Now, if the bracketed expression is chosen as a Lyapunov function, its derivative is −(b1b2 + b1b3c2 − b3)y'², which is non-positive when the coefficient in parentheses is positive. From Sylvester's inequalities* it may be found that this is the same condition under which the bracketed expression is positive definite. Thus, a Lyapunov function and its derivative for the equation [L] = 0 are:

*See page 11 of "The 'Direct Method' of Lyapunov in the Analysis and Design of Discrete-Time Control Systems," J. E. Bertram.

    V_L = y''²/2 + b1y'y'' + (b1² + b2 + b3c2)y'²/2 + b3yy' + b1b3y²/2,
    dV_L/dt = −(b1b2 + b1b3c2 − b3)y'².

The remainder of the problem consists in applying similar reasoning to the equation [N] = 0. In this case, the same integrating factor may be used for the nonlinear part of the equation as is used above for the linear part.

A SOLUTION

By applying the operations indicated for the linear part of the equation to the nonlinear part, and eliminating the undesirable remaining terms through an integration by parts, it can be deduced that

    (b1y' + y'')[N] = d/dt[ (b3c1/(4c2))(b1c2 − 1)y⁴ + (b3c1/(4c2))(y + c2y')⁴ ] + b3c1(b1c2 − 1)(3y² + 3c2yy' + c2²y'²)y'².

That the variable term in parentheses is semi-definite is verified by Sylvester's inequalities. Call the bracketed expression V_N.

This is monotonically increasing in y and y', as can be verified by observing that ∂V_N/∂y and ∂V_N/∂y' have zeroes only for zero values of y and y'. Therefore, V_N is semi-definite. As a Lyapunov function for the complete equation, [L] + [N] = 0, the sum of the functions V_L and V_N may be used. Let V = V_L + V_N. Then

    (b1y' + y'')([L] + [N]) = dV/dt − V̇ = 0,

where

    V = y''²/2 + b1y'y'' + (b1² + b2 + b3c2)y'²/2 + b3yy' + b1b3y²/2 + (b3c1/(4c2))(b1c2 − 1)y⁴ + (b3c1/(4c2))(y + c2y')⁴,
    V̇ = −[(b1b2 + b1b3c2 − b3) + b3c1(b1c2 − 1)(3y² + 3c2yy' + c2²y'²)]y'².

V̇ is non-positive for

    b1b2 + b3(b1c2 − 1)[1 + c1(3y² + 3c2yy' + c2²y'²)] ≥ 0.

This inequality may be compared with the one obtained by the use of describing functions. In both cases stability exists for all inputs when

    (b1c2 − 1) > 0.
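The algebra above can be spot-checked numerically. The following sketch is an illustration, not part of the original paper; the constants b1 = 2, b2 = b3 = c1 = c2 = 1 are assumed. It compares the chain-rule derivative of V along the equation of motion with the closed form for dV/dt at random points of (y, y', y'') space:

```python
import random

b1, b2, b3 = 2.0, 1.0, 1.0   # assumed: T1 = T2 = K = 1
c1, c2 = 1.0, 1.0            # assumed nonlinearity and feedback constants

def V(y, yd, ydd):
    """V = V_L + V_N as given in the text (yd = y', ydd = y'')."""
    u = y + c2 * yd
    return (ydd ** 2 / 2 + b1 * yd * ydd + (b1 ** 2 + b2 + b3 * c2) * yd ** 2 / 2
            + b3 * y * yd + b1 * b3 * y ** 2 / 2
            + (b3 * c1 / (4 * c2)) * ((b1 * c2 - 1) * y ** 4 + u ** 4))

def Vdot_closed(y, yd, ydd):
    """Closed form for dV/dt derived in the text."""
    q = 3 * y ** 2 + 3 * c2 * y * yd + c2 ** 2 * yd ** 2
    return -((b1 * b2 + b1 * b3 * c2 - b3) + b3 * c1 * (b1 * c2 - 1) * q) * yd ** 2

def Vdot_chain(y, yd, ydd, h=1e-6):
    """dV/dt by the chain rule, with partials of V taken by central differences."""
    u = y + c2 * yd
    yddd = -(b1 * ydd + (b2 + b3 * c2) * yd + b3 * y + b3 * c1 * u ** 3)
    dVdy = (V(y + h, yd, ydd) - V(y - h, yd, ydd)) / (2 * h)
    dVdyd = (V(y, yd + h, ydd) - V(y, yd - h, ydd)) / (2 * h)
    dVdydd = (V(y, yd, ydd + h) - V(y, yd, ydd - h)) / (2 * h)
    return dVdy * yd + dVdyd * ydd + dVdydd * yddd

random.seed(0)
for _ in range(500):
    y, yd, ydd = (random.uniform(-2, 2) for _ in range(3))
    assert abs(Vdot_chain(y, yd, ydd) - Vdot_closed(y, yd, ydd)) < 1e-4
print("dV/dt identity confirmed at 500 random points")
```

Since b1c2 − 1 > 0 for these assumed constants, Vdot_closed is non-positive everywhere, in agreement with the conclusion above.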

APPLICATION OF LIAPUNOV'S SECOND METHOD TO CONTROL SYSTEMS WITH NONLINEAR GAIN

J. E. Gibson
School of Electrical Engineering, Purdue University

Z. V. Rekasius
School of Electrical Engineering, Purdue University

I. Introduction

The major difficulty in applying Liapunov's "Second Method" to the analysis of practical control systems is the lack of a straightforward procedure for finding a Liapunov function (i.e., a function of the system variables satisfying Liapunov's stability or instability theorems). However, several Liapunov functions have been developed that apply to a large group of control systems, namely those that can be described by the so-called "first canonic form" of system differential equations. The first canonic form is defined in Section II. As is shown in Section II, any autonomous closed-loop system with a single nonlinear gain element can be described by the first canonic form of differential equations. Several Liapunov functions applicable to systems expressed in the first canonic form of differential equations are outlined in Section III. These functions enable one to establish sufficient conditions for asymptotic stability of such systems.

The systems to which the procedure of stability analysis presented in this paper is applicable can be represented by the block diagram shown in Figure 1. It is assumed that the input into the system, r(t), is removed at time t = 0, i.e.,

    r(t) = 0 for all t > 0.   (1.1)

Under the above assumption the block diagram of the system can be simplified as shown in Figure 2. It will also be assumed that the input-output characteristics of the nonlinear gain element can be described by a continuous, single-valued function

    y = f(x);   f(0) = 0,   (1.2)

where x is the input into and y the output of the nonlinear element.

II. Canonic Transformations

Consider a closed-loop servo system described by a set of differential equations

    dzi/dt = λi zi + f(x),   i = 1, 2, ... n,   (2.1a)

    x = Σ(i=1 to n) ai zi,   (2.1b)

and

    dx/dt = Σ(i=1 to n) βi zi − r f(x).   (2.1c)

This form of differential equations is called the First Canonic Form (or Lur'e's First Canonic Form) of differential equations (Ref. 1, p. 1357). It may be used to represent a closed-loop system with a single nonlinear gain element, and with the driving function removed at time t = 0. The block diagram of such a system is shown in Figure 2. To show that Equation (2.1) actually represents the system of Figure 2, let

    D = d/dt.   (2.2)

Then, from Equation (2.1a) and Equation (2.1b) one finds

    (D − λi) zi = y,   i = 1, 2, ... n   (2.3)

and

    x = Σ(i=1 to n) ai zi,   (2.4)

where y = f(x) (1.2) represents the nonlinear element characteristics. Solving Equation (2.3) for zi and substituting into Equation (2.4) one obtains

    x = Σ(i=1 to n) ai y/(D − λi).   (2.5)
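Equation (2.5) is exactly a partial-fraction structure, so the canonic coefficients can be produced numerically from a given loop transfer function. The sketch below is an illustration, not part of the paper: it takes G(s) = (s + 1)/[(s + 2)(s + 3)(s + 5)], the system of Problem 2 later in this paper, extracts each residue by evaluating (s − λi)G(s) just off the pole, and forms ai, βi and r according to the relations of this section:

```python
def G(s):
    # Sample loop transfer function: the system of Problem 2 of this paper.
    return (s + 1) / ((s + 2) * (s + 3) * (s + 5))

lam = [-2.0, -3.0, -5.0]   # poles lambda_i of G(s)
eps = 1e-7                 # offset for the numerical residue limit

# a_i = -(residue of G at lambda_i), since G(s) = -sum_i a_i/(s - lambda_i)
a = [-eps * G(p + eps) for p in lam]
beta = [ai * li for ai, li in zip(a, lam)]   # beta_i = a_i * lambda_i
r = -sum(a)                                  # r = -sum of a_i

print("a    =", [round(v, 3) for v in a])     # [0.333, -1.0, 0.667]
print("beta =", [round(v, 3) for v in beta])  # [-0.667, 3.0, -3.333]
print("r    =", round(r, 3))                  # approximately 0
```

The printed values reproduce the coefficients of Problem 2 and of the Problem 3 solution at the end of this paper.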

Note that the loop transfer function of the system of Figure 2 is

    G(s) = G1(s) G2(s) = −X(s)/Y(s).   (2.6)

Consequently, from Equation (2.5) and Equation (2.6) the loop transfer function is

    G(s) = −Σ(i=1 to n) ai/(s − λi).   (2.7)

Equation 2.7 indicates that the constants λi are the poles of the loop transfer function G(s), and that the constants −ai are the residues of G(s) at the corresponding poles. Thus the first canonic form of differential equations for the system of Figure 2 (or that of Figure 1 with either constant or zero input) can be obtained from the partial fraction expansion of the loop transfer function G(s). To complete the transformation of system differential equations into the first canonic form one may differentiate Equation 2.1b with respect to time and then substitute Equation 2.1a. This procedure yields

    βi = ai λi,   i = 1, 2, ... n,   (2.8)

and

    r = −Σ(i=1 to n) ai.   (2.9)

Once the numerical values for the coefficients λi, ai, βi and r have been calculated it may be possible to prove the stability of a system by means of one of the Liapunov functions discussed in the next section.

III. Liapunov's Function

Lur'e has considered the function

    V = Σ(i=1 to n) Σ(j=1 to n) [ai aj/(λi + λj)] zi zj − ∫(0 to x) f(σ) dσ   (3.1)

as a Liapunov function for systems described by the first canonic form of

differential equations. It can be shown* that this V-function is negative definite** if the following inequality is satisfied:

    ∫(0 to x) f(σ) dσ ≥ 0,   (3.2)

provided that the constants ai are real for corresponding real λi's and are in pairs of complex conjugates for corresponding complex conjugate pairs of λi's, and that Re λi < 0. The time derivative of this Liapunov function, in connection with the first canonic form of system differential equations, is

    dV/dt = r f²(x) + [Σ(i=1 to n) ai zi]² − f(x) Σ(i=1 to n) zi [βi − 2ai Σ(j=1 to n) aj/(λi + λj)].   (3.3)

The time derivative of this Liapunov function [Equation (3.3)] can be made positive semidefinite*** by letting

    2ai Σ(j=1 to n) aj/(λi + λj) = βi,   i = 1, 2, ... n.   (3.4)

Lur'e has also shown that by adding to the Liapunov function of Equation (3.1) the term

    W = A1z1² + ... + Aszs² + C1zs+1zs+2 + C3zs+3zs+4 + ... + Cn−s−1zn−1zn,   (3.5)

*Ref. 2, p. 46
**I.e., V is negative everywhere in the phase space of the variable x, except at the origin where it is equal to zero.
***I.e., dV/dt is non-negative everywhere in the phase space of the system variable x.

where the constants A and C are infinitesimally small negative numbers, the time derivative of the Liapunov function [Equation (3.3)] can be made positive definite. Consequently the application of Liapunov's stability theorem leads to the following stability theorem, known as Lur'e's Theorem:*

If a system described by Equation (2.1) satisfies the following conditions:

a. There exists at least one solution of a set of stability equations [Equation (3.4)] such that the ai are real for corresponding real λi's and are in pairs of complex conjugates for corresponding complex conjugate pairs of λi;

b. ∫(0 to x) f(σ) dσ > 0;  f(0) = 0;

c. the constant r > 0;

d. Re λi < 0 for all i = 1, 2, ... n;

then the system is globally asymptotically stable. Local asymptotic stability can also be established by means of Lur'e's theorem, if there is a range of values of x, containing the equilibrium state, over which Equation (3.2) is satisfied.

The preceding stability equation [Equation (3.4)] may frequently reject systems that are actually stable, since it puts too many restrictions on the system. Since Lur'e's theorem represents sufficient conditions for asymptotic stability, which may not always be necessary conditions for stability, it is possible to relax the requirements of Lur'e's theorem considerably, thus making it applicable to a greater number of stable systems.

*Ref. 2, p. 51
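Equation (3.3) is a pointwise algebraic identity along the canonic equations (2.1), so it can be checked by direct computation. In the sketch below (an illustration; the values of λi, ai, βi, r and the nonlinearity f(x) = x³ are arbitrary assumptions), dV/dt is evaluated by the chain rule and compared with the right-hand side of Equation (3.3):

```python
import random

# Assumed illustrative data, not taken from the paper:
lam  = [-1.0, -2.5]      # poles, Re < 0
a    = [0.7, -0.4]       # V-function coefficients
beta = [0.3, 1.1]        # coefficients of dx/dt in (2.1c)
r    = 0.5
def f(x):
    return x ** 3        # a nonlinearity with f(0) = 0 and integral of f >= 0

def Vdot_chain(z, x):
    """dV/dt of V = sum_ij a_i a_j/(lam_i+lam_j) z_i z_j - int_0^x f, along (2.1)."""
    fx = f(x)
    zdot = [li * zi + fx for li, zi in zip(lam, z)]
    xdot = sum(bi * zi for bi, zi in zip(beta, z)) - r * fx
    dV = -fx * xdot
    for i in range(len(z)):
        for j in range(len(z)):
            dV += a[i] * a[j] / (lam[i] + lam[j]) * (zdot[i] * z[j] + z[i] * zdot[j])
    return dV

def Vdot_eq33(z, x):
    """Right-hand side of Equation (3.3)."""
    fx = f(x)
    S = sum(ai * zi for ai, zi in zip(a, z))
    corr = sum(z[i] * (beta[i] - 2 * a[i] * sum(a[j] / (lam[i] + lam[j])
                                                for j in range(len(z))))
               for i in range(len(z)))
    return r * fx ** 2 + S ** 2 - fx * corr

random.seed(1)
for _ in range(1000):
    z = [random.uniform(-3, 3), random.uniform(-3, 3)]
    x = random.uniform(-3, 3)
    assert abs(Vdot_chain(z, x) - Vdot_eq33(z, x)) < 1e-8
print("Equation (3.3) verified")
```

When the bracketed correction term vanishes, as arranged by the stability equations (3.4), the remaining expression rf² + (Σaizi)² is visibly non-negative.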

By adding to and subtracting from Equation (3.3) the quantity

    2√r f(x) Σ(i=1 to n) ai zi

and then selecting as stability equations

    2ai [Σ(j=1 to n) aj/(λi + λj) − √r] = βi,   i = 1, 2, ... n,   (3.6)

one obtains

    dV/dt = [√r f(x) + Σ(i=1 to n) ai zi]².   (3.7)

Consequently, Equation (3.6) can also be used as a stability equation in Lur'e's Theorem. In other words the roots ai of Equation (3.6) can be used instead of the roots ai of Equation (3.4) to prove that a system is stable by the use of Lur'e's Theorem.

The exact solution of Equation (3.6) is not known for higher order systems (n > 3). It is sometimes possible to find approximate values of the roots ai of Equation (3.6) by rewriting this equation as

    ai² = λi [βi + 2√r ai − 2ai Σ(j≠i) aj/(λi + λj)],   (3.8)

then assuming values of ai and aj on the right-hand side of Equation (3.8), solving for ai, and repeating the procedure until the change in the values of ai becomes negligible. A relationship given by Lur'e* can be used to check the answer.

*Ref. 2, p. 55
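The successive-substitution idea can be sketched as follows. The rearrangement used here isolates ai linearly from Equation (3.6) rather than quadratically as in Equation (3.8), which avoids a square-root sign choice; the numerical data (λi, r, and βi constructed so that a known real solution exists) are assumed purely for illustration:

```python
from math import sqrt

# Assumed illustrative data, not taken from the paper.
lam = [-1.0, -2.0]   # poles lambda_i
r = 1.0

def lhs(a, i):
    """Left-hand side of stability equation (3.6) for index i."""
    return 2 * a[i] * (sum(aj / (lam[i] + lam[j]) for j, aj in enumerate(a)) - sqrt(r))

# Construct beta_i so that a = (0.5, 0.3) is a known exact root.
beta = [lhs([0.5, 0.3], i) for i in range(2)]

# Successive substitution: solve (3.6) for a_i on the left, using the
# current estimates of a_j on the right, and repeat until convergence.
a = [0.1, 0.1]
for _ in range(200):
    a = [beta[i] / (2 * (sum(aj / (lam[i] + lam[j]) for j, aj in enumerate(a)) - sqrt(r)))
         for i in range(2)]

assert max(abs(lhs(a, i) - beta[i]) for i in range(2)) < 1e-10
print("roots a =", a)
```

For other data the iteration may fail to converge, or may converge to complex roots; as the paper notes, condition (a) of Lur'e's theorem must then be checked separately.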

Lur'e also considered the function

    V = Σ(i=1 to n) Σ(j=1 to n) [ai aj/(λi + λj)] zi zj   (3.10)

as a possible Liapunov function in connection with the first canonic form of differential equations and obtained the stability equation

    2ai Σ(j=1 to n) aj/(λi + λj) = ai,   i = 1, 2, ... n.   (3.11)

A system was shown to be asymptotically stable if:

a. the roots ai of Equation (3.11) satisfy the requirements of Lur'e's stability theorem,

b. Re λi < 0 for all i = 1, 2, ... n,

c. x f(x) > 0 for all |x| > 0;  f(0) = 0.

Various other simplified stability criteria (i.e., other stability equations based on the above two Liapunov functions as well as other V-functions) have been successfully applied to prove stability of closed-loop systems with a single nonlinear element. References 1 through 4 contain many examples of such simplified stability criteria.

Many stable systems will be rejected by the simplified stability criteria [Equations (3.4), (3.6), and (3.11)] presented in this outline due to rather weak restrictions on the nonlinear gain characteristics. A procedure whereby the nonlinear element characteristics are restricted to a much smaller region of its input-output plane is reported in Reference 3.* Such restrictions decrease considerably the number of actually stable closed-loop systems which otherwise would have been rejected by the V-functions of Section 3 of this paper.

*It will also be included in Technical Report No. 3, to be published in October, 1960 by Purdue University, School of Electrical Engineering under Contract No. AF 29(600)-1933 from the Holloman Air Force Base, New Mexico.

It must be emphasized, however, that the problem of finding Liapunov functions which would yield necessary conditions for stability of the systems shown in Figure 1 in the case of higher order systems (n > 3) has not been solved. Hence, if one set of stability equations rejects a system, it does not mean that the system is definitely unstable, since a different set of stability equations may be used to prove that such a system is stable.

IV. Problems

Problem 1: Consider the closed-loop system shown in Figure 1 with

    G1(s) G2(s) = 1/[s(s + 1)(s + 2)]

and the nonlinear element being a saturating amplifier. The input-output characteristics of the amplifier [Figure 3] satisfy the inequality

    x f(x) > 0 for all |x| > 0;  f(0) = 0.

Let the driving function r(t) be removed at time t = 0.

a. Arrange the differential equations describing this system for time t > 0 into the canonic form

    dzi/dt = λi zi + f(x),   i = 1, 2, ... n,

and

    x = Σ(i=1 to n) ai zi.

b. Find also dx/dt.

Find numeric values for all the coefficients of the canonic equations.

Problem 2: Consider the closed-loop servo system described by the canonic equations

    dz1/dt = −2z1 + f(x),
    dz2/dt = −3z2 + f(x),
    dz3/dt = −5z3 + f(x),
    x = 0.333 z1 − z2 + 0.667 z3.

Find the loop transfer function G(s) if r(t) = 0 for all t > 0.

Problem 3:

a. Consider the V-function of Equation (3.1) as a possible Liapunov function for the system of Problem 2. Assume that the nonlinear element is a saturating amplifier with the input-output characteristics as shown in Figure 3. What conclusions can you draw about the stability of the system of Problem 2?

b. Try

    V = Σ(i=1 to n) Σ(j=1 to n) [ai aj/(λi + λj)] zi zj

as a possible Liapunov function for the system of Problem 2. Can you draw any stability conclusion from this V-function and its time derivative dV/dt?

Problem 4: The V-functions of Section III cannot be used as Liapunov functions for the system of Problem 1. This is due to the fact that λ1 = 0.

In such cases, where one of the poles of the loop transfer function G(s) is at the origin of the s-plane, the function

    V = −(1/2) a1² z1² + Σ(i=2 to n) Σ(j=2 to n) [ai aj/(λi + λj)] zi zj − ∫(0 to x) f(σ) dσ

may frequently be used as a Liapunov function.

a. What conclusions can you draw about the stability of the system of Problem 1 from this V-function?

b. The results of part (a) should not be surprising. Can you give some reasons explaining the result of part (a)? (Hint: replace the nonlinear element by a linear amplifier, i.e., let y = kx.)

Problem 5: Consider the system shown in Figure 1 with

    G(s) = 1/[s(s + 2)(s + 3)]

and a saturating nonlinear element, the input-output characteristic of which is shown in Figure 3.

a. Find the region of the nonlinear element input-output characteristic plane [Figure 4] over which the system is stable.

Fig. 1 Block Diagram of a Closed-Loop System with a Single Nonlinear Element

Fig. 2 Simplified Block Diagram of a Closed-Loop System with a Single Nonlinear Element

Fig. 3 Input-Output Characteristic of a Saturating Amplifier (output y = f(x) versus input x)

Fig. 4 Block Diagram of the System of Problem 2

Fig. 5 Allowable Region of the Amplifier Gain for the System of Problem 5

BIBLIOGRAPHY

1. Lur'e, A. I., and Rozenvasser, E. N., "On Methods of Constructing Liapunov Functions in the Theory of Nonlinear Control Systems," Proceedings of the International Federation of Automatic Control Congress, Butterworth Scientific Publications, 1960.

2. Lur'e, A. I., Some Nonlinear Problems in the Theory of Automatic Control (book), Her Majesty's Stationery Office, 1957.

3. Rekasius, Z. V., Ph.D. Thesis, Purdue University, 1960.

4. Letov, A. M., Stability of Nonlinear Control Systems (book, in Russian), Gostechizdat, 1955.

SOLUTIONS OF THE WORKSHOP PROBLEMS

Problem 1: a) The loop transfer function of this system is

    G(s) = 1 / [s(s+1)(s+2)].

Expanding this transfer function into partial fractions one obtains

    G(s) = 0.5/s - 1/(s+1) + 0.5/(s+2).

Consequently, from Equation 2.7 and Equation 2.1, the canonic form of the differential equations for this system becomes

    dz1/dt = f(x),
    dz2/dt = -z2 + f(x),
    dz3/dt = -2 z3 + f(x),

and

    x = -0.5 z1 + z2 - 0.5 z3.

b) Differentiation of the last equation with respect to time and substitution of the preceding three equations yields

    dx/dt = -z2 + z3.

Problem 2: The canonic equations can be rewritten in operational notation as follows:

    z1 = y/(D+2),  z2 = y/(D+3),  z3 = y/(D+5),

and

    x = 0.333 z1 - z2 + 0.667 z3,

where y = f(x) represents the output of the nonlinear element. Eliminating the canonic variables among the above four equations one obtains

    x = [0.333/(D+2) - 1/(D+3) + 0.667/(D+5)] y,

or, from Equation 2.7,

    G(s) = -X(s)/Y(s) = -0.333/(s+2) + 1/(s+3) - 0.667/(s+5).

Thus

    G(s) = (s+1) / [(s+2)(s+3)(s+5)].

Problem 3: a) The V-function of Equation 3.1 implies the use of either Equation 3.4 or Equation 3.6 as the stability equations for the system. Note that, from Equations 2.8, 2.9, and 2.1a,

    dx/dt = -0.667 z1 + 3.000 z2 - 3.333 z3 + 0.000 y.

Hence r = 0 and consequently Equation 3.6 is, for this system, identical to Equation 3.4. Substitution of the numerical values in Equation 3.4 yields:

    0.500 a1² + 0.400 a1 a2 + 0.286 a1 a3 = 0.667,
    0.400 a1 a2 + 0.333 a2² + 0.250 a2 a3 = -3.000,
    0.286 a1 a3 + 0.250 a2 a3 + 0.200 a3² = 3.333.

Simultaneous solution of the above three equations yields the constants

    a1 = +3.333,  a2 = -12.000,  a3 = +11.667.

Hence the requirements of Lur'e's Theorem are satisfied and thus the system is globally asymptotically stable.

b) The suggested V-function implies the use of Equation 3.11 as the stability equation. Substitution of numerical values into Equation 3.11 yields

    0.500 a1² + 0.400 a1 a2 + 0.286 a1 a3 = -0.333,
    0.400 a1 a2 + 0.333 a2² + 0.250 a2 a3 = 1.000,
    0.286 a1 a3 + 0.250 a2 a3 + 0.200 a3² = 0.667.

Simultaneous solution of the above equations yields

    a1 = 1.340 - j0.513,  a2 = -4.360 + j2.720,  a3 = 3.033 - j3.210.

Since all λ's of this system are real and all a's are complex, the V-function used in this part rejects this system, even though it was shown in part (a) that the system is actually stable.
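The real solution quoted in part (a) can be verified by direct substitution. In the sketch below the quadratic form is rebuilt from the printed numbers: the coefficients agree with -2 a_i a_j/(λ_i + λ_j) and the right-hand sides with -λ_i c_i, for λ = (-2, -3, -5) and c = (0.333, -1, 0.667); this reconstruction of Equation 3.4 is inferred from the printed values, not quoted from the original equation.

```python
# Substitute a = (3.333, -12.000, 11.667) into the three stability
# equations of part (a), rebuilt (as an assumption) in the form
#   sum_j [-2/(lam_i + lam_j)] * a_i * a_j = -lam_i * c_i .
lam = [-2.0, -3.0, -5.0]
c = [0.333, -1.0, 0.667]
a = [3.333, -12.000, 11.667]

for i in range(3):
    lhs = sum(-2.0 / (lam[i] + lam[j]) * a[i] * a[j] for j in range(3))
    rhs = -lam[i] * c[i]
    print(i, round(lhs, 2), round(rhs, 2))  # the two sides agree to rounding
```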

Problem 4: a) Differentiating the proposed V-function along the first canonic form of the system differential equations and completing the square, as in Section III, leads to the requirements

    a1² = r,

and

    Σ_{j=2}^{n} [2 a_i a_j / (λ_i + λ_j)] = λ_i c_i,  i = 2, 3, ..., n,

where c_i denotes the coefficient of z_i in x. Substituting the numerical values (from the solution of Problem 1) one obtains a1 = 0 and

    -a2² - 0.667 a2 a3 = -1.000,
    -0.667 a2 a3 - 0.500 a3² = 1.000.

Hence

    a2 = -1.414 - j1.000,  a3 = 1.414 + j2.000.

Since λ2 and λ3 are real while a2 and a3 are complex, the stability equations reject this system.
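That the rejection in part (a) rests on genuinely complex roots can be checked by substituting one solution pair back into the two stability equations. A small Python check (the particular pair below is one member of the sign-symmetric family of solutions):

```python
# Substitute one complex solution pair into the stability equations
#   -a2^2 - 0.667*a2*a3 = -1   and   -0.667*a2*a3 - 0.5*a3^2 = 1 .
a2 = complex(-1.414, -1.000)
a3 = complex(1.414, 2.000)

eq1 = -a2**2 - 0.667 * a2 * a3
eq2 = -0.667 * a2 * a3 - 0.5 * a3**2
print(eq1, eq2)  # close to -1 and +1 respectively, up to rounding
```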

b) If the V-function used in this problem could prove that this system is stable, then it should also prove that the system remains stable when the nonlinear amplifier is replaced by a linear amplifier with a positive gain k. That is, it should prove that the system described by

    G(s) = 1 / [s(s+1)(s+2)]

and

    y = kx,  0 < k < ∞,

is also stable. This is obviously impossible, since the linearized system is unstable for sufficiently high values of the gain k.

Problem 5: Expanding the loop transfer function G(s) into partial fractions one obtains

    G(s) = -1/(s+2) + 2/(s+3).

Hence, from Equation 2.1, the canonic equations for this system are:

    dz1/dt = -2 z1 + f(x),
    dz2/dt = -3 z2 + f(x),
    x = z1 - 2 z2,

and

    dx/dt = -2 z1 + 6 z2 - f(x).

Try Equation 3.10 as a possible Liapunov function for this system. From the stability equation (Equation 3.11) one obtains

    -0.500 a1² - 0.400 a1 a2 = 1.000,
    -0.400 a1 a2 - 0.333 a2² = -2.000.

Simultaneous solution of the above equations yields

    a1 = -1.551,  a2 = +3.552.

Consequently, from Lur'e's Theorem, this system will be globally asymptotically stable if

    x f(x) > 0 for all |x| > 0;  f(0) = 0.

This means that for global asymptotic stability the input-output characteristic of the nonlinear element shall be confined to the 1st and 3rd quadrants of the x-y plane.
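The pair of quadratic stability equations above can also be solved by elimination: with u = a1 a2 the two equations give a1² = -2 - 0.8u and a2² = 6 - 1.2u, and the identity (a1²)(a2²) = u² collapses to u² + 60u + 300 = 0. A brief Python sketch of this elimination (the derivation itself is not in the original text, and 1/3 is used for the printed 0.333):

```python
import math

# Solve  -0.5*a1^2 - 0.4*a1*a2 = 1  and  -0.4*a1*a2 - (1/3)*a2^2 = -2
# via u = a1*a2:  a1^2 = -2 - 0.8*u,  a2^2 = 6 - 1.2*u,
# and (a1^2)*(a2^2) = u^2  =>  u^2 + 60*u + 300 = 0.
u = -30.0 + math.sqrt(900.0 - 300.0)   # root of smaller magnitude
a1 = -math.sqrt(-2.0 - 0.8 * u)        # signs chosen so a1*a2 = u < 0
a2 = math.sqrt(6.0 - 1.2 * u)
print(round(a1, 3), round(a2, 3))      # roughly -1.551 and 3.551
```

Both roots of the quadratic in u are real and negative, so real values of a1 and a2 exist and Lur'e's Theorem applies, as stated.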
