THE UNIVERSITY OF MICHIGAN
COLLEGE OF LITERATURE, SCIENCE, AND THE ARTS
Department of Mathematics
Technical Report No. 18
NECESSARY CONDITIONS FOR OPTIMIZATION PROBLEMS WITH HYPERBOLIC PARTIAL DIFFERENTIAL EQUATIONS
M. B. Suryanarayana
ORA Project 02416
submitted for:
UNITED STATES AIR FORCE
AIR FORCE OFFICE OF SCIENTIFIC RESEARCH
GRANT NO. AFOSR-69-1662
ARLINGTON, VIRGINIA
administered through:
OFFICE OF RESEARCH ADMINISTRATION, ANN ARBOR
February 1971

NECESSARY CONDITIONS FOR OPTIMIZATION PROBLEMS WITH HYPERBOLIC PARTIAL DIFFERENTIAL EQUATIONS*

M. B. Suryanarayana

1. INTRODUCTION

In the present paper we consider a system of nonlinear hyperbolic partial differential equations (state equations) of the form

$$z^i_{xy} = f_i(x, y, z, z_x, z_y, v), \quad (x, y) \in G, \quad i = 1,\dots,n, \tag{1.1}$$
$$z(x, y) = (z^1,\dots,z^n), \quad v(x, y) = (v^1,\dots,v^m), \quad G = [a \le x \le a+h,\ b \le y \le b+k],$$

with Darboux-type boundary conditions

$$z(x, b) = \varphi(x), \quad a \le x \le a+h, \qquad z(a, y) = \psi(y), \quad b \le y \le b+k, \tag{1.2}$$

with constraints

$$v(x, y) \in U, \tag{1.3}$$

and we are concerned with the minimum of a functional of the form

$$I[z, v] = \sum_{i=1}^n A_i\, z^i(a+h, b+k). \tag{1.4}$$

*This research was partially supported by research project US-AFOSR-69-1662 at The University of Michigan. The author is greatly indebted to Professor Lamberto Cesari for his valuable guidance and constant encouragement during the writing of this paper.

Here $\varphi(x) = (\varphi^1,\dots,\varphi^n)$, $a \le x \le a+h$, and $\psi(y) = (\psi^1,\dots,\psi^n)$, $b \le y \le b+k$, are given absolutely continuous (AC) functions in the respective intervals, with $\varphi(a) = \psi(b)$. The control space $U$ above is a given fixed subset of the $u$-space $E_m$. The constants $A_i$, $i = 1,\dots,n$, are given. The minimum of the functional $I[z, v]$ is sought in suitable classes $\Omega$ of pairs $z(x, y) = (z^1,\dots,z^n)$, $v(x, y) = (v^1,\dots,v^m)$, $(x, y) \in G$, satisfying (1.1), (1.2), (1.3), the functions $z^i$ belonging to a Sobolev space $W^1_p(G)$ on $G$, $1 \le p \le +\infty$, and continuous on $G$, and the functions $v^j$ being measurable on $G$.

In the present paper we give a Pontryagin-type necessary condition for the minimum. First, in no. 3 we obtain an existence and uniqueness statement for the solution $z(x, y) = (z^1,\dots,z^n)$, $(x, y) \in G$, $z \in (W^1_p(G))^n$, of the Darboux problem (1.1-2) (the original problem) for a given $p$, $1 \le p \le +\infty$, for given $\varphi$, $\psi$, and for a given measurable function $v(x, y) = (v^1,\dots,v^m)$, $(x, y) \in G$. We derive this existence and uniqueness statement from our previous paper [6b] on multidimensional integral equations of the Volterra type.

The optimization problem (1.1-4) can be written in the form proposed by Cesari [2], with state equations of the Dieudonné-Rashevsky type, the Hamiltonian function $H$ then containing $2n$ multipliers $\lambda_i$, $\mu_i$, $i = 1,\dots,n$. As shown in [2], these $2n$ multipliers are expected to satisfy a suitable system of linear partial differential equations and corresponding boundary conditions (the conjugate problem).

In no. 4 we formulate the conjugate problem pertinent to the optimization problem (1.1-4), and for the first time we prove, in the present situation,

an existence theorem for the solutions $\lambda_i$, $\mu_i$ of the conjugate problem. In other words, we prove, under hypotheses, that there are multipliers $\lambda_i$, $\mu_i$, $i = 1,\dots,n$, in $L_\infty(G)$, satisfying in a suitable sense the partial differential equations and boundary conditions pertaining to the conjugate problem of problem (1.1-4). As in no. 3, we again derive the existence statement from our previous paper [6b] on multidimensional integral equations of the Volterra type.

In no. 5 we give a new proof of the increment formula of [2], under a set of hypotheses different from those in [2]. In no. 6 we derive, as in [2], the Pontryagin-type necessary condition for the optimization problem (1.1-4), with the existence of suitable multipliers actually proved. In no. 7 we make a number of remarks on the obtained results, particularly in relation to the previous papers by Cesari [2] and A. I. Egorov [3a]. In particular, we show that the present necessary condition yields, under strong smoothness hypotheses, the necessary condition previously proved by A. I. Egorov [3a]. On the other hand, we show (no. 7, example 3) that these smoothness hypotheses, under which Egorov's condition has been proved, are not known a priori, while our necessary condition holds.

2. NOTATIONS

If $(X, \|\cdot\|)$ denotes any normed linear space, then $X^n$, $n \ge 1$, denotes the cartesian product of $X$ with itself $n$ times; for $x = (x^1,\dots,x^n) \in X^n$ we define $\|x\| = \sum_{i=1}^n \|x^i\|$. If $x \in E_n$, the $n$-dimensional Euclidean space, then we take $\|x\| = |x| = \sum_{j=1}^n |x^j|$. We shall denote by $\Gamma$ a rather arbitrary

family of measurable control functions. Precisely, let $\Gamma$ be any set of measurable functions $v: G \to U$, $v = (v^1,\dots,v^m)$, with the following property:

(*) For every function $v \in \Gamma$, any point $u \in U$, and any closed subset $S \subset G$, the control function $\bar v$, defined by $\bar v = v$ in $G - S$ and $\bar v = u$ in $S$, belongs to $\Gamma$.

Thus, every constant function $v: G \to \{u\}$, $u \in U$, belongs to $\Gamma$.

For functions $\varphi \in L_p(G)$, $1 \le p \le +\infty$, we denote by $\|\varphi\|$ or $\|\varphi\|_p$ the usual $L_p$-norm; in particular, $\|\varphi\|_\infty = \operatorname{ess\,sup}|\varphi|$. For functions in a Sobolev space $[W^1_p(G)]^n$ in $G$, say $z(x, y) = (z^1,\dots,z^n)$, we shall denote by $z_x = (z^1_x,\dots,z^n_x)$ and $z_y = (z^1_y,\dots,z^n_y)$ the usual generalized first order partial derivatives of $z$, and we take $\|z\|_{W^1_p(G)} = \|z\|_p + \|z_x\|_p + \|z_y\|_p$.

3. THE ORIGINAL PROBLEM

We shall need the following hypotheses:

(H1): The functions $\varphi(x) = (\varphi^1,\dots,\varphi^n)$ and $\psi(y) = (\psi^1,\dots,\psi^n)$ are defined and absolutely continuous on $[a, a+h]$ and $[b, b+k]$, respectively. The derivatives $\varphi_x$ and $\psi_y$, which exist almost everywhere, belong to $L_p([a, a+h])$ and $L_p([b, b+k])$, respectively, for some $p$, $1 \le p \le +\infty$. Furthermore $\varphi(a) = \psi(b)$.

(H2): The function $f = f(x, y, z_1, z_2, z_3, u) = (f_1,\dots,f_n)$ is defined on $G \times E_{3n} \times U$, and each $f_i$ is continuous in $u$ and measurable in $(x, y)$ for fixed $(z_1, z_2, z_3) \in E_{3n}$.

(H3): For each $v \in \Gamma$ the function $s_0(x, y) = f(x, y, 0, 0, 0, v(x, y))$ belongs to $L_p(G)$, with $p$ as in (H1).

(H4): The functions $f_i$ are differentiable as functions of $(z_1, z_2, z_3)$, and the derivatives $\partial f_i/\partial z_1^j$, $\partial f_i/\partial z_2^j$, $\partial f_i/\partial z_3^j$, $i, j = 1,\dots,n$, are continuous in $(z_1, z_2, z_3)$ for fixed $(x, y, v)$. Clearly they are measurable in $(x, y)$.

(H5): There are functions $K_{jr}(x, y, u)$, $j = 1, 2, 3$, $r = 1,\dots,n$, such that for all $(x, y, u) \in G \times U$ and $(z_1, z_2, z_3) \in E_{3n}$ we have

$$|\partial f_i/\partial z_j^r(x, y, z_1, z_2, z_3, u)| \le K_{jr}(x, y, u), \tag{3.1}$$

and such that $K_{jr}(x, y, v(x, y)) \in L_\infty(G)$ for $v \in \Gamma$, $j = 1, 2, 3$, $i, r = 1,\dots,n$.

We state below an existence theorem (3.i) for the solution $z$ of the Darboux problem (1.1-2) and a theorem (3.ii) concerning its behavior. We refer to [6b] (or [6a]) for proofs of these and other statements. Theorem (3.i) provides norm estimates on the solution $z$ as an element of $W^1_p(G)$, along with pointwise estimates on $z$, $z_x$, and $z_y$. Theorem (3.ii) shows the dependence of the solution on the data.

(3.i) Theorem: Let $v \in \Gamma$ be given. If H1-H5 hold, then there exists a unique $z \in (W^1_p(G))^n$ (with $p$ as in H1), continuous on $G$, satisfying (1.2), for which the generalized partial derivatives $z_x$, $z_y$, $z_{xy}$ exist and satisfy (1.1) a.e. in $G$. Furthermore, there are constants $B_1$ and $B_2$, depending only on $h$, $k$, $p$ and on $K = \max(\|K_{jr}(x, y, v(x, y))\|_\infty;\ j = 1, 2, 3;\ r = 1,\dots,n)$, such that

$$\|z\|_{W^1_p(G)} \le B_1\bigl[k^{1/p}(\|\varphi_x\|_p + 2^{-1}\|\varphi\|_p) + h^{1/p}(\|\psi_y\|_p + 2^{-1}\|\psi\|_p) + (h + k)\,\|s_0(x, y)\|_p\bigr], \tag{3.2}$$

$$|z(x, y)| \le 2\bigl[\|\varphi\|_C + \|\psi\|_C\bigr] + B_1 B_2 (h + k)\, e^{K(h+k)}\Bigl(b_1 + \iint_G s_0(\alpha, \beta)\,d\alpha\,d\beta\Bigr), \tag{3.3}$$

$$|z_x(x, y)| \le Q_1(x) + B_1 B_2\, b_1, \qquad |z_y(x, y)| \le Q_2(y) + B_1 B_2\, b_1, \tag{3.4}$$

where

$$b_1 = \|\varphi_x\|_p\, h^{1/q} + \|\psi_y\|_p\, k^{1/q} + Khk\bigl(\|\varphi\|_C + \|\psi\|_C\bigr) + \iint_G s_0(\alpha, \beta)\,d\alpha\,d\beta \qquad (1/p + 1/q = 1),$$

and

$$Q_1(x) = e^{Kk}\Bigl[|\varphi_x(x)| + Kk\,|\varphi(x)| + \int_b^{b+k} s_0(x, \beta)\,d\beta\Bigr], \qquad Q_2(y) = e^{Kh}\Bigl[|\psi_y(y)| + Kh\,|\psi(y)| + \int_a^{a+h} s_0(\alpha, y)\,d\alpha\Bigr].$$

The existence of the solution and the norm estimate (3.2) follow from [6b, Appendix, Theorem 5, A.2], while the pointwise estimates are a consequence of the absolute continuity (in the sense of Tonelli) of the solution $z$ and a repeated application of Gronwall's lemma [see 6b, Appendix, (A.10), or 6a].

(3.ii) Theorem: For $i = 1, 2$, let $z_i$ denote the solution of (1.1-2) corresponding to the data $(\varphi_i, \psi_i)$ satisfying (H1) and control function $v_i$ in $\Gamma$. Let $z = z_1 - z_2$, $\varphi = \varphi_1 - \varphi_2$, $\psi = \psi_1 - \psi_2$, and $s(x, y) = |f(x, y, z_1, z_{1x}, z_{1y}, v_1) - f(x, y, z_1, z_{1x}, z_{1y}, v_2)|$. With this notation, the above inequalities (3.2), (3.3), (3.4) again hold with $s_0$ replaced by $s$ and no further changes [see 6b, Appendix, A.10; or 6a].

Furthermore, if $\varphi_1 = \varphi_2$, $\psi_1 = \psi_2$, and $v_1 = v_2$ outside a square $S_\delta = [\bar x - \delta, \bar x + \delta] \times [\bar y - \delta, \bar y + \delta] \subset G$, then the pointwise estimates become

$$|z(x, y)| = |z_1(x, y) - z_2(x, y)| \le B_1 \iint_{S_\delta} s(\alpha, \beta)\,d\alpha\,d\beta,$$
$$|z_x(x, y)| \le e^{K\delta}\int_{\bar y-\delta}^{\bar y+\delta} s(x, \beta)\,d\beta + B_2 \iint_{S_\delta} s(\alpha, \beta)\,d\alpha\,d\beta,$$
$$|z_y(x, y)| \le e^{K\delta}\int_{\bar x-\delta}^{\bar x+\delta} s(\alpha, y)\,d\alpha + B_2 \iint_{S_\delta} s(\alpha, \beta)\,d\alpha\,d\beta, \tag{3.5}$$

where $B_1$ and $B_2$ depend only on $h$, $k$, and on $K = \max(\|K_{jr}(x, y, v_i(x, y))\|_\infty;\ j = 1, 2, 3;\ r = 1,\dots,n;\ i = 1, 2)$. We shall need these particular pointwise estimates in the sequel.

Remark 1. In view of the uniqueness of the solution $z$ of the Darboux problem (1.1-2) for any given element $v \in \Gamma$, we shall denote the functional $I[z, v]$ of (1.4) simply by $I[v]$, or $I: \Gamma \to E_1$.

Remark 2. By introducing the notation $z_1 = z$, $z_2 = z_x$, $z_3 = z_y$, the Darboux problem (1.1), (1.2) can be written in the equivalent Dieudonné-Rashevsky form

$$z_{1x} = z_2; \quad z_{3x} = f(x, y, z_1, z_2, z_3, v); \quad z_{1y} = z_3; \quad z_{2y} = f(x, y, z_1, z_2, z_3, v), \tag{3.6}$$

with boundary conditions

$$z_1(x, b) = \varphi(x); \quad z_2(x, b) = \varphi_x(x); \quad z_1(a, y) = \psi(y); \quad z_3(a, y) = \psi_y(y). \tag{3.7}$$

It is to be noted that even though the system (3.6) seems overdetermined (four equations in three unknowns), it is not actually so, since the second and the fourth equations are equivalent.
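The existence statements above rest on rewriting the Darboux problem as a Volterra-type integral equation and iterating (see [6b]). The following numerical sketch is not part of the paper: it is a minimal Picard iteration on a grid for a scalar equation, based on the standard integral form $z(x,y) = \varphi(x) + \psi(y) - \varphi(a) + \int_a^x\int_b^y f\,d\beta\,d\alpha$. The function names and the test data are illustrative assumptions only.

```python
# Illustrative sketch (not from the paper): Picard iteration for the Darboux problem
# z_xy = f(x, y, z, z_x, z_y, v), z(x, b) = phi(x), z(a, y) = psi(y), scalar case,
# using the integral form z = phi(x) + psi(y) - phi(a) + double integral of f.
import numpy as np

def solve_darboux(f, phi, psi, v, a=0.0, b=0.0, h=1.0, k=1.0, N=101, iters=30):
    x = np.linspace(a, a + h, N)
    y = np.linspace(b, b + k, N)
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")
    V = v(X, Y)
    z = phi(X) + psi(Y) - phi(a)          # zeroth Picard iterate
    for _ in range(iters):
        zx = np.gradient(z, dx, axis=0)   # generalized derivatives replaced by
        zy = np.gradient(z, dy, axis=1)   # finite differences on the grid
        F = f(X, Y, z, zx, zy, V)
        I = np.cumsum(np.cumsum(F, axis=0), axis=1) * dx * dy   # rectangle-rule double integral
        z = phi(X) + psi(Y) - phi(a) + I
    return x, y, z

if __name__ == "__main__":
    # hypothetical test data: f = -2z - 2z_x - z_y - v, phi = psi = 0, v = 1 (cf. example 1 of no. 7)
    x, y, z = solve_darboux(lambda X, Y, z, zx, zy, V: -2*z - 2*zx - zy - V,
                            phi=lambda s: 0.0*s, psi=lambda s: 0.0*s,
                            v=lambda X, Y: np.ones_like(X))
    print("z(a+h, b+k) ≈", z[-1, -1])
```

The contraction argument of [6b] suggests that such iterates converge for Lipschitzian $f$; the grid size and iteration count above are arbitrary choices.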

4. THE CONJUGATE PROBLEM

Cesari [2] has proved Pontryagin-type necessary conditions for problems of optimization with state equations in the Dieudonné-Rashevsky form $z_{ix} = f_i(x, y, z, v)$, $z_{iy} = g_i(x, y, z, v)$, $z = (z_1,\dots,z_n)$, $i = 1, 2,\dots,n$, taking as Hamiltonian the expression $H = \lambda_1 f_1 + \dots + \lambda_n f_n + \mu_1 g_1 + \dots + \mu_n g_n$. By assuming the $z_i$, $\lambda_i$, $\mu_i$ to be in suitable Sobolev spaces, Cesari [2] showed that the multipliers $\lambda_i$, $\mu_i$ should satisfy the "conjugate problem," that is, partial differential equations of the form

$$\lambda_{ix} + \mu_{iy} = -\partial H/\partial z_i, \qquad i = 1,\dots,n,$$

along with boundary conditions which are complementary to those for $z_1,\dots,z_n$ and related to the cost functional under consideration.

In view of Remark 2 of no. 3, we use here the same Hamiltonian, with the remark that since $z_{2x}$ and $z_{3y}$ do not appear in (3.6) we take $\lambda_2 = \mu_3 = 0$, and the Hamiltonian reduces to

$$H = \lambda_1\cdot z_2 + \lambda_3\cdot f + \mu_1\cdot z_3 + \mu_2\cdot f, \tag{4.1}$$

where $\lambda_1 = (\lambda_1^1,\dots,\lambda_1^n),\ \lambda_3 = (\lambda_3^1,\dots,\lambda_3^n),\ \mu_1 = (\mu_1^1,\dots,\mu_1^n),\ \mu_2 = (\mu_2^1,\dots,\mu_2^n)$, and the products are inner products in $E_n$. By taking the cost functional $I$ of (1.4) in the equivalent form (Cesari [2])

$$I = 2^{-1}\int_a^{a+h} A\cdot z_2(x, b+k)\,dx + 2^{-1}\int_b^{b+k} A\cdot z_3(a+h, y)\,dy, \tag{4.2}$$

where $A = (A_1,\dots,A_n)$ (the two functionals differ only by an additive constant determined by the data $\varphi$, $\psi$), the conjugate problem becomes

$$\lambda^i_{1x} + \mu^i_{1y} = -\sum_{j=1}^n (\lambda^j_3 + \mu^j_2)\,\partial f_j/\partial z^i_1,$$
$$\mu^i_{2y} = -\lambda^i_1 - \sum_{j=1}^n (\lambda^j_3 + \mu^j_2)\,\partial f_j/\partial z^i_2,$$
$$\lambda^i_{3x} = -\mu^i_1 - \sum_{j=1}^n (\lambda^j_3 + \mu^j_2)\,\partial f_j/\partial z^i_3, \qquad i = 1, 2,\dots,n, \tag{4.3}$$

$$\lambda_1(a+h, y) = \mu_1(x, b+k) = 0; \qquad \mu_2(x, b+k) = \lambda_3(a+h, y) = A/2. \tag{4.4}$$

In the present paper we show first that the conjugate problem (4.3), (4.4) is equivalent to a system of two-dimensional Volterra-type linear integral equations of the type we have studied in a previous paper [6b]. The results obtained there will enable us to prove, in this paper, the existence of multipliers $\lambda_i$, $\mu_i$ as solutions of the conjugate problem in a suitable class of functions in $L_\infty(G)$ (not a Sobolev space).

In order to obtain the equivalent system of integral equations, we treat $\lambda_{1x}$ as arbitrary and formally integrate both sides of (4.3), in conjunction with the boundary conditions (4.4), as follows:

$$\lambda^i_1(x, y) = -\int_x^{a+h} \lambda^i_{1x}(\alpha, y)\,d\alpha,$$
$$\mu^i_1(x, y) = \int_y^{b+k} \bigl[\lambda^i_{1x} + w\cdot\partial f/\partial z^i_1\bigr](x, \beta)\,d\beta,$$
$$\lambda^i_3(x, y) = 2^{-1}A_i + \int_x^{a+h}\!\!\int_y^{b+k} \bigl[\lambda^i_{1x} + w\cdot\partial f/\partial z^i_1\bigr](\alpha, \beta)\,d\beta\,d\alpha + \int_x^{a+h} \bigl(w\cdot\partial f/\partial z^i_3\bigr)(\alpha, y)\,d\alpha,$$
$$\mu^i_2(x, y) = 2^{-1}A_i - \int_y^{b+k}\!\!\int_x^{a+h} \lambda^i_{1x}(\alpha, \beta)\,d\alpha\,d\beta + \int_y^{b+k} \bigl(w\cdot\partial f/\partial z^i_2\bigr)(x, \beta)\,d\beta, \tag{4.5}$$

where $w$ stands for $\lambda_3 + \mu_2$, that is, $w^j = \lambda^j_3 + \mu^j_2$, and $w\cdot\partial f/\partial z^i_r$ stands for $\sum_{j=1}^n w^j\,\partial f_j/\partial z^i_r$. It is clear that $w$ satisfies the integral equation $w = Tw$, where

$$(Tw)^i(x, y) = A_i + \int_x^{a+h}\!\!\int_y^{b+k} \bigl(w\cdot\partial f/\partial z^i_1\bigr)(\alpha, \beta)\,d\beta\,d\alpha + \int_y^{b+k} \bigl(w\cdot\partial f/\partial z^i_2\bigr)(x, \beta)\,d\beta + \int_x^{a+h} \bigl(w\cdot\partial f/\partial z^i_3\bigr)(\alpha, y)\,d\alpha. \tag{4.6}$$

(4.i) Theorem: If H2, H4, H5 hold, if $v \in \Gamma$ and $z$ is the corresponding solution of the Darboux problem (1.1-2), then there exist infinitely many sets of solutions $\lambda_1, \lambda_3, \mu_1, \mu_2$ in $(L_\infty(G))^n$, with $\lambda_1(x, y)$, $\lambda_3(x, y)$, $(x, y) \in G$, AC with respect to $x$ for almost all $y$, and $\mu_1(x, y)$, $\mu_2(x, y)$, $(x, y) \in G$, AC with respect to $y$ for almost all $x$, satisfying the boundary conditions (4.4) and having generalized partial derivatives $\lambda_{1x}$, $\lambda_{3x}$, $\mu_{1y}$, $\mu_{2y}$.

Proof: As a consequence of Theorem 3, no. 5, of [6b], it is seen that there is a unique $w \in [L_\infty(G)]^n$ with $w = Tw$, where $T$ is defined by (4.6). The conclusion of the theorem follows now by defining the functions $\lambda_1, \lambda_3, \mu_1, \mu_2$ as in (4.5), in terms of the unique solution $w$ of (4.6) and an arbitrarily chosen $[L_\infty(G)]^n$-function $\lambda_{1x}$.
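The proof obtains $w = \lambda_3 + \mu_2$ as the fixed point of the operator $T$ of (4.6). The following sketch, which is not part of the paper, iterates a discretized $T$ in the scalar case $n = 1$; the routine names are our own, and the test data are borrowed from example 3 of no. 7 below, where the fixed point can be computed in closed form.

```python
# Illustrative sketch (not from the paper): fixed-point iteration w_{m+1} = T w_m for the
# operator T of (4.6), scalar case n = 1, on a uniform grid over G = [a, a+h] x [b, b+k].
# df/dz1, df/dz2, df/dz3 are supplied as arrays evaluated along a given trajectory.
import numpy as np

def tail_cumsum(F, axis):
    """Rectangle-rule tail sums: (tail_cumsum(F, 0))[i, j] ~ sum over i' >= i of F[i', j]."""
    return np.flip(np.cumsum(np.flip(F, axis=axis), axis=axis), axis=axis)

def solve_conjugate_w(A, fz1, fz2, fz3, dx, dy, iters=50):
    w = np.full_like(fz1, A)                                        # start from the constant A
    for _ in range(iters):
        term1 = tail_cumsum(tail_cumsum(w * fz1, 0), 1) * dx * dy   # integral over [x,a+h]x[y,b+k]
        term2 = tail_cumsum(w * fz2, 1) * dy                        # integral over [y, b+k], x fixed
        term3 = tail_cumsum(w * fz3, 0) * dx                        # integral over [x, a+h], y fixed
        w = A + term1 + term2 + term3
    return w

if __name__ == "__main__":
    # hypothetical data from example 3 of no. 7: f = (1 + z_x)v with v = -1, so df/dz1 = df/dz3 = 0
    # and df/dz2 = -1; then w = lambda_3 + mu_2 satisfies w_y = w, w(x, b+k) = A, i.e. w = exp(y-1).
    N = 201
    dx = dy = 1.0 / (N - 1)
    zeros, fz2 = np.zeros((N, N)), -np.ones((N, N))
    w = solve_conjugate_w(A=1.0, fz1=zeros, fz2=fz2, fz3=zeros, dx=dx, dy=dy)
    y = np.linspace(0.0, 1.0, N)
    print("max deviation from exp(y-1):", np.max(np.abs(w - np.exp(y - 1.0)[None, :])))
```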

Remarks: (a) Since $w = \lambda_3 + \mu_2$ is uniquely determined as the fixed point of $T$, for different choices of $\lambda_{1x}$ we still get the same $\lambda_3 + \mu_2$.

(b) The solutions of (4.5) need not belong to a Sobolev class since, for example, $\partial f/\partial z_3$ and hence $\lambda_3$ as given in (4.5) need not possess derivatives with respect to $y$. (See example 3, no. 7.)

(c) If $f$ does not depend on $(z_1, z_2, z_3)$, then by choosing $\lambda_{1x} = 0$ it is seen that a possible set of multipliers is given by the constant functions $\lambda_1 = \mu_1 = 0$, $\lambda_3 = \mu_2 = A/2$. If the $\partial f/\partial z_r$, $r = 1, 2, 3$, are continuous in $(x, y)$, as is the case when they depend on $z$ only, then the multipliers can be chosen to be continuous. Finally, if $f$ is linear in $(z_1, z_2, z_3)$ with coefficients analytic in $(x, y)$, then the multipliers can be chosen to be analytic in $(x, y)$ (see [6b]).

5. THE INCREMENT FORMULA AND AN ERROR ESTIMATE

Let $v_0$ and $v_\epsilon$ be any two elements of $\Gamma$, the set of control functions. Let $z_0$ and $z_\epsilon$ be the solutions of (1.1-2) corresponding to $v_0$ and $v_\epsilon$, respectively. Let $(\lambda, \mu) = (\lambda_1, \lambda_3, \mu_1, \mu_2)$ and $(\lambda_\epsilon, \mu_\epsilon) = (\lambda_{1\epsilon}, \lambda_{3\epsilon}, \mu_{1\epsilon}, \mu_{2\epsilon})$ be solutions of (4.3-4) corresponding to $(v_0, z_0)$ and $(v_\epsilon, z_\epsilon)$ respectively, $(\lambda_1, \lambda_3, \mu_1, \mu_2) \in [L_\infty(G)]^{4n}$ as in (4.i). In the sequel, when there is no confusion, the symbol $H(u)$ stands for the expression

$$H(u) = H(x, y, z(x, y), z_x(x, y), z_y(x, y), u, \lambda(x, y), \mu(x, y)),$$

where $z$, $\lambda$, $\mu$ are related to $v_0$ and $u$ denotes a point of $U$. Also, for the sake of simplicity, we shall denote by $\tilde z$ the expression $\tilde z = (z(x, y), z_x(x, y), z_y(x, y)) = (z_1(x, y), z_2(x, y), z_3(x, y))$. In any case, we have $z_1 = z$, $z_2 = z_x$, $z_3 = z_y$.

In order to obtain a necessary condition of the Pontryagin type, we express the increment $I[v_\epsilon] - I[v_0]$ in terms of the integral of $H(v_\epsilon(x, y))$ over $G$. To this end, let us observe that, by simple calculations involving integration by parts of the expression

$$\int_a^{a+h}\!\!\int_b^{b+k} \bigl[\lambda_1(z_{1\epsilon x} - z_{1x}) + \mu_1(z_{1\epsilon y} - z_{1y}) + \lambda_3(z_{3\epsilon x} - z_{3x}) + \mu_2(z_{2\epsilon y} - z_{2y})\bigr]\,dx\,dy$$

and the boundary conditions (1.2) and (4.4), we obtain

$$I[v_\epsilon] - I[v_0] = \eta + \iint_G \bigl[H(v_\epsilon(x, y)) - H(v_0(x, y))\bigr]\,dx\,dy,$$

where

$$\eta = \iint_G \sum_{j=1}^{3} \bigl[\partial H/\partial z_j(x, y, z^*, v_\epsilon, \lambda, \mu) - \partial H/\partial z_j(x, y, \tilde z, v_0, \lambda, \mu)\bigr]\,(z_{j\epsilon} - z_j)\,dx\,dy$$

and $z^*_j(x, y) = z_j(x, y) + \theta(x, y)\bigl[z_{j\epsilon}(x, y) - z_j(x, y)\bigr]$, $0 \le \theta(x, y) \le 1$. For details we refer to Cesari [2], or the author [6a].

Error Estimate: It is clear that if the $f_i$ (in the state equations (1.1)) are linear, i.e., of the form $Az + Bz_x + Cz_y + D(x, y, u)$, where $A$, $B$, $C$ are matrix-valued functions on $G$, then $\eta$ reduces to zero. For the nonlinear case, we shall now obtain an estimate on $\eta$, and for this purpose we need the following hypotheses:

(H6): There exists a function $M(x, y, u)$, $(x, y, u) \in G \times U$, with $M(x, y, v(x, y)) \in L_4(G)$ for any $v \in \Gamma$, such that, for $(x, y, z_1, z_2, z_3, u) \in G \times E_{3n} \times U$ and $1 \le p < +\infty$, we have

$$|f(x, y, z_1, z_2, z_3, u)| \le M(x, y, u) + B_3\bigl[|z_1|^p + |z_2|^p + |z_3|^p\bigr]^{1/p}$$

for some constant $B_3 \ge 0$. For $p = +\infty$ we require $|f| \le M(x, y, u) + \phi(|z_1| + |z_2| + |z_3|)$ for some function $\phi(\zeta) \ge 0$, $0 \le \zeta < +\infty$, with $\phi(\zeta) \le K_0$ for some constant $K_0$.

(H7): There exist functions $K'_{ij}(x, y, u)$, $i = 1, 2, 3$, $j = 1,\dots,n$, such that $K'_{ij}(x, y, v(x, y)) \in L_\infty(G)$ for any $v \in \Gamma$, and such that for $(x, y, u) \in G \times U$ and $(z_1, z_2, z_3), (\bar z_1, \bar z_2, \bar z_3) \in E_{3n}$ we have

$$\bigl|\partial f_j/\partial z_i(x, y, z_1, z_2, z_3, u) - \partial f_j/\partial z_i(x, y, \bar z_1, \bar z_2, \bar z_3, u)\bigr| \le K'_{ij}(x, y, u)\sum_{s=1}^{3}|z_s - \bar z_s|. \tag{5.2}$$

Remark 1: Let, as before, $s(x, y)$ denote $|f(x, y, \tilde z(x, y), v_1(x, y)) - f(x, y, \tilde z(x, y), v_2(x, y))|$, where $\tilde z = (z, z_x, z_y)$ and $z$ is the solution of (1.1), (1.2) corresponding to $v_1$, and $v_1, v_2 \in \Gamma$. Then it is seen from H3, H4 and H5 that $s \in L_p(G)$ ($p$ as in H1). Indeed, $|f(x, y, \tilde z(x, y), v_1(x, y))| \le K|\tilde z(x, y)| + |f(x, y, 0, v_1(x, y))|$, where $K = \max(\|K_{jr}(x, y, v_1(x, y))\|_\infty;\ j = 1, 2, 3;\ r = 1,\dots,n)$ (see H5). Since $|\tilde z| \in L_p$ and $|f(x, y, 0, v_1(x, y))| \in L_p$ (by H3), it follows that $s \in L_p(G)$. The assumption (H6) is made only to guarantee that, in addition, $s \in L_4(G)$. The same conclusion can be reached under the following hypothesis:

(H6'): There is a function $M(x, y)$ defined on $G$ such that (i) $M(x, y)\,v(x, y) \in L_4(G)$ for every $v \in \Gamma$, and (ii) for $(x, y, z_1, z_2, z_3) \in G \times E_{3n}$

and $u_1, u_2 \in U$, we have $|f(x, y, z_1, z_2, z_3, u_1) - f(x, y, z_1, z_2, z_3, u_2)| \le M(x, y)\,|u_1 - u_2|$. Indeed, then

$$s(x, y) = |f(x, y, \tilde z(x, y), v_1(x, y)) - f(x, y, \tilde z(x, y), v_2(x, y))| \le M(x, y)\,|v_1(x, y) - v_2(x, y)| \le M|v_1| + M|v_2|,$$

and $s \in L_4(G)$. Further, if (H6') and (H7) hold with $M(x, y) \in L_\infty(G)$ and $\Gamma \subset [L_\infty(G)]^m$, then statement (3.5) in theorem (3.ii) can be replaced by

$$|z(x, y)| + |z_x(x, y)| + |z_y(x, y)| \le B_4\,\|v_1 - v_2\|_\infty, \quad (x, y) \in G, \tag{5.3}$$

where the constant $B_4$ depends only on $\|M\|_\infty$, $h$, $k$ and on all $K_{ij}$, $K'_{ij}$.

We shall denote below by $u$ an arbitrary fixed point $u \in U$. Let $v_0$ be an element of $\Gamma$, let $z(x, y)$, $(x, y) \in G$, be the corresponding solution of the Darboux problem (1.1-2), and let $\tilde z$ denote the $3n$-vector function $\tilde z(x, y) = (z, z_x, z_y) = (z_1, z_2, z_3)$ as above. Let $(\bar x, \bar y)$ be an interior point of $G$. Let $\delta_0 > 0$ be the minimum distance of $(\bar x, \bar y)$ from the boundary of $G$. Let $K/n$ denote the maximum of the $12n$ numbers $\|K_{ij}(x, y, u)\|_\infty$, $\|K'_{ij}(x, y, u)\|_\infty$, $\|K_{ij}(x, y, v_0(x, y))\|_\infty$, $\|K'_{ij}(x, y, v_0(x, y))\|_\infty$, $i = 1, 2, 3$, $j = 1,\dots,n$, where the $K_{ij}$ are as in (H5) and the $K'_{ij}$ as in (H7). By (H6) the function $s(x, y; u) = |f(x, y, \tilde z(x, y), u) - f(x, y, \tilde z(x, y), v_0(x, y))|$ belongs to $L_4(G)$. Thus, given $\zeta > 0$ there is a $\bar\delta > 0$ such that $\iint_C s^2(x, y; u)\,dx\,dy < \zeta^2$ and $\iint_C s^4(x, y; u)\,dx\,dy < \zeta^2$ for every measurable set $C \subset G$ of measure $\le 4\bar\delta^2$. We may well assume $0 < \bar\delta \le \delta_0$. Let $S_\delta$ denote the square $[\bar x - \delta \le x \le \bar x + \delta,\ \bar y - \delta \le y \le \bar y + \delta]$, $0 < \delta \le \bar\delta$, and let $C$ be any closed subset of $S_\delta$. Let $v_\epsilon$ be the function defined by $v_\epsilon = v_0$ in $G - C$ and $v_\epsilon = u$ in $C$. Then the

function $s(x, y) = |f(x, y, \tilde z(x, y), v_\epsilon(x, y)) - f(x, y, \tilde z(x, y), v_0(x, y))|$ is zero outside $C$ and equals $s(x, y; u)$ in $C$. Note that $v_\epsilon \in \Gamma$; we denote by $z_\epsilon$ the solution of problem (1.1-2) relative to $v_\epsilon$, and as usual we write $\tilde z_\epsilon = (z_\epsilon, z_{\epsilon x}, z_{\epsilon y}) = (z_{1\epsilon}, z_{2\epsilon}, z_{3\epsilon})$. Inequalities (3.5) yield in this case

$$|z_{j\epsilon}(x, y) - z_j(x, y)| \le B\Bigl[\iint_{S_\delta} s(\alpha, \beta)\,d\alpha\,d\beta + \int_{\bar y-\delta}^{\bar y+\delta} s(x, \beta)\,d\beta + \int_{\bar x-\delta}^{\bar x+\delta} s(\alpha, y)\,d\alpha\Bigr] \tag{5.4}$$

for all $(x, y) \in G$, $j = 1, 2, 3$, independently of the particular closed set $C \subset S_\delta$, and where $B$ depends only on $h$, $k$ and $K$. This inequality and the fact that $s \in L_4(G)$ under H6 (or H6') can now be used to estimate $\eta$. The integrand in the expression for $\eta$ is bounded by

$$\sum_{j=1}^3\sum_{i=1}^n |\lambda^i_3 + \mu^i_2|\Bigl[\bigl|\partial f_i/\partial z_j(x, y, z^*, v_\epsilon) - \partial f_i/\partial z_j(x, y, \tilde z, v_\epsilon)\bigr| + \bigl|\partial f_i/\partial z_j(x, y, \tilde z, v_\epsilon) - \partial f_i/\partial z_j(x, y, \tilde z, v_0)\bigr|\Bigr]\,|z_{j\epsilon} - z_j|,$$

which yields, using (H7),

$$|\eta| \le \|\lambda_3 + \mu_2\|_\infty\, K \iint_G \sum_{i,j=1}^3 |z_{i\epsilon} - z_i|\,|z_{j\epsilon} - z_j|\,dx\,dy + \|\lambda_3 + \mu_2\|_\infty \iint_G \sum_{j=1}^3 |z_{j\epsilon} - z_j|\sum_{i=1}^n\bigl|\partial f_i/\partial z_j(x, y, \tilde z, v_\epsilon) - \partial f_i/\partial z_j(x, y, \tilde z, v_0)\bigr|\,dx\,dy. \tag{5.5}$$

But, since the integrand in the last term above is zero outside $S_\delta$ and, by (H5), is bounded there by $2K\sum_j|z_{j\epsilon} - z_j|$, we have $|\eta| \le \|\lambda_3 + \mu_2\|_\infty \cdot K \cdot (\eta_1 + 2\eta_2)$, where

$$\eta_1 = \iint_G \sum_{i,j=1}^3 |z_{i\epsilon} - z_i|\,|z_{j\epsilon} - z_j|\,(x, y)\,dx\,dy \qquad\text{and}\qquad \eta_2 = \iint_{S_\delta} \sum_{j=1}^3 |z_{j\epsilon} - z_j|\,(x, y)\,dx\,dy.$$

To obtain an estimate on $\eta_1$, we observe that $\sum_{i,j=1}^3 |z_{i\epsilon} - z_i|\,|z_{j\epsilon} - z_j| = \bigl(\sum_{i=1}^3 |z_{i\epsilon} - z_i|\bigr)^2$ and then, by (5.4), we get

$$\eta_1 \le \iint_G \Bigl(\sum_{i=1}^3 |z_{i\epsilon} - z_i|(x, y)\Bigr)^2 dx\,dy \le 9B^2 \iint_G \Bigl[\iint_{S_\delta} s(\alpha, \beta)\,d\alpha\,d\beta + \int_{\bar y-\delta}^{\bar y+\delta} s(x, \beta)\,d\beta + \int_{\bar x-\delta}^{\bar x+\delta} s(\alpha, y)\,d\alpha\Bigr]^2 dx\,dy.$$

Using Hölder's inequality and the fact that $s \in L_4(G)$ under (H6), it is seen that

$$|\eta_1| \le M_2\,\delta^2\Bigl[\iint_{S_\delta} s^2(\alpha, \beta)\,d\alpha\,d\beta + \Bigl(\iint_{S_\delta} s^4(\alpha, \beta)\,d\alpha\,d\beta\Bigr)^{1/2}\Bigr] \tag{5.6}$$

for some positive constant $M_2$. Similarly, using (5.4) and Hölder's inequality, we get

$$|\eta_2| \le \iint_{S_\delta} 3B\Bigl[\iint_{S_\delta} s(\alpha, \beta)\,d\alpha\,d\beta + \int_{\bar y-\delta}^{\bar y+\delta} s(x, \beta)\,d\beta + \int_{\bar x-\delta}^{\bar x+\delta} s(\alpha, y)\,d\alpha\Bigr]\,dx\,dy$$
$$\le 3B(4\delta^2 + 2\delta + 2\delta)\iint_{S_\delta} s(\alpha, \beta)\,d\alpha\,d\beta \le 6B(4\delta + 4)\,\delta^2\Bigl(\iint_{S_\delta} s^2(\alpha, \beta)\,d\alpha\,d\beta\Bigr)^{1/2}. \tag{5.7}$$

Using (5.6) and (5.7), the inequality (5.5) can now be written as

$$|\eta| \le M\,\delta^2\Bigl[\iint_{S_\delta} s^2(\alpha, \beta)\,d\alpha\,d\beta + \Bigl(\iint_{S_\delta} s^4(\alpha, \beta)\,d\alpha\,d\beta\Bigr)^{1/2} + \Bigl(\iint_{S_\delta} s^2(\alpha, \beta)\,d\alpha\,d\beta\Bigr)^{1/2}\Bigr], \tag{5.8}$$

where $M$ is a constant depending only on $K$, $B$, and $\|\lambda_3 + \mu_2\|_\infty$.

Given $\epsilon > 0$, let us now choose a positive number $\zeta$ with $0 < \zeta^2 \le \zeta \le \epsilon/6M$. Let $\delta > 0$ be now chosen as before with $\iint_{S_\delta} s^2 < \zeta^2$ and $\iint_{S_\delta} s^4 < \zeta^2$. Then

$$|\eta| \le M\,\delta^2(\zeta^2 + \zeta + \zeta) \le M\,\delta^2\cdot 3\,(\epsilon/6M) = \epsilon\,\delta^2/2.$$

In conclusion, if $v_0 \in \Gamma$, $u \in U$, and $\epsilon > 0$ are given, then there exists a $\delta > 0$ such that, for any function $v_\epsilon$ with $v_\epsilon = v_0$ outside $S_\delta$ and $v_\epsilon = u$ in a closed subset of $S_\delta$, we have

$$I(v_\epsilon) - I(v_0) = \eta + \iint_G \bigl[H(v_\epsilon(x, y)) - H(v_0(x, y))\bigr]\,dx\,dy \tag{5.9}$$

with $|\eta| \le \epsilon\,\delta^2/2$.

6. A NECESSARY CONDITION FOR OPTIMALITY

In this section we shall state and prove a necessary condition for optimality, analogous to the one-dimensional Pontryagin necessary condition. We need the concept of "minimum condition" for the class of problems under consideration, and this is made precise in the following definition.

Definition 6.1: Let $v_0 \in \Gamma$. Then $v_0$ is said to satisfy the "minimum condition" if there is a set $B \subset G$ with meas $B$ = meas $G$ such that for $(x, y) \in B$ we have $H(v_0(x, y)) \le H(u)$ for all $u \in U$. We recall that $H(u)$ stands for $H(x, y, z(x, y), z_x(x, y), z_y(x, y), u, \lambda(x, y), \mu(x, y))$, where $z$ and $(\lambda, \mu) = (\lambda_1, \lambda_3, \mu_1, \mu_2)$ satisfy (1.1), (1.2) and (4.3), (4.4) respectively.
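For a concrete reading of Definition 6.1, the following sketch, which is not part of the paper, checks the minimum condition on a grid by comparing $H(v_0(x, y))$ with $H(u)$ over a discretized $U$. The helper name and the data (the scalar trajectory and multipliers of example 3 of no. 7 below) are illustrative assumptions.

```python
# Illustrative grid check of the minimum condition of Definition 6.1 (not from the paper),
# scalar case: H(u) = lambda1*z2 + mu1*z3 + (lambda3 + mu2)*f(..., u), U discretized.
import numpy as np

def min_condition_violations(H_of_u, v0, U_grid, tol=1e-9):
    """H_of_u(u) returns the array H(x, y, u) on the grid; v0 is the array of control values."""
    H_at_v0 = H_of_u(v0)
    best = np.inf * np.ones_like(H_at_v0)
    for u in U_grid:
        best = np.minimum(best, H_of_u(np.full_like(v0, u)))
    # fraction of grid points where H(v0) exceeds min over U of H(u) by more than tol
    return np.mean(H_at_v0 > best + tol)

if __name__ == "__main__":
    # hypothetical data from example 3 of no. 7: f = (1 + z_x)u, lambda1 = mu1 = 0,
    # lambda3 + mu2 = exp(y-1), and z_x = -1 + exp(-y) along the optimal control v0 = -1.
    X, Y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101), indexing="ij")
    z2 = -1.0 + np.exp(-Y)
    w = np.exp(Y - 1.0)
    H_of_u = lambda u: w * (1.0 + z2) * u
    v0 = -np.ones_like(X)
    print(min_condition_violations(H_of_u, v0, U_grid=np.linspace(-1, 1, 21)))   # expect 0.0
```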

The following hypothesis is needed in the proof of the necessary condition:

(H8): The functions $f_i = f_i(x, y, z_1, z_2, z_3, u)$, $i = 1,\dots,n$, are continuous on $G \times E_{3n} \times U$.

Remark: In the proof of the necessary condition, the inequality (3.1) of (H5) is needed only for $(z_1, z_2, z_3) = (z(x, y), z_x(x, y), z_y(x, y))$, where $z$ is an optimal trajectory. Further, the hypothesis (H6) can be replaced by (H6').

(6.i) Theorem (Pontryagin-type necessary condition): Let $v_0 \in \Gamma$ be optimal for $I$; i.e., $I(v_0) \le I(v)$ for all $v \in \Gamma$. Let conditions H1-H8 hold. Then there exists a unique function $z \in [W^1_p(G)]^n$ satisfying the Darboux problem (1.1-2), and infinitely many sets of $4n$ multipliers $(\lambda_1, \lambda_3, \mu_1, \mu_2) \in (L_\infty(G))^{4n}$ satisfying (4.3-4) with $v$ replaced by $v_0$. With this $z$ and any of these sets of multipliers, the optimal control $v_0$ necessarily satisfies the minimum condition.

Proof: The existence of $z$ and of $\lambda_1, \lambda_3, \mu_1, \mu_2$ under the hypotheses H1-H5 has been shown in no. 3 and no. 4. Before proving the necessary condition, let us note that throughout this proof $z_1 = z$, $z_2 = z_x$, $z_3 = z_y$, $(\lambda, \mu) = (\lambda_1, \lambda_3, \mu_1, \mu_2)$ have the same meaning as before and correspond to $v_0$. Further, as in no. 5, $H(u) = H(x, y, u) = H(x, y, z(x, y), z_x(x, y), z_y(x, y), u, \lambda(x, y), \mu(x, y))$.

For each natural number $n$, let $C_n$ be a closed subset of $G$ such that (i) meas $(C_n) \ge (1 - n^{-1})$ meas $G$, and (ii) on $C_n$ the functions $v_0$, $\tilde z = (z_1, z_2, z_3)$, $(\lambda, \mu) = (\lambda_1, \lambda_3, \mu_1, \mu_2)$ are all continuous. Let $C'_n$ be the set of all points

of density of $C_n$, so that meas $C'_n$ = meas $C_n$ and the functions $v_0$, $\tilde z$, $\lambda$, $\mu$ are continuous on $C'_n$ with respect to itself. Now, for any $u \in U$, let $R(x, y; u) = H(x, y, v_0(x, y)) - H(x, y, u)$. Then this function is continuous on $C'_n$ for each $n$. Let $B = (\text{interior of } G) \cap \bigl(\bigcup_{n=1}^\infty C'_n\bigr)$. Then meas $B \ge$ meas $C'_n \ge (1 - n^{-1})$ meas $G$ for all $n$, and hence meas $B \ge$ meas $G$. Further, since $B \subset G$, it follows that meas $B$ = meas $G$. We shall prove that $B$ is the required set, i.e., for $(x, y) \in B$ we have $H(x, y, v_0(x, y)) \le H(x, y, u)$ for all $u \in U$.

Let $(x_0, y_0)$ be an arbitrary point of $B$. Then there exists an $N$ such that $(x_0, y_0) \in C'_N$. Now let us choose $\delta_1, \delta_2, \delta_3, \delta_4$ as follows:

(i) Since $(x_0, y_0) \in C'_N$, it is a point of density for $C_N$ and hence there is a $\delta_1 > 0$ such that $0 < \delta \le \delta_1$ implies meas $(C_N \cap S_\delta(x_0, y_0)) \ge 2^{-1}$ meas $(S_\delta(x_0, y_0))$, where, as before, $S_\delta(x_0, y_0)$ is the square of side length $2\delta$ with center at $(x_0, y_0)$.

(ii) Let us suppose that the minimum condition does not hold at $(x_0, y_0)$. Then there is a $u \in U$ with $\epsilon = R(x_0, y_0; u) > 0$. Using the continuity of $R(x, y; u)$ on $C'_N$, we obtain a $\delta_2 > 0$ such that $|R(x, y; u) - R(x_0, y_0; u)| < \epsilon/2$ whenever $(x, y) \in C'_N$ and $|(x, y) - (x_0, y_0)| \le 2\delta_2$. Thus $R(x, y; u) \ge \epsilon/2$ for all $(x, y) \in C_N \cap S_{\delta_2}(x_0, y_0)$.

(iii) The function $s(x, y; u) = |f(x, y, \tilde z(x, y), u) - f(x, y, \tilde z(x, y), v_0(x, y))|$, with $u$ as in (ii), belongs to $L_4(G)$ (by (H6)); hence there exists a $\delta_3 > 0$ such that for $0 < \delta \le \delta_3$ we have $\iint_{S_\delta} s^2(\alpha, \beta; u)\,d\alpha\,d\beta < \zeta^2$ and $\iint_{S_\delta} s^4(\alpha, \beta; u)\,d\alpha\,d\beta < \zeta^2$, where $\zeta$ is some number with $0 < \zeta^2 \le \zeta \le \epsilon/6M$

($\epsilon$ as in (ii) and $M$ as in the inequality (5.8)).

(iv) Since $(x_0, y_0)$ is in the interior of $G$, there is a $\delta_4 > 0$ such that $S_\delta = S_\delta(x_0, y_0) \subset G$ for $0 < \delta \le \delta_4$.

Let $\sigma > 0$ be such that $\sigma \le \min(\delta_1, \delta_2, \delta_3, \delta_4)$, and let $v_\epsilon$ be the function defined by $v_\epsilon(x, y) = u$ if $(x, y) \in C_N \cap S_\sigma$ and $v_\epsilon(x, y) = v_0(x, y)$ otherwise. Clearly, $v_\epsilon$ is an element of $\Gamma$. Also, $R(x, y; v_\epsilon(x, y))$ is zero outside $C_N \cap S_\sigma$ and is $\ge \epsilon/2$ for all $(x, y) \in C_N \cap S_\sigma$. Thus

$$I[v_\epsilon] - I[v_0] = \eta + \iint_G \bigl[H(v_\epsilon(x, y)) - H(v_0(x, y))\bigr]\,dx\,dy = \eta - \iint_{C_N \cap S_\sigma} R(x, y; v_\epsilon(x, y))\,dx\,dy$$
$$\le \eta - 2^{-1}\epsilon\,\text{meas}\,(C_N \cap S_\sigma) \le \eta - 4^{-1}\epsilon\,\text{meas}\,S_\sigma,$$

where $|\eta| \le \epsilon\,\sigma^2/2$ from no. 5. Thus $I[v_\epsilon] - I[v_0] \le -\epsilon\,\sigma^2/2 < 0$. This is contrary to the assumption that $v_0$ is optimal. The contradiction arose because of (ii). It follows that for any $(x, y) \in B$ we have $H(x, y, v_0(x, y)) \le H(x, y, u)$ for all $u \in U$. This concludes the proof of the theorem.

7. DISCUSSION AND EXAMPLES

In this section we shall discuss the Pontryagin-type necessary condition given in theorem (6.i) in relation to the results of Cesari [2] and A. I. Egorov [3a]. We shall first show that our results yield those of A. I. Egorov under conditions of smoothness. We shall also give examples where our necessary

condition applies. In particular, example 3 of (D) below will show that our results are actually more general than those of A. I. Egorov.

A. The Linear Case

If the state equations are linear, i.e., of the form $z_{xy} = Az + Bz_x + Cz_y + D(x, y, u)$, where $A$, $B$, $C$ are matrix-valued functions on $G$, then we have seen that the increment formula reduces to $I(v_\epsilon) - I(v_0) = \iint_G \bigl[H(v_\epsilon(x, y)) - H(v_0(x, y))\bigr]\,dx\,dy$. Now, if a control $v_0 \in \Gamma$ satisfies the minimum condition, then in particular $H(v_0(x, y)) \le H(v(x, y))$ a.e. in $G$, and hence $I(v_0) \le I(v)$ for all $v$ in $\Gamma$; i.e., $v_0$ is optimal for $I$. Thus the necessary condition is also sufficient in the linear case. For the existence of solutions of the Goursat problem (1.1), (1.2) [as well as of the conjugate problem (4.3), (4.4)] in this case, we may require that the matrix-valued functions $A(x, y)$, $B(x, y)$, $C(x, y)$ be in $L_\infty(G)$ and that $D(x, y, u)$ be continuous in $u$. Further, we shall require $D(x, y, v(x, y))$ to be in $[L_p(G)]^n$ for $v \in \Gamma$.

B. Various Types of Cost Functional

B1. It is clear that the cost functional (1.4), $I[z, v] = \sum_{i=1}^n A_i z^i(a+h, b+k)$, can be written in the Lagrange form $J[z, v] = \iint_G f_0(x, y, z(x, y), z_x(x, y), z_y(x, y), v(x, y))\,dx\,dy$ with $f_0 = \sum_{i=1}^n A_i f_i$ and the $f_i$ as in (1.1). However, the Lagrange problem of the minimum of $J[z, v]$, with $f_0$ not necessarily equal to $\sum A_i f_i$, and $z$, $v$ satisfying (1.1-3), can always be formulated as the Mayer problem (1.1-4) by suitable transformations. This is done, as usual, by introducing a new variable $z^0$ with $z^0_{xy} = f_0(x, y, z, z_x, z_y, v)$,

$z^0(a, y) = 0$, $z^0(x, b) = 0$. Then the functional $J$ can be written in the form $J = z^0(a+h, b+k)$ (cfr. Cesari [2] or the author [6a]).

B2. The general Mayer problem with cost functional $I'[z, v] = \Phi(z(a+h, b+k))$, with $z$, $v$ satisfying (1.1-3) and an arbitrary twice continuously differentiable $\Phi(t)$, $t \in E_n$, can be reduced to problem (1.1-4). As above, this is usually done by introducing a new variable $z^0$ satisfying

$$z^0_{xy} = \sum_{i,j=1}^n (\partial^2\Phi/\partial z^i\partial z^j)\,z^i_x z^j_y + \sum_{i=1}^n (\partial\Phi/\partial z^i)\,f_i(x, y, z, z_x, z_y, v)$$

and $z^0(a, y) = \Phi(\psi^1(y),\dots,\psi^n(y))$, $z^0(x, b) = \Phi(\varphi^1(x),\dots,\varphi^n(x))$, where $\varphi(x) = (\varphi^1,\dots,\varphi^n)$, $\psi(y) = (\psi^1,\dots,\psi^n)$ are the initial data as in (1.2). (See A. I. Egorov [3a] and the author [6a].)

B3. The problem of the minimum of $J[z, v] = \int_a^{a+h} F(x, z(x, b+k), z_x(x, b+k))\,dx$ can be reduced to that of the minimum of $z^0(a+h, b+k)$, where $z^0$ is defined by

$$z^0_{xy} = \sum_{i=1}^n \bigl[(\partial F/\partial z^i)\,z^i_y + (\partial F/\partial z^i_x)\,f_i(x, y, z, z_x, z_y, v)\bigr]$$

and $z^0(a, y) = 0$, $z^0(x, b) = \int_a^x F(\alpha, \varphi(\alpha), \varphi'(\alpha))\,d\alpha$ (see A. I. Egorov [3a]).

B4. Let $J$ denote any linear combination of the functionals mentioned above in B1, B2, B3. It is clear that the functional $J$ can be reduced to the form (1.4) by suitable addition of auxiliary variables $z^0$.

C. Comparison with A. I. Egorov's Results

The optimization problem (1.1-4) was studied by A. I. Egorov [3a], where he proposed a necessary condition in terms of the Hamiltonian

$$H(x, y, z, z_x, z_y, v, \theta) = \sum_{i=1}^n \theta^i f_i(x, y, z, z_x, z_y, v)$$

and multipliers $\theta(x, y) = (\theta^1,\dots,\theta^n)$ satisfying the Goursat-type problem

$$\theta^i_{xy} = \partial H/\partial z^i - (\partial/\partial x)(\partial H/\partial z^i_x) - (\partial/\partial y)(\partial H/\partial z^i_y), \quad (x, y) \in G, \quad i = 1,\dots,n, \tag{7.1}$$

with boundary conditions

$$\theta^i_x = -\partial H/\partial z^i_y \ \text{ for } y = b+k, \qquad \theta^i_y = -\partial H/\partial z^i_x \ \text{ for } x = a+h, \tag{7.2}$$

$$\theta^i(a+h, b+k) = A_i, \quad i = 1,\dots,n. \tag{7.3}$$

In (7.1) the derivatives are evaluated at $(z, z_x, z_y, v, \theta)$, and the numbers $A_i$ in (7.3) are those in (1.4). In [3a] the control variables $v_j$ are assumed to be piecewise continuous. In the present paper the controls $v_j$ are assumed to be only measurable, and we proved, under our general assumptions (H1)-(H8), that the functions $z^i$ belong to a Sobolev class $W^1_p(G)$ and are continuous in $G$. In any case, the derivatives $(\partial/\partial x)(\partial H/\partial z^i_x)$, $(\partial/\partial y)(\partial H/\partial z^i_y)$ which appear in (7.1) need not in general exist, as example 3 below in D will show. On the other hand, under suitable regularity conditions, equations (7.1-3) can be derived from the conjugate problem (4.3-4), by defining $\theta = (\theta^1,\dots,\theta^n)$ in terms of the multipliers $\lambda_1, \lambda_3, \mu_1, \mu_2$. We need the following assumption:

(*) For a given optimal pair $(z, v_0)$ and corresponding multipliers $\lambda_1, \lambda_3, \mu_1, \mu_2$, let us assume that the following partial derivatives exist as generalized

derivatives:

$$(\partial/\partial x)\Bigl(\sum_{j=1}^n (\lambda^j_3 + \mu^j_2)\,\partial f_j/\partial z^i_2\Bigr), \qquad (\partial/\partial y)\Bigl(\sum_{j=1}^n (\lambda^j_3 + \mu^j_2)\,\partial f_j/\partial z^i_3\Bigr), \qquad i = 1,\dots,n.$$

From (4.5) and (*) it follows that $\lambda^i_{3xy}$ and $\mu^i_{2xy}$, $i = 1,\dots,n$, exist as generalized derivatives, and in view of (4.3) we have

$$\lambda^i_{3xy} = -\mu^i_{1y} - (\partial/\partial y)\Bigl(\sum_{j=1}^n (\lambda^j_3 + \mu^j_2)\,\partial f_j/\partial z^i_3\Bigr), \qquad \mu^i_{2xy} = -\lambda^i_{1x} - (\partial/\partial x)\Bigl(\sum_{j=1}^n (\lambda^j_3 + \mu^j_2)\,\partial f_j/\partial z^i_2\Bigr)$$

a.e. in $G$, $i = 1,\dots,n$. Thus, if $\theta = \lambda_3 + \mu_2$, that is, $\theta^i = \lambda^i_3 + \mu^i_2$, $i = 1,\dots,n$, then $\theta^i_{xy}$ exists a.e. in $G$ as a generalized derivative and, in view of (4.3), we get (7.1) with $H = \sum_i \theta^i f_i$. Again, from (4.5) we get

$$\theta^i_x = -\mu^i_1 - (\partial H/\partial z^i_y) = -(\partial H/\partial z^i_y) \ \text{ for } y = b+k, \tag{7.4}$$

and analogously

$$\theta^i_y = -\lambda^i_1 - (\partial H/\partial z^i_x) = -(\partial H/\partial z^i_x) \ \text{ for } x = a+h, \quad i = 1,\dots,n. \tag{7.5}$$

Finally, $\theta(a+h, b+k) = (\lambda_3 + \mu_2)(a+h, b+k) = A$. Thus, under assumption (*), the sums $\theta^i = \lambda^i_3 + \mu^i_2$, $i = 1,\dots,n$, act as the multipliers described by (7.1-3). It is of interest to note that $\theta = \lambda_3 + \mu_2$ is obtained as the fixed point of the contraction operator $T$ (see remark (a)

in no. 4), and thus $\theta$ is unique, in harmony with the uniqueness of Egorov's solution $\theta$ of (7.1-3).

D. Examples

(1) ([3a], p. 560). Let $G = \{(x, y):\ 0 \le x \le 1,\ 0 \le y \le 1\}$, and consider the problem of the minimum of

$$S = \int_0^1\!\!\int_0^1 (1 - 2y)\,z(x, y)\,dx\,dy$$

with side condition $z_{xy} = -2z - 2z_x - z_y - v$, boundary conditions $z(x, 0) = z(0, y) = 0$; here $z$ is a scalar and $v$ is a control variable with values in $U = [-1, 1]$. To obtain the multipliers (and use theorem (6.i)), we first introduce a new set of variables $z_i$, $i = 1,\dots,6$, defined by

$$z_1 = z; \quad z_2 = z_{1x}; \quad z_3 = z_{1y}; \quad z_4 = \int_0^x\!\!\int_0^y (1 - 2\beta)\,z_1(\alpha, \beta)\,d\alpha\,d\beta; \quad z_5 = z_{4x}; \quad z_6 = z_{4y}.$$

Then $S$ attains its minimum together with $z_4(1, 1)$, and the side conditions are now $z_{2y} = z_{3x} = z_{1xy} = -2z_1 - 2z_2 - z_3 - v$; $z_{5y} = z_{6x} = z_{4xy} = (1 - 2y)z_1$. The corresponding conjugate problem is

$$\lambda_{1x} + \mu_{1y} = -\partial H/\partial z_1 = 2(\lambda_3 + \mu_2) - (1 - 2y)(\lambda_6 + \mu_5);$$
$$\mu_{2y} = -\partial H/\partial z_2 = -\lambda_1 + 2(\lambda_3 + \mu_2); \qquad \lambda_{3x} = -\partial H/\partial z_3 = -\mu_1 + (\lambda_3 + \mu_2);$$
$$\lambda_{4x} + \mu_{4y} = -\partial H/\partial z_4 = 0; \qquad \mu_{5y} = -\partial H/\partial z_5 = -\lambda_4; \qquad \lambda_{6x} = -\partial H/\partial z_6 = -\mu_4;$$

with boundary conditions $\lambda_i(1, y) = 0 = \mu_j(x, 1)$ for $i \ne 6$ and $j \ne 5$; $\lambda_6(1, y) = \mu_5(x, 1) = 1/2$. Here

$$H = \lambda_1 z_2 + \mu_1 z_3 + (\lambda_3 + \mu_2)(-2z_1 - 2z_2 - z_3 - v) + \lambda_4 z_5 + \mu_4 z_6 + (\lambda_6 + \mu_5)(1 - 2y)z_1.$$

It is seen from the boundary conditions that one can take $\lambda_4 = \mu_4 = 0$, $\lambda_6 = \mu_5 = 1/2$ on $G$. Further, if $\theta = \mu_2 + \lambda_3$ and $\chi = \theta_x - \theta$, then by formal differentiation of the above equations we get $\theta_{xy} = \theta_y + 2(\theta_x - \theta) + (1 - 2y)$, or $\chi_y = 2\chi + 1 - 2y$, $\chi(x, 1) = 0$. Solving for $\chi$ as a function of $y$, we get $\chi(x, y) = y - e^{2(y-1)}$. Thus $\theta_x - \theta = y - e^{2(y-1)}$. Solving for $\theta$ as a function of $x$, we get

$$\theta(x, y) = (e^{x-1} - 1)\bigl(y - e^{2(y-1)}\bigr).$$

It is clear that for $(x, y) \in G$, $e^{x-1} \le 1$, and hence $\theta(x, y) \ge 0$ or $\le 0$ according as $y \le y_0$ or $y \ge y_0$, where $y_0 = e^{2y_0 - 2}$. But then $H$, as a function of $v$, is minimum for $v = v_0(x, y)$, where $v_0$ is the function on $G$ defined by $v_0(x, y) = 1$ for $y < y_0$, $= -1$ for $y > y_0$. Now, to obtain the value of the functional, we first solve the following for $z$: $z_{xy} = -2z - 2z_x - z_y - v_0$ with $z(x, 0) = z(0, y) = 0$. It is seen that $z$ is given by

$$z(x, y) = 2^{-1}(1 - e^{-x})(e^{-2y} - 1) \quad\text{for } y \le y_0; \qquad z(x, y) = 2^{-1}(1 - e^{-x})\bigl(1 + e^{-2y} - 2e^{2y_0 - 2y}\bigr) \quad\text{for } y \ge y_0,$$

and the functional takes the value

$$S = e^{-1}\bigl(y_0^2 + 2^{-1}e^{-2} - y_0\bigr), \qquad\text{where } y_0 = e^{2y_0 - 2}.$$

This is the optimum value obtained in [3a] also.
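As a rough numerical cross-check of example 1 (not carried out in the paper), one may solve the Darboux problem for the bang-bang control $v_0$ by Picard iteration on its integral form and compare the cost with that of a few other admissible controls. The grid size, iteration count, and comparison controls below are arbitrary choices.

```python
# Illustrative numerical check of example 1 (not from the paper): solve z_xy = -2z - 2z_x - z_y - v
# with zero Darboux data and compare S = int_G (1 - 2y) z dxdy for the bang-bang control v0
# (switching at y0 with y0 = exp(2*(y0 - 1))) against the constant controls +1 and -1.
import numpy as np

N, ITERS = 161, 60
x = y = np.linspace(0.0, 1.0, N)
dx = dy = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")

def solve(v):                                  # Picard iteration on z = double integral of f
    z = np.zeros((N, N))
    for _ in range(ITERS):
        zx = np.gradient(z, dx, axis=0)
        zy = np.gradient(z, dy, axis=1)
        F = -2*z - 2*zx - zy - v
        z = np.cumsum(np.cumsum(F, axis=0), axis=1) * dx * dy
    return z

def cost(v):
    return np.sum((1.0 - 2.0*Y) * solve(v)) * dx * dy

y0 = 0.5
for _ in range(100):                           # fixed-point iteration for y0 = exp(2*(y0 - 1))
    y0 = np.exp(2.0*(y0 - 1.0))
v_opt = np.where(Y < y0, 1.0, -1.0)
print("y0 ≈", y0)
print("S(v_opt) ≈", cost(v_opt))
print("S(v=+1) ≈", cost(np.ones((N, N))), "  S(v=-1) ≈", cost(-np.ones((N, N))))
```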

(2) ([3a], p. 561). To find the minimum of the functional $S = \int_0^1 z(x, 1)\,dx - \int_0^1 z(1, y)\,dy$, where the side conditions on $z$ are given by

$$z_{xy} = v; \quad |v| \le 1; \quad 0 \le x \le 1, \ 0 \le y \le 1; \quad z(x, 0) = z(0, y) = 0. \tag{7.6}$$

If $z^0$ is a variable satisfying the relations

$$z^0_{xy} = z_y - z_x \quad\text{and}\quad z^0(0, y) = z^0(x, 0) = 0, \tag{7.7}$$

then the above optimization problem reduces to the problem of the minimum of $z^0(1, 1)$ with (7.6), (7.7) as side conditions. The conjugate problem is described in terms of the multipliers $\lambda_1, \mu_1, \lambda_3, \mu_2, \lambda_4, \mu_4, \lambda_6, \mu_5$:

$$\lambda_{1x} + \mu_{1y} = 0; \qquad \mu_{2y} = -\lambda_1 + (\lambda_6 + \mu_5); \qquad \lambda_{3x} = -\mu_1 - (\lambda_6 + \mu_5);$$
$$\lambda_{4x} + \mu_{4y} = 0; \qquad \mu_{5y} = -\lambda_4; \qquad \lambda_{6x} = -\mu_4;$$

with boundary conditions $\lambda_i(1, y) = 0$, $\mu_j(x, 1) = 0$ for $i \ne 6$ and $j \ne 5$; $\lambda_6(1, y) = 1/2$; $\mu_5(x, 1) = 1/2$. In order to obtain a set of solutions, we may introduce the auxiliary equations $\lambda_{1x} = 0$ and $\lambda_{4x} = 0$ on $G$. Then we obtain $\lambda_1 = \mu_1 = 0$, $\lambda_4 = \mu_4 = 0$, $\mu_5 = 1/2$ and $\lambda_6 = 1/2$ on $G$. Also, $\mu_2 = y - 1$ and $\lambda_3 = -(x - 1)$. Thus the Hamiltonian reduces to $H = (y - x)v + (z_y - z_x)$. It follows that, as a function of $v$ alone, $H$ is minimum at $v_0(x, y)$, where $v_0$ is defined on $G$ as follows: $v_0(x, y) = -1$ for $0 \le x < y \le 1$; $= 1$ for $0 \le y < x \le 1$. Substituting in (7.6) and integrating, we get

$$z(x, y) = -xy + \varphi(x) \ \text{ for } 0 \le x \le y \le 1; \qquad z(x, y) = xy + \psi(y) \ \text{ for } 0 \le y \le x \le 1, \tag{7.8}$$

where $\varphi$ and $\psi$ are absolutely continuous functions defined on $[0, 1]$ with $\varphi(0) = \psi(0) = 0$. Now $\varphi$ and $\psi$ are to be chosen so that the two expressions of (7.8) coincide for $x = y$. Thus $\varphi(y) - y^2 = y^2 + \psi(y)$, i.e., $\psi(y) = \varphi(y) - 2y^2$.

Hence $z(x, y) = -xy + \varphi(x)$ for $0 \le x \le y \le 1$; $= xy - 2y^2 + \varphi(y)$ for $0 \le y \le x \le 1$, where $\varphi$ is some arbitrary absolutely continuous function defined on $[0, 1]$ with $\varphi(0) = 0$. The corresponding value of the functional is

$$S = \int_0^1 z(x, 1)\,dx - \int_0^1 z(1, y)\,dy = \int_0^1 [\varphi(x) - x]\,dx - \int_0^1 [\varphi(y) + y - 2y^2]\,dy = -1/3.$$

This again is in harmony with the optimum obtained in [3a].
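A similar cross-check of example 2 (again not in the paper): with zero Darboux data, $z$ is simply the double integral of $v$, so $S$ can be evaluated directly on a grid, and the control $v_0$ above should give a value close to $-1/3$. The grid size and the comparison controls are arbitrary choices.

```python
# Illustrative cross-check of example 2 (not from the paper): z_xy = v with zero data gives
# z(x, y) = int_0^x int_0^y v, so S = int_0^1 z(x,1) dx - int_0^1 z(1,y) dy is computed directly.
import numpy as np

N = 400
x = y = (np.arange(N) + 0.5) / N               # midpoints of an N x N grid on [0, 1]^2
dx = dy = 1.0 / N
X, Y = np.meshgrid(x, y, indexing="ij")

def cost(v):
    z = np.cumsum(np.cumsum(v, axis=0), axis=1) * dx * dy    # z = double integral of v
    S_top = np.sum(z[:, -1]) * dx                            # approximates int_0^1 z(x, 1) dx
    S_right = np.sum(z[-1, :]) * dy                          # approximates int_0^1 z(1, y) dy
    return S_top - S_right

v0 = np.where(X < Y, -1.0, 1.0)
print("S(v0) ≈", cost(v0))                                   # expect about -1/3
print("S(v=+1) ≈", cost(np.ones((N, N))), "  S(v=-1) ≈", cost(-np.ones((N, N))))
```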

(3) ([6a], p. 207). Let $G$ be the rectangle $[0, 1] \times [0, 1]$. Let us consider the problem of the minimum of the functional $S = z(1, 1)$ with side conditions and constraints

$$z_{xy} = (1 + z_x)\,v; \quad z(x, 0) = 0, \ z(0, y) = 0; \quad -1 \le v \le 1. \tag{7.9}$$

Let us first observe that for any $v(x, y)$ in $L_\infty(G)$, the solution of (7.9) is given by

$$z(x, y) = \int_0^x \Bigl[-1 + \exp\Bigl(\int_0^y v(\alpha, \beta)\,d\beta\Bigr)\Bigr]\,d\alpha. \tag{7.10}$$

Now, since $v(\alpha, \beta) \ge -1$ for all $(\alpha, \beta)$, we have $\int_0^1 v(\alpha, \beta)\,d\beta \ge -1$ and hence $\int_0^1 \exp\bigl(\int_0^1 v(\alpha, \beta)\,d\beta\bigr)\,d\alpha \ge 1/e$. Thus $S = z(1, 1) \ge -1 + (1/e)$ for any admissible pair $(z, v)$ satisfying (7.9). It follows that the function $v_0(x, y)$ defined by $v_0(x, y) = -1$ for almost all $(x, y) \in G$ is optimal for $S$, and vice versa. In order to verify that $v_0$ satisfies the minimum condition, we formulate the conjugate problem

$$\lambda_{1x} + \mu_{1y} = -\partial H/\partial z_1 = 0; \qquad \mu_{2y} = -\partial H/\partial z_2 = -\lambda_1 - v(\lambda_3 + \mu_2); \qquad \lambda_{3x} = -\partial H/\partial z_3 = -\mu_1;$$

with boundary conditions $\lambda_1(1, y) = \mu_1(x, 1) = 0$; $\lambda_3(1, y) = \mu_2(x, 1) = 1/2$. Here the Hamiltonian $H$ is given by $H = \lambda_1 z_2 + \mu_1 z_3 + (\lambda_3 + \mu_2)f$, where $z_1 = z$, $z_2 = z_x$, $z_3 = z_y$, $f = (1 + z_2)v$. The multipliers corresponding to $v_0$ are obtained as solutions of the above system of equations with $v$ replaced by $v_0(x, y)$. Clearly, we may choose $\lambda_1 = \mu_1 = 0$ and $\lambda_3 = 1/2$ on $G$. But then $\mu_{2y} = -v_0(2^{-1} + \mu_2)$, $\mu_2(x, 1) = 1/2$. A solution of this equation is given by $\mu_2(x, y) = \exp\bigl(\int_y^1 v_0(x, \beta)\,d\beta\bigr) - (1/2)$. Substituting in the Hamiltonian, we get

$$H(x, y, z, u, \lambda, \mu) = u\,(1 + z_x)\,\exp\Bigl(\int_y^1 v_0(x, \beta)\,d\beta\Bigr),$$

where $z$ corresponds to $v_0$. Now, if $v_0(x, y) = -1$ on $G$, then $H = u(1 + z_x)\exp(y - 1) = u\,\exp\bigl(\int_0^y v_0(x, \beta)\,d\beta\bigr)\exp(y - 1) = u/e$. Clearly $H(u) \ge H(v_0(x, y))$ for $(x, y) \in G$ and $u \in U = [-1, 1]$.

Remarks: It is to be observed that in the above example the optimal solution $v_0(x, y) = -1$ happens to be smooth and, as mentioned in (C) above, Egorov's condition also holds. However, this example (3) can be easily modified into another one for which Egorov's necessary condition cannot be applied. Indeed, if $w(x)$, $0 \le x \le 1$, is a fixed continuous, positive, nowhere differentiable function (such a function exists), we consider, instead of (7.9), the equation $z_{xy} = (1 + z_x)\,v\,w$ with the same boundary conditions and constraints as above. Then (7.10) is replaced by

$$z(x, y) = \int_0^x \Bigl[-1 + \exp\Bigl(w(\alpha)\int_0^y v(\alpha, \beta)\,d\beta\Bigr)\Bigr]\,d\alpha,$$

and the optimal control is still $v_0(x, y) = -1$ a.e. in $G$. Here Egorov's Hamiltonian $H$ [3a] is given by $\theta(1 + z_x)vw$, and the second order derivative $(H_{z_x})_x$ required in (7.1) does not exist.

Also of interest in the above example is the fact that the multiplier $\mu_2(x, y)$ need not have a partial derivative with respect to $x$. Thus, in general, the multipliers need not have partial derivatives with respect to both variables; as such, they may not belong to a Sobolev class.

BIBLIOGRAPHY

1. A. G. Butkovskii, A. I. Egorov, and K. A. Lurie, "Optimal control of distributed parameter systems (a survey of Soviet publications)," SIAM J. Control, Vol. 6 (1968), pp. 437-476.

2. L. Cesari, "Optimization with partial differential equations in Dieudonné-Rashevsky form and conjugate problems," Arch. Rational Mech. Anal., Vol. 33 (1969), pp. 339-357.

3. A. I. Egorov, (a) "Optimal control of processes in certain systems with distributed parameters," Automat. Remote Control, Vol. 25 (1964), pp. 557-566; (b) "Necessary conditions for optimality for systems with distributed parameters," Mat. Sb., Vol. 69 (1966), pp. 371-421.

4. C. B. Morrey, Multiple Integrals in the Calculus of Variations, Springer, Berlin-Heidelberg-New York, 1966.

5. S. L. Sobolev, Applications of Functional Analysis in Mathematical Physics, Leningrad, 1950; Amer. Math. Soc. Transl. of Math. Monographs, Vol. 7, Providence, R.I., 1963.

6. M. B. Suryanarayana, (a) Optimization problems with hyperbolic partial differential equations, Thesis, The University of Michigan, 1969; (b) On multidimensional integral equations of Volterra type (to appear).