Upper Bounds on the Expected Value of a Convex Function Using Gradient and Conjugate Function Information

John R. Birge
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, MI 48109, USA

Marc Teboulle
Department of Mathematics, Statistics and Computing Science
Dalhousie University
Halifax, NS, Canada B3H 3J5

Technical Report 87-19
August 1987

*Supported in part by Office of Naval Research Grant N00014-86-K-0628 and by Dalhousie University. This work was conducted while the first author was visiting the Department of Mathematics, Statistics and Computing Science, Dalhousie University.

Abstract: New upper bounds are given for the expected value of a convex function. The bounds employ subgradient and conjugate function information. We derive the bounds and compare them with previous bounds that have different information requirements.

Keywords: Bounds, Convex Functions, Stochastic Programs, Utility Functions

1. INTRODUCTION

Evaluating the expectation of a convex function is a central requirement in utility theory (see, for example, Fishburn [1970]) and stochastic programming (see, for example, Dempster [1980]). In general, these problems involve optimizing the expectation of some function of certain random variables and decision parameters. We assume that this function is convex and that certain properties of the convex function and the underlying probability measure are known. We show that new upper bounds on this expectation are available when the information includes subgradient and conjugate function information. This result is especially useful when the original integral is not easily computable, as we show below.

The most basic bound on expectations of convex functions is Jensen's lower bound (Jensen [1906]), which requires knowing only the finite means of the random variables. Madansky [1959], following Edmundson [1956], gave an upper bound based on the theory of moment spaces. This bound again requires finite mean value information and a bounded $n$-dimensional rectangular domain of stochastically independent random variables. Ben-Tal and Hochman [1972] extended and refined the Edmundson-Madansky bound by including information on the expected value of the absolute difference between the random variable and its mean. Gassmann and Ziemba [1986] provide a weaker bound that does not require independence (as in Dupacova [1974]) or $n$-dimensional bounded regions. Frauendorfer [1986] provides the extension of the Edmundson-Madansky bound with dependencies and knowledge of the joint expectations of the random variables.

The general process of obtaining these bounds as solutions of moment problems is described in Birge and Wets [1986]. The solution of linear approximations is given in Birge and Wets [1987]. Explicit solution procedures also appear in Ermoliev, Gaivoronski and Nedeva [1987] and Cipra [1986]. They are also used in Dula [1986] to provide bounds for the expectation of convex functions with additional properties given first and second moment information. Our results differ from the above in not requiring explicit moment information, but instead information regarding the conjugate function, the expectation of the gradient, and the inner product of the gradient and the random vector. We first give a one-dimensional result in Section 2. Section 3 provides an extension to $\mathbb{R}^n$. Section 4 compares our bound with previous bounds in $\mathbb{R}$ and Section 5 provides the comparison in $\mathbb{R}^n$. Section 6 describes possible refinements, and Section 7 gives conclusions.

2. AN UPPER BOUND IN $\mathbb{R}$

Let $(\Omega, \Sigma, F)$ be a probability measure space and let $X: \Omega \to (a,b)$ be a random variable, where $-\infty \le a < b \le +\infty$, with distribution $F$. Let $\phi: (a,b) \to \mathbb{R}$ be a convex differentiable function. We denote expectation with respect to $F$ by $E$, and throughout this section we assume that $E\phi(X)$, $E\phi'(X)$ and $EX\phi'(X)$ exist and are finite.

Theorem 2.1. Let $\phi: (a,b) \to \mathbb{R}$ be a convex differentiable increasing function and assume that $E\phi'(X) > 0$. Then,
$$E\phi(X) \le \phi\!\left(\frac{EX\phi'(X)}{E\phi'(X)}\right) =: D. \tag{2.1}$$

Proof: For any convex differentiable function $\phi$ on $(a,b)$, the following inequality holds:
$$\phi(s) - \phi(t) \ge (s-t)\,\phi'(t) \quad \text{for all } s,t \in (a,b). \tag{2.2}$$
Set $t = X$ (clearly in $(a,b)$) and $s = EX\phi'(X)/E\phi'(X)$. Since $\phi$ is increasing and $E\phi'(X) > 0$, $s \in (a,b)$. Substituting $s, t$ in (2.2) we obtain
$$\phi\!\left(\frac{EX\phi'(X)}{E\phi'(X)}\right) \ge \phi(X) + \left(\frac{EX\phi'(X)}{E\phi'(X)} - X\right)\phi'(X). \tag{2.3}$$
Take expectations on both sides of (2.3) with respect to $F$ and observe that, on the right-hand side of (2.3) (with $E\phi'(X) > 0$),
$$E\left[\phi'(X)\left(\frac{EX\phi'(X)}{E\phi'(X)} - X\right)\right] = EX\phi'(X) - EX\phi'(X) = 0.$$
The result follows. $\square$
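As a quick numerical sanity check of (2.1) (our addition, not part of the original report), the sketch below takes $\phi(x) = e^x$, which is convex, increasing and differentiable, and $X \sim U(0,1)$; both choices are illustrative assumptions.

```python
# Monte Carlo check of Theorem 2.1 -- a sketch, assuming phi = exp and X ~ U(0,1).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)

phi, dphi = np.exp, np.exp              # phi'(x) = exp(x)

E_phi   = phi(x).mean()                 # E phi(X)     ~ e - 1 ~ 1.7183
E_dphi  = dphi(x).mean()                # E phi'(X)    ~ e - 1
E_xdphi = (x * dphi(x)).mean()          # E X phi'(X)  ~ 1

D = phi(E_xdphi / E_dphi)               # the bound (2.1), ~ 1.7896
assert E_phi <= D
print(E_phi, D)
```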

Remarks 2.1. (1) If $\phi$ is strictly increasing and concave, then inequality (2.1) is reversed. (2) If $\phi$ is strictly decreasing, then, assuming $E\phi'(X) < 0$, inequality (2.1) is still valid. (3) The differentiability assumption on $\phi$ can be relaxed. For if $\phi$ is convex, its left and right derivatives $\phi'_-(x)$ and $\phi'_+(x)$ exist and are finite and non-decreasing. Moreover, the subdifferential of $\phi$ is given by $\partial\phi(x) = \{z \in \mathbb{R}: \phi'_-(x) \le z \le \phi'_+(x)\}$ (see e.g. Rockafellar [1970], pp. 228-229). Theorem 2.1 remains valid if we substitute any $z \in \partial\phi(x) = [\phi'_-(x), \phi'_+(x)]$ for $\phi'(x)$.

Jensen's inequality for a convex function $\phi$ provides us with a lower bound for $E\phi(X)$:
$$\phi(E(X)) \le E\phi(X). \tag{2.4}$$
Combining inequality (2.4) with Theorem 2.1 allows us to derive a rearrangement-type inequality.

Corollary 2.1. Under the assumptions of Theorem 2.1, we have
$$EX\phi'(X) \ge EX \cdot E\phi'(X). \tag{2.5}$$

Proof: Follows directly from (2.1) combined with (2.4), using that $\phi$ is increasing and $E\phi'(X) > 0$. $\square$

More generally, let $g: (a,b) \to \mathbb{R}$ be a given increasing function. Since $\phi$ is convex, $\phi'$ is increasing, and so $f(t) := \phi'(g(t))$ is increasing. Then inequality (2.5) implies
$$E\,g(X)f(X) \ge Eg(X) \cdot Ef(X). \tag{2.6}$$
Inequalities (2.5) and (2.6) can be used to obtain bounds on system reliability. For general results on rearrangement inequalities and applications, see Karlin and Rinott [1981] and the references therein.

3. AN UPPER BOUND IN $\mathbb{R}^n$

In this section we present a natural extension to $\mathbb{R}^n$ of the upper bound derived in Theorem 2.1. Let $X$ be a random vector on the probability space $(\Omega, \Sigma, F)$ with distribution function $F$ and let $S \subset \mathbb{R}^n$ be the support of $X$. Assume that $S$ is convex and let $\phi: S \to \mathbb{R}$ be a convex differentiable function. The gradient of $\phi$ at $x$ is denoted by $\nabla\phi(x)$. The convex conjugate function of $\phi$ is defined by
$$\phi^*(y) = \sup_x \{x^T y - \phi(x)\}.$$
In the sequel we assume that $E\phi(X)$, $EX^T\nabla\phi(X)$ and $E\nabla\phi(X)$ exist and are finite.

Theorem 3.1.
$$E\phi(X) \le EX^T\nabla\phi(X) - \phi^*(E\nabla\phi(X)). \tag{3.1}$$

Proof: Since $\phi: S \to \mathbb{R}$ is convex and differentiable, the gradient inequality holds, i.e.,
$$\phi(\alpha) - \phi(\beta) \ge (\alpha - \beta)^T \nabla\phi(\beta) \quad \text{for all } \alpha, \beta \in S. \tag{3.2}$$

Setting $\beta = X$ in (3.2) and taking expectations with respect to $F$ implies
$$E\phi(X) \le EX^T\nabla\phi(X) + \phi(\alpha) - \alpha^T E\nabla\phi(X) \quad \text{for all } \alpha \in S.$$
Hence,
$$E\phi(X) \le EX^T\nabla\phi(X) + \inf_\alpha \{\phi(\alpha) - \alpha^T E\nabla\phi(X)\}. \tag{3.3}$$
Note that
$$\inf_\alpha \{\phi(\alpha) - \alpha^T E\nabla\phi(X)\} = -\sup_\alpha \{\alpha^T E\nabla\phi(X) - \phi(\alpha)\} = -\phi^*(E\nabla\phi(X)).$$
Inequality (3.1) follows immediately from (3.3). $\square$

Remark 3.1. An alternative proof of Theorem 3.1 may be derived using the following useful relation (see Rockafellar [1970], p. 257):
$$\phi^*(\nabla\phi(z)) = z^T\nabla\phi(z) - \phi(z). \tag{3.4}$$
Setting $z = X$ and taking expectations in (3.4), we obtain
$$E\phi(X) = EX^T\nabla\phi(X) - E\phi^*(\nabla\phi(X)). \tag{3.5}$$
But since $\phi$ is convex, so is $\phi^*$, and hence by Jensen's inequality
$$\phi^*(E\nabla\phi(X)) \le E\phi^*(\nabla\phi(X)). \tag{3.6}$$
Then (3.5) combined with (3.6) implies (3.1). This proof will be useful for refining the upper bound; see Section 6. $\square$

Remarks 3.2. (1) If $\phi$ is concave, inequality (3.1) is reversed. (2) As mentioned in Remark 2.1(3), Theorem 3.1 remains valid if instead of $\nabla\phi(\cdot)$ we substitute any $z \in \partial\phi$. $\square$
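The bound (3.1) is easy to test numerically when $\phi^*$ is known in closed form. The sketch below (our addition) uses a positive definite quadratic $\phi(x) = \frac12 x^T A x$, for which $\phi^*(y) = \frac12 y^T A^{-1} y$, and a uniform $X$ on the unit square; the matrix $A$ is an arbitrary illustrative choice.

```python
# Monte Carlo check of Theorem 3.1 for a quadratic phi -- a sketch.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])              # positive definite
Ainv = np.linalg.inv(A)

X = rng.uniform(0.0, 1.0, size=(1_000_000, 2))
G = X @ A.T                             # grad phi(x) = A x

E_phi    = 0.5 * np.einsum('ij,ij->i', X, G).mean()   # E phi(X), ~ 0.625
E_xTgrad = np.einsum('ij,ij->i', X, G).mean()         # E X^T grad phi(X)
E_grad   = G.mean(axis=0)                             # E grad phi(X)

C = E_xTgrad - 0.5 * E_grad @ Ainv @ E_grad           # bound (3.1), ~ 0.75
assert E_phi <= C
print(E_phi, C)
```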

The one-dimensional version of Theorem 3.1 ($n=1$, $S=(a,b)$) provides us with the upper bound
$$E\phi(X) \le EX\phi'(X) - \phi^*(E\phi'(X)) =: C.$$
The next result shows that the bound $C$ is tighter than the upper bound $D$ derived in Theorem 2.1.

Theorem 3.2. Under the assumptions of Theorem 2.1, we have
$$E\phi(X) \le C \le D. \tag{3.7}$$

Proof: For any convex function $\phi$ and any $\alpha \in \operatorname{dom}\phi$, $\beta \in \operatorname{dom}\phi^*$, the (Fenchel) inequality
$$\phi(\alpha) + \phi^*(\beta) \ge \alpha\beta \tag{3.8}$$
holds. Substituting in (3.8) $\alpha = EX\phi'(X)/E\phi'(X) \in (a,b) \subset \operatorname{dom}\phi$ and $\beta = E\phi'(X) \in \operatorname{range}\phi' \subset \operatorname{dom}\phi^*$, the result (3.7) follows. $\square$

4. COMPARISONS OF BOUNDS IN $\mathbb{R}$

Throughout this section, $\phi: \mathbb{R} \to \mathbb{R}$ is a given convex differentiable function and $X$ is a random variable with distribution $F$ and density $f$ with support $(a,b)$. We compare the upper bounds $C = EX\phi'(X) - \phi^*(E\phi'(X))$ and $D = \phi(EX\phi'(X)/E\phi'(X))$ with the following well-known upper bounds:

Edmundson-Madansky [1956]:
$$EM := \frac{(b-\bar{x})\,\phi(a) + (\bar{x}-a)\,\phi(b)}{b-a} \tag{4.1}$$
where $\bar{x} := E(X) < \infty$ and $[a,b]$ is a finite interval.

Ben-Tal-Hochman [1972]:
$$BH := \frac{d}{2(\bar{x}-a)}\,\phi(a) + \left(1 - \frac{d}{2(\bar{x}-a)} - \frac{d}{2(b-\bar{x})}\right)\phi(\bar{x}) + \frac{d}{2(b-\bar{x})}\,\phi(b) \tag{4.2}$$
where $\bar{x} = E(X) < \infty$, $[a,b]$ is a finite interval, and
$$d := E|X - \bar{x}| = 2\int_a^{\bar{x}} (\bar{x}-x)\,dF(x) = 2\int_{\bar{x}}^b (x-\bar{x})\,dF(x)$$
is the expected absolute deviation about the mean. Using this additional information on the random variable $X$, it is shown that $BH$ gets closer to $E\phi(X) =: \bar{\phi}$ than $EM$, i.e., $\bar{\phi} \le BH \le EM$.

Remark 4.1. The upper bound $BH$ can be obtained for an infinite interval $(a,b)$, $-\infty \le a < b \le +\infty$, under additional assumptions on $\phi$; see Ben-Tal and Hochman (1972). $\square$

Example 4.1. $\phi(x) = x^2$, $x > 0$, $X \sim U(0,1)$. Then $\bar{x} = 1/2$, $d = 1/4$, $\bar{\phi} = 1/3$. Using (4.1) and (4.2) we obtain $EM = 1/2$ and $BH = 3/8$. Here $\phi^*(y) = y^2/4$, and we compute $C = 5/12$, $D = 4/9$. Jensen's inequality yields the lower bound $J = 1/4$, and $J \le BH \le C \le D \le EM$. (A numerical check of these values follows.) $\square$
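The numbers of Example 4.1 can be reproduced with exact uniform moments; the following sketch (our addition) evaluates (4.1), (4.2) and the bounds $C$ and $D$.

```python
# Reproducing Example 4.1: phi(x) = x^2, X ~ U(0,1) -- a sketch.
a, b = 0.0, 1.0
xbar, d = 0.5, 0.25                     # mean and mean absolute deviation

phi  = lambda x: x * x
conj = lambda y: y * y / 4.0            # phi*(y) = y^2 / 4

EM = ((b - xbar) * phi(a) + (xbar - a) * phi(b)) / (b - a)   # 1/2, by (4.1)
p1, p2 = d / (2 * (xbar - a)), d / (2 * (b - xbar))
BH = p1 * phi(a) + (1 - p1 - p2) * phi(xbar) + p2 * phi(b)   # 3/8, by (4.2)

E_xdphi = 2.0 / 3.0                     # E X phi'(X) = 2 E X^2
E_dphi  = 1.0                           # E phi'(X)   = 2 E X
C = E_xdphi - conj(E_dphi)              # 2/3 - 1/4 = 5/12
D = phi(E_xdphi / E_dphi)               # (2/3)^2   = 4/9
print(EM, BH, C, D)                     # 0.5  0.375  0.41666...  0.44444...
```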

The next example illustrates a situation where $EX\phi'(X)$ and $E\phi'(X)$ (and hence $C$ and $D$) are easy to compute, while $E\phi(X)$ requires the evaluation of a complicated integral. Moreover, in this example, the upper bounds $BH$ and $EM$ are shown to be trivial, i.e., $BH = EM = +\infty$.

Example 4.2. Let $\phi(x) = -\ln(1-x^2)$ and let $X$ be a random variable with density $f(x) = \frac{3}{2}(1-x^2)$ for $0 \le x \le 1$. Then,
$$\bar{\phi} = -\frac{3}{2}\int_0^1 (1-x^2)\ln(1-x^2)\,dx. \tag{4.3}$$
To compute the integral (4.3) we use the following known integrals (see e.g. Gradshteyn and Ryzhik [1980], pp. 557-558):
$$\int_0^1 x^\lambda \ln(1+x)\,dx = \frac{1}{\lambda+1}\left[\ln 2 - \beta(\lambda+2)\right] \quad (\lambda > -1),$$
$$\int_0^1 x^\lambda \ln(1-x)\,dx = \frac{1}{\lambda+1}\left[\psi(1) - \psi(\lambda+2)\right] \quad (\lambda > -1),$$
where $\psi(1) = -\gamma$, $\gamma = 0.577215\ldots$ is Euler's constant, and the functions $\psi$, $\beta$ satisfy $\psi(x+1) = \psi(x) + \frac{1}{x}$ and $\beta(x) + \beta(x+1) = \frac{1}{x}$, $x > 0$ (see e.g. [12], p. 945). After some algebraic manipulation one obtains
$$\bar{\phi} = \frac{5}{3} - 2\ln 2 \approx 0.280372.$$
The conjugate function is given by
$$\phi^*(y) = \sqrt{1+y^2} - 1 + \ln\frac{2}{1+\sqrt{1+y^2}},$$
and
$$EX\phi'(X) = 3\int_0^1 x^2\,dx = 1, \qquad E\phi'(X) = 3\int_0^1 x\,dx = \frac{3}{2}.$$
Hence $\phi^*(E\phi'(X)) = \phi^*(3/2) \approx 0.4653$, and thus $C \approx 0.53468$ and $D = \phi(2/3) = \ln\frac{9}{5} \approx 0.58779$. Jensen's inequality yields the lower bound $J = \phi(\bar{x}) = \phi(3/8) \approx 0.1515$. A rough estimate of $\bar{\phi}$ could be obtained by averaging the upper bound $C$ and the lower bound $J$ to give $0.34309$. Finally, we note that since here $\phi(1) = +\infty$, $EM = BH = +\infty$ (it can be verified that $d > 0$). $\square$
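For Example 4.2, the integral (4.3) and the bounds can be checked with numerical quadrature; the sketch below (our addition) assumes scipy is available.

```python
# Numerical check of Example 4.2 -- a sketch.
import numpy as np
from scipy.integrate import quad

phi  = lambda x: -np.log(1.0 - x * x)
f    = lambda x: 1.5 * (1.0 - x * x)    # density on (0, 1)
conj = lambda y: np.sqrt(1 + y * y) - 1 + np.log(2.0 / (1 + np.sqrt(1 + y * y)))

phibar, _ = quad(lambda x: phi(x) * f(x), 0, 1)   # ~ 0.280372
C = 1.0 - conj(1.5)                     # E X phi'(X) = 1, E phi'(X) = 3/2
D = phi(2.0 / 3.0)                      # = ln(9/5)
print(phibar, 5/3 - 2*np.log(2))        # agree to quadrature accuracy
print(C, D)                             # ~ 0.53468, ~ 0.58779
```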

In Theorem 3.2 we showed that $C \le D$, and Ben-Tal and Hochman prove that $BH \le EM$. Examples 4.1 and 4.2 illustrate situations where $C \le D \le EM$. In fact, we tested many other examples and found $C \le EM$. This inequality is not, however, always valid, as illustrated in our next example.

Example 4.3. Let
$$\phi(x) = 1 - \sqrt{1-(x-1)^2}, \quad 0 \le x \le 2, \qquad f(x) = \frac{2}{4-\pi}\left(1 - \sqrt{1-(x-1)^2}\right), \quad 0 \le x \le 2.$$
Then $\bar{x} = 1$, $EM = 1$, $BH \approx 0.7766$, $J = \phi(\bar{x}) = 0$ and $\bar{\phi} = E\phi(X) \approx 0.447$. The conjugate $\phi^*$ is
$$\phi^*(y) = \sqrt{1+y^2} + y - 1, \tag{4.4}$$
and
$$EX\phi'(X) = \frac{3\pi - 4}{3(4-\pi)} \approx 2.1065, \qquad E\phi'(X) = 0.$$
Hence $\phi^*(E\phi'(X)) = \phi^*(0) = 0$, and then $C \approx 2.1065 > EM = 1$. $\square$

Note that the bound $D$ is not computable here, since the assumption $E\phi'(X) > 0$ of Theorem 2.1 is violated. The example not only demonstrates that $EM$ is better than $C$, but also that the bound $C$ may be a "bad" upper bound. However, we show in Section 6 that the bound $C$ can be considerably improved, to be even sharper than $BH$; see Theorem 6.1 and Example 6.2. A numerical check of this example follows.
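The sketch below (our addition; scipy quadrature assumed, with the integrable endpoint singularities of $\phi'$ left to the adaptive routine) confirms that $C$ exceeds $EM$ in Example 4.3:

```python
# Numerical check of Example 4.3 -- a sketch.
import numpy as np
from scipy.integrate import quad

phi  = lambda x: 1.0 - np.sqrt(1.0 - (x - 1.0) ** 2)
dphi = lambda x: (x - 1.0) / np.sqrt(1.0 - (x - 1.0) ** 2)
f    = lambda x: 2.0 / (4.0 - np.pi) * phi(x)     # density on (0, 2)
conj = lambda y: np.sqrt(1.0 + y * y) + y - 1.0   # phi*, see (4.4)

phibar,  _ = quad(lambda x: phi(x) * f(x), 0, 2)            # ~ 0.447
E_xdphi, _ = quad(lambda x: x * dphi(x) * f(x), 0, 2)       # ~ 2.1065
E_dphi,  _ = quad(lambda x: dphi(x) * f(x), 0, 2)           # ~ 0 by symmetry

C  = E_xdphi - conj(E_dphi)             # ~ 2.1065
EM = (phi(0.0) + phi(2.0)) / 2.0        # = 1, since xbar = 1
print(phibar, C, EM)                    # C > EM
```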

We have already mentioned in the introduction that the computation of the upper bounds $EM$ and $BH$ requires a finite mean $\bar{x}$. This is not the case for $C$. Further, $BH$ requires the value of $d$, which may be difficult to compute. In the last example of this section we consider a case where $\bar{x}$ and $d$ fail to exist, and therefore the upper bounds $EM$ and $BH$ are not available.

Example 4.4. Let $X$ be a random variable with density $f(x) = \frac{2}{\pi(1+x^2)}$, $x > 0$, and let $\phi(x) = -2\sqrt{x}$. Then
$$\bar{\phi} = -\frac{4}{\pi}\int_0^\infty \frac{\sqrt{x}}{1+x^2}\,dx = -2\sqrt{2} \approx -2.8284,$$
$E\phi'(X) = -\sqrt{2}$, and $EX\phi'(X) = -E\sqrt{X} = -\sqrt{2}$. Here $\phi^*(y) = -1/y$ for $y < 0$, and hence
$$C = -\sqrt{2} - \phi^*(-\sqrt{2}) = -\frac{3\sqrt{2}}{2} \approx -2.1213. \qquad \square$$

5. COMPARISONS OF BOUNDS IN $\mathbb{R}^n$

Gassmann and Ziemba (1986) extend an idea of Edmundson and Madansky (1956) to derive an upper bound on the expected value of a convex function of a random vector. The bound is given as the solution of the following linear program (see Gassmann and Ziemba (1986), Theorem 1):
$$GZ = \max\left\{\sum_{i=1}^m \lambda_i\,\phi(v_i) : \sum_{i=1}^m \lambda_i = 1,\ \sum_{i=1}^m \lambda_i v_i = \bar{x},\ \lambda \ge 0\right\} \tag{5.1}$$
where $\phi: S \to \mathbb{R}$ is a convex function, $S \subset \mathbb{R}^n$ is convex, $\{v_1, \ldots, v_m\}$ are the extreme points of a bounded convex polyhedron containing $S$, and $\bar{x} = (EX_1, \ldots, EX_n)^T$ is the finite mean of the random vector $X$. We compare the bound $GZ$ with the upper bound derived in Theorem 3.1: $C = EX^T\nabla\phi(X) - \phi^*(E\nabla\phi(X))$.

Example 5.1 (taken from Gassmann-Ziemba [1986], p. 42). Let
$$\phi(x_1,x_2) = e^{x_1}, \qquad f(x_1,x_2) = \begin{cases} \dfrac{3}{2\pi}\sqrt{1-x_1^2-x_2^2} & \text{if } x_1^2+x_2^2 \le 1 \\ 0 & \text{otherwise.} \end{cases}$$

Then $S = \{x: x_1^2 + x_2^2 \le 1\}$ and
$$\bar{\phi} = \frac{3}{2\pi}\iint_S e^{x_1}\sqrt{1-x_1^2-x_2^2}\;dx_1\,dx_2 = \frac{3}{e} \approx 1.1036. \tag{5.2}$$
Using (5.1), the best upper bound derived in [11] is shown to be $GZ \approx 1.54308$.

We now compute the bound $C$. The conjugate of $\phi(x_1,x_2) = e^{x_1}$ is $\phi^*(y_1, y_2) = y_1\ln y_1 - y_1$ for $y_2 = 0$ ($+\infty$ otherwise), and $E\nabla\phi(X) = (Ee^{X_1}, 0)^T = (3/e,\ 0)^T$ (using (5.2)); then
$$\phi^*(E\nabla\phi(X)) = \phi^*(3/e,\ 0) = \frac{3}{e}\left(\ln\frac{3}{e} - 1\right) \approx -0.9948.$$
Now,
$$EX^T\nabla\phi(X) = E(X_1 e^{X_1}) = \frac{3}{2\pi}\iint_S x_1 e^{x_1}\sqrt{1-x_1^2-x_2^2}\;dx_1\,dx_2 = \frac{3e^2-21}{2e} \approx 0.2146,$$
and then $C \approx 1.20948 < GZ \approx 1.54308$. Note that Jensen's inequality yields the lower bound $J = \phi(\bar{x}) = \phi(0,0) = 1$, and thus an estimate of $\bar{\phi}$ could be obtained by $\frac{J+C}{2} \approx 1.10474$, giving an error of about $0.1\%$. (A numerical check of this example follows.) $\square$
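A numerical reconstruction of Example 5.1 (our addition): the $GZ$ bound is the LP (5.1) over the vertices of the square $[-1,1]^2$ containing $S$, solved here with scipy.optimize.linprog; the moments come from two-dimensional quadrature.

```python
# Numerical check of Example 5.1 -- a sketch assuming scipy.
import numpy as np
from scipy.integrate import dblquad
from scipy.optimize import linprog

# f = (3/(2 pi)) sqrt(1 - x1^2 - x2^2) on the unit disk, 0 outside
dens = lambda x1, x2: 1.5 / np.pi * np.sqrt(max(0.0, 1.0 - x1*x1 - x2*x2))

Eexp, _  = dblquad(lambda x2, x1: np.exp(x1) * dens(x1, x2),
                   -1, 1, lambda x1: -1.0, lambda x1: 1.0)    # E e^{X1} = 3/e
Exexp, _ = dblquad(lambda x2, x1: x1 * np.exp(x1) * dens(x1, x2),
                   -1, 1, lambda x1: -1.0, lambda x1: 1.0)    # ~ 0.2146

# GZ bound (5.1): maximize over the 4 vertices of [-1,1]^2, mean (0,0)
V = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
res = linprog(-np.exp(V[:, 0]),                  # linprog minimizes
              A_eq=np.vstack([np.ones(4), V.T]),
              b_eq=np.array([1.0, 0.0, 0.0]))
GZ = -res.fun                                    # ~ 1.54308

C = Exexp - Eexp * (np.log(Eexp) - 1.0)          # phi*(y1, 0) = y1 ln y1 - y1
print(Eexp, 3.0 / np.e)                          # ~ 1.1036
print(C, GZ)                                     # ~ 1.2095 < ~ 1.54308
```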

For a random vector $X = (X_1, \ldots, X_n)^T$ with independent components $X_i$, the Edmundson-Madansky and Ben-Tal-Hochman upper bounds are available when $S$ is an $n$-dimensional rectangle of the form $S = \prod_{k=1}^n [a_k, b_k]$. They are given by the following expressions [1]:
$$EM = \sum_{i_1=0}^1 \cdots \sum_{i_n=0}^1 \left(\prod_{k=1}^n \pi_k^{i_k}\right)\phi(c_1^{i_1}, \ldots, c_n^{i_n}) \tag{5.3}$$
where $c_k^0 = a_k$, $c_k^1 = b_k$, $\pi_k^0 = \dfrac{b_k - \bar{x}_k}{b_k - a_k}$, $\pi_k^1 = \dfrac{\bar{x}_k - a_k}{b_k - a_k}$, and
$$BH = \sum_{\delta \in \Delta_3^n} \left(\prod_{k=1}^n p_k^{\delta_k}\right)\phi(\eta_1^{\delta_1}, \ldots, \eta_n^{\delta_n}) \tag{5.4}$$
where $\delta_k$ is 1, 2 or 3, $\Delta_3^n$ is the set of $3^n$ $n$-dimensional vectors whose components are all 1's and/or 2's and/or 3's,
$$p_k^1 = \frac{d_k}{2(\bar{x}_k - a_k)}, \quad p_k^2 = \frac{d_k}{2(b_k - \bar{x}_k)}, \quad p_k^3 = 1 - p_k^1 - p_k^2, \qquad \eta_k^1 = a_k, \quad \eta_k^2 = b_k, \quad \eta_k^3 = \bar{x}_k,$$
and $d_k$, $\bar{x}_k$ denote the corresponding parameters of $X_k$.

As mentioned in [11], for the independent case the upper bound $GZ$ is in fact worse than $EM$ (and therefore than $BH$).

Example 5.2. Let
$$\phi(x_1, x_2) = \frac{5}{2}x_1^2 + \frac{1}{2}x_2^2 + x_1 x_2 - x_1 - x_2, \qquad f(x_1,x_2) = \begin{cases} 1 & \text{if } 0 \le x_1, x_2 \le 1 \\ 0 & \text{otherwise.}\end{cases}$$

Then, $\bar{\phi} = E\phi(X_1, X_2) = \frac{1}{4}$. Using (5.3) and (5.4) we obtain, respectively,
$$EM = \frac{1}{4}\left[\phi(0,0) + \phi(0,1) + \phi(1,0) + \phi(1,1)\right] = \frac{3}{4}$$
and
$$BH = \frac{1}{16}\left[\phi(0,0) + \phi(0,1) + \phi(1,0) + \phi(1,1)\right] + \frac{1}{8}\left[\phi(0,\tfrac12) + \phi(\tfrac12,0) + \phi(1,\tfrac12) + \phi(\tfrac12,1)\right] + \frac{1}{4}\,\phi(\tfrac12,\tfrac12) = \frac{3}{8}.$$
From (5.1) we have
$$GZ = \max\left\{\tfrac32\lambda_2 - \tfrac12\lambda_3 + 2\lambda_4 :\ \lambda_1+\lambda_2+\lambda_3+\lambda_4 = 1,\ \lambda_2+\lambda_4 = \tfrac12,\ \lambda_3+\lambda_4 = \tfrac12,\ \lambda \ge 0\right\} = \max_{0\le\lambda_4\le\frac12}\left(\tfrac12 + \lambda_4\right) = 1.$$
Now we compute the conjugate function. Writing $\phi(x) = \frac{1}{2}x^T A x - e^T x$ with
$$A = \begin{pmatrix} 5 & 1 \\ 1 & 1 \end{pmatrix}, \qquad e = (1,1)^T,$$
we have
$$\phi^*(y) = \frac{1}{2}(y+e)^T A^{-1} (y+e),$$
and $E\nabla\phi(X_1,X_2) = (2,0)^T$, $EX^T\nabla\phi(X) = \frac{3}{2}$. Then $\phi^*(E\nabla\phi(X)) = \phi^*(2,0) = 1$. This yields the upper bound $C = \frac{1}{2}$, and we have
$$\bar{\phi} \le BH \le C \le EM \le GZ.$$
Finally, Jensen's inequality yields the lower bound $J = \phi(\bar{x}) = \phi(\tfrac12,\tfrac12) = 0$. (Note incidentally that $\frac{J+C}{2} = \bar{\phi} = \frac14$. A numerical check of this example follows.) $\square$
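The following sketch (our addition) reproduces the numbers of Example 5.2 from exact moments of the uniform distribution on the unit square.

```python
# Numerical check of Example 5.2 -- a sketch.
import itertools
import numpy as np

A = np.array([[5.0, 1.0], [1.0, 1.0]])
e = np.ones(2)
phi  = lambda x: 0.5 * x @ A @ x - e @ x   # (5/2)x1^2 + (1/2)x2^2 + x1 x2 - x1 - x2
conj = lambda y: 0.5 * (y + e) @ np.linalg.inv(A) @ (y + e)

EM = np.mean([phi(np.array(v)) for v in itertools.product([0.0, 1.0], repeat=2)])

# BH: independent 3-point marginals at (0, 1, 1/2) with weights (1/4, 1/4, 1/2)
pts, wts = [0.0, 1.0, 0.5], [0.25, 0.25, 0.5]
BH = sum(w1 * w2 * phi(np.array([p1, p2]))
         for p1, w1 in zip(pts, wts) for p2, w2 in zip(pts, wts))

E_grad  = A @ np.array([0.5, 0.5]) - e           # = (2, 0)
E_xgrad = 5/3 + 1/3 + 2 * (1/4) - 1/2 - 1/2      # E X^T grad phi(X) = 3/2
C = E_xgrad - conj(E_grad)                       # 3/2 - 1 = 1/2
print(EM, BH, C)                                 # 0.75  0.375  0.5
```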

The bound $C$ is in fact sharp (i.e., $C = \bar{\phi}$) for many examples. For example, consider a piecewise linear function of the form
$$\phi(x) = \max_{i=1,\ldots,k} \pi_i^T(x - t) \tag{5.5}$$
where $t$ is fixed. This is the form of the recourse function in stochastic linear programming (see Wets [1974]). In this case,
$$\phi^*(v) = \inf\left\{\sum_{i=1}^k \lambda_i\,\phi_i^*(v^i) :\ \sum_{i=1}^k \lambda_i = 1,\ \sum_{i=1}^k \lambda_i v^i = v,\ \lambda \ge 0\right\} \tag{5.6}$$
from Rockafellar [1970], Theorem 16.5, where $\phi_i(x) = \pi_i^T(x-t)$ and
$$\phi_i^*(v^i) = \sup_x\left\{(v^i)^T x - (x-t)^T\pi_i\right\} = \begin{cases} t^T\pi_i & \text{if } v^i = \pi_i \\ +\infty & \text{otherwise.} \end{cases} \tag{5.7}$$
From (5.7),
$$\phi^*(v) = \begin{cases} t^T v & \text{if } v \in \operatorname{co}\{\pi_i,\ i=1,\ldots,k\} \\ +\infty & \text{otherwise.} \end{cases} \tag{5.8}$$

Let $\phi(x) = (x-t)^T\pi(x)$, where $\pi(x)$ denotes a maximizing $\pi_i$ in (5.5) ($\pi(x)$ is not necessarily unique), and let $\nabla\phi(x) = \pi(x)$. Then
$$E\phi(X) = E\left[(X-t)^T\pi(X)\right] = E\left[X^T\pi(X)\right] - E\left[t^T\pi(X)\right] = E\left[X^T\nabla\phi(X)\right] - \phi^*(E\nabla\phi(X)), \tag{5.9}$$
where we note that $E\nabla\phi(X) \in \operatorname{co}\{\pi_i,\ i=1,\ldots,k\}$, so that $\phi^*(E\nabla\phi(X)) = t^T E\nabla\phi(X)$ by (5.8). Hence, $\bar{\phi} = C$.
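The sharpness claim (5.9) can be illustrated numerically; the sketch below (our addition, with arbitrary random slopes $\pi_i$ and shift $t$) shows $C$ matching $E\phi(X)$ up to sampling error for a max-affine $\phi$.

```python
# Illustrating (5.9): C is tight for a max-affine (recourse-type) phi -- a sketch.
import numpy as np

rng = np.random.default_rng(2)
P = rng.normal(size=(5, 3))                 # slopes pi_1, ..., pi_5 in R^3
t = rng.normal(size=3)

X = rng.uniform(-1.0, 1.0, size=(200_000, 3))
vals = (X - t) @ P.T                        # pi_i^T (x - t), one column per i
idx = vals.argmax(axis=1)                   # active piece: grad phi(x) = pi_i(x)

E_phi   = vals.max(axis=1).mean()
E_xgrad = np.einsum('ij,ij->i', X, P[idx]).mean()
E_grad  = P[idx].mean(axis=0)               # lies in co{pi_i}

C = E_xgrad - t @ E_grad                    # phi*(v) = t^T v on co{pi_i}, by (5.8)
print(E_phi, C)                             # equal up to Monte Carlo error
```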

6. REFINING THE UPPER BOUNDS IN $\mathbb{R}$ AND $\mathbb{R}^n$

The upper bound $C$ derived in Theorem 3.1 can be naturally sharpened in the one-dimensional case when $X$ is a continuous random variable with density function $f(x)$.

Theorem 6.1. Let $\phi: \mathbb{R} \to \mathbb{R}$ be a convex differentiable function and $X$ a random variable with support $(a,b)$ and density $f(x)$. Then,
$$E\phi(X) \le \frac{1}{2}\left[K(a,b) - \int_a^b x\,\phi(x)f'(x)\,dx - \phi^*(E\phi'(X))\right] =: \hat{C} \tag{6.1}$$
where
$$K(a,b) := b\,\phi(b)f(b) - a\,\phi(a)f(a). \tag{6.2}$$
Moreover, $\hat{C}$ is sharper than $C$, i.e.,
$$\hat{C} \le C. \tag{6.3}$$

Proof: From Theorem 3.1 in $\mathbb{R}$,
$$E\phi(X) \le C = EX\phi'(X) - \phi^*(E\phi'(X)). \tag{6.4}$$
Integrating $EX\phi'(X) = \int_a^b x\,\phi'(x)f(x)\,dx$ by parts, we obtain from (6.4):
$$E\phi(X) \le \left[x\,\phi(x)f(x)\right]_a^b - \int_a^b \phi(x)f(x)\,dx - \int_a^b x\,\phi(x)f'(x)\,dx - \phi^*(E\phi'(X))$$
$$= K(a,b) - E\phi(X) - \int_a^b x\,\phi(x)f'(x)\,dx - \phi^*(E\phi'(X)), \tag{6.5}$$
and then (6.1) follows. To show that $\hat{C} \le C$, observe that
$$\hat{C} = \frac{1}{2}\left[EX\phi'(X) + E\phi(X) - \phi^*(E\phi'(X))\right] = \frac{1}{2}\left(C + E\phi(X)\right)$$
and that $E\phi(X) \le C$, implying (6.3). $\square$

Example 6.1. We reconsider Example 4.3 of Section 4. Here
$$K(a,b) = K(0,2) = \frac{4}{4-\pi} \qquad \text{and} \qquad \int_0^2 x\,\phi(x)f'(x)\,dx = \frac{3\pi-4}{3(4-\pi)}.$$
This yields the upper bound
$$\hat{C} = \frac{16-3\pi}{6(4-\pi)} \approx 1.2766 < C \approx 2.1065.$$
Note that we still have $EM = 1 < \hat{C}$; but see Example 6.2. $\square$

It is interesting to note that when $X \sim U(a,b)$, the upper bound $\hat{C}$ is better than $EM$. For if $X \sim U(a,b)$, then $EM = \frac{\phi(a)+\phi(b)}{2}$ and
$$\hat{C} = \frac{1}{2}\left[\frac{b\,\phi(b) - a\,\phi(a)}{b-a} - \phi^*(\bar{t})\right], \qquad \text{where } \bar{t} := \frac{\phi(b)-\phi(a)}{b-a} \in [\phi'(a), \phi'(b)]$$
(since $\phi$ is convex). Hence, from the inequality $\phi(b) + \phi^*(\bar{t}) \ge b\,\bar{t}$, it follows immediately that $\hat{C} \le EM$.

Following Remark 3.1 of Section 3, we can further sharpen the upper bound $C$ by using a lower bound established by Ben-Tal and Hochman (1972) which is better than Jensen's lower bound. Let $g: \mathbb{R} \to \mathbb{R}$ be a convex function and $Y$ a random variable with support $(a,b)$; then:

$$Eg(Y) \ge p\,g\!\left(\bar{y} + \frac{d}{2p}\right) + (1-p)\,g\!\left(\bar{y} - \frac{d}{2(1-p)}\right) =: L_g(d,p) \tag{6.6}$$
and $L_g(d,p) \ge g(E(Y))$, where $\bar{y} = E(Y) < \infty$, $d = E|Y - \bar{y}|$ and $p := \Pr(Y \ge \bar{y})$.

From (3.5) we have
$$E\phi(X) = EX\phi'(X) - E\phi^*(\phi'(X)). \tag{6.7}$$
Applying (6.6) with $g := \phi^*$ and $Y := \phi'(X)$, one obtains (we denote $\bar{\beta} := E\phi'(X)$):
$$E\phi^*(\phi'(X)) \ge p\,\phi^*\!\left(\bar{\beta} + \frac{d}{2p}\right) + (1-p)\,\phi^*\!\left(\bar{\beta} - \frac{d}{2(1-p)}\right) \ge \phi^*(E\phi'(X)), \tag{6.8}$$
and hence from (6.7) this implies
$$E\phi(X) \le EX\phi'(X) - L_{\phi^*}(d,p) =: C_1 \le C, \tag{6.9}$$
where $p = \Pr(\phi'(X) \ge \bar{\beta})$ and $d = E|\phi'(X) - \bar{\beta}|$.

Furthermore, if $X$ is a random variable with density $f(x)$, even sharper bounds are possible. Following the proof of Theorem 6.1 together with (6.9), we obtain
$$E\phi(X) \le \hat{C}_1 := \frac{1}{2}\left[K(a,b) - \int_a^b x\,\phi(x)f'(x)\,dx - L_{\phi^*}(d,p)\right]. \tag{6.10}$$
Clearly, $\hat{C}_1 \le \hat{C}$ and $\hat{C}_1 \le C_1$. A natural question is whether $C_1$ or $\hat{C}_1$ is better than $BH$. It seems difficult (if not impossible) to prove such a result in general. However, in our worst examples (4.3 and 6.1), this appears to be true, as demonstrated below.

Example 6.2. We have already computed in Examples 4.3 and 6.1:
$$\bar{\phi} \approx 0.447, \quad BH \approx 0.7766, \quad EX\phi'(X) \approx 2.1065, \quad K(0,2) = \frac{4}{4-\pi}, \quad \int_0^2 x\,\phi(x)f'(x)\,dx = \frac{3\pi-4}{3(4-\pi)}.$$
For the random variable $\phi'(X)$ we compute $p = \frac12$ and $d = \frac{2}{4-\pi}$. Then, using the conjugate $\phi^*$ given in (4.4), we obtain
$$L_{\phi^*}(d,p) = \frac{1}{2}\left[\phi^*(d) + \phi^*(-d)\right] \approx 1.5354.$$
Using (6.9) and (6.10), it follows that $C_1 \approx 0.5711$ and $\hat{C}_1 \approx 0.5088$. Thus,
$$\bar{\phi} \le \hat{C}_1 \le C_1 \le BH \le EM \le \hat{C} \le C.$$
(A numerical check follows.) $\square$
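The refined bounds of Example 6.2 follow from a few closed-form quantities; the sketch below (our addition) evaluates (6.6), (6.9) and (6.10). Note that in this example $\int_0^2 x\,\phi(x)f'(x)\,dx$ happens to equal $EX\phi'(X)$.

```python
# Numerical check of Example 6.2 -- a sketch.
import numpy as np

conj = lambda y: np.sqrt(1.0 + y * y) + y - 1.0   # phi*, see (4.4)

p, beta = 0.5, 0.0                       # p = Pr(phi'(X) >= 0), beta = E phi'(X)
d = 2.0 / (4.0 - np.pi)                  # mean absolute deviation of phi'(X)

# BH lower bound (6.6) applied to g = phi*, Y = phi'(X)
L = p * conj(beta + d / (2 * p)) + (1 - p) * conj(beta - d / (2 * (1 - p)))

E_xdphi = (3 * np.pi - 4) / (3 * (4 - np.pi))     # ~ 2.1065
K = 4.0 / (4.0 - np.pi)                           # K(0, 2)
I = E_xdphi                                       # int x phi f' dx (same value here)

C1    = E_xdphi - L                      # (6.9),  ~ 0.5711
C1hat = 0.5 * (K - I - L)                # (6.10), ~ 0.5088
print(L, C1, C1hat)                      # ~ 1.5354  ~ 0.5711  ~ 0.5088
```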

We now turn to the problem of refining the bound $C$ in $\mathbb{R}^n$. For a random vector $X = (X_1, \ldots, X_n)^T$ with independent components $X_i$, the Ben-Tal-Hochman lower bound is available when $S$ is an $n$-dimensional rectangle of the form $S = \prod_{i=1}^n (a_i, b_i)$, and is given by:
$$L_h := \sum_{\delta \in \Delta_2^n}\left(\prod_{k=1}^n p_k^{\delta_k}\right) h(\eta_1^{\delta_1}, \ldots, \eta_n^{\delta_n}) \tag{6.11}$$
where $h: \mathbb{R}^n \to \mathbb{R}$ is a convex function, $\delta_k$ is 1 or 2, $\Delta_2^n$ is the set of $2^n$ $n$-dimensional vectors whose components are all 1's and/or 2's,
$$p_k^1 = p_k, \quad p_k^2 = 1 - p_k, \qquad \eta_k^1 = \bar{x}_k + \frac{d_k}{2p_k}, \quad \eta_k^2 = \bar{x}_k - \frac{d_k}{2(1-p_k)},$$
and $d_k$, $p_k$, $\bar{x}_k$ denote the corresponding parameters of $X_k$. To apply (6.11) as in (6.7) and (6.8) requires showing that the random vector $\nabla\phi(X_1, \ldots, X_n)$ has independent components, and computing the corresponding parameters $p_k$ and $d_k$. This fairly complicated task can be avoided by characterizing the class of functions for which the function $\psi(\cdot) := \phi^*(\nabla\phi(\cdot))$ is convex. Moreover, this allows us to express the new upper bound explicitly in terms of the problem's data (without requiring knowledge of $\phi^*$); see Theorem 6.2. The following result gives a sufficient condition for $\psi$ to be convex for a large class of functions arising in applications.

Lemma 6.1. Let $\phi: S \subset \mathbb{R}^n_+ \to \mathbb{R}$ be a given twice continuously differentiable function. If $g: \mathbb{R}^n \to \mathbb{R}^n$, $g(z) := \nabla\phi(z)$, is convex (componentwise), then $\psi(z) := \phi^*(\nabla\phi(z))$ is convex.

Proof: $\psi$ is a convex function if and only if it satisfies the gradient inequality, i.e., for any $x$ and $y$ in $S$,
$$\psi(x) - \psi(y) \ge (x-y)^T\nabla\psi(y). \tag{6.12}$$
By definition, $\psi(y) = \phi^*(\nabla\phi(y))$, and thus $\nabla\psi(y) = \nabla^2\phi(y)\,\nabla\phi^*(\nabla\phi(y)),$

where $\nabla^2\phi(y)$ denotes the Hessian of $\phi$. Using $\nabla\phi^* = (\nabla\phi)^{-1}$, it follows that $\nabla\psi(y) = \nabla^2\phi(y)\,y$. Inequality (6.12) then becomes:
$$\phi^*(\nabla\phi(x)) - \phi^*(\nabla\phi(y)) \ge (x-y)^T\nabla^2\phi(y)\,y. \tag{6.13}$$
Now, since $\phi^*$ is convex, applying the gradient inequality to $\phi^*$ and using $\nabla\phi^* = (\nabla\phi)^{-1}$, we obtain
$$\phi^*(\nabla\phi(x)) - \phi^*(\nabla\phi(y)) \ge (\nabla\phi(x) - \nabla\phi(y))^T y.$$
Thus, to prove (6.13) it is sufficient to show
$$(\nabla\phi(x) - \nabla\phi(y))^T y \ge (x-y)^T\nabla^2\phi(y)\,y. \tag{6.14}$$
Since $g(x) := \nabla\phi(x)$ is assumed convex, we have (componentwise)
$$\nabla\phi(x) - \nabla\phi(y) \ge \nabla^2\phi(y)(x-y). \tag{6.15}$$
Multiplying (6.15) by $y \ge 0$ (recall that $S \subset \mathbb{R}^n_+$) yields (6.14). $\square$

We can now derive a refined upper bound for a random vector $X$ with independent components $X_i$. We make the following assumptions:

(I) $S = \prod_{i=1}^n [a_i, b_i] \subset \mathbb{R}^n_+$;

(II) $\nabla\phi(\cdot)$ is convex.

Theorem 6.2. Suppose (I) and (II) hold. Then,
$$E\phi(X) \le EX^T\nabla\phi(X) - L_\psi =: C_R \tag{6.16}$$
where
$$L_\psi = \sum_{\delta \in \Delta_2^n}\left(\prod_{k=1}^n p_k^{\delta_k}\right)\psi(\eta_1^{\delta_1}, \ldots, \eta_n^{\delta_n}), \qquad \psi(z) = z^T\nabla\phi(z) - \phi(z). \tag{6.17}$$

Proof: From (6.7) (in $\mathbb{R}^n$), we have
$$E\phi(X) = EX^T\nabla\phi(X) - E\phi^*(\nabla\phi(X)).$$
Under Assumptions (I) and (II), Lemma 6.1 is applicable, and thus $E\phi^*(\nabla\phi(X)) = E\psi(X) \ge L_\psi$, as defined in (6.11). Moreover, using $\psi(z) = z^T\nabla\phi(z) - \phi(z)$, the expression (6.17) follows. $\square$

Example 6.3. We reconsider Example 5.2, where we already computed $\bar{\phi} = 0.25$, $EM = 0.75$, $C = 0.50$, $BH = 0.375$, $GZ = 1$, $EX^T\nabla\phi(X) = 1.5$, and $d_1 = d_2 = \frac14$, $\bar{x}_1 = \bar{x}_2 = \frac12$. Assumptions (I) and (II) are clearly satisfied, and thus Theorem 6.2 is applicable. We compute $p_1 = p_2 = \frac12$, and thus, using (6.17),
$$L_\psi = \frac{1}{4}\left[\psi(\tfrac34,\tfrac34) + \psi(\tfrac34,\tfrac14) + \psi(\tfrac14,\tfrac34) + \psi(\tfrac14,\tfrac14)\right]$$
with
$$\psi(z_1,z_2) = (z_1,z_2)\,\nabla\phi(z_1,z_2) - \phi(z_1,z_2) = \frac{5}{2}z_1^2 + \frac{1}{2}z_2^2 + z_1 z_2.$$
Then $L_\psi = 1.1875$, which yields the upper bound $C_R = 0.3125 < BH = 0.375$. (A numerical check follows.) $\square$
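Finally, a sketch (our addition) reproducing Example 6.3 from (6.16)-(6.17):

```python
# Numerical check of Example 6.3 -- a sketch.
psi = lambda z1, z2: 2.5 * z1 * z1 + 0.5 * z2 * z2 + z1 * z2

# two-point marginals of (6.11): eta = xbar +/- d/(2p), with p = 1/2, d = 1/4
pts, wts = [0.75, 0.25], [0.5, 0.5]
L_psi = sum(w1 * w2 * psi(p1, p2)
            for p1, w1 in zip(pts, wts) for p2, w2 in zip(pts, wts))

C_R = 1.5 - L_psi                        # E X^T grad phi(X) = 1.5, from Example 5.2
print(L_psi, C_R)                        # 1.1875  0.3125  (< BH = 0.375)
```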

7. CONCLUSIONS

We have given new upper bounds for the expectation of a convex function using gradient and convex conjugate function information. We have shown that these bounds and their extensions can be better than previous bounds in several examples. We also demonstrated that our bounds are especially useful when the original integral is complicated but has a gradient that can be easily integrated, or when the information required for other bounds (e.g., moments) is not available. The new bounds are thus applicable in a variety of applications with these characteristics.

References

1. Ben-Tal, A. and Hochman, E. (1972) More bounds on the expectation of a convex function of a random variable. J. Appl. Prob. 9, 803-812.

2. Birge, J.R. and Wets, R.J.-B. (1986) Designing approximation schemes for stochastic optimization problems, in particular for stochastic programs with recourse. Math. Prog. Study 27, 54-102.

3. Birge, J.R. and Wets, R.J.-B. (1987) Computing bounds for stochastic programming problems by means of a generalized moment problem. Math. of Operations Res., forthcoming.

4. Cipra, T. (1986) Moment problem with given covariance structure in stochastic programming. Casopis pro Pestovani Matematiky, forthcoming.

5. Dempster, M.A.H. (1980) Introduction to stochastic programming, in Stochastic Programming, M.A.H. Dempster, ed., Academic Press, London.

6. Dula, J.H. (1986) Bounds on the expectation of convex functions. Ph.D. Dissertation, The University of Michigan, Ann Arbor, Michigan.

"26 7. Edmunrdson, H.P. (19561 Eounds on the expectation of a convex function of a random variable. The Rand Corporation Paper 9B2, Santa Monica, California, USR. 8. Ermolier, Y., Gaivoranoki, R., and Nedeva, C. (19871 Stochastic optimization problems with partially knoiun distribution functions. SIRM J. Control and Optimization, forthcoming. 9. Fishburn, P.C. [19701 Utility Theory for Decision Making, R.E. Krieger, Huntington, New York. 10. Frauendorfer, K. (1986) Solving S.L.P. recourse problems with arbitrary rultivariate distributions-the dependent case. Institut fuer Operations Research, Universitate Zuerich, aorking papPr. 11. Gassmann, H. and W. Ziemba, (1986) A tight upper bound for the expectation of a convex function of a multivariate random variable. Math. Prog. Stud. 27, 39-53. 12. Gradshteyn I.S. and I.M. Ryzhik. (1980) Table of Integrals, Series and Products.Corrected and Enlarged Edition. Rcademic F'ress, Ne'L York. 13. Karlin, S. and Y. Renotl, (1981) Univariate and multivariate total positivity, generalized convexity and related inequalities in Generalized concavity in optimization and economics. Edited by S. Schaible and Ziemba W.T., Rcaderic Press, New York.

14. Madansky, A. (1959) Bounds on the expectation of a convex function of a multivariate random variable. Annals Math. Stat. 30, 743-746.

15. Rockafellar, R.T. (1970) Convex Analysis. Princeton University Press, Princeton, New Jersey, USA.

16. Wets, R.J.-B. (1974) Stochastic programs with fixed recourse: the equivalent deterministic program. SIAM Review 16, 309-339.