APPLYING QUANTILE OPTIMIZATION TECHNIQUES TO SOME CLASS OF PROBABILISTIC CONSTRAINED PROGRAMMING PROBLEMS

Andrei V. Naumov
Department of Industrial and Operations Engineering
University of Michigan
Ann Arbor, MI 48109

Andrei I. Kibzun
Department of Applied Mathematics
Moscow Aviation Institute
Moscow, 125080, Russia

Technical Report 92-14
February 1992

APPLYING QUANTILE OPTIMIZATION TECHNIQUES TO SOME CLASS OF PROBABILISTIC CONSTRAINED PROGRAMMING PROBLEMS

ANDREI I. KIBZUN AND ANDREI V. NAUMOV

Department of Applied Mathematics, Moscow Aviation Institute, Moscow, Russia, 127080
Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, Michigan 48109

(Revised on January 26, 1992.)

This paper considers linear probabilistic constrained programming problems. Such problems can be transformed into the equivalent optimization of the quantile function of an auxiliary random variable's distribution. The General Minimax Approach allows us to obtain an upper bound on the optimal value of the objective function of the equivalent problem. An algorithm that applies linear programming solution techniques to improve this upper bound is presented. We also present an algorithm that reduces a subset of the considered problems to unconditional quantile optimization problems. In this case we propose to use the stochastic quasi-gradient algorithm for quantile function optimization to obtain an asymptotically exact solution of the equivalent problem.

Keywords: linear probabilistic constrained programming problems; algorithm for reduction; quantile optimization problem; guaranteeing solution; quantile optimization algorithms.

1 Introduction.

In this paper we consider the following probabilistic constrained programming problem:

    Φ = c^T u → min_u                                            (1.1)

subject to:

    φ(u) = P{w : Au + Bw ≤ b} ≥ α,
    A_1 u ≤ b_1,

where u is the (n×1) vector of control variables; c, b, b_1 are given deterministic vectors of dimensions (n×1), (m×1), (m_1×1), respectively; w is an (m_2×1) random vector with a given probability distribution; and A, A_1, B are given deterministic matrices of appropriate dimensions. The practical importance of these problems is well known. We refer only to a few papers [8],[15],[20],[21], where the interested reader can find various applications of this problem. In general, methods for the solution of such problems are based [17] on the application of suitable nonlinear programming techniques, supplemented by Monte Carlo simulation procedures or procedures for calculating multidimensional integrals [2],[3],[17] to find values and gradients of the probability functions. The list of these methods includes

the following: the sequential unconstrained minimization technique [4],[19], the method of feasible directions [5],[16], the supporting hyperplane method [20],[21],[22], and the reduced gradient method [12],[13]. In many practically important cases, however, the confidence level α is interpreted as a system reliability, so α is close to unity. In this case, the procedure for calculating the value and gradient of the probability function φ(u), using Monte Carlo or multidimensional integration techniques, becomes complicated or loses precision. On the other hand, problem (1.1) can be rewritten [5] as an equivalent quantile optimization problem in the following form:

    min_u Φ_α(u)                                                 (1.2)

subject to:

    A_1 u ≤ b_1.

Here the quantile function Φ_α(u) is:

    Φ_α(u) = min{φ : P{w : c^T u ≤ φ, Au + Bw ≤ b} ≥ α}.

This allows us to use the General Minimax Approach (GMA) [11] to obtain an upper bound on the optimal value of the objective function. Let us define Φ(u) = c^T u and the vector function Q(u, w) = Au + Bw. Then, according to the GMA, the solution (Φ̄_α, ū) of the following problem:

    min_u Φ(u)                                                   (1.3)

subject to:

    max_{w∈E_α} Q(u, w) ≤ b,
    A_1 u ≤ b_1,

where E_α is a confidence set with probability measure greater than or equal to α, has the property:

    min_u Φ_α(u) ≤ Φ̄_α                                           (1.4)

over all feasible u. The maximum of the vector function Q(u, w) is taken componentwise, i.e.

    max_{w∈E_α} (A_i. u + B_i. w),   i = 1, ..., m.

The quality of the upper bound Φ̄_α essentially depends on the selection of the confidence set E_α. For a standard Gaussian distribution of the random vector w, it is convenient [11] to select E_α as a sphere centered at the mathematical expectation of the random vector w with probability measure α. In the problem under consideration, Φ(u) and Q(u, w) are linear functions. Hence we can apply linear programming solution techniques to obtain a guaranteeing solution, i.e., an upper bound Φ̄_α. In this paper an algorithm for improving this guaranteeing solution, using a sequence of linear programming solutions, is considered. Often it is possible to reduce the initial problem (1.1) to an equivalent unconditional quantile optimization problem, i.e. problem (1.2) without additional probabilistic

constraints on the function Q(u, w), with a new function Φ that may depend on the random vector w, i.e. Φ = Φ(u, w). We present in this paper a general algorithm for this reduction. This algorithm distinguishes a class of initial problems (1.1) which can be reduced to the equivalent unconditional quantile optimization problem. It also allows us to use a stochastic quasi-gradient algorithm [6],[7] to obtain an asymptotically exact solution of the equivalent problem. This algorithm is especially effective in the case of optimizing high confidence level quantile functions. We can also accelerate the convergence of this algorithm by using the guaranteeing solution as a good starting point. We discuss the application of these techniques in this paper.

2 Algorithm for reduction to an equivalent unconditional quantile optimization problem

Let us consider the initial problem (1.1) and write the general solution of the equation φ = c^T u. It can be written in the following form [1]:

    u = (c^T)^+ φ + (I - (c^T)^+ c^T) z,                         (2.1)

where I is the (n×n) identity matrix, z ∈ R^n, and (c^T)^+ is the pseudoinverse of the vector c^T. The pseudoinverse (c^T)^+ can be defined [1] as the (n×1) matrix which satisfies the Penrose conditions [16]:

    (c^T)^+ c^T (c^T)^+ = (c^T)^+,
    c^T (c^T)^+ c^T = c^T,                                       (2.2)

    c^T (c^T)^+ and (c^T)^+ c^T are symmetric matrices.

Then we can select (c^T)^+ as [10]:

    (c^T)^+ = Vc (c^T V c)^{-1},                                 (2.3)

where V is an (n×n) matrix which satisfies the following conditions:

    c^T V c ≠ 0,                                                 (2.4)
    V c c^T is symmetric.

Let us write the constraints of the initial problem (1.1) using the general solution (2.1):

    P{w : A(c^T)^+ φ + A(I - (c^T)^+ c^T) z + Bw ≤ b} ≥ α,
    A_1 (c^T)^+ φ + A_1 (I - (c^T)^+ c^T) z ≤ b_1.               (2.5)

Our goal is to reduce (2.5) to the following form:

    P{w : A* z + B* w - b* ≤ φ*} ≥ α,                            (2.6)
    A_1* z ≤ b_1*,

where φ* = (φ, φ, ..., φ)^T, φ* ∈ R^{m*}; A*, B*, b*, A_1*, b_1* are deterministic matrices and vectors with corresponding dimensions (m*×n), (m*×m_2), (m*×1), (m_1*×n), (m_1*×1); and m*, m_1* are new dimensions. It is obvious that, according to this goal, it is sufficient to select the matrix V in such a way that every element of the column vector A(c^T)^+ is negative, every element of the column vector A_1(c^T)^+ is nonpositive, and V satisfies (2.4). When some i-th element of the column vector A_1(c^T)^+ is negative, we add: the corresponding row A_{1,i.}(I - (c^T)^+ c^T)/(A_{1,i.}(c^T)^+) to the matrix A*, a row consisting of

zeroes to the matrix B*, the element b_{1,i}/(A_{1,i.}(c^T)^+) to the column vector b*, and an element φ to the column vector φ*. This allows us to obtain an equivalent system of restrictions in the form (2.6). Let us consider V as a matrix of n^2 variables v_{ij}, i = 1, ..., n, j = 1, ..., n. Hence, by (2.3),(2.4),(2.5), to obtain the required matrix V it is necessary to find a feasible solution of one of the following systems of restrictions:

    A_i. V c ≤ -ε,   i = 1, ..., m,
    c^T V c ≥ ε,                                                 (2.7)
    V_i. C_.j = V_j. C_.i,   i, j = 1, ..., n,  j ≠ i,
    A_{1,i.} V c ≤ 0,   i = 1, ..., m_1,

or

    A_i. V c ≥ ε,   i = 1, ..., m,
    c^T V c ≤ -ε,                                                (2.8)
    V_i. C_.j = V_j. C_.i,   i, j = 1, ..., n,  i ≠ j,
    A_{1,i.} V c ≥ 0,   i = 1, ..., m_1,

where C = c c^T. Both (2.7) and (2.8) are systems of linear inequalities and equations. To obtain non-strict inequalities, we replace the zero on the right-hand side of the strict inequalities in (2.7), (2.8) by a small parameter ε whose absolute value is close to zero. In this case, we can apply the first phase of the standard simplex method [14] to find feasible solutions of the restriction systems (2.7),(2.8). Hence, it has been shown that the following lemma holds:

Lemma 2.1 If there exists a feasible solution of at least one of the restriction systems (2.7), (2.8), then the initial problem's restriction system can be reduced to the form (2.6).

If we have obtained the system of inequalities (2.6), then we can write the initial optimization problem (1.1) in the following form:

    φ → min_z                                                    (2.9)

subject to:

    P{w : A*_i. z + B*_i. w - b*_i ≤ φ,  i = 1, ..., m*} ≥ α,
    A_1* z ≤ b_1*.

Problem (2.9) is an unconditional quantile minimization problem:

    Φ_α(z) → min_z                                               (2.10)

subject to:

    A_1* z ≤ b_1*,

where Φ_α(z) is the confidence level α quantile function of the auxiliary random variable

    Φ(z, w) = max_{i=1,...,m*} {A*_i. z + B*_i. w - b*_i}.       (2.11)

Lemma 2.1 thus identifies a class of initial optimization problems (1.1) which can be reduced to the form (2.10).
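Finding a matrix V satisfying system (2.7) is a pure feasibility search in the n^2 entries of V, so any phase-1 LP code can be used. The sketch below is an illustrative implementation, assuming NumPy and SciPy's `linprog` (HiGHS) in place of the hand-coded simplex phase 1 of [14]; the function name and interface are ours, not the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def find_V(A, A1, c, eps=1e-6):
    """Phase-1 feasibility search for system (2.7): every constraint is
    linear in the n^2 entries v_ij of V, so a zero-objective LP applies."""
    m, n = A.shape
    m1 = A1.shape[0]
    c = np.asarray(c, dtype=float).reshape(-1)
    nv = n * n                        # V flattened row-major: v[p*n+q] = V[p,q]

    def row_Vc(a):                    # coefficients of a @ V @ c in the v-variables
        r = np.zeros(nv)
        for p in range(n):
            r[p * n:(p + 1) * n] = a[p] * c
        return r

    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for i in range(m):                # A_i. V c <= -eps
        A_ub.append(row_Vc(A[i])); b_ub.append(-eps)
    A_ub.append(-row_Vc(c)); b_ub.append(-eps)        # c^T V c >= eps
    for i in range(m1):               # A1_i. V c <= 0
        A_ub.append(row_Vc(A1[i])); b_ub.append(0.0)
    for i in range(n):                # symmetry of V c c^T: (Vc)_i c_j = (Vc)_j c_i
        for j in range(i + 1, n):
            r = np.zeros(nv)
            r[i * n:(i + 1) * n] += c * c[j]
            r[j * n:(j + 1) * n] -= c * c[i]
            A_eq.append(r); b_eq.append(0.0)
    res = linprog(np.zeros(nv), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq) if A_eq else None,
                  b_eq=np.array(b_eq) if b_eq else None,
                  bounds=[(None, None)] * nv, method="highs")
    return res.x.reshape(n, n) if res.success else None
```

If `find_V` returns `None`, the same construction with reversed inequality signs encodes system (2.8).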

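Once the reduction succeeds, the quantile function Φ_α(z) of the auxiliary variable (2.11) can be estimated by plain Monte Carlo when w is standard Gaussian: draw a sample of Φ(z, w) and take the [rα]-th order statistic. A minimal sketch (hypothetical helper, NumPy assumed):

```python
import numpy as np

def quantile_Phi(z, A_s, B_s, b_s, alpha, r=100_000, seed=0):
    """Monte Carlo estimate of Phi_alpha(z) for the auxiliary variable
    Phi(z, w) = max_i {A*_i. z + B*_i. w - b*_i} of (2.11), w ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((r, B_s.shape[1]))            # r draws of w
    vals = (A_s @ z) + w @ B_s.T - b_s                    # r x m* matrix of rows
    phi = vals.max(axis=1)                                # Phi(z, w_k), k = 1..r
    return np.sort(phi)[int(np.ceil(r * alpha)) - 1]      # empirical alpha-quantile
```

For α close to unity this brute-force estimate needs a very large sample, which is exactly the difficulty that the order-statistic estimators of Section 3 are designed to avoid.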
3 Methods for the solution of the equivalent problem.

First of all, we extend the results concerning the guaranteeing solution, which were obtained in [15], to the general case of the quantile optimization problem (1.2). Let w be a standard normally distributed vector. Assume that the function Φ also depends on the random vector w and can be expressed in the form (2.11). Under this assumption we can write the general quantile optimization problem for the case under consideration in the following form:

    Φ_α^0 = Φ_α(u) → min_u                                       (3.1)

subject to:

    A_1 u ≤ b_1,

where

    Φ_α(u) = min{φ : P{A* u + B* w - b* ≤ φ*;  Au + Bw - b ≤ 0} ≥ α},

and φ* = (φ, φ, ..., φ)^T ∈ R^{m*}. Here the control variable u has dimension (n×1); the random vector w has dimension (m_2×1); the vectors b, b*, b_1 are given vectors of dimensions (m×1), (m*×1), (m_1×1), respectively; and the other matrices and vectors have corresponding dimensions. In fact, the optimization problems (1.2) and (2.10) are particular cases of the quantile optimization problem in the general form (3.1). Hence, the algorithm presented below can be applied to them too. According to the GMA, the problem for obtaining the guaranteeing solution of (3.1)

can be written in the following form:

    φ → min                                                      (3.2)

subject to:

    max_{w∈E_α} (A*_i. u + B*_i. w) - b*_i ≤ φ,   i = 1, ..., m*,
    max_{w∈E_α} (A_l. u + B_l. w) - b_l ≤ 0,   l = 1, ..., m,    (3.3)
    A_1 u ≤ b_1.

Let us consider the confidence set E_α as a sphere E_{0α} centered at the mathematical expectation of the random vector w with probability measure α. Denote the radius of the confidence sphere E_{0α} by R_0. All functions in (3.3) are linear functions of the control variable u. The maxima in (3.3), with E_{0α} instead of E_α, can be found analytically. Suppose they are attained at the points w_i^0 ∈ R^{m_2} for i = 1, ..., m* and at the points w_l^0 ∈ R^{m_2} for l = 1, ..., m. Define the solution of (3.2),(3.3) as (u^0, φ^0). Let us consider a new confidence sphere E_{1α} with radius R_1 < R_0; then P(E_{1α}) < α. Consider also a solution of the new problem (3.2),(3.3) with E_{1α} instead of E_{0α}. Define it as (u^1, φ^1). Then the following set:

    C = {w : B*_i. w ≤ φ^1 + b*_i - A*_i. u^1,  i = 1, ..., m*;
             B_l. w ≤ b_l - A_l. u^1,  l = 1, ..., m}            (3.4)

has the property E_{1α} ⊆ C. Hence, P(C) ≥ P(E_{1α}).

Lemma 3.1 If P(C) ≥ α, then φ^1 is an upper bound on the optimal value of the quantile function in problem (3.1), and the following inequality holds: φ^1 ≤ φ^0.

Proof.

1. Consider the set C of (3.4) as a confidence set instead of E_α. Then the inequalities (3.3) become:

    A*_i. u + φ^1 - A*_i. u^1 ≤ φ,   i = 1, ..., m*,             (3.5)
    A_l. u - A_l. u^1 ≤ 0,   l = 1, ..., m.

It is obvious that (3.5) holds for the solution (u^1, φ^1). Hence (u^1, φ^1) is a feasible solution for problem (3.2),(3.3) with C instead of the confidence set E_α. It follows from the condition of the lemma that P(C) ≥ α. Therefore, φ^1 is an upper bound on the objective function optimal value in problem (3.1) [11].

2. Let the maxima on E_{1α} in (3.3) be attained at the points w_i^1 for i = 1, ..., m*, and at the points w_l^1 for l = 1, ..., m. Then, since R_1 < R_0, the following inequalities hold:

    B*_i. w_i^1 ≤ B*_i. w_i^0,   i = 1, ..., m*,                 (3.6)
    B_l. w_l^1 ≤ B_l. w_l^0,   l = 1, ..., m.

In fact, (u^0, φ^0) is the solution of problem (3.2),(3.3). So, by (3.3),(3.6), the inequalities

    A*_i. u^0 + B*_i. w_i^1 - b*_i ≤ φ^0,   i = 1, ..., m*,      (3.7)
    A_l. u^0 + B_l. w_l^1 - b_l ≤ 0,   l = 1, ..., m,

also hold. Hence, (u^0, φ^0) is a feasible solution for problem (3.2),(3.3) with E_{1α} instead of E_α. However, (u^1, φ^1) is the optimal solution of this problem. Therefore, φ^1 ≤ φ^0. This completes the proof of the lemma. □

This lemma allows us to present the following algorithm for obtaining an improved upper bound on Φ_α(·):

Step 0. k = 0. Set the linear program (3.2),(3.3) solution φ^0 as an upper bound on Φ_α(·).

Step 1. Decrease by a given decrement h the radius of the confidence sphere E_{kα}. Find a solution (u^{k+1}, φ^{k+1}) of the linear programming problem (3.2),(3.3) with the new confidence sphere instead of E_{kα}.

Step 2. Check the inequality P(C) ≥ α, where C is from (3.4) with (u^{k+1}, φ^{k+1}).

Step 3. If P(C) ≥ α, then set k = k + 1 and continue the cycle from Step 1.

Step 4. In the opposite case, select the previous step's linear program solution, φ^k, as the upper bound on the optimal value of Φ_α(·) in problem (3.1), and set u_α = u^k.

The algorithm presented above allows us to obtain an upper bound on the optimal value of the objective function in the general problem (3.1). Moreover, when the initial problem (1.1) can be reduced to an unconditional quantile optimization problem in the form (2.10), we propose to use the stochastic quasi-gradient procedure for quantile function optimization [6],[7] to obtain an asymptotically exact solution. This procedure can be expressed in the following form:

    z^{k+1} = P_U(z^k - ρ_k q_k(z^k)),   z^0 ∈ U,                (3.8)

where U = {z : A_1* z ≤ b_1*}; P_U(z) is the projector onto the set U; ρ_k is a nonrandom sequence of step multipliers; and k is the number of the procedure step. The quantile

function Φ_α(z) stochastic quasi-gradient q_k(z) is the following random vector:

    q_k(z) = (1/(2β_k)) Σ_{j=1}^{n} [Φ̂_α(z̃_1, ..., z_j + β_k, ..., z̃_n) - Φ̂_α(z̃_1, ..., z_j - β_k, ..., z̃_n)] e_j,   (3.9)

where β_k is a term of a nonrandom decreasing sequence; n is the dimension of the vector z; z̃_j, j = 1, ..., n, are random variables uniformly distributed on [z_j - β_k, z_j + β_k]; Φ̂_α(z) are independent quantile function estimates, obtained from a sample of size r of the random variable Φ(z, w); and e_j, j = 1, ..., n, are basis vectors. To find the stochastic quasi-gradient q_k(z), we propose to use the following statistics of the random variable Φ(z, w):

    Φ̂_{α1}(z) = Φ_{([rα])},                                      (3.10)
    Φ̂_{α2}(z) = Φ_{(r)} - (Φ_{(r)} - Φ_{(r-1)})(γ + ln(r) + ln(1 - α)),

where [rα] is the integer part of the value rα; Φ_{([rα])} is the [rα]-th term of the variational sequence (the ordered sample) of the random variable Φ(z, w), the sample having size r; Φ_{(r)}, Φ_{(r-1)} are the extreme right terms of the variational sequence; and γ is the Euler constant. For the Φ̂_{α1}(z) statistic, it is necessary to use a sample size at least several times larger than r_α = [1/(1-α)] + 1, and to build the empirical cumulative distribution function of the random variable Φ(z, w). On the other hand, for Φ̂_{α2}(z) it is sufficient to use the sample size r = r_α, and we need only the extreme members of the sample to obtain this statistic. So Φ̂_{α2}(z) allows us to estimate a high confidence level quantile function using small samples. Statistical properties of the estimates (3.10) were analyzed in [5],[6],[9]. Conditions for the strong convergence of the iterates to the optimal solution are considered in [5],[7].
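The first statistic in (3.10) is simply an order statistic of the sample. A minimal sketch, assuming NumPy (the function name is illustrative):

```python
import numpy as np

def quantile_est1(sample, alpha):
    """Phi_hat_alpha1 of (3.10): the [r*alpha]-th term of the variational
    (sorted) sequence of a size-r sample of Phi(z, w)."""
    s = np.sort(np.asarray(sample, dtype=float))
    r = s.size
    k = max(int(r * alpha), 1)     # [r*alpha], kept within 1..r
    return s[k - 1]
```

The second statistic additionally needs only the two largest order statistics of a sample of size r_α, which is what makes it cheap at high confidence levels.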

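A single iteration of the procedure (3.8)-(3.9) can be sketched as follows. This is a simplified illustration, not the exact scheme of [6],[7]: `quantile_fn` stands for any independent quantile estimate of Φ_α, `project` for the projector P_U, and all names are ours:

```python
import numpy as np

def sqg_step(z, quantile_fn, beta_k, rho_k, project, rng=None):
    """One step of (3.8)-(3.9): central finite differences of independent
    quantile estimates at randomly perturbed points, then a projected step."""
    rng = rng or np.random.default_rng()
    z = np.asarray(z, dtype=float)
    q = np.zeros_like(z)
    for j in range(z.size):
        zt = rng.uniform(z - beta_k, z + beta_k)    # smoothing draw for the off-coordinates
        zp, zm = zt.copy(), zt.copy()
        zp[j], zm[j] = z[j] + beta_k, z[j] - beta_k
        q[j] = (quantile_fn(zp) - quantile_fn(zm)) / (2.0 * beta_k)
    return project(z - rho_k * q)                    # z^{k+1} = P_U(z^k - rho_k * q_k)
```

Repeated steps with decreasing β_k and ρ_k, together with a consistent quantile estimate, give the asymptotically exact solution referred to in the text; the guaranteeing solution serves as the starting point z^0.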
4 Conclusion.

Using quantile optimization techniques, we have presented a nontraditional approach to the solution of linear probabilistic constrained programming problems. The presented guaranteeing solution algorithm allows us to obtain an improved upper bound on the optimal value of the initial problem's objective function, using linear programming solution techniques. Moreover, in some cases, the initial stochastic programming problem with probabilistic constraints can be reduced to an unconditional quantile optimization problem, i.e., a quantile optimization problem without additional probabilistic restrictions. The algorithm presented for this reduction allows selection of the class of such initial problems. For this class of problems, we have also proposed to apply a stochastic quasi-gradient procedure to obtain an asymptotically exact solution of the equivalent problem. Practical application of these methods to a probabilistic constrained programming problem for water supply system design was considered in [15]. It shows that the guaranteeing solution is close to the asymptotically exact solution, and illustrates the fact that the guaranteeing solution tends to it as α increases [11]. The computational requirements for obtaining the improved guaranteeing solution depend on the confidence level α only through checking the condition P(C) ≥ α at each improving iteration, where C is a polyhedral set for the random vector w. However, we can regulate the number of iterations by choosing the decrement of the confidence sphere radius. On the other hand, the closeness of the guaranteeing solution to an asymptotically exact one allows us to obtain a

good approximation of an exact solution and a good starting point for the stochastic quasi-gradient algorithm and for the other gradient methods mentioned in the introduction. Thus, these quantile optimization methods can be applied effectively to solve the initial probabilistic constrained programming problems, especially for high reliability level systems.

References.

1. A. Albert, Regression and the Moore-Penrose Pseudoinverse (Academic Press, New York, 1972).

2. I. Deak, "Multidimensional integration and stochastic programming," in: Yu. Ermoliev and R. J. B. Wets, eds., Numerical Techniques for Stochastic Optimization (Springer-Verlag, 1987).

3. I. Deak, "Computation of multiple normal probabilities," in: P. Kall and A. Prekopa, eds., Recent Results in Stochastic Programming, Proceedings (Springer-Verlag, 1980).

4. A. V. Fiacco and G. P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques (Wiley, New York, 1968).

5. A. I. Kibzun, "Probabilistic optimization problem," Technical Report 91-34, Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, 1991.

6. A. I. Kibzun and V. Yu. Kurbakovskiy, "Guaranteeing approach to solving quantile optimization problems," Annals of Oper. Res., vol. 30, 1991.

7. A. I. Kibzun and V. Yu. Kurbakovskiy, "Numerical methods for runway space minimization under random disturbances," Engineering Cybernetics, 1, 1991.

8. A. J. King, "Stochastic programming problems: examples from the literature," in: Yu. Ermoliev and R. J. B. Wets, eds., Numerical Techniques for Stochastic Optimization (Springer-Verlag, 1987).

9. V. Yu. Kurbakovskiy, "Accelerated method for high confidence level quantile estimating," in collection of works VNIISI: Nonhomogeneous System Dynamics, vol. 14, VNIISI, Moscow, 1989 (in Russian).

10. R. S. Liptser and A. N. Shiryayev, Statistics of Random Processes II: Applications (Springer-Verlag, New York, 1978).

11. V. V. Malyshev and A. I. Kibzun, Analysis and Synthesis of High Accuracy Aircraft Control (Mashinostroenie, Moscow, 1987) (in Russian).

12. J. Mayer, "A nonlinear programming method for the solution of a stochastic programming model of A. Prekopa," Survey of Math. Progr., North Holland Publishing Co., New York, vol. 2 (1980) 129-139.

13. J. Mayer, "Probabilistic constrained programming: reduced gradient algorithm implemented on PC," International Institute for Applied Systems Analysis, A-2361, Laxenburg, Austria, 1988.

14. K. G. Murty, Linear Programming (Wiley & Sons, 1983).

15. A. V. Naumov, V. Yu. Kurbakovskiy and B. M. Muradov, "Applying quantile optimization methods to water-supply system design," Technical Report 91-29, Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor,

1991.

16. R. Penrose, "A generalized inverse for matrices," Proc. Cambridge Philos. Soc. 51, 1955.

17. A. Prekopa, "Numerical solution of probabilistic constrained programming problems," in: Yu. Ermoliev and R. J. B. Wets, eds., Numerical Techniques for Stochastic Optimization (Springer-Verlag, 1987).

18. A. Prekopa, "Eine Erweiterung der sog. Methode der Zulassigen Richtungen," Math. Operationsforschung und Statistik 5, 1977.

19. A. Prekopa and P. Kelle, "Reliability type inventory models based on stochastic programming," Math. Progr. Study, 9 (1978), 43-58.

20. A. Prekopa and T. Szantai, "Flood control reservoir system design," Math. Progr. Study, 9, 138-151.

21. T. Szantai, "A computer code for solution of probabilistic-constrained stochastic programming problems," in: Yu. Ermoliev and R. J. B. Wets, eds., Numerical Techniques for Stochastic Optimization (Springer-Verlag, 1987).

22. A. F. Veinott, The Supporting Hyperplane Method for Unimodal Programming (North Holland Publishing Co., New York, 1981, pp. 183-210).

23. G. Zoutendijk, Methods of Feasible Directions (Elsevier Publishing Co., Amsterdam and New York, 1960).