:;t i"'I W: ON THE INTERICACY OF CHECKING THE UNBOUNDNESS OF CONVEX FUNCTIONS Khaled S. Al-Sultan Department of Industrial & Operations Engineering The University of Michigan Ann Arbor, MI 48109-2117 and Department of Systems Engineering King Fahd University of Petroleum and Minerals Dhahran 31261, Saudi Arabia Katta G. Murty ~"i'U^~~~~ ~Department of Industrial & Operations Engineering "\,) The University of Michigan Ann Arbor, MI 48109-2117 Technical Report 91-8 March 1991 ~....',,,,,,,^ y'.. f" 1- /.- *,'' / i (. t' g!! 5' /^i /A r'i"sdz*

Abstract

We consider a differentiable convex function $\theta(x)$ defined over an unbounded open convex subset $\Gamma \subset \mathbb{R}^n$. We investigate methods for checking whether $\theta(x)$ is bounded below on $\Gamma$. We present an example which shows that the classical steepest descent method may produce a sequence of points $\{x^r\}$ with $\{\theta(x^r)\}$ converging to a finite value as $r \to \infty$, even when $\theta(x)$ is unbounded below in $\Gamma$. We also give examples to show that several other common properties that hold when $\Gamma$ is bounded may not hold in the unbounded case. Finally we present a method for this problem and pose some open questions.

Keywords: Differentiable convex function, checking unboundedness, steepest descent method.

1 Introduction

The problem of checking whether a differentiable convex function defined over an unbounded convex open subset $\Gamma$ of $\mathbb{R}^n$ is bounded below has been considered in [1]. An algorithm for this problem has been proposed there, but that algorithm may fail under some conditions. Even when $\Gamma = \mathbb{R}^n$, the descent-based classical unconstrained minimization methods fail to solve this problem, for several reasons which are presented below with illustrative examples.

A differentiable convex function defined over $\mathbb{R}^n$ may have a global minimum, and yet there may be a half-line in $\mathbb{R}^n$ along which it only has an infimum and not a minimum. As an example, consider $n = 2$ and $f_1(x_1, x_2) = (\max\{e^{-x_1} - x_2, 0\})^2$. Every point $(x_1, x_2)$ with $x_1 > 0$, $x_2 > 1$ is a global minimum for $f_1(x_1, x_2)$. However, on the half-line $L = \{(x_1, x_2) = (\lambda, 0) : \lambda \geq 0\}$ this function has only an infimum of 0, which is not attained on the half-line. When a classical unconstrained minimization scheme encounters a half-line like this, it is unable to continue any further.
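To see this concretely, restrict $f_1$ to $L$:
\[ f_1(\lambda, 0) = (\max\{e^{-\lambda}, 0\})^2 = e^{-2\lambda}, \]
which is strictly positive for every $\lambda \geq 0$ yet converges to 0 as $\lambda \to \infty$; the infimum 0 is approached but never attained on $L$, even though every point with $e^{-x_1} \leq x_2$ attains the global minimum value 0.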

One might be tempted to conclude that if a differentiable convex function defined on $\mathbb{R}^n$ has a finite infimum (but not a minimum) on some half-line, then it must be bounded below on $\mathbb{R}^n$. This is false, as illustrated by the function $f_2(x_1, x_2) = e^{-x_1} - x_2$ defined on $\mathbb{R}^2$: $f_2(x_1, x_2)$ has an infimum of 0 on the half-line $L$ defined earlier, but is unbounded below over the whole space.

Also, a differentiable convex function may have different infima along different half-lines. For example, consider $f_3(x_1, x_2) = e^{-x_1} + (x_2 - 1)^2$ defined on $\mathbb{R}^2$, and the half-lines
\[ L_{3,1} = \left\{ x = \binom{0}{0} + \lambda \binom{1}{0} : \lambda \geq 0 \right\}, \qquad L_{3,2} = \left\{ x = \binom{0}{1} + \lambda \binom{1}{0} : \lambda \geq 0 \right\}. \]
The infimum of $f_3(x)$ on $L_{3,1}$ is 1; that on $L_{3,2}$ is 0, and this is the global infimum.

Most of the classical methods for unconstrained minimization aim to reach a point where the gradient of the function vanishes. When applied to a differentiable convex function, suppose the method finds a half-line along which the gradient vanishes in the limit. Unfortunately, this provides no guarantee that the function is bounded below. As an example, consider the function $f_4(x_1) = -\log(x_1 + \gamma)$ in one variable $x_1$, defined over $x_1 > -\gamma$, with $\gamma$ a positive constant: $f_4(x_1) \to -\infty$ as $x_1 \to \infty$, even though $\frac{df_4}{dx_1} = \frac{-1}{x_1 + \gamma} \to 0$ as $x_1 \to \infty$.
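The behavior of these examples is easy to observe numerically. The following Python fragment (an illustrative sketch, not part of the original report) evaluates $f_2$, $f_3$, and $f_4$ along the half-lines above:

    import math

    # f2 along L = {(lam, 0) : lam >= 0}: values decrease to the
    # unattained infimum 0, although f2 is unbounded below in x2
    f2 = lambda x1, x2: math.exp(-x1) - x2

    # f3 along L_{3,1} (x2 = 0) and L_{3,2} (x2 = 1): infima 1 and 0
    f3 = lambda x1, x2: math.exp(-x1) + (x2 - 1) ** 2

    # f4 with gamma = 1: f4 -> -infinity while its derivative -> 0
    gamma = 1.0
    f4 = lambda x1: -math.log(x1 + gamma)
    df4 = lambda x1: -1.0 / (x1 + gamma)

    for lam in (1.0, 10.0, 100.0):
        print(f2(lam, 0.0), f3(lam, 0.0), f3(lam, 1.0), f4(lam), df4(lam))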

Perhaps the first method developed for unconstrained minimization is the famous steepest descent method. Proofs of convergence of the steepest descent method in the literature are usually based on the assumption that there is a finite point where the gradient vanishes, and that the sequence of points obtained under the method lies in a compact set; hence these proofs do not throw any light on the convergence behavior of the method when there is no finite point where the gradient of the function vanishes. Moreover, the classical steepest descent method is unable to continue if it reaches a point at which the function only has an infimum, but not a minimum, in the steepest descent direction. To show that this can happen when the steepest descent method is applied to a differentiable convex function, we consider the function
\[ f_5(x_1, x_2) = e^{-x_1} + \log\left(\frac{x_2 + 100}{100}\right) - \frac{x_2}{100(x_2 + 1)} \]
in the region defined by $x_2 > -1$ in $\mathbb{R}^2$. $f_5(x_1, x_2)$ is a convex differentiable function in this region. Let $\bar{x}_1 > -1$, and define $\bar{x} = (\bar{x}_1, 0)$. Verify that $\nabla f_5(\bar{x}) = (-e^{-\bar{x}_1}, 0)$, and that on the half-line $L_5 = \{\bar{x} + \lambda y : \lambda \geq 0\}$, where $y = -\nabla f_5(\bar{x})$ is the steepest descent direction at $\bar{x}$, $f_5(x)$ does not have a minimum, but only an infimum of 0.
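With $f_5$ as written above, this claim can be checked directly:
\[ \frac{\partial f_5}{\partial x_2} = \frac{1}{x_2 + 100} - \frac{1}{100(x_2 + 1)^2}, \]
which vanishes at $x_2 = 0$ for every $x_1$, while $\partial f_5 / \partial x_1 = -e^{-x_1}$; hence $\nabla f_5(\bar{x}) = (-e^{-\bar{x}_1}, 0)$. Along $L_5$ we have $x_2 = 0$ and $f_5(x_1, 0) = e^{-x_1}$, which decreases strictly to the unattained infimum 0.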

We could consider the following modified steepest descent method. It proceeds exactly as in the classical steepest descent method, doing optimum line searches in the negative gradient directions. If it hits a point $x^r$ with the property that the function has only an infimum and not a minimum in the steepest descent direction at $x^r$, it takes a predetermined fixed step length (say $\lambda$) in that direction, calls the resulting point $x^{r+1}$, and continues the method in the same way with $x^{r+1}$ as the next point.

When we apply this modified steepest descent method to $f_5(x_1, x_2)$ defined above, beginning with $\bar{x}$ as the initial point, we can verify that it generates a sequence of points going off to infinity along the half-line $L_5$. The infimum of $f_5(x)$ on the half-line $L_5$ is 0, and $f_5(x)$ converges to it along this sequence. The gradient of $f_5(x)$ also converges to 0 along the sequence. And yet, over the whole region, $f_5(x)$ is unbounded below. So this modified steepest descent method fails too. For differentiable convex functions this example is very surprising: the gradient of $f_5$ converges to 0 along the half-line $L_5$, and the function itself has a finite infimum on this half-line; and yet $f_5$ is unbounded below over the whole region.
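The modified method can be sketched in Python as follows (an illustrative sketch with hypothetical names; the exact line search of the method is replaced here by a crude search over a grid of step lengths):

    import numpy as np

    def modified_steepest_descent(f, grad, x0, fixed_step=1.0, iters=50):
        # Sketch of the modified method: search along the steepest
        # descent direction; if the search runs off the end of the grid
        # (no minimum found along the ray), take the predetermined
        # fixed step instead.
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            d = -grad(x)                        # steepest descent direction
            steps = np.linspace(1e-6, 100.0, 10001)
            k = int(np.argmin([f(x + t * d) for t in steps]))
            if k == len(steps) - 1:
                x = x + fixed_step * d          # no minimum along the ray
            else:
                x = x + steps[k] * d            # approximate line minimum
        return x

Started at $\bar{x} = (0, 0)$ on $f_5$, the iterates of this sketch stay on the half-line $L_5$ while $f_5$ and $\|\nabla f_5\|$ both tend to 0, which is exactly the failure mode described above.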

2 An Algorithm

We now consider the problem of checking whether the differentiable convex function $\theta(x)$ is bounded below over $\mathbb{R}^n$. We suggest the following method, which is guaranteed to solve this problem. Let $e$ be the vector of all 1s in $\mathbb{R}^n$, let $\{\alpha_r : r = 1, 2, \ldots\}$ be a strictly monotonically increasing sequence of positive numbers diverging to $+\infty$, and let $\varepsilon$ be a small positive tolerance. For $r = 1, 2, \ldots$, in the $r$th step consider the following problem:
\[
\begin{array}{ll}
\mbox{minimize} & \theta(x) \\
\mbox{subject to} & -\alpha_r e \leq x \leq \alpha_r e.
\end{array} \tag{1}
\]
Using the algorithms in [2] based on the ellipsoid method, it is possible to compute in finite time a point $x^r$ which is $\varepsilon$-optimal to (1), in the sense that $\theta(x) \geq \theta(x^r) - \varepsilon$ for all $x$ in the box which is the feasible region of (1). This method then generates the sequence of points $\{x^r : r = 1, 2, \ldots\}$. Clearly, $\theta(x)$ is bounded below iff the sequence $\{\theta(x^r) : r = 1, 2, \ldots\}$ is bounded below.

One difficulty with this method is that, as the method progresses, each additional step takes longer to carry out. The method in [2] for solving (1) based on the ellipsoid method depends on a volume reduction argument, so it requires more effort as $\alpha_r$ increases.
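A schematic rendering of this method in Python (a sketch under the assumption that a subroutine `eps_box_min`, standing in for the ellipsoid-method algorithms of [2], returns an $\varepsilon$-optimal point of (1) for a given box radius):

    def check_bounded_below(theta, eps_box_min, eps=1e-6, steps=30):
        # eps_box_min(theta, alpha, eps) is assumed to return a point x_r
        # with theta(x) >= theta(x_r) - eps over the box
        # -alpha <= x_i <= alpha; [2] shows such a point is computable
        # in finite time by the ellipsoid method.
        alpha, values = 1.0, []
        for _ in range(steps):
            x_r = eps_box_min(theta, alpha, eps)   # eps-optimal for (1)
            values.append(theta(x_r))
            alpha *= 2.0    # any strictly increasing divergent sequence
        # theta is bounded below iff the full sequence {theta(x^r)} is
        # bounded below; a finite run yields only the first few terms.
        return values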

At the moment it is not known whether there is a method which can generate a descent sequence $\{x^r : r = 1, 2, \ldots\}$ for this problem, with $\{\theta(x^r) : r = 1, 2, \ldots\}$ diverging to $-\infty$ if $\theta(x)$ is unbounded below, such that the computational effort (measured in terms of operations such as function, gradient, or Hessian evaluations, line searches, etc.) of obtaining the next element $x^{r+1}$ in the sequence is independent of $r$.

3 Acknowledgement

We thank R. Chandrasekaran for bringing the algorithms discussed in [2] for convex functions to our attention.

4 References

[1] P.M. Camerini, S.J. Chung, and K.G. Murty, "On Checking Unboundedness of Functions," Arabian Journal of Science and Engineering, 1991, to appear.

[2] M. Grötschel, L. Lovász, and A. Schrijver, Combinatorial Optimization and the Ellipsoid Method, Springer, Berlin, 1985.