Introduction

Decision theoretic models, such as expectation models, multi-attribute utility models, or time-discounting models, evaluate decisions or decision strategies with a real number, an index of preference such as a utility, a present value, a risk, or an expected value. Most prominent decision theoretic models are linear decomposition models that base their evaluation of choice alternatives (a generic term for gambles, multi-attributed outcomes, and consumption streams) on a weighted additive integration of subjective or objective input parameters, which the decision maker or experts provide by means of simple choices or judgmental tasks. For example, the subjective expected utility model evaluates gambles by combining subjective probabilities of events and utilities of decision outcomes into expected utilities.

One of the peculiarities of linear optimization models is the flatness of their evaluation function in the area of optimal choice alternatives. (We implicitly assume here and in the following that choice alternatives have a continuous or dense numerical description as vectors, decision functions, stopping rules, probability cutoffs, etc.) A suboptimal choice does not seriously hurt the decision maker as long as the alternative selected is not grossly away from the optimum. This type of insensitivity is closely linked to a second type, which is often found in decision analysis settings. Variations of model parameters like importance weights or subjective probabilities seldom produce drastic changes in the model evaluation function. A set of quite different parameter values may lead to the selection of the same choice alternative; and even if the use of a wrong set of parameter values leads to a different decision, the

first type of insensitivity will guarantee that the loss in expected value as calculated by means of the model with the correct parameters will be rather small. Some researchers (Yntema and Torgerson, 1961) have even argued for an insensitivity across models. According to their results, different models should, under some mild conditions, lead to similar evaluations and decisions. Although there are doubts about insensitivity across models (see Fischer, 1972), the evidence for the two other kinds of insensitivity is substantial. In expectation models, v. Winterfeldt and Edwards (1973) generalized scattered findings of flat expected value functions as functions of decisions and decision strategies. In multi-attribute utility theory, Fischer (1972) demonstrated the insensitivity of multi-attribute utility functions against variations in parameters like importance weights and single dimension utilities.

But up until now the evidence for flat maxima was based on more or less general examples. The questions remained whether or not flatness is a necessity and what model characteristics cause it. Another problem with the arguments for insensitivity in those examples is the concept of flatness itself. A function may look flat, but that can easily be fixed by stretching the units of the ordinate and compressing the units of the abscissa. Flatness is not a mathematical, but a psychological concept. A 5% loss may be substantial for one decision maker and negligible for another. These arguments call for two kinds of research on the flat maximum phenomenon: first, a mathematical analysis that proves the inevitability of restricted forms of the evaluation functions, given certain model characteristics, and second, an experimental psychological analysis that shows whether or not these restrictions can be interpreted as flatness.

This report presents the mathematical foundation of the flat maximum phenomenon. Integrating some theorems from statistical decision theory, it shows that the nature of all linear optimization models imposes severe restrictions on the model evaluation function. The mathematical proofs produce two further important and practical results: they establish an equivalence between model insensitivity against variations in choice alternatives and against variations in parameter values; and they present the tools for a general and simple approach to sensitivity analyses. Some examples from statistics, psychological modeling, and decision analysis demonstrate the use of the concepts and methods developed.

Why Evaluation Functions Are Restricted

The most severe restriction on a function is, of course, the specification of its functional form and parameters, which determines each point of its graph. At the other extreme one may only know that f is a function. Between these extremes there are more or less severe confining properties such as convexity, continuity, boundedness, number of minima and maxima, etc. Assume, for example, that all we know about the function y = f(x) is: (a) it is defined for 0 ≤ x ≤ 1 and bounded between y = 0 and y = 1; (b) it is strictly convex; (c) it is continuous; (d) it has a unique minimum. Figure 1a gives some examples of graphs of functions which satisfy (a)-(d). Figure 1b illustrates some inadmissible cases.

Insert Figures 1a and 1b about here

This section will present the mathematical proof that evaluation functions in linear optimization models have confining properties like the ones discussed in this example. Our argument rests on three theorems from statistical decision theory, which are proven in Ferguson (1967) and DeGroot (1970). The arguments and proofs are quite technical, but all theorems have a simple intuitive meaning, and except for theorem 3 they seem self evident. Rather than boring the reader with messy mathematics, we will rely on self-evidence whenever possible and confine ourselves to interpretation. The reader interested in more mathematical detail should consult the two references cited. For illustration, a scoring rule example will accompany all theorems and proof arguments.

We want to study the behavior of the model evaluation function U, which is defined over a set of choice alternatives X = {x, y, z, ...}. For example, X may be a set of gambles, decision functions, or multi-attributed outcomes; U may be a utility function or an expected utility function. In our scoring rule example, X will be a set of probability estimates, which are gambles by the definition of a scoring rule; U is the subjective expected value (SEV) of those gambles.

The application of a linear decomposition model to such a choice situation requires each x to be described as an n-tuple of elements x_i, each characterizing x for a specific aspect or state S_i of the choice situation. We assume therefore that x has the following representation:

x = (x_1, x_2, ..., x_i, ..., x_n).

For example, x may be a gamble in which one receives a dollar amount x_i if event S_i occurs; or a multi-attributed outcome with value x_i in attribute S_i; or a cash flow in which one receives a dollar amount x_i at time S_i. Note that by labelling we implicitly let the number of states be finite. This finiteness of the state space will be our first assumption (A1) for the further mathematical development.

Linear decomposition models go further by defining utility functions u_i within each state S_i, so that each choice alternative can now be characterized by a vector of single state utilities:

u(x) = (u_1(x_1), u_2(x_2), ..., u_i(x_i), ..., u_n(x_n)).

According to our second assumption (A2) these utilities are bounded, i.e., m ≤ u_i(x_i) ≤ M for all i and all x ∈ X, for some real m, M. Furthermore, in linear optimization models a weight vector w from a parameter set W associates with each state S_i a weight w_i, which can be interpreted as a subjective probability, an importance weight, or a discounting rate. In expectation, time discounting, and multi-attribute models we can assume that w_i ≥ 0 and Σ_{i=1}^n w_i = 1.

The linear model now evaluates a choice alternative x by computing the scalar product w · u(x) of the vectors u(x) and w, or more simply the weighted average:

U(x, w) = Σ_{i=1}^n w_i u_i(x_i).   (1)

That alternative x* is optimal which maximizes U; i.e., the decision rule of linear optimization models is: "choose x* with U(x*, w) ≥ U(x, w) for all x ∈ X."¹ We will define

U*(w) = U(x*, w),   (2)

i.e., U* is the maximal attainable utility for a specific weight vector w. In statistical decision theory x* would be called a Bayes decision with respect to the prior distribution w.

Let us interpret the previous paragraph in the scoring rule situation, assuming a simple two state case, in which S_1 and S_2 are two mutually exclusive and exhaustive events, and w_1 and w_2 are the associated true subjective probabilities (SP's). The set of choice alternatives X is a subset of the real plane R², namely the tuples (x_1, x_2) with 0 ≤ x_i ≤ 1 and x_1 + x_2 = 1. The x_i's are interpreted as the stated probabilities of the events S_i. Since x_2 = 1 - x_1, the choice set can be totally characterized by the real numbers between 0 and 1. Scoring functions u_1 and u_2 are defined for each state S_i such that U(x, w) = SEV(x, w) is maximal if x = w. We will specifically analyze the quadratic scoring rule in which:

¹Our whole argument will be based on maximization. The dual argument based on minimization is basically the same.

u_1(x_1) = 1 - (1 - x_1)²   (3a)

u_2(x_2) = 1 - x_2²   (3b)

Schematically the scoring rule paradigm is represented in Table 1.

Insert Table 1 about here

Here, of course, U*(w) has a very clear interpretation: U*(w) = U(x*, w) = U(w, w). Before we enter into a discussion of the behavior of U and U*, we need to state two preliminary theorems, which will establish a relation between the parameter set W and the choice set X.

THM 1: Assuming that the state space is finite (A1) and that the u_i(x_i) are bounded (A2), there exists for every w ∈ W at least one x ∈ X such that U(x, w) = U*(w).

We will define the subset of X which contains those elements x which are optimal under w as X_w, and elements of X_w as x_w. A similar theorem is proven in Ferguson (1967). It seems self evident for finite X: you just order the x's according to their U-values (all of which are finite by A1 and A2) and choose the x with the maximal U.

The second theorem is more sophisticated and, in fact, substantial work in decision theory is based on it. To state it, we first have to introduce the notion of dominance (here in a somewhat wider sense than usual). We call a choice alternative x dominated if there exist other alternatives y and z and a real number a (0 < a < 1) such that

(1) u_i(x_i) ≤ a u_i(y_i) + (1 - a) u_i(z_i) for all i,   (4a)

and

(2) u_i(x_i) < a u_i(y_i) + (1 - a) u_i(z_i) for some i.   (4b)

A non-dominated alternative is called admissible. We label the admissible subset of X as X̄, and we will assume in the following that X = X̄ (A3).

THM 2: Given A1, A2, and A3, there exists for every x ∈ X at least one w ∈ W such that U(x, w) = U*(w).

Similarly to theorem 1, we will define the subset of W which contains those parameters w which would make x an optimal choice as W_x, and elements w ∈ W_x we will call w_x. This theorem is rather difficult to prove and requires a substantial number of lemmas, such as the famous separating hyperplane theorem. The idea of theorem 2, however, is simple: admissible choice alternatives are potential candidates for optimal choices.

Theorems 1 and 2 allow us to step freely from the parameter set W to the choice set X and back in our analysis of U as a function of both w and x. The main purpose of these theorems here is to establish an equivalence between parameters and choice alternatives for the insensitivity analysis.

Both theorems have a simple interpretation in our scoring rule example. Since here X = W and, by definition of a proper scoring rule, X = X̄, the theorems say that for each true subjective probability vector w there is an optimal probability estimate, and for each estimate x there is a subjective probability vector w which would make this estimate optimal. In fact, we already knew that, since the unique value x = w was the best estimate in the SEV sense.
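Equations (1)-(2) and the dominance test (4a)-(4b) are easy to sketch for a finite choice set. The following is a minimal sketch; the two-state utility vectors and weights are illustrative assumptions, not data from the text, and the mixture weight a is scanned over a grid (including the endpoints, which also covers plain dominance) rather than treated analytically.

```python
# Sketch of the basic machinery for a finite choice set:
#   U(x, w) = sum_i w_i * u_i(x_i)        -- equation (1)
#   U*(w)   = max over X of U(x, w)       -- equation (2)
#   dominance test via mixtures           -- equations (4a)-(4b)

def U(u_x, w):
    """Weighted additive evaluation (eq. 1) of one alternative,
    given its vector of single state utilities u_x."""
    return sum(wi * ui for wi, ui in zip(w, u_x))

def U_star(utilities, w):
    """Maximal attainable utility (eq. 2) over a finite choice set."""
    return max(U(u_x, w) for u_x in utilities)

def dominated(u_x, others, steps=100):
    """Is u_x weakly beaten by some mixture a*u(y) + (1-a)*u(z),
    strictly in at least one state?  Grid search over a."""
    for u_y in others:
        for u_z in others:
            for k in range(steps + 1):
                a = k / steps
                mix = [a * yi + (1 - a) * zi for yi, zi in zip(u_y, u_z)]
                if all(m >= xi for m, xi in zip(mix, u_x)) and \
                   any(m > xi for m, xi in zip(mix, u_x)):
                    return True
    return False

# Hypothetical two-state utility vectors u(x):
X = [(0.0, 1.0), (1.0, 0.0), (0.75, 0.75), (0.4, 0.4)]
w = (0.6, 0.4)
print(U_star(X, w))                     # 0.75
print(dominated((0.4, 0.4), X[:2]))     # True: the 1/2-1/2 corner mixture wins
print(dominated((0.75, 0.75), X[:2]))   # False: admissible
```

As THM 1 promises, the maximum in U_star always exists for a finite set; dominated alternatives can be pruned before any further analysis, which is exactly what assumption (A3) presupposes.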

After these preliminary theorems we are now able to study the restrictions on U and U*. By the definition of U* and by the properties of linear optimization models, we know that (1) the range of U* is restricted by the bounds m and M on the u_i, and (2) U* has to pass through all the corner points (w_i = 1, sup_X u_i(x_i)). But our third theorem imposes a much more severe restriction on U*:

THM 3: Under A1 and A2, U* as defined in (2) is a convex function of w, i.e., U*[av + (1 - a)w] ≤ aU*(v) + (1 - a)U*(w) for all 0 ≤ a ≤ 1 and all v, w ∈ W.

The proof is rather simple, and it is presented here, since the convexity of U* is not at all self evident. For a different version of the proof, see DeGroot (1970). Consider the vector av + (1 - a)w. From theorem 1 we know that there is at least one x such that

U[x, av + (1 - a)w] = U*[av + (1 - a)w].   (5)

By definition of U,

U[x, av + (1 - a)w] = [av + (1 - a)w] · u(x) = a v · u(x) + (1 - a) w · u(x).   (6)

The latter equality follows from the distributivity of "·". Again by theorem 1 there exist y and z ∈ X such that

U(y, v) = U*(v)   (7)

and

U(z, w) = U*(w).   (8)

Since

U*(v) ≥ U(x, v) = v · u(x)   (9)

and

U*(w) ≥ U(x, w) = w · u(x),   (10)

it follows by substitution that

U*[av + (1 - a)w] = aU(x, v) + (1 - a)U(x, w) ≤ aU*(v) + (1 - a)U*(w).   (11)

What does this theorem mean in our scoring rule example? Writing U*(w) as U*(w_1), we see that U* is severely restricted through the boundaries and by convexity. Figure 2a gives some examples of graphs of U* functions which might have been generated by some scoring rule (actually, each such U* is equivalent to some scoring rule). Figure 2b shows inadmissible graphs. Figure 2c shows the U* function for our quadratic scoring rule.

Insert Figures 2a, 2b, and 2c about here

The interpretation of convexity in this example is very intuitive: the more certain you are about the events S_i, the better your optimal decision will be in terms of SEV.

We know now that U* is a restricted function of w, but what about U as a function of x? With theorems 1 and 2 it becomes simple to step from U* to U. U has two arguments, w and x. We know that U is linear in the w_i's; thus as a function of w, U defines an (n-1)-dimensional hyperplane. In the scoring rule

case U is a line, as illustrated in Figure 3.

Insert Figure 3 about here

What do these lines, planes, and hyperplanes have to do with U*? First, U is defined on the same space W on which U* is defined. Second, U(x, w) and U*(w) have at least one point in common, namely the point [U*(w_x), w_x]. Third, U* is everywhere at least as large as U, i.e., U*(w) ≥ U(x, w) for all w ∈ W and x ∈ X. This last fact follows by simple contradiction: if U* were not at least as large as U for all w and x, then there would exist some x and w such that U(x, w) > U*(w), which contradicts the definition of U*(w). Therefore, as a line U is a tangent to U*, as a plane it is a tangent plane, and as a hyperplane it is a tangent hyperplane to U*. Figure 4 clarifies these concepts in our scoring rule example.

Insert Figure 4 about here

Figure 4 also exemplifies how the restrictions on U and the possible losses ΔU are determined totally by the shape and the slopes of U*. All losses which may be encountered in a choice situation (whether they are due to a suboptimal choice or the use of a wrong set of parameter values) are differences between U* and some hyperplane tangent to it. Assume that in a two state case you could construct U* without restrictions and you wanted to make losses around a value z as large as possible within the boundaries of the u_i. You probably would construct a U* function which looks somewhat like the one in Figure 5. But by convexity of

Insert Figure 5 about here

U* this shape is inadmissible. The convexity of U* will make losses in the area of the optimal choice alternative small.

This intuitive interpretation of the restrictions on the behavior of U around its maximum through the convexity of U* can also be expressed mathematically. Since U is a tangent hyperplane to U*, it can be totally determined by n - 1 slopes and one point. Assuming that U* is differentiable at w_x, the actual formula for U in terms of U* is consequently:

U(x, w) = U*(w_x) + Σ_{i=1}^{n-1} d_i (w_i - w_ix),   (12)

where d_i = dU*/dw_i evaluated at w_x, i.e., the directional slope of U* at w_x.

How much do we stand to lose by the choice of a nonoptimal alternative? Assume that w_y is the true weight vector and y is the optimal choice alternative, but instead we choose x ≠ y. Since by definition

U(y, w_y) = U*(w_y)   (13)

and

U(x, w_y) = U*(w_x) + Σ_{i=1}^{n-1} d_i (w_iy - w_ix),   (14)

we will lose

ΔU = U*(w_y) - U*(w_x) - Σ_{i=1}^{n-1} d_i (w_iy - w_ix).   (15)

The convexity of U* puts limits on the differences between the U*'s as well as on the slopes d_i. Since, in addition, (w_iy - w_ix) cannot exceed 1 (and will typically be much smaller), the loss ΔU will remain small.
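For the quadratic scoring rule both parts of this argument can be checked numerically. The algebraic simplifications below are our own, derived from (3a)-(3b): U*(w_1) = SEV(w_1, w_1) = 1 - w_1 + w_1², which is convex, and the loss from stating x_1 instead of the optimal w_1 works out to (w_1 - x_1)², quadratic and hence flat near the optimum. A minimal sketch:

```python
def sev(x1, w1):
    """SEV of stating x1 when the true probability of S1 is w1 (eqs. 3a-3b)."""
    return w1 * (1 - (1 - x1) ** 2) + (1 - w1) * (1 - x1 ** 2)

def U_star(w1):
    """U*(w1) = SEV(w1, w1); algebra simplifies this to 1 - w1 + w1**2."""
    return sev(w1, w1)

def loss(x1, w1):
    """Loss from a suboptimal estimate; algebra gives (w1 - x1)**2."""
    return U_star(w1) - sev(x1, w1)

# Convexity of U* (THM 3), checked at midpoints on a grid:
grid = [i / 20 for i in range(21)]
convex = all(
    U_star(0.5 * v + 0.5 * w) <= 0.5 * U_star(v) + 0.5 * U_star(w) + 1e-12
    for v in grid for w in grid
)
print(convex)                     # True
print(round(loss(0.6, 0.5), 6))   # 0.01: a .1 estimation error costs only .01 SEV
```

The quadratic loss is the flat maximum in miniature: halving the estimation error quarters the loss.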

How much will we lose if we base our decision on a parameter value v when, in fact, the true value is w? We would choose x such that

U(x, v) = U*(v).   (16)

We will receive

U(x, w) = U*(v) + Σ_{i=1}^{n-1} d_i (w_i - v_i),   (17)

and consequently we will lose

ΔU = U*(w) - U*(v) - Σ_{i=1}^{n-1} d_i (w_i - v_i).   (18)

Two general expressions may be helpful for limiting purposes. The maximum possible loss is determined by

ΔU_max = max_{k,l} [U*(e_k) - U*(f_l) - d_k + d_l],   (19)

where e_k and f_l are the unit weight vectors whose kth and lth components, respectively, equal 1. See Figure 6 for illustration.

Insert Figure 6 about here

But this loss would only result from an extremely foolish choice. By choosing y such that the value U(y, w_y) is the minimal point of U* (in decision theoretic terms y is the minimax strategy), we can reduce the maximum possible loss to

ΔU_minimax = sup_X sup_i u_i(x_i) - U*(w_y).   (20)

See Figure 7 for illustration.
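Bound (20) can be evaluated concretely for the quadratic scoring rule, using our own simplification U*(w_1) = 1 - w_1 + w_1² from (3a)-(3b): the least favorable weight is w_1 = .5, the best attainable single-state payoff is 1, and min U* = .75, so the minimax estimate never loses more than .25 SEV. A sketch under these assumptions:

```python
def U_star(w1):
    # U*(w1) = 1 - w1 + w1**2 for the quadratic scoring rule
    return 1 - w1 + w1 ** 2

grid = [i / 1000 for i in range(1001)]
u_sup = 1.0                           # sup_x sup_i u_i(x_i) for this rule
w_minimax = min(grid, key=U_star)     # least favorable weight vector
bound = u_sup - U_star(w_minimax)     # equation (20)
print(w_minimax, bound)               # 0.5 0.25
```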

Insert Figure 7 about here

By now it should be clear how to do a sensitivity analysis with the tools developed. The first step is to construct the function U*. Often this can be done explicitly. If an explicit solution is not possible or is too difficult and time consuming, one can proceed as follows: first plot the corner points U*(e_k); then determine U*(w), where w is the "least favorable" weight vector, the one which would make a minimax choice optimal; then find some other points of U* and exploit the convexity property to approximate the whole function. Alternatively, U* can be approximated by plotting some U-lines. This procedure can be done graphically in two state cases. In cases with a larger number of states computer aid is needed. Equations (19) and (20) give some boundary losses, and equation (15) determines for each particular case the potential losses. In general: the flatter U* is as a function of w, the flatter U as a function of x will be around its maximum.

To summarize this section: First we established a relation between the parameter set and the choice set in two theorems by making three assumptions. We assumed that the state space is finite (A1), that the single state utility functions are bounded (A2), and that the choice set is admissible (A3). Then we showed that under A1 and A2 in linear optimization models the function U* is severely restricted by its boundaries and through convexity. Finally, we demonstrated the restrictions on the actual evaluation function U as a function of U* and outlined a general approach to sensitivity analysis using the properties of U*.
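The approximation procedure just outlined can be mimicked in a few lines: known points of the convex U* give chord upper bounds, and U-lines (tangents) give lower bounds, so the two together sandwich the whole function. A sketch using the quadratic scoring rule, where U* is known exactly for comparison (its closed form below is our own simplification of SEV(w_1, w_1)):

```python
def U_star(w1):
    """Exact U* for the quadratic scoring rule, for comparison only."""
    return 1 - w1 + w1 ** 2

def U_line(x1, w1):
    """A U-line: SEV of the fixed estimate x1 as w1 varies (a tangent to U*)."""
    return w1 * (1 - (1 - x1) ** 2) + (1 - w1) * (1 - x1 ** 2)

knots = [0.0, 0.5, 1.0]   # corner points plus the least favorable (minimax) point

def upper(w1):
    """Chord interpolation between neighbouring knots; convexity puts U* below."""
    for a, b in zip(knots, knots[1:]):
        if a <= w1 <= b:
            t = (w1 - a) / (b - a)
            return (1 - t) * U_star(a) + t * U_star(b)

def lower(w1):
    """Envelope of a few tangent U-lines; each lies below U* everywhere."""
    return max(U_line(x1, w1) for x1 in knots)

w1 = 0.3
print(lower(w1) <= U_star(w1) <= upper(w1))   # True: 0.75 <= 0.79 <= 0.85
```

Adding more knots or more U-lines tightens the sandwich; in higher dimensions the same idea runs with hyperplanes instead of lines, which is where computer aid comes in.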

The next section will give some examples to demonstrate the concepts and methods developed.

Examples

A Signal Detection Example

We assume a simple two state signal detection situation in which a datum d may be sampled from either of two normally distributed populations S_1 or S_2. These distributions have equal variances s² = 1 and different means m_1 and m_2. Two decisions can be made upon observing d: (1) a_1: d was sampled from S_1, or (2) a_2: d was sampled from S_2. The prior probability for sampling from S_i is w_i, with w_1 = 1 - w_2. Payoffs are 1 for correct decisions, 0 for incorrect decisions. The choice set X here is the set of real valued decision functions x, which are cutoffs along the possible real values of d (x is in this case related to the usual likelihood ratio criterion β by x = ln β / d'). x is evaluated by a simple expected value model.

To formulate the problem in the format of the preceding section, we construct the expected value matrix, where expectations are taken over the random variable d within each state S_i. This matrix indicates for each x the expected

Insert Table 2 about here

amount of money the decision maker stands to receive under S_i (see Table 2). The expected values are defined as

EV(x|S_1) = Pr(d < x|S_1),   (21)

EV(x|S_2) = Pr(d > x|S_2).   (22)

As in the scoring rule example, we have in this paradigm a 1:1 mapping from prior probabilities w_i into the choice alternatives x. As is well known, β*, the optimal likelihood ratio criterion for the payoffs given, is

β* = w_1/(1 - w_1),   (23)

and consequently

x* = ln[w_1/(1 - w_1)]/d',   (24)

where x* is the optimal cutoff value under w_1, or w_1 ∈ W_x. Figure 8 shows the U* function as determined from Table 2:

U*(w_1) = w_1 Pr(d < x*|S_1) + (1 - w_1) Pr(d > x*|S_2).   (25)

Insert Figure 8 about here

On the abscissa we have ordered the x*-values under w_1 to show how they are related. Assume now that w_1 = .5 is the true prior probability, but instead of

x* = 0 we choose some other x' = +∞, which would be optimal under w_1 = 1. Figure 8 demonstrates the possible loss ΔU we expect in this case. We see how the flatness of U* prevents this loss from being large. v. Winterfeldt and Edwards (1973) showed in a direct analysis of the U-function that U is generally flat in signal detection situations.

A Multi-Attribute Example

Assume that we have two attributes on which we evaluate riskless options, say job offers. Attribute S_1 may be salary, S_2 may be staff benefits. We have five offers, each of which has been evaluated by a utility function u_i in each attribute (see Table 3). We can immediately delete x_3 since it is dominated:

Insert Table 3 about here

u(z) = ½ u(x_2) + ½ u(x_4) = (8, 10),   (26)

i.e.,

u_1(z_1) > u_1(x_31) and u_2(z_2) ≥ u_2(x_32).   (27)

All other alternatives are admissible. U* in this case will be piecewise linear, and its construction is rather easy. We just plot all the functions

U(x_j, w_1) = w_1 u_1(x_j1) + (1 - w_1) u_2(x_j2)   (28)

(see Figure 9). Naturally U* is defined by the line segments of U such that

U(x_i, w) ≥ U(x_j, w) for all j   (29)

(in Figure 9 marked by the solid line). Assume now that we choose some suboptimal x for w_1 = 1/2. Figure 9 also indicates what we will stand to lose.

Insert Figure 9 about here

Similar analyses can be done with any matrix like the one in Table 3, as we find them in time discounting models or simple decision analysis problems. For more than two states graphical representations become impossible, and computer aid is needed. In those cases one should use the approach of bounding losses by the slopes and points of U*, as sketched in the previous section.
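The multi-attribute example can be reproduced directly from the utilities in Table 3: the mixture check (26)-(27) deletes x_3, and the piecewise linear U* of Figure 9 is the upper envelope of the four remaining U-lines (28). A minimal sketch:

```python
# Single attribute utilities (u1, u2) of the five job offers, from Table 3:
offers = {"x1": (2, 12), "x2": (6, 11), "x3": (7, 10),
          "x4": (10, 9), "x5": (12, 5)}

# (26)-(27): the even mixture of x2 and x4 dominates x3 componentwise.
mix = tuple(0.5 * a + 0.5 * b for a, b in zip(offers["x2"], offers["x4"]))
assert mix == (8.0, 10.0)
assert all(m >= c for m, c in zip(mix, offers["x3"]))
del offers["x3"]

def U(u, w1):
    """Equation (28): U(x_j, w1) = w1*u1 + (1 - w1)*u2."""
    return w1 * u[0] + (1 - w1) * u[1]

def best(w1):
    """The offer whose U-line is on top, i.e. the segment of U* at w1."""
    name = max(offers, key=lambda k: U(offers[k], w1))
    return name, U(offers[name], w1)

for w1 in (0.1, 0.5, 0.9):
    print(w1, best(w1))   # x1 for small w1, x4 in the middle, x5 for large w1
```

The changeover points between offers are exactly the kinks of the piecewise linear U* marked by the solid line in Figure 9.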

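Returning to the signal detection example: the optimal cutoff (24) and the resulting U* (25) can be computed from the standard normal CDF. The means and unit variances are as given above; d' = m_2 - m_1 = 1 and the closed forms for the two hit probabilities in terms of Φ are our own working of (21)-(22). A sketch:

```python
from math import log, erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

m1, m2 = -0.5, 0.5
d_prime = m2 - m1            # = 1 with unit variances

def x_star(w1):
    """Optimal cutoff, equation (24)."""
    return log(w1 / (1 - w1)) / d_prime

def U_star(w1):
    """Expected payoff under the optimal cutoff, equation (25)."""
    x = x_star(w1)
    return w1 * phi(x - m1) + (1 - w1) * (1 - phi(x - m2))

# At w1 = .5 the cutoff sits midway between the means and U*
# equals the hit rate phi(d'/2):
print(x_star(0.5), round(U_star(0.5), 4))   # 0.0 0.6915
```

Evaluating U_star on a grid of priors reproduces the convex, gently curved function of Figure 8, with its minimum at w_1 = .5.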
References

DeGroot, M. H. Optimal Statistical Decisions. New York: McGraw-Hill, 1970.

Ferguson, T. S. Mathematical Statistics: A Decision Theoretic Approach. New York: Academic Press, 1967.

Fischer, G. W. Four Methods for Assessing Multi-attribute Utilities. Technical Report No. 037230-6-T, Engineering Psychology Laboratory, The University of Michigan, 1972.

v. Winterfeldt, D. and Edwards, W. Costs and Payoffs in Perceptual Research. Technical Report No. 011313-1-T, Engineering Psychology Laboratory, The University of Michigan, 1973.

Yntema, D. B. and Torgerson, W. S. Man-computer cooperation in decisions requiring common sense. IRE Transactions on Human Factors in Electronics, 1961, HFE-2, 20-26.

Table 1

Schematic representation of the scoring rule situation with quadratic scoring functions (SEV(x_1) = w_1[1 - (1 - x_1)²] + (1 - w_1)(1 - x_1²)).

                         States
                      S_1          S_2
True SP's             w_1          w_2

Stated x_1 = 0.0       0            1
Stated x_1 = 0.5      .75          .75
Stated x_1 = 1.0       1            0
General x_1        1-(1-x_1)²    1-x_1²

Table 2

Expected value table for the two state signal detection situation (m_1 = -.5; m_2 = +.5).

                         States
                      S_1              S_2
Prior prob.           w_1              w_2

Cutoff x         Pr[d < x|S_1]    Pr[d > x|S_2]
x = -∞                 0                1
x = 0                 .69              .69
x = +∞                 1                0

Table 3

Multi-attributed outcomes x_i described by their single attribute utilities.

                       Attribute
                     S_1      S_2
Importance weight    w_1      w_2

x_1                   2       12
x_2                   6       11
x_3                   7       10
x_4                  10        9
x_5                  12        5

Figure 1a. Graphs of functions which satisfy (a)-(d).

Figure 1b. Graphs of functions which do not satisfy (a)-(d).

Figure 2a. Graphs of hypothetical U*-functions which satisfy the boundary conditions U*(0) = k and U*(1) = l and convexity. (Ordinate: U*(w_1); abscissa: weight w_1.)

Figure 2b. Graphs of non-admissible U* functions. (Ordinate: U*(w_1); abscissa: weight w_1.)

Figure 2c. U*-function in our scoring rule example. (Ordinate: U*(w_1), the subjective expected value of the optimal estimate; abscissa: true subjective probability of S_1, w_1, from 0 to 1.)

Figure 3. The lines defined by U(x, w) in the scoring rule example. (Abscissa: true subjective probability of S_1, w_1.)

Figure 4. Demonstration of the relation between the hyperplanes U(x, w) and U*(w) in the scoring rule example. (Abscissa: true subjective probability of S_1, w_1.)

Figure 5. Inadmissible U* function (with boundary values k and l) which would make the loss due to small deviations from an optimal choice severe.

Figure 6. Graphical determination of the maximum possible loss ΔU_max in a two state case.

Figure 7. Graphical determination of the maximum possible loss under a minimax choice y (ΔU_minimax).

Figure 8. U and U* functions in the signal detection example (m_1 = -.5; m_2 = +.5). (Abscissa: prior probability of S_1, with the cutoffs x* = -∞, x* = 0, and x* = +∞ marked.)

Figure 9. Multiattribute utility U for each choice alternative x and weight vector w.

Distribution List Director, Engineering Psychology Programs Code 455 Office of Naval Research 800 North Quincy Street Arlington, Virginia 22217 Defense Documentation Center Cameron Station Alexandria, Virginia 22314 (5 cys) (12 cys) Director, ONR Branch Office Attn: Dr. C. Harsh 495 Summer Street Boston, Massachusetts 02210 Director, ONR Branch Office Attn: Dr. M. Bertin 536 S. Clark Street Chicago, Illinois 60605 Office of the Chief of Naval Operations, Op-095 Department of the Navy Washington, D. C. 20350 Dr. John J. Collins Office of the Chief of Naval Operations, Op-987F Department of the Navy Washington, D. C. 20350 CDR H. J. Connery Office of the Chief of Naval Operations, Op-987M4 Department of the Navy Washington, D. C. 20350 Dr. A. L. Slafkosky Scientific Advisor Commandant of the Marine Corps Code AX Washington, D. C. 20380 Mr. John Hill Naval Research Laboratory Code 5634 Washington, D. C. 20375 Office of Naval Research Mathematical Sciences Division Code 434 Department of the Navy Arlington, Virginia 22217 Office of Naval Research Code 437 800 North Quincy Street Arlington, Virginia 22217 Director, Attn: Dr. 1030 East Pasadena, Director, Attn: Mr. 1030 East Pasadena, ONR Branch Office E. Gloye Green Street California 91106 ONR Branch Office R. Lawson Green Street California 91106 Director, Naval Research Laboratory Technical Information Division Code 2027 Washington, D. C. 20375 Director, Naval Research Laboratory Attn: Library, Code 2029 (ONRL) Washington, D. C. 20375 (6 cys) (6 cys) Office of Naval Research Code 463 800 North Quincy Street Arlington, Virginia 22217

Dr. Heber G. Moore Hqs., Naval Material Command Code 03R4 Department of the Navy Washington, D. C. 20360 Chief of Naval Material Prog. Admin. Personnel & Training NAVMAT 03424 Department of the Navy Washington, D. C. 20360 Commander, Naval Electronics Systems Command Command and Display Systems Branch Code 0544 Washington, D. C. 20360 Commander, Naval Air Systems Command NAVAIR 340F Washington, D. C. 20361 CDR James E. Goodson Bureau of Medicine and Surgery Operational Psychology Branch Code 513 Department of the Navy Washington, D. C. 20372 LCDR Curt Sandler, MSC Naval Safety Center Code 811 Norfolk, Virginia 23511 CDR Robert Wherry Human Factors Engineering Systems Office Naval Air Development Center Johnsville Warminster, Pennsylvania 18974 Dr. Gerald Miller Human Factors Branch Naval Electronics Laboratory Center San Diego, California 92152 Mr. James Jenkins Naval Ships Systems Command Code PMS 302-43 Washington, D. C. 20362 Naval Ships Systems Command Code 03H Washington, D. C. 20362 Commander, Naval Supply Systems Command Logistic Systems Research and Design Division Research and Development Branch Washington, D. C. 20376 Bureau of Medicine and Surgery Human Effectiveness Branch Code 713 Department of the Navy Washington, D. C. 20372 Mr. A. Sjoholm Bureau of Personnel Personnel Research Div., PERS A-3 Washington, D. C. 20370 Human Factors Engineering Branch Code 5342 Attn: LCDR R. Kennedy U. S. Naval Missile Center Point Mugu, California 93041 Human Engineering Branch, Code A624 Naval Ship Research and Development Center Annapolis Division Annapolis, Maryland 21402 Dr. Robert French Naval Undersea Center San Diego, California 92132 Mr. Richard Coburn Head, Human Factors Division Naval Electronics Laboratory Center San Diego, California 92152

Dean of Research Administration Naval Postgraduate School Monterey, California 93940 Mr. William Lane Human Factors Department Code N215 Naval Training Equipment Center Orlando, Florida 32813 U. S. Air Force Office of Scientific Research Life Sciences Directorate, NL 1400 Wilson Boulevard Arlington, Virginia 22209 Dr. J. M. Christensen Chief, Human Engineering Division Aerospace Medical Research Lab. Wright-Patterson AFB, Ohio 45433 Dr. Walter F. Grether Behavioral Science Laboratory Aerospace Medical Research Lab. Wright-Patterson AFB, Ohio 45433 Dr. J. E. Uhlaner Director U.S. Army Research Institute for the Social & Behavioral Sciences 1300 Wilson Boulevard Arlington, Virginia 22209 Dr. E. R. Dusek, Director Individual Training & Performance Research Laboratory U. S. Army Research Institute for the Behavioral & Social Sciences 1300 Wilson Boulevard Arlington, Virginia 22209 Dr. Jesse Orlansky Institute for Defense Analyses 400 Army-Navy Drive Arlington, Virginia 22202 Mr. Luigi Petrullo 2431 N. Edgewood Street Arlington, Virginia 22207 Commanding Officer (3 cys) Naval Personnel Research and Development Center Attn: Technical Director San Diego, California 92152 Dr. George Moeller Head, Human Factors Engineering Branch Submarine Medical Research Lab. Naval Submarine Base Groton, Connecticut 06340 Lt. Col. Austin W. Kibler Director, Behavioral Sciences Advanced Research Projects Agency 1400 Wilson Boulevard Arlington, Virginia 22209 Chief of Research and Development Human Factors Branch Behavioral Science Division Department of the Army Washington, D. C. 20310 Attn: Mr. J. Barber Dr. Joseph Zeidner, Director Organization & Systems Research Lab. U. S. Army Research Institute for the Behavioral & Social Sciences 1300 Wilson Boulevard Arlington, Virginia 22209 Technical Director U. S. Army Human Engineering Laboratories Aberdeen Proving Ground Aberdeen, Maryland 21005 Dr. 
Stanley Deutsch Chief, Man-Systems Integration OART, Hqs., NASA 600 Independence Avenue Washington, D. C. 20546 Capt. Jack A. Thorpe Department of Psychology Bowling Green State University Bowling Green, Ohio 43403

Dr. Eugene Galanter Columbia University Department of Psychology New York, New York 10027 Dr. J. Halpern Department of Psychology University of Denver University Park Denver, Colorado 80210 Dr. S. N. Roscoe University of Illinois Institute of Aviation Savoy, Illinois 61874 Dr. William Bevan The Johns Hopkins University Department of Psychology Charles & 34th Street Baltimore, Maryland 21218 Dr. James Parker Bio Technology, Inc. 3027 Rosemary Lane Falls Church, Virginia 22042 Dr. W. H. Teichner Department of Psychology New Mexico State University Las Cruces, New Mexico 88001 Dr. Edwin A. Fleishman American Institutes for Research 8555 Sixteenth Street Silver Spring, Marylan 20910 American Institues for Research Library 135 N. Bellefield Avenue Pittsburgh, Pa. 15213 Dr. Joseph Wulfeck Dunlap and Associates, Inc. 1454 Cloverfield Boulevard Santa Monica, California 90404 Dr. L. J. Fogel Decision Science, Inc. 4508 Mission Bay Drive San Diego, California 92112 Psychological Abstracts American Psychological Association 1200 17th Street Washington, D. C. 20036 Dr. Irwin Pollack University of Michigan Mental Health Research Institute 205 N. Forest Avenue Ann Arbor, Michigan, 48104 Dr. W. S. Vaughan Oceanautics, Inc. 3308 Dodge Park Road Landover, Maryland 20785 Dr. D. B. Jones Martin Marietta Corp. Orlando Division Orlando, Florida 32805 Mr. Wes Woodson Man Factors, Inc. 4433 Convoy Street, Suite D San Diego, California 92111 Dr. Robert R. Mackie Human Factors Research Inc. Santa Barbara Research Park 6780 Cortona Drive Goleta, California 93017 Dr. A. I. Siegel Applied Psychological Services 404 East Lancaster Street Wayne, Pennsylvania 19087 Dr. Ronald A. Howard Stanford University Stanford, California 94305

Dr. Amos Freedy, Perceptronics, Inc., 17100 Ventura Boulevard, Encino, California 91316
Dr. C. H. Baker, Director, Human Factors Wing, Defense & Civil Institute of Environmental Medicine, P. O. Box 2000, Downsview, Toronto, Ontario, Canada
Dr. Paul Slovic, Department of Psychology, Hebrew University, Jerusalem, Israel
Dr. Cameron R. Peterson, Decision and Designs, Inc., Suite 600, 7900 Westpark Drive, McLean, Virginia 22101
Dr. D. E. Broadbent, Director, Applied Psychology Unit, Medical Research Council, 15 Chaucer Road, Cambridge CB2 2EF, England
Journal Supplement Abstract Service, American Psychological Association, 1200 17th Street, N. W., Washington, D. C. 20036
Dr. Bruce M. Ross, Department of Psychology, Catholic University, Washington, D. C. 20017
Dr. David Meister, U. S. Army Research Institute, 1300 Wilson Boulevard, Arlington, Virginia 22209
Mr. John Dennis, ONR Resident Representative, University of Michigan, Ann Arbor, Michigan
Dr. Victor Fields, Department of Psychology, Montgomery College, Rockville, Maryland 20850
Dr. Robert B. Sleight, Century Research Corporation, 4113 Lee Highway, Arlington, Virginia 22207
Dr. Howard Egeth, Department of Psychology, The Johns Hopkins University, 34th & Charles Streets, Baltimore, Maryland 21218
Capt. T. A. Francis, Office of the Chief of Naval Operations, Op-965, Room 828, BCT #2, 801 North Randolph Street, Arlington, Virginia 22203
Dr. Sarah Lichtenstein, Department of Psychology, Brunel University, Kingston Lane, Uxbridge, Middlesex, England

REPORT DOCUMENTATION PAGE

1. Report Number: 011313-4-T
4. Title (and Subtitle): Flat Maxima in Linear Optimization Models
5. Type of Report & Period Covered: Technical
6. Performing Org. Report Number: None
7. Author(s): Detlof v. Winterfeldt and Ward Edwards
8. Contract or Grant Number(s): N00014-67-A-0181-0049
9. Performing Organization Name and Address: Engineering Psychology Laboratory, Institute of Science & Technology, University of Michigan, Ann Arbor, Michigan 48105
10. Program Element, Project, Task Area & Work Unit Numbers: NR 197-021; ARPA Order No. 2105
11. Controlling Office Name and Address: Advanced Research Projects Agency, 1400 Wilson Boulevard, Washington, D. C. 22209
12. Report Date: 2 November 1973
14. Monitoring Agency Name and Address (if different from Controlling Office): Engineering Psychology Programs, Office of Naval Research, Department of the Navy, Arlington, Virginia
15. Security Class (of this report): Unclassified
16. Distribution Statement (of this report): Approved for public release; distribution unlimited.
18. Supplementary Notes: None
19. Key Words: Linear optimization; Flat maxima; Utility; Scoring rule; Signal detection
20. Abstract: Expected value functions as functions of decisions and decision strategies are flat around their maxima. This so-called flat maximum phenomenon has been discovered in sensitivity analyses in virtually all decision theoretic paradigms. But until now most of the research on flat maxima explored more or less general examples and limiting considerations. Two basic questions remained unanswered: what are the mathematical reasons for the restricted shape of the evaluation functions; and can these restrictions be interpreted as flatness in a psychological sense? While the second question calls for psychological experimentation, the first question can be answered with mathematical tools. The present article shows that the mathematical characteristics of linear optimization models impose severe restrictions on the functions evaluating choice alternatives such as gambles, multi-attributed outcomes, or consumption streams. The course of proof of this argument provides a helpful tool for sensitivity analyses in decision theory. The concepts and methods are demonstrated in examples from statistical decision theory, psychological modeling, and applied decision theory.

DD Form 1473 (1 Jan 73) — Unclassified
