A Tie-Breaking Rule for Discrete Infinite Horizon Optimization

Sarah M. Ryan
Department of Industrial Engineering
University of Pittsburgh
Pittsburgh, PA 15261

James C. Bean and Robert L. Smith
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, MI 48109-2117

Technical Report 89-14
April 1989

A TIE-BREAKING RULE FOR DISCRETE INFINITE HORIZON OPTIMIZATION*

Sarah M. Ryan
Department of Industrial Engineering
University of Pittsburgh
Pittsburgh, Pennsylvania 15261

James C. Bean
Robert L. Smith
Department of Industrial and Operations Engineering
The University of Michigan
Ann Arbor, Michigan 48109

April, 1989

Abstract

We study discrete infinite horizon optimization problems without the common assumption of a unique optimum. A method based on solution set convergence is employed for finding optimal initial decisions by solving finite horizon problems. This method is applicable to general discrete decision models which satisfy a weak reachability condition. The algorithm, together with a stopping rule, is applied to solve capacity expansion problems, and computational results are reported.

*This work was supported in part by the National Science Foundation under Grant ECS-8700836 to The University of Michigan.

In sequential decision problems with indefinite horizons, the length of an appropriate planning horizon is an important issue. Data far in the future is difficult to forecast accurately, and yet myopia must be avoided. Recently, progress has been made in proving the existence of, and then discovering, forecast and solution horizons. These are horizons long enough to guarantee that the optimal initial decision or decision sequence found is optimal over any longer horizon (including the infinite horizon). The decision maker may implement this decision or sequence with confidence, and then uncover subsequent decisions in a rolling horizon fashion. Solution horizon procedures typically consist of solving finite horizon problems, increasing the horizon until some stopping criterion is satisfied. Two questions then arise:

1. (Solution Horizon Existence) Is there a horizon long enough to guarantee optimality?

2. (Solution Horizon Discovery) How long a horizon is needed to know that optimality is guaranteed?

Nearly all solution horizon existence and discovery results have required a unique optimal initial decision (see Bean and Smith 1984; Hopp, Bean, and Smith 1987; Bes and Sethi 1988). However, Ryan and Bean [1987] show that in discrete decision problems, such a requirement may be difficult to meet. Moreover, discrete decisions arise in many problems, including production planning, capacity expansion and equipment replacement. Recently, Schochetman and Smith [1987] began to attack problems with multiple optima by studying the convergence of sets of finite horizon optimal solutions. They showed that if these sets converge as the horizon is lengthened, then a solution horizon may be found by making appropriate selections from the sets. In this paper we force set convergence by a suitable construction of the finite horizon sets. Rather than merely finding optimal solutions,

our algorithm finds the sets of solutions optimal to their own state. Similar ideas appear in Bean and Smith [1986] and Chand and Morton [1986]. We provide a simple structural condition under which these sets converge. Further, we derive an easily implemented selection, or tie-breaking rule, for selecting one solution from each finite horizon set. The sequence of selected solutions converges to an infinite horizon optimal strategy. These constructions lead to a straightforward tie-breaking algorithm for choosing one of potentially many optimal strategies. Our stopping rule, while not requiring uniqueness, may still fail when there is more than one optimal initial decision. However, as illustrated by an example, even when the algorithm fails to stop, the optimal strategies can be recognized readily by examining the finite horizon sets. When applied to regeneration point problems, the algorithm generalizes the forward algorithms of Shapiro and Wagner [1967] and Bean and Smith [1984].

Section 1 is a mathematical statement of the problem and assumptions. In Section 2 we review concepts of set convergence and apply them to discrete infinite horizon optimization. The construction of finite horizon solution sets, conditions under which they converge, and the algorithm and stopping rule appear in Section 3. In Section 4 we apply the algorithm to capacity expansion problems and discuss the results of computational tests. Finally, Section 5 contains conclusions.

1 Problem Definition and Assumptions

We model the infinite horizon sequential decision problem as in Bean and Smith [1986] with an infinite directed decision network $(\mathcal{N}, A, C)$, where $\mathcal{N}$ is the set of nodes or decision points, $A$ is the set of arcs or decisions, and $C: A \to \mathbb{R}$ is a cost function. We impose three structural assumptions on $(\mathcal{N}, A)$. First, we require that there be a

unique root node with in-degree one. Second, we assume that all node out-degrees are nonzero and uniformly bounded. Finally, we assume that the cumulative in-degrees of all nodes are finite, where the cumulative in-degree of a node is the sum of its in-degree and the in-degrees of all nodes from which there is a directed path to that node. From these assumptions we can number the nodes $\mathcal{N} = \{0, 1, 2, \ldots\}$ such that $(i, j) \in A$ only if $i < j$ (Skilton 1985, p. 230). Hence the node numbers can serve as a surrogate for time. Moreover, it follows that there is a directed path from the root node 0 to each node $i$ which can be continued over the infinite horizon.

A path through the network, $(i_0, i_1, \ldots)$, where $i_0 = 0$, represents a feasible strategy $\pi = (\pi_1, \pi_2, \ldots)$ in which the $n$th decision is $\pi_n = (i_{n-1}, i_n)$. Let $\Pi_n$ be the set of decisions available after $n - 1$ decisions have been made. We assume the decisions in $\Pi_n$ are indexed by a subset of $\{0, 1, \ldots, M\}$, where $M$ is uniform over $n$. Associated with each node $i$ is a time $T_i$, called a decision epoch, such that $i < j$ if and only if $T_i < T_j$. We assume $T_i \to \infty$ as $i \to \infty$. Denote the set of feasible strategies (or paths) by $\Pi \subseteq \times_{n=1}^{\infty} \Pi_n$.

We define a metric on $\times_{n=1}^{\infty} \{0, 1, \ldots, M\}$ as follows: for $\pi, \sigma \in \times_{n=1}^{\infty} \{0, 1, \ldots, M\}$,
$$\rho(\pi, \sigma) = \sum_{n=1}^{\infty} \beta^n |\pi_n - \sigma_n|,$$
where $\beta < \frac{1}{M+1}$. Under this metric, the closeness of strategies is measured by agreement in early decisions. This metric induces a topology on $\Pi$ which is identical to the product topology; therefore $\Pi$ is a compact metric space (see Bean and Smith 1986). Let $i_k(\pi)$ denote the $k$th node visited by strategy $\pi$, where $i_0(\pi) = 0$. The cost of $\pi$ is given by
$$f_\pi = \sum_{n=0}^{\infty} C(i_n(\pi), i_{n+1}(\pi)).$$
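The following minimal Python sketch (illustrative only, not from the report) evaluates a truncated version of $\rho$ for strategies given as finite lists of decision indices; the helper name `strategy_distance`, the truncation length `horizon`, and the particular choice $\beta = 1/(M+2)$ (which satisfies $\beta < 1/(M+1)$) are assumptions made for the example.

```python
def strategy_distance(pi, sigma, M, horizon=200):
    """Truncated rho(pi, sigma) = sum_{n>=1} beta^n |pi_n - sigma_n|.

    Strategies are sequences of decision indices in {0, ..., M}; positions
    beyond a strategy's length are treated as 0 purely for illustration.
    """
    beta = 1.0 / (M + 2)                      # any beta < 1/(M+1) works
    total = 0.0
    for n in range(1, horizon + 1):
        a = pi[n - 1] if n <= len(pi) else 0
        b = sigma[n - 1] if n <= len(sigma) else 0
        total += beta ** n * abs(a - b)
    return total

# With M = 2, strategies that agree in their first three decisions are much
# closer than strategies that already differ in the first decision.
print(strategy_distance([1, 2, 0, 1], [1, 2, 0, 2], M=2))
print(strategy_distance([0, 2, 0, 1], [1, 2, 0, 1], M=2))
```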

We assume that $f_\pi$ is uniformly convergent over $\pi \in \Pi$. The problem we wish to solve is
$$f^* = \min_{\pi \in \Pi} f_\pi.$$
The minimum exists since $\Pi$ is compact and $f_\pi$ is the uniform limit of continuous functions and therefore continuous over $\pi \in \Pi$. A strategy $\pi^*$ is termed infinite horizon optimal if it minimizes $f_\pi$. We will also refer to these as optimal strategies. An optimal initial decision is an initial decision for some optimal strategy. Let $\Pi^*$ and $\Pi^*_1$ denote the sets of optimal strategies and optimal initial decisions, respectively.

The solution horizon approach involves solving finite horizon problems. We define the $T$-horizon cost of $\pi$ as
$$f_\pi(T) = \sum_{\{n \,:\, T_{i_n(\pi)} < T\}} C(i_n(\pi), i_{n+1}(\pi)).$$
We also define the following finite horizon sets of strategies:
$$\Pi^*(T) = \Big\{\pi \in \Pi \,\Big|\, \pi \in \arg\min_{\sigma \in \Pi} f_\sigma(T)\Big\},$$
$$\Pi(i) = \Big\{\pi \in \Pi \,\Big|\, \pi \in \arg\min\Big\{\sum_{n < k} C(i_n(\sigma), i_{n+1}(\sigma)) : \sigma \in \Pi,\ i = i_k(\sigma) \text{ for some } k\Big\}\Big\},$$
$$\tilde{\Pi}(T) = \Big\{\pi \in \Pi \,\Big|\, \pi \in \Pi(i_n(\pi)) \text{ for the } n \text{ such that } T_{i_{n-1}(\pi)} < T \le T_{i_n(\pi)}\Big\}.$$
The set $\Pi^*(T)$ is the set of $T$-horizon optimal strategies, while $\Pi(i)$ is the set of strategies optimal to node $i$. The set $\tilde{\Pi}(T)$ is the set of strategies optimal to their own node at or just beyond time $T$. We will refer to this as the set of $T$-horizon efficient strategies. In a discrete time capacity expansion or production planning problem, these are the strategies optimal to their own capacity or inventory level at time $T$, respectively. Note that $\Pi^*(T) \subseteq \tilde{\Pi}(T)$. A subscript of $k$ on any set of strategies will denote the corresponding set of $k$th decisions.

A solution horizon is a time, $\hat{T}$, such that for $T > \hat{T}$, $\Pi^*_1(T) = \{\pi^*_1\}$ for some fixed $\pi^* \in \Pi^*$. A general solution horizon is a time, $\hat{T}$, such that for $T > \hat{T}$, $\Pi^*_1(T) \subseteq \Pi^*_1$. Most

algorithms in the literature have required the existence of solution horizons. In this paper, we provide an algorithm that selects an optimal initial decision from $\Pi^*_1$ in the presence of a general solution horizon.

2 Set Convergence in Discrete Infinite Horizon Optimization

Previous authors (Bean and Smith 1984, Schochetman and Smith) have found solution horizons by showing that if $\Pi^* = \{\pi^*\}$, then an arbitrary choice $\pi(T) \in \Pi^*(T)$ converges to $\pi^*$ as $T \to \infty$. If optimal strategies are not unique, a natural extension is to seek convergence of the sets $\Pi^*(T)$ to the set $\Pi^*$. We have found such convergence to be rare, and have studied instead the convergence of $\tilde{\Pi}(T)$ to $\Pi^*$. To lay a foundation for this development, we first review and apply some concepts of set convergence for closed subsets of a compact metric space.

Definition: Let $\Pi(T) \subseteq \Pi$. We say that $\Pi(T)$ is a $T$-horizon set if membership in $\Pi(T)$ is determined by decisions made at or before time $T$ (inclusive).

Schochetman and Smith introduce set convergence in infinite horizon optimization, using the Hausdorff metric for closed, and hence compact, sets. As this metric is built up from the strategy metric, $\rho$, set convergence is closely related to individual strategy convergence. To establish and then exploit convergence of a sequence of finite horizon closed sets $\{\Pi(T_n)\}$ to the optimal set $\Pi^*$, we will use the following results for the discrete case.

R1. Set convergence means that subsequential limits are limits. Define $\liminf \Pi(T_n)$ to be the set of all $\pi \in \Pi$ such that there exist $\pi^n \in \Pi(T_n)$ for all $n$ such that $\pi^n \to \pi$. Define $\limsup \Pi(T_n)$ to be the set of all $\pi \in \Pi$ such that there exists a subsequence

$\{T_{n_k}\}$ of $\{T_n\}$ and $\pi^k \in \Pi(T_{n_k})$ such that $\pi^k \to \pi$. Then $\liminf \Pi(T_n) \subseteq \limsup \Pi(T_n)$. Schochetman and Smith show that $\Pi(T_n) \to \Pi^*$ in the Hausdorff metric if and only if $\limsup \Pi(T_n) = \liminf \Pi(T_n) = \Pi^*$.

R2. Set convergence is equivalent to early decision agreement. $\Pi(T_n) \to \Pi^*$ if and only if, for any $L$, there exists $N_L$ such that if $n \ge N_L$, then

1. for any $\pi \in \Pi^*$, there exists $\pi^n \in \Pi(T_n)$ such that $\pi^n_k = \pi_k$, $1 \le k \le L$, and

2. for any $\pi^n \in \Pi(T_n)$, there exists $\pi \in \Pi^*$ such that $\pi^n_k = \pi_k$, $1 \le k \le L$.

In particular, if $\Pi(T_n) \to \Pi^*$ and $\pi_k = \pi^*_k$ for all $\pi \in \Pi^*$, $1 \le k \le L$, then there exists $N_L$ such that if $n \ge N_L$, then $\pi^n_k = \pi^*_k$, $1 \le k \le L$, for all $\pi^n \in \Pi(T_n)$.

R3. Set convergence implies convergence of nearest-point selections. Let $p$ be a point for which there is a unique $\pi \in \Pi^*$ minimizing $\rho(p, \pi)$. Then $p$ is called a uniqueness point for $\Pi^*$. Let $s_p(\Pi')$ be a strategy minimizing $\rho(p, \pi)$ over $\pi \in \Pi'$. The function $s_p(\cdot)$ is called a nearest-point selection. From Schochetman and Smith, $\Pi(T_n) \to \Pi^*$ implies $s_p(\Pi(T_n)) \to s_p(\Pi^*)$.

In the following, we show that the finite horizon and infinite horizon sets of strategies involved in solving an infinite horizon problem are necessarily closed. Next we define a unique lexicographic order for strategies and show that it is identical to the one obtained by ordering according to distance from the point $\theta \equiv (0, 0, \ldots)$, measured by $\rho$. These results, together with the continuity of $\rho$, imply that $\theta$ is a uniqueness point for $\Pi^*$ and interpret $s_\theta(\cdot)$ as a simple tie-breaking rule, analogous to one used for resolving degeneracy in linear programming.

Lemma 1 The sets $\Pi^*$ and $\Pi(T)$, where $\Pi(T)$ is any $T$-horizon set, are closed.

Proof: $\Pi^*$ is closed since $f_\pi$ is continuous in $\pi$ and $\Pi$ is compact. For $\Pi(T)$, note that there are a finite number of partial strategies up to time $T$. If $\Pi(T)$ is finite then it is closed. Suppose that $\Pi(T)$ is infinite and that $\pi$ is a cluster point of $\Pi(T)$. Let $\{\pi^n\}_{n=1}^{\infty}$ be such that $\pi^n \in \Pi(T)$ for all $n$ and $\pi^n \to \pi$ in $\rho$. Then $\{\pi^n\}_{n=1}^{\infty}$ is Cauchy, which implies that for some $N$, $\{\pi^n, n \ge N\}$ are in agreement with one another and with $\pi$ up to time $T$. Therefore $\pi \in \Pi(T)$. Thus $\Pi(T)$ contains all its cluster points, and $\Pi(T)$ is closed. ∎

Definition: Let $a = (a_1, a_2, \ldots)$ and $b = (b_1, b_2, \ldots)$. We say that $a \prec b$ ($a$ is lexicographically smaller than $b$) if and only if $a \ne b$ and, if $n_0$ is the smallest $n$ such that $a_n \ne b_n$, then $a_{n_0} < b_{n_0}$. A strategy $\pi^*$ is the lexico minimum of $\Pi'$ if $\pi^* \prec \pi$ for all $\pi \in \Pi'$, $\pi \ne \pi^*$.

Lemma 2 For any closed set $\Pi' \subseteq \Pi$, $s_\theta(\Pi')$ is the lexico minimum element of $\Pi'$ and therefore is unique.

Proof: We will show that:

1. $\rho(\theta, \pi)$ attains its minimum on $\Pi'$,

2. $\rho(\theta, \pi^1) < \rho(\theta, \pi^2)$ if and only if $\pi^1 \prec \pi^2$.

(1) and (2) imply that $\rho(\theta, \pi)$ attains its minimum at the unique lexico minimum point of $\Pi'$.

1. Follows immediately from the continuity of $\rho$ and compactness of $\Pi$.

2. Suppose $\pi^1 \prec \pi^2$. Then for some $n_0$ we have $\pi^1_n = \pi^2_n$ for $n < n_0$ and $\pi^1_{n_0} < \pi^2_{n_0}$.

By assumption, $\pi^1_n - \pi^2_n \ge -M$ for $n > n_0$. Then
$$\rho(\theta, \pi^2) - \rho(\theta, \pi^1) = \sum_{n=n_0}^{\infty} \beta^n (\pi^2_n - \pi^1_n) \ge \beta^{n_0} - \frac{M \beta^{n_0+1}}{1 - \beta} > 0,$$
since $\beta < \frac{1}{M+1}$ implies $\frac{M\beta}{1-\beta} < 1$. Hence, $\pi^1 \prec \pi^2$ implies $\rho(\theta, \pi^1) < \rho(\theta, \pi^2)$. Now suppose $\pi^1 \not\prec \pi^2$. Either $\pi^1 = \pi^2$ or $\pi^1 \succ \pi^2$. If $\pi^1 = \pi^2$ then $\rho(\theta, \pi^1) = \rho(\theta, \pi^2)$. If $\pi^1 \succ \pi^2$, then by the same argument as above, $\rho(\theta, \pi^1) > \rho(\theta, \pi^2)$. Hence $\pi^1 \not\prec \pi^2$ implies $\rho(\theta, \pi^1) \ge \rho(\theta, \pi^2)$. ∎

Lemmas 1 and 2 imply that we can apply result R3 whenever any sequence of $T_n$-horizon sets, $\{\Pi(T_n), n \in \mathbb{N}\}$, converges to $\Pi^*$. To summarize, we have:

Theorem 3 If $\{\Pi(T_n)\}$ is some collection of $T_n$-horizon sets, then the lexico minimum element of $\Pi(T_n)$ converges to the lexico minimum element of $\Pi^*$ whenever $\Pi(T_n) \to \Pi^*$.

Proof: Follows from result R3 and Lemma 2. ∎

Assuming $\Pi(T_n) \to \Pi^*$, Theorem 3 suggests the following algorithm: solve increasing length finite horizon problems, identifying for each $T_n$ the lexico minimum element of $\Pi(T_n)$. The sequence of strategies thus generated converges to the lexico minimum infinite horizon optimal strategy.
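As an illustration of Lemma 2, the short Python sketch below (not from the report; the helper names `rho_to_zero` and `lexico_min` and the truncation are assumptions) compares the nearest-point selection $s_\theta$ under a truncated $\rho$ with a direct lexicographic minimum over a small finite set of strategy prefixes; the two selections coincide.

```python
def rho_to_zero(pi, M, horizon=50):
    """Truncated rho(theta, pi) with theta = (0, 0, ...) and beta < 1/(M+1)."""
    beta = 1.0 / (M + 2)
    return sum(beta ** n * pi[n - 1] for n in range(1, min(horizon, len(pi)) + 1))

def lexico_min(strategies):
    """Lexicographic minimum of a finite set of decision-index tuples."""
    return min(strategies)            # Python tuples compare lexicographically

strategies = [(2, 1, 1, 1), (1, 2, 2, 2), (1, 2, 1, 0)]
M = 2
nearest_to_theta = min(strategies, key=lambda pi: rho_to_zero(pi, M))
assert nearest_to_theta == lexico_min(strategies)   # Lemma 2: the selections agree
print(nearest_to_theta)
```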

3 A Tie-Breaking Algorithm

Having seen that a tie-breaking algorithm can follow from set convergence, we now establish conditions for set convergence to occur. Various concepts of reachability (McKenzie 1976) have been used to identify solution horizons in the presence of a unique optimum (Lasserre 1986, Bean and Smith 1986). We show that a reachability condition guarantees the convergence of the efficient sets $\tilde{\Pi}(T)$ to $\Pi^*$ even in the absence of a unique optimum. This convergence leads to a tie-breaking algorithm for identifying solution horizons. Further, reachability sometimes can be determined simply from the problem structure.

Let $g(i)$ be the minimum cost from the root node to node $i$ and $g'(i)$ be the minimum cost from $i$ through the infinite horizon. Let $g(\cdot \mid i)$ be the cost of a minimum cost path from the root node over the infinite horizon which passes through node $i$. By the principle of optimality, $g(\cdot \mid i) = g(i) + g'(i)$. As in Bean and Smith [1986], define weak reachability as follows:

Definition: A sequence of nodes $\{i_n\}$, $i_n \to \infty$, is weakly reachable if, for all $\epsilon > 0$, there exists an $N_\epsilon$ such that for all $n \ge N_\epsilon$, there is a $j(n)$, an $i'_n$, and a path from $i^*_{j(n)}$ to $i'_n$ at cost $c_n$, with $g(\cdot \mid i_n) \le g(\cdot \mid i'_n) + \epsilon$, where $i^*_{j(n)}$ is a node for some optimal strategy, $i^*_{j(n)} \to \infty$, and $c_n \to 0$.

Bean and Smith show that $g(\cdot \mid i_n) \to f^*$ as $n \to \infty$ if and only if $\{i_n\}$ is weakly reachable. Let $i(\pi, T)$ be the node for $\pi$ at its first decision epoch greater than or equal to $T$. Thus $i(\pi, T)$ represents the head of the decision arc for $\pi$ that ends at or just after time $T$. Let $\{T_n\}$ be any sequence of finite horizons such that $T_n \to \infty$.

Theorem 4 If all sequences $\{i_k\} \subseteq \mathcal{N}$, $i_k \to \infty$, are weakly reachable, then $\tilde{\Pi}(T_n) \to \Pi^*$.

Proof: We will show that

1. $\limsup \tilde{\Pi}(T_n) \subseteq \Pi^*$,

2. $\Pi^* \subseteq \liminf \tilde{\Pi}(T_n)$.

(1) and (2) imply that $\limsup \tilde{\Pi}(T_n) = \liminf \tilde{\Pi}(T_n) = \Pi^*$. The result then follows from result R1 in Section 2.

1. Suppose $\pi \in \limsup \tilde{\Pi}(T_n)$. Then there exist $\{T_{n_k}\} \subseteq \{T_n\}$ and $\pi^k \in \tilde{\Pi}(T_{n_k})$ such that $\pi^k \to \pi$. Let $i_k = i(\pi^k, T_{n_k})$. Then $i_k \to \infty$ and $g(i_k) = f_{\pi^k}(T_{n_k})$. From the uniform convergence of $f_\pi$, we have $f_{\pi^k}(T_{n_k}) \to f_\pi$ and $g'(i_k) \to 0$, so that $g(\cdot \mid i_k) = g(i_k) + g'(i_k) \to f_\pi$. But $g(\cdot \mid i_k) \to f^*$ since $\{i_k\}$ is weakly reachable. Hence $f_\pi = f^*$, and $\pi \in \Pi^*$.

2. Suppose $\pi \in \Pi^*$. By the principle of optimality, $\pi$ is optimal to each node it traverses. Thus $\pi \in \Pi(i_n)$ for each $n$, where $i_n = i(\pi, T_n)$. Hence $\pi \in \tilde{\Pi}(T_n)$ for all $n$, and $\pi \in \liminf \tilde{\Pi}(T_n)$. ∎

Lemma 5 If $g(i) \le g(j)$ whenever $T_i \le T_j$, then all sequences $\{i_n\} \subseteq \mathcal{N}$, $i_n \to \infty$, are weakly reachable.

Proof: Choose any $\{i_n\} \subseteq \mathcal{N}$, $i_n \to \infty$, and any $\epsilon > 0$. Let $\{i^*_n\}$ be $\{i_n(\pi)\}$ for some $\pi \in \Pi^*$. Let
$$j(n) = \max\{j : T_{i^*_j} \le T_{i_n}\},$$

and $i'_n = i^*_{j(n)+1}$. Then $c_n = C(i^*_{j(n)}, i^*_{j(n)+1}) \to 0$ as $n \to \infty$ since $f_\pi < \infty$. By the uniform convergence of $f_\pi$ over $\pi \in \Pi$, there exists $N_\epsilon$ such that for $n \ge N_\epsilon$, $|g'(i_n)| < \epsilon/2$ and $|g'(i'_n)| < \epsilon/2$. Then for $n \ge N_\epsilon$,
$$g(\cdot \mid i_n) - g(\cdot \mid i'_n) = g(i_n) - g(i'_n) + g'(i_n) - g'(i'_n) < \epsilon,$$
since $g(i_n) - g(i'_n) \le 0$ by hypothesis and $g'(i_n) - g'(i'_n) \le |g'(i_n)| + |g'(i'_n)| < \epsilon$. Hence $g(\cdot \mid i_n) \le g(\cdot \mid i'_n) + \epsilon$. ∎

Theorem 6 If $g(i) \le g(j)$ whenever $T_i \le T_j$, then $\tilde{\Pi}(T_n) \to \Pi^*$.

Efficient set convergence, therefore, follows from a simple structural property: that the minimum cost to a node is monotonic in the time to reach that node. In general, this property will hold only for a restricted class of problems in which the decision network has been pruned sufficiently. For example, in the next section, we show that it holds in a general model of capacity expansion. Otherwise, weak reachability must be established directly (see Bean and Smith [1986] for several applications where weak reachability holds). Either this result or Theorem 4, together with Theorem 3, yields the following corollary. Let $\hat{\pi}(T_n)$ and $\hat{\pi}^*$ be the lexico minimum elements of $\tilde{\Pi}(T_n)$ and $\Pi^*$, respectively.

Corollary 7 If either of the conditions of Theorem 4 or Theorem 6 is satisfied, then $\hat{\pi}(T_n) \to \hat{\pi}^*$.

We can now present an algorithm for the case when efficient sets converge to the optimal set. Let $\hat{\Pi}(T)$ be the set $\tilde{\Pi}(T)$ with, for any node $n$, ties between strategies optimal up to $n$ broken by choosing the lexico minimum. Note that $\hat{\pi}(T)$ is its lexico minimum.

Tie-Breaking Algorithm

1. Let $\{T_n\}_{n=1}^{\infty}$ be any sequence of finite horizons with $T_n \to \infty$. Set $n \leftarrow 1$.

2. Solve the $T_n$-horizon problem to obtain $\hat{\Pi}(T_n)$.

3. If for each $\pi(T_n) \in \hat{\Pi}(T_n)$ and for all $1 \le k \le L$, $\pi_k(T_n) = \hat{\pi}_k(T_n)$, then stop. Otherwise, set $n \leftarrow n + 1$ and go to step 2.

Theorem 8 Suppose the condition of Theorem 4 or 6 is satisfied. Then

(i) If each strategy in $\Pi^*$ has the same first $L$ decisions, $\{\pi^*_1, \pi^*_2, \ldots, \pi^*_L\}$, then the algorithm will stop in finite time.

(ii) If the algorithm stops at $T_n$, then $\hat{\pi}^*_k = \hat{\pi}_k(T_n)$, $1 \le k \le L$, where $\hat{\pi}^*$ is the lexico minimum of $\Pi^*$.

Proof: (i) By hypothesis, $\tilde{\Pi}(T_n) \to \Pi^*$. Then, from result R2 in Section 2, there exists $N_L$ such that for $n \ge N_L$ we have $\pi^n_k = \pi^*_k$, $1 \le k \le L$, for each $\pi^n \in \tilde{\Pi}(T_n)$ and each $\pi^* \in \Pi^*$. Thus, since $\pi(T_n) \in \hat{\Pi}(T_n) \subseteq \tilde{\Pi}(T_n)$ and $\hat{\pi}(T_n) \in \hat{\Pi}(T_n)$, we have $\pi_k(T_n) = \hat{\pi}_k(T_n) = \pi^*_k$, $1 \le k \le L$.

(ii) For simplicity, we present the proof for $L = 1$. By the principle of optimality, $\hat{\pi}^* \in \tilde{\Pi}(T_n)$. Let $n^*$ be the node for $\hat{\pi}^*$ at or just beyond time $T_n$. Then $\hat{\pi}_1(T_n)$ and $\hat{\pi}^*_1$ both initiate paths that are optimal to $n^*$. By the definition of $\hat{\Pi}(T_n)$, $\hat{\pi}_1(T_n) \le \hat{\pi}^*_1$. Now suppose $\hat{\pi}_1(T_n) < \hat{\pi}^*_1$. Then a new strategy, $\pi'$, could be formed

by following $\hat{\pi}(T_n)$ to $n^*$ and continuing with $\hat{\pi}^*$ beyond $n^*$. Then $\pi' \in \Pi^*$ and $\pi' \prec \hat{\pi}^*$. This contradicts the definition of $\hat{\pi}^*$. ∎

Though uniqueness of the optimal initial decisions is sufficient for the algorithm to stop, it is not necessary. The next subsection gives an example in which the algorithm stops in the presence of infinitely many optimal strategies. Further, the minimal solution horizon may be shorter than that discovered by the algorithm. We can distinguish between two types of horizons. A forecast horizon for the first $L$ decisions is a finite horizon at which the algorithm's stopping rule is satisfied. It is called a forecast horizon since data beyond that time are irrelevant to the optimality of the first $L$ decisions. A solution horizon for $L$ is a time $T$ such that if $T' \ge T$, then $\hat{\pi}_k(T') = \pi^*_k$, $1 \le k \le L$, for some $\pi^* \in \Pi^*$. From Theorem 8, the existence of a forecast horizon follows from uniqueness of the first $L$ optimal decisions. From Corollary 7, a solution horizon exists regardless of uniqueness: for long enough $T$, the first $L$ decisions of $\hat{\pi}(T)$ agree with those of $\hat{\pi}^*$. However, in practice the solution horizon may be discovered only in retrospect, once a forecast horizon has been found. The minimal solution horizon for $L$ is no longer than the corresponding forecast horizon. Due to the lexico minimum selection rule, whether it is strictly shorter depends on the (arbitrary) assignment of indices to decisions. If leading optimal decisions have low indices, the solution horizon is likely to be relatively short; if they have high indices, the two horizons will be the same. Similarly, the extraction of $\hat{\Pi}(T)$ from $\tilde{\Pi}(T)$ also depends on the indices assigned to decisions. The minimal forecast horizon might be identified by testing the stopping rule for every possible numbering of initial decisions in each iteration. We did not implement this enhancement in our computations.
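The algorithm can be realized as a forward dynamic programming pass over the decision network (as is done in Section 4.1). The sketch below is a minimal, illustrative Python rendering rather than the authors' implementation; the data layout (`nodes`, `arcs`), the frontier construction, and the helper name `tie_breaking_pass` are assumptions made for the example. Ties are broken by comparing (cost, decision-prefix) pairs, so equal-cost paths to a node retain the lexicographically smallest prefix.

```python
def tie_breaking_pass(nodes, arcs, T, L):
    """One iteration of the tie-breaking algorithm at horizon T (sketch).

    nodes: dict node -> decision epoch T_i, with root node 0 and epoch 0.
    arcs:  dict node -> list of (successor, decision_index, cost).
    Returns (stop, prefix): stop is True when every T-horizon efficient
    prefix agrees with the lexico minimum one in its first L decisions.
    """
    # Forward DP in topological order (nodes are numbered so arcs go forward):
    # keep each node's minimum cost and, among ties, the lexico-min prefix.
    best = {0: (0.0, ())}
    for i in sorted(nodes):
        if i not in best:
            continue
        g_i, prefix_i = best[i]
        for j, decision, cost in arcs.get(i, []):
            candidate = (g_i + cost, prefix_i + (decision,))
            if j not in best or candidate < best[j]:
                best[j] = candidate
    # Efficient prefixes at horizon T: optimal prefixes to heads of arcs that
    # end at or just beyond time T.
    frontier = [best[j][1] for i in best for j, _, _ in arcs.get(i, [])
                if j in best and nodes[i] < T <= nodes[j]]
    if not frontier:
        return False, None
    champion = min(frontier)              # lexico minimum efficient prefix
    stop = all(p[:L] == champion[:L] for p in frontier)
    return stop, champion[:L]
```

A driver would call `tie_breaking_pass` for an increasing sequence of horizons $T_1 < T_2 < \cdots$ and report the first $L$ decisions of the returned prefix at the first horizon for which `stop` is True.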

3.1 Regeneration Point Problems

The tie-breaking algorithm is simplified when applied to regeneration point problems. Recall that a decision epoch, $T$, is a regeneration point if the feasible decisions and costs beyond $T$ are independent of the sequence of decisions up to time $T$. A problem formulation has regeneration point structure, or is a regeneration point problem, if the decision epochs for all strategies are regeneration points. Then the decision network may be aggregated so that there is at most one node for each point in time. Further, without loss of optimality, the network may be pruned so that $g(i) \le g(j)$ for $T_i \le T_j$. Examples include the discounted knapsack problem of Shapiro and Wagner and production planning problems with the Wagner-Whitin property.

Assume that the maximum time between any two successive nodes on the same path is bounded by $\tau$. Then $\tilde{\Pi}(T) = \bigcup_{T' \in [T, T+\tau]} \Pi^*(T')$. Also, letting $\pi^*(T)$ be the lexico minimum strategy in $\Pi^*(T)$, it follows that $\hat{\Pi}(T) = \{\pi^*(T'),\ T' \in [T, T+\tau]\}$. The tie-breaking algorithm stops when $(\pi^*_1(T'), \pi^*_2(T'), \ldots, \pi^*_L(T'))$ is the same for each $T' \in [T, T+\tau]$. This stopping rule is similar to that of Bean and Smith [1984], but will be satisfied more often due to the tie-breaking scheme.

Shapiro and Wagner showed that the infinite horizon discounted knapsack problem is solved by following a "turnpike" of least average cost decisions. If there is more than one such decision, then $\Pi^*$ is the (infinite) set of all possible sequences of them. However, in the knapsack problem, one can show that $\Pi^*(T) \to \Pi^*$ (Ryan 1988). Then by result R3 in Section 2, $\pi^*(T) \to \hat{\pi}^*$, and therefore $\pi^*_1(T) = \hat{\pi}^*_1$ for all $T$ sufficiently large, where $\pi^*(T)$ and $\hat{\pi}^*$ are the lexico minimum $T$-horizon and infinite horizon optimal solutions, respectively. Then $\pi^*_1(T') = \pi^*_1(T)$ for all $T$ sufficiently large and $T' \in [T, T+\tau]$. Hence, for $T$ sufficiently large, all solutions in $\hat{\Pi}(T)$ have the same initial decision, so that the tie-breaking algorithm's

stopping rule is satisfied for $L = 1$. In fact, the tie-breaking algorithm is a generalization of the Shapiro-Wagner algorithm.

4 Application to Capacity Expansion

We now apply the convergence results and algorithm to a general capacity expansion model. As in Bean and Smith [1985], suppose we are given a continuous demand function $D(t)$, which may be satisfied by any of a finite set of replicable facilities, indexed by $i = 1, \ldots, n$. Facility $i$ incurs a fixed installation cost $F_i$ and provides $X_i$ units of capacity. No undercapacity is allowed and costs are discounted. The case when $D(t) = dt$ is equivalent to the discounted knapsack problem.

We can assume without loss of optimality that a new facility is never deployed until all existing capacity is exhausted. Therefore, restrict $\Pi$ to contain only strategies with this characteristic. The nodes in the decision network then represent installation epochs. Bean and Smith further argue that if $T_i < T_j$ and $g(i) > g(j)$ for some pair of nodes $i$ and $j$, then $i$, resulting from a situation of low capacity at high cost, cannot be an installation epoch for an optimal strategy. Once all such dominated nodes are pruned from the network, the condition of Theorem 6 is immediately satisfied. Therefore $\tilde{\Pi}(T_n) \to \Pi^*$ and $\hat{\pi}(T_n) \to \hat{\pi}^*$.

4.1 Computational Experience

The Tie-Breaking Algorithm can be implemented as a traditional forward dynamic programming procedure, with little additional bookkeeping. We tested it using telephone capacity expansion problems from the literature, randomly generated problems, and a specially-designed example. Dominated nodes were pruned from the network as they were discovered by the dynamic programming procedure.
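The pruning step can be made concrete with a small sketch. The following Python fragment (illustrative, not the authors' implementation; the pair format and the helper name `prune_dominated` are assumptions) removes dominated installation epochs so that the minimum cost to a node becomes nondecreasing in its epoch, which is exactly the condition of Theorem 6.

```python
def prune_dominated(epochs):
    """Remove dominated installation epochs.

    epochs: list of (T_i, g_i) pairs with distinct times, where g_i is the
    minimum cost to reach epoch T_i.  Epoch i is dominated when some epoch j
    has T_j > T_i and g_j < g_i: more capacity is obtained by a later time at
    strictly lower cost.  Returns the undominated pairs in time order.
    """
    kept = []
    cheapest_later = float("inf")        # cheapest g among strictly later epochs
    for T_i, g_i in sorted(epochs, reverse=True):
        if g_i <= cheapest_later:        # no later epoch is strictly cheaper
            kept.append((T_i, g_i))
        cheapest_later = min(cheapest_later, g_i)
    kept.reverse()
    return kept

# The epoch reached at time 3.0 for cost 7.0 is dominated by the epoch at
# time 4.0 reached for cost 6.5, so it is removed.
print(prune_dominated([(1.0, 2.0), (3.0, 7.0), (4.0, 6.5), (6.0, 9.0)]))
```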

For the four telephone link capacity expansion examples in Bean and Smith [1985], the algorithm identified forecast horizons for the initial decision that were comparable to those previously computed. In addition, the first ten optimal decisions were discovered with forecast horizons of approximately 50 years or less. For the exponential demand example of Smith [1979], the forecast horizon for the first decision was eleven years, and the first nine decisions were uncovered with a 40-year horizon. Thus, for problems appearing in the literature, the tie-breaking algorithm performs as well as previous algorithms, but also uncovers more of the optimal strategy.

To further test the algorithm while simulating real capacity decision problems, we generated five random sets of nine facilities each. The capacities were generated uniformly between 0 and 100,000 units. Facility fixed costs were assigned according to the Dixon-Clapp relation (Yaged 1975): $F_i = K X_i^{\gamma}$, where $\gamma$ is an economy of scale factor and $K$ is a constant (set to 1 for convenience). For these randomly generated facilities, the indices assigned to decisions were unrelated to the relative sizes of the facilities. Thus, a lexico minimum selection is completely arbitrary. As noted previously, there is no way to predict whether solution horizons will be strictly shorter than the corresponding forecast horizons.

To construct the first type of demand function tested, we generated incremental annual demands randomly between 10,000 and 50,000 units. Thus the average facility of 50,000 units would exhaust in one to five years. Each set of facilities was tested twice, giving a total of ten test problems. The interest rate was set to 10.5 percent, as in the literature, and the economy of scale factor to 0.5. In all ten problems, the optimal solution (up to the first ten decisions) was to always install the largest (and eventually least average cost) facility.
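For concreteness, the random instance generation just described can be sketched as follows (Python; the function and parameter names are illustrative assumptions, and the interest rate is omitted because it enters only when an instance is solved).

```python
import random

def generate_instance(n_facilities=9, gamma=0.5, K=1.0, years=60, seed=None):
    """Generate one random capacity expansion test instance (sketch).

    Capacities are uniform between 0 and 100,000 units; fixed costs follow the
    Dixon-Clapp relation F = K * X**gamma; cumulative demand grows by a uniform
    annual increment between 10,000 and 50,000 units.
    """
    rng = random.Random(seed)
    capacities = [rng.uniform(0, 100000) for _ in range(n_facilities)]
    fixed_costs = [K * x ** gamma for x in capacities]
    demand, level = [], 0.0
    for _ in range(years):
        level += rng.uniform(10000, 50000)   # incremental annual demand
        demand.append(level)
    return capacities, fixed_costs, demand

capacities, fixed_costs, demand = generate_instance(seed=1)
print(capacities[0], fixed_costs[0], demand[:3])
```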

Figure 1 shows forecast horizons, in number of decisions implemented, for each $L$, $1 \le L \le 10$. The best case, worst case, and average over all ten test problems are displayed for comparison. The $L$th set of bars from the bottom gives the horizon length needed to discover the optimal $L$th decision. For $L = 1$, the forecast horizon ranged between 6 and 12.5 years, with an average of 9 years. In order to normalize over the random facility sizes, we report the horizons in terms of the average number of installations required to reach them efficiently. That is, the horizontal axis gives, rather than $T$, the average number of decisions up to time $T$ for strategies in $\tilde{\Pi}(T)$. An alternate way of interpreting the graph is that, for example, on average for $L = 5$, when eight installations are required to reach a given time horizon efficiently, the first 5 facilities installed by any lexico minimum efficient strategy are optimal. Notice that, after a delay of approximately four installations, the forecast horizons increase approximately linearly, with slope nearly one, in the number of decisions uncovered. In some instances, solution horizons were considerably shorter than the corresponding forecast horizons. Figure 1 also displays solution horizons for the average case.

In order to judge the algorithm's performance when the optimal strategy contains a variety of decisions, we designed a second type of random demand function. We started with a sine wave with amplitude 50,000 and period 20 years. In an application such as a telephone link, incremental demand may be negative as traffic is diverted to other parts of the network. To this cyclic pattern we added random annual increments of between 0 and 25,000 units. The interest rate was held at 10.5 percent, but the economy of scale factor was changed to 0.3 to provide a wider gap between the costs of large and small facilities. Once again, each facility set was tested with two demand functions for a total of ten test problems. In the optimal strategies, large facilities alternated with smaller ones in an unpredictable sequence. Figure 2 shows the best, average, and worst case forecast horizons for the cyclic demand function, measured again by efficient facility installations.

[Figure 1: Forecast horizons for the linear demand function, measured by the average number of decisions implemented by efficient strategies. The best, average, and worst case forecast horizons are shown, as well as solution horizons for the average case.]

[Figure 2: Forecast horizons for the cyclic demand function.]

Though they are less smooth than for linear demand, the forecast horizons once again increase approximately linearly in the number of optimal decisions discovered, with a slope slightly more than one. The figure also gives a comparison of the corresponding solution and forecast horizons for the average case. The unevenness of the successive forecast horizons can be attributed to the cyclic demand. As the horizon length, $T$, increased through years of declining demand, the sets $\tilde{\Pi}(T)$ typically remained unchanged: subtracting capacity was not allowed, and there was no reason to add more. Then, as demand started to increase again, strategies in $\tilde{\Pi}(T)$ were re-evaluated. As demand grew rapidly, several successive optimal decisions might be determined all at once with a small increase in $T$.

The longest forecast horizons occurred with a facility set in which the capacities of the

two largest facilities differed by less than 0.5 percent. Most of the computational effort was spent resolving ties between these two facilities, leading to solution and forecast horizons of nearly 50 years for the first decision. Such a situation would be unlikely in practice. For the other four facility sets, forecast horizons for $L = 1$ were between 7.5 and 28 years, and the corresponding solution horizons were between 0 and 9 years.

We also tested the algorithm with an example specially constructed to defeat the stopping rule. In this example $D(t) = e^{\alpha t} - 1$ for a growth rate $\alpha$, a special case of exponential demand as studied by Smith [1979]. There are two facilities, with $X_1 = 2$, $F_1 = 1$, $X_2 \approx 0.1052$, and $F_2 \approx 0.3314$. The interest rate is set at 0.4. Let $\pi^1 = (1, 1, 1, \ldots)$ be the strategy of installing facility 1 indefinitely. Let $\pi^2 = (2, 1, 1, 1, \ldots)$ be the strategy of installing facility 2 once, then facility 1 indefinitely. One can show that $\pi^1$ and $\pi^2$ are both infinite horizon optimal. Note that decision epochs for $\pi^1$ and $\pi^2$ never coincide: decision epochs for $\pi^1$ occur when $D(t) = nX_1$ for some integer $n$, and those for $\pi^2$ occur when $D(t) = 0$ or $D(t) = X_2 + mX_1$ for some integer $m$. We can also show that each strategy is optimal to its own decision epochs and nonoptimal to the other strategy's decision epochs. It follows that $\hat{\Pi}_1(T) = \{1, 2\}$ for all $T$, and the tie-breaking algorithm's stopping rule is never satisfied. Details are contained in Ryan [1988].

However, the algorithm is still informative when applied to this example. The finite horizon efficient sets quickly settle into a pattern, allowing recognition that both optimal strategies are at least near-optimal. Moreover, the lexico minimum efficient strategies (the lexicographically smallest entries in each column of Table 1) converge to $(1, 1, \ldots)$. Efficient sets for selected finite horizons are shown in Table 1. Notice that nonoptimal leading sequences soon disappear from $\hat{\Pi}(T)$ as $T$ increases.

T=1    T=5       T=10      T=15       T=20      T=25
21     2222221   2222221   21222221   2111221   21111122222222221
1      222221    222221    2122221    211121    2111112222222221
22     22221     22221     212221     21111     211111222222221
       2221      2221      21221      1111      21111122222221
       221       221       2121       2111222   2111112222221
       21        21        211                  211111222221
       1         1         11                   21111122221
                                                 2111112221
                                                 211111221
                                                 21111121
                                                 2111111
                                                 111111

Table 1: Efficient sets, $\hat{\Pi}(T)$, for an example where the stopping rule fails.

5 Conclusions

Solution horizon methods for solving infinite horizon problems have been burdened by the requirement of a unique optimum. As shown by Ryan and Bean, and illustrated by our example, such an assumption may not be true or easily verified. The tie-breaking algorithm is an efficient approach for finding solution horizons in the presence of multiple optima. When the stopping criterion is not met, the nature of the set convergence still allows a decision maker to select intelligently among the finite horizon efficient solutions and thereby approximate an optimal strategy by solving finite horizon problems. In future research, we hope to develop bounds on the nearness to optimality of finite horizon efficient solutions. Such bounds will allow the derivation of near-solution horizons.

Acknowledgement

We would like to thank Greg Miller for computer implementation of the tie-breaking algorithm, funded under NSF Grant ECS-8700836.

References

Bean, J. and R. L. Smith. 1984. Conditions for the Existence of Planning Horizons. Mathematics of Operations Research 9, 391-401.

Bean, J. and R. L. Smith. 1985. Optimal Capacity Expansion Over an Infinite Horizon. Management Science 31, 1523-1532.

Bean, J. and R. L. Smith. 1986. Conditions for the Discovery of Solution Horizons. Technical Report 86-23, Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109.

Bes, C. and S. Sethi. 1988. Concepts of Forecast and Decision Horizons: Applications to Dynamic Stochastic Optimization Problems. Mathematics of Operations Research 13, 295-310.

Chand, S. and T. Morton. 1986. Minimal Forecast Horizon Procedures for Dynamic Lot Size Models. Naval Research Logistics Quarterly 33, 111-122.

Hopp, W., J. Bean and R. L. Smith. 1987. A New Optimality Criterion for Nonhomogeneous Markov Decision Processes. Operations Research 35, 875-883.

Lasserre, J. 1986. Detecting Planning Horizons in Deterministic Infinite Horizon Optimal Control. IEEE Transactions on Automatic Control AC-31, 70-72.

McKenzie, L. W. 1976. Turnpike Theory. Econometrica 44, 841-844.

Ryan, S. M. 1988. Degeneracy in Discrete Infinite Horizon Optimization. Ph.D. dissertation, Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan.

Ryan, S. M. and J. Bean. 1987. Degeneracy in Infinite Horizon Optimization. Technical Report 87-5, Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109-2117. To appear in Mathematical Programming.

Schochetman, I. and R. L. Smith. 1987. Infinite Horizon Optimization. Technical Report, Department of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109-2117. To appear in Mathematics of Operations Research.

Shapiro, J. F. and H. M. Wagner. 1967. A Finite Renewal Algorithm for the Knapsack and Turnpike Models. Operations Research 15, 319-341.

Skilton, D. K. 1985. Imbedding Posets in the Integers. Order 1, 229-233.

Smith, R. L. 1979. Turnpike Results for Single Location Capacity Expansion. Management Science 25, 474-484.

Yaged, B. 1975. Economies of Scale, Networks, and Network Cost Elasticity. IEEE Transactions on Systems, Man and Cybernetics 5, 30-39.
