THE UNIVERSITY OF MICHIGAN
COMPUTING RESEARCH LABORATORY¹

PERFORMABILITY MODELS AND SOLUTIONS

David G. Furchtgott

CRL-TR-8-84

JANUARY 1984

Room 1079, East Engineering Building
Ann Arbor, Michigan 48109 USA
Tel: (313) 763-8000

¹Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author.


PERFORMABILITY MODELS AND SOLUTIONS

by
David Grover Furchtgott

A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
(Computer, Information and Control Engineering)
in The University of Michigan
1984

Doctoral Committee:
Professor John F. Meyer, Chairman
Professor Daniel E. Atkins
Professor Arch W. Naylor
Associate Professor Robert L. Smith
Professor Bernard P. Zeigler, Wayne State University

© David Grover Furchtgott 1984
All Rights Reserved

ABSTRACT

PERFORMABILITY MODELS AND SOLUTIONS

by David Grover Furchtgott

Chairman: John F. Meyer

A principal goal of computing system evaluation is the measurement of the system's ability to perform. Measures such as performance, reliability, and effectiveness are often employed, but such metrics are often not suitable for evaluating systems in the increasingly important class of degradable systems. Among the measures proposed for such systems is "performability," which is simply the probability measure of the system performance variable. Classical performance and classical reliability are special cases of performability. To be effective, performability evaluation requires tractable techniques of solution.

This dissertation concerns the modeling, calculation, and use of a system's performability. Specifically, we examine two broad classes of performability models: (1) those wherein system performance assumes values in an arbitrary, finite set, and (2) those wherein performance is continuous and identified with "reward." For the first class of models, a methodology for relating low-level behavior to high-level system performance is formalized. Behavior is characterized in terms of finite arrays of variables having finite domains. A calculus is developed for manipulating such arrays, and algorithms are described for deriving the set of low-level behaviors which result in each system performance level. The probability of each performance level (and hence the performability) is obtained by calculating the probability of the corresponding set of behaviors. Also described and illustrated is METAPHOR, a computer package implementing these algorithms.

For the second class of performability models, a general method for determining the probability distribution function of the performance variable (i.e., the performability) is derived. The result is an integral expression which can be solved either analytically or numerically. Examples of both types of solutions are given, and procedures implementing the numerical solution and included within METAPHOR are described.
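The discrete-performance approach described above can be sketched in miniature: enumerate the low-level trajectories of a small system, map each trajectory to an accomplishment level through a capability function, and sum the probabilities of each level's trajectory set. The sketch below is purely illustrative and is not taken from the dissertation: the two-component, two-phase system, the capability function, and the per-phase up-probability `P_UP` (with phases assumed independent for simplicity) are all invented assumptions.

```python
from itertools import product

P_UP = 0.9  # assumed probability a component is up during a phase

def capability(trajectory):
    """Map a low-level trajectory (component states per phase) to an
    accomplishment level: 2 = full service, 1 = degraded, 0 = failed."""
    up_counts = [sum(1 for c in phase if c) for phase in trajectory]
    if all(n == 2 for n in up_counts):
        return 2
    if all(n >= 1 for n in up_counts):
        return 1
    return 0

def trajectory_prob(trajectory):
    """Probability of one trajectory, assuming independent phases."""
    p = 1.0
    for phase in trajectory:
        for c in phase:
            p *= P_UP if c else 1.0 - P_UP
    return p

def performability():
    """P(Y = a) for each accomplishment level a, obtained by summing
    the probabilities of the trajectory set mapped to a (i.e., the
    inverse image of a under the capability function)."""
    phases = list(product([True, False], repeat=2))  # states of 2 components
    dist = {0: 0.0, 1: 0.0, 2: 0.0}
    for traj in product(phases, repeat=2):           # 2 phases
        dist[capability(traj)] += trajectory_prob(traj)
    return dist
```

With two components and two phases there are only 16 trajectories, so exhaustive enumeration suffices; the dissertation's calculus of trajectory sets exists precisely because realistic systems make such enumeration intractable.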

TABLE OF CONTENTS

DICTIONARY OF PRINCIPAL SYMBOLS .......... ix
LIST OF DEFINITIONS .......... xxvii
LIST OF TABLES .......... xxxii
LIST OF FIGURES .......... xxxiii
LIST OF KEY PROPOSITIONS .......... xxxiv
LIST OF ALGORITHMS .......... xxxvi
LIST OF APPENDICES .......... xxxviii

CHAPTER

1. INTRODUCTION .......... 1
   1.1 Problem Statement .......... 1
   1.2 Thesis .......... 3
   1.3 Research Objectives .......... 3
   1.4 Dissertation Organization .......... 5
2. BACKGROUND AND LITERATURE SURVEY .......... 6
   2.1 Introduction .......... 6
   2.2 Degradable Systems .......... 6
   2.3 The Evaluation Procedure .......... 8
   2.4 A Taxonomy of Measures and Models of Degradable Systems (Table 1) .......... 23
       2.4.1 Introduction .......... 23
       2.4.2 The Classifications (Column 1) .......... 24
       2.4.3 Components of the Models (Columns 3-5) .......... 25
       2.4.4 The System Description Function (Columns 6-7) .......... 26
   2.5 Description of the Structure of Degradable Systems (Table 2) .......... 28
       2.5.1 Introduction .......... 28
       2.5.2 Representation and Acquisition of the Capability Function (Columns 2-5) .......... 28
   2.6 Determining the Probability Measure of System Performance (Table 3) .......... 29
       2.6.1 Introduction .......... 29
       2.6.2 Calculating the Measure (Columns 2-5) .......... 29
   2.7 Structure-Based Models .......... 29
   2.8 Other Reliability-Oriented Models .......... 31
   2.9 Performance-Oriented Models .......... 32
3. FUNDAMENTAL CONCEPTS AND RESULTS .......... 36
   3.1 Introduction .......... 36
   3.2 Motivation .......... 37
   3.3 An Informal Introduction to Performability .......... 38
       3.3.1 Introduction .......... 38
       3.3.2 The Accomplishment Set .......... 39
       3.3.3 The Trajectory Space .......... 40
       3.3.4 The Capability Function .......... 41
       3.3.5 The Performability Model .......... 42
       3.3.6 Solving for the Performability .......... 42
       3.3.7 Constructing Performability Models .......... 43
       3.3.8 The Model Hierarchy .......... 48
       3.3.9 Difficulties of Employing Moments Rather than Performability .......... 52
       3.3.10 Notes for Section 3.3: An Informal Introduction to Performability .......... 56
   3.4 Models Having Finite Performance Variables .......... 66
       3.4.1 Basic Concepts .......... 67
       3.4.3 Algorithms for Calculating Trajectory Sets .......... 70
       3.4.4 METAPHOR: A Performability Modeling and Evaluation Tool .......... 73
       3.4.5 Examples .......... 75
   3.5 Models Having Continuous Performance Variables .......... 76
       3.5.1 Basic Concepts .......... 77
       3.5.2 Reward Models and Nonrecoverable Processes .......... 79
       3.5.3 Solution of Finite-State Nonrecoverable Processes .......... 79
4. DISCRETE PERFORMANCE VARIABLE (DPV) METHODOLOGY .......... 81
   4.1 Trajectory Sets: Basic Notation and Operations .......... 81
       4.1.1 Notation and Terminology .......... 81
       4.1.2 A Calculus of Trajectory Sets .......... 87
   4.2 Discrete Functions and the Representation of Trajectory Sets .......... 96
       4.2.1 Discrete Functions .......... 96
       4.2.2 Discrete Functions and Capability Functions .......... 98
       4.2.3 Alternative Representations of Trajectory Sets .......... 101
   4.3 Calculation of Trajectory Sets .......... 122
       4.3.1 Representation of Discrete Functions Within METAPHOR .......... 123
       4.3.2 Notation .......... 132
       4.3.3 Algorithms .......... 139
   4.4 METAPHOR: A Performability Modeling and Evaluation Tool .......... 169
   4.5 Examples .......... 170
       4.5.1 Simple Reliability Network Example .......... 170
       4.5.2 Simple Air Transport Mission Example .......... 172
       4.5.3 SIFT Computer Example .......... 172
       4.5.4 Dual-Dual Example .......... 173
5. CONTINUOUS PERFORMANCE VARIABLE (CPV) METHODOLOGY .......... 175
   5.1 Introduction .......... 175
   5.2 Reward Models .......... 178
       5.2.1 Definition of Reward Models .......... 178
       5.2.2 An Example of a Reward-Based Performability Model .......... 180
       5.2.3 Determination of Reward Rates .......... 182
       5.2.4 Nonrecoverable Processes .......... 184
       5.2.5 Acyclic Processes .......... 187
   5.3 Reward Model Solution .......... 191
       5.3.1 A Partition of the Trajectory Space .......... 191
       5.3.2 Notation .......... 191
       5.3.3 The Approach .......... 192
       5.3.4 Formulation of F_{Y|U} .......... 195
       5.3.5 i-Resolvability .......... 196
       5.3.6 An Algorithm for Determining C .......... 208
       5.3.7 Conditions for v Being i-Resolvable .......... 209
       5.3.8 Solution of Finite-State Acyclic Reward Models .......... 212
       5.3.9 Closed-Form Solutions .......... 235
       5.3.10 Recursive Formulation of F_{Y|U} .......... 242
       5.3.11 Numerical Solutions .......... 250
6. CONCLUSIONS .......... 257
   6.1 Contributions .......... 257
   6.2 Further Research .......... 258
APPENDICES .......... 261
BIBLIOGRAPHY .......... 414

DICTIONARY OF PRINCIPAL SYMBOLS

Symbol    Definition    Page

0    index denoting the top level of a model hierarchy    48
0    least element of a lattice    105
(A, p)    performability probability space    57
A    accomplishment set    39
A    accomplishment set    98
a    an element of the accomplishment set A    39
A    finite set of non-negative integers    96
a    weight of a cube    111
a    weight of an anticube    113
a_i    ith accomplishment level    67
[a_kh] : p x r    array representation of the disjunctive normal form of the function f    116
A^N    lattice formed by the direct product of N lattices A    105
A'    tentative accomplishment set    44
ATC    air traffic control    85
B    BCPERF    26
B    Boolean lattice    103
B    set of measurable accomplishment levels    57
B_{2,r}    free Boolean lattice on r+1 generators    104
[b_jnh] : N_k    lattice exponent    127
[b_jknh] : r x s x p_i x N_j    array of lattice exponents    127
B_k    in the example of Section 3.3.9, a measurable set of accomplishment levels: {a | a ≥ h_k}    53
[b_j] : N_j    binary vector denoting the subset C_jk of U_j    117
C    array of lattice exponents    116
c(x)    cube function    111
(C_j)    complement of a cube function    124
(C_ijknh)    lattice exponent    127
(C_jknh)    the jkth exponent of the hth cube function having weight a_n    127
[(C_jknh)] : r x s x p    array of lattice exponents    127
C^A    p x r array of lattice exponents    116
c(x)^e    complement of the exponent of a cube function c(x)    124
d    depth of a node within a tree    144
DOMAIN(f)    the domain of the function f
IMAGE(f)    the image of the function f
d(x)    anticube function    112
div    integer division    145
EXTEND    function that extends a tentative trajectory u' ∈ U' so that the trajectory is a total function    46
E[X]    the expected (mean) value of the random variable X
E    basic model event space    58
E_i    ith-level basic model event space    63
F    set of measurable trajectories of U    57
F    array representation of the disjunctive normal form of the function f    116
F    disjunctive normal form of a discrete function f    115
f    integer function    96
f̄    negation of a discrete function f    113
FCS    flight control system    85
F^i    ith-level trajectory event space    64
f_j    the jth projection of f    98
F_{Y|U,k}(y | u_k)    the distribution of Y for system S_k given the sequence u_k    243
f_i    the density of the sojourn time in state i, conditioned on the history of the process    243
G    conjunctive normal form of a discrete function f    115
g    system description function    26
h    in the example of Section 3.3.9, the length of the utilization period T    53
h    random variable relating the base probability space (Ω, E, P) to the trajectory probability space    59
h_i    random variable relating the ith-level base probability space (Ω_i, E_i, P_i) to the ith-level basic trajectory probability space    64
k    in the example of Section 3.3.9, the fraction of the utilization period    53
1    greatest element of a lattice    105
m    index denoting the bottom level of a model hierarchy; there are m+1 levels in the hierarchy    48
a mod b    a modulo b, i.e., the remainder when a is divided by b    145
N    number of entries in the value vector of discrete function f    102
NAV    navigation    85
N_j    number of elements in U_j    117
P    trajectory set    90
p    vector of the number of cube functions having weight p_i    127
p    number of array products in a trajectory set    90
p    performability    57
PERF    performance set    25
P    basic model probability measure    58
P = [p_0 ... p_l]    vector of the number of cube functions having weight p_i    127
P^c    complement of an array product P    94
p_{γ0}    array of the number of cube functions comprising γ0    135
p_{γ̄0}    array of the number of cube functions comprising γ̄0    135
p_{γi}    array of the number of cube functions comprising γi    138
p_{γ̄i}    array of the number of cube functions comprising γ̄i    138
P_i    ith-level basic probability measure    63
p_{κ0}    array of the number of cube functions comprising κ0    134
p_{κ̄0}    array of the number of cube functions comprising κ̄0    134
p_{κj0k0}    array of the number of cube functions comprising κ_{j0k0}    134
p_{κ̄j0k0}    array of the number of cube functions comprising κ̄_{j0k0}    134
Pr    trajectory probability space probability measure    57
P_r    distributive lattice    103
P_r    free distributive lattice on r+1 generators    103
Pr^i    ith-level trajectory probability measure    64
P_SIM    in the example of Section 3.3.9, the performability of the simplex computer    53
P_TMR    in the example of Section 3.3.9, the performability of the triple modular redundant computer    53
P_{S_i}    performability distribution of system S_i    242
Q    state space of the base model    40
q    structure    26
Q^i    level-i composite state space    82
Q'    tentative state space    46
r    number of attributes (components or rows) of an array product    82
R    set of real numbers    59
r_i    number of level-i composite components    83
[R_jk] : r x s    array product (trajectory set)    90
R^n    n-dimensional space of real numbers    59
R_ij    subset of Q_ij in an array product    89
s    number of columns of an array product    82
STRUC    structure set    25
SIM    in the example of Section 3.3.9, the simplex computer    53
S_i    a system with i+1 states which degrades in the state sequence u_i = (u_i, u_{i-1}, ..., u_0)    242
OBSER    time base    25
t    observation time    26
T    parameter set of the base model    40
t    time t ∈ T    40
T_s^i    level-i basic state space    82
T_o^i    level-i basic utilization period    82
T_c^i    level-i composite utilization period    82
T^i    level-i parameter set    62
TMR    in the example of Section 3.3.9, the triple modular redundant computer    53
(U, F, Pr)    trajectory probability space    57
u    trajectory    26
u    trajectory in the trajectory space U    40
U    trajectory space    40
u_b^i    basic trajectory at level-i    82
U_b^i    level-i basic trajectory space    82
u_{bk}^i    level-i basic component k    83
u_b^i(t_k)    level-i basic observation at time t_k    83
[u_j(t)]    component trajectory of the jth basic component    85
u_j(t_k)    a basic variable    85
u_j    component trajectory of the jth basic component    85
u_{bj}^i(t_k)    level-i basic component j observed at time t_k    84
{u_1, u_2, ..., u_r}    trajectory set    88
u    trajectory    88
[u_jk] : r x s    trajectory    88
UCONS    set of consistent trajectories    47
u_b    basic trajectory    83
u_c    composite trajectory    83
u_c^i    composite trajectory at level-i    82
U_c^i    level-i composite trajectory space    82
u_{ck}^i    level-i composite component k    83
UINCONS    set of inconsistent trajectories    47
u_c^i(t_k)    level-i composite observation at time t_k    83
u_j(t_k)    a composite variable    85
u_j    component trajectory of the jth composite component    85
u_{cj}^i(t_k)    level-i composite component j observed at time t_k    84
U_b^i    level-i basic trajectory space    98
U_b^i    level-i basic model trajectory space    99
U_b    basic model trajectory space    99
U_c^i    level-i composite trajectory space    98
U^i    level-i trajectory space    98
U_i    ith Cartesian component of set U    97
U_ji    ith Cartesian component of the jth Cartesian component of the set U    97
U_{b,jk}^i    level-i basic trajectory space of component p during phase k    99
U_{c,jk}^i    level-i composite trajectory space of component p during phase k    99
(U^i, F^i, Pr^i)    ith-level trajectory probability space    64
U^i    level-i trajectory space    48
U^i    level-i trajectory space    82
Ū^i    the level-i trajectory space along with all the basic trajectory spaces of higher-level models    50
u_{ij}^k    behavior of component i during phase j at level of abstraction k    69
[u_j(t_k)] : (r_i + p_i + 1)    trajectory observation at time t_k    85
u(t_k)    trajectory observation at time t_k    85
[u_ij] : (r_i + p_i + 1)    matrix expansion of level-i trajectory    83
[u(t_k)]_jk : b x s    matrix expansion of level-i trajectory    83
u_jk    the jkth entry of trajectory u    88
[u_jk] : (r_i + p_i + 1) x s_i    matrix expansion of a level-i trajectory    84
u'    a partial low-level trajectory    45
U'    tentative trajectory space    45
u_ω^i    sample function of the process X^i (the ith-level base model)    63
u_ω    sample function of the process X (the base model)    58
v_i    all the remaining time in [0, t] after accounting for the intervals (v_n, v_{n-1}, ..., v_{i+1})    212
v_{i+1}    all the remaining time in [0, t] after accounting for the intervals (v_n, v_{n-1}, ..., v_i)    210
v_i'    v_i modified by the index i    211
v_i''    v_i modified by the index i    211
v_u    (v_n, ..., v_{i+1}, v_{i-1}, ..., v_0)    211
v_u'    (v_n, ..., v_i, v_{i-1}, v_{i-2}, ..., v_0)    211
W    vector of cube function weights    116
[W_jknh]    array representation of a discrete function    128
w_k    weight of cube function k    116
[w_k] : p    vector of cube function weights    116
w_i    w^i for v    212
w_i'    w^i for v    211
w_u    the vector of w_i corresponding to v_u    212
w_u'    the vector of w_i corresponding to v_u    211
X    base model    41
X    base model    58
{X^0, X^1, ..., X^m}    model hierarchy    65
X_b^i    level-i basic process    63
X_b^i(ω)    sample function of the process X_b^i (the ith-level base model)    63
X_{b,t}^i    level-i basic random variable    63
X_c^i    level-i composite process    63
X_c^i(ω)    sample function of the process X_c^i (the ith-level composite model)    64
X_{c,t}^i    level-i composite random variable    63
X^i    level-i model    62
X^i    level-i model    64
(C_j)    lattice exponentiation    108
π_i    the projection on i    86
[x_ij] : r x s    an (r+1) x (s+1) dimensioned matrix    82
[x_ij] : r x s    an (r+1) x (s+1) dimensioned matrix    82
[x_i]    an s+1 dimensioned vector    81
[x_0, x_1, ..., x_s]    an s+1 dimensioned vector    81
[x_i] : s    an s+1 dimensioned vector    81
X_t    random variable of the base model    41
(X, γ)    performability model    42
X(ω)    sample function of the process X (the base model)    58
Y    system performance    59
Y_SIM    in the example of Section 3.3.9, the performance of the simplex computer    53
Y_TMR    in the example of Section 3.3.9, the performance of the triple modular redundant computer    54
Δ_h^d    array representation of the hth partial cube function at depth d    145
[(Δ_h^d)_jk] : (r + p + 1) x s    array representation of the hth partial cube function at depth d    145
(Δ_h^0) = U_i    full cube function    145
Φ    null array product    112
*    the null array    91
γ'    tentative capability function    45
γ    capability function    41
γ    capability function    57
γ    the capability function    100
Γ0    array representation of γ0    135
Γ̄0    array representation of γ̄0    135
Γi    array representation of γi    138
γ^i    the level-i based capability function    99
γ^{-1}    inverse of the capability function γ    42
[Γi]    array representation of γ̄i    138
γ_i    level-i-based capability function    50
γ_i^{-1}    inverse of the level-i-based capability function    51
γ'    function mapping an inconsistent trajectory to Φ    47
K0    array representation of κ0    134
[K̄0] : (r + p + 1) x s x p    array representation of κ̄0    134
K_{j0k0}    array representation of κ_{j0k0}(u)    134
u_ω^i    sample function of the process X_c^i (the ith-level composite model)    64
λ    in the example of Section 3.3.9, the failure rate of each computer    53
λ_i    parameter for state u_i    242
(Ω, E, P)    basic model probability space    58
Ω    basic model sample space    58
(Ω_i, E_i, P_i)    ith-level basic model probability space    63
Ω_i    ith-level basic model sample space    63
∅    the empty set    91
φ    structure function    27
φ    structure function    30
ρ_i    number of level-i basic components    83
π_i(A)    projection of A on i    86
ζ_j(κ)    jth component function at observation k    87
π_jk(κ)    projection of κ on jk    86
A·B    direct product of set A by set B    26
⊥    place-holding state used to extend those trajectories u_b ∈ U' that are not defined over the entire set T    46
A^C    set complementation
[f(u)] : N    value vector of the discrete function f    102
[f(u)]_N    value vector of the discrete function f    102
a > b    a can be any non-negative (or zero) number exceeding b    220
A·B    the direct product of set A by set B
a − b    a − b if a ≥ b and 0 otherwise    231
A − B    set difference, i.e., the set containing those elements of set A that are not in set B
*    the full set (or universe)    91
|T|    the size of the set T    59
A × B    Cartesian product of sets A and B, i.e., {(a, b) | a ∈ A, b ∈ B}    64
× U_i    Cartesian product: U_0 × U_1 × ... × U_s    97
σ(F × G)    smallest σ-algebra containing the product of the σ-algebras: F × G = {A × B | A ∈ F, B ∈ G}    64
U    finite set of non-negative integers    96

LIST OF DEFINITIONS

In the text, defined terms are usually denoted by italics.

absorbing state, 188
accomplishment level, 39
accomplishment levels, 98
accomplishment set, 39
accomplishment set, 98
antiatom, 113
anticube function, 112
array product, 89
atom, 112
base model, 41
base model, 58
basic model trajectory space, 99
basic state set, 63
basic trajectory, 83
basic trajectory at level-i, 82
basic variable, 82
boolean expressions, 104
boolean functions, 104
Boolean lattice, 103
boolean polynomials, 104
bottom-level model, 48
canonical disjunctive form, 120
capability function, 41
capability function, 57
capability function, 100
capability-based system description function, 27
chain, 105
classical performance measures, 24
classical reliability measures, 24
coherent structure function, 30
complement of a lattice element, 103
complement of an array product, 94
complement of the exponent of a cube function, 124
complemented lattice, 103
component, 99
component trajectory, 84
component trajectory of the jth basic component, 85
component trajectory of the jth composite component, 85
components, 82
composite state set, 63
composite trajectory, 83
composite trajectory at level-i, 82
composite variable, 82
composition of two lattice functions, 122
conjunction, 105
conjunctive normal form of a discrete function f, 115
consistent trajectory set, 47
construction, 2
cost, 151
CPV methodology, 66
cube function, 111
degradable computing systems, 7
depth, 144
direct product of set A by set B, 26
discrete function, 96
disjunction, 105
disjunctive normal form of a discrete function f, 115
DPV methodology, 66
empty anticube, 113
empty cube, 112
empty set, 91
equivalence of two lattice expressions, 120
equivalent lattice polynomials, 103
evaluated, 3
events, 57
exponentiation of a cube function, 122
exponentiation of a lattice exponentiation, 122
exponentiation of a lattice expression, 122
exponentiation of an anti-cube function, 122
exponentiation of lattice exponentiation, 121
free Boolean lattice on r+1 generators, 104
free distributive lattice on r+1 generators, 103
full set, 91
general discrete function, 97
i-resolvability, 197
implicant, 112
implicate, 113
inconsistent trajectory set, 47
integer function, 96
interlevel translation, 50
interlevel translation, 65
intersection tree, 145
join, 105
join-irreducible element of a lattice, 112
jth component function at observation k, 87
Karnaugh chart, 103
lattice, 105
lattice exponentiation, 108
lattice expression, 108
lattice polynomial, 103
level-i based capability function, 99
level-i-based capability function, 50
level-i basic model trajectory space, 99
level-i basic process, 63
level-i basic state space, 82
level-i basic state trajectory, 63
level-i basic trajectory space, 49
level-i basic trajectory space, 82
level-i basic trajectory space, 98
level-i basic trajectory space of component p during phase k, 99
level-i basic utilization period, 82
level-i composite process, 63
level-i composite state space, 82
level-i composite state trajectory, 64
level-i composite trajectory space, 49
level-i composite trajectory space, 82
level-i composite trajectory space, 98
level-i composite trajectory space of component p during phase k, 99
level-i composite utilization period, 82
level-i interlevel translation, 100
level-i model, 62
level-i parameter set, 62
level-i phase, 62
level-i state space, 62
level-i trajectory space, 48
level-i trajectory space, 64
level-i trajectory space, 82
level-i trajectory space, 98
maximal disjoint cube function, 153
meet, 105
meet-irreducible element of a lattice, 113
minterm, 118
model hierarchy, 48
model hierarchy, 65
mutually disjoint, 208
negation f̄ of a discrete function f, 113
non-structured based system description function, 27
normalized throughput rate, 77
null array, 91
null set, 91
parameter set, 40
parametrically equivalent states, 242
performability, 40
performability, 57
performability model, 42
performance level, 39
performance set, 25
performance set, 39
performance-oriented measures, 25
phase, 31
phase, 85
phase, 99
probability measure, 57
projection of A on i, 86
projection of function f, 98
projection on i, 86
ragged array, 127
reduction, 153
reliability, 40
sample space, 57
set mapping, 57
simplex system, 53
solution, 2
solving, 2
state space, 40
state trajectory, 58
structure function, 27
structure function, 30
structure set, 25
structure-based measures, 25
structure-based system description function, 27
system description function, 26
system description sets, 25
system performance, 59
tentative accomplishment set, 44
tentative capability function, 45
tentative state space, 46
tentative trajectory space, 45
throughput rate, 77
time base, 25
TMR, 53
top-level model, 48
totally ordered set, 25
trajectory, 26
trajectory, 40
trajectory, 98
trajectory observation, 84
trajectory observation at time t, 85
trajectory set, 87
trajectory sets, 70
trajectory space, 40
trajectory space, 98
triple modular redundancy, 53
user, 37
value vector of f, 102
Veitch chart, 103
weight of a cube, 111
weight of an anticube, 113
well-formed expression, 108

LIST OF TABLES

Table
2.1 A brief overview of many important evaluation measures and when they were first applied to computing systems .......... 10
2.2 An outline of the various techniques that have been proposed for relating the system structure to the evaluation measure .......... 15
2.3 Techniques proposed for describing the stochastic nature of a system .......... 18
4.1 Dual-dual computer system results for four solution techniques .......... 174

LIST OF FIGURES

Figure
3.1 The model hierarchy .......... 49
3.2 P_TMR(B_k) and P_SIM(B_k) as functions of k .......... 55
3.3 Block diagram for METAPHOR .......... 75
4.1 Lattice of the functions {f : {0, 1}² → {0, 1}} .......... 106
4.2 Lattice of the functions {f : {0, 1}² → {0, 1}} .......... 108
4.3 Example of a switching function with multiple maximal disjoint cube function representations .......... 154
4.4 Simple reliability network .......... 171
5.1 Transition diagram for the Markov model of the example of Section 5.2.2 .......... 181
5.2 Decomposition of (V) .......... 194
5.3 Markov state-transition-rate diagram for the case n = 1 .......... 237
5.4 Markov state-transition-rate diagram for the case n = 2 .......... 239
5.5 Markov state-transition-rate diagram for the case n = 3 .......... 241
5.6 Performability plot for the multiprocessor/air conditioner example of Section 5.3.11.2 .......... 255

LIST OF KEY PROPOSITIONS Proposition Theorem 4.1 (Discrete function representation).......... 119 Theorem 4.2 (Shannon"s first expansion theorem).......... 120 Theorem 5.1 (Necessary and sufficient conditions for X to be a nonrecoverable process).......... 185 Theorem 5.2 (Necessary and sufficient conditions for X to be a nonrecoverable process).......... 187 Theorem 5.3 (Sufficient and necessary condition for a process to be acyclic).......... 188 Theorem 5.4 (Every finite-state, acyclic process must have at least one absorbing state).......... 188 Theorem 5.5 Nonrecoverable models are acyclic.......... 189 Theorem 5.6 (Acyclic nature of the base models of nonrecoverable models).......... 189 Theorem 5.7 (i-resolvability and bounds of u).......... 199 Theorem 5.8 (Recursive definition of i-resolvability).......... 200 Theorem 5.9 (Conditions for vj = uw).......... 201 Theorem 5.10 If vu is i-resolvable then vu is not j-resolvable for j y4 i......... 201 Theorem 5.11 vu is (n+l)-resolvable if and only if y > r(u,)t........... 202 Theorem 5.12 (maximizing ur,(v, v,..., v,+l)).......... 202 Theorem 5.13 Every v. such that 7.(v) < y is i-resolvable for some i, where n+l>i>0........... 203 Theorem 5.14 (i for which there exist i-resolvable v ).......... 203 xxxiv

Theorem 5.15  (Some conditions for which there does not exist any v_u such that v_u is 0-resolvable) .......... 205
Lemma 5.16  The C_i are mutually disjoint .......... 208
Lemma 5.17  The C_i cover C .......... 209
Theorem 5.18  The C_i partition C .......... 209
Theorem 5.19  (Sufficient and necessary conditions for v_u to be i-resolvable) .......... 212
Corollary 5.20  (Sufficient and necessary conditions for v_u to be i-resolvable) .......... 214
Lemma 5.21  (Sufficient and necessary conditions on the v_i for γ(v) < y) .......... 218
Claim 5.22  (Conditions on v_n, v_{n-1}, ..., v_{i+1} for v_{i+1} to satisfy Eq. 5.105) .......... 223
Lemma 5.23  (Sufficient and necessary conditions on the v_i for γ(v) > y) .......... 224
Theorem 5.24  (Sufficient and necessary conditions for v_u to be i-resolvable) .......... 225
Theorem 5.25  (Characterization of C_j) .......... 228
Theorem 5.26  (Compact characterization of C_j) .......... 232
Lemma 5.27  (Relationship between C_j and …) .......... 244
Theorem 5.28  (Recursive representation of F_Y) .......... 245

LIST OF ALGORITHMS

Algorithm
Procedure 3.1  (Procedure for obtaining the performability of a system) .......... 42
Procedure 3.2  (Procedure for constructing a performability model) .......... 44
Algorithm 4.1  (Algorithm for constructing γ⁻¹) .......... 141
Algorithm 4.2  (Basic algorithm for constructing γ⁻¹) .......... 143
Algorithm 4.3  (Recursive algorithm for constructing and evaluating an intersection tree) .......... 147
Algorithm 4.4  (Algorithm for determining a set of maximal disjoint cube functions) .......... 155
Algorithm 4.5  (Algorithm for detecting whether two cube functions can be compressed) .......... 156
Algorithm 4.6  (Algorithm for compressing two cube functions) .......... 157
Algorithm 4.7  (Algorithm for obtaining the accomplishment level) .......... 159
Algorithm 4.8  (Algorithm for obtaining the specification for level-0) .......... 159
Algorithm 4.9  (Algorithm for obtaining the specification for level-1) .......... 159
Algorithm 4.10  (Algorithm for obtaining γ_0 ∘ κ) .......... 159
Algorithm 4.11  (Algorithm for obtaining c_i) .......... 160
Algorithm 4.12  (Algorithm for checking that γ_0(u) is unique) .......... 162
Algorithm 4.13  (Algorithm for checking that γ_1(u) is unique) .......... 164
Algorithm 4.14  (Algorithm for checking that (γ_0 ∘ κ)(u) is unique) .......... 164
Algorithm 4.15  (Algorithm for checking that γ_0 or κ is total) .......... 166
Algorithm 4.16  (Algorithm for complementing a set of cube functions) .......... 166

Algorithm 4.17  (Algorithm for complementing a cube function) .......... 167
Algorithm 5.1  (Determining the probability distribution function F_Y) .......... 193
Algorithm 5.2  (Determining the C_y) .......... 209

LIST OF APPENDICES

Appendix
A  Manual entry for metaphor .......... 262
B  Manual entry for meta_discrete .......... 263
C  Manual entry for meta_continuous .......... 267
D  Structure of meta_discrete .......... 270
E  Structure of meta_continuous .......... 279
F  Session for the simple reliability network example .......... 281
G  Session for the simple air transport example .......... 287
H  Input data for the SIFT computer example .......... 299
I  Scenario for the dual-dual computer example .......... 308
J  Input data for the SIFT computer example .......... 312
K  Solution of a simple 3-state, acyclic, nonrecoverable process .......... 372
L  Solution of a simple 4-state, acyclic, nonrecoverable process .......... 382
M  Recursively derived solution of a simple 3-state, acyclic, nonrecoverable process .......... 405
N  Recursively derived solution of a simple 4-state, acyclic, nonrecoverable process .......... 409

CHAPTER 1
INTRODUCTION

1.1. Problem Statement

A principal goal of computing system evaluation is the measurement of the system's ability to perform. Reflecting that ability are several descriptors commonly used to characterize figures of merit for a system, including performance [1]-[4], reliability [5]-[7], and effectiveness [8]-[10]. (See [11, Section 5] for a bibliography of recently published literature on formal system evaluation.) The characterizations listed are often studied separately, by stipulating that the other descriptors are invariant. For example, reliability evaluations typically assume a system that has a constant performance rate. Similarly, performance evaluations often, though not always, suppose a system that never fails.

A performance-reliability disjunction is suitable for systems whose ability to perform is either total or nonexistent when faults are present. However, to evaluate those classes of computing systems whose performance is "degradable," features of both performance and reliability must be blended. Recently, several combined performance and reliability measures for dealing with degradable systems have been suggested [12]-[28]. In particular, Meyer [29] has introduced a universal modeling framework called performability, which is well suited to the measurement of combined performance and reliability.
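Informally (Meyer [29] gives the precise measure-theoretic formulation), if Y_S denotes the performance variable of a system S, taking values in a set A of accomplishment levels, then the performability of S is the probability measure of Y_S:

```latex
% Performability as the probability measure of the performance variable.
\[
  \operatorname{perf}_S(B) \;=\; \Pr\{\, Y_S \in B \,\},
  \qquad B \subseteq A \ \text{measurable.}
\]
% Classical reliability is the special case in which $A = \{0,1\}$
% (success/failure) and $\operatorname{perf}_S(\{1\})$ is the reliability;
% a classical performance measure is recovered when the system is
% assumed never to fail.
```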

To employ such measures effectively within system evaluations, we require proper generalizations of both the discrete, event-oriented models utilized in reliability evaluation (e.g., fault trees) and the analytic models and solution methods employed in performance evaluation (e.g., Markov models). The formal description of performability [29] delineates in broad terms the quantities that the analyst requires, and it suggests a basic method for arriving at those quantities. However, to be effective, a framework such as performability requires tractable techniques of solution. As an analogy, the formal characterization of differential equations, although useful for describing physical phenomena, would not be nearly so useful to engineers without techniques for solving such equations. In the area of reliability, almost all the literature of the past twenty years has concerned not the definition and nature of reliability, but its acquisition and application, i.e., the modeling, calculation, and use of a system's reliability. The thrust of this dissertation concerns the modeling, calculation, and use of a system's performability.

Often, we shall speak of "solving" a performability model, or of the "solution" of a performability model. By "solving" a performability model is meant the following: given a general performability model as discussed in Chapter 3 [FUNDAMENTAL CONCEPTS AND RESULTS], one cannot always directly calculate the system's performability. Sometimes, by suitable manipulations, one can obtain from the original model a description that allows direct calculation of the performability. For our purposes, such derived descriptions will usually take the form of an analytic equation, an integral equation, or a matrix of such equations. The process of carrying out these manipulations on the model will be called solving, while the results of solving will be called the solution.
"Solving" can also involve solving differential or integral equations, although in this dissertation, we shall often consider a model "solved" when an integral equation is obtained. Construction of a model refers to the process of obtaining a model; a model cannot be solved until it has been constructed. Sometimes, a model may be simultaneously constructed and solved. Indeed, the methodologies proposed in this dissertation allow some degree of simultaneous construction and solution. A solution can

be evaluated by evaluating the equations comprising the solution. The study of modeling systems by means of differential equations, the analogy of the preceding paragraph, has almost identical terminology. In summary, this dissertation addresses the questions of modeling a system to reflect its performability and solving that model.

1.2. Thesis

The thesis of this study is that tractable techniques for solving certain classes of performability models exist and that these techniques can be applied to the analysis of degradable computing systems.

1.3. Research Objectives

Some introductory work (Ballance, Furchtgott, Meyer, and Wu) [30]-[33] has been done concerning performability models and solutions for certain restricted classes of systems.¹ The objective of this research is to extend that initial activity by continuing the development of techniques for analyzing degradable computing system performability. In particular, we seek the development of methods for solving performability models. More specifically, a) we shall generalize the analysis of performability models having discrete performance variables, and b) we shall extend the development of methods for solving performability models having continuous performance variables.

¹This thesis addresses the specific problem of solving certain classes of performability models (see Section 1.2 [Thesis]). The works cited above are the major references addressing the issue of solving specific performability models. More generally, the following list of available literature (excluding internal memoranda) more completely traces the development of performability at The University of Michigan (Ballance, Furchtgott, Meyer, Movaghar, and Wu); see Chapters 2 [BACKGROUND AND LITERATURE SURVEY] and 3 [FUNDAMENTAL CONCEPTS AND RESULTS]: [11], [29]-[64].

With regard to a):

i) We shall describe a calculus for relating low-level structural behavior to high-level system performance.
ii) We shall propose algorithms and heuristics for efficiently performing the calculations associated with the calculus.
iii) We shall discuss a computer package we have written that implements those algorithms.
iv) Employing that computer package, we shall analyze example systems.

In the case of b):

i) We shall develop the problem more generally in the context of reward models and nonrecoverable processes [55], [59].
ii) We shall derive a general solution for the performability of a system modeled by a finite-state, acyclic, nonrecoverable process. This solution takes the form of an integral equation. The solution will be illustrated.
iii) We shall derive a recursive form of the solution of ii). In addition, we will present specific examples of the recursive solution.
iv) We shall discuss a computer package we have written that implements the solution.
v) Using the above tools, we shall analyze a nontrivial example.
vi) We shall begin consideration of still more general models.

These developments constitute a significant extension of performability theory and, more broadly, of performance and reliability theory for degradable systems. Although not within the scope of this thesis, the analysis will support a methodology for determining optimal system configurations, phasing strategies, and component values for degradable systems. Finally, as an aside, among the issues which this thesis does not attempt to address are the measure-theoretic basis of performability (the reader is referred to

Meyer [29] and Wu [59], [60]) and the development of new model types such as "extended" queueing networks or stochastic Petri nets (see, for example, Movaghar [65] and Meyer [64], [66]).

1.4. Dissertation Organization

The remainder of this dissertation is organized as follows. The next chapter motivates the research in detail and presents a short review of previous work on combining reliability and performance. Such a review is important to put the present work in perspective: much previous work has been done in the general area of computer evaluation, less has been done in the area of degradable computing system evaluation, and little has been done in the study of performability evaluation. Chapter 3 [FUNDAMENTAL CONCEPTS AND RESULTS] presents an overview, in non-technical language, of the fundamental concepts and results of performability and of the dissertation. Chapter 4 [DISCRETE PERFORMANCE VARIABLE (DPV) METHODOLOGY] addresses evaluations in which the performance of the system can be described by using a finite number of values, and Chapter 5 [CONTINUOUS PERFORMANCE VARIABLE (CPV) METHODOLOGY] investigates the case in which a continuum of values is required. Finally, Chapter 6 [CONCLUSIONS] summarizes the thesis results and suggests future research.
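The flavor of the discrete performance variable case can be previewed with a small sketch. The component names, failure probabilities, and capability function below are hypothetical illustrations, not taken from the dissertation; the actual DPV methodology of Chapter 4 manipulates arrays of finite-domain variables rather than enumerating trajectories directly.

```python
from itertools import product

# Hypothetical two-component, one-phase system.  Each component is
# either working (1) or failed (0); a trajectory is the tuple of
# component states, assumed statistically independent here.
p_working = {"proc": 0.95, "mem": 0.99}

def capability(traj):
    """Map a low-level trajectory to a high-level accomplishment level:
    level 2 = full service, level 1 = degraded service, level 0 = failure."""
    proc, mem = traj
    if proc and mem:
        return 2
    if mem:          # processor lost, memory intact: degraded service
        return 1
    return 0         # memory lost: no service

def dpv_performability():
    """Probability of each accomplishment level (the DPV performability)."""
    perf = {0: 0.0, 1: 0.0, 2: 0.0}
    for traj in product([0, 1], repeat=2):
        proc, mem = traj
        prob = (p_working["proc"] if proc else 1 - p_working["proc"]) * \
               (p_working["mem"] if mem else 1 - p_working["mem"])
        perf[capability(traj)] += prob
    return perf

perf = dpv_performability()
print(perf)  # the level probabilities sum to 1
```

Summing the level probabilities checks that the trajectory sets associated with the accomplishment levels partition the trajectory space, which is exactly the role of the inverse capability function γ⁻¹ in Chapter 4.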

CHAPTER 2
BACKGROUND AND LITERATURE SURVEY

2.1. Introduction

To understand the need for combined measures such as performability, consider the two major techniques usually employed for computer system evaluation: performance analysis and reliability analysis. Classical performance analysis presumes a system and environment whose structure and properties are for all time either constant (e.g., an M/M/1 queueing system) or recurrent (for instance, a system continually cycling through failure and repair). Thus, most models assume that a system attains an average steady-state behavior and that if several "copies" of the system were built, all would have identical performance. Reliability analysis, on the other hand, typically deals with determining the probability that a system will perform "successfully," where the notion of "success" is usually based on the internal structure of the system. Such an evaluation is usually concerned not with how well the system performs its function over the utilization period, but with whether the system performs above a given threshold during that period.

2.2. Degradable Systems

With the above meanings, performance and reliability analyses are not separately applicable when analyzing systems in the increasingly important class of degradable computing systems. A system is in this class if the total performance (or worth) of the system can vary from utilization to utilization because of changes in one of the following: 1) the structure of the system, 2) the internal state of the system, and 3) the environment. These variations are generally modeled by stochastic processes, and the length of the utilization period will usually, but not necessarily, be finite.

Most of the degradable systems of interest to us are multi-processor computing systems used in applications requiring continuous availability (i.e., no downtime) during a finite interval of time. Examples of such applications include real-time control computers in aircraft, spacecraft, and nuclear power plants. Such systems can also be utilized in roles where availability is important but not critical, for example, in communications, manufacturing, business, banking, and airline reservation systems. Typically, degradable multi-processor systems operate as follows: when a fault is detected and isolated in a hardware or software module, the module is discarded. The performance of the system may decrease, since certain non-critical computations may be performed more slowly or not at all. Such systems thus trade decreased computing power for additional availability. Specific examples of degradable computers include the SIFT (Software Implemented Fault Tolerance) computer [67], [68] and the FTMP (Fault-Tolerant Multi-Processor) computer [69], [70]. Other types of degradable systems include those in which performance, and, in certain cases, reliability, vary as a function of time, workload, and state (e.g., timesharing computing systems and distributed computing systems). Degradable systems also appear in disciplines other than computer engineering. As illustrations: (1) A set of machines in a small factory.
As machines break down, the ability of the factory to produce a product will be decreased; the loss of key machines may bring the entire factory to a stop, while the loss of other machines may reduce throughput, increase

costs, etc. One concern is to maximize the factory's output while minimizing the number of key machines and hence the possibility of a complete factory shutdown. (2) An automobile with pneumatic tires. As air leaks from the tires, the driving efficiency (in terms of miles per gallon of gasoline) of the car decreases. Performance/reliability tradeoffs may exist if a tire with better initial efficiency loses air faster than a tire with lower initial efficiency. Note that the performance degradation is continuous rather than discrete as in the previous example. (3) An athletic team. The performance of the team degrades as the athletes grow tired or are injured. Injuries are more apt to occur to a tired athlete than to a rested one. One could conceptualize such problems as maximizing the performance of the team for the present game while minimizing injuries; injuries would be of concern since they would affect the team's performance in future games. Note that the performance degradation here is both discrete (an athlete is injured) and continuous (an athlete tires).

A common characteristic of degradable systems is that performance and reliability present conflicting requirements, i.e., for a fixed amount of resources, a configuration that increases system reliability generally decreases system performance, and vice versa. Thus, degradable system design exhibits needs similar to those of system synthesis in which performance is to be maximized under cost constraints (e.g., see Kachhal and Arora [71] and Trivedi and Sigmon [72]). For instance, we may wish to maximize performance under reliability constraints or maximize reliability under performance constraints. The ability to examine performance/reliability tradeoffs is the key to good engineering design of degradable systems.

2.3. The Evaluation Procedure

Most evaluations of systems such as degradable computing systems use a basic, top-down, three-step procedure.
Each of the three steps presents significant research problems. In this section, the steps of the procedure are identified, previous work by other authors in each of the three steps is discussed, and the results of this dissertation research are placed in the context of the evaluation procedure. To evaluate a system, the measure to be used must be determined. In particular, the metric must be defined and its properties described. Once the metric has been formalized, techniques for evaluating specific systems must be delineated. For the systems of interest, namely degradable computing systems, these techniques typically consist of: first, describing how the structure or composition of the system affects the chosen metric (the description may

have stochastic components), and second, characterizing how the stochastic nature of the system affects the system's structure. Based on those observations, the following procedure is fundamental to evaluating degradable systems: 1) Define the measure to be employed, 2) Specify how the measure reflects the structure of the system, and 3) Characterize the probabilistic nature of the system's structure.

Tables 2.1, 2.2, and 2.3 summarize¹ work to date on each of the above steps. Table 2.1 presents a brief overview of many important evaluation measures and notes when they were first applied to computing systems. Table 2.2 gives an outline of the various techniques that have been proposed for relating the system structure to the evaluation measure. Finally, Table 2.3 presents the techniques proposed for describing the stochastic nature of the system. Both Tables 2.2 and 2.3 also show relevant software packages. Because of the amount of information conveyed, all three tables are necessarily succinct. Most of the pertinent work to date with which we are familiar has been included; as the entries come closer to the topic of this dissertation, the tables become more complete. Sections 2.4 [A Taxonomy of Measures and Models of Degradable Systems (Table 1)], 2.5 [Description of the Structure of Degradable Systems (Table 2)], and 2.6 [Determining the Probability Measure of System Performance (Table 3)] discuss Tables 1, 2, and 3. Sections 2.7 [Structure-Based Models], 2.8 [Other Reliability Oriented Models], and 2.9 [Performance Oriented Models] treat various classes of models.

¹For an extensive, though unannotated, bibliography of work in the area of formal computing system evaluation (for the period 1977-1981), see Chapter 5 of Meyer, Furchtgott, and Movaghar [11].
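As a preview of how the three steps combine in a continuous performance variable setting, the following Monte Carlo sketch evaluates a toy two-processor degradable system. The failure rate, reward rates, and mission time are hypothetical, and Chapter 5 derives exact integral-equation solutions for such finite-state, acyclic, nonrecoverable reward models rather than simulating them.

```python
import random

# Step 1: the measure is performability, here P(Y > y), where Y is the
# accumulated reward (useful computation) over a mission of length T.
# Step 2: structure -> measure: a two-processor system earns reward at
# rate 2 with both processors up, rate 1 with one up, and rate 0 with none.
# Step 3: stochastic structure: each processor fails permanently at a
# hypothetical constant rate LAM (a nonrecoverable, acyclic model).
T, LAM = 10.0, 0.05
REWARD = {2: 2.0, 1: 1.0, 0: 0.0}

def sample_reward(rng):
    """One sampled trajectory: accumulate reward until the mission ends."""
    fail_times = sorted(rng.expovariate(LAM) for _ in range(2))
    y, t, up = 0.0, 0.0, 2
    for ft in fail_times:
        if ft >= T:
            break
        y += REWARD[up] * (ft - t)   # reward earned before this failure
        t, up = ft, up - 1           # one fewer processor from now on
    return y + REWARD[up] * (T - t)  # reward earned after the last failure

def performability(y, n=100_000, seed=1):
    """Monte Carlo estimate of P(Y > y)."""
    rng = random.Random(seed)
    return sum(sample_reward(rng) > y for _ in range(n)) / n

print(performability(15.0))  # P(Y > 15) under the hypothetical rates
```

The estimate necessarily decreases as the threshold y grows and vanishes for y above the maximum attainable reward (here 2T = 20); the exact solutions of Chapter 5 replace this sampling with integration over the failure-time distributions.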

Table 2.1 - A brief overview of many important evaluation measures and when they were first applied to computing systems. For each measure, the table records the model components (time base, structure set, performance set), the system-description function, the probability measure of system performance, the originating references, and the first application to analyzing computing systems. The measures surveyed:

Classical performance measures (e.g., throughput, response time, turnaround time): see [2], [3], [73]-[76]
Classical reliability measures (also derived measures such as availability and mean time between failures, MTBF): von Neumann 1956 [77]; Moore and Shannon 1956 [78]; Birnbaum, Esary, and Saunders 1961 [79]; for an overview, Barlow and Proschan [7]; for the reliability of degradable and fault-tolerant systems, Bouricius, Carter, and Schneider 1969 [80] and Bouricius, Carter, Jessep, Schneider, and Wadia 1971 [81]
"Cannibalization" reliability: Hirsch, Meisner, and Boll 1968 [82]
Reliability of "nondichotomic" structures: Postelnicu 1970 [83]
Reliability of multistate systems: Murchland 1975 [84]
Pseudo-reliability: Tillman, Lie, and Hwang 1976 [85]
Measures oriented toward degradable systems (measures derived from reliability, e.g., availability, mean time between crashes, average processing power, proportion of time spent in degraded mode): Losq 1977 [16]
Operational efficiency: Troy 1977 [15]
Job-related reliability: Mine and Hatayama 1979 [18]
User-oriented reliability (apparent capacity and expected elapsed time required to correctly execute a given program): Castillo and Siewiorek 1980 [21]
Dependability: Laprie and Medhaffer-Kanoun 1980 [86]
An expected performance measure: Chou and Abraham 1980 [20]
Workload-dependent reliability measure: Castillo and Siewiorek 1981 [23], [87]
Performance reliability (also derived measures such as performance availability, response-time reliability, response-time availability, throughput reliability): Huslende 1981 [24]
Phased mission reliability: Winokur and Goldstein 1969 [88]
Computation-based reliability: Meyer 1976 [12]
Reliability (structure function not explicitly stated): Borgerson and Freitas 1975 [13]
Performability, discrete performance variable: Furchtgott 1977 [89]; Meyer, Ballance, Furchtgott, and Wu 1977 [36]
Performability, continuous performance variable: Meyer 1978, Meyer 1980 [51]
Computation capacity based measures (measures derived from reliability, e.g., computation reliability, mean computation before failure, computation threshold, computation availability, capacity threshold): Beaudry 1977 [17], [90]
Workload model performance measures (expected system throughput, expected number of transitions lost, throughput availability, lost throughput): Gay [91]; Gay and Ketelsen 1979 [19]
Expected reward (yield rates y_ij while state i is occupied with next state j, bonus reward b_ij on the transition; the solution techniques also allow a discount rate): De Souza 1980 [22]
Probability of the performance of an assignment: Cherkesov 1981 [92]
Cost index (probability of dynamic failure, mean cost, variance of cost, modified cost): Krishna and Shin 1982 [26]
Performance-related dependability (life-cycle reliability, mean capacity cumulated before system down): Arlat and Laprie 1983 [27]
Performance measures on a (measurable) performance process σ(t): Cai and Adams 1983 [28]

KEY: U is a random variable taking values in STRUC^OBSER.
U_i = time in state i = ∫ I_{U(t)}(i) dt, where I_{U(t)}(i) = 1 if U(t) = i and 0 otherwise.
U_ij = time in state i when the next state is j = ∫ I_{U(t)}(i) I_{N(t)}(j) dt, where N(t) = the next state that U enters after time t.
N_ij = number of transitions from state i to state j during OBSER.

[Table 2.2 - An outline of the various techniques that have been proposed for relating the system structure to the evaluation measure. Columns: Measure Name; General Approach; Specific Technique (the method of representing or obtaining the function g or g⁻¹(C), C ⊆ PERF); Common Assumptions; References; Software Tools (see Table 2.1), with Program Name. The body of the table is not reproduced here.]

[Table 2.3 - Techniques proposed for describing the stochastic nature of a system, i.e., the method of calculating the probability measure of system performance. Columns: Measure Name; General Approach; Specific Technique; Common Assumptions; References; Software Tools (see Tables 2.1 and 2.2), with Program Name. The body of the table is not reproduced here.
KEY: * = either not applicable or the author is not aware of any suitable entries; s-i = failures of components are statistically independent; t-i = time-invariant statistical parameters.]

2.4. A Taxonomy of Measures and Models of Degradable Systems (Table 1)

2.4.1. Introduction

To evaluate degradable systems, investigators have been examining ways of appropriately combining performance and reliability issues within a single measure. Many measures discussed in the evaluation literature can be employed to describe degradable systems. However, because of the deficiencies of most of the general-purpose measures, many researchers have suggested a range of special-purpose measures expressly to handle degradable systems. Examination of these measures reveals a distinctive organization common to all of them. This section proposes a taxonomy of the measures and models used to analyze degradable systems. In addition to being useful for describing previous work in the area of degradable system evaluation, the taxonomy also provides a means of placing the work of this thesis in perspective with related research.

Measures and models are innately interrelated, and therefore separating the measure from the model is often difficult: to analyze a system using a particular measure, the analyst must employ a model that reflects the given measure. Typically, there is at least one class of models that embodies each measure. The usual nomenclature is to call the models by the same name as the measure, e.g., models reflecting the measure "reliability" are "reliability models." Since the models place real limitations on the applicability of the measures, the classification of measures propounded in this section is based on the models associated with the measures rather than on the most general possible definition of each measure. (Indeed, if each measure were defined to extreme generality, we would obtain a single, universal, but difficult-to-employ measure.) Tradeoffs exist for all the measures discussed. Typically, the qualities of a) ease of interpreting, modeling, and solving, and b) generality

are mutually conflicting, i.e., a measure that is advantageous regarding one of the above qualities may be poor with respect to the other. The headings of Table 2.1 summarize the material treated in this section.

2.4.2. The Classifications (Column 1)

To date, the measures proposed to describe degradable systems can be classified according to four categories:

1) Classical performance measures
2) Classical reliability measures
3) Structure-based measures, and
4) Performance-oriented measures.

Column 1 of Table 2.1 lists these four categories.² For notational consistency, all countable sets in Table 2.1 have enumerations beginning at 0, while all real line segments are denoted by [0,1]. Also, a distinction is made between such finite sets as {0,1}, {0,1}^N, {0,...,n}, and {0,...,n}^N. Although such sets are equivalent in the sense of being finite, they differ in the kinds of techniques they allow for modeling.

The above categories are based on characteristics of the models that are used to obtain the measures. These characteristics will be amplified in Section 2.4.3 [Components of the Models (Columns 3-5)]. Briefly, Categories 1 and 2 consist of the classical performance measures and classical reliability measures. These measures will not be discussed in depth within this dissertation. The amount of literature associated with these categories is huge (see, e.g., [2], [3], [73]), and, as argued in Section 2.1 [Introduction], these measures are generally not sufficient for describing degradable systems. They are included in Tables 2.1-2.3 for completeness and to provide the reader with a familiar basis for calibrating the tables.

²The second column lists specific measures. The next five columns (columns 3-7) deal with the corresponding model, the eighth column provides references, and the ninth column notes information regarding the first application of the measure to analyzing computing systems. Columns 3-5 will be explored in more detail in Section 2.4.3 [Components of the Models (Columns 3-5)].

Category 3 measures are extensions of classical reliability measures. These measures are characterized by their corresponding models being based on a "structure function" (to be defined in Section 2.7 [Structure-Based Models]); hence measures in this category are called structure-based measures. Finally, the fourth class contains those measures that share characteristics with classical performance measures. These measures are called performance-oriented measures. Models used to describe these measures are typically multistate Markov reward models.

2.4.3. Components of the Models (Columns 3-5)

As mentioned above, the classification of measures is based largely on the characteristics of the models utilized to describe the measures. This section discusses three major components of the models along with some relationships between the components. Based on these elements, the categories of Section 2.4.2 [The Classifications (Column 1)] can be more formally defined. The components that appear in most of the models are the following sets:

1) the observation set OBSER
2) the structure set STRUC
3) the performance set PERF.

Call these sets the system description sets. There are no intrinsic restrictions on the sets STRUC and PERF; OBSER has the restriction that it be a totally ordered set. Informally, the time base OBSER is a minimal set of observation points, i.e., times at which the state of the system must be sampled to determine the system's performance. For example, if the set OBSER is a singleton set, then the system performance can be determined from a single observation, that is, a "snapshot." On the other hand, if the size of OBSER is larger, then the system performance at any given time cannot be determined just by studying the system at that time; rather, the system performance can be determined only by examining the behavior of the system at all the times in OBSER.
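To make the role of the observation set concrete, the following Python fragment (a hypothetical illustration; the particular sets and the "operational at every observation point" rule are assumptions made here for concreteness, not definitions from the text) represents a trajectory as a mapping from observation points to structural states:

```python
# Hypothetical sketch of the three system description sets.
OBSER = [0.0, 0.5, 1.0]   # observation points (totally ordered)
STRUC = {0, 1, 2, 3}      # structural states, e.g., number of working units
PERF = {0, 1}             # performance levels: 0 = failed, 1 = operational

def snapshot_performance(state):
    """|OBSER| = 1 case: performance follows from a single observation."""
    return 1 if state in {1, 2, 3} else 0

def history_performance(trajectory):
    """|OBSER| > 1 case: performance depends on the state at every
    observation point, e.g., 'operational at all times in OBSER'."""
    return 1 if all(trajectory[t] != 0 for t in OBSER) else 0

u = {0.0: 3, 0.5: 2, 1.0: 1}        # a sample trajectory u: OBSER -> STRUC
print(snapshot_performance(u[1.0]))  # 1
print(history_performance(u))        # 1
```

The second function cannot be computed from any single value u(t); it needs the whole trajectory, which is exactly the distinction the taxonomy draws between snapshot-based and history-based measures.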

The structure set STRUC is a set of descriptors of the physical realization of the system (for the purposes of this nomenclature, STRUC includes the environment). Usually, STRUC is the set of states of the system. The performance set PERF is a set of descriptors of the behavior of the system.

2.4.4. The System Description Function (Columns 6-7)

The fundamental relation between OBSER, STRUC, and PERF is a function³

g: STRUC^OBSER → PERF.     (2.1)

Call g the system description function. There are no inherent constraints on g other than that g must be a well-defined function. The interpretation of Eq. 2.1 is as follows: u ∈ STRUC^OBSER is a function u: OBSER → STRUC that defines the structure q ∈ STRUC of the system at observation time t ∈ OBSER, i.e., u(t) = q. The function u is called a trajectory. The map g then assigns to each function u a value in the performance set. That is, g(u) ∈ PERF is the performance associated with trajectory u.

Finally, there is a probability measure of system performance. Most of these measures are either probability distributions or expected values (means) of PERF (i.e., of the system description function g interpreted as a random variable). The determination of this quantity is the goal when evaluating degradable systems. Note that the probability measure can be dependent on time in a manner unrelated to the observation set OBSER. OBSER merely indicates the number of sample points required to determine the system performance, not the times at which those sample points occur. For instance, suppose OBSER = {0}. Then for q ∈ STRUC, B ⊆ PERF, Prob[g(q) ∈ B] can

³The notation A^B denotes the direct product of the set A by the set B, i.e., A^B is the set of all functions from B to A: A^B = {u | u: B → A}. Sometimes the notation B → A is used in place of A^B.

depend on the specific time at which the observation takes place.

Several categories of system description functions g can now be defined. Let g be a system description function such that |OBSER| = 1, i.e.,

g: STRUC → PERF.     (2.2)

Then g is a structure-based system description function. That is, if the system performance can be determined by a single sample of the system's structure, then the system description depends only on the system's current structure; hence the name "structure-based." As a simple example of a structure-based system description function, consider the following: Let OBSER = {0}, STRUC = {0,1,2,3}, PERF = {0,1}, and

g(q) = 1 if q ∈ {1,2,3}, and g(q) = 0 if q = 0.     (2.3)

One interpretation for this example is that if g(q) = 1, then the system is operational. We can determine whether the system is operational at a given time by simply observing what the structure (state) of the system is at that time. If |OBSER| > 1, then g is non-structure-based.

Let g be a structure-based system description function such that |PERF| = 2 and |STRUC| = 2^N, N < ∞. Then g is a structure function. A structure function is usually denoted φ.

Let g be a system description function such that |OBSER| > 1. Then g is a capability-based system description function. With a capability-based system description function, determining the system performance requires some knowledge of the system's history. For instance, let OBSER = [0,1], STRUC = {0,1,...,n}, PERF = ℝ (the real numbers), and

g(u(·)) = ∫ u(t) dt.     (2.4)

u(·) is a trajectory of the system, u(t) is the system structure at time t, and g(u(·)) is the

accumulated value of the trajectory. u(t) could be called the "reward rate" of the system at time t, and g(u(·)) would then be the reward accrued by the system during [0,1].

Finally, column 7 presents the probability measure of the system performance. Usually, this measure is either the probability distribution function of the performance or else the first moment of the performance. The remaining two columns present references to early work on the measure, as well as early work on applications to computing systems.

2.5. Description of the Structure of Degradable Systems (Table 2)

2.5.1. Introduction

As discussed in Section 2.4 [A Taxonomy of Measures and Models of Degradable Systems (Table 1)], most system measures that can be employed to evaluate degradable systems induce a system description function

g: STRUC^OBSER → PERF.     (2.1)

Two fundamental questions arise: For a given measure g,

1) How can one represent g?
2) How can one obtain g?

Table 2.2 encapsulates this information for the measures of Table 2.1. Also included are references to software tools useful for developing the system description function.

2.5.2. Representation and Acquisition of the Capability Function (Columns 2-5)

There are three general approaches for answering the above two questions: a direct measurement approach, an approximation approach, and an algebraic approach. The most accurate approach is direct measurement. This technique is rarely used, however, because it is expensive and often infeasible. Most of the research to date has concerned algebraic and

approximation approaches. Approximation is useful if the algebraic underpinnings are well understood so that the effect of the approximation is known.

Column 3 discusses the specific technique used to determine the system description function, while column 4 states common assumptions made in order to apply that technique. Many of the techniques (especially those requiring the delineation of functions or reward rates) are general, allowing the technique to be applied to a wide class of problems. The corresponding reference for the method of obtaining the system description function is given in column 5.

2.6. Determining the Probability Measure of System Performance (Table 3)

2.6.1. Introduction

Once the system description function has been obtained, some probability measure of that function must be calculated. Table 2.3 summarizes many of the methods of calculating the probability measure for the entries of Tables 2.1 and 2.2. In addition, some software tools for calculating the probability measure are given.

2.6.2. Calculating the Measure (Columns 2-5)

Columns 2-5 of Table 2.3 are identical to those of Table 2.2. The entries of column 5 refer to common assumptions regarding the parameters of the models. "t-i" indicates that the various components of the model generally have time-invariant statistics, while "s-i" means that the components are usually statistically independent.

2.7. Structure-Based Models

Classical reliability analysis techniques (e.g., Birnbaum, Esary, and Saunders [79]) stress the determination of system failure or system success. The state of the system is represented

by a vector X = (X_1, ..., X_n) ∈ {0,1}^n, where n is the number of components in the system and

X_i = 1 if the ith component is functioning, and X_i = 0 if the ith component is failed.     (2.5)

A structure function φ: {0,1}^n → {0,1} relates the system state to system success and system failure. Usually, focus is restricted to coherent (i.e., monotonic and nontrivial) structure functions. Later work (e.g., Haasl [94] and Fussell, Powers, and Bennetts [148]) characterizes coherent structure functions as fault trees or event trees. The probability of system success is called "reliability." By associating with each component a probability of success or failure, techniques for determining the reliability of a system have been developed. For complex systems, methods (based on minimal cut sets and minimal path sets) of computing bounds on the reliability have been found (Esary and Proschan [149]).

Other work has generalized the concept of structure functions to allow for a wider range of performance values. Hirsch, Meisner, and Boll [82] have studied degradable systems and have generalized the concept of structure function to allow any finite number of performance rates, called "levels of performance." Components of the system are either successful or failed; thus, the structure function is φ: {0,1}^n → {0,1,...,M}. The authors define a "cannibalization" to be a state transition function describing how the system is reconfigured when a fault occurs. Combining these two functions, Hirsch et al. obtain a "cannibalized structure function," depicting the system's performance rate for a given system state. Introducing a random process describing the failure of system components, the determination of the probability distribution of the system's performance rate at a given time is investigated. By assuming that the cannibalized structure function is coherent, the distribution provides a measure of the worst performance rate experienced by the system.
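The probability calculations underlying such multi-level structure functions can be sketched briefly. The Python fragment below is a hypothetical example (the three-processor system, the level function, and the component success probability are all assumptions made for illustration): it enumerates the component-state space to obtain the distribution of the performance level, assuming statistically independent components.

```python
from itertools import product

def phi(x):
    """Multi-level structure function phi: {0,1}^3 -> {0,1,2,3};
    here the performance level is simply the number of working
    processors, which makes phi coherent (monotonic)."""
    return sum(x)

def level_distribution(phi, n, p):
    """P[phi(X) = m] for each level m, with n components failing
    statistically independently, each working with probability p."""
    dist = {}
    for x in product((0, 1), repeat=n):
        prob = 1.0
        for xi in x:                       # product of component probabilities
            prob *= p if xi == 1 else (1.0 - p)
        m = phi(x)
        dist[m] = dist.get(m, 0.0) + prob
    return dist

d = level_distribution(phi, n=3, p=0.9)
# Classical reliability ("at least one processor up") is then
# sum(d[m] for m in d if m >= 1); the full dict d is the finer,
# degradable-system view of the same model.
```

For n components this enumeration costs 2^n terms, which is exactly why the bounding methods based on minimal cut and path sets cited above matter for large systems.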
Postelnicu [83] has considered systems in which components, as well as the system itself, take on values of performance in the continuous range [0,1]. A generalized structure function is introduced: φ: [0,1]^n → [0,1]. Bounds for the distribution of the system performance rate

are derived, as are limit distributions for iterated combinations. Tillman, Lie, and Hwang [85] have proposed a measure called "pseudo-reliability," the system reliability weighted by the relative performance. Pseudo-reliability provides a measure of the performance of a state along with the probability of achieving that state.

Another generalization has been phased mission analysis, where the reliability of the system is based on several observations (called phases). In most of these studies, the system must perform at some minimum level during each phase in order to be successful. Winokur and Goldstein [88], Bricker [150], Esary and Ziehms [97], and Pedar and Sarma [25], [151], among others, have investigated phased mission reliability.

Although the theory for the class of reliability-based evaluation discussed above has been thoroughly developed, structure-based formulations are unsuitable characterizations of degradable systems. If the user is interested in the "worst case" behavior of the system and if the system has coherent properties, then analysis of the structure function is enough. However, as demonstrated in [29], if system quality is based on performance criteria rather than structural criteria, structure functions cannot reflect the system's ability to perform: the structure function yields a "snapshot" of the system performance (i.e., the performance rate at the point of the evaluation), rather than the total system performance for the mission.

2.8. Other Reliability Oriented Models

Recognizing the problems inherent in simple extensions of classical reliability, researchers began exploring other methods of measuring a degradable system's ability to perform. Many of these measures are based on Markov processes, and most are reliability-oriented.
Meyer [12] has introduced the concept of "computation-based" reliability analysis, in which the underlying criteria for success are based not on the system's internal state, but on the tasks the system must accomplish in the use environment. A formal "tolerance relation" on the set of all computations is used to specify "success," and the probability of all

computations being within tolerance is calculated. Thus, the system's structure can degrade without affecting the measure of the system's ability to perform.

Borgerson and Freitas [13] first investigated the specific needs of degradable systems by determining the probability of each of several performance levels of a degradable system over a finite interval. Other authors have used Markov models to evaluate degradable systems; for instance, Laprie [152], Ng and Avizienis [153], and Costes, Landrault, and Laprie [154].

In terms of a computer system's structural state, Troy [15] has formulated the concept of "workpower" to quantify the amount of available processing capability, and has investigated the expected workpower ("operational efficiency") as a measure of the system's performance, provided the system does not fail. Thus, though no unified performance/reliability measure is presented, the desirability of considering the performance, as well as the reliability, of degradable systems is recognized.

Partitioning a degradable system into several independent "resources" and modeling each such resource as a Markov process, Losq [16] has examined performance/reliability measures oriented towards system availability, e.g., "availability," "mean time between crashes," "average processing power," and "proportion of time spent in degraded mode." Availability is defined in terms of a threshold on the structure (i.e., a minimum number of fault-free elements), and the optimization of availability is also investigated.

Although the models discussed in this section are somewhat more performance-oriented than the structure-based formulations, the measures are still primarily reliability-oriented. The study of degradable systems at this stage is more concerned with a system's survivability, regardless of its performance, than with its total contribution to the user.

2.9. Performance Oriented Models

As the methodology for evaluating degradable systems progressed, researchers began investigating ways of specifically incorporating performance attributes into the evaluation. These methods frequently use special Markov, semi-Markov, and Markov reward processes.

Beaudry [17] has developed an analysis method in which the amount of achieved computation defines success. By considering how the computation capacity of the system changes with time, one can build a transformed Markov model in which state transition rates are based on the expected amount of computation achieved in a state, as opposed to the expected amount of time sojourned in a state. From the transformed Markov model, one can determine quantities such as "mean computation before failure," "computation reliability," and "computation availability."

Mine and Hatayama [18] have considered systems in which various classes of jobs require the use of certain components within the system. Utilizing Markov models of both the system configuration and the job process, the "job-related reliability" (i.e., the probability of executing a job from a given job class) is examined.

Chou and Abraham [20] have used a discrete-state, continuous-time semi-Markov process to model a general n-processor shared-resource system. The processors are repairable, and a steady-state performance for the system is calculated as a function of n, the number of processors. Assuming that the cost is proportional to n, performance-to-cost ratios are calculated, and the optimal number of processors is determined.

Recently, several researchers have focused on Markov reward processes (see Howard [155], for example) to model the total performance of degradable systems. Generally, the reward structure attaches a value to the sojourn time in each state, along with a bonus for entering each state. The reward rates are associated with performance; hence the total reward is a measure of the system's total performance. Techniques are known for obtaining certain results from such models. In particular, expected values are typically obtained.
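As a concrete sketch of such an expected-value computation, the following pure-Python fragment (hypothetical parameters: the three-state model, failure rates, and reward values are assumptions made here for illustration, not a model from the literature surveyed) integrates E[Y(T)] = ∫ r·p(t) dt over [0,T], where p(t) solves the forward equation p' = pQ of a small Markov reward model:

```python
def expected_reward(Q, r, p0, T, steps=100_000):
    """Expected accumulated reward of a Markov reward process over [0,T].
    Forward-Euler integration is used for brevity; a production solver
    would use uniformization or a proper ODE method."""
    n = len(r)
    p = list(p0)
    dt = T / steps
    total = 0.0
    for _ in range(steps):
        total += sum(p[i] * r[i] for i in range(n)) * dt     # accumulate r . p(t) dt
        p = [p[j] + dt * sum(p[i] * Q[i][j] for i in range(n))  # Euler step of p' = pQ
             for j in range(n)]
    return total

# States: 2 processors up, 1 up, 0 up; assumed failure rate 0.01
# per processor-hour; reward rate = number of working processors.
Q = [[-0.02, 0.02, 0.0],
     [0.0, -0.01, 0.01],
     [0.0, 0.0, 0.0]]
r = [2.0, 1.0, 0.0]
print(expected_reward(Q, r, p0=[1.0, 0.0, 0.0], T=10.0))
# ~ 19.03; analytically 200(1 - e^{-0.1}), since the expected number of
# working processors at time t is 2e^{-0.01t} in this model.
```

This yields only the mean of the total reward; obtaining its full probability distribution, which is the harder problem, is precisely the goal of the performability work discussed below.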
Gay and Ketelsen [19] have formulated Markov reward process-based "capacity" and "workload" models reflecting, respectively, the system's capability to do work and the system's ability to satisfy the demands placed on it by the environment. The capacity model is similar to Beaudry's model, as are the derived effectiveness measures. Combining a capacity model (representing the reliability of the structure) with a birth-death model (representing the arrival and processing of jobs) yields a workload model, from which can be derived measures such as "expected system throughput" and "throughput availability" (or "relative throughput"). Employing a reward process model, Castillo and Siewiorek [21] have studied the measurement of the turnaround time of a computer job in a computing system having variable performance due to system workload and having fatal and nonfatal faults. Measures of "apparent capacity" and "expected elapsed time" (the time required to execute a given program correctly) are obtained, reflecting the viewpoint of a user submitting jobs, as opposed to the outlook of the system's owner. Although the technique is not intended for degradable systems, the approach does account for reliability and variations in performance. De Souza [22] has used a Markov reward process model to determine the expected operating costs (profits) of a computing system. In addition, a "cost reduction" measure is introduced to quantify the benefit of fault tolerance, and sensitivity analysis of the cost reduction is examined. Cherkesov [92] has introduced a solution for the probability of accumulating a minimum amount of reward during a finite interval. Krishna and Shin [26] examine the probability distribution function, and more specifically the expected value, of the cost associated with a computer control system. Arlat and Laprie [27] consider a performance-related reliability measure: the mean capacity cumulated before system down. Gai and Adams [28] have discussed tradeoffs between "optimal" and "robust" response times. A general framework for modeling performance and reliability is discussed by Meyer [29]. At its center is a measure called "performability," which is the probability measure induced by a random variable called "system performance." The concept of a "capability function," relating low-level system behavior and structure to high-level interpretations of performance, is also introduced.
Because of the relevance of performability to the research of this thesis, the first few sections of the following chapter summarize the basic points of performability. Precise definitions and a theoretical construction should not be expected; the discussion aims at providing the reader with an initial orientation to the problem area. For technical details, refer to Meyer [29] or Wu [59].

CHAPTER 3

FUNDAMENTAL CONCEPTS AND RESULTS

3.1. Introduction

The purpose of this chapter is to present the basic concepts of performability and the results of this dissertation. Underlying this dissertation is the concept of performability: how to obtain it and how to apply it. Section 3.2 [Motivation] examines the motivations in more detail. Section 3.3 [An Informal Introduction to Performability] summarizes the notion of performability in a non-technical manner. Some notes at the end of that section provide more detail for the interested reader. The development of performability given in Section 3.3 is somewhat different from that given in previous expositions, in that the treatment of Section 3.3 is more "constructive" in nature. Also presented in Section 3.3 [An Informal Introduction to Performability] are a procedure for constructing performability models and an example of a performability analysis demonstrating some of the advantages performability offers over moments. Finally, in Sections 3.4 [Models Having Finite Performance Variables] and 3.5 [Models Having Continuous Performance Variables], the main results of the dissertation for finite and continuous accomplishment sets are reviewed.

3.2. Motivation

When a complex computing system is analyzed, the analyst must consider several perspectives, or viewpoints, of the system. One perspective is a high-level, user-oriented¹ view of system behavior during the period of utilization. This is the viewpoint that most users ultimately consider to be the most important. This perspective addresses questions such as "how does this system benefit me?" and "what does the system do for me?" Other system perspectives, lower-level viewpoints, are concerned with how the system actually achieves its high-level behavior. Of course, lower-level viewpoints are of great concern to the system designer and analyst, since only by studying the system in detail can it be accurately designed and analyzed. The high-level perspective should, if sufficiently high, have the advantages of being easily described and interpreted. The lower-level perspectives, if they are low enough, while not so easily described and interpreted, may give the system analyst or designer insight into the effects of the system's fundamental components and configurations. The lower-level perspectives may have additional characteristics not readily available at the high level, e.g., the existence of simple probabilistic descriptions of the system. Since both high-level and lower-level views of the computing system are, after all, views of the same system, there must be some relationship between them. In particular, we assume that if enough information regarding low-level behavior is known, then high-level system behavior must deterministically correspond to specific low-level system behavior. If the correspondence is known, then a probabilistic characterization of high-level behavior can be based on a probabilistic characterization of low-level behavior. Furthermore, reasonable design decisions at the lower levels can be made based on knowledge of how changes in low-level behavior will affect high-level behavior.
¹By user, we mean the person or entity that actually benefits from the employment of the system and for whom the system is being evaluated. For example, if the system is a computer to be used aboard a commercial aircraft, then the "user" is the airline company that owns the aircraft.

On the basis of the observations presented above, this dissertation can be said to have two primary goals. (See also Section 1.1 [Problem Statement].) The goals are: for specific classes of interesting and important computing systems (called degradable computing systems; see Section 2.2 [Degradable Systems]), we desire 1) to develop methodologies for relating high-level and low-level behaviors, and 2) to apply these methodologies to the examination of design tradeoffs for degradable computing systems.

3.3. An Informal Introduction to Performability

3.3.1. Introduction

This section provides a brief summary of performability. The intent is to present only a short, non-technical motivation and synopsis. A few additional technical points are delimited by notes at the end of Section 3.3 [An Informal Introduction to Performability]. A formal definition of performability has been developed by Meyer and appears in [29]. A second formal definition appears in Wu [59, Chapter 2]. The reader interested in a complete exposition is referred to those two sources. The information in the notes is provided to connect this dissertation to the previous work, particularly [29] and [59].<NOTE 1>² However, understanding of the technical points appearing in those notes is not necessary to understand this dissertation. Other authors have also investigated various aspects of performability. These authors include Meyer and Ballance [37], Meyer [29], [33], [38], Furchtgott and Meyer [39], Meyer, Furchtgott, and Wu [32], [44], Meyer and Wu [43], [55], Hitt, Bridgman, and Robinson [156], Pedar [151], Pedar and Sarma [25], and Wu [59]. An important component of the two formal developments referred to above is the initial assumption of an underlying probability space describing the low-level behavior of the system. The rest of the framework is built bottom-up, based on the low-level probability space. The

²See the notes at the end of Section 3.3 [An Informal Introduction to Performability].

development in this chapter is different from the developments of [29] and [59], due to pedagogical considerations. The distinction between the presentations is important, since the one in this section supports either a top-down analysis or an analysis in which the high-level perspective has been made into an equivalent, but lower-level, model. In the development presented here, the introduction of the low-level probability space is deferred as long as possible. Until that time, the analysis is deterministic. Only when knowledge about the stochastic behavior of the low levels is required is the low-level probabilistic description inserted. The analysis then continues bottom-up to meet the deterministic analysis above. The deferral of the probabilistic model affords the maximum possible deterministic simplification. In particular, the procedure allows the analyst to analyze the system structure and then insert many different probabilistic descriptions, without necessitating re-analysis of the system models. We emphasize that the top-down approach presented here is equivalent to the bottom-up approaches of [29] and [59].

3.3.2. The Accomplishment Set

Consider first the high-level outlook. The user has a performance-oriented frame of reference, and so a high-level perspective should reflect accomplishment and achievement. Outcomes discernible by the user can usually be described by a single set (either countable or continuous) of values. This set A is called the accomplishment set or, less often, the performance set. An element a of A is called an accomplishment level (or performance level). For example, in one of the simplest cases, A = {success, failure}, and from the user's viewpoint, all utilizations of the system can be classified as having either accomplishment level "success" or accomplishment level "failure." The user, being interested in performance, desires to know how well the system will behave.
Often, however, the user cannot know in advance at which accomplishment level his system will perform, since random happenings (e.g., component failures, environmental factors) occur that may affect the system's performance. For instance, in certain applications the weather in which the system is used could influence the system's accomplishment level. Because the user cannot predict system outcomes with certainty, he must rely on a probabilistic description of performance outcomes. One such probabilistic characterization is called performability.<NOTE 3> A formal development of performability is presented by Meyer in [29], [38], [47]; earlier, less precise discussions of performability are given by Meyer in [34], [35], [157]. As a special case, when the accomplishment set A = {success, failure} or any other two-valued set, the probabilistic description is called reliability (see [7], for example). When the performance variable takes some value a ∈ A with probability 1 (modeling steady-state mean values), the description is usually called performance.

3.3.3. The Trajectory Space

A second perspective in analyzing a computing system is a low-level, detailed, structural view of the components comprising the system, along with the behavior of the environment in which the system operates. In contrast to the single set A representing the high-level, performance outlook, the description of this structural perspective is usually much more complicated. In the most general case, the history over the entire utilization of each component and each environmental factor must be specified to characterize this perspective fully.<NOTE 4> Let U be the set of all possible low-level "histories" or outcomes of the system. These histories are represented by functions

u: T → Q (3.1)

from a set T called the parameter set to a set Q called the state space. Hence U ⊆ Q^T. Call U the trajectory space of the system, and call a specific history u ∈ U a trajectory.<NOTE 5> The usual interpretation of a trajectory u is that at time t ∈ T, the system assumes state u(t).³ There may be a stochastic description of the trajectory space U. This description takes the form of a stochastic process

X = {X_t | t ∈ T} (3.2)

where each X_t is a random variable taking values in Q. Each trajectory u ∈ U is a sample function of the process X. The process X is called the base model.

3.3.4. The Capability Function γ

Every low-level outcome u ∈ U results in a corresponding high-level performance a ∈ A. There is thus a function

γ: U → A (3.3)

relating trajectories to accomplishment levels. γ(u) = a signifies that the low-level trajectory u will be interpreted by the user as having accomplishment level a. As an illustration, a particular behavioral occurrence u of the components and environment of a computing system may be interpreted by the user as a "success"; in that case, γ(u) = success. γ is called the capability function⁴ and is introduced by Meyer, Furchtgott, and Wu in [159] (see also [160], where a different symbol is used for the function, and [89]). The capability function is an extension of the concept of a structure function (see Section 2.7 [Structure-Based Models]). The capability function plays an important role in performability analysis. The determination of the capability

³In the continuous performance variable methodology (Chapter 5 [CONTINUOUS PERFORMANCE VARIABLE (CPV) METHODOLOGY]), T is continuous and u is represented as a function over T. In the discrete performance variable methodology (Chapter 4 [DISCRETE PERFORMANCE VARIABLE (DPV) METHODOLOGY]), T is often finite, and so a trajectory can be represented as a vector. In such cases, one often denotes trajectories in boldface type, e.g., U and u.

⁴The symbol γ was chosen to represent capability because both c (for capability) and γ are the third letters of their respective alphabets. c was not used because in the area of fault-tolerant computing, c is commonly employed to denote the important quantity "coverage" [158].

function and its inverse is, therefore, a significant component of this dissertation.

3.3.5. The Performability Model

A performability model is a two-tuple (X, γ). (See Section 3.3.3 [The Trajectory Space] for a description of X and Section 3.3.4 [The Capability Function γ] for a description of γ.)

3.3.6. Solving for the Performability

To obtain the probabilistic description of the accomplishment set A, the following two-step procedure can be employed.

Procedure 3.2: (Procedure for obtaining the performability of a system) For each (measurable<NOTE 6>) set B ⊆ A of accomplishment levels,

1) Determine

γ⁻¹(B) = {u | γ(u) ∈ B}, (3.4)

i.e., the set of all trajectories u that result in an accomplishment level in the set B, and

2) Determine the probability of the set of trajectories γ⁻¹(B).

The above procedure<NOTE 7> has been delineated in [89] and [30]. It is also discussed in [32] and [29]. In simpler form, it is the procedure employed in classical reliability evaluation [7, chap. 2]. Step 1) addresses goal 1) of the previous section: "to develop methodologies for relating high-level and low-level behaviors." Hence, much of this dissertation deals with methodologies for obtaining γ⁻¹(B). Step 2) will be treated only as is necessary for dealing with goal 2): "to apply these methodologies to the examination of design tradeoffs for degradable computing systems." Briefly, step 2) consists of specifying the base model X (see Eq. 3.2), and then determining

the probabilities of the trajectories in γ⁻¹(B) (Eq. 3.4). Two classes of accomplishment sets A are investigated in this dissertation: finite and continuous. Sections 3.4 [Models Having Finite Performance Variables] and 3.5 [Models Having Continuous Performance Variables] present an overview of the results of those investigations.

3.3.7. Constructing Performability Models

Procedure 3.2 of Section 3.3.6 [Solving for the Performability] specifies how to solve a performability model (see Section 3.3.5 [The Performability Model]). Implicit in that procedure is the existence of the model. In practice, however, the model must be constructed before it can be solved. Often, especially when A is continuous, we shall know γ a priori (and hence U and A) and X, i.e., we shall know the performability model. In other cases, particularly when A is discrete, we will initially know neither U nor even A, and so clearly we cannot possess γ and X. Suppose that A, U, and γ are not initially specified and that A and U are finite; this section presents a procedure for constructing performability models. This constructive procedure has not been explicitly described before. As discussed in Section 3.3.2 [The Accomplishment Set], the choice of the accomplishment set A is based on the user's viewpoint. The critical question regarding the accomplishment set is: "For a given total system, how are the accomplishment levels A chosen?" The accomplishment levels are selected by analyzing the concerns of the user and how those concerns might be influenced by the object system, which in the case of this dissertation is a degradable computing system. If, however, accomplishment levels that are not influenced by the object system are chosen, then no significant harm is done, since the analysis will eventually discover this lack. Hence, the person choosing the accomplishment set need not be intimately familiar with the total system being analyzed.
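Before turning to model construction, the two-step solution of Section 3.3.6 can be made concrete with a toy model; the parameter set, state space, capability function, and probabilities below are all invented for illustration (they do not come from the text), and the finite trajectory space U = Q^T is simply enumerated.

```python
from itertools import product

# Toy instance of Procedure 3.2.  Let T = {0, 1} and Q = {"up", "down"},
# so U = Q^T is finite.  The (invented) capability function gamma calls a
# utilization a "success" if the system is up at least once; the two steps
# are assumed stochastically independent with P(up) = 0.9 at each step.

T = (0, 1)
Q = ("up", "down")
U = list(product(Q, repeat=len(T)))  # trajectory space: all u: T -> Q

def gamma(u):  # capability function gamma: U -> A (Eq. 3.3)
    return "success" if "up" in u else "failure"

def pr(u, p_up=0.9):  # probability of a single trajectory
    prob = 1.0
    for state in u:
        prob *= p_up if state == "up" else 1.0 - p_up
    return prob

B = {"success"}
inverse_image = [u for u in U if gamma(u) in B]      # step 1: gamma^{-1}(B)
performability = sum(pr(u) for u in inverse_image)   # step 2: its probability
print(performability)  # mathematically 1 - 0.1**2 = 0.99
```

Step 1 is purely deterministic set manipulation; the probabilistic description enters only in step 2, which mirrors the deferral of the stochastic model discussed earlier in this chapter.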
Indeed, the accomplishment set A can be chosen by someone other than the analyst; e.g., the user could select the set in which he is interested. The other major components of the performability model are the trajectory space U and the capability function γ relating U to A. Once the accomplishment set A is chosen, a critical question is: "For a given total system, how are the trajectory space U and the capability function γ chosen?" The answer is not as straightforward as the selection of the accomplishment set. If U and γ are initially not known, two points should be emphasized regarding their determination:

1) Rather than simply choosing a trajectory space, we shall derive a trajectory space, based on our knowledge of the total system, and

2) In the process of deriving the trajectory space, we shall simultaneously derive the inverse of the capability function, i.e., γ⁻¹ (see Eq. 3.4).

The model construction procedure is stated below in general terms:

Procedure 3.3: (Procedure for constructing a performability model)

1) Characterize a "tentative" or "first pass" accomplishment set A'. A' is "tentative" because it may include accomplishment levels which are not possible. We assume that A' does contain all accomplishment levels in which the user is interested, i.e., that A ⊆ A'. A' will be refined to arrive at A. Call A' the tentative accomplishment set.

a) Choose the tentative accomplishment set A'.

b) Decide which subsets of A' are desired to be measurable.<NOTE 8>

2) Characterize the accomplishment set A, the capability function γ, and the trajectory space U:

a) Based on A' and the structure of the system, determine a "tentative" or "first pass" inverse capability function (γ')⁻¹ and a tentative trajectory space U':

i) For each a ∈ A', determine those low-level behaviors that "correspond to" a.

Characterize each such behavior as a function

u': DOMAIN(u') → IMAGE(u'). (3.5)

Denote this relation (γ')⁻¹, and call γ' the tentative capability function. We will not be specific about what is meant by "corresponds to"; this is part of the modeler's art. Loosely, u' "corresponds to" a means that an occurrence of low-level behavior characterized by u' will be interpreted by the user as accomplishment level a. See Section 3.3.8 [The Model Hierarchy] for a discussion of a hierarchy of models that can make the determination of the relation γ' easier. γ' is a function from a set of functions {u'} to A'. Observe that we make no restrictions on those functions. In particular, the domains of the various functions {u'} can be different. In addition, γ' need not be onto, i.e., there can be accomplishment levels a ∈ A' for which there is no u' such that γ'(u') = a, and so (γ')⁻¹(a) = ∅. (Note also that no conditions are made requiring γ' to be measurable.)

ii) Set U' to the set of all behaviors that correspond to some accomplishment level a ∈ A', i.e.,

U' = ∪ over a ∈ A' of {u' | γ'(u') = a}. (3.6)

Call U' the tentative trajectory space.

iii) Set A to the range of γ', i.e.,

A = γ'(U') = {a | γ'(u') = a for some u' ∈ U'}. (3.7)

b) Find U and γ⁻¹ by augmenting U' and (γ')⁻¹:

i) Set T to the union of the domains of all the functions in U', i.e.,

T = ∪ over u' ∈ U' of DOMAIN(u'). (3.8)

ii) Set Q' to the union of the images of all the functions in U', i.e.,

Q' = ∪ over u' ∈ U' of IMAGE(u'). (3.9)

Call Q' the tentative state space. We wish to extend the functions in U' so that every function has the same domain and range. To do this, we will augment Q' with a special element ⊥ that will serve as a "placeholder." If a function u' ∈ U' is not defined at some time t ∈ T, then u'(t) will be redefined to be ⊥. The following steps carry out this extension.

iii) Augment Q' with ⊥ to form Q, i.e.,

Q = Q' ∪ {⊥}. (3.10)

iv) Set U to the set of all functions u: T → Q, i.e.,

U = Q^T. (3.11)

v) For each function u' ∈ U', form a new function EXTEND(u') where

EXTEND(u')(t) = u'(t) if u'(t) is defined, and ⊥ otherwise. (3.12)

Each function EXTEND(u') has domain T and range Q. Sometimes the domains of all the functions u' ∈ U' will already be identical. In such cases, it is not necessary to augment Q' with ⊥, and so Q = Q' and EXTEND is the identity function.

vi) Set U_CONS = {EXTEND(u') | u' ∈ U'}. U_CONS contains all functions u: T → Q that are consistent in the sense that u can be related (via γ') to some accomplishment level.

vii) There may be some behaviors that are inconsistent, i.e., some behaviors may have no corresponding accomplishment levels.⁵ These are all the functions u: T → Q not in U_CONS. Call this set U_INCONS:

U_INCONS = U − U_CONS. (3.13)

viii) To each function u ∈ U_INCONS, assign an arbitrary accomplishment level γ''(u).<NOTE 9>

ix) Set the capability function γ: U → A such that

γ(u) = γ'(u) if u ∈ U_CONS, and γ(u) = γ''(u) if u ∈ U_INCONS. (3.14)

Sometimes U_INCONS will be empty. In such cases, we do not need to make any arbitrary assignments, and so γ = γ'.

c) Based on the measurable sets of accomplishment levels and the capability function γ, choose the measurable sets of trajectories.<NOTE 10>

⁵Note that the analyst is completely free to state which functions are consistent and which are inconsistent. For example, inconsistent behaviors can be those that 1) are logically inconsistent, i.e., that, due to the structure of the system, are impossible regardless of the stochastic description of the trajectory space (e.g., see Naylor and Meyer [161]), or 2) if the analyst has prior knowledge of the eventual stochastic description of the trajectory space, are probabilistically "inconsistent," i.e., the probability of all such trajectories is zero.
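The extension steps 2b(i)-(vi) of Procedure 3.3 can be sketched directly. The two tentative behaviors and the placeholder symbol below are invented for illustration; partial functions are stored as dictionaries from times to states.

```python
# Sketch of steps 2b(i)-(vi) of Procedure 3.3 with invented data.
# Tentative behaviors are partial functions u': DOMAIN(u') -> IMAGE(u');
# PLACEHOLDER plays the role of the special element adjoined to Q'.

PLACEHOLDER = "#"  # stands in for the placeholder state

tentative_U = [
    {0: "up", 1: "up"},    # a behavior defined on {0, 1}
    {0: "up", 2: "down"},  # a behavior defined on {0, 2}
]

T = sorted(set().union(*(u.keys() for u in tentative_U)))      # Eq. 3.8
Q_tent = set().union(*(set(u.values()) for u in tentative_U))  # Eq. 3.9
Q = Q_tent | {PLACEHOLDER}                                     # Eq. 3.10

def extend(u):
    """Eq. 3.12: total function on T, placeholder where u' is undefined."""
    return tuple(u.get(t, PLACEHOLDER) for t in T)

U_cons = {extend(u) for u in tentative_U}  # the consistent trajectories
print(T, sorted(U_cons))
```

Every extended behavior now has the common domain T = {0, 1, 2}; the full space U = Q^T and the inconsistent set U_INCONS = U − U_CONS follow by enumeration when T and Q are small.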

3.3.8. The Model Hierarchy

3.3.8.1. Definition of a Model Hierarchy

As discussed in Section 3.3.4 [The Capability Function γ], the capability function γ (see Eq. 3.3) relates low-level trajectories to high-level accomplishment levels. The determination of the capability function and its inverse is an important step in the evaluation of a system's performability. [See step 1) of the procedure in Section 3.3 [An Informal Introduction to Performability].] However, directly obtaining γ⁻¹ is difficult since

1) We do not initially know the trajectory space U, and

2) The "distance," in terms of how behavior of low-level components affects total system behavior, may be great.

To lessen the "distance," we introduce between U and A additional sets which facilitate determining γ⁻¹ by allowing a gradual refinement of the relationship between U and A. Each such set describes a model called a level. The collection of these intermediate models is a model hierarchy. The concept of applying such a model hierarchy to performability analysis is due to Meyer [34]. An early version of the techniques and methodology described in this section is presented by Furchtgott [89]. The notation is extended and the concept coupled to an underlying probability space in Meyer, Ballance, Furchtgott, and Wu [30]. That extension also appears in Meyer [29].<NOTE 11> If there are m + 1 levels in the hierarchy, then level-0 is the least detailed model, the top-level model of the hierarchy. Level-m is the most detailed model, the bottom-level model. The level-i model (i = 0, 1, ..., m) is represented by a set of trajectories U^i, called the level-i trajectory space.<NOTE 12> (See Figure 3.1.) At each level, the trajectory space U^i can be split into those components which can be composed in terms of the next lower level and those which are basic to the level. The former is the level-i composite trajectory space U_c^i and the latter is the level-i basic trajectory space U_b^i:

Fig. 3.1 The model hierarchy (levels 0 through m, related by the interlevel translations κ_0, ..., κ_m, with κ_0 mapping into A)

U^i = U_c^i × U_b^i. (3.15)

The level-(i−1) composite trajectory space U_c^(i−1) (i = 1, ..., m) is related to the level-i trajectory space U^i by a function called the i-th interlevel translation⁶

κ_i: U^i → U_c^(i−1). (3.16)

In the case i = 0,

κ_0: U^0 → A. (3.17)

Note that we can express the capability of a system in terms of higher-level (less detailed) observation points than the bottom model. Specifically, let Ũ^i denote the level-i trajectory space along with all the basic trajectory spaces of higher-level models:

Ũ^i = U^i × U_b^(i−1) × ⋯ × U_b^0 (3.18)

where Ũ^0 = U^0 and Ũ^m = U. Define the level-i-based capability function

γ_i: Ũ^i → A (3.19)

inductively as follows: if i = 0 and u ∈ Ũ^0, then

γ_0(u) = κ_0(u), (3.20)

and, for i > 0, if (u, u') ∈ Ũ^i where u ∈ U^i and u' ∈ U_b^(i−1) × U_b^(i−2) × ⋯ × U_b^0, then

γ_i(u, u') = γ_(i−1)(κ_i(u), u'). (3.21)

⁶The symbol κ for the interlevel translation was suggested by R. A. Ballance: each interlevel translation is actually a little "kappa-bility" function.
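In the special case where every level is fully composite (each U^i = U_c^i, so there are no basic components to carry along and Ũ^i = U^i), the level-based inverse computation reduces to pulling a set B back through one interlevel translation at a time. A toy sketch with invented two-level maps:

```python
# Toy sketch: computing gamma^{-1}(B) level by level for a hierarchy whose
# levels are fully composite, so gamma = kappa_0 composed with kappa_1.
# The maps and state names below are invented for illustration.

kappa_0 = {"fast": "success", "slow": "success", "halted": "failure"}  # U^0 -> A
kappa_1 = {  # U^1 -> U^0: (cpu, memory) pairs mapped to a service level
    ("ok", "ok"): "fast",
    ("ok", "degraded"): "slow",
    ("failed", "ok"): "halted",
    ("failed", "degraded"): "halted",
}

def pull_back(kappa, target):
    """Inverse image of a set under a finite map."""
    return {u for u, image in kappa.items() if image in target}

B = {"success"}
level0 = pull_back(kappa_0, B)       # gamma_0^{-1}(B) = kappa_0^{-1}(B)
level1 = pull_back(kappa_1, level0)  # = gamma^{-1}(B) in this 2-level case

print(sorted(level1))
```

Each pull-back works with the small state sets of one level at a time, which is exactly the benefit the hierarchy is meant to provide; with basic trajectory spaces present, the same iteration carries the basic coordinates along unchanged.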

If i = m, then γ_m = γ.

3.3.8.2. Examples of a Model Hierarchy

A complete example of a model hierarchy can be complex. Two examples have been well documented in the literature [29], [32]. Though the examples are extremely relevant to this thesis, because of their availability and bulk they have been omitted; the interested reader is referred to those other works. An example by Furchtgott appears in [89]. The same example appears in Meyer, Ballance, Furchtgott, and Wu [30], and Meyer [29], [38]. A second example appears in Meyer, Ballance, Furchtgott, and Wu [31], Furchtgott and Meyer [39], and Meyer, Furchtgott, and Wu [32], [40], [44]; this latter example, in a much simplified form, is also the basis of the example by Pedar [151] and Pedar and Sarma [25]. A small example and two complex examples also appear in Hitt, Bridgman, and Robinson [156].

3.3.8.3. Constructing a Model Hierarchy

Consider the two-step procedure outlined in Section 3.3.6 [Solving for the Performability]. We can now state how the first step is carried out, i.e., how to determine γ⁻¹(B) for a measurable B ⊆ A. Beginning with the level-0-based capability function, we have

γ_0⁻¹(B) = κ_0⁻¹(B), (3.22)

and we proceed iteratively. Thus, if γ_(i−1)⁻¹(B) has been determined, then

γ_i⁻¹(B) = ∪ over (u, u') ∈ γ_(i−1)⁻¹(B) of (κ_i⁻¹(u), u') (3.23)

where (κ_i⁻¹(u), u') = {(v, u') | κ_i(v) = u}. When γ_m⁻¹(B) = γ⁻¹(B) is reached, the procedure stops. To avoid manipulations of individual trajectories, actual implementations of the procedure utilize decompositions of γ⁻¹(B) into characterizations of sets of trajectories,

i.e., subsets of the trajectory spaces. Section 3.4 [Models Having Finite Performance Variables] discusses such representations when the accomplishment set A is finite, while Section 3.5 [Models Having Continuous Performance Variables] discusses such representations when A is continuous. The second step of Procedure 3.2 is to determine Pr(γ⁻¹(B)). Again, implementations of the procedure seek to determine the probabilities of entire sets of trajectories in a single calculation, as opposed to dealing with single trajectories. Algorithms supporting the above procedure have been implemented in a computer programming package called METAPHOR [41], [46]. Section 4.4 [METAPHOR - A Performability Modeling and Evaluation Tool] discusses METAPHOR.

3.3.9. Difficulties of Employing Moments Rather than Performability

Often in performance analysis and occasionally in reliability analysis, moments are employed to describe a system's characteristics (e.g., mean throughput rate, mean response time, mean time between failures). (See, for example, Table 2.1.) Using moments is certainly attractive, since, compared to distributions, moments are often both easier to determine and simpler to interpret. However, the first few moments of the performance variable Y frequently do not yield enough information regarding the behavior of degradable systems. This deficiency has long been recognized in reliability analysis [81]. We claim that for the evaluation of degradable systems, the performability distribution is the desired measure. This is not an obvious statement and requires justification. We shall show by example that the system having the best ability to perform is generally not indicated by the moments of the random variable denoting performance. The steps of the formal procedure of Section 3.3.7 [Constructing Performability Models] will not be explicitly detailed in this example, since they are straightforward and would obscure the focus

of this section. Consider the following example. Suppose we are designing a computing system whose user is concerned with system uptime during a finite utilization period T = [0, h]. Specifically, the user desires a system that will likely be operational for at least some fraction k of the total time, i.e., for at least time h·k, where k ∈ [0, 1]. If the system is up for either more or less time than h·k, the user places no value on the difference. We consider using either a single (simplex) computer (call this system SIM) or three computers in a triple modular redundant (TMR)⁷ configuration (system TMR). The computers fail permanently with an exponential distribution having rate λ, and we assume the voter cannot fail. The user is concerned with the amount of system uptime, and so we let the accomplishment set A = [0, h] represent the amount of system uptime. Note that the accomplishment set is continuous in this example. The performance Y takes values in the accomplishment set A, and we are interested in the measurable set

B_k = {a ∈ A | a ≥ h·k}. (3.24)

The following results are easy to derive:

p_SIM(B_k) = e^(−λhk), k ∈ [0, 1] (3.25)

p_TMR(B_k) = 3e^(−2λhk) − 2e^(−3λhk), k ∈ [0, 1] (3.26)

E[Y_SIM] = (1 − e^(−λh))/λ (3.27)

⁷TMR (triple modular redundancy) is a method of increasing system reliability at the expense of incorporating extra components into the system. Three identical units provide their output to a voter; the result of the majority of the units is taken to be the result of the system. Hence, the system can tolerate the failure of a single module (and, in the case of compensating failures, the failure of two modules).

E[Y_TMR] = 2e^(−3λh)/(3λ) − 3e^(−2λh)/(2λ) + 5/(6λ). (3.28)

p_S(B_k) has the interpretation "the probability that the system is up for at least a fraction k of the utilization period," i.e., that the interval availability is at least k. Given values for λ, h, and k, we can choose either system SIM or system TMR by one of the following two criteria: 1) the system with the highest expected system uptime (availability) E[Y], or 2) the system with the highest probability of having a fraction of uptime at least k, i.e., the highest performability value p_S(B_k). Let λ = 10⁻³ and h = (ln 4)/λ ≈ 1386. Then E[Y_SIM] = E[Y_TMR] = 750, and the first-moment criterion favors neither system. Figure 3.2 shows p_TMR(B_k) and p_SIM(B_k) as functions of k. The curves cross at k = (ln 2)/(λh) = 0.5:

for k < 0.5, p_TMR(B_k) > p_SIM(B_k);

for k = 0.5, p_SIM(B_k) = p_TMR(B_k);

for k > 0.5, p_SIM(B_k) > p_TMR(B_k).

Hence, if using the performability criterion, we choose either TMR or SIM, depending on the value of k. For example, if k = 0.8, then we choose SIM, while if k = 0.1, then we choose TMR. Thus, for the above values of λ and h, the two criteria differ. Since p_S(B_k) is a measure of the probability of the system having the desired uptime, and that probability is the user's interest, the more reasonable choice is the answer provided by p_S(B_k). First moments, in this case, do not indicate the proper system. Thus we see that moments generally do not suffice, even for non-degradable systems.
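The comparison is easy to check numerically; the following is a direct transcription of Eqs. 3.25-3.28 with the stated parameter values.

```python
from math import exp, log

# Numerical check of the SIM-versus-TMR example (Eqs. 3.25-3.28).
lam = 1e-3
h = log(4) / lam  # chosen so that lam*h = ln 4, i.e., h is about 1386

def p_sim(k):  # Eq. 3.25
    return exp(-lam * h * k)

def p_tmr(k):  # Eq. 3.26
    return 3 * exp(-2 * lam * h * k) - 2 * exp(-3 * lam * h * k)

E_sim = (1 - exp(-lam * h)) / lam                     # Eq. 3.27
E_tmr = (2 * exp(-3 * lam * h) / (3 * lam)
         - 3 * exp(-2 * lam * h) / (2 * lam)
         + 5 / (6 * lam))                             # Eq. 3.28

print(round(E_sim), round(E_tmr))  # both 750: moments cannot separate them
print(p_tmr(0.1) > p_sim(0.1))     # TMR preferred for small k
print(p_sim(0.8) > p_tmr(0.8))     # SIM preferred for large k
```

The expected uptimes agree to floating-point precision, while the performability values differ substantially at k = 0.1 and k = 0.8, confirming that the distribution, not the first moment, discriminates between the designs.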

[Figure 3.2 appears here: p_TMR(B_k) and p_SIM(B_k) plotted against k ∈ [0, 1], crossing at k = 0.5.]

Fig. 3.2  p_TMR(B_k) and p_SIM(B_k) as functions of k

3.3.10. Notes for Section 3.3: An Informal Introduction to Performability

Note 1: The information provided in the notes of this section is strongly based on the development of Section 2 of [29]. The work of [29], while precisely defining performability, does not address the issue of constructing performability models. In particular, the definition makes use of quantities that are assumed to be known; [29] does not concern the acquisition of those quantities. The emphasis in this set of notes is on constructive techniques. Specifically, the innovation presented in these notes is a procedure for constructing a performability model. (See Section 3.3.7 [Constructing Performability Models] for the construction procedure.) The following features of [59] have also been adopted here: 1) the development of the capability function γ as a random variable, and 2) the introduction of a random variable h relating the basic underlying probability space to the intermediate trajectory-based probability space. Among the details in [59] not discussed in this thesis is the question of how the co-ordinate probability space (U, ℱ, Pr) is constructed knowing the finite-dimensional distributions of the base model X. Two innovations assist the construction of the performability model: 1) the explicit construction of a probability space (A, ℬ, p) that includes the accomplishment set and the performability, and 2) the initial assumption of the set ℬ of measurable sets of accomplishment levels. In these notes, important foundations will be established for the structures and relations employed in the evaluation of the capability function γ. The existence of various probability spaces underlying the total system will be postulated, and the stochastic processes supporting "trajectory spaces" will be characterized. Recognition of these quantities is important, particularly to understand the stochastic nature of the models used and to differentiate between

"trajectories" and the random processes that define them.

Note 2: To describe the stochastic behavior of the system and its environment, we assume the existence of a probability space (A, ℬ, p), where A is the sample space, ℬ is the set of events, i.e., the measurable subsets of A, and p: ℬ → [0, 1] is the probability measure. (See [162], for example, for a discussion of probability spaces.)

Note 3: The performability p is the probability measure of the above probability space (A, ℬ, p).

Note 4: We assume there is a probability space (U, ℱ, Pr) describing the low-level behavior of the system. The mapping

γ: U → A   (3.29)

is a random variable if both of the following conditions hold for all B ∈ ℬ:

a) γ⁻¹(B) = {u | γ(u) ∈ B} ∈ ℱ   (3.30)
b) p(B) = Pr(γ⁻¹(B)).

γ is called the capability function. (See Section 3.3.4 [The Capability Function γ].) γ⁻¹ is a set mapping, taking sets of A into sets of U. If γ is measurable, then γ⁻¹ is measurable. We shall be constructing γ by first constructing γ⁻¹. Therefore, if γ⁻¹ is a measurable set mapping, we are concerned with whether γ is measurable, or indeed, whether γ even exists. Unfortunately, γ need not exist (e.g., if, for a₁ ≠ a₂, γ⁻¹(a₁) = {u} and γ⁻¹(a₂) = {u}), and, if γ exists, it may not be measurable (e.g., see [163, p. 318]). Of course, we require that γ both exist and be measurable. We shall not pursue general conditions which insure these

conditions; rather, the classes of γ⁻¹ considered in this dissertation will always induce γ which exist and are measurable.

The first class of γ⁻¹ which we consider (see Chapter 4 [DISCRETE PERFORMANCE VARIABLE (DPV) METHODOLOGY]) will be from a countable set A to a countable set U. No restrictions will be placed on γ⁻¹ except that no u ∈ U can be in two preimages, i.e., if u ∈ γ⁻¹(a₁) and u ∈ γ⁻¹(a₂), then a₁ = a₂. Clearly, ℬ and ℱ are countable; so γ exists and is measurable. The second class of γ⁻¹ discussed (see Chapter 5 [CONTINUOUS PERFORMANCE VARIABLE (CPV) METHODOLOGY]) will be from A = ℝ to U = ℝⁿ. However, γ will be given a priori and will exist and be measurable. The set U is the set of sample functions of a certain stochastic process. Thus, there is yet another probability space underlying U. The following diagram illustrates the relationships of the various sample spaces and random variables to be discussed:

A ←γ— U ←h— Ω,  with Y = γ ∘ h and X_t: Ω → Q.   (3.31)

We assume there is another probability space (Ω, ℰ, P), a state space Q of the total system, and a stochastic process

X = {X_t | t ∈ T}   (3.32)

where X_t: Ω → Q. X is called the base model of the system. A sample function X(ω) is referred to as a state trajectory u_ω = {u_ω(t) | t ∈ T}, where u_ω: T → Q and u_ω(t) = X_t(ω). "State trajectory" is a term derived from the theory of modeling. In the context of stochastic
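For the countable case, the requirement that no trajectory lie in two preimages can be checked mechanically. The following sketch is our illustration (the dissertation does not prescribe any code here): it validates a tabulated γ⁻¹ and recovers the induced point function γ from it.

```python
def gamma_from_preimages(preimage):
    """preimage maps each accomplishment level a to the set gamma^-1(a).

    Returns the induced point function gamma as a dict, or raises
    ValueError if some trajectory u lies in two preimages (in which
    case gamma would not exist as a function)."""
    gamma = {}
    for a, trajectories in preimage.items():
        for u in trajectories:
            if u in gamma and gamma[u] != a:
                raise ValueError(
                    f"trajectory {u!r} is in both gamma^-1({gamma[u]!r}) "
                    f"and gamma^-1({a!r})")
            gamma[u] = a
    return gamma
```

The check mirrors the condition stated above: u ∈ γ⁻¹(a₁) and u ∈ γ⁻¹(a₂) forces a₁ = a₂.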

processes, the term "sample function" is typically used. There is then a random variable h: Ω → U defined as

h(ω) = u_ω.   (3.33)

The set U is then the set of all sample functions of {X_t}, i.e.,

U = {u_ω | ω ∈ Ω}.   (3.34)

In practice, the underlying space (Ω, ℰ, P) is unknown, and the base model X is usually described in terms of its finite-dimensional distributions. If T is continuous, there are then some measure-theoretic considerations in relating the space (U, ℱ, Pr) to the base model X. These difficulties arise because U is not countable (being the |T|-dimensional direct product of Q), while we have only finite-dimensional distributions and countable unions and intersections with which to work. These issues are addressed by Wu [59]. In particular, we shall require that X be a "separable" (see [164, p. 41]) process. The system performance is then Y = γ ∘ h.

Note 5: Note that the trajectory space U as informally defined above is not exactly the same trajectory space U as defined in Eq. 3.34. Eq. 3.34 restricts U to measurable "histories" only.

Note 6: We desire to find the performability p. We know the accomplishment set A. To speak of the measure p, we must know which sets B ⊆ A are measurable. Therefore, we shall require that the analyst specify ℬ. In this dissertation, we shall consider only two classes of accomplishment sets A: finite and ℝⁿ, n < ∞. The following event spaces will be used:

a) If A is finite, then any subset of A will be measurable, i.e.,

ℬ = {B | B ⊆ A}.   (3.35)

b) If A is ℝⁿ, then ℬ will be the n-dimensional Borel σ-algebra (see, for example, [164, Section 1.2]).

In the remainder of this dissertation, when we write "γ⁻¹(B)," we mean "γ⁻¹(B) for B ∈ ℬ."

Note 7: Knowing (A, ℬ), we wish to determine p. To do so, we: 1) (deterministically) construct γ⁻¹ and so obtain (U, ℱ). This requires a top-down analysis. Then, 2) we obtain the state set Q and index set T from U, describe the process {X_t}, and derive the measure Pr. From this information, we obtain the performability as

p(B) = Pr({u | γ(u) ∈ B})   (3.36)
     = P({ω | Y(ω) ∈ B}).

Note 8: This is the set ℬ. See Section 3.3.2 [The Accomplishment Set].

Note 9: See step c) below for restrictions on γ.
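When both U and A are finite, Eq. 3.36 reduces to a finite sum over trajectories. A minimal sketch (the data layout and names are our own assumptions):

```python
def performability(pr, gamma, B):
    # p(B) = Pr({u | gamma(u) in B})  (Eq. 3.36), for a finite
    # trajectory space: pr maps each trajectory u to Pr({u}), and
    # gamma maps each trajectory to its accomplishment level.
    return sum(p for u, p in pr.items() if gamma[u] in B)
```

For example, with pr = {"u1": 0.5, "u2": 0.3, "u3": 0.2} and gamma = {"u1": "a0", "u2": "a1", "u3": "a0"}, performability(pr, gamma, {"a0"}) sums the probabilities of u1 and u3.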

Note 10: Set ℱ to the smallest σ-algebra that contains ∪_{B ∈ ℬ} γ⁻¹(B). ℱ must contain U_CONS since, when probabilities are later determined, the probability of a consistent trajectory must be one, i.e.,

Pr(U_CONS) = 1   (3.37)
Pr(U_INCONS) = 0.

This condition sets restrictions on the function of step 1.b.viii).

Note 11: As in Section 3.3 [An Informal Introduction to Performability], notes will be employed in this section to present slightly more technical details. The treatment given in the notes of this section is somewhat different from the treatments of [29], [30] in that: 1) the underlying probability space is explicitly decomposed into separate probability spaces at each level, 2) the interlevel translations κ_i are presented as random variables, and 3) the random variables h^i are introduced to relate the basic level-i probability space to the level-i trajectory-based probability space.

Note 12: In Figure 1, the box representing level-i (i = 1, 2, ..., m-1) contains sample spaces and random variables with the following relationships, to be explained below (compare with

the diagram of Eq. 3.31):

[Diagram (3.38) appears here: the level-i box, relating Ω^i, U^i, and Q^i through h^i and κ_i.]

Level-0 is the same as above, except that Q^{i-1} is replaced with A. Level-m is simply:

[Diagram (3.39) appears here: the level-m box, containing only the basic process over Ω^m and U^m_b.]

We assume that at level-i there is a state space Q^i. Corresponding to each level-i there is also a stochastic process

X^i = {X^i_t | t ∈ T^i}   (3.40)

where X^i_t is to be defined below. X^i is called the level-i model. T^i ⊆ T is called the level-i parameter set and Q^i is the level-i state space. If the cardinality of T^i is finite, we refer to the interval between two adjoining times t₁ and t₂ as a level-i phase if 1) t₁, t₂ ∈ T^i, t₁ < t₂, and

2) there is no t₃ ∈ T^i such that t₁ < t₃ < t₂.

Q^i can usually be decomposed as

Q^i = Q^i_c × Q^i_b   (3.41)

where Q^i_c is the composite state set and Q^i_b is the basic state set (at level-i). Note in Eq. 3.38 that Q^i is denoted in the next lower box, i.e., at level-(i+1). The composite state set contains those states in Q^i that are uniquely determined by the state behavior at level-(i+1), in a manner to be made clear below. Q^i_c is thus a composite of lower-level states. Basic state information newly introduced at level-i is contained in the basic state set Q^i_b. Thus, states in Q^i_b represent information not conveyed by states in Q^{i+1}. If the level-i model has no composite (or no basic) part, then that part is deleted, i.e., Q^i = Q^i_b (or Q^i = Q^i_c). Note that the bottom level (level-m) cannot have a composite part, and so Q^m = Q^m_b. We view X^i as a pair of stochastic processes

X^i_c = {X^i_{c,t} | t ∈ T^i_c}   (3.42)
X^i_b = {X^i_{b,t} | t ∈ T^i_b}   (3.43)

called the level-i composite process and level-i basic process, respectively. Here T^i_c ⊆ T^i, T^i_b ⊆ T^i, X^i_{c,t} ∈ Q^i_c, and X^i_{b,t} ∈ Q^i_b. Because the index sets T^i_c and T^i_b may not be the same, we extend the state spaces Q^i_b and Q^i_c with the fictitious state ξ, so that if t ∉ T^i_b, then X^i_{b,t} is defined to be ξ (or, if t ∉ T^i_c, then X^i_{c,t} is defined to be ξ) for all ω ∈ Ω. The level-i basic process is defined over the probability space (Ω^i, ℰ^i, P^i). Following the development in Section 3.3.3 [The Trajectory Space], X^i_{b,t}: Ω^i → Q^i_b. A sample function X^i_b(ω) is a level-i basic state trajectory u^i_ω = {u^i_ω(t) | t ∈ T^i_b}, where u^i_ω: T^i_b → Q^i_b and

u^i_ω(t) = X^i_{b,t}(ω). There is a random variable h^i: Ω^i → U^i_b defined as

h^i(ω) = u^i_ω   (3.44)

and the set U^i_b is then the set of sample functions of {X^i_{b,t}}, i.e.,

U^i_b = {u^i_ω | ω ∈ Ω^i}.   (3.45)

The development of the level-i composite process is slightly different. At level-(i+1), there is a probability space (U^{i+1}, ℱ^{i+1}, Pr^{i+1}) (see below). Again following the development in Section 3.3.3 [The Trajectory Space], X^i_{c,t}: U^{i+1} → Q^i_c. A sample function X^i_c(u) is a level-i composite state trajectory κ^{i+1}_u = {κ^{i+1}_u(t) | t ∈ T^i_c}, where κ^{i+1}_u: T^i_c → Q^i_c and κ^{i+1}_u(t) = X^i_{c,t}(u). There is a random variable κ_{i+1}: U^{i+1} → U^i_c (i = 0, 1, ..., m-1; κ_0: U^0 → A) defined as

κ_{i+1}(u) = κ^{i+1}_u.   (3.46)

The random variable κ_i is called the i-th interlevel translation. The set U^i_c is then the set of sample functions of {X^i_{c,t}}, i.e.,

U^i_c = {κ^{i+1}_u | u ∈ U^{i+1}}.   (3.47)

Now we define

X^i = (X^i_c, X^i_b)   (3.48)

and the probability space

(U^i, ℱ^i, Pr^i) = (U^i_c × U^i_b, ℱ^{i+1} × ℰ^i, Pr^{i+1} × P^i)   (3.49)

where ℱ^{i+1} × ℰ^i is the smallest σ-algebra that contains {F × E | F ∈ ℱ^{i+1}, E ∈ ℰ^i}. The level-i trajectory space U^i is U^i_c × U^i_b.

The capability function γ is then simply the composition of all the κ_i, i.e.,

γ = κ_0 ∘ κ_1 ∘ ... ∘ κ_m.   (3.50)

Of course, if either no composite or no basic part is present at level-i, then the above representations of X^i and U^i are understood to be the appropriate single-component versions. The collection of level-i models {X^0, X^1, ..., X^m} is a model hierarchy if the following conditions hold:

1) X^m = X^m_b, that is, the bottom model is comprised only of a basic process.

2) X = {X_t | t ∈ T}, where

X_t = (X^m_t, X^{m-1}_t, ..., X^0_t);   (3.51)
Q = Q^m × Q^{m-1} × ... × Q^0; and   (3.52)
U = U^m_b × U^{m-1}_b × ... × U^0_b.   (3.53)

3) For each level-i, there is an interlevel translation κ_i, where

κ_0: U^0_c × U^0_b → A,
κ_i: U^i_c × U^i_b → U^{i-1}_c,  (0 < i < m)   (3.54)
κ_m: U^m_b → U^{m-1}_c

such that for u = (u_m, u_{m-1}, ..., u_1, u_0) ∈ U with u_i ∈ U^i_b,

γ(u) = κ_0(κ_1(... κ_{m-1}(κ_m(u_m), u_{m-1}) ...), u_0).   (3.55)
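Eq. 3.55 says that γ is evaluated by folding the interlevel translations from the bottom of the hierarchy upward. A small sketch under our own conventions (each κ_i is represented as an ordinary function, and the trajectory u as the tuple (u_m, ..., u_1, u_0)):

```python
def capability(kappas, u):
    # kappas = [kappa_0, kappa_1, ..., kappa_m]; u = (u_m, ..., u_1, u_0).
    # Evaluates Eq. 3.55:
    #   gamma(u) = kappa_0(kappa_1(... kappa_{m-1}(kappa_m(u_m), u_{m-1}) ...), u_0)
    m = len(kappas) - 1
    c = kappas[m](u[0])              # bottom level: basic part only
    for i in range(m - 1, -1, -1):   # fold upward, level m-1 down to 0
        c = kappas[i](c, u[m - i])   # u[m - i] is the level-i basic part
    return c
```

With m = 2 and, say, κ₂ doubling its input, κ₁ adding its two arguments, and κ₀ thresholding the result into an accomplishment level, capability([κ₀, κ₁, κ₂], (u₂, u₁, u₀)) reproduces the nested form of Eq. 3.55.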

The probability space underlying the base model {X_t} is the product space

(Ω, ℰ, P) = (Ω^0 × Ω^1 × ... × Ω^m, ℰ^0 × ℰ^1 × ... × ℰ^m, P^0 × P^1 × ... × P^m).   (3.56)

3.4. Models Having Finite Performance Variables

Much of the work done to date on performability modeling has concerned the case where the performance variable Y assumes a countable (finite or countably infinite) number of values. The previous work includes evaluations of real-time control computers in a finite-length mission [25], [29]-[32], [36], [39], [44], [89], [151], [156]. To perform large-scale evaluations of the type reported on above, machinery for doing much of the mechanical work automatically must be specified and implemented. Part of this dissertation discusses notation, algorithms, and a software package for doing discrete performance variable performability evaluation. This section presents an overview of the results. When the performance variable Y is discrete, the performability methodology will be referred to as the "discrete performance variable methodology," or DPV methodology. Analogously, if Y is continuous, then we refer to the "continuous performance variable methodology," or CPV methodology. In this dissertation, if the performance variable is discrete, then it is finite; we shall not deal with countably infinite accomplishment sets in this treatise. Section 3.5 [Models Having Continuous Performance Variables] discusses the case where Y is

continuous.

3.4.1. Basic Concepts

3.4.1.1. The Accomplishment Set

Let the accomplishment set contain n < ∞ accomplishment levels, i.e.,

A = {a_0, a_1, ..., a_{n-1}}.   (3.57)

No assumption will be made regarding the relative importance of the a_i to the user. That is, for any i and j, a_i will not be considered to have more or less value than a_j. The finite accomplishment set induces the type of performability evaluation that comes closest to traditional reliability evaluation. Indeed, as mentioned in Section 3.3.2 [The Accomplishment Set], reliability evaluation requires the two-element set A = {success, failure}. As an example of a finite accomplishment set, consider the following scenario suggested by Furchtgott [89], which also appears in Meyer, Ballance, Furchtgott, and Wu [30], [36], Meyer, Furchtgott, and Wu [31], Meyer [29], [38], Furchtgott and Meyer [39], Meyer, Furchtgott, and Wu [32], [44], Pedar [151], and Pedar and Sarma [25]. An airline company has an aircraft with a degradable computing system. The company is concerned with the period of utilization, consisting of, say, a flight of 10 hours duration. The airline is interested in safety, passenger convenience (specifically, avoiding diversion to an alternate landing site), and fuel consumption. Five levels of accomplishment are recognized:

a_0: safe, no diversion to an alternate landing site, and low fuel consumption
a_1: safe, no diversion, and high fuel consumption
a_2: safe, diversion, and low fuel consumption
a_3: safe, diversion, and high fuel consumption
a_4: unsafe.
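The five levels above are determined by three Boolean attributes of a flight (safety, diversion, fuel consumption). A sketch of the mapping, with an index encoding of our own choosing:

```python
def accomplishment(safe, diverted, high_fuel):
    # Maps the three mission attributes to a level a0..a4.  The index
    # encoding (2*diverted + high_fuel) is ours for convenience; no
    # ordering of value among a0..a3 is implied by the indices.
    if not safe:
        return "a4"
    return f"a{2 * int(diverted) + int(high_fuel)}"
```

Note that the function partitions all outcomes: every combination of the three attributes lands in exactly one accomplishment level.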

Clearly, it is difficult to assign an ordering to the above-mentioned accomplishment levels. For instance, unless one is familiar with airline operations, it is not clear whether the airline would prefer accomplishment a_1 to a_2. Because no ordering of the set A is presumed, any reliance on a concept similar to coherence (see Barlow and Proschan [7]) is precluded.

3.4.1.2. The Trajectory Space

As mentioned in Section 3.3.3 [The Trajectory Space], the set of all low-level system behaviors is characterized by the trajectory space U. We shall use the assumptions and notation suggested by Furchtgott [89] that were expanded in [30], [36]. Those concepts also appear in [29], [31], [32], [38], [39], [44]. We assume that the trajectory space U (see Section 3.3.3 [The Trajectory Space]) is also finite. In particular, the parameter set T and the state space Q are both finite. The number of trajectories hence is |Q|^|T|. Since we assume that the connections between high-level and low-level behaviors are deterministic [see Step 1) of the procedure in Section 3.3.6 [Solving for the Performability]], we do not need to consider the underlying stochastic description of U. Only when we wish to consider the probabilities of accomplishment levels will we need to deal with the base model X (Eq. 3.2) [see Step 2) of the procedure in Section 3.3.6 [Solving for the Performability]]. U describes the possible behaviors of the low-level components during certain intervals called "phases." Each component during each phase will have a certain behavior, and the behavior will be assigned a value from some finite set. The phases associated with each component need not be the same. Also, as seen in Section 3.3.8 [The Model Hierarchy], it is convenient to group certain components together, especially those that have similar "levels of abstraction," or briefly, "levels." Again, only a finite number of levels is allowed.
To describe a trajectory u ∈ U, we use a vector of matrices whose sizes are finite but

not necessarily identical:

u = (u_m, u_{m-1}, ..., u_1, u_0)   (3.58)

where the element u^k_{i,j} denotes the behavior of component i during phase j at level of abstraction k. Thus, each matrix represents a level of abstraction, each row of each matrix represents a component, and each column represents a phase. The set U^k_{i,j} is the set of all possible values of u^k_{i,j}. For example, consider the aircraft example presented above. In evaluating the computing system, we may wish to take into account the weather that the flight encounters. The weather may then be denoted a system component. Further, we may wish to consider the behavior of the weather only during the period of time during which the aircraft lands. We assume that the weather affects only the various operations of the aircraft, e.g., landing and navigation, and not the functioning of the computer itself. The level of abstraction of the weather (which is higher than that of the computer) might then be called the "operational level." Thus, we would speak of the component "weather" during the phase "landing" at the level "operational level." The set of possible values might be

U_{weather, landing} = {Cat III, non-Cat III}   (3.59)

where Cat III is a category of bad weather requiring certain types of instrumentation for landing. For conciseness, we will usually symbolically encode the names of components, phases, levels, etc. into numbers. Eq. 3.59 may then be written

U_{2,3} = {0, 1}.   (3.60)

When used, the encoding will be made clear.
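One direct realization of this vector-of-matrices description is a nested mapping indexed by level, component, and phase; the per-level matrices need not have matching sizes. The names and values below are illustrative assumptions of ours, not data from the dissertation:

```python
# u[level][component][phase] = the behavior value of that component
# during that phase at that level of abstraction.
u = {
    "operational": {
        "weather": {"landing": "non-Cat III"},   # Eq. 3.59 cell
    },
    "computer": {
        # e.g. a processor that is up during takeoff and cruise (1)
        # and down during landing (0)
        "processor": {"takeoff": 1, "cruise": 1, "landing": 0},
    },
}

def behavior(u, level, component, phase):
    return u[level][component][phase]
```

The dictionary layout makes explicit that the phases associated with the weather differ from those associated with the processor.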

3.4.2. Trajectory Sets and Their Representations

When the performability model of a system involves a finite number of trajectories, one can theoretically write out every trajectory. However, in practice, there may be a large number (say, billions, trillions, or more) of such trajectories, and so dealing with each trajectory separately is intractable. Instead, we shall deal with sets of trajectories as single entities. These entities are called trajectory sets. A single trajectory set may denote from as few as zero trajectories to as many as are in the entire trajectory space. Trajectory sets and a representation of them were first described in the context of performability modeling by Furchtgott [89], and are also described in Meyer, Ballance, Furchtgott, and Wu [30]. A calculus for manipulating trajectory sets is presented by Furchtgott [89] and also appears in Meyer, Ballance, Furchtgott, and Wu [30]. Section 4.2.3 [Alternative Representations of Trajectory Sets] discusses various representations of trajectories and trajectory sets. In particular, the representation of [89] and [30] is discussed. This representation of trajectory sets is a variation of the lattice representation of discrete functions. (See Davio, Deschamps, and Thayse [165].) The lattice representation is also briefly reviewed in Section 4.2.3.2 [Representations Using Lattice Expressions] and is then employed in Section 4.3 [Calculation of Trajectory Sets] to describe algorithms for calculating trajectory sets for a performability model.

3.4.3. Algorithms for Calculating Trajectory Sets

Suppose we have specified the structure of a model hierarchy (see Section 3.3.8 [The Model Hierarchy]), i.e., the accomplishment set A, the models (X^0, X^1, ..., X^m), and the interlevel translations κ_0, κ_1, ..., κ_m. Then we wish to solve the model by calculating the inverse capability function γ⁻¹ (see Eq. 3.4).
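A common way to denote many trajectories at once is a product form: one set of admissible values per (component, phase) cell, the trajectory set being the Cartesian product of the cell sets. The sketch below is a simplified, single-level illustration in the spirit of the lattice representation; it is our own construction, and the actual representation of Section 4.2.3 differs in its details.

```python
from itertools import product

def tset_size(tset):
    # Number of individual trajectories denoted by a product-form
    # trajectory set: the product of the cell-set cardinalities.
    n = 1
    for values in tset.values():
        n *= len(values)
    return n

def tset_members(tset):
    # Enumerate the individual trajectories, each as a mapping from
    # (component, phase) cells to chosen values.
    cells = sorted(tset)
    for combo in product(*(sorted(tset[c]) for c in cells)):
        yield dict(zip(cells, combo))
```

A two-cell set with 2 and 3 admissible values denotes 6 trajectories while being stored as only 5 values, which is the economy that makes trajectory sets tractable when the full space has billions of members.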

Computationally tractable algorithms or heuristics are required to perform these calculations. In addition, errors in the specification of the κ_i can be made by the analyst; hence, further algorithms for checking the integrity of the κ_i are useful. To save additional computational costs, heuristics for reducing the number of trajectory sets are also of value. Algorithms and heuristics for the above purposes are introduced in Section 4.3 [Calculation of Trajectory Sets]. Let a be an accomplishment level in A. The algorithms include:

1) Based on Eq. 3.23, an algorithm for iteratively calculating γ⁻¹ given the κ_i.

2) A heuristic, recursive, tree-based algorithm for calculating γ⁻¹(a) (see Eq. 3.19) given the κ_i⁻¹. The leaves of the tree represent the trajectory sets of γ⁻¹(a) and the branches denote certain intermediate trajectory sets. The algorithm recursively constructs the tree.

3) A heuristic algorithm for reducing the number of trajectory sets. The algorithm iteratively attempts to combine pairs of trajectory sets into single trajectory sets.

4) An algorithm for a) determining whether the specification of κ_i⁻¹ is complete, i.e., for every u ∈ U^{i-1} (see Eq. 3.18), whether κ_i⁻¹(u) has been specified, and b) if the specification is incomplete, determining which values of U^{i-1} have not been specified.

Embedded in the above algorithms are other algorithms that implement the operations of the trajectory set calculus of [31], [89]. The purpose of describing the above-mentioned algorithms and heuristics is to automate, to the greatest extent possible, those tasks of performability modeling that are mechanical, laborious, and error-prone. Hence, the emphasis is on issues concerning computer implementation of the algorithms.
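Item 4) of the list above is essentially bookkeeping over the tabulated preimages. A sketch under our own data layout (the preimage of a translation stored as a mapping from outputs to sets of inputs); it is an illustration, not the dissertation's algorithm:

```python
def unspecified_inputs(kappa_inv, domain):
    # kappa_inv maps each output value to the set of inputs assigned
    # to it.  Returns the inputs in `domain` that no output covers,
    # i.e., the values the analyst still needs to specify.
    covered = set()
    for inputs in kappa_inv.values():
        covered |= set(inputs)
    return set(domain) - covered
```

An empty result certifies that the specification is complete over the given domain; otherwise, the returned set is exactly the list of missing cases to report back to the analyst.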
We are particularly concerned with computational efficiency, where "efficiency" relates to both 1) the amount of space required to represent the functions γ⁻¹ and κ_i, and 2) the amount of time required to determine the representations of those functions.

Much of the theory developed to date concerning discrete functions deals with optimizing the amount of space required to represent such functions. That is because, in most applications (e.g., the design of switching circuits using switching functions), discrete functions do not need to be calculated repeatedly. Hence, the cost of the time required to compute a spatially optimal representation is slight compared to the cost of the space itself. The current theory thus concerns topics such as concise representations and optimal coverings. However, our applications involve extensive manipulation of discrete functions, where the cost of the space (i.e., computer memory) to represent the function is not great. On the other hand, the computing time necessary to modify a function's representation is relatively expensive. We must specify how much time is appropriately spent reducing the size of the function's representation before we proceed with other computations involving that function. Such issues have been addressed in the study of exact reliability evaluation of fault trees [166]-[169] and reliability networks [170]-[174]. However, most of those investigations concern problems such as determining a disjoint sum-of-products representation, knowing the set of prime implicants. The problem we face is one of constructing a disjoint sum-of-products representation of γ⁻¹ directly from the interlevel translations κ_i. The tradeoff between computing time and representation space is not straightforward. For example, reducing the size of a function's representation can serve to reduce the time required to determine other representations. Thus, the above-mentioned two efficiency components, space and time, are not totally conflicting, since, by reducing the size of a function, we could also reduce its related computational time.
Nevertheless, finding a function's optimal spatial representation can require significantly more time than finding some suboptimal spatial representation. Further, storage in contemporary computer system memories is not expensive compared to system time. Therefore, it appears that the most efficient method of computing γ is to invest some time in reducing the size of given functions, and not to spend a large amount of time determining optimal representations. We explore the above

issues in Section 4.3 [Calculation of Trajectory Sets].

3.4.4. METAPHOR - A Performability Modeling and Evaluation Tool

The algorithms and heuristics necessary to calculate the trajectory sets of a performability model are discussed in Section 4.3 [Calculation of Trajectory Sets]. To be useful, the algorithms must be implemented as computer programs. Therefore, we have incorporated the algorithms into the software package METAPHOR (Michigan EvaluaTion Aid for PerpHORmability). METAPHOR was originally envisioned as a tool to be used at all stages of performability analysis, from the definition of model levels and interlevel translations (κ_i), to the calculation of trajectory sets, to the evaluation of the probabilities of those sets. Also, because of the design nature of constructing the model hierarchy, METAPHOR was intended to be an interactive facility. Owing to the large quantities of data that are sometimes necessary to input, METAPHOR can also be run in batch mode. The first version [41], [46] implements the probability evaluation of trajectory sets. The second and third versions implement the other steps. (In addition, the third version handles continuous performance variable evaluations as well as discrete performance variable ones.) The structure and use of METAPHOR are briefly discussed in Section 4.4 [METAPHOR - A Performability Modeling and Evaluation Tool]. The primary data structure employed in the algorithms of Section 4.3 [Calculation of Trajectory Sets] is the array, and so the first two versions of the package are written in APL [175], a computer language with extensive array manipulation capabilities and compact notation. These APL versions have been written under the Michigan Terminal System (MTS). Version 2 was substantially completed, but because of accessibility considerations, METAPHOR has been ported to UNIX⁸. Unfortunately, there is currently no UNIX APL sufficiently powerful to run the APL version of METAPHOR. Therefore, version 3 is written in C [176], a powerful, efficient language with flexible data structures and modern control flow. The third version also contains facilities for solving continuous performance variable performability models and has a menu-based user interface. METAPHOR is large. Not including the help facilities, METAPHOR contains approximately 150 functions; version 2 consists of 4500 lines of (commented) APL code (2700 lines uncommented), and version 3 has about 12,000 lines of (commented) C. Figure 3.3 shows the general layout of METAPHOR.

⁸ UNIX is a trademark of Bell Laboratories.

3.4.5. Examples

As mentioned in Section 3.3.8.2 [Examples of a Model Hierarchy], examples of model hierarchies can be complex; complete evaluations can be even more complex. One example of a performability model and evaluation appears in Furchtgott [89], Meyer, Ballance, Furchtgott, and Wu [30], and Meyer [29], [38]. A second example, a performability evaluation of the SIFT computer [67], [68], appears in Meyer, Ballance, Furchtgott, and Wu [31], Furchtgott and Meyer [39], and Meyer, Furchtgott, and Wu [32], [40], [44]. These studies generated the basic developments in the discrete performance methodology and also demonstrated the feasibility of performing such analyses. Also, a small example and two complex examples of performability evaluations appear in Hitt, Bridgman, and Robinson [156]. Section 3.4.5 [Examples] examines further developments in the DPV methodology that generally extend its scope of applicability. In particular, a less restricted notion of "phasing" is introduced. A non-trivial example is presented in Section 3.4.5 [Examples] to illustrate the methodology.

[Figure 3.3 appears here: a block diagram of METAPHOR. A menu-based selector leads to discrete and continuous branches; the discrete branch includes a consistency checker, input of the model hierarchy description, determination of missing trajectories, calculation of the capability function, input of the probabilistic description of the base model, the performability calculation package, and a plotting package.]

Figure 3.3 - Block diagram for METAPHOR

3.5. Models Having Continuous Performance Variables

The models employed when the performance variable is continuous (the CPV methodology) are similar to those employed in performance evaluation (e.g., Markovian queueing models) but are extended to account for variations in structure that are due to faults. The initial work on the CPV methodology is by Meyer [51]. That investigation examines a degradable buffer/multiprocessor system with 2 processors, where the performance variable Y was taken to be the "normalized average throughput rate" of the system. Later, a more systematic approach was developed and documented by Meyer [33], [53], [56] that models the system with N processors. Among the fundamental results of [33], [53], [56] is an innovative approach for obtaining closed-form solutions relative to the above important class of performability models. This approach is summarized on page 24 of [53] by a five-step algorithm. In Chapter 5 [CONTINUOUS PERFORMANCE VARIABLE (CPV) METHODOLOGY],

i) the above problem is developed more generally in the context of reward models and nonrecoverable processes [55],
ii) an integral solution for the class of systems that can be modeled by a finite-state, acyclic, nonrecoverable process is derived,
iii) a recursive solution is derived from the solution of ii),
iv) a software package for the calculation of CPV performability is discussed,
v) a non-Markovian, multiprocessor/air-conditioner example is investigated,
vi) several additional examples are investigated, and
vii) consideration of the closed-form solutions of still more general models is begun.

This section presents an overview of these results. First, Section 3.5.1 [Basic Concepts] introduces the basic concepts of the CPV methodology and contrasts the CPV and DPV methodologies. Section 3.5.2 [Reward Models and Nonrecoverable Processes] briefly discusses reward models and nonrecoverable processes, while Section 3.5.3 [Solution of Finite-State

Nonrecoverable Processes] overviews the solution obtained for finite-state, acyclic, nonrecoverable processes.

3.5.1. Basic Concepts

3.5.1.1. The Accomplishment Set

Let the performance variable Y be continuous; specifically, the image of Y is a continuum such as the real numbers ℝ or the n-dimensional space ℝⁿ. As an illustration of the applicability of a continuous performance variable, consider the example presented by Meyer [33] in which the accomplishment of a computing system is simply the normalized throughput rate⁹ of the system. The accomplishment set is the interval [0, 1]. Another illustration is the example of Section 3.3.9 [Difficulties of Employing Moments Rather than Performability], where the accomplishment is the amount of system uptime, and so A = [0, h]. Still other instances of interpretations of continuous performance variables include performance-based measures such as response time, turnaround time, processor utilization, waiting time, and length of busy period, as well as cost-based measures such as cost-per-mission. We assume that, unlike the finite performance variable, the continuous performance variable has a partial ordering ≤. (In fact, in this dissertation, we generally assume A is isomorphic to ℝⁿ, n < ∞; see Note 6 of Section 3.3 [An Informal Introduction to Performability].) The partial ordering provides the ability to say that one outcome may be "better" than another; hence, we can speak of the set of outcomes that are "better" than some threshold.

3.5.1.2. The Trajectory Space

⁹ The throughput rate is the number of jobs processed per unit of time. The normalized throughput rate is the achieved throughput rate divided by the maximum throughput rate.

Generally, in the CPV methodology, we assume the trajectory space U (see Section 3.3.3 [The Trajectory Space]) is uncountable. If U were countable, then the image of the capability function γ: U → A (Eq. 3.3) would be countable and a countable accomplishment set IMAGE(γ) could be substituted for A. The elements of the trajectory space U are functions

    u: T → Q                                                        (3.1)

where T is the parameter set and Q is the state space. As a precondition for U to be uncountable, either T or Q must be uncountable. We generally assume that T is uncountable. No restrictions will be placed on the state space Q, although in this dissertation, Q will usually be finite. Of course, T could be countable and Q could be uncountable and still satisfy the criterion that U be uncountable, but we do not consider such cases in this research. As an illustration of a trajectory space used to support a continuous accomplishment set, consider the simplex (SIM) computer of the example of Section 3.3.9 [Difficulties of Employing Moments Rather than Performability]. The state space is the two-element set {0, 1}, where 1 denotes that the computer is operational and 0 denotes the computer has failed. The parameter set T could be chosen to be either [0, h] or [0, ∞) since the behavior of the system after time h is irrelevant. Let us choose [0, ∞) to make the stochastic description easier. The trajectory space U is the set of all functions u: T → Q, i.e., Q^T. The system has no repair, and so once the system enters state 0, state 1 cannot be entered again. Hence, the consistent trajectory space (see Section 3.3.7 [Constructing Performability Models]) U_CONS is the set of all monotonically non-increasing functions from T to Q, i.e.,

    U_CONS = {u: T → Q | u(t) ≥ u(t′) for all t, t′ ∈ T such that t ≤ t′}.    (3.61)
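The consistent trajectory space of Eq. 3.61 can be made concrete with a small sketch (ours, not from the dissertation): for a finite sample of the utilization period T and the two-element state space of the SIM computer, enumerate Q^T and keep only the monotonically non-increasing trajectories.

```python
from itertools import product

# Sketch (not from the dissertation): sample the utilization period at a
# finite number of increasing times, enumerate Q^T for Q = {0, 1}, and keep
# the monotonically non-increasing trajectories of Eq. 3.61, i.e., U_CONS.

def is_consistent(u):
    """u is a tuple (u(t_0), u(t_1), ...) sampled at increasing times."""
    return all(u[k] >= u[k + 1] for k in range(len(u) - 1))

def consistent_trajectories(Q, num_samples):
    return [u for u in product(Q, repeat=num_samples) if is_consistent(u)]

U_cons = consistent_trajectories((0, 1), 4)
# With no repair, a trajectory is determined by when (if ever) the single
# 1 -> 0 transition occurs: failure before any of the 4 samples, between
# any two samples, or never -- 5 consistent trajectories in all.
print(U_cons)
```

The complementary filter (trajectories rejected by `is_consistent`) is exactly the inconsistent set U_INCONS of Eq. 3.62.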

The set of inconsistent trajectories U_INCONS contains all other functions T → Q, i.e.,

    U_INCONS = {u: T → Q | u ∉ U_CONS}.                             (3.62)

3.5.2. Reward Models and Nonrecoverable Processes

A common feature of user-oriented behavioral descriptions of computing systems is the presence of various "operational modes." Typically, such modes reflect different rates at which the system accumulates (or dissipates) reward, where the reward is a measure of user satisfaction. Reward rates can often be identified with aspects of system performance such as productivity, responsiveness, and utilization, or, at a higher level, with broader measures such as economic benefit. "Operational mode" is a behavioral concept, not a structural one, and hence operational modes are not to be confused with the structural or physical state of the system. There is frequently a strong correspondence between physical states and operational modes; for example, the present state of a system may induce the present mode of the system. However, in other cases, other factors, such as the system's state trajectory, may affect the system's mode. Meyer and Wu [55] have introduced a user-oriented "operational model" that is based on the variations in rates from mode to mode. In Section 5.2 [Reward Models], we review a special case of operational model called the "reward model."

3.5.3. Solution of Finite-State Nonrecoverable Processes

Among the fundamental results of [53], [56] is an innovative approach for obtaining closed-form solutions relative to an important class of performability models. This approach is summarized on page 24 of [53] by the algorithm that is recapitulated in Section 5.3.3 [The Approach]. However, although the algorithm delineates in broad terms the basic method for arriving at solutions, the algorithm does not suggest specific techniques for actually implementing the prescribed steps. In particular, the set of trajectories γ⁻¹(B) (Eq. 3.4) must be

characterized. Thus, the computational example presented in [33], [53], [56] was derived in a relatively ad hoc manner; effectively, the solution was based on a graphical argument. This type of approach becomes more difficult when the number of states in a trajectory is four, and becomes intractable when the number of such states grows to five or more. Section 5.3.8 [Solution of Finite State Acyclic Reward Models] presents an integral solution for the class of systems that can be modeled by a finite-state, acyclic, nonrecoverable process. The crux of the solution is the characterization of the regions C_y. If specific state transition distributions are given, the integrations can be performed and the performability determined. The stochastic nature of the underlying process is unrestricted, and in particular, the process can be non-Markovian.
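The flavor of such solutions can be illustrated with the simplest possible nonrecoverable reward model (our sketch, with an assumed exponential failure distribution; the dissertation's results cover far more general, even non-Markovian, processes): the SIM computer earns reward at rate 1 while up and 0 after its single permanent fault, so over a utilization period [0, h] the accumulated reward Y is the minimum of the failure time and h, giving P(Y ≤ y) = 1 − e^(−λy) for 0 ≤ y < h.

```python
import math
import random

# Sketch (assumed parameters, not from the dissertation): a two-state
# nonrecoverable reward model.  Reward rate is 1 in the operational state
# and 0 after failure; time to failure is exponential(lam); the utilization
# period is [0, h].  Accumulated reward Y = min(failure time, h), so for
# 0 <= y < h the distribution is P(Y <= y) = 1 - exp(-lam * y).

lam, h = 0.5, 2.0

def p_reward_at_most(y):
    """Closed form P(Y <= y) for 0 <= y < h."""
    return 1.0 - math.exp(-lam * y)

def simulate(n, seed=0):
    """Monte Carlo draws of the accumulated reward Y."""
    rng = random.Random(seed)
    return [min(rng.expovariate(lam), h) for _ in range(n)]

ys = simulate(100_000)
y = 1.0
empirical = sum(v <= y for v in ys) / len(ys)
print(abs(empirical - p_reward_at_most(y)))  # small Monte Carlo error
```

For richer acyclic structures (several degraded reward rates before absorption), the same distribution is what the integral solution of Section 5.3.8 produces in closed form.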

CHAPTER 4

DISCRETE PERFORMANCE VARIABLE (DPV) METHODOLOGY

4.1. Trajectory Sets: Basic Notation and Operations

This section discusses a concise notation for describing trajectory sets (see Section 3.3.3 [The Trajectory Space]), along with a primitive calculus for manipulating trajectory sets. This calculus is useful for manual manipulation of trajectory sets. Most of the concepts in this section appear in Furchtgott [89]. They are also discussed in Meyer, Ballance, Furchtgott, and Wu [30]. Most of the early performability evaluations (e.g., Meyer [29] and Meyer, Furchtgott, and Wu [32]) were carried out using the ideas, notation, and calculus described in this section. A more formal development appears in Section 4.2 [Discrete Functions and the Representation of Trajectory Sets]. The discussion in this section provides a straightforward introduction to Section 4.2 [Discrete Functions and the Representation of Trajectory Sets].

4.1.1. Notation and Terminology

The discrete variable methodology makes extensive use of arrays. The indexing of vectors, matrices, and arrays in this chapter will always begin at 0. An (s + 1)-dimensioned vector [z₀, z₁, ..., z_s] will frequently be denoted by [z]_:s, or if the index i must be explicitly indicated, by [z_i]_i:s. An (r + 1) × (s + 1) dimensioned matrix will be similarly denoted by

[z]_r×s or [z_ij]_i,j:r×s, i.e.,

    [z]_r×s = [z_ij]_i,j:r×s = [ z₀₀  z₀₁  ...  z₀s ]
                               [ z₁₀  z₁₁  ...  z₁s ]
                               [  .    .         .  ]
                               [ z_r0  z_r1 ... z_rs ]              (4.1)

Higher dimensioned arrays and arrays of vectors, etc., make similar use of the same notation. Briefly reviewing Section 3.3.8 [The Model Hierarchy], a composite (alternatively, basic) trajectory at level-i is a function¹⁰ u_c: T_c^i → Q_c^i (u_b: T_b^i → Q_b^i), where u_c(t) = x_c,t = X_c,t^i(ω) (u_b(t) = x_b,t = X_b,t^i(ω)) for some ω ∈ Ω. T_c^i (T_b^i) is the level-i composite (basic) utilization period while Q_c^i (Q_b^i) is the level-i composite (basic) state space. The level-i composite (basic) trajectory space is the set U_c^i = {u_c} = {u_c,ω | ω ∈ Ω} (U_b^i = {u_b} = {u_b,ω | ω ∈ Ω}). The level-i trajectory space is thus U^i = U_c^i × U_b^i = {(u_c,ω, u_b,ω) | ω ∈ Ω}. The random processes X_c^i and X_b^i generally describe several system components, i.e., features such as hardware subsystems or behavioral functions that are identifiable and helpful in portraying the system. As noted in Section 3.3.8 [The Model Hierarchy], Q_c^i and Q_b^i can be coordinatized; the projection of X_c,t^i (X_b,t^i) on a particular coordinate is called a composite (basic) variable. For the trajectories used, two coordinates are employed. One coordinate is the particular component being observed, while the other coordinate is the observation time. A level-i trajectory u^i = u_c × u_b ∈ U^i = U_c^i × U_b^i is first written as a column array:

    u^i = [ u_c ]
          [ u_b ]                                                   (4.2)

¹⁰In this chapter, trajectories u are denoted by boldface type to emphasize their vector-like qualities.

where u_c is the composite trajectory and u_b is the basic trajectory. If the number of observations at level i is s_i < ∞, expansion along the time coordinate yields the representation

    u^i = [ u_c ] = [ [u_c(t_k)]_k:s_i ] = [ u_c(t₀)  u_c(t₁)  ...  u_c(t_s_i) ]
          [ u_b ]   [ [u_b(t_k)]_k:s_i ]   [ u_b(t₀)  u_b(t₁)  ...  u_b(t_s_i) ]    (4.3)

where T^i = {t₀, t₁, ..., t_s_i}. If the composite and basic components are respectively u_c0, u_c1, ..., u_cr_c and u_b0, u_b1, ..., u_br_b, then expansion along the component coordinate yields:

    u^i = [ u_c ] = [ [u_cj]_j:r_c ] = [ u_c0  ]
          [ u_b ]   [ [u_bj]_j:r_b ]   [ u_c1  ]
                                       [  ...  ]
                                       [ u_cr_c ]
                                       [ u_b0  ]
                                       [  ...  ]
                                       [ u_br_b ]  = [u_j]_:(r_c + r_b + 1).    (4.4)

Thus, along both coordinates, the expanded representation is:

    u^i = [ u_c0(t₀)   u_c0(t₁)   ...  u_c0(t_s_i)  ]
          [ u_c1(t₀)   u_c1(t₁)   ...  u_c1(t_s_i)  ]
          [   ...                                   ]
          [ u_cr_c(t₀) u_cr_c(t₁) ...  u_cr_c(t_s_i) ]
          [ u_b0(t₀)   u_b0(t₁)   ...  u_b0(t_s_i)  ]
          [   ...                                   ]
          [ u_br_b(t₀) u_br_b(t₁) ...  u_br_b(t_s_i) ]  = [u_j(t_k)]_j,k:(r_c + r_b + 1) × s_i    (4.5)

A projection along a single time coordinate is referred to as a trajectory observation. Similarly, a projection along a single component will be called a component trajectory. For

instance,

    u^i(t_k) = [ u_c0(t_k) ]
               [ u_c1(t_k) ]
               [   ...     ]
               [ u_b0(t_k) ]
               [   ...     ]  = [u_j(t_k)]_j:(r_c + r_b + 1)        (4.6)

is a trajectory observation at time t_k, while

    u_cj = [u_cj(t_k)]_k:s_i = [ u_cj(t₀)  u_cj(t₁)  ...  u_cj(t_s_i) ]    (4.7)

is a component trajectory of the jth composite component. The interval between the kth and (k+1)th sample is called the (k+1)th phase. To conform with the trajectory set notation of Section 4.1.2 [A Calculus of Trajectory Sets], we shall usually write a composite (basic) variable u_cj(t_k) as u_cj,k (u_bj(t_k) as u_bj,k). As an example to clarify this notation and illustrate its use, consider the following. Suppose, at hierarchy level 2, a model with two composite components [flight control system (FCS) and navigation (NAV)] and a single basic component [air traffic control (ATC)] has been constructed. Consider a trajectory

    u² = [ u²_c0,0  u²_c0,1  u²_c0,2 ]   FCS
         [ u²_c1,0  u²_c1,1  u²_c1,2 ]   NAV
         [ u²_b0,0  u²_b0,1  u²_b0,2 ]   ATC                        (4.8)

Here, the utilization period involves three samples T = {t₀, t₁, t₂}. To simplify notation, we shall not always write the level number with every variable of every trajectory when the meaning is clear. Then with the obvious correspondence c0 = 0, c1 = 1, b0 = 2, t₀ = 0, t₁ = 1, and t₂ = 2, Eq. 4.8 is written

    u² = [ u₀₀  u₀₁  u₀₂ ]   FCS
         [ u₁₀  u₁₁  u₁₂ ]   NAV
         [ u₂₀  u₂₁  u₂₂ ]   ATC                                    (4.9)

Here the composite trajectory is

    u²_c = [ u₀₀  u₀₁  u₀₂ ]   FCS
           [ u₁₀  u₁₁  u₁₂ ]   NAV                                  (4.10)

while the basic trajectory is

    u²_b = [ u₂₀  u₂₁  u₂₂ ]   ATC,                                 (4.11)

where, for example, u₁₂ is the state of the navigation system at the third observation time. Finally, if A is the matrix [a_jk]_r×s, the projection function² Π_jk(A) = a_jk is frequently composed with an interlevel translation, e.g., Π_jk(κ_i+1). This is done to extract the particular portion of the function range that is of interest. As an illustration, consider the level-i composite model having r_c components

    U_c^i = U_c0 × U_c1 × ... × U_cr_c                              (4.12)
          = [U_cj]_j:r_c                                            (4.13)

where the U_cj can be further coordinatized along time, U_cj = X_k U_cj(t_k), where t_k ∈ T^i.

²Let A = X_i∈I A_i be a Cartesian product indexed by the set I. The projection of A on i is the function π_i: A → A_i where for a ∈ A and a_i ∈ A_i, π_i(a) = a_i.

That is,

    U_c^i = [U_cj(t_k)]_j,k:r_c × s_i                               (4.14)

Using the projection function, U_cj(t_k) will be written Π_jk U_c^i. The interlevel translation from level-i to level-(i−1) (assuming level-i exists and i > 0) is (see Eqs. 3.15–3.16)

    κ_i: U^i → U_c^(i−1)                                            (4.15)

where U^i = U_c^i × U_b^i. To select the function mapping U^i into U_cj^(i−1)(t_k) = U_cj,k^(i−1), we write

    Π_jk κ_i: U^i → Π_jk U_c^(i−1).                                 (4.16)

We shall refer to Π_jk κ_i as the jth component interlevel translation at observation k.

4.1.2. A Calculus of Trajectory Sets

It is convenient now to introduce a calculus that is of great use in determining the γ-induced trajectory sets, i.e., γ⁻¹. This calculus can be used to simplify the upper level models of the hierarchy before any lower level models are examined. Also, the calculus is used to assimilate lower levels as they are developed. After the lowest level of the hierarchy has been operated upon, the result is γ⁻¹. Derivation of γ⁻¹ is important because, using the techniques of Section 3.3.6 [Solving for the Performability] on γ⁻¹, performability calculations for the system can be effected. Manipulation of sets of trajectories is necessary to derive the preimage sets of γ. However, handling such sets can be awkward because of their size. Therefore, we have investigated techniques of manually operating with sets of trajectories in a convenient and compact manner. A set of trajectories will be called a trajectory set. We first introduce a simple

representation of a trajectory set. Consider the trajectory

    u = [ u₀₀  u₀₁  ...  u₀s ]
        [ u₁₀  u₁₁  ...  u₁s ]
        [  ...               ]
        [ u_r0  u_r1 ... u_rs ]                                     (4.17)

where each u_jk can assume values in a set of states Q_jk. Each u_jk is a "variable." For example, we may have

    u = [u_jk]_1×1 = [ u₀₀  u₀₁ ]
                     [ u₁₀  u₁₁ ]                                   (4.18)

where u₀₀ ∈ Q₀₀ = {1, 2, 3}, u₀₁ ∈ Q₀₁ = {0, 2, 4}, u₁₀ ∈ Q₁₀ = {−1, −2}, and u₁₁ ∈ Q₁₁ = {a, b, c}. Suppose we have two trajectories u₁ and u₂ such that u₁ and u₂ are equal variable-by-variable except for a single variable. That is,

    u₁ = [ u₀₀  ...  u₀s ]
         [ ...  u¹_jk ... ]
         [ u_r0 ... u_rs ]                                          (4.19)

    u₂ = [ u₀₀  ...  u₀s ]
         [ ...  u²_jk ... ]
         [ u_r0 ... u_rs ]                                          (4.20)

where u¹_jk ≠ u²_jk. We then write the trajectory set {u₁, u₂} as

    {u₁, u₂} = { u₁, u₂ }                                           (4.21)

             = [ {u₀₀}  ...  {u₀s} ]
               [ ... {u¹_jk, u²_jk} ... ]
               [ {u_r0} ... {u_rs} ]                                (4.22)

This representation is called an array product. Of course, all array products are trajectory sets. Frequently, when the trajectory set in which we are interested can be written as an array product, we shall use the terms trajectory set and array product synonymously. Note that the concept is similar to that of a cross product. Of course, the idea can be generalized:

    [R_jk]_r×s = [ R₀₀  R₀₁  ...  R₀s ]
                 [ R₁₀  R₁₁  ...  R₁s ]
                 [  ...               ]
                 [ R_r0  R_r1 ... R_rs ]  = { [u_jk]_r×s | u_jk ∈ R_jk ⊆ Q_jk }    (4.23)

As an illustration, suppose that

    u₀ = [ 1   2 ]      u₁ = [ 3   2 ]                              (4.24)
         [ −2  a ]           [ −2  a ]

    u₂ = [ 1   2 ]      u₃ = [ 3   2 ]                              (4.25)
         [ −2  b ]           [ −2  b ]

then

    {u₀, u₁, u₂, u₃} = [ {1,3}  {2}   ]
                       [ {−2}   {a,b} ]                             (4.26)

Because the use of array products has been so widespread in our work with trajectory sets, we have adopted the simplifying convention of writing array product elements that are singleton sets as elements without set brackets. Thus Eq. 4.26 can be written

    {u₀, u₁, u₂, u₃} = [ {1,3}  2     ]
                       [ −2     {a,b} ]                             (4.27)

No confusion should result since context will make clear whether an object is an array product (and hence a set) or a single trajectory. Furthermore, this convention makes array products easier to read by cutting down on the number of brackets through which a reader must

wade.

Often a single array product cannot by itself represent all the trajectories within a single set. In that instance, the union of several array products must be employed to represent the trajectory set. Thus, for the general case, we write a trajectory set as a union of array products P₀, P₁, ..., P_p:

    {u₀, u₁, ..., u_n} = P₀ ∪ P₁ ∪ ... ∪ P_p                        (4.28)

                       = [R⁰_jk]_r×s ∪ [R¹_jk]_r×s ∪ ... ∪ [R^p_jk]_r×s    (4.29)

                       = [ R⁰₀₀  ...  R⁰₀s ]           [ R^p₀₀  ...  R^p₀s ]
                         [  ...            ]  ∪ ... ∪  [  ...              ]
                         [ R⁰_r0 ... R⁰_rs ]           [ R^p_r0 ... R^p_rs ]    (4.30)

                       = { [u_jk]_r×s | u_jk ∈ R^l_jk ⊆ Q_jk, l ∈ {0, 1, ..., p} }    (4.31)

For example, in addition to u₀, u₁, u₂, and u₃ of Eq. 4.24 above, let

    u₄ = [ 2   2 ]      u₅ = [ 2   2 ]                              (4.32)
         [ −2  a ]           [ −2  b ]

Then

    {u₀, u₁, u₂, u₃, u₄, u₅} = [ {1,3}  2     ]  ∪  [ 2   2     ]
                               [ −2     {a,b} ]     [ −2  {a,b} ]    (4.33)

In passing, note that this representation is not unique, e.g., the set in Eq. 4.33 above can also be written

    {u₀, u₁, u₂, u₃, u₄, u₅} = [ {1,2}  2     ]  ∪  [ 3   2     ]
                               [ −2     {a,b} ]     [ −2  {a,b} ]    (4.34)

A canonical form can be easily defined. For instance, the elements of the arrays can all be required to be singleton sets. Under this constraint, the representations of a given trajectory set are identical up to the ordering of the arrays. However, such canonical forms are of little practical value since the number of arrays quickly becomes large.

Two special sets should be mentioned. One is the empty set (or null set) φ, the set containing no elements. The other is the full set (or universe) * that represents the set containing all elements "of interest." For trajectory sets, this is the set of all possible states a variable can assume. For instance, with the state sets of Eq. 4.18,

    [ *  * ]
    [ *  * ]                                                        (4.35)

denotes the array product

    [ {1,2,3}  {0,2,4} ]
    [ {−1,−2}  {a,b,c} ]                                            (4.36)

Another frequently used quantity is the null array Φ. This is defined to be any array product that contains the empty set φ as an element. As an instance,

    [ 2   φ   ]
    [ −2  {a} ]                                                     (4.37)

A second symbol, ~, having functional properties identical to *, is sometimes employed; see

Section 3.3.7 [Constructing Performability Models].

We now define the operation of intersection on the class of array products. The intersection P₀ ∩ P₁ of two array products P₀ and P₁ is the element-by-element intersection of the two arrays. P₀ and P₁ must have the same dimensions.

    P₀ ∩ P₁ = [R⁰_jk]_r×s ∩ [R¹_jk]_r×s                             (4.38)

            = [ R⁰₀₀  ...  R⁰₀s ]     [ R¹₀₀  ...  R¹₀s ]
              [  ...            ]  ∩  [  ...            ]
              [ R⁰_r0 ... R⁰_rs ]     [ R¹_r0 ... R¹_rs ]           (4.39)

            = [ R⁰₀₀ ∩ R¹₀₀   ...  R⁰₀s ∩ R¹₀s  ]
              [  ...                            ]
              [ R⁰_r0 ∩ R¹_r0 ...  R⁰_rs ∩ R¹_rs ]                  (4.40)

            = [R⁰_jk ∩ R¹_jk]_r×s.                                  (4.41)

The following table defines the element intersection R₀ ∩ R₁:

    ∩   | a₁        φ    *
    ----+--------------------
    a₀  | a₀ ∩ a₁   φ    a₀
    φ   | φ         φ    φ
    *   | a₁        φ    *

where a₀ and a₁ are any sets and a₀ ∩ a₁ is standard set intersection. Thus,

    [ {1,2}  *     ]     [ {1,3}  {0,2} ]     [ {1,2} ∩ {1,3}   * ∩ {0,2} ]
    [ *      {a,b} ]  ∩  [ −2     *     ]  =  [ * ∩ {−2}        {a,b} ∩ * ]    (4.42)

                                           =  [ 1    {0,2} ]
                                              [ −2   {a,b} ]        (4.43)

Array product intersection is distributive over set union. For instance:

    [ {1,2}  1     ]     (  [ {1,2}  0     ]     [ {1,3}  {0,2} ]  )
    [ *      {a,b} ]  ∩  (  [ −2     {b,c} ]  ∪  [ −2     b     ]  )    (4.44)

    =  (  [ {1,2}  1     ]     [ {1,2}  0     ]  )     (  [ {1,2}  1     ]     [ {1,3}  {0,2} ]  )
       (  [ *      {a,b} ]  ∩  [ −2     {b,c} ]  )  ∪  (  [ *      {a,b} ]  ∩  [ −2     b     ]  )    (4.45)

    =  [ {1,2}  1 ∩ 0 ]     [ 1    1 ∩ {0,2} ]
       [ −2     b     ]  ∪  [ −2   b         ]                      (4.46)

    =  Φ ∪ Φ                                                        (4.47)

    =  Φ                                                            (4.48)

The complement P^c of an array product P is the set of all arrays not represented by P. This can be found as follows:

    P^c = (  [ R₀₀  R₀₁  ...  R₀n ]  )^c
          (  [ R₁₀  R₁₁  ...  R₁n ]  )
          (  [  ...              ]  )
          (  [ R_r0  ...    R_rn ]  )                               (4.49)

        = [ R̄₀₀  *  ...  * ]     [ *  R̄₀₁  ...  * ]           [ *  ...  *    ]
          [ *    *  ...  * ]  ∪  [ *  *    ...  * ]  ∪ ... ∪  [ *  ...  *    ]
          [  ...           ]     [  ...           ]           [ *  ...  R̄_rn ]    (4.50)

        = ∪ (a = 0 to r) ∪ (b = 0 to n) [A^ab_jk]                   (4.51)

where

    A^ab_jk = R̄_jk   if a = j and b = k,
              *       otherwise,                                    (4.52)

R_jk ⊆ Q_jk and R̄_jk = {q | q ∈ Q_jk and q ∉ R_jk}. Also, *^c = φ, φ^c = *, and ~^c = ~. To determine the complement of a trajectory set, De Morgan's Law can be used. Suppose V is a trajectory set composed of p array products. Then

    V^c = (P₁ ∪ P₂ ∪ ... ∪ P_p)^c                                   (4.53)
        = P₁^c ∩ P₂^c ∩ ... ∩ P_p^c.                                (4.54)

As an example,

    (  [ {1,2}  2     ]     [ {1,2}  2 ]  )^c     [ {1,2}  2     ]^c     [ {1,2}  2 ]^c
    (  [ −2     {a,b} ]  ∪  [ −2     c ]  )    =  [ −2     {a,b} ]    ∩  [ −2     c ]       (4.55)

    =  (  [ 3  * ]     [ *  {0,4} ]     [ *   * ]     [ *  * ]  )
       (  [ *  * ]  ∪  [ *  *     ]  ∪  [ −1  * ]  ∪  [ *  c ]  )
       ∩
       (  [ 3  * ]     [ *  {0,4} ]     [ *   * ]     [ *  {a,b} ]  )
       (  [ *  * ]  ∪  [ *  *     ]  ∪  [ −1  * ]  ∪  [ *  *     ]  )                       (4.56)

    =  [ 3  * ]     [ *  {0,4} ]     [ *   * ]
       [ *  * ]  ∪  [ *  *     ]  ∪  [ −1  * ]                                              (4.57)

We have found, however, that the evaluation of performability (Pr(γ⁻¹(a))) is often simpler if the γ-induced trajectory sets γ⁻¹(a) are represented as the union of disjoint sets.
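The array-product calculus above lends itself to a direct implementation. The sketch below (ours, not the METAPHOR code) models each cell as a Python frozenset over the state sets of Eq. 4.18, with cellwise intersection per Eq. 4.41, null-array detection, and the per-cell complement construction of Eqs. 4.49–4.52:

```python
from itertools import product as cartesian

# Sketch (not from the dissertation): a tiny model of the array-product
# calculus.  An array product is a tuple of cells, each a frozenset
# R_jk <= Q_jk; the full set * is represented by Q_jk itself; a trajectory
# set is a list (union) of array products.  Q holds the state sets of the
# 2x2 example of Eq. 4.18, flattened row by row.

Q = [frozenset({1, 2, 3}), frozenset({0, 2, 4}),
     frozenset({-1, -2}), frozenset({'a', 'b', 'c'})]

def intersect(p0, p1):
    """Cellwise intersection (Eq. 4.41); None encodes the null array."""
    cells = tuple(r0 & r1 for r0, r1 in zip(p0, p1))
    return None if any(not c for c in cells) else cells

def complement(p):
    """Complement of one array product (Eqs. 4.49-4.52): one term per
    cell, with that cell complemented and every other cell set to *."""
    terms = []
    for j, r in enumerate(p):
        rbar = Q[j] - r
        if rbar:
            terms.append(tuple(rbar if k == j else Q[k]
                               for k in range(len(p))))
    return terms

def expand(terms):
    """Expand a union of array products into an explicit trajectory set."""
    return {u for p in terms for u in cartesian(*p)}

# The array product of Eq. 4.26: [{1,3} {2}; {-2} {a,b}].
P = (frozenset({1, 3}), frozenset({2}),
     frozenset({-2}), frozenset({'a', 'b'}))
assert expand(complement(P)) == expand([tuple(Q)]) - expand([P])
```

The final assertion checks, by brute-force expansion, that the per-cell complement terms do cover exactly the trajectories outside P.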

The set of Eq. 4.55 above, for instance, can be written

    [ 3  * ]     [ {1,2}  {0,4} ]     [ {1,2}  2 ]
    [ *  * ]  ∪  [ *      *     ]  ∪  [ −1     * ]                  (4.58)

Representing a trajectory set as a union of disjoint array products has analogies with representing a Boolean function in disjunctive form. Thus, we see the possibility of employing a generalized version of Roth's cubical calculus (see [177] for instance) to handle sets other than {0, 1} and so manipulate trajectory sets. The next section discusses such an approach.

4.2. Discrete Functions and the Representation of Trajectory Sets

Many of the algorithms implemented in METAPHOR manipulate trajectory sets. These algorithms are described in Section 4.3.3 [Algorithms]. To state the algorithms concisely, we require a compact notation for denoting trajectory sets and the discrete functions which relate them. This section discusses discrete functions and the representation of trajectory sets. Much of the notation and development of this section is strongly influenced by Chapters 2, 3, and 8 of Davio, Deschamps, and Thayse [165].

4.2.1. Discrete Functions

A function

    f: U → A                                                        (4.59)

is a discrete function when U and A are finite, nonempty sets. Function f is an integer function when the elements of U and A are non-negative integers. For our applications, we will usually deal with sets that do not contain integers. However, the representation of these

sets within METAPHOR is based on integers. Hence, though METAPHOR internally deals exclusively with integer functions, much of the theory below is based on discrete functions. A set U will often be decomposed into a Cartesian product

    U = U₀ × U₁ × ... × U_s = X (j = 0 to s) U_j.                   (4.60)

In such cases, U will usually be written in boldface type (U) to emphasize its vector nature. U will often be further decomposed using two indexes:

    U = U₀₀ × U₀₁ × ... × U₀s × U₁₀ × U₁₁ × ... × U₁s × ... × U_r0 × ... × U_rs    (4.61)

      = X (j = 0 to r) X (k = 0 to s) U_jk.                         (4.62)

One may visualize such a product U as a "matrix"

    U = [U_jk]_r×s = [ U₀₀  U₀₁  ...  U₀s ]
                     [ U₁₀  U₁₁  ...  U₁s ]
                     [  ...               ]
                     [ U_r0  U_r1 ... U_rs ]                        (4.63)

Indeed, the description of the "trajectory set calculus" of [30] and of Section 4.1.2 [A Calculus of Trajectory Sets] utilizes such a matrix construction. We shall deal with functions

    f: U → X (j = 0 to r) A_j                                       (4.64)

whose domains are Cartesian products. Such functions are referred to as general discrete

functions. Such functions can be decomposed into a set of r+1 discrete functions

    f_j: U → A_j,   0 ≤ j ≤ r.                                      (4.65)

The function f_j is the jth projection of f and is often written Π_j f (see Section 4.1 [Trajectory Sets: Basic Notation and Operations]). Since the domain U and range A of a discrete function f are finite, one can describe the function by either exhaustively enumerating all the values f(u) ∈ A for all u ∈ U, or alternatively, by considering each a ∈ A and enumerating all values u ∈ U for which f(u) = a, i.e., by enumerating f⁻¹(a). Both approaches will be used. Section 4.2.2 [Discrete Functions and Capability Functions] below deals more fully with representations of discrete functions. First, let us review the motivation for our studying discrete functions.

4.2.2. Discrete Functions and Capability Functions

In the framework of discrete performability evaluations, the sets of concern are a set A (accomplishment set) of objects called accomplishment levels and sets U (trajectory spaces) of objects called trajectories. The trajectory spaces will usually be written as vectors or matrices, and to emphasize this, U will usually be written in boldface type. Several trajectory spaces U_c⁰, U_c¹, ..., U_c^m and U_b⁰, U_b¹, ..., U_b^m may be of concern. U_c^i (U_b^i) is the level-i composite (basic) trajectory space (see Section 3.3.8 [The Model Hierarchy]). The Cartesian product U^i = U_c^i × U_b^i is the level-i trajectory space. Regarding Cartesian products in this section, make the convention that if a trajectory space U is empty (U_b⁰ is always φ and, for 0 < i ≤ m, U_b^i may or may not be empty), then any reference to U in a Cartesian product is deleted. Thus, if either U_c^i or U_b^i is empty, then U^i is the other, non-empty, set. The following conditions hold: 1) U_b⁰ = φ; 2) for all 0 < i ≤ m, U_c^i and U_b^i may or may not be empty. The product of all basic

trajectory spaces of level-i and below

    Û_b^i = U_b^i × U_b^(i−1) × ... × U_b⁰                          (4.66)

          = X (j = 0 to i) U_b^j                                    (4.67)

is the level-i basic model trajectory space (Section 3.3.8 [The Model Hierarchy]). Û_b^m is called the basic model trajectory space. Note that Û_b^i can be defined recursively as follows:

    Û_b⁰ = U_b⁰;   Û_b^i = U_b^i × Û_b^(i−1),   0 < i ≤ m.          (4.68)

Any composite (basic) trajectory space U_c^i (U_b^i) may be decomposed

    U_c^i = X_j X_k U_cj,k^i                                        (4.69)

    U_b^i = X_j X_k U_bj,k^i                                        (4.70)

The values of indexes j and k delineate components and phases, respectively. The set U_cj,k^i (U_bj,k^i) is the level-i composite (basic) trajectory space of component j during phase k. Two types of functions interest us. The first type consists of functions of the form

    γ_i: U^i × Û_b^(i−1) → A,   0 ≤ i ≤ m.                          (4.71)

That is, γ_i is a function whose domain is the Cartesian product of the level-i trajectory space with all non-empty basic trajectory spaces between levels 0 and i−1, and whose codomain is A. γ_i is the level-i based capability function. The function γ_m is written γ and is called the

capability function. The second type of function is of the form

    κ_i: U^i → U_c^(i−1),   1 ≤ i ≤ m.                              (4.72)

κ_i is the level-i interlevel translation and maps level-i trajectories into level-(i−1) composite trajectories. U_c⁰ is identified with U⁰. Given a level-(i−1) component j and a level-(i−1) phase k, the projection

    Π_jk κ_i: U^i → U_cj,k^(i−1)                                    (4.73)

is the function mapping level-i trajectories into the level-(i−1) jkth composite trajectory space. The relation between the level-i capability function γ_i and the interlevel translations κ₁, ..., κ_i for levels 1 through i is as follows: Let

    u′_i ∈ U^i × Û_b^(i−1).                                         (4.74)

That is, u′_i = (u_i, u_(i−1), ..., u₁) where, for i > j > 0, u_j ∈ U_b^j. Then

    γ_i(u′_i) = γ₀(κ₁(κ₂( ... (κ_i(u_i), u_(i−1)) ... ), u₁)).      (4.75)

Recursively, for

    u′_i ∈ U^i × Û_b^(i−1),                                         (4.76)

u′_i = (u_i, u′_(i−1)) where u′_(i−1) ∈ Û_b^(i−1), and

    γ_i(u′_i) = γ_(i−1)(κ_i(u_i), u′_(i−1)).                        (4.77)

Thus, if the κ_i are known for all 1 ≤ i ≤ m, then one can recursively specify the γ_i. This specification is performed by listing for each a ∈ A the values γ_i⁻¹(a). Hence, for a ∈ A, (see

Eq. 3.23)

    γ₁⁻¹(a) = κ₁⁻¹(γ₀⁻¹(a))
    γ_i⁻¹(a) = ∪ { (κ_i⁻¹(u_c), u′_(i−1)) | (u_c, u′_(i−1)) ∈ γ_(i−1)⁻¹(a) }.    (4.78)

A tractable implementation of the above equation is the reason for the development of the trajectory manipulation algorithms of METAPHOR. (See the discussion of Section 3.4.3 [Algorithms for Calculating Trajectory Sets].)

4.2.3. Alternative Representations of Trajectory Sets

We wish to be able to represent or denote specific discrete functions using notation more compact and general than trajectory sets (Section 4.1 [Trajectory Sets: Basic Notation and Operations]). With such machinery, we can discuss the algorithms used to manipulate discrete functions and we can prove some useful properties, e.g., that trajectory sets can represent all discrete functions. Many methods of representing a discrete function are known. Section 4.2.3.1 [Some Tabular Representations] discusses some basic graphical descriptions and Section 4.2.3.2 [Representations Using Lattice Expressions] describes an algebraic representation using the lattice operations disjunction and conjunction. The former method gives insight into discrete functions, while the latter is the basis for the algorithms in METAPHOR. Other representation techniques, such as those using the ring operations ring sum and ring product and those using polynomials over a Galois field (see Davio, Deschamps, and Thayse [165] for a discussion of these topics) will not be discussed here. Consider the representation of a discrete function

    f: U → A                                                        (4.79)

where |U| = N + 1 and |A| = l + 1. There exists a one-to-one mapping (enumeration) of U to the integer sequence 0, 1, ..., N. Assume, without loss of generality, that

U = {0, 1, ..., N}. Similarly, take A = {0, 1, ..., l}. As a special case, if U = {0, 1}^M for some M < ∞ and A = {0, 1}, then f: {0, 1}^M → {0, 1} is a switching function. The properties and applications of switching functions are well understood (see, e.g., Miller [177], Kohavi [178], or Preparata and Yeh [179]).

4.2.3.1. Some Tabular Representations

The simplest representations of a discrete function are the tabular representations. Of these, the most elementary is a vector that enumerates the entire function; the vector is called the value vector of f:

    [f(0) f(1) ... f(N)] = [f(u)]_N or [f_u]_N,   u = 0, 1, ..., N.    (4.80)

The number of such functions is (l + 1)^(N+1). If U is a product³

    U = X (j = 0 to s) U_j                                          (4.81)

where the number of entries is

    N + 1 = ∏ (j = 0 to s) |U_j|,                                   (4.82)

then the function f can still be expressed as a value vector

    [f(u)]_N or [f_u]_N,   u ∈ X (j = 0 to s) U_j.                  (4.83)

³The development of the material in this section usually employs a single-dimensional product U = X_j U_j (see [165], for example). However, in the context of METAPHOR, U_j is itself a one-dimensional quantity: U_j = X_k U_jk. More generally, products of such two-dimensional trajectory spaces are often of concern, i.e., U = X (i = 0 to m) [ X (j = 0 to r_i) X (k = 0 to s_i) U_jk^i ]. In the following exposition of the theory, the single-dimension notation will be employed because it is easier to grasp. However, our applications will usually require more complex dimensioning.
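The value-vector representation can be sketched as follows (the function f below is a hypothetical example of ours, not one from the text):

```python
from itertools import product

# Sketch (not from the dissertation): the value vector of a discrete
# function f: U_0 x U_1 -> A, enumerated in lexicographical order of u.

U0, U1 = [0, 1, 2], [0, 1]
A = [0, 1, 2]

def f(u0, u1):
    # A hypothetical discrete function into A, chosen for illustration.
    return min(u0 + u1, 2)

value_vector = [f(u0, u1) for u0, u1 in product(U0, U1)]
print(value_vector)  # one entry per u in X U_j; here |U| = 3 * 2 = 6
```

Reordering the domain enumeration (e.g., Gray code rather than lexicographical) permutes the entries but denotes the same function, which is why a fixed convention is all that is required.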

There are (l + 1)^(N+1) such functions. The order in which the f_u appear in the vector [f_u]_N is arbitrary so long as the order follows some well specified convention. Convenient orderings include lexicographical order and Gray code order. [f_u]_N can be written in tabular form. If the ordering of the u is lexicographical, such a table is called a Veitch chart [180]; if the ordering is the Gray code, the table is called a Karnaugh chart [181]. For our purposes, none of the above techniques is appropriate for direct computer implementation. Instead, we shall employ an algebraic approach based on the lattice structure of discrete functions.

4.2.3.2. Representations Using Lattice Expressions

4.2.3.2.1. Lattice Polynomials

The concepts of cubical representation and cubical notation for a switching function [177] will now be extended to discrete functions of r variables. This extension is not straightforward, however. To understand why, consider the lattice⁴-theoretic basis of switching theory. In the more general development of lattice theory (see Gratzer [182], for instance), one usually 1) defines the concept of r-ary lattice polynomials,⁵ 2) shows that, for any distributive lattice A over which the variables take values, there is an obvious equivalence relation among the polynomials,⁶ and 3) shows that the equivalence classes form a distributive lattice P_r.⁷ This lattice has interesting properties, but does not denote all the functions from A^r to A (much less from U to A), a feature we must have. Suppose B is a Boolean lattice.⁸ Then, using the same approach as above, one 1) defines

⁴All lattices discussed in this chapter are presumed to be finite.
⁵Informally, a lattice polynomial is an expression (well-formed formula, see Section 4.2.3.2.4 [Lattice Expressions]) using ∧, ∨, variables x₀, x₁, ..., x_(r−1), and parentheses. For instance, x₀, (x₀ ∧ x₁), and (x₁ ∨ x₃) ∧ (x₂ ∧ x₀) are lattice polynomials.
⁶Two polynomials are equivalent if they represent the same function.
⁷P_r is the free distributive lattice on r+1 generators. P_r is finite.
⁸A Boolean lattice is a distributive and complemented lattice. A complemented lattice is one in which every element has at least one complement, i.e., for each element a, there is an ā such that a ∧ ā is the least element and a ∨ ā is the greatest element. A finite, distributive lattice B is complemented (and hence Boolean) if and only if B contains 2^r elements, r ≥ 0.

104 the concept of "r-ary Boolean polynomials,"9 2) shows that for the Boolean lattice B there is an equivalence relation among the polynomials, and 3) shows that the equivalence classes form a Boolean lattice B.10 Further, if B is the two element Boolean lattice {0, 1}, then every discrete function f:{0, 1}'-+(0, 1} is represented by one of the elements of B,. Hence, one can understand the reason for the use of boolean lattices in switching algebra, namely, be employed. 4.2.3.2.2. Th e L attice f Discrete functions Our approach is direct: Rather than constructing a distributive lattice of some special subset of functions in which we are interested, we shall instead construct a distributive lattice sAN of all the discrete functions. (Recall dEq. 4.82l Uji (4.82) i - 0 is the cardinality of U.) Of course, AN will be much larger and more unwieldly than either P' or B2,, but AN will contain all the functions that we need. (See MacLane and Birkhoff [183], Theorem 13, Chapter 14, for a succint development of the lattice of discrete functions.) Take 1: X US-+A, (4.84) o In a manner similar to lattice polynomials, eookln poely/iwh (or ooeksi espriesom) are written with the additional operators -, 0, and 1, e.g., sz (zo A zi), I V so The functions described by these polynomials are called oolkm. Iucetiosee iM s rritlk ower B. B2 is the free Boolean lattice on r+l generators. B2, is finite.

105 where the range A = {0, 1,..., I, and define a lattice (A, V, A, 0, 1}, where 0 is the least element, I is the greatest element, and the operations disjunction (join) V and conjunction (meet) A are defined as follows: for a,b E A, a b = min(a,b) and a V b = max(a, b). The lattice {A, V, A, 0, 1-1} is hence a chain, and importantly, distributive. Now consider the direct productll of N lattices {A, V, A, 0, l}. This is again a N lattice and is {AN, V N, A NY 0 N, IN}, where AN = X A. The lattice AN is distribui —0 tive since the direct product of distributive lattices is distributive (see, e.g., MacLane and Birkhoff 11831, p. 496). However, AN is not necessarily a complemented lattice, and it is this lack which keeps AN from necessarily being Boolean. Given two functions IfuiN and [g]9N, their conjunction and disjunction are the componentwise extensions [(A ^ )u]N I= IA 9uu N (4.85) [(/V g)JN = LU V 9u]N. (4.86) Also, for a E A, (a A )ui N = au A fuN (4.87) [(a V f)uN = [au V flN. (4.88) A point a of the lattice AN corresponds to the function f.: U - A described by the value vector la]fN. Thus, the set of discrete functions {f:U-+A} is isomorphic to the product lattice {AN, V N A N, 0 N, iN}. Note that AN can be large; for instance, if r = 3, U = 3, and IAI = 3, then ANI = 33 X s x = 327, or approximately 7 X 1012 functions. The Hasse diagram of the set of functions {f:{0, 1)2 —{0, 1}} (22 x 2 = 16 elements) is shown in Figure 4.1, and the Hasse diagram of the set of functions {f:{0, 1, 2}-({0, 1, 2}} (33 = 27 llThe direet Frodct of two lattices (A0o V o A o 0o,'o) and {Al, V l, A, 01 1} is the lattice {Ao0 x A, V ox. A ox 1, 0o x i Io x } where (o,l )Vo x i(Yo, Y) = (oV o x iyo Vox iYi).

[Figure 4.1: Hasse diagram]
Fig. 4.1 Lattice of the functions {f: {0,1}² → {0,1}}
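The finite lattice pictured in Fig. 4.1 can be enumerated directly. The following is a minimal Python sketch (names are illustrative, not from METAPHOR): each function f: {0,1}² → {0,1} is stored as its 4-entry value vector, and join and meet are the componentwise max and min of Eqs. 4.85-4.86.

```python
from itertools import product

# Each function f:{0,1}^2 -> {0,1} is its 4-entry value vector,
# one entry per domain point (Eqs. 4.85-4.86: componentwise join/meet).
functions = list(product([0, 1], repeat=4))
print(len(functions))  # 16, the element count of Fig. 4.1

def join(f, g):
    return tuple(max(a, b) for a, b in zip(f, g))

def meet(f, g):
    return tuple(min(a, b) for a, b in zip(f, g))

top, bottom = (1, 1, 1, 1), (0, 0, 0, 0)
assert all(join(f, top) == top and meet(f, bottom) == bottom for f in functions)
# distributivity holds, since it holds pointwise on the chain {0,1}:
f, g, h = (0, 1, 1, 0), (1, 1, 0, 0), (0, 0, 1, 1)
assert meet(f, join(g, h)) == join(meet(f, g), meet(f, h))
```

The same enumeration with three values per point yields the 27-element lattice of Fig. 4.2.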

elements) is shown in Fig. 4.2.

4.2.3.2.3. Lattice Exponentiation

We now extend the exponentiation of lattice elements (e.g., see MacLane and Birkhoff [183], p. 503, or Grätzer [182], p. 82). Let x = [x_j]_{r+1} = [x₀, x₁, ..., x_r] be a vector of r+1 variables such that x_j is a variable taking values in U_j. Then if C_j ⊆ U_j, define the lattice exponentiation (Davio, Deschamps, and Thayse [165]) x_j^(C_j) to be the function

    x_j^(C_j): U → A   (4.89)

where

    x_j^(C_j) = l if x_j ∈ C_j, and 0 otherwise.   (4.90)

Note that x_j^(U_j) = l and x_j^(∅) = 0. Also, note how the following expression evaluates: for a ∈ A,

    a ∧ x_j^(C_j) = a if x_j ∈ C_j, and 0 otherwise.   (4.91)

If C_j = {u_{j0}, u_{j1}, ..., u_{jn}}, write

    x_j^(C_j) = x_j^(u_{j0}, u_{j1}, ..., u_{jn}).   (4.92)

An example will be given below. We shall use lattice exponentiation and constant functions as building blocks to represent more complicated functions (called "cube functions"). Cube functions will then be used to represent yet more complex (indeed all) discrete functions.

4.2.3.2.4. Lattice Expressions

Define a lattice expression to be a well-formed expression¹² made of the following

¹² A well-formed expression (wfe) is an inductively defined string of constants (0, 1, ..., l), variables

[Figure 4.2: Hasse diagram]
Fig. 4.2 Lattice of the functions {f: {0,1,2} → {0,1,2}}
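Lattice exponentiation (Eq. 4.90) is a two-valued indicator scaled to the top of the chain. A minimal sketch, with the values of l and U_j chosen only for illustration:

```python
# Eq. 4.90: x_j^(Cj) yields the greatest element l of A when x_j lies in Cj,
# and 0 otherwise.  L_TOP and the domain size are illustrative choices.
L_TOP = 3  # greatest element l of A = {0,...,3}

def exponentiation(C):
    """Return the function x -> l if x in C else 0 (Eq. 4.90)."""
    return lambda x: L_TOP if x in C else 0

e = exponentiation({0, 2})
print([e(x) for x in range(5)])  # [3, 0, 3, 0, 0]

# x^(U) is identically l and x^(empty set) identically 0 (noted after Eq. 4.90):
U = set(range(5))
assert all(exponentiation(U)(x) == L_TOP for x in U)
assert all(exponentiation(set())(x) == 0 for x in U)
```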

symbols:
a) the constants are the lattice elements 0, 1, ..., l;
b) the variables are the x_j^(C_j);
c) the operators are the two binary lattice operators ∨ and ∧.
(Note the lack of a complement operator; complements do not necessarily exist in the lattice A^N.) The operator ∧ takes precedence over ∨. Sometimes in an expression the operator ∧ will be omitted, e.g., a ∧ b will be written ab. Also, matching parentheses "(" and ")" will sometimes be embedded within an expression to denote a local change in the order of operator precedence. Every lattice expression describes a discrete function, since by choosing any value u ∈ U, substituting u into the expression, and evaluating, a value in A can be obtained. Further, as shall be seen, every discrete function can be represented by at least one lattice expression. As an example of representing a discrete function by lattice expressions, consider the discrete function

    f: U₀ × U₁ → A   (4.93)

where U₀ = {0, 1, 2, 3, 4}, U₁ = {0, 1, 2}, and A = {0, 1, 2, 3}. A tabular

(x₀, x₁, ..., x_r) and operators (θ₀, θ₁, ..., θ_s). The rules of construction are: 1) Any constant is a wfe. 2) Any variable is a wfe. 3) If E₀, E₁, ..., E_r are wfe's and θ is an (r+1)-ary operator, then θ(E₀, E₁, ..., E_r) is a wfe. (For a binary operator, the wfe is often written E₀ θ E₁.) 4) The only wfe's are those obtained by finite applications of rules 1), 2), and 3).

representation for f is presented below:

                 x₀
            0  1  2  3  4
        0   1  3  3  0  2
    x₁  1   0  3  3  2  1
        2   1  0  3  2  3

f can be represented by the following lattice expression:

    F(x₀, x₁) = 1x₀^(0)x₁^(0,2) ∨ 1x₀^(4)x₁^(1) ∨ 2x₀^(3)x₁^(1,2) ∨ 2x₀^(4)x₁^(0)
                ∨ 3x₀^(1,2)x₁^(0,1) ∨ 3x₀^(2,4)x₁^(2).   (4.94)

Note that F is composed of the disjunction of six functions ("cube functions"), the first being 1x₀^(0)x₁^(0,2). In turn, each of those functions is composed of the conjunction of three functions: a constant function and two lattice exponentiations; e.g., 1x₀^(0)x₁^(0,2) is the conjunction of the constant 1 and the lattice exponentiations x₀^(0) and x₁^(0,2). Using trajectory set notation (Section 4.1 [Trajectory Sets: Basic Notation and Operations]), the same function would be represented by the preimages:

    f⁻¹(0) = [0 1] ∪ [1 2] ∪ [3 0]
    f⁻¹(1) = [0 {0,2}] ∪ [4 1]   (4.95)
    f⁻¹(2) = [3 {1,2}] ∪ [4 0]
    f⁻¹(3) = [{1,2} {0,1}] ∪ [{2,4} 2]

Note the similarity between Eq. 4.94 and Eq. 4.95; the entries of the array products of Eq.

4.95 are simply the exponents of Eq. 4.94, the entries of f⁻¹(0) being all the entries not represented by f⁻¹(1), f⁻¹(2), and f⁻¹(3). Indeed, lattice expressions are a more general method of representation of discrete functions than are trajectory sets, in the sense that every trajectory set can be written as a lattice expression similar to Eq. 4.94, but not every lattice expression can be written as a trajectory set. We shall find that every lattice expression denotes a discrete function (i.e., represents a point of A^N), but not every trajectory set represents a complete function. Yet, as shall be seen, every discrete function can be represented by an expression similar to Eq. 4.94, so, using transformations of the kind used to obtain Eq. 4.95 from Eq. 4.94, trajectory sets are sufficiently powerful to represent every discrete function. There is a countably infinite number of different lattice expressions that represent function f. For example, f can also be described by

    G(x₀, x₁) = (0 ∨ x₀^(1,2,3,4) ∨ x₁^(0,2))(0 ∨ x₀^(0,2,3,4) ∨ x₁^(0,1))(0 ∨ x₀^(0,1,2,4) ∨ x₁^(1,2))
                (1 ∨ x₀^(1,2,3,4) ∨ x₁^(1))(1 ∨ x₀^(0,1,2,3) ∨ x₁^(0,2))   (4.96)
                (2 ∨ x₀^(0,1,2,4) ∨ x₁^(0))(2 ∨ x₀^(0,1,2,3) ∨ x₁^(1,2)).

4.2.3.2.5. Classes and Properties of Lattice Expressions

Let us now formalize the above discussion. We begin by considering a more general function than simple lattice exponentiation. A cube function (Davio, Deschamps, and Thayse [165]) is a lattice expression c(x) of the form

    c(x) = a ∧ ⋀_{j=0}^{r} x_j^(C_j),  a ∈ A,  C_j ⊆ U_j.   (4.97)

a is the weight of the cube. The elements of the sets C_j are entries of weight a. In particular,

    c(x) = a if x_j ∈ C_j for all j, and 0 otherwise.   (4.98)

If f is a switching function, i.e., f: {0,1}^(r+1) → {0,1}, then cube functions correspond to implicants in switching theory. Since we shall not be dealing with any other class of implicants (see [165] for other classes), we shall freely use the term "implicant" for cube function. If each C_j contains exactly one element, then c(x) is a join-irreducible element of the lattice A^N. If c(x) is such a join-irreducible element and a = 1, then c(x) is an atom of A^N. An atom is hence a cube function

    1 ∧ ⋀_{j=0}^{r} x_j^(u_j),  u_j ∈ U_j.   (4.99)

An atom c(x) is 1 if x_j = u_j for all j, and is 0 otherwise. If at least one set C_j = ∅, then c(x) = 0 for all x, and so c(x) is the minimum element (called the empty cube) of A^N. This corresponds to a null array of the trajectory set calculus. Let c₀(x) and c₁(x) be cubes

    c₀(x) = a₀ ∧ ⋀_{j=0}^{r} x_j^(C_{0j}),  c₁(x) = a₁ ∧ ⋀_{j=0}^{r} x_j^(C_{1j}),   (4.100)

where a₀, a₁ ∈ A. Then the conjunction of the two cubes is

    c₀(x) ∧ c₁(x) = (a₀ ∧ a₁) ∧ ⋀_{j=0}^{r} x_j^(C_{0j} ∩ C_{1j}).   (4.101)

The duals of the above concepts follow. An anticube function is a lattice expression

d(x) of the form

    d(x) = a ∨ ⋁_{j=0}^{r} x_j^(D_j),  a ∈ A,  D_j ⊆ U_j.   (4.102)

a is the weight of the anticube. Now,

    d(x) = l if x_j ∈ D_j for some j, and a otherwise.   (4.103)

In switching theory, anticube functions correspond to implicates. If each D̄_j = U_j − D_j contains exactly one element, then d(x) is a meet-irreducible element of the lattice A^N. A meet-irreducible element with a = l−1 is an antiatom of A^N:

    (l−1) ∨ ⋁_{j=0}^{r} x_j^(D_j),  D̄_j = {u_j},  u_j ∈ U_j.   (4.104)

An antiatom d(x) is l−1 if x_j = u_j for all j, and is l otherwise. If at least one set D_j = U_j, then d(x) = l for all x, and so d(x) is the maximum element (called the empty anticube) of A^N. This corresponds to a "full set" of the trajectory set calculus. Let d₀(x) and d₁(x) be anticubes

    d₀(x) = a₀ ∨ ⋁_{j=0}^{r} x_j^(D_{0j}),  d₁(x) = a₁ ∨ ⋁_{j=0}^{r} x_j^(D_{1j}),   (4.105)

where a₀, a₁ ∈ A. Then the disjunction of the two anticubes is

    d₀(x) ∨ d₁(x) = (a₀ ∨ a₁) ∨ ⋁_{j=0}^{r} x_j^(D_{0j} ∪ D_{1j}).   (4.106)
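Cube functions make expressions such as Eq. 4.94 directly executable. A minimal Python sketch (not part of METAPHOR): each cube contributes its weight when both exponent sets contain the argument, and the expression is the join (max) of the cube values. The table is the tabular definition of f given for Eq. 4.93.

```python
# The disjunctive normal form F of Eq. 4.94, evaluated cube by cube.
CUBES = [  # (weight a, C0, C1) for a * x0^(C0) * x1^(C1)
    (1, {0}, {0, 2}), (1, {4}, {1}),
    (2, {3}, {1, 2}), (2, {4}, {0}),
    (3, {1, 2}, {0, 1}), (3, {2, 4}, {2}),
]

def F(x0, x1):
    # join of the cube values; a cube yields its weight only when both
    # coordinates fall inside its exponent sets (Eq. 4.98)
    return max((a for a, C0, C1 in CUBES if x0 in C0 and x1 in C1), default=0)

TABLE = {0: [1, 3, 3, 0, 2],   # row x1 = 0, columns x0 = 0..4
         1: [0, 3, 3, 2, 1],
         2: [1, 0, 3, 2, 3]}
assert all(F(x0, x1) == TABLE[x1][x0] for x0 in range(5) for x1 in range(3))
print("F matches the tabular definition of f")
```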

The negation (as opposed to "complement") f̄ of a discrete function f is

    [f̄_u] = [l − f_u].   (4.107)

The minus sign is arithmetic subtraction. Negation relates the above concepts of cube and anticube functions. Clearly, the negation of f̄ is f, and De Morgan's laws apply, i.e., for a set of functions {f_h}:

    ¬(⋁_h f_h(x)) = ⋀_h f̄_h(x),  ¬(⋀_h f_h(x)) = ⋁_h f̄_h(x).   (4.108)

The negation of a cube function is an anticube and conversely:

    ¬(a ∧ ⋀_{j=0}^{r} x_j^(C_j)) = (l−a) ∨ ⋁_{j=0}^{r} x_j^(C̄_j),
    ¬(a ∨ ⋁_{j=0}^{r} x_j^(C_j)) = (l−a) ∧ ⋀_{j=0}^{r} x_j^(C̄_j).   (4.109)

Also, the following identities can be shown:

    x_j^(C_j) ∧ x_j^(C′_j) = x_j^(C_j ∩ C′_j)   (4.110)
    x_j^(C_j) ∨ x_j^(C′_j) = x_j^(C_j ∪ C′_j)   (4.111)
    ⋀_j x_j^(C_j) ∧ ⋀_j x_j^(C′_j) = ⋀_j x_j^(C_j ∩ C′_j)   (4.112)

    ⋁_j x_j^(C_j) ∨ ⋁_j x_j^(C′_j) = ⋁_j x_j^(C_j ∪ C′_j)   (4.113)
    l ∧ x_j^(C_j) = x_j^(C_j)   (4.114)
    0 ∧ x_j^(C_j) = 0   (4.115)
    0 ∨ x_j^(C_j) = x_j^(C_j)   (4.116)

Distributivity of ∧ over ∨ and of ∨ over ∧ holds:

    x_j^(C_j) ∧ (x_k^(C_k) ∨ x_m^(C_m)) = (x_j^(C_j) ∧ x_k^(C_k)) ∨ (x_j^(C_j) ∧ x_m^(C_m)),
    x_j^(C_j) ∨ (x_k^(C_k) ∧ x_m^(C_m)) = (x_j^(C_j) ∨ x_k^(C_k)) ∧ (x_j^(C_j) ∨ x_m^(C_m)).   (4.117)

4.2.3.2.6. Normal Forms of Lattice Expressions

Let F be a lattice expression representing the function f. If F is a disjunction of cube functions, i.e., if

    F(x) = ⋁_h c_h(x),   (4.118)

then F is a disjunctive normal form of f. Similarly, if G is a lattice expression representing f and G is a conjunction of anticube functions,

    G(x) = ⋀_h d_h(x),   (4.119)

then G is a conjunctive normal form of f. For example, Eq. 4.94 is in disjunctive normal form. The trajectory set notation is a variation of the disjunctive normal form; nothing corresponds to the conjunctive normal form in the trajectory set calculus. Every lattice expression may be transformed into an equivalent disjunctive normal form and an equivalent

conjunctive normal form by repeated application of the distributive laws (Eq. 4.117). A notation that is close to trajectory sets is the "cubical representation." Represent a cube function

    a ∧ ⋀_{j=0}^{r} x_j^(C_j)   (4.120)

by the vector of one scalar and r+1 sets: [a C₀ C₁ ⋯ C_r]. The disjunctive normal form of f,

    F(x) = ⋁_{h=0}^{p} w_h ∧ ⋀_{j=0}^{r} x_j^(C_{hj})   (4.121)

(where w_h ∈ A), being the disjunction of p+1 cube functions, is then represented by the array of p+1 scalars and (p+1)(r+1) sets

    F = [w_h C_{hj}]_{h:p × j:r} =
        [ w₀ C₀₀ C₀₁ ⋯ C₀ᵣ
          w₁ C₁₀ C₁₁ ⋯ C₁ᵣ
          ⋮
          w_p C_{p0} C_{p1} ⋯ C_{pr} ].   (4.122)

Note that the above matrix can be decomposed into a matrix C of the sets C_{hj} and a vector W of the w_h:

    C = [C_{hj}]_{p × r} =
        [ C₀₀ C₀₁ ⋯ C₀ᵣ
          C₁₀ C₁₁ ⋯ C₁ᵣ
          ⋮
          C_{p0} C_{p1} ⋯ C_{pr} ]   (4.123)

    W = [w_h]_{h:p} = [w₀ w₁ ⋯ w_p].   (4.124)

It is convenient to represent the subset C_{hj} of U_j as a binary vector

    (C_{hj}) = [b_g]_{g:N_j} = [b₀ b₁ ⋯ b_{N_j−1}]   (4.125)

where N_j is the number of elements in U_j and

    b_g = 1 if u_{jg} ∈ C_{hj}, and 0 otherwise.   (4.126)

Then the representation F of f given in Eq. 4.122 can be written

    F = [w_h [b_g]_{g:N_j}]_{h:p × j:r}   (4.127)
      = [ w₀ [b_g]_{g:N₀} [b_g]_{g:N₁} ⋯ [b_g]_{g:N_r}
          w₁ [b_g]_{g:N₀} [b_g]_{g:N₁} ⋯ [b_g]_{g:N_r}
          ⋮ ].   (4.128)

For instance, the expression of Eq. 4.94,

    F(x₀, x₁) = 1x₀^(0)x₁^(0,2) ∨ 1x₀^(4)x₁^(1) ∨ 2x₀^(3)x₁^(1,2) ∨ 2x₀^(4)x₁^(0)
                ∨ 3x₀^(1,2)x₁^(0,1) ∨ 3x₀^(2,4)x₁^(2),   (4.94)

would have the following cubical representation:

    F = [ 1 [10000] [101]
          1 [00001] [010]
          2 [00010] [011]   (4.129)
          2 [00001] [100]
          3 [01100] [110]
          3 [00101] [001] ]

or, decomposed as in Eqs. 4.123 and 4.124,

    C = [ [10000] [101]
          [00001] [010]
          [00010] [011]   (4.130)
          [00001] [100]
          [01100] [110]
          [00101] [001] ]

and

    W = [1 1 2 2 3 3].   (4.131)

4.2.3.2.7. Canonical Forms of Lattice Expressions

As mentioned above, a discrete function can be represented in many different ways. Even when restricted to disjunctive normal forms (Eq. 4.118), an infinite number of expressions representing a given function can be written. With the machinery of the preceding section, the possibilities of specifying canonical or "standard" forms of discrete functions can now be discussed. Also, it will be shown that every discrete function can be represented by at least one lattice expression. First, define a minterm to be a join-irreducible element of the lattice with weight greater than 0, i.e., a minterm is a cube function of the form

    a ∧ ⋀_{j=0}^{r} x_j^(u_j)   (4.132)

where u = (u₀, u₁, ..., u_r), 0 ≤ u_j < N_j, is a point in the lattice. Note that there is a minterm for each non-zero weight a at each point in the lattice, and hence there are N(|A| − 1) distinct minterms (where N = ∏_{j=0}^{r} N_j). We have the following important

Theorem 4.1: (Discrete function representation) Shannon [184]. Every discrete function f can be represented by a lattice expression in disjunctive normal form, each of whose cube functions is a minterm. Further, this lattice expression is unique up to a permutation of the minterms.

Proof: Construct the expression

    F(x) = ⋁_{u ∈ U} [ f(u) ∧ ⋀_{j=0}^{r} x_j^(u_j) ],  u = (u₀, u₁, ..., u_r),  0 ≤ u_j < N_j,   (4.133)

and note that each cube function inside the brackets is a minterm. For a given u = (u₀, u₁, ..., u_r), at most one cube function is non-zero, specifically

    f(u) ∧ x₀^(u₀) ∧ x₁^(u₁) ∧ ⋯ ∧ x_r^(u_r),   (4.134)

which has weight f(u). Thus the expression F(x) assumes the value f(u) at the point u. Every function f can be so represented since the construction of F does not depend on the nature of f. To show uniqueness, assume there is a second qualifying expression G(x). Suppose G(x) contains a minterm c(x) [where c(u) ≠ 0] not in F(x). Then clearly G(u) ≠ F(u), and either F(x) or G(x) does not represent f. Next, suppose F(x) contains all the minterms of G(x) and, in addition, the minterm c(x) [where c(u) ≠ 0]. Then again G(u) ≠ F(u), and once more either G(x) or F(x) fails. Hence F(x) is unique. ∎
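The construction in the proof of Theorem 4.1 can be sketched in a few lines of Python (names invented for illustration): one minterm f(u) ∧ x₀^(u₀) ∧ x₁^(u₁) per domain point, whose join recovers f. The function used is the one of Eq. 4.93.

```python
from itertools import product

U0, U1 = range(5), range(3)

def f(x0, x1):  # the function of Eq. 4.93, in tabular form
    table = {0: [1, 3, 3, 0, 2], 1: [0, 3, 3, 2, 1], 2: [1, 0, 3, 2, 3]}
    return table[x1][x0]

# Eq. 4.133: a minterm (weight, point) for every point where f is non-zero;
# zero-weight terms contribute nothing to the join and are omitted.
minterms = [(f(u0, u1), (u0, u1)) for u0, u1 in product(U0, U1) if f(u0, u1) > 0]

def F_canonical(x0, x1):
    # at most one minterm is non-zero at any point (Eq. 4.134)
    return max((w for w, u in minterms if u == (x0, x1)), default=0)

assert all(F_canonical(x0, x1) == f(x0, x1) for x0 in U0 for x1 in U1)
print(len(minterms), "minterms")  # 12 minterms: the non-zero points of f
```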

The expression F(x) of the proof is the canonical disjunctive form of function f. Two lattice expressions are equivalent if and only if they represent the same discrete function. Suppose a lattice expression G(x) representing function f is equivalent to the canonical disjunctive form F(x) of f. Then F(x) is said to be the canonical disjunctive form of the expression G(x). Any two expressions are equivalent if and only if they have the same canonical disjunctive form. A form of Shannon's expansion theorem provides one method of obtaining the canonical disjunctive form representing a discrete function from any expression G(x) representing that function:

Theorem 4.2: (Shannon's first expansion theorem) A discrete function f represented by an expression G(x) can be expressed

    F(x) = ⋁_{u₀=0}^{N₀−1} [ x₀^(u₀) ∧ G(u₀, x₁, ..., x_r) ].   (4.135)

Applying Eq. 4.135 repeatedly with respect to every other variable x₁, x₂, ..., x_r results in the canonical disjunctive form. Algorithm 2.1 of Davio, Deschamps, and Thayse [165] describes a simpler and faster procedure for obtaining the canonical disjunctive form. The dual concepts for canonical conjunctive normal forms can also be described. Our interest in these forms vis-à-vis applications such as METAPHOR is somewhat academic, since these forms are not practical in use. This impracticality results from the size of these normal forms. For example, in the worst case, a function f that is identically l everywhere (i.e., f(u) = l for all u) would require N = ∏_{j=0}^{r} N_j cube functions to be represented in canonical disjunctive form. However, the canonical disjunctive form does demonstrate that every function can be represented by a lattice expression, and hence every discrete capability function γ and interlevel translation κ can be denoted by a set of trajectory sets. Further, any two lattice expressions can be compared for equivalence. Thus, our machinery is "complete" in the sense that everything we could want to represent can be represented.
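Shannon's first expansion (Eq. 4.135) can be checked in miniature. In this sketch G is any Python function standing in for a lattice expression; the expansion guards each cofactor G(u₀, x₁) by the exponentiation x₀^(u₀):

```python
U0, U1, L_TOP = range(5), range(3), 3

def G(x0, x1):  # the function f of Eq. 4.93, treated as an opaque expression
    table = {0: [1, 3, 3, 0, 2], 1: [0, 3, 3, 2, 1], 2: [1, 0, 3, 2, 3]}
    return table[x1][x0]

def expand_x0(G):
    # Eq. 4.135: join over u0 of x0^(u0) ^ G(u0, x1); the guard evaluates to
    # the top element l exactly when x0 == u0, so only one term survives
    return lambda x0, x1: max(
        min(L_TOP if x0 == u0 else 0, G(u0, x1)) for u0 in U0)

F = expand_x0(G)
assert all(F(x0, x1) == G(x0, x1) for x0 in U0 for x1 in U1)
print("expansion along x0 preserves the function")
```

Repeating the expansion along x₁ would leave one guarded constant per domain point, i.e., the canonical disjunctive form of Theorem 4.1.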

All that remains, then, is to describe and implement a set of algorithms which allow us to manipulate trajectory sets (lattice expressions) efficiently so that we can obtain our goal, viz., γ⁻¹(a) (Eq. 4.78). Such a set of algorithms is presented in Section 4.3.3 [Algorithms], and a discussion of a computer implementation is given in Section 4.4 [METAPHOR-A Performability Modeling and Evaluation Tool].

4.2.3.2.8. Exponentiation and Composition of Lattice Expressions

Exponentiation of lattice expressions and composition of lattice expressions are closely related and follow naturally from the definition of lattice exponentiation (Eq. 4.90). Consider the discrete functions

    f: U₀ → U,  g: U → A,   (4.136)

where U₀ = U₀ × U₁ × ⋯ × U_r, U = {0, 1, ..., L}, and A = {0, 1, ..., l}. Let x = [x_j]_{r+1} = [x₀, x₁, ..., x_r] be a vector of r+1 variables such that x_j is a variable taking values in U_j, and let x be a variable taking values in U. Then if C_j ⊆ U_j and C ⊆ U, there are functions

    x_j^(C_j): U₀ → U   (4.137)
    x^(C): U → A   (4.138)

where

    x_j^(C_j) = L if x_j ∈ C_j, and 0 otherwise,   (4.139)

    x^(C) = l if x ∈ C, and 0 otherwise.   (4.140)

Then the composition of x_j^(C_j) and x^(C) yields the exponentiation of a lattice exponentiation:

    (x_j^(C_j))^(C): U₀ → A   (4.141)

where

    (x_j^(C_j))^(C) = l if {x_j ∈ C_j and L ∈ C} or {x_j ∉ C_j and 0 ∈ C}, and 0 otherwise.   (4.142)

Note that (x_j^(C_j))^(U) = l and (x_j^(C_j))^(∅) = 0. Exponentiation of a cube function, exponentiation of an anticube function, and exponentiation of a lattice expression are similar extensions. Thus, for a ∈ A,

    a ∧ (x_j^(C_j))^(C) = a if {x_j ∈ C_j and L ∈ C} or {x_j ∉ C_j and 0 ∈ C}, and 0 otherwise.   (4.143)

The composition of two lattice expressions F(x) and G(x) representing f and g (Eq. 4.136) is then G(F(x)), i.e., G with all occurrences of x replaced by F(x).

4.3. Calculation of Trajectory Sets

We now wish to state a set of algorithms which will enable us to compute the performability of discrete performance variable models. These algorithms have all been implemented in the METAPHOR package (see Sections 3.4.4 and 4.4 [METAPHOR-A Performability Modeling and Evaluation Tool]). In the development below, the names of the METAPHOR functions implementing specific algorithms and steps of algorithms are noted.
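Eq. 4.142 is a direct case analysis and can be transcribed as such. A sketch, with the values of L and l chosen only for illustration:

```python
# Composition of the two exponentiations of Eqs. 4.139-4.140: the inner one
# maps into U = {0..L}, the outer one into A = {0..l}; the composite fires
# when x_j in C_j and L in C, or when x_j not in C_j and 0 in C (Eq. 4.142).
L_BIG, l_top = 4, 3   # greatest elements of U and A, illustrative choices

def inner(Cj):
    return lambda xj: L_BIG if xj in Cj else 0      # x_j^(Cj), values in U

def outer(C):
    return lambda x: l_top if x in C else 0         # x^(C), values in A

def composed(Cj, C):
    return lambda xj: outer(C)(inner(Cj)(xj))       # (x_j^(Cj))^(C)

g = composed({1, 2}, {0, 4})
# both branches of Eq. 4.142 fire here, since C contains both 0 and L:
assert all(g(xj) == l_top for xj in range(5))
print([composed({1, 2}, {4})(xj) for xj in range(5)])  # [0, 3, 3, 0, 0]
```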

We begin (Section 4.3.1 [Representation of Discrete Functions Within METAPHOR]) by examining the internal representation that METAPHOR uses to represent discrete functions; this representation is a hybrid of the trajectory set notation (Section 4.1 [Trajectory Sets: Basic Notation and Operations]) and lattice expressions (Section 4.2.3.2 [Representations Using Lattice Expressions]). Then, in Section 4.3.2 [Notation], some specific notation for representing the discrete functions of METAPHOR is introduced. Finally, Section 4.3.3 [Algorithms] discusses the algorithms.

4.3.1. Representation of Discrete Functions Within METAPHOR

METAPHOR's internal representation of discrete functions is based on the disjunctive normal form of lattice expressions (Eq. 4.118); the actual representation is exemplified by the array of binary vectors of Eq. 4.127. However, the form of METAPHOR's representations differs slightly from Eq. 4.129, and METAPHOR places some restrictions on the freedom of the representations. The differences are described in this section. First, the variables in METAPHOR have two indexes (attribute and phase), so a cube function (Eq. 4.97) is written

    a ∧ ⋀_{j=0}^{r} ⋀_{k=0}^{s} x_{jk}^(C_{jk}).   (4.144)

The exponents of the cube function, (C_{jk}), can be denoted by a matrix

    [C_{jk}]_{r × s} =
        [ C₀₀ C₀₁ ⋯ C₀ₛ
          C₁₀ C₁₁ ⋯ C₁ₛ
          ⋮
          C_{r0} C_{r1} ⋯ C_{rs} ]   (4.145)

where C_{jk} ⊆ U_{jk}, and the weight of the cube function denoted by [C_{jk}]_{r × s} is understood to be a. Thus we see the relation between array products (Section 4.1 [Trajectory Sets: Basic

Notation and Operations]) and cube functions of the form Eq. 4.144, viz.,

    [C_{jk}]_{r × s} = [(C_{jk})]_{r × s},   (4.146)

where the weight a is implicit. In the remainder of this chapter, we use the terms "array products" and "exponents of a cube function" interchangeably. With this correspondence in mind, define the "complement" of a cube function to be analogous to the complement of an array product (see Eq. 4.49). Let c(x) be a cube function described by [(C_{jk})]_{r × s}. The complement of c(x) is the lattice expression c(x)ᶜ constructed from the complement of [(C_{jk})]_{r × s}, i.e., [(C_{jk})]ᶜ_{r × s}. Note that

    [(C_{jk})]ᶜ_{r × s} = ⋃_{a=0}^{r} ⋃_{b=0}^{s} [(C^{ab}_{jk})]_{r × s}   (4.147)

where

    C^{ab}_{jk} = C̄_{ab} if j = a and k = b, and U_{jk} otherwise,   (4.148)

and

    C̄_{jk} = U_{jk} − C_{jk}.   (4.149)

Of course, we would like the various arrays in c(x)ᶜ to be disjoint; Eq. 4.147 can also be written

    [(C_{jk})]ᶜ_{r × s} = ⋃_{a=0}^{r} ⋃_{b=0}^{s} [(C′^{ab}_{jk})]_{r × s}   (4.150)

where

    C′^{ab}_{jk} = C̄_{ab} if j = a and k = b; C_{jk} if j < a, or if j = a and k < b; and U_{jk} otherwise,   (4.151)

and the [(C′^{ab}_{jk})]_{r × s} are disjoint. Using the properties of Section 4.2.3.2.5 [Classes and Properties of Lattice Expressions], the complement of a cube function can be written

    c(x)ᶜ = ⋁_{a=0}^{r} ⋁_{b=0}^{s} [ a ∧ ⋀_{j=0}^{r} ⋀_{k=0}^{s} x_{jk}^(C′^{ab}_{jk}) ].   (4.152)

Note that the complement of a cube function is not necessarily a cube function. The terminology "complement" is somewhat abusive, especially since the complement of a complemented cube function is not defined. However, the definition follows naturally from array products and has a natural interpretation, namely: the complement of a cube function c(x) with weight a is the lattice expression yielding value a exactly when c(x) does not, and conversely. A disjunctive normal form representation of a function f is (see Eq. 4.118)

    F(x) = ⋁_{h=0}^{p} [ w_h ∧ ⋀_{j=0}^{r} ⋀_{k=0}^{s} x_{jk}^(C_{(jk)h}) ]   (4.153)

where there are p+1 terms in the representation, w_h ∈ A is the weight of the hth cube function, and (C_{(jk)h}) is the exponent for the jkth term of the hth cube function (implicant). The array products corresponding to the exponents (C_{(jk)h}) will be written [C_{(jk)h}]_{r × s}. The inverse of f can now be characterized in terms of sets of (C_{(jk)h}) (or of [C_{(jk)h}]_{r × s}) as follows: f⁻¹(a) is denoted by

    {(C_{(jk)h}) | w_h = a}  or  {[C_{(jk)h}]_{r × s} | w_h = a}.   (4.154)

We shall generally use the first form, i.e., {(C_{(jk)h}) | w_h = a}. In representing discrete functions, the second difference between METAPHOR and the form of Eq. 4.127 is that in METAPHOR the cube function of weight 0 must be explicitly included in the characterization of a function. Thus, while every lattice expression denotes a discrete function, not every set of array products denotes a function. Requiring specification

of the cube functions with weight 0 would be unnecessary if the same convention employed by lattice expressions were used, namely, that any u ∈ U which does not appear in the representation has value 0. With such a convention, the cube functions with weight 0 could be determined by the conjunction of the complements (see Eq. 4.152) of each non-zero-weight cube function (a form of De Morgan's law):

    ⋀_{h=1}^{p} c_h(x)ᶜ.   (4.155)

However, computationally it is easier to store these trajectories than to recompute them every time it is necessary to refer to them. Further, by forcing the user of METAPHOR to enter these trajectories, a form of error checking can be implemented. Thus, if the user forgets to include a point in the cube functions associated with a given weight a, that point will not be automatically assigned weight 0; instead, an error message will be generated, and the missing points can be computed using Eq. 4.155. The third difference between METAPHOR's internal representation of discrete functions and that of Eq. 4.127 is that, in METAPHOR, a point u ∈ U cannot appear in more than one array product (exponent of a cube function). Such a restriction is not inherent in lattice expressions, since, if u appears in the exponents of two different cube functions, the resulting disjunction would still be a function. Notice that this restriction allows another form of error checking by METAPHOR. METAPHOR operates on the premise that if the user includes the same point u in two different cube functions, then that point has not been properly considered. Upon detection of such an overlap, METAPHOR generates an error message and can determine which point has been considered twice. Finally, in METAPHOR the lattice expression in disjunctive normal form is factored such that each weight a_n ∈ A is written only once, i.e., if p_n is the number of cube functions having weight a_n in the lattice expression (hence p = Σ_{n=0}^{l} p_n), then the representation of Eq.

4.153 can be written

    F(x) = ⋁_{n=0}^{l} [ a_n ∧ ⋁_{h=0}^{p_n} ⋀_{j=0}^{r} ⋀_{k=0}^{s} x_{jk}^(C_{(jk)nh}) ]   (4.156)

where (C_{(jk)nh}) is the jkth exponent of the hth cube function having weight a_n. The representations used by both the APL and C versions of METAPHOR use the form of Eq. 4.156. From Eq. 4.156, the function f can be denoted by a four-dimensional (ragged¹³) array (compare the following discussion with Eqs. 4.123-4.128)

    C = [(C_{(jk)nh})]_{jknh: r × s × l × p_n}   (4.157)

along with a vector

    p = [p₀ p₁ ⋯ p_l].   (4.158)

Actually, each (C_{(jk)nh}) is also a vector

    (C_{(jk)nh}) = [b_g]_{g: N_{jk}} = [b₀, b₁, ..., b_{N_{jk}−1}]   (4.159)

where N_{jk} is the number of elements in U_{jk} and

    b_g = 1 if u_{jkg} ∈ (C_{(jk)nh}), and 0 otherwise   (4.160)

[see Eq. 4.125], and so the computer representation of f is a five-dimensional array

    C = [b_{jknhg}]_{jknhg: r × s × l × p_n × N_{jk}}.   (4.161)

¹³ We define a ragged array to be a structure [a_{ij}], i ∈ I, j ∈ J_i, of elements a indexed by i and j taking values in the sets I and J_i, respectively. In the simplest case, I = {0, 1, ..., m}, J = {0, 1, ..., n}, and the resulting structure is the array [a_{ij}]_{m × n}. The indexes i and j can be functions of one another, though we shall use only the case where the maximum value of one index is dependent on a second, relatively independent index, i.e., [a_{ij}], i ∈ I, j ∈ J_i, where I = {0, 1, ..., m} and J_i = {0, 1, ..., n_i}. Such an array will be denoted by [a_{ij}]_{i:m × j:n_i}. The concept generalizes in the obvious way to higher-dimensional structures.
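The factored storage of Eqs. 4.156-4.161 can be sketched in miniature (invented names; this is not METAPHOR's actual data layout): cubes are grouped by weight, the vector p counts cubes per weight, and each exponent set is held as a 0/1 vector as in Eqs. 4.159-4.160.

```python
def bits(C, n):
    # Eqs. 4.159-4.160: subset of an n-element coordinate set as a 0/1 vector
    return [1 if g in C else 0 for g in range(n)]

cubes = [  # (weight, C0 in U0 = {0..4}, C1 in U1 = {0..2}), from Eq. 4.95
    (0, {0}, {1}), (0, {1}, {2}), (0, {3}, {0}),
    (1, {0}, {0, 2}), (1, {4}, {1}),
    (2, {3}, {1, 2}), (2, {4}, {0}),
    (3, {1, 2}, {0, 1}), (3, {2, 4}, {2}),
]
ragged = {a: [(bits(C0, 5), bits(C1, 3)) for w, C0, C1 in cubes if w == a]
          for a in range(4)}
p = [len(ragged[a]) for a in range(4)]
print(p)             # [3, 2, 2, 2], the vector of Eq. 4.163
print(ragged[1][0])  # ([1, 0, 0, 0, 0], [1, 0, 1])
```

Grouping by weight gives exactly the quick per-weight access that Eq. 4.156 is designed for.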

The representation F of f given in Eq. 4.122 can then be written

    F = [w_n [b_{jknhg}]_{g: N_{jk}}]_{jknh: r × s × p_n}.   (4.162)

In APL, the length of the vectors is the size of the largest set U_{jk}. In the programming language C, the arrays are implemented more efficiently as structures of pointers and data. As an illustration, consider the trajectory set of Eq. 4.95, i.e.,

    f⁻¹(0) = [0 1] ∪ [1 2] ∪ [3 0]
    f⁻¹(1) = [0 {0,2}] ∪ [4 1]   (4.95)
    f⁻¹(2) = [3 {1,2}] ∪ [4 0]
    f⁻¹(3) = [{1,2} {0,1}] ∪ [{2,4} 2]

The vector p is

    p = [3 2 2 2]   (4.163)

since there are three cube functions denoting f⁻¹(0) (i.e., having weight 0), and two cube functions representing each of f⁻¹(1), f⁻¹(2), and f⁻¹(3) (i.e., with weights 1, 2, and 3). "Stacking" the arrays of Eq. 4.95,

    [0 1]          [1 2]       [3 0]
    [0 {0,2}]      [4 1]
    [3 {1,2}]      [4 0]                  (4.164)
    [{1,2} {0,1}]  [{2,4} 2]

the array C is constructed as below:

    C = [ [ [10000] [010],  [01000] [001],  [00010] [100] ]
          [ [10000] [101],  [00001] [010] ]
          [ [00010] [011],  [00001] [100] ]             (4.165)
          [ [01100] [110],  [00101] [001] ] ]

Note that the computer memory required to store Eq. 4.165 is the same as that of Eq. 4.129 (if Eq. 4.129 also includes the cubes with weight 0), but the indexing of Eq. 4.165 is more versatile (in the sense of quick access to the hth term with value a_n; see Eq. 4.156) when the domain has two coordinates. As a somewhat more complex (in terms of dimensioning) example, consider the model hierarchy of Furchtgott [89]; Meyer, Ballance, Furchtgott, and Wu [30]; Meyer [38]; and Meyer [29]. Specifically, we take the function γ₁ of Table 2 in Furchtgott [89], which also appears as Table 3 of Meyer, Ballance, Furchtgott, and Wu [30]:

    [Eq. 4.166: the preimages γ₁⁻¹(a₀) through γ₁⁻¹(a₄), each a union of two-coordinate array products; see Table 2 of Furchtgott [89].]

The vector p is

    p = [3 4 1 2 3].   (4.167)

Again "stacking" the arrays of Eq. 4.166 gives the thirteen array products, grouped by accomplishment level:

    [Eq. 4.168: the stacked array products of Eq. 4.166; three for a₀, four for a₁, one for a₂, two for a₃, and three for a₄.]

the array C is constructed as below:

    [Eq. 4.169: the binary-vector encoding of the stacked arrays of Eq. 4.168, in the form of Eq. 4.161.]

In conclusion, note that with the combined lattice expression/trajectory set notation discussed in this section, f⁻¹(a) can be compactly represented in general discussions using sets of C (as in Eq. 4.153) and, for applications, can be efficiently represented in computer memory using arrays of the form of Eq. 4.165. As shall be seen in Section 4.3 [Calculation of Trajectory Sets] (and as might be inferred from such results as Eqs. 3.4 and 4.78), the inverses of discrete functions are important quantities (both conceptually and computationally) in the determination of γ⁻¹.

4.3.2. Notation

There are two broad classes of discrete functions with which we shall deal in the remainder of this chapter, namely γ_i (the level-i based capability function; see Sections 3.3.4 [The Capability Function γ], 3.3.8 [The Model Hierarchy], and 4.2.2 [Discrete Functions and Capability Functions]) and κ_i (the level-i interlevel translation; see Section 3.3.8 [The Model Hierarchy]). Below, we describe how these functions are represented by the hybrid lattice expression/trajectory set notation discussed in Section 4.3.1 [Representation of Discrete Functions Within METAPHOR]. Generally, the inverse f⁻¹ of a function f will be denoted by sets of array products (see Eq. 4.154):

    f⁻¹(a) is denoted by {(C_{(jk)nh}) | a_n = a}   (4.170)

where (in the "extended" disjunctive normal form of Eq. 4.156)

    F(x) = ⋁_{n=0}^{l} [ a_n ∧ ⋁_{h=0}^{p_n} ⋀_{j=0}^{r} ⋀_{k=0}^{s} x_{jk}^(C_{(jk)nh}) ],   (4.171)

there are p = Σ_{n=0}^{l} p_n terms in the representation, and (C_{(jk)nh}) is the hth set of entries of weight a_n for the jkth term. Briefly reviewing Sections 3.3.8 [The Model Hierarchy], 4.1 [Trajectory Sets: Basic Notation and Operations], and 4.2.2 [Discrete Functions and Capability Functions], decompose the level-i trajectory space U^i into composite and basic trajectory spaces:

    U^i = U_c^i × U_b^i.   (4.172)

Let U^{i′} denote the level-i trajectory space along with all the basic trajectory spaces of higher-

level models:

    U^{i′} = U^i × U_b^{i−1} × ⋯ × U_b^0   (4.173)

where U^{0′} = U⁰ and U^{m′} = U. U^{i−1} (i = 1, 2, ..., m) is related to U^i by the level-i interlevel translation

    κ_i: U^i → U_c^{i−1}  (i = 1, 2, ..., m)   (4.174)
    κ₀: U⁰ → A.   (4.175)

Now

    U_c^i = [U_{jk}]_{jk: r_i × s_i}   (4.176)

(see Eq. 4.14), i.e., the level-i composite trajectory space has r_i component coordinates and s_i time coordinates (observations). Also (see Eq. 4.5),

    U^i = [U_{jk}]_{jk: (r_i + p_i) × s_i}.   (4.177)

Using the projection function π_{jk} (see Section 4.1.1 [Notation and Terminology]),

    κ_{i,jk} = π_{jk} ∘ κ_i  (i = 1, 2, ..., m),   (4.178)

κ_i can be decomposed (see also Eq. 4.16) into

    κ_{i,jk}: U^i → U_{jk}.   (4.179)

We represent κ₀ and the other κ_i as follows:

    κ₀(u) = ⋁_{n=0}^{l} [ a_n ∧ ⋁_{h=0}^{p_n} ⋀_{j=0}^{r₀+p₀} ⋀_{k=0}^{s₀} u_{jk}^(K⁰_{(jk)nh}) ]   (4.180)

and, for the j₀k₀th projection of κ_i,

    κ_{i,j₀k₀}(u) = ⋁_{n=0}^{N_{j₀k₀}−1} [ w_n ∧ ⋁_{h=0}^{p_n} ⋀_{j=0}^{r_i+p_i} ⋀_{k=0}^{s_i} u_{jk}^(K^i_{(jk)nh}) ]   (4.181)

(i = 1, 2, ..., m). (K⁰_{(jk)nh}) denotes the hth array product (exponents of the cube functions) associated with accomplishment level a_n of κ₀, while (K^i_{(jk)nh}) represents the hth array product associated with value w_n of the j₀k₀th projection of κ_i. Eqs. 4.180 and 4.181 can be written as the four-dimensional ragged arrays (see Eq. 4.157)

    K⁰ = [(K⁰_{(jk)nh})]_{jknh: (r₀+p₀+1) × s₀ × l × p_n}   (4.182)
    K^i = [(K^i_{(jk)nh})]_{jknh: (r_i+p_i+1) × s_i × N_{j₀k₀} × p_n}  (i = 1, 2, ..., m),   (4.183)

along with vectors

    p^{κ₀} = [p_n]_{n: l} = [p₀ p₁ ⋯ p_l]   (4.184)
    p^{κ_i} = [p_n]_{n: N_{j₀k₀}} = [p₀ p₁ ⋯ p_{N_{j₀k₀}−1}].   (4.185)

As in Eq. 4.154, the inverses κ₀⁻¹(a) and (κ_{i,j₀k₀})⁻¹(w) can be written

    κ₀⁻¹(a) = { [(K⁰_{(jk)nh})]_{jk: (r₀+p₀+1) × s₀} | a_n = a }   (4.186)
    (κ_{i,j₀k₀})⁻¹(w) = { [(K^i_{(jk)nh})]_{jk: (r_i+p_i+1) × s_i} | w_n = w }.   (4.187)
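The coordinatewise decomposition of an interlevel translation (Eqs. 4.178-4.179) and the preimage sets of its projections (Eq. 4.187) can be sketched concretely. The translation below is invented purely for illustration:

```python
# A toy level-1 trajectory space and a made-up interlevel translation kappa
# with two composite coordinates; kappa_{1,jk} = pi_jk o kappa_1 (Eq. 4.178).
U1 = [(a, b) for a in range(3) for b in range(2)]

def kappa(u):
    a, b = u
    return (min(a, 1), a + b)   # two composite coordinates (illustrative)

def projection(jk):
    return lambda u: kappa(u)[jk]   # the jk-th projection of kappa

# Eq. 4.187 in miniature: group trajectories by the value of one projection.
inverse = {}
for u in U1:
    inverse.setdefault((1, projection(1)(u)), set()).add(u)
print(sorted(inverse[(1, 2)]))  # [(1, 1), (2, 0)]: coordinate 1 maps to 2
```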

The level-i-based capability function is

    γ_i: U^{i′} → A   (4.188)

defined inductively as follows: If i = 0 and u ∈ U⁰, then

    γ₀(u) = κ₀(u),   (4.189)

and, for i > 0, if (u, u′) ∈ U^{i′} where u ∈ U^i and u′ ∈ U_b^{i−1} × ⋯ × U_b^0, then

    γ_i(u, u′) = γ_{i−1}(κ_i(u), u′).   (4.190)

If i = m, then γ_m = γ. We shall represent the level-0 capability function γ₀ (Eq. 4.189) as follows:

    γ₀(u) = ⋁_{n=0}^{l} [ a_n ∧ ⋁_{h=0}^{p_n^{γ₀}} ⋀_{j=0}^{r₀+p₀} ⋀_{k=0}^{s₀} u_{jk}^(Γ⁰_{(jk)nh}) ]   (4.191)

where (Γ⁰_{(jk)nh}) denotes the hth array product (exponents of the cube functions) of accomplishment level a_n. γ₀ can then be written as the four-dimensional (ragged) array (see Eq. 4.157)

    Γ⁰ = [(Γ⁰_{(jk)nh})]_{jknh: (r₀+p₀+1) × s₀ × l × p_n^{γ₀}}   (4.192)

along with a vector

    p^{γ₀} = [p_n^{γ₀}]_{n: l} = [p₀^{γ₀} p₁^{γ₀} ⋯ p_l^{γ₀}].   (4.193)

The inverse γ₀⁻¹ can be represented

    γ₀⁻¹(a) = { [(Γ⁰_{(jk)nh})]_{jk: (r₀+p₀+1) × s₀} | a_n = a }.   (4.194)

It will be convenient to write γ₀ (Eq. 4.191) so that its basic and composite projections

are clear (see Eq. 4.176):

    γ₀(u) = ⋁_{n=0}^{l} [ a_n ∧ ⋁_{h=0}^{p_n^{γ₀}} ( ⋀_{j=0}^{r₀} ⋀_{k=0}^{s₀} u_{jk}^(Γ⁰ᶜ_{(jk)nh}) ∧ ⋀_{j=0}^{p₀} ⋀_{k=0}^{s₀} u_{jk}^(Γ⁰ᵇ_{(jk)nh}) ) ].   (4.195)

In a manner similar to Eq. 4.192, γ₀ can then be denoted by the array

    Γ⁰ = [Γ⁰ᶜ Γ⁰ᵇ] = [ [(Γ⁰ᶜ_{(jk)nh})]_{jknh: r₀ × s₀ × l × p_n^{γ₀}}  [(Γ⁰ᵇ_{(jk)nh})]_{jknh: p₀ × s₀ × l × p_n^{γ₀}} ].   (4.196)

The vector p^{γ₀} is still as in Eq. 4.193. Then, if (u, u′) ∈ U^{1′} (Eq. 4.173), from Eqs. 4.179 and 4.190:

    γ₁(u, u′) = ⋁_{n=0}^{l} [ a_n ∧ ⋁_{h=0}^{p_n^{γ₀}} ( ⋀_{j=0}^{r₀} ⋀_{k=0}^{s₀} (κ_{1,jk}(u))^(Γ⁰ᶜ_{(jk)nh}) ∧ ⋀_{j=0}^{p₀} ⋀_{k=0}^{s₀} u′_{jk}^(Γ⁰ᵇ_{(jk)nh}) ) ].   (4.197)

Eq. 4.197 appears quite complex; concisely, the equation specifies a collection of unions and intersections of level-1 array products (exponents of cube functions). In Section 4.3.3.2 [Intersection Trees], Eq. 4.197 will be discussed in detail. The particular sets which are operated upon are dictated by γ₀ and κ₁. When those operations are complete, Eq. 4.197 can be written in reduced form:

$\gamma_1(u, u') = \bigvee_{n=0}^{l} a_n \wedge \bigvee_{h=1}^{p_n^1} \left[ \bigwedge_{j=0}^{r_1+p_1+1} \bigwedge_{k=0}^{s_1} u_{jk}^{((\Gamma^{(11)}_{jk})_{nh})} \right] \wedge \left[ \bigwedge_{j=r_0+1}^{r_0+p_0+1} \bigwedge_{k=0}^{s_0} u'_{jk}{}^{((\Gamma^{(10)}_{jk})_{nh})} \right]$  (4.198)

where $(\Gamma^{(11)}_{jk})_{nh}$ denotes the $h$th array product (exponents of the cube functions) of accomplishment level $a_n$ which are associated with level-1 trajectories, and $(\Gamma^{(10)}_{jk})_{nh}$ denotes the $h$th array product associated with level-0 basic trajectories. $\Gamma^1$ can then be written as the (ragged) array

$\Gamma^1 = [\Gamma^{11} \; \Gamma^{10}] = [\,[[(\Gamma^{(11)}_{jk})_{nh}]] : (r_1 + p_1 + 1) \times s_1 \times l \times p_n^1 \;\; [[(\Gamma^{(10)}_{jk})_{nh}]] : (r_0 + p_0 + 1) \times s_0 \times l \times p_n^1\,]$  (4.199)

along with a vector

$p^{\gamma_1} = [p_n^1]_n = [p_0^1 \; p_1^1 \; \cdots \; p_l^1]$  (4.200)

The inverse $\gamma_1^{-1}$ can be written (see Eq. 4.154)

$\gamma_1^{-1}(a) = \{ [(\Gamma^{(11)}_{jk})_{nh}]_h : (r_1 + p_1 + 1) \times s_1,\; [(\Gamma^{(10)}_{jk})_{nh}]_h : (r_0 + p_0 + 1) \times s_0 \mid a_n = a \}$  (4.201)

Continuing iteratively (see Eq. 4.190) and letting $(u, u') \in U^i$ $(i = 1, 2, \ldots, m)$, we obtain for level-$i$:

$\gamma_i(u, u') = \bigvee_{n=0}^{l} a_n \wedge \bigvee_{h=1}^{p_n^{i-1}} \bigwedge_{j=0}^{r_{i-1}} \bigwedge_{k=0}^{s_{i-1}} \left\{ \bigvee_{n_1 : w_{n_1} \in (\Gamma^{(i-1)}_{jk})_{nh}} \bigvee_{h_1=1}^{p_{n_1}^{jk}} \bigwedge_{j_1=0}^{r_i+p_i+1} \bigwedge_{k_1=0}^{s_i} u_{j_1 k_1}^{((K^{(i)}_{jk})_{n_1 h_1})} \right\} \wedge \bigwedge_{e=0}^{i-1} \bigwedge_{j=r_e+1}^{r_e+p_e+1} \bigwedge_{k=0}^{s_e} u'_{jk}{}^{((\Gamma^{((i-1)e)}_{jk})_{nh})}$  (4.202)

In reduced form:

$\gamma_i(u, u') = \bigvee_{n=0}^{l} a_n \wedge \bigvee_{h=1}^{p_n^i} \left[ \bigwedge_{j=0}^{r_i+p_i+1} \bigwedge_{k=0}^{s_i} u_{jk}^{((\Gamma^{(ii)}_{jk})_{nh})} \right] \wedge \left[ \bigwedge_{e=0}^{i-1} \bigwedge_{j=r_e+1}^{r_e+p_e+1} \bigwedge_{k=0}^{s_e} u'_{jk}{}^{((\Gamma^{(ie)}_{jk})_{nh})} \right]$  (4.203)

where $(\Gamma^{(ii)}_{jk})_{nh}$ denotes the $h$th array product (exponents of the cube functions) of accomplishment level $a_n$ which are associated with level-$i$ trajectories, and $(\Gamma^{(ie)}_{jk})_{nh}$ denotes the $h$th array product associated with level-$e$ basic trajectories. $\Gamma^i$ can then be written as the (ragged) array

$\Gamma^i = [\Gamma^{ii} \; \Gamma^{i(i-1)} \; \cdots \; \Gamma^{i0}] = [\,[[(\Gamma^{(ii)}_{jk})_{nh}]] : (r_i + p_i + 1) \times s_i \times l \times p_n^i \;\; \cdots \;\; [[(\Gamma^{(i0)}_{jk})_{nh}]] : (r_0 + p_0 + 1) \times s_0 \times l \times p_n^i\,]$  (4.204)

along with a vector

$p^{\gamma_i} = [p_n^i]_n = [p_0^i \; p_1^i \; \cdots \; p_l^i]$.  (4.205)

Finally, the inverse $\gamma_i^{-1}$ can be written (see Eq. 4.154)

$\gamma_i^{-1}(a) = \{ [(\Gamma^{(i)}_{jk})_{nh}]_h : (r_i + p_i + 1) \times s_i \mid a_n = a \}$  (4.206)

Detailed algorithms for computing $\Gamma^i$ are discussed in Section 4.3.3 [Algorithms].

4.3.3. Algorithms

4.3.3.1. High Level Algorithms

Recall (Section 3.3.6 [Solving for the Performability]) that to obtain the probabilistic

description of the accomplishment set A, the following two-step procedure can be employed:

Algorithm 3.2b: (Algorithm for obtaining the performability of a system)
METAPHOR Function: metadiscrete

1) For each (measurable14) set $B \subseteq A$ of accomplishment levels, determine
   $\gamma^{-1}(B) = \{ u \mid \gamma(u) \in B \}$,  (3.4)
   i.e., the set of all trajectories $u$ that result in an accomplishment level in the set $B$. [commandbuild]
2) Determine the probability of the set of trajectories $\gamma^{-1}(B)$. [commandeval]

For the present, let us concern ourselves with Step 1), viz., finding $\gamma^{-1}(B)$. Since $A$ is finite, to specify $\mathrm{Prob}(B)$ it suffices to specify $\mathrm{Prob}(a)$ for each $a \in A$; then

$\mathrm{Prob}(B) = \sum_{a \in B} \mathrm{Prob}(a)$.  (4.207)

In terms of trajectory sets, Algorithm 3.2b can be written

14 Since A is finite, every subset of A is finite, and so every $B \subseteq A$ is measurable.

Algorithm 3.2c: (Algorithm for obtaining the performability of a system)
METAPHOR Function: metadiscrete

1) For each $a \in A$, determine $\gamma^{-1}(a)$, i.e., the $[(\Gamma^{(i)}_{jk})_{nh}]$ of Eq. 4.206. [commandbuild]
2) For each $a \in A$, determine the probability of the set of trajectory sets $\gamma^{-1}(a)$. [commandeval]

Note that the iteration "for each $a \in A$" has been distributed so that the entire capability function $\gamma^{-1}$ is determined [step 1)] before any probability calculations are made [step 2)]. Finding $\gamma^{-1}(a)$ [step 1) of Algorithm 3.2c] will be done in a top-down manner. (See Section 3.3.8 [The Model Hierarchy] for details on the model hierarchy employed.) Beginning with the level-0-based capability function, we have (see also Eqs. 3.22 and 3.4)

$\gamma_0^{-1}(a) = \kappa_0^{-1}(a)$,  (4.208)

$\gamma_i^{-1}(a) = \bigcup_{(u, u') \in \gamma_{i-1}^{-1}(a)} (\kappa_i^{-1}(u), u'), \quad i > 0$  (4.209)

where $(\kappa_i^{-1}(u), u') = \{ (v, u') \mid \kappa_i(v) = u \}$. The algorithm can thus be stated iteratively:
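The two steps of Algorithm 3.2c, together with Eq. 4.207, can be sketched for a toy finite model as follows. The names `invert` and `performability`, and the explicit set representation of trajectory sets, are illustrative assumptions only; METAPHOR manipulates symbolic cube-function arrays rather than enumerated sets.

```python
# Hypothetical miniature of Algorithm 3.2c (names and data layout are ours).

def invert(fn, domain):
    """Build fn^{-1} as a dict: value -> set of domain points."""
    inv = {}
    for u in domain:
        inv.setdefault(fn(u), set()).add(u)
    return inv

def performability(inv_gamma, prob, B):
    """Prob(B) = sum of Prob(a) over a in B (Eq. 4.207), where Prob(a) is the
    probability of the trajectory set gamma^{-1}(a)."""
    return sum(sum(prob[u] for u in inv_gamma.get(a, ())) for a in B)

# Toy system: trajectories 0..3, capability gamma, uniform trajectory probabilities.
gamma = lambda u: "ok" if u < 3 else "fail"
inv = invert(gamma, range(4))           # step 1): build the whole inverse first
p = {u: 0.25 for u in range(4)}
print(performability(inv, p, {"ok"}))           # 0.75
print(performability(inv, p, {"ok", "fail"}))   # 1.0
```

Note that, as in the text, the entire inverse capability map is built once before any probabilities are evaluated.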

Algorithm 4.2: (Algorithm for constructing $\gamma^{-1}$)
METAPHOR Function: commandbuild

1) Build the top levels of the hierarchy: [commandbuild]
   a) Specify the accomplishment set A. [getacclev]
   b) Specify level-0, i.e., $U^0$ (see Section 3.4 [Models Having Finite Performance Variables]). [getlevel0]
   c) Specify level-1, i.e., $U^1$. [getlevel1]
   d) In terms of trajectory sets, specify $\gamma_0^{-1}$ ($\kappa_0^{-1}$), i.e., the inverse of the level-0 capability function. This is done by stating the $[(\Gamma^{(0)}_{jk})_{nh}]$ of Eq. 4.192 and the $p_n$ of Eq. 4.193. [getlcarray]
   e) Specify $\kappa_1^{-1}$, i.e., the inverse of the level-1 to level-0 interlevel translation. This is done by stating the $[(K^{(1)}_{jk})_{nh}]$ of Eq. 4.182 and the $p_n$ of Eq. 4.184. [getkarray]
   f) Using $\gamma_0^{-1}$ and $\kappa_1^{-1}$, calculate $\gamma_1^{-1}$, i.e., the inverse of the level-1 capability function (Eq. 4.208, and more specifically, Eq. 4.206). [intarraysets]
2) Build each of the remaining levels of the hierarchy, i.e., for $i = 2$ to $m$: [commandnext]
   a) Specify level-$i$, i.e., $U^i$. [getlevel1]
   b) Specify $\kappa_i^{-1}$, i.e., the inverse of the level-$i$ to level-$(i-1)$ interlevel translation. This is done by stating the $[(K^{(i)}_{jk})_{nh}]$ of Eq. 4.183 and the $p_n^{jk}$ of Eq. 4.185. [getkarray]
   c) Using $\gamma_{i-1}^{-1}$ and $\kappa_i^{-1}$, calculate $\gamma_i^{-1}$, i.e., the inverse of the level-$i$ capability function (Eq. 4.209). [intarraysets]

Most of Algorithm 4.2 consists of the relatively straightforward actions of inputting data and checking the consistency of that data. The computational heart of the algorithm is Steps
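The compositional steps 1)f) and 2)c) can be sketched as follows, under the simplifying assumption that trajectory sets are explicit Python sets and that the retained lower-level basic components are ignored; `compose_inverse` and the example maps are hypothetical stand-ins for the symbolic computation of Eq. 4.209.

```python
# Illustrative sketch only: METAPHOR computes this composition symbolically
# on cube-function arrays, not on enumerated sets.

def compose_inverse(inv_gamma_prev, inv_kappa):
    """gamma_i^{-1}(a): collect kappa_i^{-1}(u) for every level-(i-1)
    trajectory u already known to map into accomplishment level a."""
    inv = {}
    for a, us in inv_gamma_prev.items():
        inv[a] = set()
        for u in us:
            inv[a] |= inv_kappa.get(u, set())
    return inv

# gamma_0^{-1} over level-0 trajectories 0..2; kappa_1^{-1} lifts each of them
# to the level-1 trajectories that translate down to it.
inv_g0 = {"ok": {0, 1}, "fail": {2}}
inv_k1 = {0: {"a"}, 1: {"b", "c"}, 2: {"d"}}
inv_g1 = compose_inverse(inv_g0, inv_k1)
print(inv_g1["ok"])    # the union {'a', 'b', 'c'}
```

This mirrors the order of Algorithm 4.2: each $\gamma_i^{-1}$ is fully built from $\gamma_{i-1}^{-1}$ and $\kappa_i^{-1}$ before the next level is examined.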

1)f) and 2)c). Each of the functions in Algorithm 4.2 will be discussed in detail below.

4.3.3.2. Intersection Trees

Algorithm 4.3: (Basic algorithm for constructing $\gamma_i^{-1}$)
METAPHOR Function: intarraysets

For each $a \in A$ (for $n = 0$ to $l$):
1) Consider each level-$(i-1)$ cube function associated with $\gamma_{i-1}^{-1}(a)$, recursively calculating $\gamma_i^{-1}(a)$, i.e., the set of level-$i$ array products that map into $a$. (For $h = 1$ to $p_n^{i-1}$: find $[(\Gamma^{(i)}_{jk})_{nh}]_h : (r_i + p_i + 1) \times s_i$ such that $a_n = a$ (Eq. 4.206).) [buildtree]
2) Reduce (if possible) the results of step 1) to obtain a smaller set for Eq. 4.206. [reducelcarray]

Step 1) works by constructing a tree of partial array product intersections. Recall Eq. 4.202:

$\gamma_i(u, u') = \bigvee_{n=0}^{l} a_n \wedge \bigvee_{h=1}^{p_n^{i-1}} \bigwedge_{j=0}^{r_{i-1}} \bigwedge_{k=0}^{s_{i-1}} \left\{ \bigvee_{n_1 : w_{n_1} \in (\Gamma^{(i-1)}_{jk})_{nh}} \bigvee_{h_1=1}^{p_{n_1}^{jk}} \bigwedge_{j_1=0}^{r_i+p_i+1} \bigwedge_{k_1=0}^{s_i} u_{j_1 k_1}^{((K^{(i)}_{jk})_{n_1 h_1})} \right\} \wedge \bigwedge_{e=0}^{i-1} \bigwedge_{j=r_e+1}^{r_e+p_e+1} \bigwedge_{k=0}^{s_e} u'_{jk}{}^{((\Gamma^{((i-1)e)}_{jk})_{nh})}$  (4.202)

As can be seen from Eq. 4.202, each level-$(i-1)$ array product [denoted by the $(\Gamma^{(i-1)}_{jk})_{nh}$ in the

exponent of the middle line] must be replaced, [level-$(i-1)$] componentwise, with the corresponding cube functions in $\kappa_i$ [denoted by the terms within the outer set of braces in the middle line, i.e.,

$\bigvee_{n_1 : w_{n_1} \in (\Gamma^{(i-1)}_{jk})_{nh}} \bigvee_{h_1=1}^{p_{n_1}^{jk}} \bigwedge_{j_1=0}^{r_i+p_i+1} \bigwedge_{k_1=0}^{s_i} u_{j_1 k_1}^{((K^{(i)}_{jk})_{n_1 h_1})}$ ].  (4.210)

These componentwise replacements generate level-$i$ array products, which in turn can be (level-$i$) componentwise intersected. Rather than doing the intersections from scratch for each new level-$i$ array product which we calculate, we shall "telescope" the intersections: first, we take one of the cube functions involving level-$(i-1)$ projection (0, 0); then we take one of the cube functions involving level-$(i-1)$ projection (0, 1), intersect it with the previous set, and save it (if the intersection is non-null). We continue taking one cube function from each level-$(i-1)$ projection, intersecting it with the previously derived cube function, and saving the result until we have reached level-$(i-1)$ projection $(r_{i-1}, s_{i-1})$. This (combined with the basic part from higher levels, i.e., the last line of Eq. 4.202) is a level-$i$ array product. Then we take the next cube function from projection $(r_{i-1}, s_{i-1})$ and intersect it with the saved result from projection $(r_{i-1}, s_{i-1}-1)$. We continue taking cube functions from projection $(r_{i-1}, s_{i-1})$ until they are exhausted. Then we back up to projection $(r_{i-1}, s_{i-1}-1)$, take the next cube function from there, and repeat the above procedure. This entire procedure of traversing back and forth among level-$(i-1)$ projections and taking intersections is continued until we return to (0, 0) and there are no more cube functions at that level. Of course, what we have just long-windedly described is the recursive traversal of a tree. The tree has the following interpretation. 
The depth15 d of the tree denotes a specific level-$(i-1)$ projection (j, k); the root corresponds to projection (0, 0), depth 1 corresponds to

15 The depth d of a node of a tree is the distance (number of arcs) to the root. For example, the root has depth 0, any nodes connected by a single arc to the root have depth 1, etc.

projection (0, 1), and so forth. Generally, depth d represents projection

$(j, k) = (d \text{ div } (s_{i-1}+1),\; d \bmod (s_{i-1}+1))$,  (4.211)

where "div" is integer division. Each node of the tree has associated with it a level-$i$ "partial" cube function (partial in the sense that the cube function is not necessarily part of $\gamma_i$). For the $h$th node at depth $d$ we write

$A^{dh} = \bigwedge_{j=0}^{r_i+p_i+1} \bigwedge_{k=0}^{s_i} u_{jk}^{((A^{dh}_{jk}))}$  (4.212)

which we shall sometimes write as

$A^{dh} = [(A^{dh}_{jk})]_{jk} : (r_i + p_i + 1) \times s_i$.  (4.213)

The root contains the "full cube function"

$(A^{00}_{jk}) = Q_{jk}$.  (4.214)

The node $A^{dh'}$ at the end of the $n$th branch (i.e., the $n$th child) of the $h$th node $A^{(d-1)h}$ at depth $d-1$ is obtained by the componentwise intersection of the $n$th cube function of $\kappa_i$ with the cube function associated with the node $A^{(d-1)h}$. Thus, we have $(j, k) = (d \text{ div } (s_{i-1}+1), d \bmod (s_{i-1}+1))$, and for a specific $(\Gamma^{(i-1)}_{jk})_{nh}$,

$A^{dh'} = A^{(d-1)h} \wedge \bigwedge_{j_1=0}^{r_i+p_i+1} \bigwedge_{k_1=0}^{s_i} u_{j_1 k_1}^{((K^{(i)}_{jk})_{n h_1})}$  (4.215)

where $w_n$ is the weight of the $n$th cube function of $\kappa_i$. (Compare Eq. 4.215 with the middle line of Eq. 4.202.) Thus, we are constructing a tree of level-$i$ cube functions. Call this tree the intersection tree of the level-$(i-1)$ array products $[(\Gamma^{(i-1)}_{jk})_{nh}]_{jk}$. The leaves at depth
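The depth-to-projection bookkeeping of Eq. 4.211 is easy to check in code; a minimal sketch (the function name is ours):

```python
def projection(d, s):
    """Eq. 4.211: the level-(i-1) projection (j, k) visited at tree depth d,
    where each attribute has phases k = 0..s."""
    return d // (s + 1), d % (s + 1)

# With s = 2, successive depths sweep the projections row by row:
print([projection(d, 2) for d in range(6)])
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```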

$(r_{i-1} + p_{i-1} + 2) \cdot (s_{i-1} + 1)$ are the cube functions specified by the following term of Eq. 4.202:

$\bigwedge_{j=0}^{r_{i-1}} \bigwedge_{k=0}^{s_{i-1}} \left\{ \bigvee_{n_1 : w_{n_1} \in (\Gamma^{(i-1)}_{jk})_{nh}} \bigvee_{h_1=1}^{p_{n_1}^{jk}} \bigwedge_{j_1=0}^{r_i+p_i+1} \bigwedge_{k_1=0}^{s_i} u_{j_1 k_1}^{((K^{(i)}_{jk})_{n_1 h_1})} \right\}$  (4.216)

By evaluating an intersection tree, we mean determining the lowest leaves, i.e., the values of Eq. 4.216. Finally, once any given leaf of the tree has been determined, the corresponding basic components of the underlying level-$(i-1)$ cube function (i.e., the last line of Eq. 4.202) can be adjoined (the $\wedge$ operation). Note that unlike, say, a parse tree, we are simultaneously constructing and evaluating the intersection tree. We can therefore toss away a "partial" cube function (Eqs. 4.212-4.213) once we have constructed and evaluated all nodes below it. Indeed, we never have to store more than a single chain of partial cube functions (as we go down the tree), and that chain's length is at most the depth of the tree. (Of course, coming back up the tree, we must store any newly computed level-$i$ cube functions.) We are now prepared to state in more detail how an intersection tree is constructed and

evaluated. The following algorithm is called from intarraysets as "buildtree($A^{00}$, 0)."

Algorithm 4.4: (Recursive algorithm for constructing and evaluating an intersection tree)
METAPHOR Function: buildtree($A^d$, d)

a) result ← ∅;  [Initialize this level's result to null.]
b) for $n_1$ = 0 to $N^{jk}$ do  [Consider each level-$i$ cube function associated with the (nonbasic) $jk$th projection of $\kappa_i$.]
c) if $w_{n_1} \in (\Gamma^{(i-1)}_{jk})_{nh}$ then for $h_1$ = 0 to $p^{jk}_{n_1}$ do begin  [For each cube function with weight $w_{n_1}$ in $(\Gamma^{(i-1)}_{jk})_{nh}$:]
d) $A^{(d+1)h_1} \leftarrow A^d \wedge [(K^{(i)}_{jk})_{n_1 h_1}]_{j_1 k_1} : (r_i + p_i + 1) \times s_i$  (4.217)  [Do componentwise intersections of $A^d$ and Eq. 4.217 to form the child $A^{(d+1)h_1}$.]
e) if d ≠ $(r_{i-1} + p_{i-1} + 2) \cdot (s_{i-1} + 1)$ then result ← [result buildtree($A^{(d+1)h_1}$, d+1)] else result ← [result $A^{(d+1)h_1}$]  [If this is not the bottom level, recurse on the child of step d). Otherwise save the result, along with all the higher-level basic components of the level-$(i-1)$ cube function, i.e., the bottom line of Eq. 4.202.]
end;
f) return result  [Return the result.]
end;

Of course, many refinements can be made to Algorithm 4.4. In particular, we need not continue constructing the tree below any given node if that node is null, i.e., the result of the intersection of step d) is a null array. Also, as discussed above, at a given level, we can compute all the children of that node before proceeding down the tree. Then, we can attempt to simplify those children to derive a smaller set of children to traverse. Another technique that can possibly lower the computation time is to reduce the cube functions in
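The recursion of Algorithm 4.4 can be sketched as follows. Cube functions are modeled as tuples of exponent sets (one per level-$i$ component), `meet` is the componentwise intersection, and `choices[d]` stands in for the $\kappa_i$ cube functions eligible at depth d; all names and the data layout are illustrative assumptions, not METAPHOR's.

```python
# Stripped-down sketch of Algorithm 4.4 (illustrative representation only).

def meet(c1, c2):
    """Componentwise intersection; None if any component becomes empty
    (a null array, so the subtree below it can be pruned)."""
    out = tuple(a & b for a, b in zip(c1, c2))
    return None if any(not comp for comp in out) else out

def buildtree(node, choices, d=0):
    """Depth d picks one cube function per projection, telescoping the
    intersections; the leaves (d == len(choices)) are the surviving products."""
    if d == len(choices):
        return [node]
    leaves = []
    for cube in choices[d]:
        child = meet(node, cube)
        if child is not None:          # prune null subtrees early
            leaves += buildtree(child, choices, d + 1)
    return leaves

U = frozenset(range(4))
full = (U, U)                          # the root's "full cube function" (Eq. 4.214)
choices = [                            # per-depth alternatives drawn from kappa
    [(frozenset({0, 1}), U), (frozenset({2}), U)],
    [(U, frozenset({1, 2}))],
]
for leaf in buildtree(full, choices):
    print(tuple(sorted(c) for c in leaf))
# ([0, 1], [1, 2])
# ([2], [1, 2])
```

As in the text, only a single chain of partial cube functions is live at any moment during the descent; the leaves collected on the way back up are the level-$i$ array products.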

$[(K^{(i)}_{jk})_{n_1 h_1}]_{j_1 k_1} : (r_i + p_i + 1) \times s_i$ before doing intersections with $A^d$ [step d)]. The point of this latter step is to reduce the number of cube functions that will appear in the collection of children. Thus, we have the following algorithm:

Algorithm 4.4b: (Algorithm for constructing and evaluating an intersection tree)
METAPHOR Function: buildtree($A^d$, d)

a) result ← ∅;  [Initialize this level's result to null.]
b) for $n_1$ = 0 to $N^{jk}$ do  [Consider each level-$i$ cube function associated with the (nonbasic) $jk$th projection of $\kappa_i$.]
c) if $w_{n_1} \in (\Gamma^{(i-1)}_{jk})_{nh}$ then for $h_1$ = 0 to $p^{jk}_{n_1}$ do  [For each cube function with weight $w_{n_1}$ in $(\Gamma^{(i-1)}_{jk})_{nh}$:]
d) K ← [K $[(K^{(i)}_{jk})_{n_1 h_1}]_{j_1 k_1} : (r_i + p_i + 1) \times s_i$];  (4.218)  [Remember the cube functions satisfying step c).]
e) K ← reducearray(K);  [Reduce the cube functions stored in K.]
f) N ← <number of cube functions in reduced K>;  [Note how many reduced cube functions there are.]
g) for $n_1$ = 1 to N do begin  [For each cube function in K:]
h) $A^{(d+1)n_1} \leftarrow A^d \wedge K_{n_1}$;  [Do componentwise intersections of $A^d$ and the reduced versions of Eq. 4.218 to form the child $A^{(d+1)n_1}$.]
i) if d ≠ $(r_{i-1} + p_{i-1} + 2) \cdot (s_{i-1} + 1)$ then result ← [result buildtree($A^{(d+1)n_1}$, d+1)] else result ← [result $A^{(d+1)n_1}$]  [If this is not the bottom level, recurse on the child of step h). Otherwise save the result along with all the higher-level basic components of the level-$(i-1)$ cube function of Eq. 4.218, i.e., the bottom line of Eq. 4.202.]
end;
j) return result  [Return the result.]
end;

4.3.3.3. Reduction

Several algorithms described above (Algorithms 4.3 and 4.4) specify that a set of cube functions be "reduced" or made smaller. In this section, we describe what the operation of "reduction" on cube functions means and how it is performed. To motivate this section, recall that in switching theory, two commonly performed operations are 1) finding the prime implicants of a switching function, and 2) from the resulting set of prime implicants, finding an optimal (under some cost criterion) cover of the function. The Quine-McCluskey algorithm [185], [186], and less commonly, consensus algorithms (Quine [185]; see also Tison [187]), are typically employed to solve the first problem. Various covering algorithms are used to attack the second. There is a similar problem when representing discrete functions. We desire to find an "optimal" representation of a discrete function. In Section 4.2.3.2.5 [Classes and Properties of Lattice Expressions], the correspondence between cube functions and implicants in switching theory is noted. Therefore, one might conjecture that the representation problem can be solved by appropriate extensions of the prime implicant extraction algorithms and associated covering algorithms. This is indeed the case. Some previous research addresses such extensions. For example, work by Tison [188] examines generalized consensus for discrete functions, while Davio, Deschamps, and Thayse [165] discuss a generalized Quine-McCluskey method and consensus. As discussed in Section 3.4.3 [Algorithms for Calculating Trajectory Sets], much of the above theory [165], [188] concerns the optimization of the space required to represent discrete functions. That is because in many applications, such as the design of switching circuits, discrete functions do not need to be calculated repeatedly. Hence, the cost of the time required to compute a spatially optimal representation is slight compared to the cost of the space itself. 
However, in the application considered here, we are computing many representations, any one of which is relatively ephemeral. If we are not careful, the expense of

searching for a spatially optimal function may outweigh the advantage of possessing such an optimization. Our approach shall be to find a sub-optimal representation as quickly as possible. Unfortunately, the problem of finding a sub-optimal (or even an optimal) disjoint representation cannot be solved by straightforward extensions of, say, the Quine-McCluskey algorithm, since there is a constraint that prevents us from using "prime implicants" or "prime cube functions." The constraint is that the cube functions must be disjoint so that when probabilities are computed (using the cube functions as events), no event is measured twice. Instead of using prime implicants, we shall define a "maximal disjoint cube function" and specify an algorithm similar to the Quine-McCluskey algorithm. We need not worry about a covering algorithm because the set of maximal disjoint cube functions which are derived will all be necessary to cover the discrete function, and therefore, there will be no redundant cube functions. We begin by specifying the cost for a representation of a discrete function: the cost of a normal disjunctive form of a discrete function is the number of cube functions (minterms) in the representation. For example,

$c_n(x) = \bigvee_{h=1}^{p_n} \bigwedge_{j=0}^{r} \bigwedge_{k=0}^{s} x_{jk}^{((C_{jk})_{nh})}$  (4.219)

has cost $p_n$, because $c_n(x)$ has $p_n$ cube functions. The set of prime implicants of a function is unique, and so the cost of the disjunction of all of its prime implicants is fixed. On the other hand, we shall find that a discrete function may have many sets of maximal disjoint cube functions, and that the cost of such sets may vary. Therefore, to find the optimal maximal disjoint cube function representation, we may have to determine all such representations and choose the least costly one. 
If, however, it is somehow relatively "expensive" to determine all such representations, and if the variance of the cost of the representations is small, i.e., there is not a significant difference in cost among representations, we may be satisfied with a suboptimal representation, i.e., one with a higher

(hopefully not much higher) cost than the optimal. This is the case for our application. Our criterion for choosing a set of maximal disjoint cube functions is straightforward: we choose the first such set we can find. We shall not formally analyze the above problem; the intuitive argument should, however, explain the reasoning. Consider two cube functions

$c_0(x) = \bigwedge_{j=0}^{r} \bigwedge_{k=0}^{s} x_{jk}^{((C^0_{jk}))}, \qquad c_1(x) = \bigwedge_{j=0}^{r} \bigwedge_{k=0}^{s} x_{jk}^{((C^1_{jk}))}$.  (4.220)

If $(C^0_{jk}) = (C^1_{jk})$ for all $j, k$ except one pair $j'k'$, then

$c_0(x) \vee c_1(x) = \left[ \bigwedge_{jk \neq j'k'} x_{jk}^{((C^0_{jk}))} \right] \wedge \left( x_{j'k'}^{((C^0_{j'k'}))} \vee x_{j'k'}^{((C^1_{j'k'}))} \right)$.  (4.221)

Recalling the identity (Eq. 4.111)

$x^{(a)} \vee x^{(b)} = x^{(a \cup b)}$  (4.112)

we have

$x_{j'k'}^{((C^0_{j'k'}))} \vee x_{j'k'}^{((C^1_{j'k'}))} = x_{j'k'}^{((C^0_{j'k'}) \cup (C^1_{j'k'}))}$  (4.222)

and setting

$(C_{j'k'}) = (C^0_{j'k'}) \cup (C^1_{j'k'})$  (4.223)

we find that the two cube functions of Eq. 4.220 can be "squeezed" into a single cube function:

$c_0(x) \vee c_1(x) = \bigwedge_{j=0}^{r} \bigwedge_{k=0}^{s} x_{jk}^{((C_{jk}))} = c(x)$.  (4.224)

In words, the disjunction of two cube functions whose exponents are identical componentwise, except for possibly one component, can be written as a single cube function. Eq. 4.224 is the basis for a reduction operation $*$:

$c_0(x) * c_1(x) = \begin{cases} c(x) & \text{if every } (C^0_{jk}) = (C^1_{jk}) \text{ except one} \\ 0 & \text{otherwise.} \end{cases}$  (4.225)

Let

$F(x) = \bigvee_{h=1}^{p} \bigwedge_{j=0}^{r} \bigwedge_{k=0}^{s} x_{jk}^{((C^h_{jk}))}$  (4.226)

Then each cube function $c_h(x)$ is a maximal disjoint cube function if for every pair of cube functions $c_{h_0}(x)$ and $c_{h_1}(x)$,

$c_{h_0}(x) * c_{h_1}(x) = 0$  (4.227)

and

$c_{h_0}(x) \wedge c_{h_1}(x) = 0$.  (4.228)

That is, the cube functions cannot be reduced by Eq. 4.224 and are pairwise disjoint. In an earlier paragraph, the claim was made that there may be more than one representation of a discrete function whose cube functions are maximal disjoint. Consider the simple switching function whose Karnaugh map is shown in Figure 4.3.

[Fig. 4.3. Example of a switching function with multiple maximal disjoint cube function representations.]

The cube functions (minterms) are noted by the rounded boxes. Clearly, the representations in each diagram are disjoint and maximal and represent the same function, yet they are not the same representations. We are now in a position to describe how to take a disjunctive normal form of a function (e.g., Eq. 4.226) whose cube functions are mutually disjoint and extract from it a set of maximally disjoint cube functions. Note the assumption that we are initially presented with mutually disjoint cube functions. This is a valid assumption for our algorithms since all representations are always kept disjoint. Basically, the algorithm selects the first cube function and compares it with each other cube function. If the two cube functions can be reduced (Eq. 4.224), they are reduced; the first cube function is replaced by the reduction and the other cube function is deleted. When all the other cube functions have been compared, the algorithm selects the second cube function and compares it with all the other cube functions except the first. Again, reductions are made if possible. The algorithm continues selecting one cube function and comparing it with the remaining cube functions until all cube functions (except the last) have been selected as the "first" cube function.
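The reduction procedure just described, built on the $*$ operation of Eq. 4.225, can be sketched as follows; exponents are modeled as one frozenset per component, and all names are ours rather than METAPHOR's. The input cubes are assumed pairwise disjoint, as the text requires, and the first fixed point found is kept (no search for a globally cheapest representation).

```python
# Illustrative sketch of the reduction loop (reducetraj, in spirit).

def try_squeeze(c0, c1):
    """Eq. 4.225: merge c0 and c1 if their exponents differ in at most one
    component (union that component); otherwise return None."""
    diff = [i for i, (a, b) in enumerate(zip(c0, c1)) if a != b]
    if len(diff) > 1:
        return None
    if diff:
        i = diff[0]
        c0 = c0[:i] + (c0[i] | c1[i],) + c0[i + 1:]
    return c0

def reduce_cubes(cubes):
    """Repeat passes of pairwise squeezing until none succeeds, yielding a
    set of maximal disjoint cube functions (a fixed point, not an optimum)."""
    cubes = list(cubes)
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(cubes):
            j = i + 1
            while j < len(cubes):
                merged = try_squeeze(cubes[i], cubes[j])
                if merged is not None:
                    cubes[i] = merged      # replace the first cube by the merge
                    del cubes[j]           # delete the second cube
                    changed = True
                else:
                    j += 1
            i += 1
    return cubes

# Three disjoint minterms over two components collapse into one cube function.
cubes = [(frozenset({0}), frozenset({0})),
         (frozenset({1}), frozenset({0})),
         (frozenset({0, 1}), frozenset({1}))]
print(reduce_cubes(cubes))   # a single cube: ({0, 1}, {0, 1})
```

Because merging only ever unions exponents of disjoint cubes, the output covers exactly the same points as the input, which is why no separate covering step is needed.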

Algorithm 4.5: (Algorithm for determining a set of maximal disjoint cube functions)
METAPHOR Function: reducetraj

a) changeflag ← 1;  [Pretend changes have been made.]
b) while changeflag = 1 do begin  [While it is still possible to have reductions, keep repeating the algorithm:]
c) changeflag ← 0;  [Reset the flag noting changes have been made.]
d) for h1 = 0 to p-1 do  [Consider each cube function (except the last) as the "first."]
e) for h2 = h1 + 1 to p do  [Consider each of the "remaining" cube functions.]
f) while h2 ≤ p and compresstest($K^{h1}$, $K^{h2}$) do begin  [If the second cube function exists, and if the two cube functions can be compressed (Eq. 4.224):]
g) K ← squeeze(K, $K^{h1}$, $K^{h2}$, h1, h2); changeflag ← 1;  [then compress them together and note that a change was made.]
h) p ← p - 1  [Note that we lost a cube function.]
end
end;

When the above process completes, if any two cube functions were reduced, the process is repeated from the beginning until no reductions are made during a pass. The two algorithms used in reducetraj (compresstest and squeeze) are straightforward:

Algorithm 4.6: (Algorithm for detecting whether two cube functions can be compressed)
METAPHOR Function: compresstest(K0, K1)

a) found ← 0;  [Note that no matching components have been found.]
b) for j = 0 to r do  [Consider each of the first-dimension coordinates.]
c) for k = 0 to s do  [Consider each of the second-dimension coordinates.]
d) if $(C^0_{jk}) \neq (C^1_{jk})$ then  [Check whether the exponents are identical, componentwise.]
e) if found = 1 then return false  [If not, see if we have any other nonequal exponents. If we have, then these two cube functions cannot be compressed, so return false immediately.]
f) else begin found ← 1;  [Otherwise, these cube functions may potentially be compressible. Note that we have found dissimilar exponents:]
g) <squirrel away j and k> end  [and store their coordinates so that if we do squeeze the two cube functions, we do not have to recalculate the coordinates.]
else;
h) return true;  [If we got this far without returning, then the cube functions can be compressed; either a single coordinate is different or the two cube functions are identical. Either way, return "true."]
end;

Algorithm 4.7: (Algorithm for compressing two cube functions)
METAPHOR Function: squeeze(K, $K^{h1}$, $K^{h2}$, h1, h2)

a) $(C^{h1}_{jk}) \leftarrow (C^{h1}_{jk}) \cup (C^{h2}_{jk})$  [Take the union of the two differing exponents and use it to replace the exponent of the first cube function. The values of j and k were stored when compresstest (Algorithm 4.6) was executed.]
b) <purge K of $K^{h2}$>  [Delete the second cube function.]
end;

Algorithm 4.3 refers to a METAPHOR function called reducelctraj, and Algorithm 4.12 [getktraj] (to be defined in Section 4.3.3.4 [Consistency Checking Algorithms]) refers to a METAPHOR function called reducektraj. At level-$i$, reducelctraj considers, for each accomplishment level $a$, the set $\gamma_i^{-1}(a)$ and tries to reduce that set. Similarly, reducektraj attempts to reduce $({}_{jk}\kappa_i)^{-1}(u)$ (see Eq. 4.187). These two functions are specific (and slightly more efficient) instances of the more global function reducetraj (Algorithm 4.5); the major differences are in the looping.

4.3.3.4. Consistency Checking Algorithms

The high-level Algorithm 4.2 describes the basic information which must be input to METAPHOR, mainly descriptions of the model hierarchy and the interlevel translations $\kappa_i$. It is important that the input information be correct and consistent. As with any algorithm or program, the correct answer can be produced only if the data is correct. There are several algorithms that METAPHOR can employ to ensure that the input is consistent; the user has responsibility for the correctness of the input. Consider the high-level input algorithms (see Algorithm 4.2) for specifying the model hierarchy. The first three (getacclev, getlevel0, and getlevel1) are straightforward:

Algorithm 4.8: (Algorithm for obtaining the accomplishment levels)
METAPHOR Function: getacclev

1) Fetch the number of accomplishment levels.
2) Fetch the name of each accomplishment level. [getname]

Algorithm 4.9: (Algorithm for obtaining the specification for level-0)
METAPHOR Function: getlevel0

1) Get the attributes for level-0. [getattr0val]
2) Get the phases for level-0. [getphase0val]
3) Get the basic variables for level-0. [getbasicval]

Algorithm 4.10: (Algorithm for obtaining the specification for level-1)
METAPHOR Function: getlevel1

1) Get the attributes for level-1. [getattr1val]
2) Get the phases for level-1. [getphase1val]

Each of the functions getattr0val, getphase0val, getbasicval, getattr1val, and getphase1val is similar to getacclev (Algorithm 4.8). The function that inputs $\gamma_0 = \kappa_0$ is getlctraj:

Algorithm 4.11: (Algorithm for obtaining $\gamma_0 = \kappa_0$)
METAPHOR Function: getlctraj

1) For each accomplishment level $a$, get the number of cube functions in $\gamma_0^{-1}(a)$. [getlccount]
2) For each accomplishment level $a$, get the cube functions in $\gamma_0^{-1}(a)$. (A scanner and parser.) [getlctrajsets]
3) Check that the cube functions input in step 2) describe (according to the conventions of Section 4.3.1 [Representation of Discrete Functions Within METAPHOR]) a (possibly non-total) function, i.e., that there is no point $u \in U^0$ that maps into more than one accomplishment level. This involves checking that no point appears in more than one cube function. [checklctraj]
4) Check that the cube functions input in step 2) describe (according to the conventions of Section 4.3.1) a total function, i.e., that every point $u \in U^0$ is defined. This involves checking that every point appears in at least one cube function. [checktotal]
5) If the user desires, reduce $\gamma_0$. [reducelctraj]

The function getktraj is similar to getlctraj and fetches the level-$i$ interlevel translations.

Algorithm 4.12: (Algorithm for obtaining $\kappa_i$)
METAPHOR Function: getktraj

1) For each possible value $u$ of each attribute of each phase of the level-0 trajectory space (i.e., each value of each projection of $U^0$), get the number of cube functions in $({}_{jk}\kappa_1)^{-1}(u)$. [getkcount]
2) For each possible value $u$ of each attribute of each phase of the level-0 trajectory space (i.e., each value of each projection of $U^0$), get the cube functions in $({}_{jk}\kappa_1)^{-1}(u)$. (A scanner and parser.) [getktrajsets]
3) Check that the cube functions that were input in step 2) describe (according to the conventions of Section 4.3.1 [Representation of Discrete Functions Within METAPHOR]) a (possibly non-total) function, i.e., that there is no point $u \in U^1$ that maps into more than one value at level-0. This involves checking that no point appears in more than one cube function. [checkktraj]
4) Check that the cube functions input in step 2) describe (according to the conventions of Section 4.3.1) a total function, i.e., that every point $u \in U^1$ is defined. This involves checking that every point appears in at least one cube function. [checktotal]
5) If the user desires, reduce $\kappa_1$. [reducektraj]

Note that only a function getlevel1 (but no getlevel2) is defined (see Algorithm 4.2). Also note that we compute $\gamma_i$ before we examine $\kappa_{i+1}$ [step 2) of Algorithm 4.2]. Once the function $\gamma_1$ has been determined, we can "reorganize" the hierarchy as follows: discard all references to level-0 and $\gamma_0$, rename the "old" level-1 to be the "new" level-0, rename the "old" $\gamma_1$ to be the "new" $\gamma_0$, and continue as though the next level of the model is level-1. Thus, internally to METAPHOR, we never progress beyond level-1. Hence, we do not require any functions that deal with more than two levels (level-0 and level-1).

Consider the function checklctraj (see Algorithm 4.11). checklctraj verifies that the given cube functions describe a "legal" $\gamma_0$ in the sense that no trajectory $u \in U^0$ is mapped into more than one accomplishment level. This is checked by making sure the various cube functions are mutually exclusive. The technique employed is direct: take the pairwise meet of each cube function with every other cube function and make sure that the result is null. If the result is non-null, then the cube functions do not describe a function. METAPHOR will print the offending two cube functions. If the user requests that reduction of the input cube functions be made, then the cube functions associated with a given accomplishment level do not have to be mutually exclusive (since the reduction function will produce mutually exclusive cube functions), and so no checking is performed between those cube functions. Recall Eq. 4.191:

$\gamma_0(u) = \bigvee_{n=0}^{l} a_n \wedge \bigvee_{h=1}^{p_n} \bigwedge_{j=0}^{r_0+p_0+1} \bigwedge_{k=0}^{s_0} u_{jk}^{((\Gamma^{(0)}_{jk})_{nh})}$  (4.191)

Then

Algorithm 4.13: (Algorithm for checking that $\gamma_0(u)$ is unique)
METAPHOR Function: checklctraj

a) for n1 = 0 to $l$ do  [Consider each accomplishment level.]
b) for h1 = 1 to $p_{n1}$ do begin  [Consider each cube function associated with that accomplishment level.]
c) for n2 = 0 to n1-1 do  [Consider each accomplishment level up to the present one.]
d) for h2 = 1 to $p_{n2}$ do begin  [Consider each cube function associated with the accomplishment level of step c).]
e) nonnull ← true  [Assume the two cube functions of steps b) and d) are not mutually exclusive.]
f) for j = 0 to $(r_0 + p_0 + 1)$ do  [Consider each component (attribute):]
g) if nonnull then  [We can quit if the meet of the cube functions is null.]
h) for k = 0 to $s_0$ do  [Consider each phase:]
i) if nonnull then  [We can quit if the meet of the cube functions is null.]
j) nonnull ← <is $(\Gamma^{(0)}_{jk})_{n1\,h1} \cap (\Gamma^{(0)}_{jk})_{n2\,h2}$ non-empty?>;  [See if the intersection of the components is null. If so, then the meet is empty.]
k) if nonnull then error()  [If the cube functions are not mutually exclusive, then process the error.]
end;
l) if not reducelc then begin  [If the cube functions are to be reduced (see Algorithm 4.5), then we need not worry about the mutual exclusiveness of cube functions within the same accomplishment level. Otherwise repeat steps b) through k) for the current accomplishment level.]
m) for h2 = 1 to h1 - 1 do begin
n) nonnull ← true
o) for j = 0 to $(r_0 + p_0 + 1)$ do
p) if nonnull then
q) for k = 0 to $s_0$ do
r) if nonnull then
s) nonnull ← <is $(\Gamma^{(0)}_{jk})_{n1\,h1} \cap (\Gamma^{(0)}_{jk})_{n1\,h2}$ non-empty?>;
t) if nonnull then error()
end;
end;
end;
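The pairwise-meet test at the heart of checklctraj can be sketched as follows (explicit sets and hypothetical names; the real check walks the $(\Gamma^{(0)}_{jk})$ exponent arrays componentwise, and skips within-level pairs when reduction is requested):

```python
# Illustrative consistency check: a family of cube functions describes a
# (partial) function only if cubes of different levels have null meets.

def is_null_meet(c0, c1):
    """True iff the componentwise intersection is empty in some component."""
    return any(not (a & b) for a, b in zip(c0, c1))

def check_disjoint(levels):
    """levels: accomplishment level -> list of cube functions. Return the
    first offending pair that maps some point to two levels, or None."""
    flat = [(lvl, c) for lvl, cs in levels.items() for c in cs]
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            (l0, c0), (l1, c1) = flat[i], flat[j]
            if l0 != l1 and not is_null_meet(c0, c1):
                return (l0, c0), (l1, c1)    # the two cubes METAPHOR would print
    return None

ok = {"a0": [(frozenset({0}), frozenset({0, 1}))],
      "a1": [(frozenset({1}), frozenset({0, 1}))]}
print(check_disjoint(ok))              # None: all cross-level meets are null

bad = {"a0": [(frozenset({0, 1}), frozenset({0}))],
       "a1": [(frozenset({1}), frozenset({0}))]}
print(check_disjoint(bad) is None)     # False: point (1, 0) maps to both levels
```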

The function checkktraj is essentially the same as checklctraj. Recall Eq. 4.181:

${}_{j_0 k_0}\kappa_i(u) = \bigvee_{n=0}^{N^{j_0 k_0}} w_n \wedge \bigvee_{h=1}^{p_n^{j_0 k_0}} \bigwedge_{j=0}^{r_{i-1}+p_{i-1}+1} \bigwedge_{k=0}^{s_{i-1}} u_{jk}^{((K^{(i)}_{j_0 k_0})_{nh})}$  (4.181)

$(i = 1, 2, \ldots, m)$. The major difference between checkktraj and checklctraj is that checkktraj is executed for every projection ${}_{jk}\kappa_i$ rather than once:

Algorithm 4.14: (Algorithm for checking that $\kappa_i(u)$ is unique)
METAPHOR Function: checkktraj

a) for j0 = 0 to $r_0$ do  [Consider each level-0 component (attribute).]
b) for k0 = 0 to $s_0$ do  [Consider each level-0 phase.]
c) checkkprojtraj(j0, k0)  [Check the projection of $\kappa_1$.]
end;

where

Algorithm 4.15: (Algorithm for checking that ${}_{j_0 k_0}\kappa_1(u)$ is unique)
METAPHOR Function: checkkprojtraj(j0, k0)

a) for n1 = 0 to $N^{j_0 k_0}$ do  [Consider each value that ${}_{j_0 k_0}\kappa_1$ can assume.]
b) for h1 = 1 to $p_{n1}^{j_0 k_0}$ do begin  [Consider each cube function associated with that value.]
c) for n2 = 0 to n1-1 do  [Consider each value up to the present one.]
d) for h2 = 1 to $p_{n2}^{j_0 k_0}$ do begin  [Consider each cube function associated with the value of step c).]
e) nonnull ← true  [Assume the two cube functions of steps b) and d) are not mutually exclusive.]
f) for j1 = 0 to $(r_1 + p_1 + 1)$ do  [Consider each component (attribute):]
g) if nonnull then  [We can quit if the meet of the cube functions is null.]
h) for k1 = 0 to $s_1$ do  [Consider each phase:]
i) if nonnull then  [We can quit if the meet of the cube functions is null.]
j) nonnull ← <is $(K^{(1)}_{j_0 k_0})_{n1\,h1} \cap (K^{(1)}_{j_0 k_0})_{n2\,h2}$ non-empty?>;  [See if the intersection of the components is null. If so, then the meet is empty.]
k) if nonnull then error()  [If the cube functions are not mutually exclusive, then process the error.]
end;
l) if not reducek then begin  [If the cube functions are to be reduced (see Algorithm 4.5), then we need not worry about the mutual exclusiveness of cube functions within the same value. Otherwise repeat steps b) through k) for the current value.]
m) for h2 = 1 to h1 - 1 do begin
n) nonnull ← true
o) for j1 = 0 to $(r_1 + p_1 + 1)$ do
p) if nonnull then
q) for k1 = 0 to $s_1$ do
r) if nonnull then
s) nonnull ← <is $(K^{(1)}_{j_0 k_0})_{n1\,h1} \cap (K^{(1)}_{j_0 k_0})_{n1\,h2}$ non-empty?>;
t) if nonnull then error()
end;
end;
end;

The other function in Algorithms 4.11 and 4.12 which checks the consistency of the input is checktotal. This function verifies that the function (either $\gamma_0$ or $\kappa_1$) is total, i.e., that every point in the domain is defined. This is done by compressing all of the cube functions

(regardless of weight) associated with the function until they will no longer compress. The function is total if the result of the compression is the single "full" cube function:

    F = ∧_{j=0..r} ∧_{k=0..s} [Q_jk].    (4.229)

If the result of the compression is not the full cube function, the function may still be total; any missing values can be found by taking the difference between the full cube function and the result of the reduction operations. The algorithm to find any missing trajectories is implemented by the function complementtraj.

Algorithm 4.16: (Algorithm for checking that γ0 or κ1 is total)

METAPHOR Function checktotal
Step                                    Comment
a) C ← reducetraj                       a) Reduce the cube functions.
b) if C ≠ <full cube function> then     b) If we do not get the full cube function then:
c)   complementtraj(C)                  c) Complement the cube functions we do have.
end;

Function complementtraj works by using

    ∧_{h=1..p} c_h(x)^C.    (4.230)

See Eq. 4.152 for the definition of the complement of a cube function, c_h(x)^C. complementtraj complements each of its input cube functions and then intersects them, using an intersection-tree (see Section 4.3.3.2 [Intersection Trees]) algorithm similar to Algorithm 4.4.
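To make the totality check concrete, the following Python sketch represents cube functions, purely for illustration, as sets of admitted values per position and checks whether a collection of them covers the whole domain; the residual points correspond to the missing trajectories that complementtraj produces. The representation, names, and toy domain here are assumptions, not METAPHOR's actual data structures.

```python
from itertools import product

# Hypothetical, simplified stand-in for the cube functions of the text:
# each cube maps a position to the set of values it admits; positions
# not listed admit every value in the (assumed, toy) domain.
DOMAIN = {0: {0, 1}, 1: {0, 1, 2}}

def points(cube):
    """Expand a cube into the explicit set of domain points it covers."""
    axes = [sorted(cube.get(pos, DOMAIN[pos])) for pos in sorted(DOMAIN)]
    return set(product(*axes))

def is_total(cubes):
    """A collection is 'total' if together the cubes cover the whole domain."""
    covered = set().union(*(points(c) for c in cubes))
    full = set(product(*(sorted(DOMAIN[p]) for p in sorted(DOMAIN))))
    return covered == full, full - covered

# The second cube misses value 2 at position 1, so the collection is not total.
total, missing = is_total([{0: {0}}, {0: {1}, 1: {0, 1}}])
print(total, sorted(missing))   # -> False [(1, 2)]
```

The set of missing points returned is exactly what complementing and intersecting the given cubes would yield.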

Algorithm 4.17: (Algorithm for complementing a set of cube functions)

METAPHOR Function complementtraj(C)
Step                                        Comment
a) N ← <number of cube functions in C>      a) Determine how many cube functions are given.
b) D ← NULL                                 b) Initialize the set which will hold the complemented cube functions.
c) for n = 1 to N do                        c) For each given cube function:
d)   D ← [D complement(C_n)]                d) complement the cube function.
e) comptree(D)                              e) Intersect all the complemented cube functions.
end;

Function complementtraj thus implements Eq. 4.230.
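The complement operation that Algorithm 4.18 below spells out row by row can be sketched compactly. The flat tuple-of-value-sets representation here is an assumption made for illustration (the dissertation's cube functions are two-dimensional arrays of exponents), but the disjointness trick is the same: positions before the complemented entry are copied, positions after it are left full.

```python
from itertools import product

DOMAIN = [frozenset({0, 1}), frozenset({0, 1, 2})]   # assumed toy domain

def complement(cube):
    """Return disjoint cubes whose union is the complement of `cube`."""
    result = []
    for i, allowed in enumerate(cube):
        if allowed == DOMAIN[i]:
            continue                      # a full entry contributes nothing
        d = list(DOMAIN)                  # later positions stay full
        d[:i] = cube[:i]                  # earlier positions copied (disjointness)
        d[i] = DOMAIN[i] - allowed        # complement the current entry
        result.append(tuple(d))
    return result

def points(cube):
    return set(product(*(sorted(s) for s in cube)))

c = (frozenset({0}), frozenset({1}))
comp = complement(c)
# The original cube and its complement together cover the whole domain.
assert points(c) | set().union(*(points(d) for d in comp)) == points(tuple(DOMAIN))
```

Intersecting the complements of several cubes, as complementtraj does via comptree, then yields the points covered by none of them.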

Algorithm 4.18: (Algorithm for complementing a cube function)

METAPHOR Function complement(C)
Step                              Comment
a) COMP ← ∅                       a) Initialize the structure which will hold the complement of the cube function C.
b) for a = 0 to r do              b) Consider each row.
c)   for b = 0 to s do begin      c) Consider each column.
d)     if C_ab ≠ Q_ab then begin  d) If the exponent of concern is not full, continue. Otherwise, we do not have to consider this element, since the resulting array would be null.
e)       D ← ∅                    e) Initialize the next cube function in the complement.
f)       for j = 0 to a-1 do      f) Go through each row up till just before the present:
g)         for k = 0 to s do      g) Go through each column in that row:
h)           D_jk ← C_jk          h) And copy the exponent of that element.
i)       for k = 0 to b-1 do      i) Now, do the present row. Consider each column up till just before the present.
j)         D_ak ← C_ak            j) And copy the exponent of that element.
k)       D_ab ← (C_ab)^C          k) Now complement the exponent under consideration.

METAPHOR Function complement(C) (continued)
Step                              Comment
l)       for k = b+1 to s do      l) To finish up, go through the remaining columns on this row.
m)         D_ak ← Q_ak            m) And make those exponents full.
n)       for j = a+1 to r do      n) Go through the remaining rows.
o)         for k = 0 to s do      o) Go through each column in that row:
p)           D_jk ← Q_jk          p) And make those exponents full.
q)       COMP ← [COMP D]          q) Store the newly constructed cube function before constructing the remaining ones.
       end
     end;
r) return COMP                    r) Finally, return the result.
end;

Finally, the function comptree (see Algorithm 4.17) is a version of function buildtree (see Algorithm 4.4).

4.4. METAPHOR - A Performability Modeling and Evaluation Tool

As discussed in Section 3.4.4 [METAPHOR - A Performability Modeling and Evaluation Tool], the algorithms in Section 4.3.3 [Algorithms] (as well as other supporting and ancillary algorithms) have been implemented in METAPHOR. The current version is numbered 3 and is implemented in C [176]. The portion dealing with discrete performance variables is called "meta_discrete", contains approximately 8,000 lines of source, and has approximately 256 Kbytes of executable code, although the runtime size varies considerably due to dynamic

allocation and deallocation of memory. Appendix A contains the Unix manual entry for METAPHOR, while Appendix B contains the manual entry for meta_discrete. Appendix D describes the calling structure of meta_discrete. Among the supporting algorithms in METAPHOR are routines for calculating the probability of the derived trajectory sets. These algorithms are described in Ballance, Furchtgott, Meyer, and Wu [30], and Wu [59], while their implementation within METAPHOR is discussed in Furchtgott [41], [46].

4.5. Examples

We now present some examples of the use of METAPHOR as a tool for deriving trajectory sets and calculating performability. As discussed in Section 3.3.8.2 [Examples of a Model Hierarchy], an example of model hierarchy construction by Furchtgott [89] is available in [29], [30], [38], and a second example of model hierarchy construction appears in [31], [32], [39], [40], [44]. These examples will not be redescribed here, but METAPHOR sessions for those examples will be described. In addition, two examples by Hitt, Bridgman, and Robinson [156] will be discussed. The latter two examples were also solved using fault-tree evaluation and a tabular method based on the program TASRA [117]. All of the examples were also solved completely by hand, though the labor was sometimes extensive. Except for the fourth example, the hand-generated answers agreed with METAPHOR's results. The fourth example was so large that the hand-generated answer had to make simplifying assumptions which led to erroneous results. The fault-tree and TASRA results were also erroneous because of the way in which they handled statistical dependencies.

4.5.1. Simple Reliability Network Example

This example is by Hitt, Bridgman, and Robinson [156], who call it the "series-parallel problem." Consider the simple reliability network in Figure 4.4. The mission length is 10 hours, and the subsystems fail with the following constant rates:

Fig. 4.4 Simple reliability network

Subsystem    λ
A            5 × 10^-4
B            4 × 10^-4
C            1 × 10^-3
D            1 × 10^-3

The system is successful if, at the end of the mission, there is a path from A to B. The accomplishment levels are (success, fail), and the performability can be identified with the reliability. Appendix F shows the METAPHOR session deriving the result. The answer agrees with hand calculation, and presumably with the results of [156], though those results were not reported.

4.5.2. Simple Air Transport Mission Example

The second example we discuss is a simple air transport mission [29], [30], [38], [89]. As discussed in Section 4.5 [Examples], we will not repeat the scenario here. Consider the system Ps2 of [29]; the METAPHOR session for this example is in Appendix G.

Accomplishment Level a                      p(a)
a0 = (ALL GOOD)                             0.9998
a1 = (BAD FUEL EFFICIENCY)                  337. × 10^-7
a2 = (DIVERSION)                            0.0
a3 = (BAD FUEL EFFICIENCY AND DIVERSION)    1450. × 10^-7
a4 = (CRASH)                                5. × 10^-7

4.5.3. SIFT Computer Example

The third example we present is the SIFT computer [67]. Again, as discussed in Section 4.5 [Examples], we will not repeat the scenario here. The entire METAPHOR session is comparatively lengthy; it follows closely the sessions in Appendices F and G in structure. The input data for this example is in Appendix H. The specific instance of [32] calculated is the London to New York flight having initial state (6,6) with probability one.

Accomplishment Level a                      p(a)
a0 = (ALL GOOD)                             0.9962
a1 = (BAD FUEL EFFICIENCY)                  3.80 × 10^-3
a2 = (DIVERSION)                            3.77 × 10^-1
a3 = (BAD FUEL EFFICIENCY AND DIVERSION)    6.02 × 10^-8
a4 = (CRASH)                                8.34 × 10^-13

4.5.4. Dual-Dual Example

The fourth example is by Hitt, Bridgman, and Robinson [156]. The scenario is given in Appendix I. As with the SIFT example (Section 4.5.3 [SIFT Computer Example]), the entire METAPHOR session is comparatively lengthy. The input data is in Appendix J. Table 4.1 contains the results for the solution techniques using METAPHOR, hand calculation, fault trees, and TASRA. The hand calculation as described in [156] was quite involved and hence prone to error. However, there is a significant discrepancy between METAPHOR and the fault-tree and TASRA results. The reason for this lies in the technique used in the fault-tree and TASRA evaluations to combine two separate outcomes (diversion and crash) which are statistically dependent. For the evaluation, separate fault-tree and TASRA evaluations were performed for diversion (ignoring any possibility of crash) and for crash (ignoring diversion). Thus, the derived probability of diversion does not take into account any crashes which might occur. Similarly, there are trajectories in which attempted CAT-I landings crash and hence lower the probability of a successful CAT-II landing. It may be possible to evaluate such a mission correctly with fault-tree tools. However, with a program such as METAPHOR, such evaluation becomes significantly easier. Also, it is interesting to note that the example does not contain any time dependencies (mission outcomes which depend on sequential combinations of events) such as those of the examples of Sections 4.5.2 [Simple Air Transport Mission Example] and 4.5.3 [SIFT Computer Example]. Time dependencies are much harder to model using the "snapshot" techniques of fault trees.

Mission Outcome                                     Performability Probabilities
                                                    METAPHOR        Hand            Fault Trees     TASRA
Safe Flight and Landing at Primary Destination      0.983740        0.974212        0.974245        0.974236
Safe Flight and Landing at Alternate Destination    0.016193        0.025763        0.025701        0.025740
Loss of Aircraft                                    66.59 × 10^-6   25.98 × 10^-6   25.98 × 10^-6   23.69 × 10^-6

Table 4.1 Dual-dual computer system results for four solution techniques.
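The kind of calculation behind the series-parallel example of Section 4.5.1 can be sketched numerically. The failure rates and 10-hour mission length are those of the example, but the network topology below is an assumption made only for illustration; the actual structure is given in Fig. 4.4.

```python
from math import exp

t = 10.0                                              # mission length, hours
lam = {"A": 5e-4, "B": 4e-4, "C": 1e-3, "D": 1e-3}    # constant failure rates
R = {k: exp(-l * t) for k, l in lam.items()}          # R_i(t) = e^(-lambda_i * t)

def series(*rs):     # every element must survive
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(*rs):   # at least one element must survive
    q = 1.0
    for r in rs:
        q *= 1.0 - r
    return 1.0 - q

# Assumed topology, for illustration only: A in series with the
# parallel pair {B, C}, in series with D.
reliability = series(R["A"], parallel(R["B"], R["C"]), R["D"])
print(f"{reliability:.6f}")
```

With the series-parallel reduction rules, any such network with constant failure rates reduces to a product of these two combinators evaluated at the mission time.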

CHAPTER 5

CONTINUOUS PERFORMANCE VARIABLE (CPV) METHODOLOGY

5.1. Introduction

Much of the recent work in analytical evaluation of degradable fault-tolerant computing systems has concerned the evaluation of the total "benefit" derived from the computing system during some specified interval of time [15], [17]-[22], [24], [26]-[28], [33], [56], [189]-[194]. In these studies, a variety of concepts and terminology have been used to formalize benefit, including "work" [15], "capacity" [17], "reward" [22], and "performance" [18]. Other studies have concerned the evaluation of "cost" in conjunction with the evaluation of some form of benefit [26], [189]-[193]. Much of this work [19]-[22], [27], [91], [189]-[194] has considered systems in steady state which are utilized over an unbounded period of time: repair of components is allowed (indeed, required), and the evaluated quantity is the expected rate at which benefit is derived under steady-state conditions. However, many important applications of degradable systems have bounded utilization periods. In such cases, a transient solution of the system benefit is usually required. Moreover, in many such applications, the user is interested in how benefit is distributed probabilistically (i.e., in the probability distribution function of the benefit variable). This further complicates the evaluation process as compared, say, with the evaluation of expected benefit.

As discussed in Section 3.3 [An Informal Introduction to Performability], we view system "performance" as a relatively general concept which includes various types of benefit as possible specializations. Briefly reviewing some of the terminology of Section 3.3 [An Informal Introduction to Performability], we define the performance of a system S over a specified time period T to be a random variable Y taking values in a set A. Elements of A are accomplishment levels representing how well S performs during T. Relative to a designated performance variable Y, performability is the probability measure p induced by Y where, for any measurable set B of accomplishment levels (B ⊆ A), p(B) is the probability that S performs at a level in B. Evaluation of performability thus requires solution of the probability distribution function of Y. This solution is based on an underlying stochastic process X, called the base model (of S), which represents the dynamics of the system's structure, internal state, and environment during utilization. Specific interpretations of performability thus depend on how values of Y (the accomplishment levels A) are interpreted. In particular, if elements of A represent levels of benefit, then performability describes the system's ability to benefit its user. In the discussion of this chapter, we consider this type of performability where benefit is equated with reward (see Howard [155], for example) derived from using the system during a specified period T. We are interested in the case where the time period T is bounded, although unbounded periods are considered during an intermediate step of the evaluation process. Also, by the definition of performability (see above), we seek to determine (either analytically or numerically) the probability distribution function of the reward variable Y. Prior work relating to this problem has dealt primarily with evaluations of expected reward, i.e., the expected value of the reward variable Y.
This background is nevertheless relevant, and two approaches are of particular interest. The first is based on the concept of the "potential" of a semi-Markov process; see Cinlar [195] for an introductory discussion of potentials. Gay [91] (see also Gay and Ketelsen [19]) has applied potentials of Markov processes to modeling the performance of degradable systems. Most of this work deals with steady-state solutions of systems having repairable components. The second approach

considers transient solutions of the expected reward and is based on semi-Markov reward models such as those discussed by Howard [155]. The formulation is in terms of a set of integral equations which must be solved in order to determine the expected reward. Because of the many inherent convolutions, the equations can be conveniently written in terms of Laplace transforms. However, the equations are generally difficult to solve (and, if done in transform space, to invert), and so practical applications are limited. De Souza [22] has applied Howard's work to fault-tolerant computing systems via a unified reliability/cost model; however, these methods presume that the system is in steady state. Though some of the underlying characteristics (especially the nature of transition graphs) of these potential and reward models are allowed to be more general than those considered in this thesis, these models are restricted to be semi-Markov and, more importantly, the evaluation results are expected values rather than probability distribution functions. Relatively little work has been done on obtaining transient solutions of the probability distribution function (PDF) of the reward variable. Notably, Cherkesov [92] presents an elegant solution for the probability of accumulating a minimum reward during a finite interval when the underlying process is semi-Markov. The length of the interval can itself be random. However, the solution is in terms of a system of equations of two-dimensional Laplace-Carson transforms (see p. 39 of [196] for a description of this transform). Solving and inverting these equations is intractable except in the simplest of cases. Instead, Cherkesov employs the transforms to obtain expected values. This work is mainly of theoretical interest. In the presentation that follows, we obtain a method for determining the PDF of the reward variable, subject to certain conditions imposed on the base model and on the corresponding reward model.
Specifically, we assume that the base model is a finite-state process and is "acyclic" in the sense that states are not revisited by the process. However, the process need not be Markov or even semi-Markov. We assume further that the corresponding reward model is a nonrecoverable process in the sense that a future state (reward rate) of the model cannot be greater than the present state. These conditions reflect properties that are typically exhibited by degradable, nonrepairable computing systems. For

this model class, we are able to obtain the probability distribution function of the performance (reward) variable and, hence, the performability of the corresponding system. Moreover, this is done for bounded utilization periods, an assumption which demands a relatively complex solution. The approach is based on the strategy used in [33], [56] to solve performability once the performance (reward) rates were obtained via a queueing model analysis. (There, in reward terms, the reward rate of a given structure state is taken to be the normalized throughput rate; performance is viewed as average reward, i.e., total reward divided by the duration of utilization.) This earlier work, however, does not suggest specific techniques for implementing the prescribed steps. The computational example presented in [33], [56] (namely a 3-state process) is solved using graphical arguments to determine appropriate regions of integration. Such an approach becomes more difficult when the number of states is four and becomes intractable when the number of states is five or more. This chapter describes a solution technique wherein regions of integration are determined in a tractable, algorithmic fashion. Section 5.2 [Reward Models] discusses the model; Section 5.3 [Reward Model Solution] presents the solution and describes a program implementing the solution procedure. Examples of closed-form solutions are presented in Section 5.3.9 [Closed-Form Solutions] and applications are illustrated in Section 5.3.11 [Numerical Solutions].

5.2. Reward Models

5.2.1. Definition of Reward Models

We consider reward models of the type discussed in [155], although the stochastic process need not be semi-Markovian. We restrict our attention, however, to the case where reward is determined solely by reward rates (referred to in [55], [59], [62] as "operational rates") associated with states of the model. We assume further that these rates are constant, i.e., time-invariant.
More general reward structures (time-varying rates and "bonuses" associated with state transitions) are also accommodated by our approach, and they are considered

in Section 5.4 [History Based Reward Models (HBRMs)]. A state, in this context, typically represents a certain "operational status" of the system, including configurations wherein the system is not operating (system failure). In a given state, the associated reward rate reflects the pace at which the system rewards its user. Such rates can thus be identified with aspects of system performance such as productivity, responsiveness, and utilization, or, at a higher level, with broader measures such as economic benefit. In Section 5.2.3 [Determination of Reward Rates], we discuss reward rates in greater detail: what such rates can represent and how to obtain them. More formally, let X_τ be the random variable denoting the state of the system at time τ. Accordingly, the base model of the system is the stochastic process

    X = {X_τ | τ ∈ [0, ∞)}.    (5.1)

For reasons given later in the discussion, we restrict X to be finite-state, i.e., X_τ ∈ Q = {q_N-1, ..., q_1, q_0}. Suppose further that each q_i ∈ Q has an associated reward rate r_i (a nonnegative real number). Then there is a natural function

    r: Q → R+    (5.2)

where r(q_i) = r_i is the rate associated with q_i; r is referred to as a reward structure of X. Let X̃_τ, τ ∈ [0, ∞), be a random variable [taking values in r(Q)] representing the reward rate of S at time τ, that is,

    X̃_τ = r(X_τ).    (5.3)

Then the stochastic process

    X̃ = {X̃_τ | τ ∈ [0, ∞)}    (5.4)

is a reward model of the system. Such reward models are a special class of the type of "operational models" investigated by Wu [62]. (Also, compare with the "capacity models" of

Gay [91], Gay and Ketelsen [19].) The performability models we consider consist of a reward model X̃ along with a performance (reward) variable Y representing the total reward accrued during utilization of the system. More precisely, given a reward model X̃ and a utilization period T = [0, t], we take Y to be the random variable

    Y = ∫_0^t X̃_τ dτ.    (5.5)

Equivalently, in terms of the base model X and the reward structure r,

    Y = ∫_0^t r(X_τ) dτ.    (5.6)

5.2.2. An Example of a Reward Based Performability Model

Consider a computing system S consisting of N processors, each processor having a computational capacity of δ jobs/hour. In addition, there is an air conditioner which does not affect the computational capacity of the system (though it may affect the failure characteristics of the processors). The base model X is a 2(N+1)-state (not necessarily Markov) stochastic process whose state-transition diagram¹ appears in Fig. 5.1. In state (i,j), i ∈ {0, 1, ..., N} denotes the number of operational processors and j is 1 if the air condi-

¹Informally, a state-transition diagram for a stochastic process X is a diagram representing a directed graph whose nodes (points) denote the states of X and which has a directed arc (line) from node q_i to q_j if and only if, with nonzero probability, X can visit state q_i and then state q_j without visiting any intermediate states. That is, there exist times τ, υ ∈ [0, ∞) where τ < υ, such that Prob[X_τ = q_i and X_υ = q_j and X_θ ∈ {q_i, q_j} for all θ such that τ < θ < υ] > 0. (To be able to discuss the behavior of X during an interval, we assume X is separable.) If X is Markov and if the transition rate for the transition from q_i to q_j is associated with the directed arc, then the diagram of the graph is a state-transition-rate diagram. State (node) q_j is reachable from state (node) q_i if there exists a nontrivial sequence of distinct states (nodes) q_i, q_1, q_2,
..., q_j beginning at q_i and ending at q_j such that each neighboring pair in the sequence [i.e., (q_i, q_1), (q_1, q_2), ..., (q_n, q_j)] is connected by a directed arc. The state-transition diagram is acyclic if the graph is acyclic, i.e., no point is reachable from itself.

Fig. 5.1 Transition diagram for the Markov model of the example of Section 5.2.2

tioner is operational and 0 otherwise. Assuming system capacity is proportional to the number of operational processors, the reward structure is the function r(i,j) = iδ. Accordingly, the reward model X̃ has N+1 states corresponding to the N+1 different reward rates. Relative to a specified utilization period T, the performance (reward) variable Y is the number of jobs processed during T.

5.2.3. Determination of Reward Rates

An important consideration when applying the methodology described in this chapter is the interpretation and derivation of the reward rates. In this section, we delineate several possible interpretations and quantifications of reward. All determinations can be made using one of at least four methods: A) measurement, B) simulation, C) analytic techniques, and D) subjective judgement. The accuracy of the resulting reward model depends on which of the above techniques is employed. Generally, A) is more accurate than B), which is more accurate than C), which, in turn, is more accurate than D); conversely, the most accurate methods are usually the most expensive to employ. The following paragraphs discuss several possible interpretations of reward. This survey is by no means complete; there are certainly many other applications not considered here.

1) Capacity and Workload: One basic measure of a computing system's reward rate is the speed at which the system is able to perform computations. This is commonly referred to as capacity. The work of Beaudry [17], Gay [91], Gay and Ketelsen [19], Castillo and Siewiorek [21], Oda, Tohma, and Furuya [192], and Munarin [194] models capacity. A related concept is that of workload, which is the demand for computation placed on the system by the environment. Studies by Gay [91] and Gay and Ketelsen [19] examine workload models.

2) Queues and Networks of Queues: A detailed view of the system's behavior often can be obtained by studying queueing models of the system.
Examples of such models abound in the performance evaluation literature; see Kobayashi [3], Ferrari [2], Chandy and Sauer [4], and Trivedi [73]. For instance, S could be a multiprocessor (k servers) and a buffer (queue)

for storing arriving tasks; S is then a G/G/k queueing system. If arrival and service rates are exponential and the buffer length is L, then S is the M/M/k/L+k system considered by Meyer [33]. The reward rate and total reward associated with a queueing system depend on what is of concern to the analyst. One could associate reward with throughput rate, system (or service) time of customers, server utilization, number of customers in the system, etc. Hence, for a given structural configuration, standard queueing analysis (e.g., see Gross and Harris [197] or Kleinrock [198]) or simulation could provide any of the above rates. In [33], for instance, the reward is based on a "normalized" throughput rate. A study by Huslende [24] considers response times of M/M/n queues, with suggested generalizations to M/G/1 and GI/M/n queues. We generally wish to avoid the difficult problem of transient solutions of queueing models, so we make some reasonable assumptions. Usually, therefore, one assumes that the average time a customer is in the system is very small compared to the utilization period [0, t] and to the average time between changes in the structural state of the system. The differences in these times may be many orders of magnitude. For instance, in the M/M/k/L+k system of [33], the customers are computational tasks arriving and being serviced in terms of seconds or milliseconds, while structural changes occur when processors and buffers fail with MTBFs of hundreds of hours.

3) Profit and Expense: An important measure of the reward derived from a system is the profit (say, in terms of dollars) obtained from the system, or, in a negative context, the expense of the system. Each system state has an associated rate of profit or expense. Such values may be obtained, for instance, by economic analysis or by utility techniques, e.g., Raiffa [199]. Koren and Berg [189] and Huslende [193] have based analyses on economic factors.
4) Control Theoretic: When the system is a real-time control processor, the reward rate associated with a given state could be specified as a function of such control-theoretic con

cepts as response time. For instance, Krishna and Shin [26] have examined such descriptions, while Gai and Adams [28] have discussed tradeoffs between "optimal" and "robust" response times.

5.2.4. Nonrecoverable Processes

5.2.4.1. Definition of a Nonrecoverable Process

Consider the "desired" behavior of a degradable computing system used over a bounded period of time. Given a set of available resources, a well-designed degradable computing system configures itself to maximize the reward rate. Thus, when a component fails, the system does not reconfigure itself in a manner that causes the reward rate to be greater than before the failure. We will assume there is no repair (i.e., component replacement via an external source), so an increase in the reward rate due to the acquisition of additional components cannot occur. Transient faults that could lower the reward rate temporarily, thereby raising the rate again when the fault is corrected, are not considered. Under these conditions, the reward rate of the system is non-increasing in time, which has the interpretation that the system does not become a "better" system after a change in state (e.g., a component failure). These assumptions are formalized by requiring that the reward model X̃ be a nonrecoverable process [62] relative to the usual ordering of real numbers. In other words, for all states q_1, q_2 ∈ Q and all times τ, υ ∈ [0, ∞) such that τ < υ, we require that

    Prob[X̃_τ = r(q_1) and X̃_υ = r(q_2)] > 0 ⇒ r(q_2) ≤ r(q_1).    (5.7)

Consider the process X of the multiprocessor/air conditioner example discussed above. Clearly, by the state-transition diagram of Fig. 5.1 and the definition of the reward structure, the reward rate of the process cannot increase (with positive probability). Hence, the associated reward model X̃ is an example of a nonrecoverable process.
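For a model given by a transition graph, condition (5.7) reduces to checking that no arc increases the reward rate. A minimal sketch, assuming N = 2 processors and the reward structure r(i, j) = iδ of Section 5.2.2; the value of δ and the arc list below are illustrative renderings of the example, not taken from Fig. 5.1 itself:

```python
delta = 6.0                                    # assumed jobs/hour per processor
states = [(i, j) for i in range(3) for j in (0, 1)]
r = {(i, j): i * delta for (i, j) in states}   # reward structure r(i, j) = i*delta

# Arcs: a processor failure or an air-conditioner failure; no repair.
arcs = ([((i, j), (i - 1, j)) for (i, j) in states if i > 0]
        + [((i, 1), (i, 0)) for i in range(3)])

def nonrecoverable(arcs, r):
    """True iff no one-step transition raises the reward rate (cf. Eq. 5.7)."""
    return all(r[dst] <= r[src] for src, dst in arcs)

print(nonrecoverable(arcs, r))   # -> True: the model degrades monotonically
```

Note that an air-conditioner failure leaves the rate unchanged, which is why the inequality in (5.7) must be non-strict; adding a repair arc would make the check fail.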

5.2.4.2. Some Properties of Nonrecoverable Processes

The following are derivable immediately from the definition of a nonrecoverable process:

Theorem 5.1: (Necessary and sufficient conditions for X̃ to be a nonrecoverable process) Let X be a finite-state stochastic process with state space Q and reward structure r. For all states q_1, q_2 ∈ Q and all times τ, υ ∈ [0, ∞) such that τ < υ, the following are equivalent statements:

i) X̃ = {r(X_τ) | τ ∈ [0, ∞)} is a nonrecoverable process.    (5.8)

ii) Prob[X̃_τ = r(q_1) and X̃_υ = r(q_2)] > 0 ⇒ r(q_2) ≤ r(q_1).    (5.9)

iii) Prob[X_τ = q_1 and X_υ = q_2] > 0 ⇒ r(q_2) ≤ r(q_1).    (5.10)

iv) Prob[X̃_υ = r(q_2) | X̃_τ = r(q_1)] > 0 ⇒ r(q_2) ≤ r(q_1).    (5.11)

v) Prob[X_υ = q_2 | X_τ = q_1] > 0 ⇒ r(q_2) ≤ r(q_1).    (5.12)

vi) Prob[X̃_υ ≤ r(q_1) | X̃_τ = r(q_1)] = 1.    (5.13)

vii) Prob[X̃_υ ≤ X̃_τ] = 1.    (5.14)

Proof:

i) ⇔ ii): by definition of nonrecoverable process.    (5.15)

ii) ⇔ iii): by definition of X̃_τ.    (5.16)

ii) ⇔ iv): Since

    Prob[X̃_υ = r(q_2) | X̃_τ = r(q_1)] = Prob[X̃_τ = r(q_1) and X̃_υ = r(q_2)] / Prob[X̃_τ = r(q_1)],

we have

    (Prob[X̃_τ = r(q_1) and X̃_υ = r(q_2)] > 0 ⇒ r(q_2) ≤ r(q_1))
    ⇔ (Prob[X̃_υ = r(q_2) | X̃_τ = r(q_1)] > 0 ⇒ r(q_2) ≤ r(q_1)).    (5.17)

iv) ⇔ v): by definition of X̃_τ.    (5.18)

iv) ⇔ vi): Since Σ_{q ∈ Q} Prob[X̃_υ = r(q) | X̃_τ = r(q_1)] = 1, then

    (Prob[X̃_υ = r(q_2) | X̃_τ = r(q_1)] > 0 ⇒ r(q_2) ≤ r(q_1))
    ⇔ Σ_{q ∈ Q, r(q) ≤ r(q_1)} Prob[X̃_υ = r(q) | X̃_τ = r(q_1)] = 1    (5.19)
    ⇔ Prob[X̃_υ ≤ r(q_1) | X̃_τ = r(q_1)] = 1.

vi) ⇔ vii):

    (Prob[X̃_υ ≤ r(q_1) | X̃_τ = r(q_1)] = 1)
    ⇔ (Prob[X̃_υ ≤ r(q_1) and X̃_τ = r(q_1)] = Prob[X̃_τ = r(q_1)])
    ⇔ (summing over the possible values r(q_1) of X̃_τ)
    (Prob[X̃_υ ≤ X̃_τ] = 1).    (5.20)
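Properties vi) and vii) can be observed directly on a small simulated model. The sketch below assumes a 3-state degrading process with exponential holding times (all rates and rewards are illustrative, not from the text); by construction every sampled reward-rate trajectory is non-increasing, and integrating the rate over [0, t] gives one sample of the reward variable Y of Eq. 5.6.

```python
import random

reward = {2: 2.0, 1: 1.0, 0: 0.0}    # assumed reward structure r(q_i)
exit_rate = {2: 0.1, 1: 0.2}         # assumed exit rates; q0 is absorbing

def sample_path(t, rng):
    """Return (reward-rate trajectory, accumulated reward Y) over [0, t]."""
    state, now, y, rates = 2, 0.0, 0.0, []
    while state > 0 and now < t:
        hold = min(rng.expovariate(exit_rate[state]), t - now)
        y += reward[state] * hold
        rates.append(reward[state])
        now += hold
        if now < t:
            state -= 1               # degradation only: the rate never rises
    if now < t:
        rates.append(reward[0])      # absorbed in q0, earning no further reward
    return rates, y

rng = random.Random(1)
ys = []
for _ in range(5000):
    rates, y = sample_path(10.0, rng)
    assert all(a >= b for a, b in zip(rates, rates[1:]))  # rate is non-increasing
    ys.append(y)
print(sum(ys) / len(ys))             # Monte Carlo estimate of E[Y]
```

The chapter's goal, of course, is the full distribution of Y rather than such point estimates; the simulation only illustrates the nonrecoverable behavior being formalized here.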

The following result (which we shall not prove here) is slightly more difficult and requires a separability condition:²

Theorem 5.2: (Necessary and sufficient conditions for X̃ to be a nonrecoverable process) Let X be a separable process with reward structure r and state space Q. Then X̃ is a nonrecoverable process if and only if for all states q ∈ Q and for all times υ ∈ [0, ∞):

    Prob[X̃_τ ≥ r(q) for all τ < υ and X̃_υ = r(q)] = Prob[X̃_υ = r(q)].    (5.21)

5.2.5. Acyclic Processes

5.2.5.1. Definition of an Acyclic Process

An essential step of our model solution procedure (Section 5.3 [Reward Model Solution]) is to first evaluate conditional performabilities, conditioned on the sequence of states entered by the process during the unbounded period [0, ∞). For this reason, we restrict our attention to finite-state base models. To ensure a finite number of state sequences and hence conditions, we also require that the base model be "acyclic" in the sense that states are not revisited by the process. More precisely, let M_i be the random variable denoting the total number of visits of the process X to state q_i ∈ Q during the interval [0, ∞). X is an acyclic process if for all q_i ∈ Q, Prob[M_i > 1] = 0.

5.2.5.2. Properties of Acyclic Processes

A well-known necessary and sufficient condition for an acyclic process follows:

²We implicitly assume all underlying stochastic processes discussed in this thesis are separable.

Theorem 5.3: (Sufficient and necessary condition for a process to be acyclic) Let X be a stochastic process with state space Q. Then X is acyclic if and only if its state-transition diagram³ is acyclic.

Proof: Clear from the definitions of acyclic process and acyclic state-transition diagram.

An absorbing state is one which the process can never leave, i.e., q_i ∈ Q is an absorbing state if for all τ, υ ∈ [0, ∞) such that τ < υ, Prob[X_υ = q_i | X_τ = q_i] = 1.

Theorem 5.4: (Every finite-state, acyclic process must have at least one absorbing state) Let X be an acyclic process with finite state space Q. Then at least one state q_i ∈ Q is absorbing.

Proof: Assume otherwise, i.e., no state is absorbing. Let |Q| = N. Suppose for some q_1 ∈ Q and some τ_1 ∈ [0, ∞) that Prob[X_τ1 = q_1] > 0. Then there exists a q_2 ∈ Q and a τ_2 ∈ [0, ∞) such that q_1 ≠ q_2, τ_1 < τ_2, and Prob[X_τ2 = q_2 | X_τ1 = q_1] > 0. Continuing iteratively, we find that there exist q_2, ..., q_N+1 and τ_2, ..., τ_N+1 such that q_2 ≠ q_3, q_3 ≠ q_4, ..., q_N ≠ q_N+1, τ_2 < τ_3 < ... < τ_N+1, and so

    Prob[X_τN+1 = q_N+1 | X_τN = q_N] · Prob[X_τN = q_N | X_τN-1 = q_N-1] · ... · Prob[X_τ2 = q_2 | X_τ1 = q_1] · Prob[X_τ1 = q_1] > 0    (5.22)

    ⇒ Prob[X_τN+1 = q_N+1, X_τN = q_N, ..., X_τ1 = q_1] > 0.

³See the footnote in Section 5.2.2 [An Example of a Reward Based Performability Model].
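Theorems 5.3 and 5.4 suggest two purely structural checks on a finite base model: the state-transition graph must contain no cycle, and an acyclic graph must contain a state with no outgoing arcs. A sketch over an assumed adjacency-list encoding of the graph:

```python
def is_acyclic(graph):
    """Depth-first cycle detection on a directed graph {state: successors}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {q: WHITE for q in graph}
    def visit(q):
        color[q] = GRAY
        for s in graph[q]:
            if color[s] == GRAY or (color[s] == WHITE and not visit(s)):
                return False             # a gray successor closes a cycle
        color[q] = BLACK
        return True
    return all(visit(q) for q in graph if color[q] == WHITE)

def absorbing_states(graph):
    """States the process can never leave: those with no outgoing arcs."""
    return [q for q, succ in graph.items() if not succ]

g = {"q2": ["q1", "q0"], "q1": ["q0"], "q0": []}
print(is_acyclic(g), absorbing_states(g))   # -> True ['q0']
```

Identifying arcs with one-step transitions of positive probability, these checks decide acyclicity of the process itself (Theorem 5.3) and exhibit the absorbing state guaranteed by Theorem 5.4.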

However, |Q| = N, and so q_j = q_k for some j ≠ k. Thus,

0 < Prob[X_{τ_j} = q_j ∧ X_{τ_k} = q_k] ≤ Prob[M_{q_j} ≥ 2],  (5.23)

which contradicts X being acyclic. ∎

5.2.5.3. Acyclic Properties of Nonrecoverable Models

This section examines the relationship between acyclic processes and nonrecoverable models. We shall find that nonrecoverable models are acyclic, and that the underlying base model must either be acyclic or else have an acyclic nature in the sense that all states in a cycle must have the same reward rate.

Theorem 5.5: (Nonrecoverable models are acyclic)

Proof: Assume otherwise, i.e., that there exists a cyclic nonrecoverable model X. Then there must be a state (reward) r_i reachable from itself, i.e., there must be a second state (reward) r_j such that X can, with probability greater than zero, visit r_i, then r_j, and then r_i again. But since X is nonrecoverable, r_i ≥ r_j ≥ r_i, so r_i = r_j; that is, r_i and r_j are the same state (reward). Therefore, X cannot be cyclic. ∎

Theorem 5.6: (Acyclic nature of the base models of nonrecoverable models) Let X be a nonrecoverable model. The base model of X can be either cyclic or acyclic. If it is cyclic, and q_1, q_2 ∈ Q are in a common cycle (i.e., q_1 and q_2 are reachable from each other), then r(q_1) = r(q_2).

Proof: The base model can be acyclic: if for every q_1, q_2 ∈ Q such that q_2 is reachable from q_1 we have r(q_2) ≤ r(q_1), then X is nonrecoverable, and the base model may well be acyclic. Suppose now that

the base model is cyclic and q_1, q_2 ∈ Q are in a common cycle. Then r(q_2) ≤ r(q_1), since X can reach q_2 after q_1. Similarly, r(q_1) ≤ r(q_2). Hence r(q_1) = r(q_2). ∎

5.2.5.4. Approximating Cyclic Processes

When evaluating fault-tolerant systems where repair is not allowed, the acyclic condition is not restrictive, since once the system leaves a state due to a fault, the system can never return to that state. Often, however, one is interested in analyzing systems with transient faults or temporary failures. Models of such systems are generally cyclic, because if the system recovers from the fault, the system returns to some previously entered state. Such recoveries can occur an unbounded number of times during [0, t], resulting in an infinite number of state sequences.

One approach for approximating cyclic base models is to "unravel" the process into an acyclic base model. This is done by simply choosing a finite set of state sequences to consider. For example, the set chosen can be those sequences which have at least some threshold probability of occurring during the utilization period [0, t]. In other words, those sequences of sufficiently low probability are ignored. Alternatively, the set could include all sequences less than or equal to some specified length.

A second approach is to "lump" all the states in each cycle. Because each state in a cycle must have the same reward rate (see Theorem 5.6), the reward rate of each lumped state is the common rate of its component states. The difficulty with this approach is determining the equivalent stochastic behavior of the lumped model.
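The threshold-based "unraveling" just described can be sketched directly. Assuming the embedded jump chain of the base model is given as a table of transition probabilities (the chain below, a transient-fault loop, is entirely hypothetical), we enumerate state sequences and prune any branch whose probability falls below a chosen threshold:

```python
# Sketch of the "unraveling" approximation: enumerate the state sequences of
# a (possibly cyclic) embedded jump chain, dropping branches below eps.
# The transition probabilities and state names are assumed for illustration.

def unravel(jump, start, absorbing, eps):
    """Return {sequence: probability} for sequences ending in an absorbing
    state and retained by the threshold; low-probability branches are cut."""
    sequences = {}
    frontier = [((start,), 1.0)]
    while frontier:
        seq, p = frontier.pop()
        state = seq[-1]
        if state in absorbing:
            sequences[seq] = p
            continue
        for nxt, q in jump[state].items():
            if p * q >= eps:                 # prune improbable continuations
                frontier.append((seq + (nxt,), p * q))
    return sequences

# Transient-fault loop: 'ok' <-> 'transient', with permanent failure possible.
jump = {'ok':        {'transient': 0.6, 'failed': 0.4},
        'transient': {'ok': 0.9, 'failed': 0.1}}
seqs = unravel(jump, 'ok', {'failed'}, eps=0.05)
print(seqs[('ok', 'failed')])   # 0.4
print(sum(seqs.values()))       # total probability mass retained (< 1)
```

The retained mass quantifies the approximation error: lowering eps recovers more of the cyclic behavior at the cost of more (longer) acyclic sequences.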

5.3. Reward Model Solution

5.3.1. A Partition of the Trajectory Space

As noted in [56], one approach to determining performability is to partition the trajectory space and solve for each of the resulting classes of trajectories. During the unbounded period [0, ∞), the base model process X will, with probability 1, pass through some finite sequence of distinct states, say u = (u_n, u_{n-1}, ..., u_0), where state u_n is the initial state and state u_0 is an absorbing state. X is nonrecoverable (see above), so there is a sequence of reward rates (r(u_n), r(u_{n-1}), ..., r(u_0)) corresponding to u such that

r(u_n) ≥ r(u_{n-1}) ≥ ⋯ ≥ r(u_0).  (5.24)

Let U be a random variable denoting the sequence of states visited by the base model during [0, ∞). Since the base model is acyclic, and since there are N < ∞ base model states, there are only a finite number of possible state sequences, i.e., sequences u such that Prob[U = u] > 0. Let Y be the performance (reward) variable defined in (5.5) and let F_Y be the PDF of Y. Further, if F_{Y|U} is the conditional PDF of Y given U, then F_Y can be expressed as the following summation over a finite index set:

F_Y(y) = Σ_u F_{Y|U}(y|u) Prob[U = u].  (5.25)

5.3.2. Notation

At this point, it is convenient to introduce a body of notation for dealing with the time the process spends in various states during both T = [0, t] and [0, ∞). Unless otherwise noted, the remarks that follow assume U to be some arbitrary sequence u = (u_n, u_{n-1}, ..., u_0) such that Prob[U = u] > 0.

For each state sequence u = (u_n, u_{n-1}, ..., u_0) there is a vector-valued random variable V_u = (V_n, V_{n-1}, ..., V_0), taking values in (ℝ⁺ ∪ {∞})^{n+1}, which describes the time the process resides in each state in u. For 0 ≤ i ≤ n, V_i is a random variable representing the time the base model process resides in state u_i during the interval [0, ∞). State u_0 is an absorbing state, and so V_0 is ∞. We will be interested in the PDF for V_u conditioned on U = u, i.e.,

F_{V_u|U}(v_u | u) = Prob[V_u ≤ v_u | U = u]  (5.26)
 = Prob[V_n ≤ v_n ∧ V_{n-1} ≤ v_{n-1} ∧ ⋯ ∧ V_0 ≤ v_0 | U = u].

If the conditional probability density function for V_u exists, it will be written f_{V_u|U}. We suppose that f_{V_u|U} exists, and we find it convenient to expand it as follows:

f_{V_u|U}(v_u | u) = f_{V_n|U}(v_n | u) · f_{V_{n-1}|V_n,U}(v_{n-1} | v_n, u) ⋯ f_{V_0|V_n,...,V_1,U}(v_0 | v_n, ..., v_1, u).  (5.27)

For a given u, define W_u^i to be a random variable denoting the amount of time the process is in state u_i during the utilization period T. The basic relationship between W_u^i and V_u can be expressed as

W_u^i = min(t, V_n), if i = n;
W_u^i = max( min( t − Σ_{j=i+1}^n V_j, V_i ), 0 ), otherwise.  (5.28)

Note that if U = u and if W_u^i = 0 and V_i > 0, then W_u^{i-1} = W_u^{i-2} = ⋯ = W_u^0 = 0. Also note that the value of W_u^i can always be expressed by one of the following alternatives: t, V_i, t − Σ_{j=i+1}^n V_j, or 0.

5.3.3. The Approach

We seek to determine the PDF F_Y of the reward variable Y (Eq. 5.25). The algorithm delineated in [56] will be employed:

Algorithm 5.2: (Determining the probability distribution function F_Y)

1) Determine all state sequences u, i.e., the range of the random variable U.
2) For each of these u, determine Prob[U = u].
3) For each u such that Prob[U = u] > 0, determine F_{Y|U}(y|u).
4) Apply Eq. 5.25 to arrive at F_Y.

For the present, we assume that steps 1) and 2) have been completed. Consider step 3), the calculation of F_{Y|U}. Using the notation introduced in the previous section, the reward variable Y can be directly expressed as a linear combination of the random variables W_u^i. If U = u, then

Y = Σ_{i=0}^n r(u_i) W_u^i.  (5.29)

If the W_u^i were independent random variables, we could obtain the PDF F_Y by convolution. However, as discussed in [33], [56], the W_u^i are statistically dependent, and a probabilistic characterization is generally difficult to obtain. Such complications are avoided by formulating Y as a function γ_u of V_u (if it is known that U = u). Although the basic concept is simple to state, the details are somewhat complex. Using the relationship between W_u^i and V_u of Eq. 5.28 (see Eq. 5.6),

y = γ_u(V_u) = r(u_j)t + Σ_{k=j+1}^n (r(u_k) − r(u_j)) V_k, if Σ_{k=j+1}^n V_k < t and Σ_{k=j}^n V_k ≥ t, for n ≥ j ≥ 0;
y = γ_u(V_u) = r(u_n)t, if V_n ≥ t.  (5.30)

See Fig. 5.2 and compare with Eq. 24 of [33].

Let B_y be the set of accomplishment levels (reward outcomes) not greater than y, i.e., B_y = {b ≥ 0 | b ≤ y}, and let C_y = γ_u^{-1}(B_y) be the set of all base model trajectories which

[Fig. 5.2. Decomposition of γ_u(v_u).]

traverse the state sequence u and provide reward no greater than y. Then

F_{Y|U}(y|u) = Prob[V_u ∈ C_y | U = u] = ∫_{C_y} f_{V_u|U}(v_u | u) dv_u.  (5.31)

Since the probability density function f_{V_u|U}(·|·) is assumed to exist and be known, to formulate F_{Y|U} we must determine C_y.

5.3.4. Formulation of F_{Y|U}

In the next section (Section 5.3.5 [i-Resolvability]), we show that, depending on the values of u and y, C_y can be easily partitioned, i.e., expressed as a union of disjoint regions C_y^i. Such a partition provides the ability to break up F_{Y|U} into a sum of smaller, less complex integrations. In particular, if the length of the sequence u is n+1, C_y will be decomposed into n+2 disjoint regions

{C_y^{n+1}, C_y^n, ..., C_y^0}.  (5.32)

(The disjoint sets C_y^i will correspond to those trajectories which are "i-resolvable"; see Section 5.3.5 [i-Resolvability]. For the discussion of this section, it is sufficient to consider the C_y^i to be arbitrary disjoint sets.)

Let v_u be a point in C_y^i, and let

C_{y,n}^i = { v_n | there exist v_{n-1}, v_{n-2}, ..., v_0 such that (v_n, v_{n-1}, ..., v_0) ∈ C_y^i }  (5.33)

and, for n > j ≥ 0, let

C_{y,j}^i(v_n, v_{n-1}, ..., v_{j+1}) = { v_j | there exist v_{j-1}, v_{j-2}, ..., v_0 such that (v_n, v_{n-1}, ..., v_0) ∈ C_y^i }.  (5.34)

In other words, C_{y,j}^i(v_n, v_{n-1}, ..., v_{j+1}) is the set of all values v_j that are "contained" in

some v_u ∈ C_y^i. For conciseness, we shall write C_{y,j}^i in place of C_{y,j}^i(v_n, v_{n-1}, ..., v_{j+1}) when the (v_n, v_{n-1}, ..., v_{j+1}) is implicit.

Consider γ_u (Eq. 5.30). The regions C_y^i are not generally Cartesian; that is, C_y^i is usually not expressible as the cross-product of the C_{y,j}^i. Rather, a different relationship based on "i-resolvability" (see Section 5.3.5 [i-Resolvability]) will be employed to define the C_y^i. By the definitions of the C_y^i and C_{y,j}^i, Eq. 5.31 can now be expressed as

F_{Y|U}(y|u) = Prob[V_u ∈ C_y | U = u] = Σ_{i=0}^{n+1} ∫_{C_y^i} f_{V_u|U}(v_u|u) dv_u  (5.35)
 = Σ_{i=0}^{n+1} ∫_{C_{y,n}^i} ∫_{C_{y,n-1}^i} ⋯ ∫_{C_{y,0}^i} f_{V_u|U}(v_u|u) dv_u.

The expansion of f_{V_u|U}(v_u|u) in Eq. 5.27 is especially appropriate in this representation. Finally, combining Eqs. 5.25, 5.27, and 5.35 with the characterization of the C_{y,j}^i developed below, we arrive at an integral solution for the PDF F_Y.

5.3.5. i-Resolvability

5.3.5.1. Definition of i-Resolvability

In this section, we describe a method of partitioning the set C_y = γ_u^{-1}(B_y) into the C_y^i (Eq. 5.32) and characterizing the C_{y,j}^i (Eqs. 5.33 and 5.34). The immediate difficulty with characterizing the C_y^i arises from the nature of γ_u (Eq. 5.30), viz., γ_u is a sum of a random number of random variables V_n, V_{n-1}, ..., V_{j+1}. We introduce the notion of "i-resolvability," which allows the decomposition of the single large problem of describing the sum of a varying number of random variables into a set of smaller problems, each consisting of describing the sum of a fixed number of random variables. The number of random variables to be considered will be determined by i-resolvability, a notion which takes full advantage of both the monotonicity of γ_u and the finite utilization period. The concept can be loosely stated: "for a given subtrajectory, based on the past and regardless of the future, what can we say about the entire trajectory?"
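Before developing the exact partition, the quantities above can be checked by brute force. The sketch below (our own encoding; rates and sojourn times listed from u_n down to u_0) evaluates γ_u(v_u) via Eqs. 5.28 and 5.29 for a single trajectory, then estimates the conditional PDF F_{Y|U}(y|u) of Eq. 5.31 by sampling sojourn times from an assumed density (exponential holding times, purely for illustration):

```python
# Sketch: evaluate gamma_u(v_u) (Eqs. 5.28-5.29) and estimate F_{Y|U}(y|u)
# (Eq. 5.31) by Monte Carlo.  The exponential holding-time rates `mu` are an
# assumed example density, not part of the model itself.
import random

def gamma_u(r, v, t):
    """Accumulated reward over [0, t]: rates r = (r(u_n),...,r(u_0)),
    sojourn times v = (v_n,...,v_0), with v_0 = inf (absorbing state)."""
    total, elapsed = 0.0, 0.0
    for rate, v_i in zip(r, v):
        w_i = max(min(t - elapsed, v_i), 0.0)   # occupation time W_u^i, Eq. 5.28
        total += rate * w_i                     # Eq. 5.29
        elapsed += v_i
    return total

def estimate_F(y, r, mu, t, samples=20000, seed=1):
    """Fraction of sampled trajectories with gamma_u(V_u) <= y."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        v = [rng.expovariate(m) for m in mu] + [float('inf')]  # V_0 = inf
        if gamma_u(r, v, t) <= y:
            hits += 1
    return hits / samples

r, t = (2.0, 1.0, 0.0), 4.0
print(gamma_u(r, (1.0, 3.0, float('inf')), t))   # 5.0
print(estimate_F(8.0, r, (1.0, 1.0), t))         # 1.0, since y >= r(u_n)t
```

Since y = r(u_n)t bounds every trajectory's reward, the second estimate is exactly 1; for smaller y the estimate falls strictly between 0 and 1 and can serve as a sanity check on the integral solution developed below.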

We are interested in determining the probability that the total accumulated reward γ_u(V_u) ≤ y. Recall that the time the process resides in each state of the sequence u is given by the random variable V_u = (V_n, V_{n-1}, ..., V_0). Suppose that the first n−i+1 of these sojourn times are known, i.e., (V_n, V_{n-1}, ..., V_i) = (v_n, v_{n-1}, ..., v_i). Knowledge of these times conveys significant information regarding the set of possible sojourn times (V_{i-1}, V_{i-2}, ..., V_0) that can be associated with the remaining states and still satisfy γ_u(V_u) ≤ y.

As an informal introduction, imagine that someone chooses at random some vector v_u of "possible" sequence times and starts dealing, in order and one at a time, the values v_n, v_{n-1}, ..., v_0. We look at the values as they are dealt. If, after the value v_i has been dealt (but not before), we can determine with certainty that γ_u(v_u) ≤ y, then we shall say that v_u is such that γ_u(v_u) ≤ y is i-resolvable based on v_u, or, more succinctly, that v_u is i-resolvable (y and γ_u are implicit). If at any time we can determine with certainty that γ_u(v_u) > y, then v_u ∉ C_y; we are not interested in that value v_u (since it does not contribute to Prob[Y ≤ y]), and so we reject it.

We now formalize this property. Let γ_u: ℝ^{n+1} → ℝ and let v_u = (v_n, v_{n-1}, ..., v_0) ∈ ℝ^{n+1} be such that γ_u(v_u) ≤ y. In addition, let γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) be the function of the remaining variables derived from γ_u(·, ·, ..., ·) by setting V_n = v_n, V_{n-1} = v_{n-1}, ..., V_i = v_i. When y is (with probability one) an upper bound⁴ of the function γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) but not of γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·), we shall say that v_u is i-resolvable. [As a special case, when y bounds γ_u(·, ·, ..., ·) (with probability one), then v_u is (n+1)-resolvable.] In other words, when a determination as to whether γ_u(v_u) ≤ y cannot be made based solely on the values (v_n, v_{n-1}, ..., v_{i+1}), but such a determination can be made with the additional knowledge of the value of v_i, we shall say that v_u is i-resolvable.
[When v_u is (n+1)-resolvable, the determination can be made with no knowledge of v_u.] (Except where necessary, we shall drop the phrase "with probability one" when referring to y bounding γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·).)

⁴A constant y is an upper bound of the function f: D → ℝ if for all x ∈ D, f(x) ≤ y.

To motivate our interest in i-resolvability, consider the following example. Suppose u is such that (r(u_2), r(u_1), r(u_0)) = (2, 1, 0), and that y = 5 and t = 4. If V_2 = 1, then for γ_u(V_u) ≤ y to be true, we must have W_u^1 ≤ t − V_2 = 4 − 1 = 3 (see Eq. 5.28), and

Σ_{i=0}^2 r(u_i) W_u^i = 2·1 + 1·W_u^1 + 0·W_u^0 = 2 + W_u^1 ≤ 2 + 3 = 5 = y.  (5.36)

Now W_u^1 ≤ 3 irrespective of V_1, so V_1 can therefore take on any value in the interval [0, ∞) and still satisfy γ_u(V_u) ≤ y. In other words, since V_2 = 1, then, without further examination of V_1 or V_0, we know γ_u(V_u) ≤ y. Thus, every (1, v_1, v_0) is 2-resolvable.

v_u is 3-resolvable if y ≥ r(u_2)t = 2·4 = 8, since we then know that γ_u(v_u) ≤ y without examining any of the v_i. v_u is 2-resolvable if

i) y < 8, and  (5.37)
ii) v_2 ∈ [0, y − 4] (e.g., if v_2 = 0 when y = 4),

since if v_2 is in the above range, then γ_u(V_u) ≤ y regardless of the values of v_1 and v_0. Also, v_u is 1-resolvable if

i) y < 8, and  (5.38)
ii) v_2 ∈ (y − 4, ∞).

5.3.5.2. Properties of i-Resolvability

5.3.5.2.1. General Properties

From the definition of i-resolvability, we have the following important

Theorem 5.7: (i-resolvability and bounds of γ_u) v_u is i-resolvable if and only if

A. for i = n+1: γ_u(·, ·, ..., ·), γ_u(v_n, ·, ..., ·), ..., γ_u(v_n, v_{n-1}, ..., v_0) are all bounded by y; and

B. for n ≥ i > 0:

i) γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·), γ_u(v_n, v_{n-1}, ..., v_i, v_{i-1}, ·, ..., ·), ..., γ_u(v_n, v_{n-1}, ..., v_0) are all bounded by y, and

ii) none of γ_u(·, ·, ..., ·), γ_u(v_n, ·, ..., ·), ..., γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·) is bounded by y.

Proof: A: This is simply the definition of (n+1)-resolvability.

B: ⟹: Suppose v_u is i-resolvable. Then by definition γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) is bounded by y. Let v̄_{i-1}, v̄_{i-2}, ..., v̄_0 be the values of v_{i-1}, v_{i-2}, ..., v_0 that maximize γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·). But then

y ≥ γ_u(v_n, ..., v_i, v̄_{i-1}, v̄_{i-2}, ..., v̄_0) ≥ γ_u(v_n, ..., v_i, v_{i-1}, v̄_{i-2}, ..., v̄_0),  (5.39)

y ≥ γ_u(v_n, ..., v_i, v_{i-1}, v̄_{i-2}, ..., v̄_0) ≥ γ_u(v_n, ..., v_i, v_{i-1}, v_{i-2}, v̄_{i-3}, ..., v̄_0),  (5.40)

⋮

 ≥ γ_u(v_n, ..., v_i, v_{i-1}, v_{i-2}, ..., v_0),  (5.41)

and so y is an upper bound of γ_u(v_n, ..., v_i, ·, ..., ·), γ_u(v_n, ..., v_{i-1}, ·, ..., ·), ..., γ_u(v_n, ..., v_0). Now, again by the definition of i-resolvability, γ_u(v_n, ..., v_{i+1}, ·, ..., ·) is not bounded by y, and so there must exist values v̂_i, v̂_{i-1}, ..., v̂_0 such that γ_u(v_n, ..., v_{i+1}, v̂_i, v̂_{i-1}, ..., v̂_0) > y. But then γ_u(v_n, ..., v_{i+2}, ·, ..., ·) is not bounded by y either, for we could choose v_{i+1}, v̂_i, ..., v̂_0 as arguments. Similarly for γ_u(v_n, ..., v_{i+3}, ·, ..., ·), ..., γ_u(·, ·, ..., ·). Thus, γ_u(v_n, ..., v_{i+2}, ·, ..., ·), ..., γ_u(·, ·, ..., ·) are not bounded by y.

⟸: The converse is trivial, since if y bounds γ_u(v_n, ..., v_i, ·, ..., ·) but not γ_u(v_n, ..., v_{i+1}, ·, ..., ·), then by definition v_u is i-resolvable. ∎

One can define i-resolvability recursively, as follows:

Theorem 5.8: (Recursive definition of i-resolvability) For a given v_u = (v_n, v_{n-1}, ..., v_0):

i) v_u is (n+1)-resolvable if and only if γ_u(v_u) ≤ y regardless of the value of v_u.

ii) v_u is i-resolvable if v_u is not j-resolvable for all j > i but γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) ≤ y regardless of the values of (v_{i-1}, v_{i-2}, ..., v_0).

Proof: The initialization step i) is obviously equivalent to the definition of (n+1)-resolvability. (Indeed, it is the definition.) Consider now step ii):

⟹: Let v_u be i-resolvable. Then by Theorem 5.7, y does not bound γ_u(v_n, ..., v_{i+1}, ·, ..., ·), γ_u(v_n, ..., v_{i+2}, ·, ..., ·), ..., γ_u(·, ·, ..., ·). Therefore, v_u cannot be j-resolvable for any j > i. Further, by the definition of i-resolvability, γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) ≤ y.

⟸: Suppose the premise is true. Then v_u is not (i+1)-resolvable, and so γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·) is not bounded by y. But γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) is bounded, and so by definition, v_u is i-resolvable. ∎

Theorem 5.9: (Conditions for v_j = w_u^j) Let n ≥ i > 0. If v_u is i-resolvable, then v_j = w_u^j for all n ≥ j ≥ i+1.

Proof: Assume otherwise, i.e., that v_j ≠ w_u^j for some such j. But then w_u^{j-1} = w_u^{j-2} = ⋯ = w_u^0 = 0 (from Eq. 5.28). Thus,

γ_u(v_n, v_{n-1}, ..., v_j, ·, ..., ·) = γ_u(v_n, v_{n-1}, ..., v_{j-1}, ·, ..., ·) = ⋯ = γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·),  (5.42)

since, regardless of the values of v_{j-1}, v_{j-2}, ..., v_0, their contribution to γ_u will be zero. (See Eq. 5.30.) In particular, γ_u(v_n, ..., v_{i+1}, ·, ..., ·) = γ_u(v_n, ..., v_i, ·, ..., ·), which contradicts the premise that v_u is i-resolvable. ∎

5.3.5.2.2. Existence and Uniqueness Properties

Theorem 5.10: If v_u is i-resolvable, then v_u is not j-resolvable for j ≠ i.

Proof: Clear from the definition of i-resolvability. ∎

Theorem 5.11: v_u is (n+1)-resolvable if and only if y ≥ r(u_n)t.

Proof: Definition of (n+1)-resolvability. ∎

Theorem 5.12: (Maximizing γ_u(v_n, v_{n-1}, ..., v_{i+1}, v_i)) Let n > i ≥ 0. The minimum value of v_i that maximizes the function γ_u(v_n, v_{n-1}, ..., v_{i+1}, v_i) is

v_i = max( t − Σ_{j=i+1}^n v_j, 0 ).  (5.43)

Further, this value is independent of the values v_{i-1}, v_{i-2}, ..., v_0.

Proof: Let γ_u(v_n, v_{n-1}, ..., v_{i+1}, 0, 0, ..., 0) = ȳ. Then

γ_u(v_n, v_{n-1}, ..., v_{i+1}, v_i, v_{i-1}, ..., v_0) = ȳ + Σ_{j=0}^i w_u^j r(u_j).  (5.44)

Since every w_u^j ≥ 0 and r(u_j) ≥ 0, we have Σ_{j=0}^i w_u^j r(u_j) ≥ 0. So γ_u(v_n, v_{n-1}, ..., v_{i+1}, v_i) is maximum when Σ_{j=0}^i w_u^j r(u_j) is maximum. Furthermore, r(u_i) ≥ r(u_{i-1}) ≥ ⋯ ≥ r(u_0), so Σ_{j=0}^i w_u^j r(u_j) is maximum when w_u^i is maximum, and that value of w_u^i is

w_u^i = max( t − Σ_{j=i+1}^n v_j, 0 ),

i.e., w_u^i is all the remaining time in the utilization interval [0, t] after accounting for the intervals (v_n, v_{n-1}, ..., v_{i+1}); the minimum v_i attaining it is v_i = w_u^i. Also, note that in the derivation the values v_{i-1}, v_{i-2}, ..., v_0 do not affect the value of v_i which maximizes γ_u(v_n, v_{n-1}, ..., v_{i+1}, v_i). ∎
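The recursion of Theorem 5.8, combined with the maximizer of Theorem 5.12 (the worst case over the unseen future puts all remaining utilization time into the next state), yields a mechanical test for the resolvability index. The sketch below is our own encoding, with rates listed from u_n down to u_0; the absorbing sojourn v_0 is never examined:

```python
# Sketch of Theorems 5.8, 5.11, and 5.12: reveal the sojourn times one at a
# time and report the first index i at which gamma_u(v_u) <= y is certain.

def resolvability_index(r, v, y, t):
    """r = (r(u_n),...,r(u_0)) nonincreasing; v = (v_n,...,v_1) revealed in
    that order.  Returns i in {n+1, ..., 1}, or None if gamma_u(v_u) > y
    (the trajectory lies outside C_y and is rejected)."""
    n = len(r) - 1
    for k in range(n + 1):                 # k sojourns revealed => i = n+1-k
        total, elapsed = 0.0, 0.0
        for j in range(k):                 # reward earned by the revealed prefix
            w = max(min(t - elapsed, v[j]), 0.0)
            total += r[j] * w
            elapsed += v[j]
        worst = total + r[k] * max(t - elapsed, 0.0)   # Theorem 5.12
        if worst <= y:                     # Theorem 5.11 when k == 0
            return n + 1 - k
    return None

# The example of Section 5.3.5.1: rates (2, 1, 0), t = 4.
print(resolvability_index((2, 1, 0), (1.0, 0.5), y=5, t=4))   # 2
print(resolvability_index((2, 1, 0), (1.0, 0.5), y=8, t=4))   # 3
print(resolvability_index((2, 1, 0), (2.0, 0.5), y=5, t=4))   # 1
```

Consistent with Theorems 5.10 and 5.15, each accepted vector receives exactly one index, and the index 0 is never produced.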

Theorem 5.13: Every v_u such that γ_u(v_u) ≤ y is i-resolvable for some i, where n+1 ≥ i > 0.

Proof: If y ≥ r(u_n)t, then by Theorem 5.11, v_u is (n+1)-resolvable. Suppose y < r(u_n)t. Then γ_u(·, ·, ..., ·) is not bounded by y, but γ_u(v_n, v_{n-1}, ..., v_0) is. Further, γ_u is monotonically nondecreasing, so for some i it must be the case that γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) is bounded by y but γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·) is not. ∎

Theorem 5.14: (The i for which there exist i-resolvable v_u) Let y ∈ [r(u_{l-1})t, r(u_l)t), n ≥ l > 0. Then for all i such that l ≥ i > 0 there exist vectors v_u such that v_u is i-resolvable. Further, for all other i, there do not exist any vectors v_u such that v_u is i-resolvable.

Proof: (Here we will show only the existence of a single v_u that is i-resolvable. In Section 5.3.8.1 [Characterization of C_{y,j}^i], we will characterize all of the i-resolvable v_u.) Let y ∈ [r(u_{l-1})t, r(u_l)t), n ≥ l > 0, and let i be such that l ≥ i > 0. Let

v_n = [y − r(u_{i-1})t] / [r(u_n) − r(u_{i-1})]  (5.45)

and v_{n-1} = v_{n-2} = ⋯ = v_i = 0. Note that v_n ≤ t since

v_n = [y − r(u_{i-1})t] / [r(u_n) − r(u_{i-1})] ≤ [r(u_n)t − r(u_{i-1})t] / [r(u_n) − r(u_{i-1})] = t.  (5.46)

Then the function

γ_u(v_n, v_{n-1}, v_{n-2}, ..., v_{i+1}, v_i, ·, ..., ·) = γ_u( [y − r(u_{i-1})t]/[r(u_n) − r(u_{i-1})], 0, 0, ..., 0, ·, ..., · )
 ≤ r(u_n)v_n + r(u_{i-1})[t − v_n]  (by Theorem 5.12)  (5.47)
 = r(u_n)·[y − r(u_{i-1})t]/[r(u_n) − r(u_{i-1})] + r(u_{i-1})·[ t − [y − r(u_{i-1})t]/[r(u_n) − r(u_{i-1})] ]
 = y

is bounded by y. Further, consider the function

γ_u(v_n, v_{n-1}, v_{n-2}, ..., v_{i+1}, ·, ..., ·) = γ_u( [y − r(u_{i-1})t]/[r(u_n) − r(u_{i-1})], 0, ..., 0, ·, ..., · ),  (5.48)

in which v_i is left free. If v̂_i = t − Σ_{j=i+1}^n v_j = t − v_n, then the function of Eq. 5.48 has value

r(u_n)v_n + r(u_i)[t − v_n] = y + (r(u_i) − r(u_{i-1}))[t − v_n] > y.  (5.49)

The strict inequality is due to the condition r(u_i) ≠ r(u_{i-1}), together with v_n < t. Therefore, γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) is bounded by y, but γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·) is not, and so v_u is i-resolvable.

To prove the second part of the theorem, we must consider two cases: i > l and i = 0. First, suppose there exists a vector v_u which is i-resolvable for i > l. Then γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) is bounded by y. By Theorem 5.9, v_n = w_u^n, v_{n-1} = w_u^{n-1}, ..., v_{i+1} = w_u^{i+1}, and by Theorem 5.12, γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) has maximum value when

v_{i-1} = max( t − Σ_{j=i}^n v_j, 0 ).  (5.50)

That maximum value is

Σ_{j=i}^n r(u_j)w_u^j + r(u_{i-1})( t − Σ_{j=i}^n w_u^j ) ≥ r(u_{i-1})t ≥ r(u_l)t > y,  (5.51)

since every reward rate appearing is at least r(u_{i-1}) ≥ r(u_l) and the occupation times sum to t. So γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) is not bounded by y, which contradicts v_u being i-resolvable.

Finally, if i = 0, then by definition γ_u(v_n, v_{n-1}, ..., v_1, v_0) ≤ y but γ_u(v_n, v_{n-1}, ..., v_1, ·) is not bounded by y, i.e., there exists some v̂_0 such that γ_u(v_n, v_{n-1}, ..., v_1, v̂_0) > y. However, v_0 = ∞ with probability one, and so γ_u(v_n, v_{n-1}, ..., v_1, ·) = γ_u(v_n, v_{n-1}, ..., v_0) ≤ y, and there can be no 0-resolvable v_u. ∎

Theorem 5.15: (Some conditions for which there does not exist any v_u that is i-resolvable) There does not exist any v_u such that v_u is i-resolvable if any of the following are true:

A. i = n+1 and y < r(u_n)t;

B. i ≤ n and r(u_i) = 0;

C. 0 < i ≤ n and r(u_i) = r(u_{i-1});

D. 0 < i ≤ n, r(u_i) ≠ r(u_{i-1}), and there does not exist any v_u that is j-resolvable, where j is the largest k ≠ 0 such that k < i and for which r(u_k) ≠ r(u_i);

E. i ≤ n and y ≥ r(u_n)t; or

F. i = 0.

Proof:

A. Theorem 5.11.

B. r(u_i) = 0 ⟹ γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·) = γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) for all v_i ⟹ y bounds γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·) if and only if y bounds γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) ⟹ there is no v_u such that y is an upper bound of γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) but not of γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·) ⟹ there is no v_u that is i-resolvable.

C. r(u_i) = r(u_{i-1}) ⟹ γ_u(v_n, ..., v_{i+1}, v_i, v_{i-1}, ·, ..., ·) = γ_u(v_n, ..., v_{i+1}, 0, v_{i-1} + v_i, ·, ..., ·), i.e., v_i may be replaced by 0 and v_{i-1} by v_{i-1} + v_i. Now, for v_u to be i-resolvable, v_u cannot be (n+1)-resolvable, n-resolvable, ..., (i+1)-resolvable; see Theorem 5.10. Consider first the case i = n and suppose v_u is not (n+1)-resolvable. Then

y < γ_u(t, ·, ..., ·)  (Theorem 5.11)
 ≤ γ_u(0, v_{n-1} + t, ·, ..., ·)  (5.52)
 ≤ γ_u(v_n, v_{n-1} + t, ·, ..., ·) for any v_n ≥ 0

⟹ y does not bound γ_u(v_n, ·, ..., ·) for any v_n ⟹ there does not exist a v_u such that v_u is n-resolvable.

Consider now the case i < n and suppose v_u is not (n+1)-resolvable, n-resolvable, ..., (i+1)-resolvable. Then in particular y does not bound γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·), and so

y < γ_u(v_n, v_{n-1}, ..., v_{i+1}, t − Σ_{j=i+1}^n v_j, ·, ..., ·)  (5.53)

(see Theorem 5.12), i.e., v_i is replaced by the value that maximizes the function γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·),

 ≤ γ_u(v_n, v_{n-1}, ..., v_{i+1}, 0, v_{i-1} + t − Σ_{j=i+1}^n v_j, ·, ..., ·),

that is, the maximizing value of v_i is replaced by 0 and v_{i-1} is replaced by v_{i-1} + t − Σ_{j=i+1}^n v_j,

 ≤ γ_u(v_n, v_{n-1}, ..., v_{i+1}, v_i, v_{i-1} + t − Σ_{j=i+1}^n v_j, ·, ..., ·) for any v_i ≥ 0

⟹ y does not bound γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·) for any v_u ⟹ there does not exist a v_u such that v_u is i-resolvable.

D. If y ≥ r(u_n)t, then by Theorem 5.11 no v_u is i-resolvable for n ≥ i > 0, and the claim is obvious. Suppose y < r(u_n)t. Then y must be in some region [r(u_{l-1})t, r(u_l)t) for n ≥ l > 0, and by Theorem 5.14 there exist i-resolvable v_u precisely for l ≥ i > 0. If, as given, there is a j < i such that no v_u is j-resolvable, then j > l, and again by Theorem 5.14 there is no k-resolvable v_u for any k ≥ j; in particular, there is no i-resolvable v_u.

E. Theorem 5.14.

F. Theorem 5.14. ∎

5.3.6. An Algorithm for Determining the C_y^i

Recalling that C_y = γ_u^{-1}(B_y), let

C_y^i = { v_u | v_u ∈ γ_u^{-1}(B_y) and v_u is i-resolvable }.  (5.54)

Lemma 5.16: The C_y^i are mutually disjoint. That is,

C_y^i ∩ C_y^j = ∅ for i ≠ j.  (5.55)

Proof: There can be no v_u such that simultaneously v_u is i-resolvable and v_u is j-resolvable, i ≠ j (Theorem 5.10). Therefore, no v_u can be in both C_y^i and C_y^j for i ≠ j. ∎

Further, we also have

Lemma 5.17: The C_y^i cover C_y. That is,

C_y = C_y^{n+1} ∪ C_y^n ∪ ⋯ ∪ C_y^0.  (5.56)

Proof: Since every v_u ∈ C_y is i-resolvable for some i (see Theorem 5.13), every v_u must belong to some C_y^i. ∎

Combining Lemmas 5.16 and 5.17, we find that

Theorem 5.18: The C_y^i partition C_y.

The recursive definition of "i-resolvability" (Theorem 5.8) yields the following algorithm for determining the C_y^i:

Algorithm 5.3: (Determining the C_y^i) For a given B_y,

i) Determine C_y^{n+1}, i.e., all v_u ∈ γ_u^{-1}(B_y) such that v_u ∈ γ_u^{-1}(B_y) regardless of the value of v_u. These are the v_u which are (n+1)-resolvable. (C_y^{n+1} will either be empty or all of C_y, depending on the value of y.)

ii) For each i = n, n−1, ..., 0, determine all v_u ∈ γ_u^{-1}(B_y) such that v_u ∉ C_y^{n+1} ∪ C_y^n ∪ ⋯ ∪ C_y^{i+1} and v_u ∈ γ_u^{-1}(B_y) regardless of the values of (v_{i-1}, v_{i-2}, ..., v_0).

5.3.7. Conditions for v_u Being i-Resolvable

To show that a given vector v_u is i-resolvable, the definition of i-resolvability may be used directly. We need to consider three cases of i:

1) i = n+1: v_u is (n+1)-resolvable if and only if y ≥ r(u_n)t. (See Theorem 5.11.)

2) i = 0: No v_u is 0-resolvable. (See Theorem 5.15.)

3) n ≥ i > 0: Whether v_u is i-resolvable depends on the values of the components v_j in v_u.

The first two are the trivial boundary cases i = n+1 and i = 0; the third, more difficult, case consists of all the values in between. The remainder of this section considers only case 3); accordingly, i is restricted to n, n−1, ..., 1.

By the definition of i-resolvable, to show v_u is i-resolvable, one need only show:

a) y bounds γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·), i.e., there do not exist any v_{i-1}, v_{i-2}, ..., v_0 such that

γ_u(v_n, ..., v_i, v_{i-1}, v_{i-2}, ..., v_0) > y;  (5.57)

and

b) if i ≠ n, then y does not bound γ_u(v_n, v_{n-1}, ..., v_{i+1}, ·, ..., ·) [if i = n, then y does not bound γ_u(·, ·, ..., ·)], i.e., there exist v̂_i, v̂_{i-1}, ..., v̂_0 such that

γ_u(v_n, ..., v_{i+1}, v̂_i, v̂_{i-1}, ..., v̂_0) > y.  (5.58)

Condition a) can be shown by examining the set of v_{i-1}, v_{i-2}, ..., v_0 that maximizes γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·). From Theorem 5.12, those values are

v̄_{i-1} = max( t − Σ_{j=i}^n v_j, 0 ),  (5.59)

v̄_j arbitrary, i−1 > j > 0, and  (5.60)

v̄_0 = ∞ (since u_0 is absorbing).

That is, v̄_{i-1} is all the remaining time in [0, t] after accounting for the intervals

(v_n, v_{n-1}, ..., v_i). Let⁵

v̄_u = (v_n, ..., v_i, v̄_{i-1}, v̄_{i-2}, ..., v̄_0).  (5.61)

Then γ_u(v̄_u) is the maximum value of γ_u(v_n, v_{n-1}, ..., v_i, ·, ..., ·). To examine the behavior of the w_u^j, let

w̄_u = (w_u^n, ..., w_u^i, w̄_u^{i-1}, w̄_u^{i-2}, ..., w̄_u^0)  (5.62)

be the vector of w_u^j corresponding to v̄_u. Now

v̄_{i-1} ≥ w̄_u^{i-1}  (5.63)

and (if i > 1)

(w̄_u^{i-2}, w̄_u^{i-3}, ..., w̄_u^0) = (0, 0, ..., 0),  (5.64)

and so γ_u(v_u) ≤ γ_u(v̄_u) (see Eq. 5.30 and Theorem 5.12). Thus if γ_u(v̄_u) ≤ y, then γ_u(v_n, v_{n-1}, ..., v_i, v_{i-1}, ..., v_0) ≤ y.

To show condition b), define v̂_u⁶ similarly to v̄_u above, i.e.,

v̂_u = (v_n, ..., v_{i+1}, v̂_i, v̂_{i-1}, ..., v̂_0),  (5.65)

where again from Theorem 5.12

v̂_i = max( t − Σ_{j=i+1}^n v_j, 0 ),  (5.66)

v̂_j arbitrary, i > j > 0, and  (5.67)

v̂_0 = ∞.

⁵v̄_u should be modified by the index i, e.g., v̄_{u,i}, since there is a different vector v̄_u for each i. However, throughout this discussion the only index to be used is i; so, to simplify notation, v̄_u will be employed.

⁶Again, v̂_u should be modified by the index i, e.g., v̂_{u,i}, since there is a different vector v̂_u for each i. However, as with v̄_u, we shall simplify the notation by employing only v̂_u.

That is, v̂_i is all the remaining time in [0, t] after accounting for the intervals (v_n, v_{n-1}, ..., v_{i+1}). To study the w_u^j, let

ŵ_u = (w_u^n, ..., w_u^{i+1}, ŵ_u^i, ŵ_u^{i-1}, ..., ŵ_u^0)  (5.68)

be the vector of w_u^j corresponding to v̂_u. When γ_u(v̂_u) > y, it is not necessarily the case that γ_u(v_u) > y.

From conditions a) and b) (Eqs. 5.57 and 5.58), we have the following

Theorem 5.19: (Necessary and sufficient conditions for v_u to be i-resolvable) v_u is i-resolvable (n ≥ i > 0) if and only if

a) γ_u(v̄_u) ≤ y, and  (5.69)

b) γ_u(v̂_u) > y.

5.3.8. Solution of Finite-State Acyclic Reward Models

5.3.8.1. Characterization of C_{y,j}^i

5.3.8.1.1. Introduction

This section characterizes C_{y,j}^i. (See Eqs. 5.33 and 5.34 for the definition of C_{y,j}^i.) Let v_u = (v_n, v_{n-1}, ..., v_0) and suppose v_u is i-resolvable. We can now state some restrictions on the possible values of the w_u^j and v_j; more restrictions will soon become apparent, but the following are obvious. (See Theorem 5.9.)

For v̄_u:

0 ≤ w_u^n ≤ t, and, for each j such that n > j ≥ i,

0 ≤ w_u^j = v_j ≤ t − Σ_{k=j+1}^n w_u^k ≤ t;  (5.70)

Σ_{k=i-1}^n w̄_u^k = t;  (5.71)

0 ≤ w̄_u^{i-1} ≤ v̄_{i-1} = t − Σ_{k=i}^n w_u^k ≤ t, and w̄_u^j = 0 for each j such that i−1 > j ≥ 0.  (5.72)

And for v̂_u, if n > i ≥ 1:

0 ≤ w_u^n ≤ t, and, for each j such that n > j ≥ i+1,

0 ≤ w_u^j = v_j ≤ t − Σ_{k=j+1}^n w_u^k ≤ t;  (5.73)

0 ≤ ŵ_u^i ≤ v̂_i = t − Σ_{k=i+1}^n w_u^k ≤ t;  (5.74)

Σ_{k=i}^n ŵ_u^k = t, and ŵ_u^j = 0 for each j such that i > j ≥ 0.  (5.75)

Recalling the capability function (Eq. 5.30),

y = γ_u(v_u) = r(u_j)t + Σ_{k=j+1}^n (r(u_k) − r(u_j)) v_k, if Σ_{k=j+1}^n v_k < t and Σ_{k=j}^n v_k ≥ t, for n ≥ j ≥ 0;
y = γ_u(v_u) = r(u_n)t, if v_n ≥ t,  (5.30)

along with the information concerning which of the w̄_u^j and ŵ_u^j are 0 (Eqs. 5.72 and 5.75), the two conditions a) and b) of Eq. 5.69 can be rewritten as in the following (see also Eq. 5.29)

Corollary 5.20: (Necessary and sufficient conditions for v_u to be i-resolvable) v_u is i-resolvable (n ≥ i > 0) if and only if

a) y ≥ γ_u(v̄_u) = r(u_{i-1})t + Σ_{k=i}^n (r(u_k) − r(u_{i-1})) w_u^k,  (5.76)

and

b) y < γ_u(v̂_u) = r(u_i)t + Σ_{k=i+1}^n (r(u_k) − r(u_i)) w_u^k, if n > i > 0;
   y < γ_u(v̂_u) = r(u_n)t, if i = n.  (5.77)

As mentioned in Section 5.3.3 [The Approach], the w_u^j are too statistically dependent for convenient use, and so conditions a) and b) (Eqs. 5.76 and 5.77) must be rewritten in terms of the v_j (see Eq. 5.28). We use the conditions of Eqs. 5.76 and 5.77 along with the observations of Eqs. 5.70–5.75. The following sections develop in detail the regions C_{y,j}^i. Section 5.3.8.1.2 [Restatement of Condition a) in Terms of the v_j] characterizes Eq. 5.76 in terms of the v_j, while Section 5.3.8.1.3 [Restatement of Condition b) in Terms of the v_j] does the same for Eq. 5.77. Section 5.3.8.1.4 [The Regions C_{y,j}^i] then combines the results of Sections 5.3.8.1.2 and 5.3.8.1.3 and states the regions C_{y,j}^i.

5.3.8.1.2. Restatement of Condition a) in Terms of the v_j

Consider first condition a) (Eq. 5.76). In the derivation of this section will appear division by the term r(u_j) − r(u_{i-1}), n ≥ j ≥ i. It is possible that r(u_j) = r(u_{i-1}), in which case we would be dividing by zero. However, in such a case, since r(u_j) ≥ r(u_{j-1}) ≥ ⋯ ≥ r(u_i) ≥ r(u_{i-1}), we would have r(u_i) = r(u_{i-1}). Thus, by Theorem 5.15 C), there would be no i-resolvable v_u, in which case C_y^i = ∅ and there are no

C_{y,j}^i to characterize. Hence, because we are interested only in i-resolvable v_u, we assume r(u_j) ≠ r(u_{i-1}) and division by zero will never be performed.

Since every term of Eq. 5.76 is nonnegative, and since all v_j ≥ 0, we have a collection of conditions which must be satisfied:

y ≥ r(u_{i-1})t + (r(u_n) − r(u_{i-1})) w_u^n ≥ 0,  (5.78)

y ≥ r(u_{i-1})t + (r(u_n) − r(u_{i-1})) w_u^n + (r(u_{n-1}) − r(u_{i-1})) w_u^{n-1} ≥ 0,  (5.79)

⋮

y ≥ r(u_{i-1})t + Σ_{k=i}^n (r(u_k) − r(u_{i-1})) w_u^k ≥ 0.  (5.80)

Since each of Eqs. 5.78–5.80 must be satisfied, we will have a set of n − i + 1 conditions, one each for w_u^n, w_u^{n-1}, ..., w_u^i. The restrictions on the range of w_u^j will depend on the values of w_u^n, w_u^{n-1}, ..., w_u^{j+1}. In addition, there are constraints on the w_u^j and v_j, noted in Eqs. 5.70–5.72. Incorporating the information of Eqs. 5.70–5.72 into Eqs. 5.78–5.80, we will obtain qualifications on the ranges of v_n, v_{n-1}, ..., v_0. To be valid, the resulting ranges of the v_j present restrictions on the particular ranges of y which are admissible. We emphasize that the allowed ranges of the v_j are determined first; the corresponding ranges of y are derived from the v_j. Those restrictions on y are also noted below.

We do the easy ones first. The definition of v̄_j (Eq. 5.60) yields, for each j such that i > j ≥ 0:

0 ≤ v̄_j < ∞, i > j > 0, and v̄_0 = ∞.  (5.81)

Now suppose j = n. Then from Eq. 5.78 and Eq. 5.70:

216 o0 < vr w < (u.) <, (5.82) t - r(u.)- r(u_) and constraint X must hold, where constraint * is based on the two boundary constraints for v,: 1) 0 < v X u- >r(u )t 0 (5.83) 1) 0<v r(u)- r(u,) >= y - r(u,-_)t > 0 (since r(u,) > r(u,-_)) (5.84) = V > r{u, )t, (5.85) 2) v < t - r(u,)t (5.86) r(u) - r(u,) < y < r(u.)t. (5.87) Combining Eqs. 5.85 and 5.87, constraint 3 reduces to r(u,)t > y > r(u, _.)t. (5.88) Now we consider the other v,. From Eqs. 5.79-5.80 and Eq. 5.70, for each j such that n > > i: y — r(u.1)t- (r(uk)- r(u,-,)), v 0 < V Wt <) + (5.89)-'I~~~< t- ~ vk~(5.89) t =j+l and constraint * * must hold, where constraint 3 K is based on the two boundary corn

217

straints for v_j:

1) 0 ≤ v_j:  [y - r(u_{i-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k] / (r(u_j) - r(u_{i-1})) ≥ 0    (5.90)
⟹ y - r(u_{i-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k ≥ 0 (since r(u_j) > r(u_{i-1}))    (5.91)
⟹ y ≥ r(u_{i-1})t + Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k;    (5.92)

2) v_j ≤ t - Σ_{k=j+1}^{n} v_k:    (5.93)
[y - r(u_{i-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k] / (r(u_j) - r(u_{i-1})) ≤ t - Σ_{k=j+1}^{n} v_k    (5.94)
⟹ y ≤ r(u_j)t + Σ_{k=j+1}^{n} (r(u_k) - r(u_j))v_k.    (5.95)

Eqs. 5.92 and 5.95 combine, allowing constraint ⋆⋆ to be written

r(u_j)t + Σ_{k=j+1}^{n} (r(u_k) - r(u_j))v_k ≥ y ≥ r(u_{i-1})t + Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k.    (5.96)

At first glance, Eq. 5.96 appears more restrictive than Eq. 5.88 concerning the range of y which allows the existence of a vector v satisfying γ(v) ≤ y. However, Eq. 5.96 follows from Eq. 5.88, which can be seen by combining Eqs. 5.88 and 5.96:

r(u_n)t ≥ r(u_j)t + Σ_{k=j+1}^{n} (r(u_k) - r(u_j))v_k ≥ y ≥ r(u_{i-1})t + Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k ≥ r(u_{i-1})t.    (5.97)

218

As long as y satisfies Eq. 5.88, then clearly there are v_u = (v_n, v_{n-1}, ..., v_0) for which γ(v) ≤ y. The results of this section are summarized by the following:

Lemma 5.21: (Sufficient and necessary conditions on the v_j for γ(v) ≤ y) Let n ≥ i > 0. Then γ(v) ≤ y if and only if

i) r(u_i) ≠ r(u_{i-1});    (5.98)

ii) r(u_n)t ≥ y ≥ r(u_{i-1})t;    (5.88)

iii) for each j such that i > j ≥ 0:
0 ≤ v_j < ∞, i > j > 0, and    (5.81)
v_0 = ∞;

iv) for j = n:
0 ≤ v_n ≤ (y - r(u_{i-1})t) / (r(u_n) - r(u_{i-1})) ≤ t;    (5.82)

v) for each j such that n > j ≥ i:
0 ≤ v_j ≤ [y - r(u_{i-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k] / (r(u_j) - r(u_{i-1})) ≤ t - Σ_{k=j+1}^{n} v_k.    (5.89)

219

Note that in Eq. 5.89, if r(u_j) = r(u_{i-1}), then r(u_i) = r(u_{i-1}) (since r(u_j) ≥ r(u_i) ≥ r(u_{i-1})), and so condition i) fails. Thus, division by zero does not occur.

5.3.8.1.3. Restatement of Condition b) in Terms of the v_j

In order that condition b) (Eq. 5.77) of Corollary 5.20 be satisfied, we have somewhat different constraints on the v_j than we had with condition a) (see the previous section). Division by the term r(u_j) - r(u_{j-1}), j > i, occurs in the derivation of this section. It is possible that there exist v_u which are i-resolvable even when r(u_j) = r(u_{j-1}), n ≥ j > i. To avoid interrupting the discussion with special cases, we temporarily assume r(u_j) > r(u_{j-1}) for all j > i, and postpone consideration of the case r(u_j) = r(u_{j-1}) till the end of this section.

Lower bound restrictions on every v_j, j > i, are not required; only in certain cases does a given v_j have any lower bound constraints at all. The intuition here is as follows: If γ(v̂) > y, then only v̂_n, v̂_{n-1}, ..., v̂_{i+1}, v̂_i make any positive contribution to γ(v̂). v̂_i is defined to be t - Σ_{k=i+1}^{n} v̂_k (see Eq. 5.66), so we are concerned just with specifying the ranges of v̂_n, v̂_{n-1}, ..., v̂_{i+1}. Once v̂_n, v̂_{n-1}, ..., v̂_{j+1} are set, then either v̂_j must be at least as large as the value which causes γ(v̂) > y, or else, if v̂_{j-1} can be large enough to cause γ(v̂) > y, then there is no lower bound on v̂_j at all. As with condition a) (Eq. 5.76), there is a certain range of y for which the bounds on the v_j hold, and this restriction will be noted momentarily.

Again we start with the easy case. For each j such that i > j ≥ 0, the definition of v_j (Eq. 5.67) yields

0 ≤ v_j < ∞, i > j > 0, and    (5.99)
v_0 = ∞.

When i = n, the bounds on the v_j can be quickly characterized:

220

If i = n, then ŵ_n = v̂_n = t and, for each j such that n > j ≥ 0,

0 ≤ v_j < ∞, n > j > 0, and    (5.100)
v_0 = ∞,

and tautologically, the conditioning on y is y < r(u_n)t.

Now suppose n > i. From Eq. 5.77, we have only the following condition:

y < r(u_i)t + Σ_{k=i+1}^{n} (r(u_k) - r(u_i))w_k.    (5.101)

Replacing the w_k with the v_k (Eq. 5.73), Eq. 5.101 is written

y < r(u_i)t + Σ_{k=i+1}^{n} (r(u_k) - r(u_i))v_k.    (5.102)

We now find lower bounds on the v_j; for j = i+1 (if i < n-1; a similar argument holds if i = n-1):

v_{i+1} > [y - r(u_i)t - Σ_{k=i+2}^{n} (r(u_k) - r(u_i))v_k] / (r(u_{i+1}) - r(u_i))  (from Eq. 5.102),    (5.103)

0 ≤ v_{i+1} ≤ t - Σ_{k=i+2}^{n} v_k  (from Eq. 5.73).    (5.104)

Then combining Eqs. 5.103 and 5.104,

t - Σ_{k=i+2}^{n} v_k ≥ v_{i+1} >_0 [y - r(u_i)t - Σ_{k=i+2}^{n} (r(u_k) - r(u_i))v_k] / (r(u_{i+1}) - r(u_i)).    (5.105)

221

The notation a >_0 b denotes

a >_0 b:  a > b if b ≥ 0;  a ≥ 0 if b < 0.    (5.106)

Intuitively, a >_0 b means "a can be any non-negative number exceeding b." Now suppose the right-hand term in Eq. 5.105 causes the inequality to be false, i.e., suppose v_n, v_{n-1}, ..., v_{i+2} are such that

t - Σ_{k=i+2}^{n} v_k < [y - r(u_i)t - Σ_{k=i+2}^{n} (r(u_k) - r(u_i))v_k] / (r(u_{i+1}) - r(u_i)).    (5.107)

To prevent this inconsistency from occurring, we must insure that v_n, v_{n-1}, ..., v_{i+2} are sufficiently large. Suppose i < n-2 (a similar argument applies if i = n-2) and consider v_{i+2}. The following condition prevents the situation of Eq. 5.107:

v_{i+2} > [y - r(u_{i+1})t - Σ_{k=i+3}^{n} (r(u_k) - r(u_{i+1}))v_k] / (r(u_{i+2}) - r(u_{i+1})),    (5.108)

since this would imply

(r(u_{i+2}) - r(u_{i+1}))v_{i+2} > y - r(u_{i+1})t - Σ_{k=i+3}^{n} (r(u_k) - r(u_{i+1}))v_k    (5.109)

⟹ Σ_{k=i+2}^{n} (r(u_k) - r(u_{i+1}))v_k > y - r(u_{i+1})t    (5.110)

222

⟹ (r(u_{i+1}) - r(u_i))t + Σ_{k=i+2}^{n} (r(u_k) - r(u_i))v_k - Σ_{k=i+2}^{n} (r(u_{i+1}) - r(u_i))v_k > y - r(u_i)t    (5.111)

⟹ (r(u_{i+1}) - r(u_i))(t - Σ_{k=i+2}^{n} v_k) > y - r(u_i)t - Σ_{k=i+2}^{n} (r(u_k) - r(u_i))v_k    (5.112)

⟹ t - Σ_{k=i+2}^{n} v_k > [y - r(u_i)t - Σ_{k=i+2}^{n} (r(u_k) - r(u_i))v_k] / (r(u_{i+1}) - r(u_i)),    (5.113)

which precludes Eq. 5.107. So, to insure Eq. 5.105, we need to insure Eq. 5.108. Assume then that v_{i+2} does satisfy Eq. 5.108 (and that i < n-3; again, if i = n-3 then a similar argument holds). Then v_{i+2} must also satisfy

t - Σ_{k=i+3}^{n} v_k ≥ v_{i+2} >_0 [y - r(u_{i+1})t - Σ_{k=i+3}^{n} (r(u_k) - r(u_{i+1}))v_k] / (r(u_{i+2}) - r(u_{i+1})),    (5.114)

which is similar to the condition of Eq. 5.105. Indeed, this is exactly Eq. 5.105 with i replaced by i+1. Proceeding recursively, we find that we must place conditions on v_n, v_{n-1}, ..., v_{i+1} such that Eq. 5.114 is satisfied. There is thus a chain of conditions of the form

v_n >_0 (y - r(u_{n-1})t) / (r(u_n) - r(u_{n-1}))    (5.115)

223

and for n > j > i:

v_j >_0 [y - r(u_{j-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{j-1}))v_k] / (r(u_j) - r(u_{j-1})).    (5.116)

Based on the above discussion, we make the following

Claim 5.22: (Conditions on v_n, v_{n-1}, ..., v_{i+1} for v_{i+1} to satisfy Eq. 5.105) Suppose n > i ≥ 0. If v_n, v_{n-1}, ..., v_{i+1} are all non-negative and satisfy Eqs. 5.115 and 5.116, then v_{i+1} satisfies Eq. 5.105. Indeed,

t ≥ v_n >_0 (y - r(u_{n-1})t) / (r(u_n) - r(u_{n-1})),    (5.117)

and for any j such that n > j > i,

t - Σ_{k=j+1}^{n} v_k ≥ v_j >_0 [y - r(u_{j-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{j-1}))v_k] / (r(u_j) - r(u_{j-1})).    (5.118)

For condition b) (Eq. 5.77), constraints on y are straightforward. [For the constraints on condition a) (Eq. 5.76), see ⋆ (Eqs. 5.83-5.88) and ⋆⋆ (Eqs. 5.90-5.97).] Because of the manner in which the above ranges of the v_j are constructed, the v_j automatically fall in the constrained range 0 ≤ v_j ≤ t - Σ_{k=j+1}^{n} v_k (or 0 ≤ v_j ≤ t, if j = n). Therefore, the only restriction on y is

y < r(u_n)t.    (5.119)

Now, as discussed at the start of this section, if r(u_j) = r(u_{j-1}), n ≥ j > i, there are apparently cases in Eqs. 5.117 and 5.118 where there is division by zero. However, if we reexamine the derivation of Eqs. 5.117 and 5.118, we find that the term r(u_j) - r(u_{j-1}) is used

224

as a divisor in Eqs. 5.103 and 5.113. Referring to Eq. 5.101, if r(u_j) = r(u_{j-1}), then in

y < r(u_i)t + Σ_{k=i+1}^{n} (r(u_k) - r(u_i))v_k    (5.120)

the terms for v_j and v_{j-1} carry the same reward rate, so v_j imposes no lower-bound condition of its own, and v_j can be any value ≥ 0. Therefore, we find that Eqs. 5.117 and 5.118 have the exceptions:

t ≥ v_n ≥ 0 if r(u_n) = r(u_{n-1}), and    (5.121)
t - Σ_{k=j+1}^{n} v_k ≥ v_j ≥ 0 if r(u_j) = r(u_{j-1}).

The results of this section are summarized by the following:

Lemma 5.23: (Sufficient and necessary conditions on the v_j for γ(v) > y) Let n ≥ i ≥ 0. Then γ(v) > y if and only if

i) y < r(u_n)t;    (5.122)

ii) for each j such that i > j ≥ 0:
0 ≤ v_j < ∞, i > j > 0, and    (5.123)
v_0 = ∞;

iii) if i ≠ n, then:

225

a) for j = n:

t ≥ v_n >_0 (y - r(u_{n-1})t) / (r(u_n) - r(u_{n-1})) if r(u_n) > r(u_{n-1});    (5.124)
t ≥ v_n ≥ 0 if r(u_n) = r(u_{n-1});

b) for each j such that n > j > i:

t - Σ_{k=j+1}^{n} v_k ≥ v_j >_0 [y - r(u_{j-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{j-1}))v_k] / (r(u_j) - r(u_{j-1})) if r(u_j) > r(u_{j-1});    (5.125)
t - Σ_{k=j+1}^{n} v_k ≥ v_j ≥ 0 if r(u_j) = r(u_{j-1}).

5.3.8.1.4. The Regions C_y^{i,j}

Using the observations of the previous sections, we can now make further statements about the ranges of the v_j which make a vector v_u i-resolvable. By Theorem 5.19, v_u is i-resolvable if it satisfies the conditions of both Lemma 5.21 and Lemma 5.23. Intersecting the conditions of Lemmas 5.21 and 5.23, we arrive at the following key result:

Theorem 5.24: (Sufficient and necessary conditions for v_u to be i-resolvable) v_u is i-resolvable (n ≥ i > 0) if and only if

i) r(u_i) ≠ r(u_{i-1});    (5.126)

226

ii) r(u_n)t > y ≥ r(u_{i-1})t (Eqs. 5.88, 5.97, and 5.122);    (5.127)

iii) for each j such that i > j ≥ 0:
0 ≤ v_j < ∞, i > j > 0, and    (5.128)
v_0 = ∞ (Eqs. 5.81 and 5.123);

iv) if i = n:

a) for j = n:
0 ≤ v_n ≤ (y - r(u_{n-1})t) / (r(u_n) - r(u_{n-1})) (Eq. 5.82);    (5.129)

b) for each j such that n > j ≥ 0:
0 ≤ v_j < ∞, and v_0 = ∞ (Eq. 5.81);    (5.130)

v) if i ≠ n, then

a) for j = n:
(y - r(u_{i-1})t) / (r(u_n) - r(u_{i-1})) ≥ v_n >_0 (y - r(u_{n-1})t) / (r(u_n) - r(u_{n-1})) if r(u_n) > r(u_{n-1});
(y - r(u_{i-1})t) / (r(u_n) - r(u_{i-1})) ≥ v_n ≥ 0 if r(u_n) = r(u_{n-1})
(Eqs. 5.82 and 5.124);

227

b) for each j such that n > j > i:

[y - r(u_{i-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k] / (r(u_j) - r(u_{i-1})) ≥ v_j >_0 [y - r(u_{j-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{j-1}))v_k] / (r(u_j) - r(u_{j-1}))
if r(u_j) > r(u_{j-1});    (5.131)

[y - r(u_{i-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k] / (r(u_j) - r(u_{i-1})) ≥ v_j ≥ 0 if r(u_j) = r(u_{j-1})
(Eqs. 5.89 and 5.118);

c) for j = i:

[y - r(u_{i-1})t - Σ_{k=i+1}^{n} (r(u_k) - r(u_{i-1}))v_k] / (r(u_i) - r(u_{i-1})) ≥ v_i ≥ 0    (5.132)
(Eqs. 5.89 and 5.128).

In Eq. 5.131, if r(u_j) = r(u_{i-1}), then r(u_i) = r(u_{i-1}); condition i) then fails, and so there can be no division by zero.

Now, by definition of the C_y^{i,j} (see Eqs. 5.54 and 5.33), the ranges of the v_j described in Theorem 5.24 are the C_y^{i,j}. Hence, we have the main result of this section:

228

Theorem 5.25: (Characterization of C_y^{i,j})

i) If r(u_i) = r(u_{i-1}), n ≥ i > 0, then no v_u is i-resolvable. C_y^i = ∅ and for each k such that n ≥ k ≥ 0:

C_y^{i,k} = ∅.    (5.133)

ii) Let y ∈ [r(u_n)t, ∞). Then every v_u is (n+1)-resolvable. C_y^{n+1} = C, C_y^i = ∅ for n ≥ i ≥ 0, and for each j such that n ≥ j > 0 and for each k such that n ≥ k ≥ 0:

C_y^{n+1,j} = [0, ∞),    (5.134)
C_y^{i,k} = ∅.    (5.135)

iii) Let y ∈ [r(u_{l-1})t, r(u_l)t), n ≥ l > 0. Then for every i such that l ≥ i ≥ 0 and r(u_i) ≠ r(u_{i-1}), there exist vectors v_u such that v_u is i-resolvable.

a) If i = n, then

i) C_y^{n,n} = [0, (y - r(u_{n-1})t) / (r(u_n) - r(u_{n-1}))],    (5.136)

and for each j such that n > j > 0,

ii) C_y^{n,j} = [0, ∞), n > j > 0, and    (5.137)

iii) C_y^{n,0} = ∞.    (5.138)

229

b) If i ≠ n, then

i) C_y^{i,n} = ( (y - r(u_{n-1})t) / (r(u_n) - r(u_{n-1})), (y - r(u_{i-1})t) / (r(u_n) - r(u_{i-1})) ]_{>0},    (5.139)

and for each j such that n > j > i,

ii) C_y^{i,j} = ( [y - r(u_{j-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{j-1}))v_k] / (r(u_j) - r(u_{j-1})), [y - r(u_{i-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k] / (r(u_j) - r(u_{i-1})) ]_{>0}
if r(u_j) > r(u_{j-1});    (5.140)

C_y^{i,j} = [0, [y - r(u_{i-1})t - Σ_{k=j+1}^{n} (r(u_k) - r(u_{i-1}))v_k] / (r(u_j) - r(u_{i-1}))] if r(u_j) = r(u_{j-1}),

and for j = i,

iii) C_y^{i,i} = [0, [y - r(u_{i-1})t - Σ_{k=i+1}^{n} (r(u_k) - r(u_{i-1}))v_k] / (r(u_i) - r(u_{i-1}))],    (5.141)

and for each j such that i > j ≥ 0,

iv) C_y^{i,j} = [0, ∞), i > j > 0, and    (5.142)
C_y^{i,0} = ∞.

230

c) For each k such that n+1 > k > l (or k = 0) and for each j such that n ≥ j ≥ 0,

C_y^{k,j} = ∅,    (5.143)

where the notation ( a, b ]_{>0} means

( a, b ]_{>0} = [0, b] if a < 0;  (a, b] if a ≥ 0.    (5.144)

In Eq. 5.140, if r(u_j) = r(u_{j-1}) for n > j > i, the first form is replaced by the second [cf. case i)]. An integral solution for the PDF F_Y (and hence the system's performability) is thus obtained by employing Eqs. 5.25, 5.27, 5.35, and 5.134-5.143.

5.3.8.2. Simplified Notation and Results

As discussed in Section 5.3.2 [Notation], the notation used in Sections 5.3.3 [The Approach]-5.3.8.1.4 [The Regions C_y^{i,j}] makes explicit qualifications for the boundary cases, e.g., for situations dealing with n. Such fastidious detail is necessary for observing the exceptions occurring at the boundaries. However, such meticulousness is not necessary in order to present the results; the notation and results presented in this section are aimed at removing the special boundary cases and are arguably more elegant than those of the previous sections.

We begin by defining countably many "states" "above" state n: for all k > n, let u_k have reward rate 0, i.e., r(u_k) = 0, and let w_k = v_k = 0. The r(u_k), w_k, and v_k will allow a kind of non-contributory "overflow" above state n. In addition, define a "state" u_{-1} "below" u_0; let u_{-1} have reward rate 0 (i.e., r(u_{-1}) = 0), and let w_{-1} = v_{-1} = 0. We emphasize that these definitions are notational contrivances only. There is no physical interpretation of these additional states. This extended concept of the set of "states" will allow us to write unified expressions with no boundary conditions.
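The truncation bookkeeping that this notation streamlines can be checked numerically. The sketch below (function names and test values are ours, not the dissertation's) computes the reward of a trajectory two ways: by truncating each sojourn against the remaining utilization period (the w_j of Eq. 5.28), and by locating the state occupied at time t and applying the resolving-state form (Eq. 5.30); the two must agree.

```python
import random

def reward_by_truncation(rates, v, t):
    """Sum r(u_j)*w_j, where w_j = max(min(t - sum of later sojourns, v_j), 0).

    States are indexed n, n-1, ..., 0; rates[j] and v[j] refer to state u_j.
    """
    n = len(rates) - 1
    total = 0.0
    for j in range(n, -1, -1):
        later = sum(v[k] for k in range(j + 1, n + 1))  # time elapsed before entering u_j
        w = max(min(t - later, v[j]), 0.0)
        total += rates[j] * w
    return total

def reward_by_resolution(rates, v, t):
    """r(u_i)*t + sum_{k>i} (r(u_k) - r(u_i))*v_k, where u_i is the state occupied at t."""
    n = len(rates) - 1
    for i in range(n, -1, -1):
        tail = sum(v[k] for k in range(i + 1, n + 1))
        if tail <= t < tail + v[i]:
            return rates[i] * t + sum((rates[k] - rates[i]) * v[k]
                                      for k in range(i + 1, n + 1))
    # t beyond all sojourns: u_0 is absorbing, so v[0] must be "infinite"
    raise ValueError("set v[0] large enough to absorb t")

random.seed(1)
t = 10.0
rates = [0.0, 1.0, 2.0, 3.0]          # r(u_0) <= r(u_1) <= r(u_2) <= r(u_3)
for _ in range(1000):
    v = [1e9] + [random.expovariate(0.3) for _ in range(3)]  # v[0] "infinite"
    assert abs(reward_by_truncation(rates, v, t) -
               reward_by_resolution(rates, v, t)) < 1e-9
print("truncation and resolution formulas agree")
```

The agreement holds for any non-negative sojourn vector, which is exactly the identity that lets the derivation work with the resolving-state form throughout.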

231

Also define the operator

a ∸ b = a - b if a ≥ b;  0 otherwise.    (5.145)

We adopt the convention that, if b ≥ a,

Σ_{k=a}^{b} X_k = X_a + X_{a+1} + ... + X_b = Σ_{k=b}^{a} X_k = X_b + X_{b-1} + ... + X_a.    (5.146)

Let i ≥ 0. Then for a given v_u:

w_i = max( min( t - Σ_{k=i+1}^{∞} v_k, v_i ), 0 )  (see Eq. 5.28),    (5.147)

γ = γ(v_u) = r(u_i)t + Σ_{k=i+1}^{∞} (r(u_k) - r(u_i))v_k  when Σ_{k=i+1}^{∞} v_k ≤ t, Σ_{k=i}^{∞} v_k > t    (5.148)
(see Eq. 5.30).

Next we adopt the convention that x/0 = ∞. The notation {a, b]_{>0}^{<∞} means

{a, b]_{>0}^{<∞} = {a, b] if 0 ≤ b < ∞;  {a, ∞) if b = ∞;  ∅ if b < 0,    (5.149)

where the left bracket { denotes any interval-demarking symbol, e.g., [ or (. The notation (_{>0} is extended as follows:

232

( f(i,j), b ]_{>0} = ( f(i,j), b ] if i < j;  [0, b] if i ≥ j or f(i,j) = ∞.    (5.150)

If b < a, then the region (a, b) is empty, i.e., (a, b) = ∅. Theorem 5.25 can now be written:

Theorem 5.26: (Compact characterization of C_y^{i,j})

i) If r(u_i) = r(u_{i-1}), n ≥ i > 0, then no v_u is i-resolvable. C_y^i = ∅ and for each k such that n ≥ k ≥ 0:

C_y^{i,k} = ∅.    (5.151)

ii) Let y ∈ [r(u_n)t, ∞). Then every v_u is (n+1)-resolvable. C_y^{n+1} = C, C_y^i = ∅ for n ≥ i ≥ 0, and for each j such that n ≥ j > 0 and for each k such that n ≥ k ≥ 0:

C_y^{n+1,j} = [0, ∞),    (5.152)
C_y^{i,k} = ∅.    (5.153)

iii) Let y ∈ [r(u_{l-1})t, r(u_l)t), n ≥ l > 0. Then for every i such that n+1 > i ≥ 0 and r(u_i) ≠ r(u_{i-1}) and for each j such that n ≥ j > 0:

233

C_y^{i,j} = ( [y - r(u_{j-1})t - Σ_{k=j+1}^{∞} (r(u_k) - r(u_{j-1}))v_k] / (r(u_j) - r(u_{j-1})), [y - r(u_{i-1})t - Σ_{k=j+1}^{∞} (r(u_k) - r(u_{i-1}))v_k] / (r(u_j) - r(u_{i-1})) ]_{>0}^{<∞},    (5.154)

C_y^{i,0} = ∞.

Note that Eq. 5.154 of Theorem 5.26 captures all the information of Eqs. 5.136-5.143 of Theorem 5.25. The derivation of the C_y^{i,j} presented in Section 5.3.5 [i-Resolvability] can be restated using the more compact notation above.

We can simplify the integrations of Eq. 5.35 by observing: i) in certain cases C_y^{i,j} = ∅ (Eqs. 5.143 and 5.153), and the associated integrations of Eq. 5.35 will be 0; and ii) in certain cases C_y^{i,j} = [0, ∞) (Eqs. 5.137 and 5.142), and the associated integrations of Eq. 5.35 will be 1, since the integration is over a probability density. Hence, we have:

i) Let y ∈ [r(u_n)t, ∞). Then

F_{Y|U}(y | u) = ∫_{C_y^{n+1,n}} ... ∫_{C_y^{n+1,0}} f_{V_u|U}(v_u | u) dv_u    (5.155)

234

= ∫_0^∞ ∫_0^∞ ... ∫_0^∞ f_{V_u}(v_u) dv_u = 1.    (5.156)

ii) Let y ∈ [r(u_{l-1})t, r(u_l)t), n ≥ l > 0. Then

F_{Y|U}(y | u) = Σ_{i=0}^{l} ∫_{C_y^{i,n}} ... ∫_{C_y^{i,0}} f_{V_u|U}(v_u | u) dv_u    (5.157)

= Σ_{i=0}^{l} ∫_{C_y^{i,n}} ... ∫_{C_y^{i,i}} f_{V_n,...,V_i}(v_n, ..., v_i) dv_i ... dv_n    (5.158)

(the coordinates v_j with i > j ≥ 0 are integrated over [0, ∞) and contribute factors of 1)

235

(the last term applicable only if n > l)

= ∫_{C_y^{n,n}} f_n(v_n) dv_n + Σ_{i=0}^{n-1} ∫_{C_y^{i,n}} f_n(v_n) ( ∏_{j=n-1}^{i} ∫_{C_y^{i,j}} f_{j|n,n-1,...,j+1}(v_j | v_n, v_{n-1}, ..., v_{j+1}) dv_j ) dv_n   if n = l;

= Σ_{i=0}^{l} ∫_{C_y^{i,n}} f_n(v_n) ( ∏_{j=n-1}^{i} ∫_{C_y^{i,j}} f_{j|n,n-1,...,j+1}(v_j | v_n, v_{n-1}, ..., v_{j+1}) dv_j ) dv_n   if n > l.    (5.159)

We next present a compact version of Eq. 5.159. Adopt the convention that the product of any increasing series is 1, i.e., if a < b then for any sequence {X_j},

∏_{j=a}^{b} X_j = 1.    (5.160)

If y ∈ [r(u_{l-1})t, r(u_l)t) for some n ≥ l > 0, then

F_{Y|U}(y | u) = Σ_{i=0}^{l} ∫_{C_y^{i,n}} f_n(v_n) ( ∏_{j=n-1}^{i} ∫_{C_y^{i,j}} f_{j|n,n-1,...,j+1}(v_j | v_n, v_{n-1}, ..., v_{j+1}) dv_j ) dv_n.    (5.161)

5.3.9. Closed-Form Solutions

In this section are discussed some examples of the performability solution of Eq. 5.35 and the regions of Theorem 5.25. Numerical applications of the solution to engineering problems will be discussed in Section 5.3.11 [Numerical Solutions]. Some of these results will be

236

rederived in Section 5.3.10 [Recursive Formulation of F_{Y|U}] using a recursive formulation of F_{Y|U}. To restate the problem, we wish to determine the distribution of reward for a finite-state acyclic nonrecoverable process. (See Section 5.2.4.1 [Definition of a Nonrecoverable Process] for the definition of a nonrecoverable process.) We consider two classes of processes: time-invariant Markovian and non-Markovian. For the Markovian process, the distribution of the sojourn time in state i is exponential with rate λ_i and is independent of the sojourn times of the process in any other state j and of the present time. No restrictions are placed on sojourn time distributions for the non-Markovian case.

5.3.9.1. Two State Markovian Acyclic Process

As the simplest non-trivial example, suppose n = 1. See Figure 5.3. The regions C_y^{1,1}, C_y^{1,0}, C_y^{0,1}, and C_y^{0,0} are as follows; the applicable case of Theorem 5.25 is indicated in brackets. For y ∈ [0, r(u_1)t) (i.e., l = 1), checking for n-resolvability with i = n = 1:

j = n = i = 1:  C_y^{1,1} = [0, (y - r(u_0)t) / (r(u_1) - r(u_0))]  [case a-i)];    (5.162)

j = i - 1 = 0:  C_y^{1,0} = ∞  [case a-iii)].    (5.163)

(The i = 0 contribution vanishes, since the interval for v_0 is finite while v_0 = ∞.) Hence

F_Y(y) = ∫_{C_y^{1,1}} f_1(v) dv = ∫_0^{(y - r(u_0)t)/(r(u_1) - r(u_0))} λ_1 e^{-λ_1 v} dv.    (5.164)

For y ∈ [r(u_1)t, ∞):

F_Y(y) = 1.    (5.165)
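Eq. 5.164 can be checked numerically. In the sketch below (λ_1, the reward rates, t, and y are arbitrary test values; the midpoint rule and the Monte Carlo estimate are ours), the integral over C_y^{1,1} is compared against a direct simulation of the two-state process.

```python
import math, random

lam1 = 0.7                 # failure rate of the initial state u_1
r1, r0 = 4.0, 1.0          # reward rates r(u_1), r(u_0)
t, y = 1.0, 2.5            # utilization period and threshold, y in [r0*t, r1*t)

# Eq. 5.164: F_Y(y) = integral of lam1*exp(-lam1*v) over C_y^{1,1} = [0, bound]
bound = (y - r0 * t) / (r1 - r0)
steps = 100000
f_quad = sum(lam1 * math.exp(-lam1 * (k + 0.5) * bound / steps)
             for k in range(steps)) * bound / steps

# Monte Carlo: reward Y = r1*min(v1, t) + r0*max(t - v1, 0)
random.seed(2)
trials = 200000
hits = 0
for _ in range(trials):
    v1 = random.expovariate(lam1)
    Y = r1 * min(v1, t) + r0 * max(t - v1, 0.0)
    hits += Y <= y
f_mc = hits / trials

assert abs(f_quad - (1.0 - math.exp(-lam1 * bound))) < 1e-6  # matches the antiderivative
assert abs(f_quad - f_mc) < 0.01
print(round(f_quad, 4), round(f_mc, 4))
```

The quadrature, the antiderivative, and the simulation agree, confirming that the single region C_y^{1,1} captures the event {Y ≤ y} in this range of y.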

237

Fig. 5.3  Markov state-transition-rate diagram for the case n = 1

238

The performability distribution is then

F_Y(y) = 1 - e^{-λ_1 (y - r(u_0)t) / (r(u_1) - r(u_0))},  y ∈ [0, r(u_1)t);    (5.166)
F_Y(y) = 1,  y ∈ [r(u_1)t, ∞),

which is the anticipated result. If r(u_0) = 0, then

F_Y(y) = 1 - e^{-λ_1 y / r(u_1)},  y ∈ [0, r(u_1)t);    (5.167)
F_Y(y) = 1,  y ∈ [r(u_1)t, ∞).

Note that t does not appear in the expression for F_Y(y), other than to delimit the "breakpoint."

5.3.9.2. Three State Markovian Acyclic Process

Consider now n = 2. See Fig. 5.4. This was the case considered in [33]. The derivation is relatively long and is presented in Appendix K. The results are:

F_Y(y) = ∫_{C_y^{1,2}} ∫_{C_y^{1,1}} f_2(v_2) f_1(v_1) dv_1 dv_2   if y ∈ [0, r(u_1)t);

F_Y(y) = ∫_{C_y^{2,2}} f_2(v_2) dv_2 + ∫_{C_y^{1,2}} ∫_{C_y^{1,1}} f_2(v_2) f_1(v_1) dv_1 dv_2   if y ∈ [r(u_1)t, r(u_2)t);    (5.168)

F_Y(y) = 1   if y ∈ [r(u_2)t, ∞).

If y ∈ [0, r(u_1)t):

239

Fig. 5.4  Markov state-transition-rate diagram for the case n = 2

240

F_Y(y) = 1 - e^{-λ_2 (y - r(u_0)t) / (r(u_2) - r(u_0))}
- [λ_2 (r(u_1) - r(u_0)) / (λ_1 (r(u_2) - r(u_0)) - λ_2 (r(u_1) - r(u_0)))]
× [ e^{-λ_2 (y - r(u_0)t) / (r(u_2) - r(u_0))} - e^{-λ_1 (y - r(u_0)t) / (r(u_1) - r(u_0))} ].    (5.169)

If y ∈ [r(u_1)t, r(u_2)t):

F_Y(y) = 1 - e^{-λ_2 (y - r(u_0)t) / (r(u_2) - r(u_0))}
- [λ_2 (r(u_1) - r(u_0)) / (λ_1 (r(u_2) - r(u_0)) - λ_2 (r(u_1) - r(u_0)))]
× [ e^{-λ_2 (y - r(u_0)t) / (r(u_2) - r(u_0))} - e^{[λ_1 (y - r(u_2)t) - λ_2 (y - r(u_1)t)] / (r(u_2) - r(u_1))} ].    (5.170)

If y ∈ [r(u_2)t, ∞):

F_Y(y) = 1.    (5.171)

5.3.9.3. Four State Markovian Acyclic Process

Consider now n = 3. See Fig. 5.5. The solution is presented in Appendix L.
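These closed-form results can be validated against simulation. A sketch with arbitrary test parameters (`F_three_state` is our name for the piecewise solution, valid for y ≥ r(u_0)t and λ_1(r_2 - r_0) ≠ λ_2(r_1 - r_0)):

```python
import math, random

def F_three_state(y, lam2, lam1, r2, r1, r0, t):
    """CDF of accumulated reward for the 3-state acyclic Markov process."""
    if y >= r2 * t:
        return 1.0
    c = lam1 * (r2 - r0) - lam2 * (r1 - r0)         # assumed nonzero
    e2 = math.exp(-lam2 * (y - r0 * t) / (r2 - r0))
    if y < r1 * t:                                   # first branch
        e1 = math.exp(-lam1 * (y - r0 * t) / (r1 - r0))
    else:                                            # second branch
        e1 = math.exp((lam1 * (y - r2 * t) - lam2 * (y - r1 * t)) / (r2 - r1))
    return 1.0 - e2 - (lam2 * (r1 - r0) / c) * (e2 - e1)

def simulate_reward(lam2, lam1, r2, r1, r0, t, rng):
    v2 = rng.expovariate(lam2)
    v1 = rng.expovariate(lam1)
    return (r2 * min(v2, t) + r1 * min(v1, max(t - v2, 0.0))
            + r0 * max(t - v2 - v1, 0.0))

lam2, lam1 = 0.8, 0.5
r2, r1, r0 = 5.0, 3.0, 1.0
t = 1.0
rng = random.Random(3)
samples = [simulate_reward(lam2, lam1, r2, r1, r0, t, rng) for _ in range(200000)]
for y in (2.0, 4.0):                                 # one test point per branch
    f_mc = sum(s <= y for s in samples) / len(samples)
    assert abs(F_three_state(y, lam2, lam1, r2, r1, r0, t) - f_mc) < 0.01
print("closed form agrees with simulation")
```

Both branches of the solution match the empirical distribution of the simulated reward to within Monte Carlo noise.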

241

Fig. 5.5  Markov state-transition-rate diagram for the case n = 3

242

5.3.10. Recursive Formulation of F_{Y|U}

As an example to motivate this section, consider the following scenario. Suppose we have modeled a multiprocessor computing system having n processors as an (n+1)-state stochastic process, where state i denotes i operational processors. Suppose too that we wish to compute the performability of this system for the cases n = 0, 1, ..., m. That is, we desire a family of performability distributions p_{S_0}, p_{S_1}, ..., p_{S_m}, where S_i is the i-processor system. For any system S_k that contains state u_i, the reward rate r(u_i) as well as the distribution of the sojourn time in u_i are the same. Rather than calculating each p_{S_i} independently from the ground up, we may find it possible to use information about p_{S_0}, p_{S_1}, ..., p_{S_{i-1}} to help determine p_{S_i}. This section examines that possibility.

In more general terms, assume that we have solved F_{Y|U} for a series of increasingly complex models, where the first model S_0 has only the single state sequence u_0 = (u_0), the second model S_1 has only the single sequence u_1 = (u_1, u_0), the third model S_2 has only u_2 = (u_2, u_1, u_0), and so forth, to S_{n-1} having only u_{n-1} = (u_{n-1}, u_{n-2}, ..., u_0). Assume, too, that these solutions are "parametric" (closed-form) in terms of the reward rates r(u_i) and the sojourn time densities in states u_i. Using the above information, we now wish to solve the next model in the series, i.e., S_n, the model that only contains the sequence u_n = (u_n, u_{n-1}, ..., u_0). Below we present a formulation of F_{Y|U}^{n}(· | u_n) based on the knowledge of F_{Y|U}^{n-1}(· | u_{n-1}).

We need one reasonable assumption to insure some similarity between systems. Given two of the systems, the sojourn time distributions of "corresponding" states must be "parametrically equivalent" in the sense to be defined below. Let each state u_i have a parameter λ_i which can be reflected in the sojourn time distribution of that state.
For example, if the sojourn time in state u_i is exponential and independent of the process's history, then the density of the sojourn time in state i is f_i(a) = λ_i e^{-λ_i a}. The parameter λ_i is allowed to vary from system to system, and state u_i of system S_k will

243

have parameter λ_i^k. For system S_k, let f_i^{λ_i^k} be the density of the sojourn time in state i, conditioned on the history of the process, and a function of the parameter λ_i^k.

To recursively build the F_{Y|U} of S_n from the F_{Y|U} of S_{n-1}, we require some notion of state correspondence between models. In particular, we need to know which state of system S_k corresponds to state u_i of system S_j. Since the history of the process can affect sojourn time distributions, we make the number of state transitions the basis of correspondence. Thus, for S_j and S_k, the pairings are u_j and u_k; u_{j-1} and u_{k-1}; ...; u_{j-i} and u_{k-i}. To understand intuitively the reasoning behind this definition, note that the densities f_{j-i|j,j-1,...,j-i+1}^{λ} and f_{k-i|k,k-1,...,k-i+1}^{λ} are similar in the sense that both are functions of the same number of variables.

Consider the systems S_j and S_k. Let 0 ≤ i ≤ j ≤ k. State u_{j-i} of S_j is parametrically equivalent to state u_{k-i} of S_k if, for any λ,

f_{j-i|j,j-1,...,j-i+1}^{λ} = f_{k-i|k,k-1,...,k-i+1}^{λ}    (5.172)

(and if i = j, f_0^{λ} = f_{k-j}^{λ}).    (5.173)

In other words, f_{j-i|j,j-1,...,j-i+1}^{λ} and f_{k-i|k,k-1,...,k-i+1}^{λ} are identical functions of the parameter λ. Let j ≤ k. We shall say that system S_j is parametrically equivalent to system S_k if, for every i such that 0 ≤ i ≤ j, state u_{j-i} of S_j is parametrically equivalent to state u_{k-i} of S_k. For example, suppose the sojourn time density in state u_{j-i} of S_j is λe^{-λa} and the density in state u_{k-i} of S_k is λe^{-λa}. Then clearly u_{j-i} and u_{k-i} are parametrically equivalent.

Let λ̄_k = (λ_k, λ_{k-1}, ..., λ_0) and let 0 ≤ i ≤ k. Then define F_Y^{k,λ̄_i}(· | u_k) to be the distribution of Y for system S_k given the sequence u_k and the vector of parameters λ̄_i. The

244

vector λ̄_i denotes the parameters of the sojourn time densities of states i, i-1, ..., 0, i.e., of f_i^{λ_i}, f_{i-1}^{λ_{i-1}}, ..., f_0^{λ_0}. Note that if k ≠ i, then the vector λ̄_k will contain elements λ_{k-i-1}, λ_{k-i-2}, ..., λ_0 which do not affect the distribution.

In much the same manner as we generalized the notation for F_{Y|U}, we now generalize the representation of the set C_y^{i,j} (see Eq. 5.154) to reflect explicitly the sequence of reward rates (r(u_k), r(u_{k-1}), ..., r(u_0)). Let m ≥ k ≥ i ≥ 0, i ≥ j ≥ 0, and r(u_m) = (r(u_m), r(u_{m-1}), ..., r(u_0)). Then let C_y^{i,j,k;r(u_m)} be the set C_y^{i,j} for the system S_k whose reward rates are the initial subsequence of r(u_m), i.e., the rates are (r(u_m), r(u_{m-1}), ..., r(u_{m-k})). The rates r(u_{m-k-1}), r(u_{m-k-2}), ..., r(u_0) do not affect C_y^{i,j,k;r(u_m)}. Eq. 5.154 then translates to

C_y^{i,j,k;r(u_m)} = ( [y - r(u_{m-k+j-1})t - Σ_{l=m-k+j+1}^{m} (r(u_l) - r(u_{m-k+j-1}))v_l] / (r(u_{m-k+j}) - r(u_{m-k+j-1})), [y - r(u_{m-k+i-1})t - Σ_{l=m-k+j+1}^{m} (r(u_l) - r(u_{m-k+i-1}))v_l] / (r(u_{m-k+j}) - r(u_{m-k+i-1})) ]_{>0}^{<∞}.    (5.174)

We make the following key proposition:

Lemma 5.27: (Relationship between C_y^{i,j,k;r(u_m)} and C_y^{i',j',k';r(u_m)}) Let m ≥ k ≥ i > 0, m ≥ k' > 0, and i ≥ j > k - k'. Then

C_y^{i,j,k;r(u_m)} = C_y^{i-(k-k'), j-(k-k'), k';r(u_m)}.    (5.175)

Proof:

245

By Eq. 5.174, both sets equal

( [y - r(u_{m-k+j-1})t - Σ_{l=m-k+j+1}^{m} (r(u_l) - r(u_{m-k+j-1}))v_l] / (r(u_{m-k+j}) - r(u_{m-k+j-1})), [y - r(u_{m-k+i-1})t - Σ_{l=m-k+j+1}^{m} (r(u_l) - r(u_{m-k+i-1}))v_l] / (r(u_{m-k+j}) - r(u_{m-k+i-1})) ]_{>0}^{<∞},    (5.176)

since m - k + j = m - k' + (j - (k - k')) and m - k + i = m - k' + (i - (k - k')). ∎

Finally, we can write the recursive solution:

Theorem 5.28: (Recursive representation of F_{Y|U}^{n,λ̄_n}) Let S_n and S_{n-1} be parametrically equivalent. Let y ∈ [r(u_{l-1})t, r(u_l)t), n ≥ l > 0. Then

F_{Y|U}^{n,λ̄_n}(y | u_n) = ∫_{C_y^{1,n,n;r(u_n)}} f_n^{λ_n}(v_n) ( ∏_{j=n-1}^{1} ∫_{C_y^{1,j,n;r(u_n)}} f_{j|n,n-1,...,j+1}^{λ_j}(v_j | v_n, v_{n-1}, ..., v_{j+1}) dv_j ) dv_n + F_{Y|U}^{n-1,λ̄_{n-1}}(y | u_{n-1}),    (5.177)

where F_{Y|U}^{n-1,λ̄_{n-1}} is instantiated, per the state correspondence, with the rates and sojourn-time parameters of the corresponding (top) states of S_n

(or, if l = 1,

F_{Y|U}^{n,λ̄_n}(y | u_n) = ∫_{C_y^{1,n,n;r(u_n)}} f_n^{λ_n}(v_n) ( ∏_{j=n-1}^{1} ∫_{C_y^{1,j,n;r(u_n)}} f_{j|n,n-1,...,j+1}^{λ_j}(v_j | v_n, v_{n-1}, ..., v_{j+1}) dv_j ) dv_n ).    (5.178)

246

Proof: To reduce the "noise," we let ⋆ denote the first term of Eq. 5.177 and all of Eq. 5.178, i.e.,

⋆ = ∫_{C_y^{1,n,n;r(u_n)}} f_n^{λ_n}(v_n) ( ∏_{j=n-1}^{1} ∫_{C_y^{1,j,n;r(u_n)}} f_{j|n,n-1,...,j+1}^{λ_j}(v_j | v_n, v_{n-1}, ..., v_{j+1}) dv_j ) dv_n.    (5.179)

If l = 1, then Eq. 5.178 follows directly from Eq. 5.35. Assume then that n ≥ l > 1. We prove Eq. 5.177 by starting with Eq. 5.161 and Eq. 5.174, converting the regions C_y^{i,j} to C_y^{i,j,n;r(u_n)}, using Lemma 5.27 to translate C_y^{i,j,n;r(u_n)} to C_y^{i-1,j-1,n-1;r(u_n)}, and then converting the regions C_y^{i-1,j-1,n-1;r(u_n)} to C_y^{i-1,j-1}. Proper bookkeeping of the i's and j's is necessary, and translations of the densities f_{j|n,n-1,...,j+1} are also required. The steps are:

F_{Y|U}^{n,λ̄_n}(y | u_n) = Σ_{i=1}^{l} ∫_{C_y^{i,n,n;r(u_n)}} f_n^{λ_n}(v_n) ( ∏_{j=n-1}^{i} ∫_{C_y^{i,j,n;r(u_n)}} f_{j|n,n-1,...,j+1}^{λ_j}(v_j | v_n, v_{n-1}, ..., v_{j+1}) dv_j ) dv_n    (5.180)

= ⋆ + Σ_{i=2}^{l} ∫_{C_y^{i,n,n;r(u_n)}} f_n^{λ_n}(v_n) ( ∏_{j=n-1}^{i} ∫_{C_y^{i,j,n;r(u_n)}} f_{j|n,n-1,...,j+1}^{λ_j}(v_j | v_n, v_{n-1}, ..., v_{j+1}) dv_j ) dv_n    (5.181)

(and "inverting" the index i, summing from i = l down to i = 2:)

= ⋆ + Σ_{i=l}^{2} ∫_{C_y^{i,n,n;r(u_n)}} f_n^{λ_n}(v_n) ( ∏_{j=n-1}^{i} ∫_{C_y^{i,j,n;r(u_n)}} f_{j|n,n-1,...,j+1}^{λ_j}(v_j | v_n, v_{n-1}, ..., v_{j+1}) dv_j ) dv_n    (5.182)

247

(using Lemma 5.27:)

= ⋆ + Σ_{i=l}^{2} ∫_{C_y^{i-1,n-1,n-1;r(u_n)}} f_n^{λ_n}(v_n) ( ∏_{j=n-1}^{i} ∫_{C_y^{i-1,j-1,n-1;r(u_n)}} f_{j|n,n-1,...,j+1}^{λ_j}(v_j | v_n, v_{n-1}, ..., v_{j+1}) dv_j ) dv_n    (5.183)

(and since S_n is parametrically equivalent to S_{n-1}, renaming each integration variable v_j to v_{j-1}:)

= ⋆ + Σ_{i=l}^{2} ∫_{C_y^{i-1,n-1,n-1;r(u_n)}} f_{n-1}^{λ_{n-1}}(v_{n-1}) ( ∏_{j=n-2}^{i-1} ∫_{C_y^{i-1,j,n-1;r(u_n)}} f_{j|n-1,n-2,...,j+1}^{λ_j}(v_j | v_{n-1}, v_{n-2}, ..., v_{j+1}) dv_j ) dv_{n-1}    (5.184)

(shifting the index i by 1:)

= ⋆ + Σ_{i=l-1}^{1} ∫_{C_y^{i,n-1,n-1;r(u_n)}} f_{n-1}^{λ_{n-1}}(v_{n-1}) ( ∏_{j=n-2}^{i} ∫_{C_y^{i,j,n-1;r(u_n)}} f_{j|n-1,n-2,...,j+1}^{λ_j}(v_j | v_{n-1}, v_{n-2}, ..., v_{j+1}) dv_j ) dv_{n-1}    (5.185)

(and "reinverting" the index i:)

= ⋆ + Σ_{i=1}^{l-1} ∫_{C_y^{i,n-1,n-1;r(u_n)}} f_{n-1}^{λ_{n-1}}(v_{n-1}) ( ∏_{j=n-2}^{i} ∫_{C_y^{i,j,n-1;r(u_n)}} f_{j|n-1,n-2,...,j+1}^{λ_j}(v_j | v_{n-1}, v_{n-2}, ..., v_{j+1}) dv_j ) dv_{n-1}    (5.186)

= ⋆ + F_{Y|U}^{n-1,λ̄_{n-1}}(y | u_{n-1}). ∎

Some examples of this substitution are presented in Appendices M and N.
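Theorem 5.28 can be exercised numerically on the three-state process of Section 5.3.9.2. For y ∈ [r(u_1)t, r(u_2)t), the term ⋆ is the double integral over C_y^{1,2} × C_y^{1,1}, while F^{1} is the two-state solution of Section 5.3.9.1 instantiated, per the state correspondence, with the top parameters (sojourn rate λ_2, reward rates r(u_2) and r(u_1)). The sketch below (names and test values are ours) checks ⋆ + F^{1} against a direct simulation of the full process.

```python
import math, random

lam2, lam1 = 0.8, 0.5
r2, r1, r0 = 5.0, 3.0, 1.0
t, y = 1.0, 4.0                      # y in [r1*t, r2*t), i.e. l = 2

# star: integral over C_y^{1,2} = (a, A] of f2(v2) times integral over [0, B(v2)] of f1
a = (y - r1 * t) / (r2 - r1)
A = (y - r0 * t) / (r2 - r0)
steps = 200000
star = 0.0
for k in range(steps):
    v2 = a + (k + 0.5) * (A - a) / steps
    B = (y - r0 * t - (r2 - r0) * v2) / (r1 - r0)
    star += lam2 * math.exp(-lam2 * v2) * (1.0 - math.exp(-lam1 * B))
star *= (A - a) / steps

# F^{n-1}: two-state solution with rates (r2, r1) and sojourn rate lam2
F1 = 1.0 - math.exp(-lam2 * (y - r1 * t) / (r2 - r1))

# Monte Carlo estimate of F^2(y) for the full three-state process
rng = random.Random(4)
trials = 200000
hits = 0
for _ in range(trials):
    v2 = rng.expovariate(lam2)
    v1 = rng.expovariate(lam1)
    Y = r2 * min(v2, t) + r1 * min(v1, max(t - v2, 0.0)) + r0 * max(t - v2 - v1, 0.0)
    hits += Y <= y
assert abs((star + F1) - hits / trials) < 0.01
print("star + F1 reproduces the three-state distribution")
```

The recursion thus trades the two-term sum of Eq. 5.168 for a single new integral plus a previously solved (and re-parameterized) model.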

248

5.3.10.1. Examples of Recursively Derived Solutions

In this section are discussed some examples of the recursive performability solution of Theorem 5.28. To restate the problem, we wish to determine the distribution of reward for a finite-state nonrecoverable process. (See Section 5.2.4.1 [Definition of a Nonrecoverable Process] for the definition.) As in Section 5.3.9 [Closed-Form Solutions], the distribution of the sojourn time in state i is exponential with rate λ_i and is independent of the sojourn times of the process in any other state j and of the present time. We also assume that λ_j^k = λ_j for all j and k. Thus, for i ≥ 0, j ≥ i:

f_{i|j,j-1,...,i+1}^{λ_i}(v) = λ_i e^{-λ_i v}.    (5.187)

5.3.10.1.1. Two State Markovian Acyclic Process

Consider the simple two-state Markovian acyclic process of Section 5.3.9.1 [Two State Markovian Acyclic Process]. (See Fig. 5.3.) Call the system S_1; so k = 1, λ̄_1 = (λ_1, λ_0), and the only state sequence is u = (1, 0). For y ∈ [0, r(u_1)t) (i.e., l = 1):

j = n = i = 1:  C_y^{1,1,1;r(u_1)} = [0, (y - r(u_0)t) / (r(u_1) - r(u_0))]  [case a-i)];    (5.188)

j = i - 1 = 0:  C_y^{1,0,1;r(u_1)} = ∞  [case a-iii)].    (5.189)

F_{Y|U}^{1,λ̄_1}(y | u_1) = ∫_{C_y^{1,1,1;r(u_1)}} f_1^{λ_1}(v_1) dv_1    (5.190)

249

= ∫_0^{(y - r(u_0)t)/(r(u_1) - r(u_0))} λ_1 e^{-λ_1 v_1} dv_1 = 1 - e^{-λ_1 (y - r(u_0)t) / (r(u_1) - r(u_0))}.    (5.191)

For y ∈ [r(u_1)t, ∞):

F_Y(y) = 1.    (5.192)

The performability distribution is then

F_Y(y) = 1 - e^{-λ_1 (y - r(u_0)t) / (r(u_1) - r(u_0))},  y ∈ [0, r(u_1)t);    (5.193)
F_Y(y) = 1,  y ∈ [r(u_1)t, ∞),

which is the same result as that of Section 5.3.9.1 [Two State Markovian Acyclic Process].

5.3.10.1.2. Three State Markovian Acyclic Process

Consider now n = 2. See Fig. 5.4. Call the system S_2; so k = 2, λ̄_2 = (λ_2, λ_1, λ_0), and the only state sequence is u = (2, 1, 0). The derivation is presented in Appendix M. Of course, the results are the same as those in Appendix K, but the derivation is significantly shorter.

5.3.10.1.3. Four State Markovian Acyclic Process

Next, consider n = 3. See Fig. 5.5. The system is S_3, k = 3, λ̄_3 = (λ_3, λ_2, λ_1, λ_0), and the only state sequence is u = (3, 2, 1, 0). The derivation is presented in Appendix N. The results are the same as those in Appendix L.

250

5.3.11. Numerical Solutions

5.3.11.1. METAPHOR

The remaining difficulty is calculating the integrations. For certain classes of probability densities f_{V_i | V_n, V_{n-1}, ..., V_{i+1}}, it may be possible to perform the integrations symbolically to obtain a true closed-form solution. For example, a closed-form solution for the performability of a queueing system is derived in [33]. The symbolic integrations may be performed either (laboriously) by hand or by computer programs that manipulate symbolic quantities, e.g., MACSYMA [200] and REDUCE2 [201]. A recursive version of Eqs. 5.35 and 5.134-5.143 is derived in Section 5.3.10 [Recursive Formulation of F_{Y|U}] and may help in reducing the amount of work necessary to perform the required integrations symbolically.

However, when performability is evaluated for complex systems with large state spaces, one is primarily interested in numerical results as opposed to closed-form solutions. For this purpose, we have written a numerical program based on Eqs. 5.25, 5.27, 5.35, and 5.134-5.143. This program is called "meta_continuous" and is now part of a larger performability evaluation package, METAPHOR (see Sections 3.4.4 [METAPHOR-A Performability Modeling and Evaluation Tool] and 4.4 [METAPHOR-A Performability Modeling and Evaluation Tool] for descriptions of METAPHOR). To obtain a system's performability, METAPHOR is given information about each trajectory sequence u, including the sequence itself, the conditional densities f_{V_i | V_n, V_{n-1}, ..., V_{i+1}}, and the utilization period T = [0, t]. METAPHOR then computes the regions of integration C_y^{i,j} and, using Gaussian quadrature, computes F_{Y|U}. (See Section 5.3.4 [Formulation of F_{Y|U}].) meta_continuous contains approximately 4,000 lines of C [176], including menu and project management functions. Appendix C contains the Unix manual entry for meta_continuous, while Appendix E contains the calling structure for meta_continuous.
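The computation performed by meta_continuous can be sketched in miniature for the three-state Markovian example of Section 5.3.9.2. We substitute the composite midpoint rule for the Gaussian quadrature used by METAPHOR; all names and test values below are ours. The regions C_y^{i,j} are built, and the nested integrals are evaluated numerically:

```python
import math

def integrate(f, a, b, steps=800):
    """Composite midpoint rule (stands in for METAPHOR's Gaussian quadrature)."""
    if b <= a:
        return 0.0
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

def F_nested(y, lam2, lam1, r2, r1, r0, t):
    """F_Y(y), y in [r0*t, r2*t), for the 3-state process, via the regions C_y^{i,j}."""
    total = 0.0
    if y >= r1 * t:
        # i = 2 term: C_y^{2,2} = [0, (y - r1*t)/(r2 - r1)]; inner regions are [0, inf)
        total += integrate(lambda v: lam2 * math.exp(-lam2 * v),
                           0.0, (y - r1 * t) / (r2 - r1))
    # i = 1 term: outer region C_y^{1,2}, inner region C_y^{1,1} = [0, B(v2)]
    lo = max(0.0, (y - r1 * t) / (r2 - r1))
    up = (y - r0 * t) / (r2 - r0)
    def outer_integrand(v2):
        B = (y - r0 * t - (r2 - r0) * v2) / (r1 - r0)
        return lam2 * math.exp(-lam2 * v2) * integrate(
            lambda v: lam1 * math.exp(-lam1 * v), 0.0, B)
    total += integrate(outer_integrand, lo, up)
    return total

# Reference values computed from the closed-form solution of Section 5.3.9.2
assert abs(F_nested(2.0, 0.8, 0.5, 5.0, 3.0, 1.0, 1.0) - 0.02155) < 2e-3
assert abs(F_nested(4.0, 0.8, 0.5, 5.0, 3.0, 1.0, 1.0) - 0.34413) < 2e-3
print("nested quadrature matches the closed-form values")
```

The same structure — an outer integral whose region and integrand depend on previously fixed coordinates — is what makes the general program organization region-driven rather than formula-driven.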

251

5.3.11.2. Multiprocessor/Air Conditioner Example

To illustrate the application of the solution procedure for F_Y, and to exhibit its ability to deal with non-semi-Markov base models, we consider the multiprocessor/air conditioner example discussed in Section 5.2.2 [An Example of a Reward-Based Performability Model]. (See Fig. 5.1.) Recall that the performance (reward) variable Y is the number of jobs processed during some utilization period T = [0, t]. State (i, j) reflects the number of processors operational and whether the air conditioner is operational; each processor is identical and has a computation rate of δ jobs/hour. The reward structure is r(i, j) = i·δ. For simplicity, the example constrains the system to a single air conditioner. (A more complete example would include multiple air conditioners.)

Suppose that, relative to a specified threshold y (y ≥ 0), the system user is interested in processing more than y jobs during the bounded utilization period. For instance, the system might be a computer in a business or university during a working day, or it might be a processor handling financial transactions overnight. Note that especially in the latter category of applications, availability over the entire utilization period is not required, since most of the computation could be done early in the period or, alternatively, spread out over the entire period. In performability terms, we are considering (for a given value of y) the set of accomplishment levels B_y = {b | b > y}. The performability p(B_y), i.e., the probability that the number of jobs processed (reward) Y is greater than y, can then be obtained from F_Y, since p(B_y) = Prob[Y > y] = 1 - F_Y(y).

The stochastic properties of the processors are affected by the failure time of the air conditioner in such a way that the base model is not semi-Markov.
At the beginning of the utilization period, the air conditioner and all of the processors are functioning, and the temperature of the room containing the equipment is 20° C. Since the amount of heat which the air conditioner must dissipate places stress on the compressor, the air conditioner's failure rate is influenced by the number of processors it must cool. Assume the air conditioner fails with a constant failure rate λ_AC(N) = N·0.05 failures/hour, where N is the number of

processors in the room. If the air conditioner fails, the ambient temperature R of the room begins to increase to a higher steady-state temperature with an exponential rise time; the number of processors in the room affects the speed at which the temperature rises. More precisely,

    R(Δτ) = (55 + 2N) − (55 + 2N − 20)e^{−ρNΔτ/(55+2N−20)}    (5.194)

where (55 + 2N)°C is the new steady-state temperature, Δτ is the time (in hours) since the air conditioner failed, and ρ = 10 degrees/hour/processor is a constant reflecting the rate of temperature increase. If a processor fails, it does not shut off and so continues contributing heat to the room. The failure behavior of the processors is influenced by the ambient temperature; each processor fails at a constant rate which varies linearly with room temperature (over the range 20°C to 55°C) from 0.001 failures/hour to 0.1 failures/hour. If the room temperature is R°C,

    λ_P(R) = 0.001 + (0.1 − 0.001)(R − 20)/(55 − 20).    (5.195)

There are N+1 state sequences u such that Prob[U = u] > 0 (see Fig. 5.1):

    {u} = {((N,1),(N,0),(N−1,0),...,(0,0)),
           ((N,1),(N−1,1),(N−1,0),...,(0,0)), ...,    (5.196)
           ((N,1),(N−1,1),(N−2,1),...,(0,1),(0,0))}.

Let u = ((N,1),(N−1,1),...,(k,1),(k,0),...,(0,0)). The probability of the sequence u is, with λ_P = λ_P(20),

    Prob[U = u] = (λ_AC/(kλ_P + λ_AC)) ∏_{j=k+1}^{N} jλ_P/(jλ_P + λ_AC)   if k < N    (5.197)
                = λ_AC/(Nλ_P + λ_AC)                                      if k = N.

Suppose u_i = (j,1); the conditional probability density function for the time the process

spends in state u_i is

    f_{V_i|V_n,...,V_{i+1},U}(v_i | v_n, v_{n−1}, ..., v_{i+1}, u)
        = (jλ_P(20) + λ_AC)e^{−(jλ_P(20)+λ_AC)v_i}.    (5.198)

Suppose u_i = (j,0) and that the first state in u representing a failed air conditioner is u_k, k ≥ i; the conditional probability density function for the time the process spends in state u_i is

    f_{V_i|V_n,...,V_{i+1},U}(v_i | v_n, v_{n−1}, ..., v_{i+1}, u)
        = K⁻¹ jλ_P(R(Δτ))e^{−jλ_P(R(Δτ))v_i}    (5.199)

where Δτ = v_k + v_{k−1} + ... + v_i, and K is a normalization constant equal to

    K = ∫₀^∞ jλ_P(R(Δτ′))e^{−jλ_P(R(Δτ′))v} dv    (5.200)

(where Δτ′ = v_k + v_{k−1} + ... + v_{i+1} + v). That the base model is not semi-Markov can be seen from the time-varying failure rates associated with states (i,0) [Eq. 5.199]; these rates are functions of the process's history and cannot be inferred from the present state, present time, and time of entry to the state.

It is clear how Eqs. 5.197, 5.198, and 5.199 can be applied to Eqs. 5.25 and 5.27. To see how Eqs. 5.35 and 5.139–5.142 are employed to calculate F_Y, consider N = 3, u = ((3,1),(3,0),(2,0),(1,0),(0,0)) and y = 1500 ∈ [r(u_1)t, r(u_2)t). From Eq. 5.35, we see that F_Y is the sum of six multiple integrals, three of which are 0 since they are integrated over empty C_y (these are C_y^5, C_y^4, and C_y^0). The other three C_y correspond to the sets of base model trajectories which are 3-, 2-, and 1-resolvable. Consider C_y^2. From Eq. 5.139:

    C_{y,4}^2 = [0, (1500 − 1000)/(300 − 100)) = [0, 2.5);    (5.201)

from Eq. 5.140:

    C_{y,3}^2 = [ (1500 − 2000 − (300 − 200)v_4)/(300 − 200),
                  (1500 − 1000 − (300 − 100)v_4)/(300 − 100) ) ∩ [0, ∞)
              = [0, (500 − 200v_4)/200);    (5.202)

from Eq. 5.141:

    C_{y,2}^2 = [0, (1500 − 1000 − (300 − 100)v_3 − (300 − 100)v_4)/(200 − 100))
              = [0, (500 − 200(v_3 + v_4))/100);    (5.203)

and from Eq. 5.142:

    C_{y,1}^2 = [0, ∞)  and  C_{y,0}^2 = [0, ∞).    (5.204)

Once regions C_y^3 and C_y^1 are similarly obtained, Eq. 5.35 can then be evaluated. This process must be repeated for each state sequence u, and then Eq. 5.25 can be applied to obtain the performability p(B_y). Fig. 5.6 shows a plot of p(B_y) = 1 − F_Y(y) for N = 1, 2, 3, and 4 processors, t = 10 hours and δ = 100 jobs/hour.

Note that these evaluations provide a considerable amount of information regarding the system's ability to perform (provide reward) in the presence of faults. It is easy to show, for example, that the performability p(B_y) is 0 when y is greater than or equal to N·δ·t (e.g., for N = 2, p(B_2000) = 0; see Fig. 5.6). Hence, to obtain a nonzero performability, the number of processors must be greater than y/(δt). For instance, to have a nonzero probability of accomplishing more than 1500 jobs, one must have at least 2 processors. Generally, for the values of N shown on the plot, there is a significant

[Figure: curves of p(B_y) versus y (×10³), one per value of N.]

Fig. 5.6 Performability plot for the multiprocessor/air conditioner example of Section 5.3.11.2

gain in p(B_y) for values of y above 1000 when additional processors are included in the system. For values of y below 1000, there is relatively little gain from having more than a single processor. Indeed, if the specified minimum reward is between about 500 and 1000 jobs, a single processor provides a greater probability of performing within B_y than do two processors. In non-critical applications, a system designer may choose to settle for a lower performability to avoid the cost of additional processors. Information such as that provided by Fig. 5.6 can be quite useful in investigating such tradeoffs. For example, suppose the threshold is y = 1500. The difference between the performability with N = 3 and with N = 4 is about 0.03, while the difference between N = 2 and N = 3 is about 0.12. The probability of accomplishing more than 1500 jobs with 3 processors is about 0.96. If this probability is adequate for the application, and the additional 0.03 probability is not worth the cost of an extra processor, the designer may well choose a 3-processor system.
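Although the thesis computes p(B_y) analytically (via Eqs. 5.25 and 5.35), the qualitative claims above can be cross-checked by directly simulating the base model. The sketch below is not METAPHOR code: all names are invented, λ_P is assumed to saturate at 0.1 failures/hour once the room temperature passes 55°C, and the exponent in Eq. 5.194 is taken as −ρNΔτ/(55 + 2N − 20).

```python
# Illustrative Monte Carlo cross-check of p(B_y) = Prob[Y > y] under the
# assumptions noted above; this is not the thesis's analytic solution.
import math
import random

DELTA = 100.0   # jobs/hour/processor
T = 10.0        # utilization period, hours
RHO = 10.0      # degrees/hour/processor

def lambda_p(r):
    """Eq. 5.195, assumed capped at 0.1 failures/hour above 55 C."""
    return min(0.1, 0.001 + (0.1 - 0.001) * (r - 20.0) / (55.0 - 20.0))

def room_temp(n, dtau):
    """Eq. 5.194: exponential rise from 20 C toward (55 + 2N) C."""
    r_inf = 55.0 + 2 * n
    return r_inf - (r_inf - 20.0) * math.exp(-RHO * n * dtau / (r_inf - 20.0))

def simulate_jobs(n, rng):
    """One base-model trajectory; returns total jobs processed in [0, T]."""
    t, working, ac_down_at, jobs = 0.0, n, None, 0.0
    lam_ac = 0.05 * n
    while t < T and working > 0:
        if ac_down_at is None:            # air conditioner still working
            rate = working * lambda_p(20.0) + lam_ac
            dt = rng.expovariate(rate)
            step = min(dt, T - t)
            jobs += working * DELTA * step
            t += step
            if dt == step and t < T:      # an event occurred before T
                if rng.random() < lam_ac / rate:
                    ac_down_at = t
                else:
                    working -= 1
        else:                             # time-varying hazard: thinning
            bound = working * 0.1         # lambda_p never exceeds 0.1
            dt = rng.expovariate(bound)
            step = min(dt, T - t)
            jobs += working * DELTA * step
            t += step
            if dt == step and t < T:
                lam = working * lambda_p(room_temp(n, t - ac_down_at))
                if rng.random() < lam / bound:
                    working -= 1
    return jobs

def performability(n, y, trials=2000, seed=1):
    rng = random.Random(seed)
    return sum(simulate_jobs(n, rng) > y for _ in range(trials)) / trials
```

With this sketch, performability(n, y) is exactly 0 whenever y ≥ N·δ·t, matching the structural observation above.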

CHAPTER 6

CONCLUSIONS

6.1. Contributions

The need for combined reliability and performance measures for a broad class of systems is becoming increasingly recognized. Performability is a sufficiently general combined measure. Acceptance of performability as a measure requires simple, powerful, and automatable techniques for modeling, calculating, and using a system's performability. This thesis has addressed this issue. In particular, a) we have generalized the analysis of systems having discrete performance variables, and b) we have extended the development of methods for analytically solving performability models using continuous performance variables.

With regard to a):

i) We have described a calculus for relating low-level structural behavior to high-level system performance.

ii) We have proposed algorithms and heuristics for efficiently performing the calculations associated with the calculus.

iii) We have implemented these algorithms in the computer package METAPHOR.

iv) Employing that computer package, we have analyzed example systems.

In the case of b):

i) We have developed the problem more generally in the context of reward models and nonrecoverable processes [55], [59].

ii) We have derived a general solution for the performability of a system modeled by a finite-state, acyclic, nonrecoverable process. This solution takes the form of an integral equation. The solution has been illustrated.

iii) We have derived a recursive form of the solution of ii). In addition, we have presented specific examples of the recursive solution.

iv) We have discussed a computer package we have written that implements the solution.

v) Using the above tools, we have analyzed a nontrivial example.

vi) We have begun consideration of still more general models.

These contributions constitute a significant extension of performability theory and, more broadly, of performance and reliability theory for degradable systems.

6.2. Further Research

The research described in this thesis has resulted in techniques for determining the performability of many useful computing systems. Building upon the work of this thesis, many questions can now be pursued, some "local" in scope, i.e., specific to the topics addressed in this dissertation, and others more "global," i.e., concerning the direction of general performability research.

In the area of the DPV (discrete performance variable, Chapter 4) methodology, the following are among the obvious refinements and extensions:

i) Still more efficient solution techniques must be developed. Aiding such derivations would be an analysis of the complexity of the present algorithms. Approximation methods would be useful for evaluating large systems. Also, it may be possible to describe special-purpose architectures for implementing many of the evaluation algorithms.

ii) The theory described in this dissertation is general, and so application of the work to areas other than those explicitly considered here should be investigated. In particular, systems such as nuclear reactors may be advantageously modeled by techniques having the time and spatial dependencies of the methodology of Chapter 4. However, more advanced solution techniques [as discussed in point i) above] would be required for large-scale evaluations.

For the CPV (continuous performance variable, Chapter 5) methodology, the following remains to be done:

i) More efficient implementations (numerical integration programs) of the solution should be researched, and the feasibility of using MACSYMA-class symbolic integration programs should be considered. Further, obtaining closed-form solutions with no integration terms for certain classes of systems (e.g., those with Markovian reward models) should be investigated. Additional study of approximation techniques, especially to remove the acyclic restriction (see Section 5.2.5.4 [Approximating Cyclic Processes]), may prove useful for studying other classes of systems, such as those with repair.

ii) A different method of specifying a system to METAPHOR should be developed. A "performability language" for describing the reward model could be written and a compiler constructed. For instance, the user could specify for each state: the next states with their probabilities, the sojourn time densities, and the reward.

iii) As with the DPV methodology, the CPV theory described in this dissertation is general, and so application of the work to areas other than computing systems should be

considered. For instance, applications to economic systems may be possible. The concept of reward can be interpreted many ways (see Section 5.2.3 [Determination of Reward Rates]); investigation of additional possible interpretations may lead to still more applications.

iv) Additional techniques for using the reward models considered in this thesis to study design tradeoffs should be developed.

v) The concept of reward model should be broadened. For example, the reward rate r(i) of state i could be extended so that r(i) is a function not just of i but of the entire history of the process to the present and of the sequence of states visited by the base model during [0, ∞). We refer to such an extension as a "history based" reward model (HBRM). The interpretation of reward in history based reward models is identical to that in reward models; only the functional description of reward rate is different. The extension may be fairly straightforward, with the terms involving products v_i r_i replaced with integrations. Such models would include the concepts of discounting (an exponential decay with time of the reward rate) and bonuses (instantaneous "impulse" reward upon entering a state).

vi) The restriction of a nonrecoverable model should be weakened. For example, it may be possible to allow an increase in the reward rate within a relatively small range, e.g., the rate in a given state can never become greater than the smallest rate of the previous state.
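As one possible formalization of the HBRM idea in point v) above (a sketch only, not a development from the thesis; the discount rate α, bonus function b, and entry times τ_i are assumed notation):

```latex
Y \;=\; \int_{0}^{t} e^{-\alpha \tau}\, r(X_\tau)\, d\tau
   \;+\; \sum_{i} e^{-\alpha \tau_i}\, b(u_i),
```

where X_τ is the base-model state at time τ, u_i is the i-th state visited with entry time τ_i, α ≥ 0 is a discount rate, and b assigns an instantaneous "impulse" reward on entry to a state. Setting α = 0 and b ≡ 0 recovers the ordinary reward variable, whose terms are the products v_i r_i mentioned above.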

APPENDICES 261

APPENDIX A

Unix manual entry for metaphor

METAPHOR ( 1 )            UNIX Programmer's Manual             METAPHOR ( 1 )

NAME
     metaphor - modeling and evaluation tool for performability

SYNOPSIS
     metaphor

DESCRIPTION
     The modeling and evaluation aid for performability, metaphor, has two
     main components: a tool for discrete performance variable models (see
     meta_discrete(1)) and a tool for continuous performance variable models
     (see meta_continuous(1)). metaphor uses menu(1) to present the user
     with a simple menu-based method of choosing between the two classes of
     performability analysis. For details, refer to menmet(1),
     meta_continuous(1) and meta_discrete(1).

SEE ALSO
     menu(1), menmet(1), meta_continuous(1) and meta_discrete(1).

AUTHOR
     D. G. Furchtgott

7th Edition                       UofM ECE                                  1

APPENDIX B

Unix manual entry for meta_discrete

META_DISCRETE ( 1 )       UNIX Programmer's Manual        META_DISCRETE ( 1 )

NAME
     meta_discrete - calculate performability distributions

SYNOPSIS
     meta_discrete

DESCRIPTION
     meta_discrete computes performability distributions using the theory
     developed in the author's thesis. meta_discrete is part of the
     performability modeling and evaluation package metaphor(1).
     meta_discrete takes as input a logic description of a performability
     model hierarchy and a description of the probabilistic nature of the
     base model. The performability is then computed. The consistency of the
     model hierarchy can be checked if the user desires.

     The following commands are recognized:

     help    Gives a short list of all available commands.

     exit    Leaves meta_discrete.

     data    This command has not been implemented in the C version of
             meta_discrete. It does work on the MTS APL version. data allows
             one to view the input data.

     alter   This command has not been implemented in the C version of
             meta_discrete. It does work on the MTS APL version. alter
             allows one to change the input data.

     calc    This command has not been implemented in the C version of
             meta_discrete. It does work on the MTS APL version. calc causes
             METAPHOR to enter the APL calculator mode.

     com     Allows the user to enter comments in the midst of a
             meta_discrete session. All following input is ignored until the
             single word 'exit' appears by itself on a line.

     eval    Tells meta_discrete that the model construction is complete and
             the user is now ready to calculate the probability of the
             accomplishment levels. meta_discrete assumes that each bottom
             level attribute is statistically independent and so, for each
             array product, meta_discrete does a separate set of probability
             calculations for each attribute. Multiplication of the
             resulting probabilities is done to combine the results.
             meta_discrete will prompt the user for the relationships
             between the base model and the stochastic description that will
             be input.
             For each bottom level attribute, the user must specify: the
             number of states in the stochastic model, the number of
             attribute values which each stochastic model state represents,
             for each stochastic model state, the specific attribute values
             represented, for each base model phase, the transition (P)
             matrix (see below), for each phase transition, the interphase
             (H) matrix (or, if the last phase, the (F) matrix; see below),
             and finally, for each bottom level attribute, the initial
             probability (I) vector.

     meta_discrete recognizes several transition (P) matrices (see the
     METAPHOR User's Guide for more information about identity, given,
     nfail, and dedfail).

     identity     meta_discrete generates an identity transition matrix,
                  i.e., with probability one, the system stays in the same
                  state.

     given        meta_discrete prompts for each entry and checks that a
                  genuine stochastic matrix is input.

     exponential  meta_discrete constructs a transition matrix that assumes
                  a system having N identical, statistically independent
                  components with constant failure

                  rates. The user is prompted for the failure rates of the
                  components and the length of the phase.

     nfail        meta_discrete constructs a transition matrix that assumes
                  a system having N groups of identical, statistically
                  independent components with constant failure rates. Group
                  i has n_i working components, i.e., (n1, n2, ..., nN). The
                  user is prompted for the failure rate of the components,
                  the length of the phase, the number of groups, and the
                  number of components in each group. State i of the model
                  corresponds to the following encoding (much like the
                  decode operator of APL): Take the binary representation of
                  (2^N)-1-i. Then the i-th digit of the binary
                  representation (read left to right) represents the state
                  of the corresponding component in the system, 0 if failed,
                  1 if not failed.

     dedfail      meta_discrete constructs a transition matrix that assumes
                  a system having N components, each identical,
                  statistically independent, and having a constant failure
                  rate. The state of the model reflects specifically which
                  components are working, i.e., dedfail is nfail with N
                  groups of 1 component each. The user is prompted for the
                  failure rate of the components, the length of the phase,
                  and the number of components. The number of states must be
                  a power of two. State i of the model corresponds to the
                  following encoding (much like the encode operator of APL):
                  Assign each component a unique integer between 1 and N.
                  Take the binary representation of (2^N)-1-i. Then the i-th
                  digit of the binary representation (read from left to
                  right) represents the state of the corresponding component
                  in the system, 0 if failed, 1 if not failed.

     sift         A special transition matrix is generated which is suitable
                  for the SIFT example. (See Meyer, Furchtgott, and Wu, IEEE
                  Trans. Computers, June 1980.)
                  The user is prompted for the number of processors, the
                  number of busses, the failure rates of the components, and
                  the length of the phase.

     dualdual     A special transition matrix is generated which is suitable
                  for the dual-dual example of Hitt, Bridgman, and Robinson.
                  The user is prompted for the failure rates of the
                  components and the length of the phase.

     meta_discrete also recognizes several intraphase (G) matrices (see the
     METAPHOR User's Guide for more information about identity, given,
     nfail, and dedfail).

     identity     meta_discrete generates an identity transition matrix,
                  i.e., with probability one, the system stays in the same
                  state.

     given        meta_discrete prompts for each entry and checks that a
                  genuine stochastic matrix is input.

     build   Initiates construction of a model hierarchy. Information is
             collected regarding the accomplishment levels, the first two
             model levels, the level-0 capability function, and the level-1
             interlevel translation. When meta_discrete has gathered this
             data, the level-1 function is computed. The user is asked for:
             the number of accomplishment levels, their names, the names of
             the top two hierarchy levels, the number of level-0 attributes,
             their names, and the number of values they can assume,

             the number of level-0 phases and their names, the number of
             level-1 attributes, their names, and the number of values they
             can assume, the number of level-1 phases and their names, and,
             for each accomplishment level, the number of array products
             associated with that accomplishment level, whether the user
             wants consistency checked, totalness checked, or reduction,
             each level-0 to accomplishment level array product, and, for
             each level-0 phase/attribute/value entry, the number of array
             products associated with that entry, whether the user wants
             consistency checked, totalness checked, or reduction, each
             level-1 to level-0 array product, and finally whether the user
             wants a printout of the derived level-1 capability function.

     checkpoint
             Makes a copy of the current program image (via vfork(2)) and
             begins execution of the copy. The user can return to the
             current state by using the meta_discrete exit command. Any
             number of checkpoints can be stacked (of course, up to limits
             of the machine and the number of user pids allowed).

     echo    Toggles whether input is echoed. By default, echo is off. This
             command is useful with the source command.

     brief   Toggles whether most meta_discrete output is printed or
             suppressed. By default, brief is off. Major error messages and
             performability results cannot be suppressed. This command is
             useful with the source command.

     next    Begin construction of the next level of the hierarchy. Most
             information regarding level-0 is discarded, level-1 is renamed
             to level-0, the level-1 capability function is renamed to the
             level-0 capability function, and the next level of the
             hierarchy is input. When meta_discrete has gathered this data,
             the level-1 function is computed.
             The user is asked for: the number of level-1 attributes, their
             names, and the number of values they can assume, the number of
             level-1 phases and their names, and, for each level-0
             phase/attribute/value entry, the number of array products
             associated with that entry,

             whether the user wants consistency checked, totalness checked,
             or reduction, each level-1 to level-0 array product, and
             finally whether the user wants a printout of the derived
             level-1 capability function.

     source  meta_discrete prompts for the name of a file and takes its
             input from that file. By default, meta_discrete takes its input
             from the standard input, usually the user's terminal.

     sink    meta_discrete prompts for the name of a file and puts its
             output in that file. The special filename '-' indicates that
             output should be printed on the standard output, usually the
             user's terminal. By default, meta_discrete prints its output on
             the standard output.

     printh  Print the interphase (H) matrices.

     printlt Print the (current) level-1 interlevel translation (k or kappa
             function).

     printlc Print the (current) level-1 capability function.

     printm  Print the interphase transition (H) matrices, the initial
             vector (I) and final vector (F).

     printp  Print the state transition (P) matrices.

     reducelc
             Reduce the (current) level-1 capability function.

     stats   Print statistics on current memory usage. The format is two
             rows of numbers. The top row shows the number of blocks of
             increasing sizes which the memory allocator has allocated and
             has since freed; this memory can be reused. The (n+2)th column
             denotes blocks of size (2^n)-4 bytes. The bottom row shows the
             number of blocks which are being used.

     Commands should be typed in lower case. When inputting a single line of
     an array product, the following syntax must be employed:

          <number> | <set> | <group> ... '!' <comment>

     where:

     <number> is an attribute value.

     <set> is a collection of attributes written with set brackets, e.g.,
     {0,3}. A set means that any trajectory in that array product must
     contain one of the values in every set.

     <group> is a collection of attributes written with parentheses, e.g.,
     (0,3).
     A group means that any trajectory in that array product must contain
     one of the values in at least one group.

SEE ALSO
     menmet(1), metaphor(1), meta_continuous(1)

AUTHOR
     D. G. Furchtgott

APPENDIX C

Unix manual entry for meta_continuous

META_CONTINUOUS ( 1 )     UNIX Programmer's Manual      META_CONTINUOUS ( 1 )

NAME
     meta_continuous - calculate performability distributions

SYNOPSIS
     meta_continuous [ option ] ...

DESCRIPTION
     meta_continuous computes performability distributions using the theory
     developed in the author's thesis. It prints the values on standard
     output and optionally produces a file suitable for the plotting
     programs gplp(1) and bp(1). menmet(1) is a menu-driven interface to
     meta_continuous.

     There are many options; the following are recognized, each as a
     separate argument followed by at least one numerical or string
     argument.

     -A   Solve the air conditioner example. The following options are then
          recognized:

          -c   The processor temperature contribution at steady state with
               no air conditioning. Defaults to 0.

          -C   The air conditioner failure rate. Defaults to 0.

          -d   The computation rate for each processor. Defaults to 0.

          -p   The final processor failure rate. Defaults to 0.

          -P   The initial processor failure rate. Defaults to 0.

          -R   The initial ambient room temperature. Defaults to 0.

          -S   The first state in which the air conditioner does not work.
               Defaults to 0.

          -T   The final ambient room temperature. Defaults to 0.

          -u   The rate at which the temperature rises in the room when the
               air conditioner fails. Defaults to 0.

     -f   The name of the file in which the graphic image is to be placed.
          Defaults to graph.

     -F   The name of the file in which the graphic image of the input
          parameters is to be placed. Defaults to graph.par.

     -g   If followed by a 'y', a graphic image is produced. By default, no
          graph is produced.

     -i   The number of iterations to be performed. Should be used with the
          -I, -z, and -Z options. Defaults to 1. meta_continuous allows the
          user to specify a parameter to be varied.

     -I   The parameter to be varied with the -i, -z, and -Z options. This
          option should be followed by a single character specifying which
          parameter is to be varied.
          Legal characters are any of the options involving user specified
          data that involve a single data point (as opposed to a vector,
          such as with reward rates -r), e.g., n to vary the number of
          states and a to vary the arrival rates. If a graph is to be
          produced, the number of data points (-v) should not be varied.

     -l   The failure rates from state i to state i-1. The rates are listed
          in order from lambda[0] to lambda[n] after the -l. The first
          number should always be 0. The -n option must be specified before
          -l. These default to 0.

     -M   Solve the M/M/k+i/L example discussed in J. F. Meyer, 'Closed-form
          solutions of performability,' IEEE Trans. Computers, Vol. C-31,
          July 1982, pp. 648-657. The following options are then recognized:

          -a   The arrival rate for the queue. Defaults to 0.

          -b   The buffer element failure rate. Defaults to 0.

          -L   The length of the buffer. Defaults to 0.

          -m   The service rate of each processor. Defaults to 0.

          -P   The processor failure rate. Defaults to 0.

     -n   The number of non-trivial states (i.e., not counting state 0).
          Defaults to 1.

     -N   If followed by the character 'y', normalize the performance
          variable y so that it lies in the range 0 to 1.

     -q   Number of quadrature points for the integration. Should be one of
          2, 4, 8, or 16. Defaults to 4.

     -r   The reward rates of each state. The rates are listed in order from
          rate[1] to rate[n] after the -r. The -n option must be specified
          before -r. Defaults to 0.

     -e   The number of intervals into which the density will be divided for
          the purposes of the quadrature. By default, a single interval is
          used.

     -t   The length of the utilization interval. Defaults to 0.

     -v   The number of data points to be printed. Defaults to 10.

     -y   The minimum value of y which will be considered. All calculations
          begin at this value.

     -Y   The maximum value of y which will be considered. All calculations
          end at this value.

     -z   The values to be iterated upon, if they are integers. The -i
          option must be specified before -z.

     -Z   The values to be iterated upon, if they are reals. The -i option
          must be specified before -Z.

SEE ALSO
     bp(1), gplp(1), menmet(1), metaphor(1), meta_discrete(1)

AUTHOR
     D. G. Furchtgott

MENMET ( 1 )              UNIX Programmer's Manual               MENMET ( 1 )

NAME
     menmet - menu based metaphor (continuous performance variable)

SYNOPSIS
     menmet [ project ] [ option ] ...

DESCRIPTION
     A modeling and evaluation aid for performability, metaphor(1), has two
     main components: a tool for discrete performance variable models (see
     meta_discrete(1)) and a tool for continuous performance variable models
     (see meta_continuous(1)). meta_continuous is a program with a myriad of
     options which can be specified on the run line. Indeed, meta_continuous
     is not designed to be run directly by a casual user. menmet is a
     preprocessor for generating all the commands and options necessary to
     run meta_continuous. menmet is based on Abe Schenker's menu(1) system.

     The menus are grouped logically according to function. A group of
     settings can be named and saved as a 'project.' Thus, if you are
     working with more than one model, you can preserve the settings
     associated with each model. To recall a particular project, include the
     project name on the menmet command line; e.g., menmet air_conditioner
     will start up menmet with all the settings saved under the project
     'air_conditioner.'

     A handy convention to know in menmet is that commands which bring up
     another menu are bracketed. The best way to become familiar with all
     the commands available in menmet is to run it and explore each of the
     menus.

SEE ALSO
     menu(1), metaphor(1), meta_discrete(1), meta_continuous(1)

AUTHOR
     D. G. Furchtgott

APPENDIX D

Calling structure of meta_discrete

This appendix contains the calling structure for meta_discrete, i.e., the portion of METAPHOR that deals with discrete performance variables. The calling structure provides a good overview of the program's flow structure. Most of the major functions that concern construction of the hierarchy and error checking are discussed in Section 4.3.3 [Algorithms]. Many other functions dealing with the calculation of probabilities are discussed by Furchtgott. In the analysis below, external functions (e.g., system functions) and macros are denoted by "[ext]". Recursive functions are delimited by "*". Functions which appear earlier in the analysis are referred to by line number.

1 main
2 fprintf
3 input
4 printquad [ext]
5 fscanf [ext]
6 commandexit
7 fprintf
8 exit [ext]
9 strlen [ext]
10 strcmp [ext]
11 strcpy [ext]
12 print [ext]
13 getc [ext]
14 command
15 commandhelp
16 fprintf
17 commandexit ... [see line 6]
18 commanddata
19 fprintf
20 commandalter
21 fprintf
22 commandcalc
23 fprintf
24 commandecho
25 commandcom
26 printquad [ext]
27 getc [ext]
28 commandexit ... [see line 6]
29 fprintf
30 commandbrief
31 commandeval
32 getstates
33 sprintf [ext]
34 print [ext]
35 strcpy [ext]
36 strcmp [ext]

37 * input
38 atoi [ext]
39 getnumstates
40 print [ext]
41 sprintf [ext]
42 strcpy [ext]
43 strcmp [ext]
44 * input
45 atoi [ext]
46 fprintf
47 alloce
48 calloc
49 sizeof [ext]
50 fprintf
51 exit [ext]
52 gete
53 calloc
54 sizeof [ext]
55 fprintf
56 exit [ext]
57 sprintf [ext]
58 print [ext]
59 strcpy [ext]
60 strcmp [ext]
61 * input
62 EVALUE [ext]
63 atoi [ext]
64 free
65 ASSERT [ext]
66 getpmatrices
67 allocp
68 calloc
69 sizeof [ext]
70 fprintf
71 exit [ext]
72 print [ext]
73 sprintf [ext]
74 generatepmatrix
75 strcpy [ext]
76 strcmp [ext]
77 print [ext]
78 * input
79 pgiven
80 print [ext]
81 strcpy [ext]
82 strcmp [ext]
83 sprintf [ext]
84 * input
85 checkprob
86 atof
87 fprintf
88 checkone
89 atof
90 fprintf
91 PVALUE [ext]
92 atof
93 dedfail
94 log [ext]
95 fprintf
96 print [ext]
97 strcpy [ext]
98 strcmp [ext]
99 * input
100 atof
101 valueinfo
102 PVALUE [ext]
103 exp [ext]
104 pow [ext]
105 nfail
106 print [ext]
107 strcpy [ext]
108 strcmp [ext]
109 * input
110 atof
111 atoi [ext]

112 fprintf
113 choose
114 PVALUE [ext]
115 exp [ext]
116 pow [ext]
117 sift
118 print [ext]
119 strcpy [ext]
120 strcmp [ext]
121 * input
122 atof
123 atoi [ext]
124 fprintf
125 PVALUE [ext]
126 choose
127 exp [ext]
128 pow [ext]
129 pidentity
130 PVALUE [ext]
131 exponential
132 print [ext]
133 strcpy [ext]
134 strcmp [ext]
135 * input
136 atof
137 PVALUE [ext]
138 exp [ext]
139 dualdual
140 print [ext]
141 strcpy [ext]
142 strcmp [ext]
143 * input
144 atof
145 exp [ext]
146 PVALUE [ext]
147 fprintf
148 gethmatrices
149 alloch
150 calloc
151 sizeof [ext]
152 fprintf
153 exit [ext]
154 print [ext]
155 sprintf [ext]
156 generatehmatrix
157 strcpy [ext]
158 strcmp [ext]
159 print [ext]
160 * input
161 hgiven
162 print [ext]
163 strcpy [ext]
164 strcmp [ext]
165 sprintf [ext]
166 * input
167 checkprob ... [see line 85]
168 checkone ... [see line 88]
169 HVALUE [ext]
170 atof
171 hidentity
172 HVALUE [ext]
173 getivector
174 strcpy [ext]
175 strcmp [ext]
176 print [ext]
177 sprintf [ext]
178 * input
179 checkprob ... [see line 85]
180 checkone ... [see line 88]
181 atof
182 getbasicvarprob
183 allocb
184 calloc
185 sizeof [ext]
186 fprintf

187 exit [ext]
188 strcpy [ext]
189 strcmp [ext]
190 print [ext]
191 sprintf [ext]
192 * input
193 checkprob ... [see line 85]
194 checkone ... [see line 88]
195 BVALUE [ext]
196 atof
197 getperformability
198 allocg
199 calloc
200 sizeof [ext]
201 fprintf
202 exit [ext]
203 getacclevprob
204 calcarrayprob
205 PVALUE [ext]
206 GVALUE [ext]
207 HVALUE [ext]
208 BVALUE [ext]
209 bVALUE [ext]
210 LCBOUNDARY
211 expand
212 GVALUE [ext]
213 valueinfo
214 LCARRAYVALUES [ext]
215 EVALUE [ext]
216 expandf
217 valueinfo
218 LCARRAYVALUES [ext]
219 EVALUE [ext]
220 expandb
221 bVALUE [ext]
222 valueinfo
223 BASICARRAYVALUES [ext]
224 printperformability
225 print [ext]
226 sprintf [ext]
227 fprintf
228 free ... [see line 64]
229 commandbuild
230 getacclev
231 print [ext]
232 strcpy [ext]
233 strcmp [ext]
234 * input
235 atoi [ext]
236 sprintf [ext]
237 getname
238 sprintf [ext]
239 print [ext]
240 getc [ext]
241 commandexit ... [see line 6]
242 fprintf
243 strlen [ext]
244 print [ext]
245 getname ... [see line 237]
246 getlevel0
247 getattr0val
248 print [ext]
249 sprintf [ext]
250 strcpy [ext]
251 strcmp [ext]
252 * input
253 atoi [ext]
254 getname ... [see line 237]
255 strlen [ext]
256 getattr0
257 print [ext]
258 sprintf [ext]
259 strcpy [ext]
260 strcmp [ext]
261 * input

262 atoi [ext]
263 getphase0val
264 print [ext]
265 sprintf [ext]
266 strcpy [ext]
267 strcmp [ext]
268 ^ input
269 atoi [ext]
270 getname... [see line 237]
271 strlen [ext]
272 getbasicval
273 sprintf [ext]
274 print [ext]
275 getname... [see line 237]
276 strlen [ext]
277 getbasicvar
278 print [ext]
279 sprintf [ext]
280 strcpy [ext]
281 strcmp [ext]
282 ^ input
283 atoi [ext]
284 getlevel1
285 getattr1val
286 print [ext]
287 sprintf [ext]
288 strcpy [ext]
289 strcmp [ext]
290 input
291 atoi [ext]
292 getname... [see line 237]
293 strlen [ext]
294 getattr1
295 print [ext]
296 sprintf [ext]
297 strcpy [ext]
298 strcmp [ext]
299 ^ input
300 atoi [ext]
301 getphase1val
302 print [ext]
303 sprintf [ext]
304 strcpy [ext]
305 strcmp [ext]
306 ^ input
307 atoi [ext]
308 getname... [see line 237]
309 strlen [ext]
310 getlcarray
311 getlccount
312 calloc
313 sizeof [ext]
314 fprintf
315 exit [ext]
316 print [ext]
317 sprintf [ext]
318 strcpy [ext]
319 strcmp [ext]
320 ^ input
321 LCCOUNT [ext]
322 atoi [ext]
323 getlcarrays
324 calloc
325 sizeof [ext]
326 fprintf
327 exit [ext]
328 print [ext]
329 inyes
330 printquad [ext]
331 fscanf [ext]
332 commandexit... [see line 6]
333 print [ext]
334 getc [ext]
335 LCBOUNDARY
336 LCCOUNT [ext]

337 getonelcarrayset
338 alloclc
339 fprintf
340 exit [ext]
341 calloc
342 sizeof [ext]
343 print [ext]
344 sprintf [ext]
345 LCARRAY
346 strcpy [ext]
347 strcmp [ext]
348 charinput
349 getc [ext]
350 commandexit... [see line 6]
351 print [ext]
352 strcmp [ext]
353 ^ command
354 strcpy [ext]
355 LCARRAYATTR
356 fprintf
357 exit [ext]
358 parse0
359 parserr0
360 fprintf
361 sprintf [ext]
362 print [ext]
363 GENARRAYVALUES [ext]
364 isdigit [ext]
365 atol [ext]
366 parsebasic
367 parserrbasic
368 fprintf
369 sprintf [ext]
370 print [ext]
371 GENBASICARRAYVALUES [ext]
372 isdigit [ext]
373 atol [ext]
374 BASICARRAYATTR
375 expandarray
376 allocarray
377 fprintf
378 exit [ext]
379 calloc
380 sizeof [ext]
381 GENARRAYVALUES [ext]
382 GENBASICARRAYVALUES [ext]
383 free... [see line 64]
384 checklcarray
385 LCBOUNDARY
386 LCARRAYVALUES [ext]
387 BASICARRAYVALUES [ext]
388 fprintf
389 free... [see line 64]
390 checktotal
391 copyarray
392 allocarray... [see line 376]
393 GENARRAYVALUES [ext]
394 compresstest
395 squeeze
396 free... [see line 64]
397 fprintf
398 ALLONES [ext]
399 GENARRAYVALUES [ext]
400 print [ext]
401 freearray
402 free... [see line 64]
403 sprintf [ext]
404 fflush
405 inyes... [see line 329]
406 printf [ext]
407 printreducedarray
408 print [ext]
409 printquad [ext]
410 sprintf [ext]
411 print0array

412 valueinfo
413 sprintf [ext]
414 fprintf
415 exit [ext]
416 print [ext]
417 print1array
418 valueinfo
419 sprintf [ext]
420 fprintf
421 exit [ext]
422 print [ext]
423 complementarray
424 allocarray... [see line 376]
425 calloc
426 sizeof [ext]
427 fprintf
428 exit [ext]
429 complement
430 allocarray... [see line 376]
431 ALLONES [ext]
432 GENARRAYVALUES [ext]
433 GENBASICARRAYVALUES [ext]
434 print [ext]
435 GENKNOWBOUNDARY [ext]
436 GENARRAYVALUES [ext]
437 comptree
438 allocarray... [see line 376]
439 GENKNOWBOUNDARY [ext]
440 GENARRAYVALUES [ext]
441 comptree
442 free... [see line 64]
443 reducearray
444 compresstest
445 squeeze... [see line 395]
446 printreducedarray... [see line 407]
447 freearray... [see line 401]
448 free... [see line 64]
449 reducelcarray
450 print [ext]
451 LCBOUNDARY
452 compresstest
453 squeeze... [see line 395]
454 inyes... [see line 329]
455 printlcarray
456 print [ext]
457 printquad [ext]
458 sprintf [ext]
459 LCBOUNDARY
460 print0array... [see line 411]
461 LCARRAY
462 printbasicarray
463 valueinfo
464 sprintf [ext]
465 fprintf
466 exit [ext]
467 print [ext]
468 getkarray
469 getkcount
470 calloc
471 sizeof [ext]
472 fprintf
473 exit [ext]
474 print [ext]
475 sprintf [ext]
476 strcpy [ext]
477 strcmp [ext]
478 input
479 KCOUNT [ext]
480 atoi [ext]
481 getkarrays
482 calloc
483 sizeof [ext]
484 fprintf
485 exit [ext]
486 print [ext]

487 inyes... [see line 329]
488 sprintf [ext]
489 KCOUNT [ext]
490 KBOUNDARY
491 getonekarrayset
492 allock
493 fprintf
494 exit [ext]
495 calloc
496 sizeof [ext]
497 print [ext]
498 sprintf [ext]
499 strcpy [ext]
500 strcmp [ext]
501 charinput... [see line 348]
502 parse1
503 parserr1
504 fprintf
505 sprintf [ext]
506 print [ext]
507 GENARRAYVALUES [ext]
508 isdigit [ext]
509 atol [ext]
510 expandarray... [see line 375]
511 free... [see line 64]
512 checkkarray
513 KBOUNDARY
514 KARRAYVALUES [ext]
515 fprintf
516 free... [see line 64]
517 checktotal... [see line 390]
518 setbasicvar
519 sprintf [ext]
520 strlen [ext]
521 fprintf
522 reducekarray
523 print [ext]
524 KBOUNDARY
525 compresstest
526 squeeze... [see line 395]
527 inyes... [see line 329]
528 printkarray
529 print [ext]
530 printquad [ext]
531 sprintf [ext]
532 KBOUNDARY
533 print1array... [see line 417]
534 KARRAY
535 intarrays
536 allocarray... [see line 376]
537 calloc
538 sizeof [ext]
539 fprintf
540 exit [ext]
541 GENARRAYVALUES [ext]
542 print [ext]
543 GENLCBOUNDARY
544 LCBOUNDARY
545 GENBASICARRAYVALUES [ext]
546 BASICARRAYVALUES [ext]
547 buildtree
548 allocarray... [see line 376]
549 KBOUNDARY
550 fprintf
551 sprintf [ext]
552 mstats
553 valueinfo
554 LCARRAYVALUES [ext]
555 GENARRAYVALUES [ext]
556 GENBASICARRAYVALUES [ext]
557 copyarray... [see line 391]
558 commandexit... [see line 6]
559 reducearray... [see line 443]
560 quickreduce
561 compresstest

562 squeeze... [see line 395]
563 buildtree
564 free... [see line 64]
565 free... [see line 64]
566 setnext
567 freearray... [see line 401]
568 free... [see line 64]
569 strcpy [ext]
570 strncpy [ext]
571 reducelcarray... [see line 449]
572 inyes... [see line 329]
573 printlcarray... [see line 455]
574 commandnext
575 sprintf [ext]
576 print [ext]
577 getname... [see line 237]
578 getlevel1... [see line 284]
579 getkarray... [see line 468]
580 intarrays... [see line 535]
581 commandsource
582 print [ext]
583 charinput... [see line 348]
584 fclose [ext]
585 fopen [ext]
586 fprintf
587 commandsink
588 print [ext]
589 charinput... [see line 348]
590 fclose [ext]
591 fopen [ext]
592 fprintf
593 commandprinth
594 fprintf
595 HVALUE [ext]
596 printkarray... [see line 528]
597 printlcarray... [see line 455]
598 commandprintm
599 fprintf
600 GVALUE [ext]
601 commandprintp
602 fprintf
603 PVALUE [ext]
604 reducelcarray... [see line 449]
605 mstats
606 commandcheckpoint
607 fork [ext]
608 fprintf
609 sigignore [ext]
610 wait [ext]
611 fclose [ext]
612 print [ext]
613 sigset [ext]
614 commandbotch
615 fprintf
616 realloc
617 malloc
618 morecore
619 sbrk [ext]
620 vlimit [ext]
621 write [ext]
622 copymem
623 asm [ext]
624 free... [see line 64]

APPENDIX E
Calling structure of meta_continuous

This appendix contains the calling structure for meta_continuous, i.e., the portion of METAPHOR that deals with continuous performance variables. The calling structure provides a good overview of the program's flow structure. In the analysis below, external functions (e.g., system functions) and macros are denoted by "[ext]". Recursive calls are marked by "^". Functions which appear earlier in the analysis are referred to by line number.

1 main
2 init
3 sprintf
4 input
5 fetch [ext]
6 fprintf [ext]
7 exit [ext]
8 iterate
9 printf [ext]
10 checkit
11 floor [ext]
12 printf [ext]
13 exit [ext]
14 queuesetup
15 ipow
16 fprintf [ext]
17 exit [ext]
18 fact
19 fprintf [ext]
20 exit [ext]
21 acsetup
22 intnorm
23 Loww
24 Uppp
25 Funevv
26 exp [ext]
27 printf [ext]
28 pow [ext]
29 point
30 printsetup
31 printf [ext]
32 calc
33 perf
34 {C}
35 multint
36 Low
37 adj [ext]
38 printf [ext]
39 sum [ext]
40 Upp
41 adj [ext]
42 printf [ext]
43 sum [ext]
44 log [ext]

45 Funev
46 intnorm... [see line 22]
47 adj [ext]
48 exp [ext]
49 fprintf [ext]
50 printf [ext]
51 pow [ext]
52 printit
53 printf [ext]
54 tfflush
55 fflush [ext]
56 printf [ext]
57 exit [ext]
58 info
59 printf [ext]
60 printit... [see line 52]
61 plotit
62 plotinit
63 plots [ext]
64 name [ext]
65 plot [ext]
66 scale [ext]
67 plot [ext]
68 axisv [ext]
69 axis [ext]
70 dline [ext]
71 plotinput
72 plotinit... [see line 62]
73 plot [ext]
74 doprint
75 symbol
76 sprintf
77 symbol
78 dline [ext]
79 plot [ext]

APPENDIX F
METAPHOR session for the simple reliability network example

This appendix contains a METAPHOR session for evaluating the series-parallel reliability network example of Section 4.5.1 [Simple Reliability Network Example].

Michigan EvaluaTion Aid for PerpHORmability
version 3.0
type help for assistance
#: build
number of accomplishment levels?
#: 2
getting the name of each accomplishment level
What is the name of accomplishment level 0?
SUCCESS
What is the name of accomplishment level 1?
FAIL
getting the names of the top two levels:
What is the name of model hierarchy level-0 (the highest level)?
THE MISSION LEVEL
What is the name of model hierarchy level-1 (the second highest level)?
THE COMPONENT LEVEL
number of level-0 (THE MISSION LEVEL) attributes?
#: 1
getting the name of each level-0 (THE MISSION LEVEL) attribute:
What is the name of attribute 0?
RELIABITY GRAPH CONNECTIVITY
number of values per level-0 (THE MISSION LEVEL) attribute?
#: 2
number of level-0 (THE MISSION LEVEL) phases?
#: 1
getting the name of each level-0 (THE MISSION LEVEL) phase:
What is the name of phase 0?
LEVEL 0 MISSION
number of level 1 (THE COMPONENT LEVEL) attributes?
#: 4
getting the name of each level 1 (THE COMPONENT LEVEL) attribute:
What is the name of attribute 0?
COMPONENT A
What is the name of attribute 1?
COMPONENT B
What is the name of attribute 2?
COMPONENT C
What is the name of attribute 3?
COMPONENT D

number of values per level 1 (THE COMPONENT LEVEL) attribute?
#: 2222
number of level 1 (THE COMPONENT LEVEL) phases?
#: 1
getting the name of each level 1 (THE COMPONENT LEVEL) phase:
What is the name of phase 0?
LEVEL 1 MISSION
for each accomplishment level enter the number of array products associated with that accomplishment level.
accomplishment level 0 (SUCCESS)
#: 1
accomplishment level 1 (FAIL)
#: 1
partial capability function array product specification
will you want consistency checked?
#: yes
will you want totalness checked?
#: yes
will you want reduction?
#: yes
enter each array product corresponding to each accomplishment level
accomplishment level 0 (SUCCESS): array product 0
the level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY)
#THIS IS THE TRAJECTORY SET FOR ACC LEV 0
the level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY)
| 0 |
accomplishment level 1 (FAIL): array product 0
the level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY)
#THIS IS THE TRAJECTORY SET FOR ACC LEV 1
the level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY)
the inverse capability function is already reduced
for each value that each level-0 (THE MISSION LEVEL) attribute and phase can assume, enter the number of array products to be input for that value, phase, and attribute. if an attribute and phase is to be basic, enter BASIC.
level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY) phase 0 (LEVEL 0 MISSION)
value = 0:
#: 5
value = 1:
#: 2
interlevel translation array product specification
will you want consistency checked?
#: yes
will you want totalness checked?
#: yes
will you want reduction?
#: yes
enter each array product corresponding to each combination of level-0 (THE MISSION LEVEL) attribute, phase, and value

level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY) phase 0 (LEVEL 0 MISSION) value = 0 array product 0
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
#CONNECTIVITY = 0 set 1
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
10 1
the level-1 (THE COMPONENT LEVEL) attribute 1 (COMPONENT B)
ol
the level-1 (THE COMPONENT LEVEL) attribute 2 (COMPONENT C)
'1'
the level-1 (THE COMPONENT LEVEL) attribute 3 (COMPONENT D)
'll
level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY) phase 0 (LEVEL 0 MISSION) value = 0 array product 1
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
#CONNECTIVITY = 0 set 2
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
the level-1 (THE COMPONENT LEVEL) attribute 1 (COMPONENT B)
the level-1 (THE COMPONENT LEVEL) attribute 2 (COMPONENT C)
10
the level-1 (THE COMPONENT LEVEL) attribute 3 (COMPONENT D)
I d I
level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY) phase 0 (LEVEL 0 MISSION) value = 0 array product 2
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
#CONNECTIVITY = 0 set 3
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
1 1
the level-1 (THE COMPONENT LEVEL) attribute 1 (COMPONENT B)
the level-1 (THE COMPONENT LEVEL) attribute 2 (COMPONENT C)
I I
the level-1 (THE COMPONENT LEVEL) attribute 3 (COMPONENT D)
0 1
level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY) phase 0 (LEVEL 0 MISSION) value = 0 array product 3
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
#CONNECTIVITY = 0 set 4
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
the level-1 (THE COMPONENT LEVEL) attribute 1 (COMPONENT B)
I 1 I
the level-1 (THE COMPONENT LEVEL) attribute 2 (COMPONENT C)
the level-1 (THE COMPONENT LEVEL) attribute 3 (COMPONENT D)
level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY) phase 0 (LEVEL 0 MISSION) value = 0 array product 4
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
#CONNECTIVITY = 0 set 5
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
I1o
the level-1 (THE COMPONENT LEVEL) attribute 1 (COMPONENT B)
I I1

the level-1 (THE COMPONENT LEVEL) attribute 2 (COMPONENT C)
the level-1 (THE COMPONENT LEVEL) attribute 3 (COMPONENT D)
o0
level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY) phase 0 (LEVEL 0 MISSION) value = 1 array product 0
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
#CONNECTIVITY = 1 set 1
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
I I
the level-1 (THE COMPONENT LEVEL) attribute 1 (COMPONENT B)
the level-1 (THE COMPONENT LEVEL) attribute 2 (COMPONENT C)
I I
the level-1 (THE COMPONENT LEVEL) attribute 3 (COMPONENT D)
I I1
level-0 (THE MISSION LEVEL) attribute 0 (RELIABITY GRAPH CONNECTIVITY) phase 0 (LEVEL 0 MISSION) value = 1 array product 1
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
#CONNECTIVITY = 1 set 2
the level-1 (THE COMPONENT LEVEL) attribute 0 (COMPONENT A)
the level-1 (THE COMPONENT LEVEL) attribute 1 (COMPONENT B)
the level-1 (THE COMPONENT LEVEL) attribute 2 (COMPONENT C)
I I
the level-1 (THE COMPONENT LEVEL) attribute 3 (COMPONENT D)
II
the inverse interlevel translation is already reduced
do you want a list of the array products for the capability function?
#: no
recommended commands are checkpoint, next, or
#: eval
getting the stochastic model information for attribute 0 (COMPONENT A)?
number of stochastic model states for attribute 0 (COMPONENT A)?
number of stochastic model states for attribute 0 (COMPONENT A)?
#: 2
getting the number of stochastic model states corresponding to value of bottom-level attribute 0 (COMPONENT A):
value 0
#: 1
value 1
#: 1
for bottom attribute 0 (COMPONENT A), getting the states corresponding to each value:
value 0 of attribute 0 (COMPONENT A)
#: 0
value 1 of attribute 0 (COMPONENT A)
#: 1
for bottom level attribute 0 (COMPONENT A), specify the p matrices for each phase, 1 phase at a time
phase 0 (LEVEL 1 MISSION):
what type of p matrix?
#: exponential
enter phase length
#: 10
enter component failure rate
#: .0005

for bottom level attribute 0 (COMPONENT A), specify the h matrices for each phase, 1 phase at a time
for the stochastic model corresponding to bottom level attribute 0 (COMPONENT A) enter the inital probabilities of each state, beginning with the lowest numbered state:
#: 0.0 1.0
getting the stochastic model information for attribute 1 (COMPONENT B)?
number of stochastic model states for attribute 1 (COMPONENT B)?
number of stochastic model states for attribute 1 (COMPONENT B)?
#: 2
getting the number of stochastic model states corresponding to value of bottom-level attribute 1 (COMPONENT B):
value 0
#: 1
value 1
#: 1
for bottom attribute 1 (COMPONENT B), getting the states corresponding to each value:
value 0 of attribute 1 (COMPONENT B)
#: 0
value 1 of attribute 1 (COMPONENT B)
#: 1
for bottom level attribute 1 (COMPONENT B), specify the p matrices for each phase, 1 phase at a time
phase 0 (LEVEL 1 MISSION):
what type of p matrix?
#: exponential
enter phase length
#: 10
enter component failure rate
#: .0004
for bottom level attribute 1 (COMPONENT B), specify the h matrices for each phase, 1 phase at a time
for the stochastic model corresponding to bottom level attribute 1 (COMPONENT B) enter the inital probabilities of each state, beginning with the lowest numbered state:
#: 0.0 1.0
getting the stochastic model information for attribute 2 (COMPONENT C)?
number of stochastic model states for attribute 2 (COMPONENT C)?
number of stochastic model states for attribute 2 (COMPONENT C)?
#: 2
getting the number of stochastic model states corresponding to value of bottom-level attribute 2 (COMPONENT C):
value 0
#: 1
value 1
#: 1
for bottom attribute 2 (COMPONENT C), getting the states corresponding to each value:
value 0 of attribute 2 (COMPONENT C)
#: 0
value 1 of attribute 2 (COMPONENT C)
#: 1
for bottom level attribute 2 (COMPONENT C), specify the p matrices for each phase, 1 phase at a time
phase 0 (LEVEL 1 MISSION):

what type of p matrix?
#: exponential
enter phase length
#: 10
enter component failure rate
#: .001
for bottom level attribute 2 (COMPONENT C), specify the h matrices for each phase, 1 phase at a time
for the stochastic model corresponding to bottom level attribute 2 (COMPONENT C) enter the inital probabilities of each state, beginning with the lowest numbered state:
#: 0.0 1.0
getting the stochastic model information for attribute 3 (COMPONENT D)?
number of stochastic model states for attribute 3 (COMPONENT D)?
number of stochastic model states for attribute 3 (COMPONENT D)?
#: 2
getting the number of stochastic model states corresponding to value of bottom-level attribute 3 (COMPONENT D):
value 0
#: 1
value 1
#: 1
for bottom attribute 3 (COMPONENT D), getting the states corresponding to each value:
value 0 of attribute 3 (COMPONENT D)
#: 0
value 1 of attribute 3 (COMPONENT D)
#: 1
for bottom level attribute 3 (COMPONENT D), specify the p matrices for each phase, 1 phase at a time
phase 0 (LEVEL 1 MISSION):
what type of p matrix?
#: exponential
enter phase length
#: 10
enter component failure rate
#: .001
for bottom level attribute 3 (COMPONENT D), specify the h matrices for each phase, 1 phase at a time
for the stochastic model corresponding to bottom level attribute 3 (COMPONENT D) enter the inital probabilities of each state, beginning with the lowest numbered state:
#: 0.0 1.0
performability for this mission:
accomplishment level 0 (SUCCESS): 9.99999e-01
accomplishment level 1 (FAIL): 8.840545e-07
#: exit
bye.

APPENDIX G
METAPHOR session for the simple air transport example

This appendix contains a METAPHOR session for evaluating the simple air transport mission example of Section 4.5.2 [Simple Air Transport Mission Example].

Michigan EvaluaTion Aid for PerpHORmability
version 3.0
type help for assistance
#: build
number of accomplishment levels?
#: 5
getting the name of each accomplishment level
What is the name of accomplishment level 0?
ALL GOOD (000)
What is the name of accomplishment level 1?
BAD FUEL EFFICIENCY (001)
What is the name of accomplishment level 2?
DIVERSION (010)
What is the name of accomplishment level 3?
BAD FUEL EFFICIENCY AND DIVERSION (011)
What is the name of accomplishment level 4?
1 * * (CRASH)
getting the names of the top two levels:
What is the name of model hierarchy level-0 (the highest level)?
THE MISSION LEVEL
What is the name of model hierarchy level-1 (the second highest level)?
THE AIRCRAFT FUNCTIONAL TASK LEVEL
number of level-0 (THE MISSION LEVEL) attributes?
#: 3
getting the name of each level-0 (THE MISSION LEVEL) attribute:
What is the name of attribute 0?
FUEL CONSUMPTION
What is the name of attribute 1?
DIVERSION
What is the name of attribute 2?
SAFETY
number of values per level-0 (THE MISSION LEVEL) attribute?
#: 222
number of level-0 (THE MISSION LEVEL) phases?
#: 1
getting the name of each level-0 (THE MISSION LEVEL) phase:
What is the name of phase 0?
MISSION
number of level 1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attributes?
#: 2
getting the name of each level 1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute:

What is the name of attribute 0?
ACTIVE CONTROL
What is the name of attribute 1?
WEATHER
number of values per level 1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute?
#: 42
number of level 1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) phases?
#: 2
getting the name of each level 1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) phase:
What is the name of phase 0?
TAKEOFF AND CRUISE
What is the name of phase 1?
LANDING
for each accomplishment level enter the number of array products associated with that accomplishment level.
accomplishment level 0 (ALL GOOD (000))
#: 1
accomplishment level 1 (BAD FUEL EFFICIENCY (001))
#: 1
accomplishment level 2 (DIVERSION (010))
#: 1
accomplishment level 3 (BAD FUEL EFFICIENCY AND DIVERSION (011))
#: 1
accomplishment level 4 (1 * * (CRASH))
#: 1
partial capability function array product specification
will you want consistency checked?
#: YES
will you want totalness checked?
#: YES
will you want reduction?
#: YES
enter each array product corresponding to each accomplishment level
accomplishment level 0 (ALL GOOD (000)): array product 0
the level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION)
#THIS IS THE TRAJECTORY SET FOR ACC LEV 0
the level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION)
| 0 |
the level-0 (THE MISSION LEVEL) attribute 1 (DIVERSION)
| 0 |
the level-0 (THE MISSION LEVEL) attribute 2 (SAFETY)
| 0 |
accomplishment level 1 (BAD FUEL EFFICIENCY (001)): array product 0
the level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION)
#ACC LEV 1
the level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION)
| 1 |
the level-0 (THE MISSION LEVEL) attribute 1 (DIVERSION)
| 0 |
the level-0 (THE MISSION LEVEL) attribute 2 (SAFETY)
| 0 |
accomplishment level 2 (DIVERSION (010)): array product 0
the level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION)
#ACC LEV 2
the level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION)
| 0 |
the level-0 (THE MISSION LEVEL) attribute 1 (DIVERSION)

| 1 |
the level-0 (THE MISSION LEVEL) attribute 2 (SAFETY)
| 0 |
accomplishment level 3 (BAD FUEL EFFICIENCY AND DIVERSION (011)): array product 0
the level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION)
#ACC LEV 3
the level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION)
| 1 |
the level-0 (THE MISSION LEVEL) attribute 1 (DIVERSION)
| 1 |
the level-0 (THE MISSION LEVEL) attribute 2 (SAFETY)
| 0 |
accomplishment level 4 (1 * * (CRASH)): array product 0
the level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION)
#ACC LEV 4
the level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION)
| * |
the level-0 (THE MISSION LEVEL) attribute 1 (DIVERSION)
| * |
the level-0 (THE MISSION LEVEL) attribute 2 (SAFETY)
the inverse capability function is already reduced
for each value that each level-0 (THE MISSION LEVEL) attribute and phase can assume, enter the number of array products to be input for that value, phase, and attribute. if an attribute and phase is to be basic, enter BASIC.
level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION) phase 0 (MISSION)
value = 0:
#: 1
value = 1:
#: 2
level-0 (THE MISSION LEVEL) attribute 1 (DIVERSION) phase 0 (MISSION)
value = 0:
#: 2
value = 1:
#: 1
level-0 (THE MISSION LEVEL) attribute 2 (SAFETY) phase 0 (MISSION)
value = 0:
#: 4
value = 1:
#: 3
interlevel translation array product specification
will you want consistency checked?
#: YES
will you want totalness checked?
#: YES
will you want reduction?
#: YES
enter each array product corresponding to each combination of level-0 (THE MISSION LEVEL) attribute, phase, and value
level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION) phase 0 (MISSION) value = 0 array product 0
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
#FUEL CONSUMPTION=0
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| {0,1} {0,1} |

the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| * * |
level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION) phase 0 (MISSION) value = 1 array product 0
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
#FUEL CONSUMPTION=1
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| {2,3} |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| * |
level-0 (THE MISSION LEVEL) attribute 0 (FUEL CONSUMPTION) phase 0 (MISSION) value = 1 array product 1
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| {0,1} {2,3} |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| * |
level-0 (THE MISSION LEVEL) attribute 1 (DIVERSION) phase 0 (MISSION) value = 0 array product 0
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
#DIVERSION=0
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| 0 * |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
level-0 (THE MISSION LEVEL) attribute 1 (DIVERSION) phase 0 (MISSION) value = 0 array product 1
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| {1,2,3} |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| 0 |
level-0 (THE MISSION LEVEL) attribute 1 (DIVERSION) phase 0 (MISSION) value = 1 array product 0
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
#DIVERSION=1
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| {1,2,3} * |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| 1 |
level-0 (THE MISSION LEVEL) attribute 2 (SAFETY) phase 0 (MISSION) value = 0 array product 0
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
#SAFETY=0
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| 1 |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| 1 1 |
level-0 (THE MISSION LEVEL) attribute 2 (SAFETY) phase 0 (MISSION) value = 0 array product 1

the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| 0 {0,2} |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| * * |
level-0 (THE MISSION LEVEL) attribute 2 (SAFETY) phase 0 (MISSION) value = 0 array product 2
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| 0 {1,3} |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| 0 * |
level-0 (THE MISSION LEVEL) attribute 2 (SAFETY) phase 0 (MISSION) value = 0 array product 3
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| 2 {0,1} |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| * |
level-0 (THE MISSION LEVEL) attribute 2 (SAFETY) phase 0 (MISSION) value = 1 array product 0
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
#SAFETY=1
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| 3 * |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| * |
level-0 (THE MISSION LEVEL) attribute 2 (SAFETY) phase 0 (MISSION) value = 1 array product 1
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| 2 {2,3} |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| * * |
level-0 (THE MISSION LEVEL) attribute 2 (SAFETY) phase 0 (MISSION) value = 1 array product 2
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)
| 0 {1,3} |
the level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER)
| 1 * |
the inverse interlevel translation is already reduced
do you want a list of the array products for the capability function?
#: NO
recommended commands are checkpoint, next, or
#: next
getting the name of the next level (level 2):
What is the name of model hierarchy level-2?
THE COMPUTATIONAL TASK LEVEL
number of level 2 (THE COMPUTATIONAL TASK LEVEL) attributes?
#: 2
getting the name of each level 2 (THE COMPUTATIONAL TASK LEVEL) attribute:
What is the name of attribute 0?
FUEL REGULATION COMPUTATIONS
What is the name of attribute 1?

AUTOLAND COMPUTATIONS
number of values per level 2 (THE COMPUTATIONAL TASK LEVEL) attribute?
#: 22
number of level 2 (THE COMPUTATIONAL TASK LEVEL) phases?
#: 3
getting the name of each level 2 (THE COMPUTATIONAL TASK LEVEL) phase:
What is the name of phase 0?
CRUISE I
What is the name of phase 1?
CRUISE II
What is the name of phase 2?
APPROACH AND LANDING
for each value that each level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute and phase can assume, enter the number of array products to be input for that value, phase, and attribute. if an attribute and phase is to be basic, enter BASIC.
level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL) phase 0 (TAKEOFF AND CRUISE)
value = 0:
#: 1
value = 1:
#: 1
value = 2:
#: 2
value = 3:
#: 1
level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL) phase 1 (LANDING)
value = 0:
#: 1
value = 1:
#: 1
value = 2:
#: 1
value = 3:
#: 1
level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER) phase 0 (TAKEOFF AND CRUISE)
value = 0:
#:
level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 1 (WEATHER) phase 1 (LANDING)
value = 0:
interlevel translation array product specification
will you want consistency checked?
#: YES
will you want totalness checked?
#: YES
will you want reduction?
#: YES
enter each array product corresponding to each combination of level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute, phase, and value
level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL) phase 0 (TAKEOFF AND CRUISE) value = 0 array product 0
the level-2 (THE COMPUTATIONAL TASK LEVEL) attribute 0 (FUEL REGULATION COMPUTATIONS)
# Y(11)=0
the level-2 (THE COMPUTATIONAL TASK LEVEL) attribute 0 (FUEL REGULATION COMPUTATIONS)
Ioo.l
the level-2 (THE COMPUTATIONAL TASK LEVEL) attribute 1 (AUTOLAND COMPUTATIONS)
level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL)

293 phase 0 (TAKEOFF AND CRUISE) value = 1 array product 0 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) # Y(11)=l the evel-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) 1oo* the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) level-i (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL) phase 0 (TAKEOFF AND CRUISE) value = 2 array product 0 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) # Y(11)=2, TRAJ 1 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) 1 0o * the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute I (AUTOLAND COMPUTATIONS) lo *. I level-l (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL) phase 0 (TAKEOFF AND CRUISE) value = 2 array product 1 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) #Y(11)=2, TRAJ 2 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) 11.I the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL) phase 0 (TAKEOFF AND CRUISE) value = 3 array product 0 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) # Y(11)=3 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) II 1) the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) level-I (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL) phase I (LANDING) value == 0 array product 0 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) # Y(12)=4 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) I. 
01 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) I * * level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL) phase 1 (LANDING) value = 1 array product 0 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) #Y(12)=5 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) [*.11 level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL) phase 1 (LANDING) value = 2 array product 0

294 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) # Y(12)=6 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) l * I* the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) 1 * 0 level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL) attribute 0 (ACTIVE CONTROL) phase 1 (LANDING) value = 3 array product 0 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) # Y 12)=7 the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) the level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) I*'ll the inverse interlevel translation is already reduced do you want a list of the array products for the capability function? #: NO recommended commands are checkpoint, next, or #: next getting the name of the next level (level 3): What is the name of model hierarchy level-3? THE COMPUTER (BOTTOM) LEVELL number of level 3 (THE COMPUTER (BOTTOM) LEVELL) attributes? #: 1 getting the name of each level 3 (THE COMPUTER (BOTTOM) LEVELL) attribute: What is the name of attribute 0? HARDWARE STATE number of values per level 3 (THE COMPUTER (BOTTOM) LEVELL) attribute? #: 5 number of level 3 (THE COMPUTER (BOTTOM) LEVELL) phases? #: 3 getting the name of each level 3 (THE COMPUTER (BOTTOM) LEVELL) phase: What is the name of phase 0? CRUISE I What is the name of phase I? CRUISE II What is the name of phase 2? APPROACH AND LANDING for each value that each level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute and phase can assume, enter the number of array products to be input for that value, phase, and attribute. if an attribute and phase is to be basic, enter BASIC. 
level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase 0 (CRUISE I) value = 0: #: 1 value = 1: #: I level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase I (CRUISE II) value = 0: #: value = 1: #: 2 level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase 2 (APPROACH AND LANDING) value - 0: #: 1 value = 1: #: 2 level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) phase 0 (CRUISE I)

295 value = 0: #: level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) phase I (CRUISE 11) value = 0: #: 1 value = 1: #: 2 level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) phase 2 (APPROACH AND LANDING) value = 0: #: 1 value = 1: #: 2 interlevel translation array product specification will you want consistency checked? #: YES will you want totalness checkedt #: YES will you want reduction? #: YES enter each array product corresponding to each combination of level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute, phase, and value level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase 0 (CRUISE I) value = 0 array product 0 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-11 1 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) 1(2,3,4}) * level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase 0 (CRUISE I) value =- array product 0 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-11 2 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) (0,1} * I level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase I (CRUISE II) value = 0 array product 0 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-12 1 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) 1 {2,3,4} (2,3,4} * I level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase I (CRUISE II) value = 1 array product 0 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-12 2 TRAJI the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) I* {,1) * I level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase 1 (CRUISE II) value = 1 array product 1 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE)

296 #X-12 2 TRAJ2 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) 1 {0,1} {2,3,4}) t level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase 2 (APPROACH AND LANDING) value = 0 array product 0 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-13 1 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) I* {(4,3) (4,3} 1 level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase 2 (APPROACH AND LANDING) value - 1 array product 0 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-13 2 TRAJ1 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) * * (0,1,2} 1 level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 0 (FUEL REGULATION COMPUTATIONS) phase 2 (APPROACH AND LANDING) value - 1 array product 1 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-13 2 TRAJ2 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) 1. 
{0,1,2) {4,3) 1 level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) phase I (CRUISE II) value = 0 array product 0 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-22 1 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) {3,4) {3,4} *I level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) phase 1 (CRUISE II) value - 1 array product 0 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-22 2 TRAJI the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) 1 {3,4} {0,1,2} ~ I level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) phase I (CRUISE II) value = 1 array product I the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-22 2 TRAJ2 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) 1{0,1,2} * * I level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) phase 2 (APPROACH AND LANDING) value = 0 array product 0 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-23 1 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) I * {4,3,2) {4,3,2} I level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute I (AUTOLAND COMPUTATIONS)

297 phase 2 (APPROACH AND LANDING) value = 1 array product 0 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-23 2 TRAJ1 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) I' {2,3,4) {0,1) I level-2 (THE COMPUTATIONAL TASK LEVEL ) attribute 1 (AUTOLAND COMPUTATIONS) phase 2 (APPROACH AND LANDING) value = 1 array product 1 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) #X-23 2 TRAJ2 the level-3 (THE COMPUTER (BOTTOM) LEVELL) attribute 0 (HARDWARE STATE) 1* (0,1) I the inverse interlevel translation is already reduced do you want a list of the array products for the capability function? #: NO recommended commands are checkpoint, next, or #: eval getting the stochastic model information for attribute 0 (HARDWARE STATE)t number of stochastic model states for attribute 0 (HARDWARE STATE)? #: 5 getting the number of stochastic model states corresponding to value of bottom-level attribute 0 (HARDWARE STATE): value 0 #: 1 value I #; 1 value 2 #: 1 value 3 #: 1 value 4 #: 1 for bottom attribute 0 (HARDWARE STATE), getting the states corresponding to each value: value 0 of attribute 0 (HARDWARE STATE) #: 0 value 1 of attribute 0 (HARDWARE STATE) #: 1 value 2 of attribute 0 (HARDWARE STATE) #: 2 value 3 of attribute 0 (HARDWARE STATE) #: 3 value 4 of attribute 0 (HARDWARE STATE) #: 4 for bottom level attribute 0 (HARDWARE STATE), specify the p matrices for each phase, 1 phase at a time phase 0 (CRUISE I): what type of p matrix? #: nfail enter phase length #: 2.5 enter component failure rate #:.001 enter number of component groups #: 1 enter number of components for each group #: 4 phase 1 (CRUISE II): what type of p matrix? #: nfail

298 enter phase length #: 2.5 enter component failure rate #:.001 enter number of component groups #: 1 enter number of components for each group #: 4 phase 2 (APPROACH AND LANDING): what type of p matrix? #: nfail enter phase length #:.5 enter component failure rate #:.001 enter number of component groups #: 1 enter number of components for each group #: 4 for bottom level attribute 0 (HARDWARE STATE), specify the h matrices for each phase, 1 phase at a time phase 0 (CRUISE I): what type of h matrix? #: identity phase 1 (CRUISE II): what type of h matrix? #: identity for the stochastic model corresponding to bottom level attribute 0 (HARDWARE STATE) enter the initial probabilities of each state, beginning with the lowest numbered state: #: 1.0 0.0 0.0 0.0 0.0 enter the 2 probabilities for basic variable 0 level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL), attribute 1 (WEATHER), phase 0 (TAKEOFF AND CRUISE) #: 0.981 0.019 enter the 2 probabilities for basic variable 1 level-1 (THE AIRCRAFT FUNCTIONAL TASK LEVEL), attribute 1 (WEATHER), phase 1 (LANDING) #: 0.0 1.0 enter the 2 probabilities for basic variable 2 level-2 (THE COMPUTATIONAL TASK LEVEL ), attribute 1 (AUTOLAND COMPUTATIONS), phase 0 (CRUISE I) #: 0.0 1.0 performability for this mission: accomplishment level 0 (ALL GOOD (000)): 9.998208e-01 accomplishment level 1 (BAD FUEL EFFICIENCY (001)): 3.371873e-05 accomplishment level 2 (DIVERSION (010)): 0.000000e+00 accomplishment level 3 (BAD FUEL EFFICIENCY AND DIVERSION (011)): 1.449596e-04 accomplishment level 4 (1 * * (CRASH)): 5.093372e-07 #: exit bye.
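The "nfail" p-matrix requested in the session above is specified by a phase length, a per-component failure rate, and a component count, which corresponds to independent exponential failures of identical components. The following sketch illustrates the underlying mathematics; it is not METAPHOR's actual implementation, and the function name is an assumption:

```python
import math

def nfail_matrix(n, lam, t):
    """Phase transition matrix for n identical components with
    independent exponential lifetimes (failure rate lam, no repair).
    State i means "i components have failed"; entry P[i][j] is the
    probability of going from i failed to j failed over a phase of
    length t."""
    q = 1.0 - math.exp(-lam * t)          # per-component failure probability
    P = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(i, n + 1):         # failed components stay failed
            P[i][j] = (math.comb(n - i, j - i)
                       * q ** (j - i) * (1.0 - q) ** (n - j))
    return P

# Parameters entered above for phase 0: 4 components,
# rate 0.001 per hour, phase length 2.5 hours.
P = nfail_matrix(4, 0.001, 2.5)
```

Each row is a binomial distribution over the surviving components, so rows sum to one and the all-failed state is absorbing.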

299 APPENDIX H METAPHOR input data for the SIFT computer example This appendix contains input to METAPHOR for evaluating the SIFT computer example of Section 4.5.3 [SIFT Computer Example]. echo build 5 ALL GOOD (0000) ECONOMIC PENALTIES (1000) OPERATIONAL PENALTIES (*100) CHANGE IN MISSION PROFILE (**10) FATALITIES (***1) THE MISSION LEVEL THE AIRCRAFT LEVEL 4 ECONOMICS OPERATIONS MISSION PROFILE SAFETY 2222 1 MISSION 9 AIDS VOR/DME AIR DATA INERTIAL AUTOLAND ACTIVE FLUTTER CONTROL ENGINE CONTROL ATTITUDE CONTROL WEATHER 222222222 4 TAKEOFF AND CRUISE A CRUISE B CRUISE C LANDING 4 YES YES YES #THIS IS THE TRAJECTORY SET FOR ACC LEV 0 #ACC LEV 1 111

300 0 0 0 #ACC LEV 2 1 0 0 #ACC LEV 3 1 #ACC LEV 4 0 1 #ACC LEV 4 0 #ACC LEV 4 NO 1 0 1 3 YES YES YES #ACC LEV 4 0 0 1 1 NO 3 3 YES YES YES #ECONOMICS=0 0000 $ 00 0 * 00 0 * 0 $ 0.. $. #OPERATIONS=0 * 000 * 00 * 00 * 00

301 OPERATIONS- #MISSION PROFILE=O TRAJ 1 #MISSION PROFILE=O TRAJ 2 *. 0 001'.*.e *.e. * 0C. #MISSION PROFILE=0 TRAJ 1 0 O0 1 #MISSION PROFILE=0 TRAJ 2 000. #MISSION PROFILE=I TRAJ 2 *.. #MISSION PROFILE-1 TRAJ 1.. *. #MISSION PROFILE=1 TRAJ 2 (1) (1) I... COO. ~eO. 0 0 1

302 * *00 ~ *00 #SAFETY=0 TRAJ 1 0* * 000 00 0 0000 0000 #SAFETY —O TRAJ 2 0* 00 * * 00 0000 0000 0000 #SAFETY=0 TRAJ 2 * 0 0 0000... * 000 #SAFETY=0 TRAJ 3. 0 0 0 * 00 0000 #SAFETY=0 TRAJ 4 * 000 00** 0* 0 0 0 * * * #SAFETY=1 TRAJ 1 ~* 00 * ~ * 0 #SAFETY=1 TRAJ 2 0 * 000 0000 0000 0000 0 0 10

303 NO #checkpoint next THE COMPUTATIONAL TASK LEVEL 1 COMPUTATIONS 7 8 TAKEOFF CLIMB CRUISE I CRUISE II CRUISE III DESCENT APPROACH LANDING # AIDS 1 1 1 1 1 1 1 # VOR/DME 1 1 1 1 1 1 1 # AIR DATA 2 3 # INERTIAL 1 1 2 # AUTOLAND 1 1 1 1 1 1 1 1 # AUTOLAND 1 1 1 1 1 1 1 # ACTIVE FLUTTER CONTROL 1 1 1 1 1 1 # ENGINE CONTROL 1 1

304 1 1 1 1 # ATTITUDE CONTROL 1 1 1 1 I # WEATHER - JUST NEED 4 OF THESE BASIC BASIC BASIC BASIC YES YES YES # AS(1)=0 1666, *, # AS(1)=l 1 (0,1,2,3,4,5) (0,1,2,3,4,5) (0,1,2,3,4,5) *' ~' # AS(2)=0 * *66 **** # AS(2)=1 I * * (0,1,2,3,4,5) (0,1,2,3,4,5)''. { # AS(3)=0 I ** 6 6 6 {5,6} * #AS(3)=1 I *. * (0,1,2,3,4,5) (0,1,2,3,4,5) (0,1,2,3,4,5) (0,1,2,3,4) ~ I # AS(3)=-0 * * *. * {5,6} {5,6) j # AS(3)=1 l * * * * * (0,1,2,3,4) (0,1,2,3,4) # o(l)=0 I {1,2,3,4,5,6} {1,2,3,4,5,6} {1,2,3,4,5,6} * * * * I # VO(1)l (0) (O) (O). * # VO(2)-0 I * 0 {1,2,3,4,5,6} {1,2,3,4,5,6}' ~, I # VO(2=1 1 ~(0)()'(0) ~ # Vo(3)=o I * {1,2,3,4,5,6} {1,2,3,4,5,6} {1,2,3,4,5,6} {1,2,3,4,5,6}' I # vo(3)1 o* (0) (0)(0) (0) I # VO(4)= i. * * * {1,2,3,4,5,6} {1,2,3,4,5,6} 1 # V(4)==1 I * * * (0) (0) # AD()=-0 1 {1,2,3,4,5,6} {1,2,3,4,5,6} {1,2,3,4,5,6}'' * *' I # AD(1)=1 1(0) (0) (0) ~ ~ I # AD(2)=0 I * {1,2,3,4,5,6} {1,2,3,4,5,6}' *' I # AD(2)=1 I * * (0)(o) (0) # AD(3)-0 I. * { 1,2,3,4,5,6} {1,2,3,4,5,6} {1,2,3,4,5,6} {1,2,3,4,5,6} 4 1 # AD(3)=-1 I * ( * (0) (0) (0)) ( I # AD(4)=0::::: {4, 5,6} {4,5,6) ~ * * 4* * {1,2} 2 # AD(4)=1. * {.. * 4,56} {0,1,2,3})1. * * ** 0,3 * I. ~. * 1,2) {0,1,3,4,5,6} | # IN( )=o0 1 {5,6} {5,6} {5,6}''..' # IN(1)=1

305 (0,1,2,3,4) (0,1,2,3,4) (0,1,2,3,4) * * * * # IN(2)=o I * * {5,6} {5,6} * * # IN(2)=1 1 (0,1,2,3,4) (0,1,2,3,4) 4 * * # IN(3)-O I' * * {5,6} {5,6) {5,6) {3,4,5,6) * I # IN(3)=1 I * * * (0,1,2,3,4) (0,1,2,3,4) (0,1,2,34) (01,2) * # IN(4)=0 * * * * * (3,4,56} (1,2,3,4,5,6} # IN(4)-1 I..* * (0,1,2) (0) # AL(1)=0 {1,2,3,4,,6) (1,2,3,4,5,6) {1,2,3,4,5,6) * * * * * ^ AL(1)=1 (o) (0) (0) (0).. # AL(2)=O0 * (1,2,3,4,5,6 (1,2,3,4,5,6) * } * * # AL(2)=1 I * (0) (0)* * * # AL(3)=0 I *. (1,2,3,4,5,6) {1,2,3,4,5,6) (1,2,3,4,5,6) {1,2,3,4,5,6} * I # AL(3)=1 1( * )()(0)(0) (0) I # AL(4)=0 I *. ** * (1,2,3,4,5,6) {1,2,3,4,5,6)} # AL(4)=1 * * * * (o) (0) ^ AF(1)=o I {1,2,3,4,5,6) {1,2,3,4,5,6) {1,2,3,4,5,6) * * * * # AF(1)=1 (0) (0) (0) * # AF(2)=0 *, ({1,2,3,4,5,6 {(1,2,3,4,5,6} ~ *, I~ A AF(2)=1 * (0) (0) * * # AF(3)=0 * * {1,2,3,4,5,6) {1,2,3,4,5,6) {1,2,3,4,5,6) {1,2,3,4,5,6) * I # AF(3)=1 * * ( (0 ))(0)(0 ) I # AF(4)=0 i * * {1,2,3,4,5,6) {1,2,3,4,5,6) # AF(4)=1 * * * * * (0) (0) # EC()= — 1 1,2,3,4,5,6) {1,2,3,4,5,6) {1,2,3,4,5,6) a* * a a # EC(1)=1 I (0) (0) (O)*. ~ 1 # EC(2)=0 ~I a {1,2,3,4,5,6) (1,2,3,4,5,6) * * * 1 # EC(2)=1 I * (0) (0)' I # EC(3)=0 * * * (1,2,3,4,5,6) {1,2,3,4,5,6) {1,2,3,4,5,6) {1,2,3,4,5,6) * { # EC(3)=1 1 * (0)(0)(0)(0) * # EC(4)=0 i * *' * {2,3,4,5,6) {2,3,4,5,6) 1 # EC(4)=1 * *, * (0,1) (0,1) # AC(1)-O I {1,2,3,4,5,6) {1,2,3,4,5,6) {1,2,3,4,5,6} * a * I # AC(1)=1 l (O) (O) (o) * # AC(2)=0 I * (1,2,3,4,5,6) (1,2,3,4,5,6} * * a I # AC(2)=1 1 a (0)(o) (0) * I # AC(3)=0 a * * {1,2,3,4,5,6} {1,2,3,4,5,6} {1,2,3,4,5,6} {1,2,3,4,5,6} I # AC(3)=1 I a a * (0) (0) (0) (0) * I # AC(4)=0 l* * * * * * {1,2,3,4,5,6} {1,2,3,4,5,6} I

306 # AC(4)-1 I * * * * * (0) (0) NO # (SASR 4) 2' is represented as 2 herein and # (SASR 4) 2 is reprseneted as 1 31 5 1 7 13 19 25 2 8 14 20 26 3 9 15 21 27 4 10 16 22 28 5 11 17 23 29 6 12 18 24 30 # TAKEOFF sift 0.01 666666666666666 666 & 5 6 5 #CLIMB sift 5 0 #CRUISE I 17 13 19 25 28 14 2026 sift 3 9 1521 27 101666666628666666666666 #CRUISE 112329 6 12 18 24 30 # TAKEOFF sift 0.016666666666666666666 6 6 #CLIMB #CRUISE III sift 0.41666666666666 6 6 #DESCENT #CRUISE I sift 0.01666666666 6 #APPROACH #CRUISE II sift 6 6 #LANDING sift 0.0166666666666666666 6 6 #TAKEOFF identity #CLIMB identity #APPROACHUISE I identity #CRUTJISE II identity #CRUISE III identity #DESCENT sift 6 6 #LANDING sift 0.01666666666666666666 6 6 #TAPPREOACHF identity #CLIMBidentity #CRUISE I identity #CRUJISE II identity #CRUISE III identity #DESCENT identity #APPROACH identity 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 00 0.0 0.989 0.011

307 0.989 0.011 0.989 0.011 0.989 0.011

308 APPENDIX I

Scenario for the dual-dual computer example

DUAL-DUAL SYSTEM

Figure 8 represents a portion of a digital flight control system which is dual-dual fail-operating. The servo amplifiers, monitor elements, and servo sets connected to the actual sensors are not shown, to keep the problem within bounds. The sensors are cross-strapped to two remote terminals which convert the sensor signals to digital signals; these are transmitted, on command, over one of the redundant busses for each remote terminal to the flight control computers. The principal functions to be performed are the state estimation function and the command generation/execution function.

Note that a single radar altimeter, attitude heading reference set, and inertial navigation system are carried. Dual digital air data systems, VOR/ILS receivers, and DME receivers are carried and input to both remote terminals. Each remote terminal has a dual redundant bus which interfaces with a bus interface unit; the bus interface unit in turn interfaces with the flight control computer bus and hence with the flight control computer. The dual redundant data bus also interfaces with the remote terminal. In other words, aft remote terminal one and sensor remote terminal one have dual redundant busses 1A and 1B, and aft remote terminal two and sensor remote terminal two have busses 2A and 2B. Flight control mode selection is redundant and interfaces with each of the flight control computers through a serial input/output panel.

Scenario

The mission consists of three phases. The first phase is a takeoff/climb phase and is fifteen minutes in duration. The second phase is a cruise phase of forty-five minutes duration. The descent and landing phase lasts fifteen minutes. Assume all equipment is operating at takeoff. During cruise, weather conditions at the scheduled destination develop requiring Category II capability.
As stated in FAA Advisory Circular 120-29, Category II conditions require both ILS and glide slope receivers to be operable, the radar altimeter to be operable, both flight control computers to be operable, as well as an attitude reference source such as the attitude heading reference set

FIGURE 8. DUAL-DUAL SYSTEM

[Block diagram not reproducible from the scan: the sensors (radar altimeter, digital air data 1 and 2, VOR/ILS 1 and 2, DME 1 and 2, attitude heading reference, inertial navigation system) are cross-strapped to the sensor and aft remote terminals, which connect over redundant busses through bus interface units (BIUs) to the two flight control computers (FCC CPU/memory) and the flight control mode select panel.]

Recoverable equipment MTBF values (hours): DME 1,2: 1000; VOR/ILS 1,2: [illegible]; Air Data 1,2: 2000; FCC CPU/Memory: 500; FCC BIU: 1000; INS: 300; AHRS: 800; Radar Alt.: 700; Remote Terminals: 500; Flt. Cntrl Mode Select: 2000.

310 or inertial navigation system. Both digital air data systems must also be operable.

Table 2 lists the equipment required for a safe flight, the equipment required to initiate the Category II landing at time T = 73 minutes, and the equipment required to complete the Category II landing. For the purpose of the analysis, the final approach and touchdown phase lasts for two minutes. Using the data in Figure 8 and Table 2, the analysts calculated the probability of failure to initiate the landing (and hence diversion to the alternate airport due to loss of equipment required to initiate the landing), the probability of successfully landing at the original destination, and the probability of loss of the aircraft (unsafe flight).

At all times, each component is either totally operating or totally failed. The hardware and software associated with detecting component failures and removing failed elements are assumed to be perfectly accurate and perfectly reliable. Failures in each component have an exponential (Poisson) distribution. The Category II approach and landing can be aborted any time until T = 75 minutes.
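Under the stated assumptions (two-state components, exponential failures, no repair), the probability that a component survives a phase follows directly from its MTBF. A minimal sketch; the function name is an illustration, and the example values are the radar altimeter MTBF from Figure 8 over the 45-minute cruise phase:

```python
import math

def phase_survival(mtbf_hours, phase_minutes):
    """Probability that a component with the given MTBF survives one
    mission phase, assuming exponentially distributed failures with
    rate 1/MTBF and no repair."""
    rate = 1.0 / mtbf_hours                    # failures per hour
    return math.exp(-rate * phase_minutes / 60.0)

# Radar altimeter (MTBF 700 hours) over the 45-minute cruise phase:
p = phase_survival(700.0, 45.0)
```

Products of such per-phase, per-component survival probabilities give the probability that a given minimum-equipment set remains intact through a phase.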

MINIMUM COMPONENT REQUIREMENTS

Component            Safe Flight      Initiate CAT II       Complete CAT II
                     (both phases)    Landing (T=73 min)    Landing (T=75 min)
Radar Alt.                            1                     1
Digital Air Data     1                2                     1
AHRS (or INS)        1                1                     1
VOR                                   2                     1
DME                                   1
Sensor RT            1                2                     1
PU-I                 1                                      1
PU-II                                 1
FCMS                 1                2                     1
Aft RT               1                2                     1

where PU = processing unit; PU-I: one FCC with one associated BIU; PU-II: one FCC with both associated BIUs. [Several cell values are illegible in the scan; the entries above follow the recoverable portions.]

TABLE 2. COMPONENT REQUIREMENTS FOR MISSION PERFORMANCE LEVELS
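The per-column minimums of Table 2 can be checked mechanically against a component state. The encoding below is hypothetical (the names and counts follow the recoverable parts of the table, and the "AHRS or INS" alternative is modeled as a tuple key); it is a sketch, not part of the original analysis:

```python
def meets_requirements(operable, required):
    """operable: component name -> number currently working.
    required: minimum counts; a key may be a tuple of alternative
    components, in which case any one alternative satisfying its
    count is enough (e.g. AHRS or INS)."""
    for key, need in required.items():
        names = key if isinstance(key, tuple) else (key,)
        if not any(operable.get(n, 0) >= need for n in names):
            return False
    return True

# Hypothetical encoding of the "Initiate CAT II Landing" column of Table 2:
INITIATE_CAT2 = {
    "Radar Alt.": 1,
    "Digital Air Data": 2,
    ("AHRS", "INS"): 1,
    "VOR": 2,
    "DME": 1,
    "Sensor RT": 2,
    "FCMS": 2,
    "Aft RT": 2,
}
```

With all equipment up the check passes; losing one digital air data unit before T = 73 minutes fails the "initiate" check and forces a diversion to the alternate airport.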

312 APPENDIX J METAPHOR input data for the dual-dual computer example This appendix contains input to METAPHOR for evaluating the dual-dual computer example of Section 4.5.4 [Dual-Dual Example]. echo #brief build 3 SAFE, DESTINATION SAFE, ALTERNATE UNSAFE THE MISSION LEVEL THE COMPONENT LEVEL 2 NON-DIVERSION SAFETY 22 1 MISSION 11 RADAR ALTIMETER DIGITAL AIR DATA AHRS INS VOR DME SENSOR RT FTMP COMPUTER FCMS AFT RT WEATHER 23223336332 3 TAKEOFF/CLIMB CRUISE APPROACH AND LANDING 1 1 1 YES YES YES #THIS IS THE TRAJECTORY SET FOR ACC SET 0 01 0 #THIS IS THE TRAJECTORY SET FOR ACC SET 2; I 3 15

313 36 255 YES YES YES # FOR CAT II LANDING =0 (INITIATE) SET 1 RADAR ALT. DAD AHRS INS VOR DME SENSOR RT FTMP COMPUTER FCMS AFT RT * WEATHER # FOR CAT II LANDING =0 (INITIATE) SET 2 1, RADAR ALT. 2 * DAD I 1 AHRS * * INS 2 * VOR (1,2} I DME 2 * SENSOR RT * FTMP COMPUTER *2 FCMS *2 AFT RT * 1 WEATHER # FOR CAT II LANDING =0 (INITIATE) SET 3 *. RADAR ALT. 2 * DAD 0 AHRS *I INS *2 VOR (1,2)} DME 2 * SENSOR RT 5 * FTMP COMPUTER 2 * FCMS 2 * AFT RT I* 1 WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 1 * 0 RADAR ALT. e * DAD * e AHRS *. * INS * VOR ** DME * * SENSOR RT. FTMP COMPUTER *.. FCMS * AFT RT * I * WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 2 1 * RADAR ALT. * {0,1) * DAD * * AHRS *.* INS **. VOR. * DME.. SENSOR RT. FTMP COMPUTER *. FCMS * * AFT RT * 1 WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 3 I * RADAR ALT. 2 DAD'0 AHRS 0 INS * VOR. DME. SENSOR RT. FTMP COMPUTER. FCMS

314 I * AFTRT * 1 * WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 4.1 * RADAR ALT. * 2 DAD 1 * AHRS * * INS,{0,1) * I VOR * DME *. SENSOR RT. * FTMP COMPUTER FCMS * AFT RT * 1* WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 5.*. RADAR ALT. *2* DAD *0. AHRS 1 * INS {0,1) * I VOR. * DME * * SENSOR RT.. FTMP COMPUTER * * FCMS *. AFT RT * 1 WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 6 1I * RADAR ALT. *2 *DAD 1 AHRS ** * INS *2 *VOR o00 DME *0* SENSOR RT * FTMP COMPUTER.*.FCMS. AFT RT * 1 * WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 7 * RADAR ALT.'2* DAD * 0 AHRS.1. INS *20 VOR * 0 DME.*~ SENSOR RT * * FTMP COMPUTER * * FCMS 0 * AFT RT * 1 WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 8 1 * RADAR ALT. *2 DAD 1 * AHRS *. INS 2 * VOR'(1,2} DME (0,1) * SENSOR RT. * FTMP COMPUTER * * FCMS s.* AFT RT 4 1 WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 9 I * RADAR ALT. *2 DAD *0 AHRS 1 * INS 2 VOR 1,2} DME 0,1 * SENSOR RT * 0 FTMP COMPUTER * * FCMS. * AFT RT 1I * WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 10

315 1 * RADAR ALT. *2 DAD * AHRS.. INS *2 * VOR * 1,2) * I DME 2 * I SENSOR RT (0,1,2,3,4) * I FTMP COMPUTER. FCMS * AFT RT * 1 * WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 11 1 * RADAR ALT. 2 * DAD 0o* AHRS I 1 INS * 2 VOR * (1,2} * I DME 2 * I SENSOR RT * (0,1,2,3,4) * I FTMP COMPUTER e F FCMS * AFT RT * 1 * WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 12 1 * RADAR ALT. 2 * DAD I * AHRS * INS 2 * VOR * {1,2) * DME 2 * SENSOR RT *: FTMP COMPUTER ({0,1} * I FCMS ** AFTRT *1 WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 13 1 * RADAR ALT. *2 DAD 0 * AHRS 1 INS 2 * VOR (1,2} * I DME 2: SENSOR RT 5 * { FTMP COMPUTER {0,1} * I FCMS ~ * * AFT RT: I WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 14 1 * RADAR ALT. 2 * DAD 1 * AHRS. * INS 2 * VOR (1,2) { DME 2 * SENSOR RT 5 * FTMP COMPUTER 2 * FCMS ~ (0,1) I AFT RT * 1 WEATHER # FOR CAT II LANDING =1 (DON'T INITIATE) SET 15 1 * RADAR ALT. 2' DAD 0 * AHRS I 1 INS *2 VOR 1,2 * I DME 2 SENSOR RT 5 * FTMP COMPUTER FCMS { 0,1 * | AFT RT WEATHER # FOR SAFETY=0 SET I * I* RADAR ALT. ~ {1,2} I DAD * * I AHRS

316 * INS * VOR DME' 1,2) SENSOR RT 1,2,3,4,5) I FTMP COMPUTER (1,2} FCMS * 1,2) AFT RT 0 * WEATHER # FOR SAFETY-0 SET 2. * I RADAR ALT. *{1,2} DAD.0 AHRS * * INS *. VOR *. DME * (1,2)} SENSOR RT 1,2,3,4,5} ( FTMP COMPUTER 1,2} FCMS 1,2} 1 AFT RT * 0 WEATHER FOR SAFETY-O SET 3 * 1 1 RADAR ALT. * 2 1,2)' DAD *11 AHRS 0 * INS 2 {1,2} VOR * (1,2) * DME * 2 1,2} SENSOR RT * 5 1,2,3,4,5) I FTMP COMPUTER 2 (1,2)} FCMS 2 (1,2) AFT RT * 1 * WEATHER # FOR SAFETY=0 SET 4 * 11 I RADAR ALT. 2 {1,2) DAD 01 AHRS 1 * INS 2 (1,2} VOR (1,2) { DME 2 1,2} SENSOR RT 5 1,2,3,4,5) I FTMP COMPUTER 2 11,2} FCMS 2 (1,2} AFT RT 1 *I WEATHER # FOR SAFETY-=0 SET 5 * 1 I RADAR ALT. 2 {1,2} I DAD 10 AHRS **1 I INS * 2 ({1,2) VOR * 1,2)' DME 2 (1,2) SENSOR RT *5 1,2,3,4,5) I FTMP COMPUTER *2 1,2} FCMS 2 1,2} 1 AFT RT * 1 I WEATHER # FOR SAFETY-O SET 6 * 1 1 RADAR ALT. 2 {1,2}1 DAD *00 AHRS' 11 INS * 2 {1,2} VOR * (1,2}' DME 2 1,2} SENSOR RT * 5 1,2,3,4,5) I FTMP COMPUTER *2 1,2) FCMS 2 1,2} AFT RT * 1 * I WEATHER # FOR SAFETY==0 SET 7 * 0 * I RADAR ALT. * * {1,2} 1 DAD ~**1 AHRS ~ * * INS ~ * * VOR ~ * * DME

317 ~ {1,2) 1 SENSOR RT (1,2,3,4,5) i FTMP COMPUTER 1,2} FCMS 1,2) AFT RT * 1 *WEATHER # FOR SAFETY=0 SET 8 0 I RADAR ALT. (1,2}1 DAD 0 AHRS 1 INS. VOR * DME 1,2} I SENSOR RT 1,2,3,4,5) I FTMP COMPUTER 1,2) FCMS 1,2} I AFT RT I* 1 I WEATHER FOR SAFETY=O SET 9 I * RADAR ALT. (0,1) (1,2} I DAD I *1 AHRS ~ INS * VOR DME (1,2) I SENSOR RT 1,2,3,4,5} I FTMP COMPUTER ( 1,2} ) FCMS (1,2)1 AFT RT *1 *I WEATHER FOR SAFETY=0 SET 10 1 I RADAR ALT. {(0,1) (1,2) 1 DAD * 0 AHRS * 1 INS * VOR ~ DME * * 1,2) I SENSOR RT * {1,2,3,4,5} I FTMP COMPUTER {1,2} FCMS'.(1,2) AFT RT 1 * I WEATHER # FOR SAFETY=0 SET 11 * 1 * RADAR ALT. 2 (1,2} I DAD 01 AHRS *0 INS ~ VOR DME * {1,2) 1 SENSOR RT * (1,2,3,4,5} I FTMP COMPUTER * (1,2} FCMS *,21 I AFT RT * I WEATHER FOR SAFETY=0 SET 12 I* I RADAR ALT. 2 {1,2} I DAD * 0 AHRS 0 1 INS *. VOR * * DME {1,2} I SENSOR RT * {1,2,3,4,5) I FTMP COMPUTER. (1,2} FCMS * * 1,2 AFT RT 1 * WEATHER # FOR SAFETY=0 SET 13 * 1 * I RADAR ALT. 2 {1,2) I DAD 11 AHRS.. INS {0,1 * I VOR * * DME {1,2}) SENSOR RT 1,2,3,4,5) I FTMP COMPUTER'*(1,2) I FCMS

318 * (1,2}) AFTRT 1 * WEATHER # FOR SAFETY=O SET 14 * 1 RADAR ALT. 2 {1,2) 1 DAD 1 1 AHRS''1 I INS'{0,1 * I VOR * * DME *'1,2}) SENSOR RT (1,2,3,4,5} 1 FTMP COMPUTER (1,2} FCMS * (1,2} AFT RT I* WEATHER # FOR SAFETY=0 SET 15 1 * I RADAR ALT. 2 (1,2}) DAD 011 AHRS * 1 * INS (0,1) I VOR * 1 DME * (1,2} ) SENSOR RT * (1,2,3,4,5)} FTMP COMPUTER.. 1,2} FCMS {1,2) AFT RT 1 * WEATHER # FOR SAFETY=- SET 16 * 1 I RADAR ALT. * 2 (1,2} DAD *0 0 AHRS ~1 1 INS {0,1 * I VOR * DME 1 *1,2) 1 SENSOR RT * 1,2,3,4,5} 1 FTMP COMPUTER. {1,2} FCMS' 1,21, AFT RT * * I WEATHER # FOR SAFETY=O SET 17 I* 1 I RADAR ALT. 2 {1,2} 1 DAD 11 AHRS INS 2 * VOR'0 DME 1,2} 1 SENSOR RT ( 1,2,3,4,5} I FTMP COMPUTER ( 1,2} FCMS 1,2} AFT RT 1 * I WEATHER # FOR SAFETY=0 SET 18 1 I RADAR ALT. 2 (1,2} 1 DAD 1 0 AHRS I1 INS 2 * VOR 0 * DME {1,2} SENSOR RT 1 *,2,3,4,5} I FTMP COMPUTER * 1,2) I FCMS (1,2) AFT RT'I' WEATHER # FOR SAFETY== SET 19 * 1 1 RADAR ALT. * 2 1,2} I DAD'01 AHRS I' INS 2 * VOR 0 * DME 1* 1,2)} SENSOR RT * 1,2,3,4,5)} FTMP COMPUTER (1,2) FCMS * (1,2)1 AFT RT * I WEATHER # FOR SAFETY==0 SET 20

319 1 * 1 RADAR ALT. 2 {1,2) 1 DAD 0 0 AHRS I I INS *2 VOR 0 * DME ** 1,2} I SENSOR RT. (1,2,3,4,5} I FTMP COMPUTER 1,2)1 FCMS 1,21 AFT RT 1 WEATHER # FOR SAFETY=0 SET 21 * 1 * I RADAR ALT. 2 {1,2)} DAD 11 AHRS *. INS 2 * VOR { 1,2 * I DME 01}{(1,2 1 SENSOR RT * 1,2,3,4,5)} FTMP COMPUTER. 1,2} FCMS.1,2} AFT RT * * WEATHER # FOR SAFETY=0 SET 22 * 1 * I RADAR ALT. * 2 1,2} 1 DAD 1 0 AHRS * I INS 2 * VOR {1,2)} DME {0,} {1,2} 1 SENSOR RT {1,2,3,4,5} ( FTMP COMPUTER.. 1,2} ) FCMS 1,2 I AFT RT 1 *I WEATHER # FOR SAFETY=0 SET 23 1 I RADAR ALT. 2 {1,2} DAD 0 1 AHRS * * INS *2 VOR {1,2) *I DME {0, (1{,2)1 SENSOR RT * 1,2,3,4,5) I FTMP COMPUTER * 1,2) FCMS * 1,2} AFT RT * * I WEATHER # FOR SAFETY=0 SET 24 * 1 I RADAR ALT. 2 {1,2} I DAD *00 AHRS I11 INS 2 VOR {1,2} * I DME * (0,1} 1,2) SENSOR RT * 1,2,3,4,5) I FTMP COMPUTER (1,2)} FCMS * 1,2} AFT RT * I WEATHER # FOR SAFETY=-0 SET 25 * 1 * I RADAR ALT. * 2 (1,2) I DAD 1 1 AHRS *. INS 2 * VOR * 1,2) * DME 2 (1,2} SENSOR RT * (0,1,2,3,4} {(1,2,3,4,5)} FTMP COMPUTER: (1,2) FCMS ** {1,2} AFT RT I * I WEATHER # FOR SAFETY=0 SET 28 * 1 * I RADAR ALT. * 2 {1,2} I DAD * 1 0 I AHRS

320 *1 INS *2 VOR {1,2} * DME 2 (1,2) SENSOR RT {0,1,2,3,4) {1,2,3,4,5} I FTMP COMPUTER * (1,2} FCMS * 1,2 AFT RT * 1 * WEATHER # FOR SAFETY-0 SET 27 1 * RADAR ALT. 2 {1,2} DAD 0 1 AHRS I * INS 2 * VOR * (1,2) * DME *2 {1,2 SEN3OR RT * 0,1,2,3,4} {1,2,3,4,5)} FTMP COMPUTER * {1,2) FCMS 1,2) AFT RT I *I WEATHER # FOR SAFETY=0 SET 28 * 1 RADAR ALT. 2 {1,2) 1 DAD 0 0 AHRS *I1 I INS *2 VOR * {1,2)} DME 2 1,2} SENSORRT * {0,1,2,3,4} {1,2,3,4,5} { FTMP COMPUTER ~ {1,2} FCMS *{1,2} AFT RT * 1 * WEATHER # FOR SAFETY=0 SET 29 * 1 * I RADAR ALT. * 2 1,2} I DAD 1 1 AHRS * I NS 2 * VOR {1,2) * DME 2 {1,2} SENSOR RT 5 (1,2,3,4,5} { FTMP COMPUTER {0,1} {1,2} I FCMS * (1,2} 1 AFT RT 1 *( WEATHER # FOR SAFETY=0 SET 30 * 1 * I RADAR ALT. 2 {1,2) 1 DAD 1 0 AHRS o* I INS 2 * VOR * {1,2} * DME 2 (1,2) SENSOR RT 5 1,2,3,4,5)} FTMP COMPUTER {0,1)} {1,2} I FCMS * {1,2) 1 AFT RT 1 * I WEATHER FOR SAFETY=0 SET 31 1 * RADAR ALT. 2 (1,2} 1 DAD *01 AHRS *I * INS *2 VOR * (1,2} * DME 2 1,2} SENSOR RT 5 1,2,3,4,5} i FTMP COMPUTER {0,1) (1,2}) FCMS ** (1,2} I AFT RT *1 I WEATHER # FOR SAFETY=0 SET 32 1 * I RADAR ALT. *2 1,2} I DAD * 00 AHRS 1 11 INS * 2 * VOR. {1,2} * I DME

#   [Remainder of the state table for SAFETY=0 set 32: entries for
#    SENSOR RT, FTMP COMPUTER, FCMS, AFT RT, and WEATHER.]
# FOR SAFETY=0 SET 33 THROUGH SET 36
#   [One state table per set. Each table lists, for the components
#    RADAR ALT., DAD, AHRS, INS, VOR, DME, SENSOR RT, FTMP COMPUTER,
#    FCMS, AFT RT, and WEATHER, a pair of entries per component: a state
#    value, a state set such as {1,2} or {1,2,3,4,5}, or the wildcard *.]
# FOR SAFETY=1 SET 1 THROUGH SET 10
# GOOD WEATHER, THEN CRASH. 10 OF THEM.
#   [One state table per set, in the same component order and entry
#    format as above.]
# FOR SAFETY=1 SET 11 THROUGH SET 36
# BAD WEATHER, NO DIVERSION, CRASH - SHOULD BE 26 OF THEM
#   [One state table per set, in the same component order and entry
#    format as above.]
# BAD WEATHER, DIVERSION, CRASH -
# SHOULD BE 15*2*10=300 OF THEM
# LET METAPHOR GENERATE AND REDUCE THEM FOR US.
# THE FOLLOWING WERE GENERATED BY METAPHOR
#............................
# trajectory set number 0 through number 126 (out of 218)
#   [One state table per trajectory set. Each table lists, for the
#    components RADAR ALTIMETER, DIGITAL AIR DATA, AHRS, INS, VOR, DME,
#    SENSOR RT, FTMP COMPUTER, FCMS, AFT RT, and WEATHER, a pair of
#    entries per component: a state value, a state set such as {0,1},
#    {1,2}, {0,1,2,3,4}, or {1,2,3,4,5}, or the wildcard *.]
#............................
# trajectory set number 127 (out of 218)

353 I1 ~ IRADAR ALTIMETER 2 (1,2} DIGITAL AIR DATA 0 1 AHRS ~* 1 * INS 2 * VOR * ~ (12} ) IDME * 0,1) {1,2 I SENSOR RT * (1,2,3,4,5) I FTMP COMPUTER * {(1,2} { FCMS * 0 AFT RT * I * WEATHER -............................ # # trajectory set number 128 (out of 218) * 1 0 IRADAR ALTIMETER 2 {1,2} IDIGITAL AIR DATA 0 1 AHRS * 1 * INS 2 * VOR 0 DME * * 0 SENSOR RT * * * FTMP COMPUTER ~ * * FCMS ~ *. AFT RT * 1 * WEATHER #............................ # trajectory set number 129 (out of 218) ~ 1 0 IRADAR ALTIMETER 2 {1,2} I DIGITAL AIR DATA 0 1 AHRS * I * INS *2 * VOR 0* DME {1,2} SENSOR RT. * 0 FTMP COMPUTER * * * FCMS. * * AFT RT {* I * WEATHER............................ # trajectory set number 130 (out of 218) 1 0 I RADAR ALTIMETER * 2 {1,2} DIGITAL AIR DATA 0 1 AHRS * 1 * INS 2 * VOR * 0 * DME * {1,2} SENSOR RT * * {1,2,3,4,5} I FTMP COMPUTER * * 0 FCMS * * * AFT RT * 1 * WEATHER #............................ # trajectory set number 131 (out of 218) # 1 * I RADAR ALTIMETER * 2 {1,2} DIGITAL AIR DATA 0 1 AHRS ~ 1 * INS 2 * VOR 0 0 DME * {1,2} I SENSOR RT * * {1,2,3,4,5} I FTMP COMPUTER (1,2} I FCMS * * 0 AFT RT * 1 * WEATHER #............................ # trajectory act number 132 (out of 218)

354 ~ 1 * I RADAR ALTIMETER 2 {12} I DIGITAL AIR DATA * 0 1 AHRS * 1 * INS {0,1} * VOR * * * DME * * 0 SENSOR RT * * * FTMP COMPUTER * * * FCMS ~ * * AFT RT * 1 * WEATHER #............................ # # trajectory set number 133 (out of 218) * 1 * I RADAR ALTIMETER 2 (1,2} DIGITAL AIR DATA * 1 * INS (0,1} * IVOR * ~ I DME * {(1,2) ISENSOR RT * * 0 FTMP COMPUTER * * * FCMS * * * AFT RT * 1 * WEATHER............................ # # trajectory set number 134 (out of 218) * 1 * RADAR ALTIMETER * 2 {1,2) IDIGITAL AIR DATA 0 1 AHRS 1 INS {0,1} * IVOR * * jN DME * {(1,2) I SENSOR RT * * {1,2,3,4,5} I FTMP COMPUTER * * 0 FCMS ~ * * AFT RT * 1 * WEATHER............................ # trajectory set number 135 (out of 218) * 1 * I RADAR ALTIMETER 2 {1,2} I DIGITAL AIR DATA 0 1 AHRS ~ 1 INS {(0,1} VOR ~ * I DME.* {1,2} I SENSOR RT {1,2,3,4,5} I FTMP COMPUTER * {(1,2} FCMS * 0 AFT RT * I WEATHER #............................ # trajectory set number 136 (out of 218) ~ 1 * I RADAR ALTIMETER 2 (1,2) I DIGITAL AIR DATA 0 1 AHRS ~ 0' INS ~ * ~ VOR *.* DME ~ * 0 SENSOR RT ~ * * FTMP COMPUTER ~ * * FCMS ~ * * AFT RT * 1 * WEATHER #............................ # # trajectory sat number 137 (out of 218)..

355 * 1 * RADAR ALTIMETER 2 (1,2} I DIGITAL AIR DATA 0 1 AHRS 0 * INS * *~ * VOR ~* * * DME * {1,2) I SENSOR RT ~* * 0 FTMP COMPUTER ~* * * FCMS ~* * * AFT RT * 1 * WEATHER #............................ # trajectory set number 138 (out of 218) * 1I RADAR ALTIMETER 2 (1,2} I DIGITAL AIR DATA 0 1 AHRS * 0 * INS *' * * VOR * * * DME ( 1,2) I SENSOR RT {1,2,3,4,5} I FTMP COMPUTER * * 0 FCMS * * * AFT RT * 1' WEATHER #............................ # trajectory set number 139 (out of 218) 1 * RADAR ALTIMETER *2 {1,2} I DIGITAL AIR DATA 0 1 AHRS * 0 * INS * * * VOR * * * DME * {1,2} { SENSOR RT * 1,2,3,4,5} I FTMP COMPUTER ( 1,2} FCMS ** 0 AFT RT * I' WEATHER #............................ # trajectory set number 140 (out of 218) * 1 * I RADAR ALTIMETER {0,1} 0 I DIGITAL AIR DATA ~~* * * AHRS ~H ~~ *. INS ~* ~ * * VOR ~* * ~ * DME * *~ *~ SENSOR RT ~* * * ~FTMP COMPUTER ~. ~ * * FCMS ~F * ~* AFT RT * 1' WEATHER............................ # trajectory set number 141 (out of 218) * 1* I RADAR ALTIMETER {0,1) (1,2} I DIGITAL AIR DATA * * 0 AHRS * * 0 INS ~~* * * VOR ~* * * DME ~*~ * * SENSOR RT * ~ * * ~FTMP COMPUTER ~*~ * * FCMS ~ * * AFT RT 1I * WEATHER #............................ # trajectory set number 142 (out of 218),o

356 * 1 * RADAR ALTIMETER ( 0,1} {1,2) DIGITAL AIR DATA'* * 0 AlIRS * * 1 INS * * ~ * ~VOR ~~* * * DME ~* * 0 SENSOR RT ~* * *~ FTMP COMPUTER ~ ~ * * ~ FCMS ~. * * ~ AFT RT ~ 1 * WEATHER #............................ # # trajectory set number 143 (out of 218) *I * I RADAR ALTIMETER {0,1) {1,2} 1 DIGITAL AIR DATA ~ * 0 AHRS * ~ 1 INS ~ * ~ * ~VOR ~* * ~ * DME {*1,2} I SENSOR RT ~* * 0 FTMP COMPUTER ~* * ~ * FCMS ~* * ~ * AFT RT * 1 * WEATHER............................ # trajectory set number 144 (out of 218) # * 1 I RADAR ALTIMETER *0,1} {1,2} I DIGITAL AIR DATA * * 0 AHRS * * 1 INS * ** VOR * * * DME. *. {1,2} I SENSOR RT * * ({1,2,3,4,5} I FTMP COMPUTER * * 0 FCMS ~* * ~ ~* AFT RT * 1 * WEATHER #............................ # trajectory set number 145 (out of 218) I * I RADAR ALTIMETER ( 0,1) {1,2} I DIGITAL AIR DATA * * 0 AHRS ~* * 1 INS ~* ~ ~ * VOR ~*~ * *~ DME * * ~ ({1,2} ISENSOR RT (1,2,3,4,5} IFTMP COMPUTER 1,2) I FCMS * 0 AFT RT * 1 * WEATHER.................................... # trajectory set number 146 (out of 218) 1 * RADAR ALTIMETER {*0,1} {1,2} I DIGITAL AIR DATA * * 1 AHRS ~~~* * * INS ~* ~ ~ * VOR ~<* * * DDME *~ * 0 SENSOR RT ~ ~ * * ~ FTMP COMPUTER ~~* * * CFCMS ~~~* * * AFT RT * 1 * WEATHER #............................ # # trajectory set number 147 (out of 218)..

357 * I * IRADAR ALTIMETER * (0,1} (1,2} I DIGITAL AIR DATA * * 1 AlHRS ~~~* * * INS.* *.* VOR ~ (* * * DME * (1,2} I SENSOR RT ~* * 0 FTMP COMPUTER * * ~ * FFCMS ~~* * * AFT RT * 1 * WEATHER #............................ # trajectory set number 148 (out of 218) * I * {RADAR ALT!METER (0,1) {1,2} I DIGITAL AIR DATA * * I AHRS ~~~* * * INS ~~* * ~ * VOR * * * DME 1,2} I SENSOR RT 1,2,3,4,5} FTMP COMPUTER * ~ 0 FCMS ~~* * *~ AFT RT * 1 * WEATHER............................ # trajectory set number 149 (out of 218) * 1 I RADAR ALTIMETER {0,1} {1,2} I DIGITAL AIR DATA * ~1 AHRS * * * INS INS ~~~* * * VOR'* ~' * DME ~* *' {1,2} ISENSOR RT * * {1,2,3,4,5} I FTMP COMPUTER **1,2) I FCMS ~* * 0 AFT RT * 1 * WEATHER............................ # # trajectory set number 150 (out of 218) * I 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA 0 0 AHRS * 1 1 INS * * * VOR * * * DME * * * SENSOR RT * * * FTMP COMPUTER ~ * * FCMS * * * AFT RT * I * WEATHER #............................ # # trajectory set number 151 (out of 218) * 1 I RADAR ALTIMETER 2 0 DIGITAL AIR DATA * 0 1 AHRS * 1 * INS {0,1} IVOR ~ * * DME * * * SENSOR RT * * * FTMP COMPUTER ~ * * FCMS * * * AFT RT * 1 * WEATHER #............................ # trajectory act number 152 (out of 218),J

358 * 1 1 RADAR ALTIMETER, 2 0 DIGITAL AIR DATA ~ 0 1 AIRS * I * INS * 2 * VOR 0 ~ DME *t * * SENSOR RT * * * FTMP COMPUTER * * * FCMS * * * AFT RT * 1 * WEATHER #............................ # # trajectory set number 153 (out of 218)' 1 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA 0 1 AHRS.I 1 INS 2 * VOR {1,2), DME {0,1 ~ SENSOR RT ~ * * FTMP COMPUTER. *. FCMS.* * * AFT RT * 1 * WEATHER............................ # # trajectory set number 154 (out of 218) # I 1 I RADAR ALTIMETER 2 0 DIGITAL AIR DATA 0 1 AHRS * 1 * INS 2 * VOR ( 1,2} I DME 2 * SENSOR RT {0,1,2,3,4} * I FTMP COMPUTER * * * FCMS ~ * * AFT RT I * WEATHER #............................ # trajectory set number 155 (out of 218) ~ I 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA 0 1 AHRS * I, INS 2 * VOR {1,2} 1 DME 2 * SENSOR RT 5 * FTMP COMPUTER * 0,1) * IFCMS ~* * * AFT RT * 1 * WEATHER............................ # trajectory set number 156 (out of 218) # *1 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA 0 1 AHRS * 1' INS 2 * VOR {1,2} * DME 2, SENSOR RT 5 * FTMP COMPUTER 2 * FCMS ~ {0,1}. IAFTRT * 1 * WEATHER #............................ # trajectory set number 157 (out of 218)..

359 * I I RADAR A IlTIMETlER * 2 0 DIGITAL AIR DATA * 0 1 AHRS * 1 1 INS * 2 {1,2) VOR (1,2} DME * 2 {1,2} SENSOR RT 5 (1,2,3,4,5)} FTMP COMPUTER *2 ({1,2 FCMS *2 (1,2) AFT RT I1 WEATHER #............................ # trajectory set number 158 (out of 218) ~ I 1 RADAR ALTIMETER *2 0 DIGITAL AIR DATA 0 1 AHRS * 1 1 INS. 2 {1,2) VOR * 1,2)} DME * 2 (1,2} SENSOR RT * 5 {1,2,3,4,5) FTMP COMPUTER *2 (1,2} IFCMS 2 0 AFT RT'~ 1 0 WEATHER #............................ # # trajectory set number 159 (out of 218) * 1 I RADAR ALTIMETER 2 0 DIGITAL AIR DATA 0 0 AHRS * 1 0 INS (0,1} * IVOR * * * DME * * * SENSOR RT' * * * FTMP COMPUTER * * * FCMS * * * ~ AFT RT * 1 * WEATHER #............................ # trajectory set number 160 (out of 218) ~ 1 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA * 0 0 AHRS * 1 0 INS 2 * VOR 0 * DME * * * SENSOR RT * * * FTMP COMPUTER * * FFCMS ~ * * AFT RT * 1' WEATHER #............................ # trajectory set number 161 (out of 218) * I 1 RADAR ALTIMETER * 2 0 DIGITAL AIR DATA * 0 0 AHRS * 1 0 INS *2 * VOR * (1,2} DME {0,1) * SENSOR RT * * * FTMP COMPUTER * * * FCMS * * * AFT RT * 1 * WEATHER # tjo sn1..................... # trajectory set number 162 (out of 218)

360 * 1 1 RADAR ALTIMETER ~ 2 0 DIGITAL AIR DATA * 0 0 AHRS * 1 0 INS * 2 * VOR {(1,2} * IDME * 2 * ISENSOR RT {0,1,2,3,4} * I FTMP COMPUTER ~ *< * FCMS ~ ~* * AFT RT I * WEATHER............................ # trajectory set number 163 (out of 218) * 1 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA 0 0 AHRS * 1 0 INS ~ 2 * VOR ( 1,2} * DME 2 * SENSOR RT 5 * FTMP COMPUTER {0,1} * IFCMS * 1 * WEATHER #............................ # # trajectory set number 164 (out of 218) # * 1 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA 0 0 AHRS * 1 0 INS 2 * VOR {1,2} I DME 2 * SENSOR RT 5 * FTMP COMPUTER *2 * FCMS * 0,1} * AFT RT * 1 * WEATHER............................ # trajectory set number 165 (out of 218) ~I 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA * 1 0 AHRS. * 1 INS ~. ~ * * VOR ~. ~ * * DME.~* * * SENSOR RT ~. ~ * * FTMP COMPUTER *.* * OFCMS ~. ~ * * AFT RT * 1 * WEATHER #............................ # trajectory set number 166 (out of 218) ~ 1 I RADAR ALTIMETER 2 0 DIGITAL AIR DATA * 1 I AHRS.* * INS * {0,1} I IVOR ~ * * DME ~ * * SENSOR RT ~ * * FTMP COMPUTER * *. FCMS * * * AFT RT * 1 * WEATHER #............................ # trajectory act number 167 (out of 218) L

361 ~I I RADAR ALTIMETER ~ 2 0 DIGITAL AIR DATA * 1 1 AHRS * * * INS * 2 * VOR 0 * DME ~ * * SENSOR RT ~ * * FTMP COMPUTER * * * FCMS * * * AFT RT * 1 * WEATHER............................ # trajectory set number 168 (out of 218) * 1 1 RADAR ALTIMETER * 2 0 DIGITAL AIR DATA ~ 1 1 AHRS ~ * * INS 2 * VOR * {1,2) DME (0,1 * SENSOR RT * * * FTMP COMPUTER * * * FCMS * * * AFT RT * 1 * WEATHER............................ # trajectory set number 169 (out of 218) * 1 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA * 1 1 AHRS * * * INS 2 ~ VOR {1,2} * I DME *2 * SENSOR RT (0,1,2,3,4) ~ FTMP COMPUTER ~ * * FCMS * ~ * AFT RT * 1 * WEATHER............................ # trajectory act number 170 (out of 218) ~ 1 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA * 1 1 AHRS ~* ~ * INS * 2 * VOR (1,2)' I DME 2 * SENSOR RT * 5 * FTMP COMPUTER {0,1}' I FCMS ~ * ~ AFT RT 1 * WEATHER #............................ # # trajectory set number 171 (out of 218) * 1 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA * 1 1 AHRS ~~ * INS 2 * VOR {1,2) * IDME 2 * SENSOR RT * 5 * FTMP COMPUTER * 2 * FCMS * {0,1) * IAFT RT * 1 * WEATHER #............................ # trajectory act number 172 (out of 218) L

362 * 1 1 RADAR ALTIMETER, 2 0 DIGITAL AIR DATA * 1 1 AHRS * * 1 INS * 2 {1,2} VOR {1,2} * DME * 2 {1,2} SENSOR RT * 5 {1,2,3,4,5} I FTMP COMPUTER 2 1,2 FCMS 2 {1,2 AFT RT * I * WEATHER............................ # trajectory set number 173 (out of 218) * 1 1 RADAR ALTIMETER *2 0 DIGITAL AIR DATA * 1 0 AHRS. * 0 INS {(0,1) * I VOR ~ * * DME ~. * * SENSOR RT ~ * * FTMP COMPUTER ~ * * FCMS * * * AFT RT ~ 1 * WEATHER............................ # trajectory set number 174 (out of 218) ~* 1 1 RADAR ALTIMETER 2 0 DIGITAL AIR DATA * 1 0 AHRS * * 0 INS 2 * VOR * 0 DME * * * SENSOR RT * * * FTMP COMPUTER * * * FCMS * * * AFT RT * I * WEATHER #............................ # trajectory set number 175 (out of 218) # 1 1 RADAR ALTIMETER *2 0 DIGITAL AIR DATA * 1 0 AHRS ~ * 0 INS 2 * VOR { 1,2). DME {0,1 * SENSOR RT * * * rFTMP COMPUTER ~ * * FCMS * * * AFT RT * 1 * WEATHER............................ # # trajectory set number 176 (out of 218) * I I RADAR ALTIMETER 2 0 DIGITAL AIR DATA * 1 0 AHRS * 0 INS *2 * VOR {1,2} I DME *2 * ISENSOR RT * {0,1,2,3,4} * IFTMP COMPUTER ~ * * FCMS * * * AFT RT 1I * WEATHER #............................ # trajectory set number 177 (out of 218)

363 * 1 1 RADAR ALTIMETER * 2 0 DIGITAL AIR DATA ~ 1 0 AIIRS * *~ 0 INS * 2 * VOR ( 1,2)}, DME 2 * SENSOR RT * 5 * FTMP COMPUTER {0,1} IFCMS ~* * * AFT RT * 1 * WEATHER #............................ # # trajectory set number 178 (out of 218) # * 1 1 RADAR ALTIMETER *2 0 DIGITAL AIR DATA * 1 0 AHRS * * 0 INS * 2 * VOR {1,2} I DME *2' SENSOR RT * 5 FTMP COMPUTER 2 * FCMS (0,1} IAFTRT * 1 * I WEATHER #............................ # # trajectory set number 179 (out of 218) # * I I RADAR ALTIMETER * 2 (1,2} I DIGITAL AIR DATA 0 1 AHRS'* 1 0 INS 2 * VOR ~* * * DME * *~ 0 SENSOR RT * *~ * FTMP COMPUTER ~* * * FCMS ~* * * AFT RT * 1 * WEATHER............................ # # trajectory set number 180 (out of 218) I 1 I RADAR ALTIMETER *2 {1,2} IDIGITAL AIR DATA 0 1 AHRS * 1 1 INS 2 * VOR {1,2}, I DME 2 0 ISENSOR RT {0,1,2,3,4} * I FTMP COMPUTER * * * FCMS * * * AFT RT * 1 * WEATHER #............................ # trajectory set number 181 (out of 218) I 1 1 RADAR ALTIMETER * 2 (1,2) I DIGITAL AIR DATA 0 1 AHRS * 1 1 INS 2 ~ VOR {1,2} I DME *2 0 SENSOR RT *5 * FTMP COMPUTER {0,1} I FCMS * * * AFT RT * 1 * WEATHER #............................ # trajectory set number 182 (out of 218),,

364 *1 1 I RADAR ALTIMETER * 2 {1,2} I DIGITAL AIR DATA 0 1 AHRS * 1 1 INS 2 * VOR {1,2} IDME * 2 0 SENSOR RT * $ * FTMP COMPUTER 2 * FCMS (0,1) IAFTRT * 1 * WEATHER #............................ # trajectory set number 183 (out of 218) *; 1 IRADAR ALTIMETER * 2 {1,2) DIGITAL AIR DATA 0 1 AHRS * 1 0 INS 2 * VOR * * * DME * * {1,2) ISENSOR RT * * 0 FTMP COMPUTER * * * FCMS * * * AFT RT * 1 * WEATHER #............................ # # trajectory set number 184 (out of 218) * 1 1 IRADAR ALTIMETER 2 {1,2) 1 DIGITAL AIR DATA 0 1 AHRS * 1 1 INS 2 * VOR {1,2) * DME * 2 {1,2) SENSOR RT 5 0 FTMP COMPUTER {0,1) * IFCMS * * AFT RT * I * WEATHER #............................ # trajectory set number 185 (out of 218) * 1 1 IRADAR ALTIMETER 2 {1,2) DIGITAL AIR DATA 0 1 AHRS * 1 INS 2 * VOR {1,2) * DME ~ 2 {1,2) SENSOR RT 5 0 FTMP COMPUTER 2 * FCMS *0,1) * IAFT RT ~ 1 * WEATHER............................ # # trajectory set number 186 (out of 218) 1 11 I RADAR ALTIMETER 2 {1,2) I DIGITAL AIR DATA 0 1 AHRS * 1 0 INS 2 * VOR * * * DME {1,2) I SENSOR RT * {(1,2,3,4,5}) FTMP COMPUTER ~ * 0 FCMS ~* * * AFT RT * I * WEATHER #............................ # # trajectory acset number 187 (out of 218)..

365 * 1 1 I RADAR ALTIMETER * 2 {1,2} I DIGITAL AIR DATA 0 1 AHRS * 1 1 INS *2 * VOR * 1,2) DME *2 (1,2) SENSOR RT ~ 5 {1,2,3,4,5) I FTMP COMPUTER * 2 0 IFCMS 0,1} I AFT RT * I * IWEATHER #............................ # trajectory set number 188 (out of 218) ~* 1 1 I RADAR ALTIMETER 2 {1,2} I DIGITAL AIR DATA 0 1 AHRS * 1 0 INS 2 0 VOR {1,2) * DME 2 (1,2) SENSOR RT * 5 {1,2,3,4,5) 1 FTMP COMPUTER ~ 2 {1,2} IFCMS * 2 * AFT RT * 1 * WEATHER #............................ # trajectory set number 189 (out of 218) * 1 1 I RADAR ALTIMETER 2 {1,2) I DIGITAL AIR DATA * 0 1 AHRS * 1 1 INS * 2 ~ VOR * {1,2} DME 2 {1,2} SENSOR RT 5 {1,2,3,4,5) FTMP COMPUTER {0,1} 0 IFCMS * * { AFT RT * 1 * WEATHER r............................ # tnjectory set number 190 (out of 218) # i * 1 I RADAR ALTIMETER * 2 (1,2} 1 DIGITAL AIR DATA * 0 AHRS { * 1 1 INS -* 2 * VOR (1,2 * I DME * ~2 {1,2} SENSOR RT {0,1,2,3,4) 0 I FTMP COMPUTER * * * FCMS ~* * * AFT RT * I * WEATHER #............................ # trajectory set number 191 (out of 218) ~*I 1 I RADAR ALTIMETER 2 (1,2) I DIGITAL AIR DATA * 0 AHRS * 1 I INS 2' VOR {1,2) * DME * 2 (1,2} SENSOR RT {0,1,2,3,4) {1,2,3,4,5} FTMP COMPUTER * * 0 FCMS * * * AFT RT * 1 * WEATHER # # trajectory act number 192 (out of 218)

366 *1 1 IRADAR ALTIMETER * 2 1,2) I DIGITAL AIR DATA * 0 1 AIIRS * I I INS * 2 * VOR, {1,2) * DME * 0,1} 0 SENSOR RT ~*.* FTMP COMPUTER * ~ * * FCMS ~* * * AFT RT * 1 * WEATHER #............................ # trajectory set number 193 (out of 218) * 1 1 IRADAR ALTIMETER * 2 1,2) I DIGITAL AIR DATA 0 1 AHRS * 1 1 INS *2 * VOR * {1,2 * I DME * 0,1 {1,2} ISENSOR RT ~ * ~ 0 FTMP COMPUTER ~* * * FCMS * * * AFT RT * 1 * WEATHER............................ # trajectory set number 194 (out of 218) # * 1 1 IRADAR ALTIMETER 2 {1,2} DIGITAL AIR DATA 0 1 AHRS * 1 1 INS 2 * VOR {1,2}, IDME {0,1 {(1,2 I SENSOR RT * {1,2,3,4,5 I FTMP COMPUTER ~* * 0 FCMS * * * AFT RT * 1 * WEATHER #............................ # trajectory set number 195 (out of 218) * I 1 RADAR ALTIMETER * 2 {1,2} I DIGITAL AIR DATA 0 1 AHRS *1 1 INS * 2 * VOR * 0 DME * * 0 SENSOR RT * * * FTMP COMPUTER * * * FCMS ~ * * AFT RT * 1 * WEATHER:............................ # # trajectory set number 196 (out of 218) # * 1 1 IRADAR ALTIMETER * 2 {1,2} I DIGITAL AIR DATA * 0 1 AHRS * 1 1 INS *2 * VOR *0 * DME * * {1,2) I SENSOR RT * * 0 FTMP COMPUTER * * * FCMS * * * AFT RT * 1 * WEATHER # # trajectory set number 197 (out of 218)

367 ~ 1 1 I RADAR ALTIMETER * 2 (1,2) I DIGITAL AIR DATA 0 1 AHRS ~. 1 1 INS * 2 * VOR ~ 0 * DME * (1,2) I SENSOR RT ~* * {1,2,3,4,5} I FTMP COMPUTER ~* * 0 FCMS * * * AFT RT * I WEATHER............................ # # trajectory set number 198 (out of 218) *I 1 I RADAR ALTIMETER ~ 2 {1,2} IDIGITAL AIR DATA ~ 0 1 AHRS * 1 0 INS 2 {1,2} VOR (1,2) * DME * 2 {1,2} SENSOR RT * 5 {1,2,3,4,5) 1 FTMP COMPUTER * 2 {1,2} FCMS 2 0 AFT RT ~ I * WEATHER............................ # trajectory set number 199 (out of 218) 1 1 I{RADAR ALTIMETER 2 {1,2} I DIGITAL AIR DATA * 1 1 AHRS ~* * 0 INS 2 * VOR ~* * * DME * * 0 SENSOR RT ~ * * FTMP COMPUTER ~* * * FCMS *~ * * AFT RT ~ 1 * WEATHER #............................ # # trajectory act number 200 (out of 218) *1 I I RADAR ALTIMETER 2 (1,2) { DIGITAL AIR DATA * 1 1 AHRS * * I INS 2 * VOR {1,2} * DME 2 0 ISENSOR RT ~ (0,1,2,3,4} * IFTMP COMPUTER * * * FCMS * * * AFT RT * I * WEATHER #............................ # # trajectory set number 201 (out of 218) 1 1 I RADAR ALTIMETER * 2 {1,2) DIGITAL AIR DATA * 1 1 AHRS ~* * 1 INS * 2 * VOR {1,2) * DME 2 0 SENSOR RT 5 5 * FTMP COMPUTER (0,1}. { FCMS * * * AFT RT * 1' WEATHER #............................ # # trajectory set number 202 (out of 218),,

368 *1 1 iRADAR ALTIMETER 2 (1,2) I DIGITAL AIR DATA * 1 1 AllRS ~* * 1 INS * 2 * VOR (1,2} I DME * 2 0 SENSOR RT ~ 5 * FTMP COMPUTER * 2 * FCMS {0,1} * IAFT RT * 1 * WEATHER............................ # # trajectory set number 203 (out of 218) I 1 IRADAR ALTIMETER *2 {1,2} 1 DIGITAL AIR DATA * I I AHRS * * 0 INS 2 * VOR * *~ * DME * * {1,2} SENSOR RT ~* * 0 FTMP COMPUTER * * * FCMS * * * AFT RT * 1 * WEATHER:#............................ # trajectory set number 204 (out of 218) * 1 1 I RADAR ALTIMETER * 2 {1,2} I DIGITAL AIR DATA * I 1 AHRS ~* * 1 INS * 2 * VOR * 1,2) } DME 2 {1,2} SENSOR RT 5 0 IFTMP COMPUTER (0,1) * IFCMS * * * AFT RT * 1' WEATHER #............................ # trajectory set number 205 (out of 218) * 1 1 IRADAR ALTIMETER * 2 {1,2} I DIGITAL AIR DATA * 1 1 AHRS * * 1 INS *2 * VOR * 1,2) * DME * 2 {1,2} SENSOR RT * 5 0 FTMP COMPUTER * 2 * FCMS *0,1} * IAFT RT * 1 * IWEATHER ^~~............................... # # trajectory act number 206 (out of 218) * 1 1 IRADAR ALTIMETER * 2 {1,2} I DIGITAL AIR DATA * 1 1 AHRS * *~ 0 INS *2 * VOR * * * DME DME * {1,2} ISENSOR RT * * {1,2,3,4,5} FTMP COMPUTER * 0 FCMS * * * AFT RT * 1 * WEATHER # t........................... # # trajectory set number 207 (out of 218)..

369 * I 1 IRADAR ALTIMETER * 2 {1,2) I DIGITAL AIR DATA * 1 1 AHRS ~ * 1 INS * 2 ~ VOR 1,2) DME 2 {1,2} SENSOR RT 5 1,2,3,4,5} FTMP COMPUTER ~ 2 0 IFCMS ({0,1} * IAFT RT * 1 * WEATHER #............................ # trajectory set number 208 (out of 218) ~ 1 1 RADAR ALTIMETER 2 {1,2} I DIGITAL AIR DATA ~ 1 1 AHRS * *~ 0 INS * 2 0 VOR {1,2} * DME * 2 {1,2} SENSOR RT 5 1,2,3,4,5} I FTMP COMPUTER * 2 {1,2) I FCMS * 2 AFT RT * 1 * WEATHER #............................ # trajectory set number 209 (out of 218) * 1 1 I RADAR ALTIMETER *2 {1,2} 1 DIGITAL AIR DATA * 1 AHRS * I INS *2 * VOR {(1,2) * DME * 2 {1,2} SENSOR RT' 5 (1,2,3,4,5} I FTMP COMPUTER (0,1} 0 I FCMS * 1' WEATHER............................ # trajectory set number 210 (out of 218) ~ 1 I IRADAR ALTIMETER * 2 {1,2} I DIGITAL AIR DATA * 1 1 AHRS * * 1 INS 2 * VOR {1,2} I DME * 2 {1,2} SENSOR RT {*0,1,2,3,4} 0 FTMP COMPUTER ~* * ~ FCMS ~* * * AFT RT * 1 * WEATHER #............................ # # trajectory set number 211 (out of 218) 1 1 I RADAR ALTIMETER *2 {1,2} 1 DIGITAL AIR DATA 1 1 AHRS * * I INS * 2 ~ VOR (1,2) * DME * 2 {1,2} SENSOR RT * 0,1,2,3,4} {1,2,3,4,5} FTMP COMPUTER ~* * 0 FCMS * * * AFT RT ~ 1 * WEATHER #............................ # # trajectory set number 212 (out of 218),1

370 ~1 1 IRADAR ALTIMETER * 2 {1,2} I DIGITAL AIR DATA ~* 1 1 AHRS * * 1 INS 2 * VOR 1,2} * DME * 0,1) 0 SENSOR RT ~* * ~ FTMP COMPUTER * *. FCMS * * * AFT RT * 1 * WEATHER #............................ # trajectory set number 213 (out of 218) # 1 1 I RADAR ALTIMETER * 2 (1,2} DIGITAL AIR DATA *I 1 1 AHRS * ~ * 1 INS 2 * VOR {1,2}, I DME 0,'1 {1,2} I SENSOR RT * * 0 FTMP COMPUTER * * * FCMS * * * AFT RT 1 I * WEATHER............................ # trajectory set number 214 (out of 218) *1 1 IRADAR ALTIMETER 2 (1,2) DIGITAL AIR DATA 1 1 AHRS ~* 1 INS 2 * VOR *{1,2} I DME 0,1} {1,2} I SENSOR RT * {1,2,3,4,5} I FTMP COMPUTER * * 0 FCMS ~* * AFT RT I * WEATHER............................ # trajectory aet number 215 (out of 218) # I 1 I RADAR ALTIMETER * 2 {1,2} DIGITAL AIR DATA ~ 1 1 AHRS * * 1 INS 2 * VOR 0 * DME * * 0 SENSOR RT * * * FTMP COMPUTER * * * FCMS ~ * * AFT RT * 1 * WEATHER............................ # trajectory set number 216 (out of 218) *1 1 1 RADAR ALTIMETER 2 {1,2} DIGITAL AIR DATA * 1 1 AHRS * * I INS 2 * VOR 0 * DME * {1,2} I SENSOR RT ~ * 0 FTMP COMPUTER * * * FCMS ~* * * AFT RT * 1 * WEATHER #............................ # # trajectory set number 217 (out of 218),.

371 * 1 1 IRADAR ALTIMETER * 2 {1,2} DIGITAL AIR DATA * 1 1 AHRS * * I 1 INS * 2 * VOR ~* 0 * DME * * (1,2 I SENSOR RT * * (1,2,3,4,5} I FTMP COMPUTER * * 0 FCMS * * * AFT RT * 1 * WEATHER............................ # trajectory set number 218 (out of 218) ~ I 1 I RADAR ALTIMETER 2 {1,2) I DIGITAL AIR DATA * 1 1 AHRS * * 0 INS * 2 {1,2} VOR {1,2} * DME 2 {1,2} SENSOR RT *5 {1,2,3,4,5} IFTMP COMPUTER 2 {1,2} I FCMS 2 0 AFT RT * 1 * WEATHER

372

APPENDIX K

Closed-form solution of a simple 3-state, acyclic, nonrecoverable process

This appendix contains the symbolic derivation of the distribution of reward for a simple Markovian three-state, acyclic, nonrecoverable process, where r(u_2) > r(u_1) > r(u_0). See Section 5.3.9.2 [Three-State Markovian Acyclic Process] and Fig. 5.4. The regions C_{y,i,j} are explicitly stated, and the appropriate cases of Theorem 5.25 iii) are indicated in brackets. For the example presented in this appendix, we carry the full notation throughout the derivation. This is done, in part, to emphasize the complexity of the integrations. As the intricacy of the equations in this appendix shows, such a "brute force" technique quickly becomes difficult to follow. Therefore, in Appendix L [Closed-form solution of a simple 4-state, acyclic, nonrecoverable process], we use intermediate variables to denote partial results.

Let y \in [r(u_0)t, r(u_1)t) (thus, l = 1):

i = 1 = n - 1 (i.e., checking for 1-resolvability):

j = n = 2:

C_{y,1,2} = \left( \frac{y - r(u_1)t}{r(u_2) - r(u_1)}, \frac{y - r(u_0)t}{r(u_2) - r(u_0)} \right]   [case b-i)]   (K.1)

and since y < r(u_1)t:

C_{y,1,2} = \left[ 0, \frac{y - r(u_0)t}{r(u_2) - r(u_0)} \right]   (K.2)

373

j = 1 = i:

C_{y,1,1} = \left[ 0, \frac{y - r(u_0)t - (r(u_2) - r(u_0)) v_2}{r(u_1) - r(u_0)} \right]   [case b-iii)]   (K.3)

j = 0:

C_{y,1,0} = [0, \infty)   [case b-iv)]   (K.4)

F_Y(y) = \int_{C_{y,1,2}} \int_{C_{y,1,1}} f_2(v_2) f_1(v_1) \, dv_1 \, dv_2   (K.5)

= \int_0^{\frac{y - r(u_0)t}{r(u_2) - r(u_0)}} \int_0^{\frac{y - r(u_0)t - (r(u_2) - r(u_0)) v_2}{r(u_1) - r(u_0)}} \lambda_2 e^{-\lambda_2 v_2} \, \lambda_1 e^{-\lambda_1 v_1} \, dv_1 \, dv_2   (K.6)

= \int_0^{\frac{y - r(u_0)t}{r(u_2) - r(u_0)}} \lambda_2 e^{-\lambda_2 v_2} \left[ 1 - e^{-\lambda_1 \frac{y - r(u_0)t - (r(u_2) - r(u_0)) v_2}{r(u_1) - r(u_0)}} \right] dv_2   (K.7)

= \left[ -e^{-\lambda_2 v_2} + \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))} \, e^{-\frac{\lambda_1 (y - r(u_0)t)}{r(u_1) - r(u_0)}} \, e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))] v_2}{r(u_1) - r(u_0)}} \right]_0^{\frac{y - r(u_0)t}{r(u_2) - r(u_0)}}   (K.8)

374

If y \in [r(u_0)t, r(u_1)t):

F_Y(y) = 1 - e^{-\frac{\lambda_2 (y - r(u_0)t)}{r(u_2) - r(u_0)}} + \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_1 (r(u_2) - r(u_0)) - \lambda_2 (r(u_1) - r(u_0))} \, e^{-\frac{\lambda_1 (y - r(u_0)t)}{r(u_1) - r(u_0)}} \left[ 1 - e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_0)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_0))}} \right]   (K.9)

Let y \in [r(u_1)t, r(u_2)t) (thus, l = 2):

i = 2 = n (i.e., checking for 2-resolvability):

j = n = 2:

C_{y,2,2} = \left[ 0, \frac{y - r(u_1)t}{r(u_2) - r(u_1)} \right)   [case a-i)]   (K.10)

j = n - 1 = 1:

C_{y,2,1} = [0, \infty)   [case a-ii)]   (K.11)

j = 0:

C_{y,2,0} = [0, \infty)   [case a-ii)]   (K.12)

\int_{C_{y,2,2}} f_2(v_2) \, dv_2 = \int_0^{\frac{y - r(u_1)t}{r(u_2) - r(u_1)}} \lambda_2 e^{-\lambda_2 v_2} \, dv_2   (K.13)

375

\int_{C_{y,2,2}} f_2(v_2) \, dv_2 = 1 - e^{-\frac{\lambda_2 (y - r(u_1)t)}{r(u_2) - r(u_1)}}   (K.14)

i = 1 = n - 1 (i.e., checking for 1-resolvability):

j = 2:

C_{y,1,2} = \left( \frac{y - r(u_1)t}{r(u_2) - r(u_1)}, \frac{y - r(u_0)t}{r(u_2) - r(u_0)} \right]   [case b-i)]   (K.15)

and since y \ge r(u_1)t, both endpoints are nonnegative:

C_{y,1,2} = \left( \frac{y - r(u_1)t}{r(u_2) - r(u_1)}, \frac{y - r(u_0)t}{r(u_2) - r(u_0)} \right]   (K.16)

j = 1:

C_{y,1,1} = \left[ \frac{y - r(u_1)t - (r(u_2) - r(u_1)) v_2}{r(u_1) - r(u_0)} \vee 0, \; \frac{y - r(u_0)t - (r(u_2) - r(u_0)) v_2}{r(u_1) - r(u_0)} \right]   [case b-ii)]   (K.17)

376

We know from Eq. K.16 that v_2 > \frac{y - r(u_1)t}{r(u_2) - r(u_1)}, so that

\frac{y - r(u_1)t - (r(u_2) - r(u_1)) v_2}{r(u_1) - r(u_0)} < \frac{y - r(u_1)t - (r(u_2) - r(u_1)) \frac{y - r(u_1)t}{r(u_2) - r(u_1)}}{r(u_1) - r(u_0)} = 0,   (K.18)

so

C_{y,1,1} = \left[ 0, \frac{y - r(u_0)t - (r(u_2) - r(u_0)) v_2}{r(u_1) - r(u_0)} \right]   (K.19)

j = 0:

C_{y,1,0} = [0, \infty)   [case b-iii)]   (K.20)

\int_{C_{y,1,2}} \int_{C_{y,1,1}} f_2(v_2) f_1(v_1) \, dv_1 \, dv_2   (K.21)

377

= \int_{\frac{y - r(u_1)t}{r(u_2) - r(u_1)}}^{\frac{y - r(u_0)t}{r(u_2) - r(u_0)}} \int_0^{\frac{y - r(u_0)t - (r(u_2) - r(u_0)) v_2}{r(u_1) - r(u_0)}} \lambda_2 e^{-\lambda_2 v_2} \, \lambda_1 e^{-\lambda_1 v_1} \, dv_1 \, dv_2   (K.22)

= \int_{\frac{y - r(u_1)t}{r(u_2) - r(u_1)}}^{\frac{y - r(u_0)t}{r(u_2) - r(u_0)}} \lambda_2 e^{-\lambda_2 v_2} \left[ 1 - e^{-\lambda_1 \frac{y - r(u_0)t - (r(u_2) - r(u_0)) v_2}{r(u_1) - r(u_0)}} \right] dv_2   (K.23)

= \left[ -e^{-\lambda_2 v_2} + \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))} \, e^{-\frac{\lambda_1 (y - r(u_0)t)}{r(u_1) - r(u_0)}} \, e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))] v_2}{r(u_1) - r(u_0)}} \right]_{\frac{y - r(u_1)t}{r(u_2) - r(u_1)}}^{\frac{y - r(u_0)t}{r(u_2) - r(u_0)}}   (K.24)

= e^{-\frac{\lambda_2 (y - r(u_1)t)}{r(u_2) - r(u_1)}} - e^{-\frac{\lambda_2 (y - r(u_0)t)}{r(u_2) - r(u_0)}} + \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))} \, e^{-\frac{\lambda_1 (y - r(u_0)t)}{r(u_1) - r(u_0)}} \left[ e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_0)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_0))}} - e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_1)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_1))}} \right]   (K.25)

378

\int_{C_{y,1,2}} \int_{C_{y,1,1}} f_2(v_2) f_1(v_1) \, dv_1 \, dv_2 = e^{-\frac{\lambda_2 (y - r(u_1)t)}{r(u_2) - r(u_1)}} - e^{-\frac{\lambda_2 (y - r(u_0)t)}{r(u_2) - r(u_0)}} + \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))} \, e^{-\frac{\lambda_1 (y - r(u_0)t)}{r(u_1) - r(u_0)}} \left[ e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_0)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_0))}} - e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_1)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_1))}} \right]   (K.26)

Therefore,

F_Y(y) = \int_{C_{y,2,2}} f_2(v_2) \, dv_2 + \int_{C_{y,1,2}} \int_{C_{y,1,1}} f_2(v_2) f_1(v_1) \, dv_1 \, dv_2   (K.27)

= 1 - e^{-\frac{\lambda_2 (y - r(u_1)t)}{r(u_2) - r(u_1)}} + e^{-\frac{\lambda_2 (y - r(u_1)t)}{r(u_2) - r(u_1)}} - e^{-\frac{\lambda_2 (y - r(u_0)t)}{r(u_2) - r(u_0)}} + \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))} \, e^{-\frac{\lambda_1 (y - r(u_0)t)}{r(u_1) - r(u_0)}} \left[ e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_0)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_0))}} - e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_1)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_1))}} \right]   (K.28)

379

If y \in [r(u_1)t, r(u_2)t):

F_Y(y) = 1 - e^{-\frac{\lambda_2 (y - r(u_0)t)}{r(u_2) - r(u_0)}} + \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))} \, e^{-\frac{\lambda_1 (y - r(u_0)t)}{r(u_1) - r(u_0)}} \left[ e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_0)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_0))}} - e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_1)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_1))}} \right]   (K.29)

For y \ge r(u_2)t:

If y \in [r(u_2)t, \infty):

F_Y(y) = 1   (K.30)

In summary, if y \in [r(u_0)t, r(u_1)t):

F_Y(y) = 1 - e^{-\frac{\lambda_2 (y - r(u_0)t)}{r(u_2) - r(u_0)}} + \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_1 (r(u_2) - r(u_0)) - \lambda_2 (r(u_1) - r(u_0))} \, e^{-\frac{\lambda_1 (y - r(u_0)t)}{r(u_1) - r(u_0)}} \left[ 1 - e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_0)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_0))}} \right]   (K.31)

if y \in [r(u_1)t, r(u_2)t):
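The piecewise distribution above can be sanity-checked numerically. The sketch below assumes specific exit rates, reward rates, and a utilization period (none of these values come from the text; all names are illustrative), evaluates the closed forms of Eqs. (K.9) and (K.29)-(K.30), and compares them against direct simulation of the three-state chain u2 -> u1 -> u0:

```python
import math
import random

# Assumed parameters for illustration only (not from the dissertation).
lam1, lam2 = 0.7, 1.3          # exit rates of states u1 and u2
r0, r1, r2 = 0.5, 1.5, 3.0     # reward rates, r0 < r1 < r2
t = 2.0                        # utilization period [0, t]

mu = (lam2 * (r1 - r0) - lam1 * (r2 - r0)) / (r1 - r0)
C = lam2 * (r1 - r0) / (lam1 * (r2 - r0) - lam2 * (r1 - r0))   # = -lam2/mu

def F(y):
    """Distribution of reward Y per Eqs. (K.9), (K.29), (K.30)."""
    if y < r0 * t:
        return 0.0
    if y >= r2 * t:
        return 1.0
    b = (y - r0 * t) / (r2 - r0)          # upper limit on the u2 sojourn v2
    c = (y - r0 * t) / (r1 - r0)
    if y < r1 * t:                        # Eq. (K.9)
        return 1 - math.exp(-lam2 * b) + C * math.exp(-lam1 * c) * (1 - math.exp(-mu * b))
    b1 = (y - r1 * t) / (r2 - r1)         # Eq. (K.29)
    return 1 - math.exp(-lam2 * b) + C * math.exp(-lam1 * c) * (
        math.exp(-mu * b1) - math.exp(-mu * b))

def sample_reward():
    """Simulate u2 -> u1 -> u0 over [0, t] and accumulate reward."""
    v2 = random.expovariate(lam2)         # sojourn in u2
    if v2 >= t:
        return r2 * t
    v1 = random.expovariate(lam1)         # sojourn in u1
    if v2 + v1 >= t:
        return r2 * v2 + r1 * (t - v2)
    return r2 * v2 + r1 * v1 + r0 * (t - v2 - v1)

random.seed(1)
n = 200_000
for y in (2.0, 4.0):                      # one point in each nontrivial interval
    emp = sum(sample_reward() <= y for _ in range(n)) / n
    print(abs(F(y) - emp) < 0.01)         # closed form agrees with simulation
```

Both comparisons print True; the same check can be repeated for any y once the parameters are fixed, since the simulation exercises exactly the trajectory classes that the regions C_{y,i,j} enumerate.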

380

F_Y(y) = 1 - e^{-\frac{\lambda_2 (y - r(u_0)t)}{r(u_2) - r(u_0)}} + \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))} \, e^{-\frac{\lambda_1 (y - r(u_0)t)}{r(u_1) - r(u_0)}} \left[ e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_0)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_0))}} - e^{-\frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))](y - r(u_1)t)}{(r(u_1) - r(u_0))(r(u_2) - r(u_1))}} \right]   (K.32)

if y \in [r(u_2)t, \infty):

F_Y(y) = 1   (K.33)

Finally, note that the above are sums of exponentials:

if y \in [r(u_0)t, r(u_1)t):

F_Y(y) = 1 - B_1 e^{-A_1 y} + B_2 e^{-A_2 y} - B_3 e^{-A_3 y}   (K.34)

if y \in [r(u_1)t, r(u_2)t):

F_Y(y) = 1 - B_1 e^{-A_1 y} + B_4 e^{-A_4 y} - B_3 e^{-A_3 y}   (K.35)

if y \in [r(u_2)t, \infty):

F_Y(y) = 1   (K.36)

where

A_1 = \frac{\lambda_2}{r(u_2) - r(u_0)}   (K.37)

381

A_2 = \frac{\lambda_1}{r(u_1) - r(u_0)}   (K.38)

A_3 = \frac{\lambda_1}{r(u_1) - r(u_0)} + \frac{\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))}{(r(u_1) - r(u_0))(r(u_2) - r(u_0))}   (K.39)

A_4 = \frac{\lambda_1}{r(u_1) - r(u_0)} + \frac{\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))}{(r(u_1) - r(u_0))(r(u_2) - r(u_1))}   (K.40)

B_1 = e^{\frac{\lambda_2 r(u_0) t}{r(u_2) - r(u_0)}}   (K.41)

B_2 = \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_1 (r(u_2) - r(u_0)) - \lambda_2 (r(u_1) - r(u_0))} \, e^{\frac{\lambda_1 r(u_0) t}{r(u_1) - r(u_0)}}   (K.42)

B_3 = \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_1 (r(u_2) - r(u_0)) - \lambda_2 (r(u_1) - r(u_0))} \, e^{\frac{\lambda_1 r(u_0) t}{r(u_1) - r(u_0)} + \frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))] r(u_0) t}{(r(u_1) - r(u_0))(r(u_2) - r(u_0))}}   (K.43)

B_4 = \frac{\lambda_2 (r(u_1) - r(u_0))}{\lambda_1 (r(u_2) - r(u_0)) - \lambda_2 (r(u_1) - r(u_0))} \, e^{\frac{\lambda_1 r(u_0) t}{r(u_1) - r(u_0)} + \frac{[\lambda_2 (r(u_1) - r(u_0)) - \lambda_1 (r(u_2) - r(u_0))] r(u_1) t}{(r(u_1) - r(u_0))(r(u_2) - r(u_1))}}   (K.44)
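That the coefficients (K.37)-(K.44) really reproduce the piecewise forms (K.31)-(K.33) can be verified mechanically. A minimal sketch, again under assumed parameter values (all numbers here are illustrative, not from the text):

```python
import math

# Assumed parameters for illustration only.
lam1, lam2 = 0.7, 1.3
r0, r1, r2 = 0.5, 1.5, 3.0
t = 2.0

mu = (lam2 * (r1 - r0) - lam1 * (r2 - r0)) / (r1 - r0)
C = lam2 * (r1 - r0) / (lam1 * (r2 - r0) - lam2 * (r1 - r0))

# Decay rates, Eqs. (K.37)-(K.40).
A1 = lam2 / (r2 - r0)
A2 = lam1 / (r1 - r0)
A3 = A2 + mu / (r2 - r0)     # algebraically equal to A1
A4 = A2 + mu / (r2 - r1)
# Constant coefficients, Eqs. (K.41)-(K.44).
B1 = math.exp(A1 * r0 * t)
B2 = C * math.exp(A2 * r0 * t)
B3 = C * math.exp(A3 * r0 * t)
B4 = C * math.exp(A2 * r0 * t + (A4 - A2) * r1 * t)

def F_exp(y):
    """Sum-of-exponentials form, Eqs. (K.34)-(K.36)."""
    if y < r0 * t:
        return 0.0
    if y < r1 * t:
        return 1 - B1 * math.exp(-A1 * y) + B2 * math.exp(-A2 * y) - B3 * math.exp(-A3 * y)
    if y < r2 * t:
        return 1 - B1 * math.exp(-A1 * y) + B4 * math.exp(-A4 * y) - B3 * math.exp(-A3 * y)
    return 1.0

def F_direct(y):
    """Piecewise form assembled directly from Eqs. (K.31)-(K.33)."""
    if y < r0 * t:
        return 0.0
    if y >= r2 * t:
        return 1.0
    b = (y - r0 * t) / (r2 - r0)
    c = (y - r0 * t) / (r1 - r0)
    if y < r1 * t:
        return 1 - math.exp(-lam2 * b) + C * math.exp(-lam1 * c) * (1 - math.exp(-mu * b))
    b1 = (y - r1 * t) / (r2 - r1)
    return 1 - math.exp(-lam2 * b) + C * math.exp(-lam1 * c) * (
        math.exp(-mu * b1) - math.exp(-mu * b))

grid = [r0 * t + 0.05 * k for k in range(101)]      # spans all three intervals
ok = all(abs(F_exp(y) - F_direct(y)) < 1e-9 for y in grid)
print(ok)                                           # the two forms coincide
print(abs(F_direct(r2 * t - 1e-9) - (1 - math.exp(-lam2 * t))) < 1e-6)
```

Both lines print True. Two observations follow from the check: A_3 simplifies algebraically to \lambda_2/(r(u_2) - r(u_0)) = A_1 (the derivation leaves it unsimplified), and the defect 1 - F_Y(r(u_2)t^-) = e^{-\lambda_2 t} is exactly the probability that no transition occurs during [0, t], so the mass at y = r(u_2)t is accounted for.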

382

APPENDIX L

Closed-form solution of a simple 4-state, acyclic, nonrecoverable process

This appendix contains the symbolic derivation of the distribution of reward for a simple Markovian four-state, acyclic, nonrecoverable process, where r(u_3) > r(u_2) > r(u_1) > r(u_0). See Section 5.3.9.3 [Four-State Markovian Acyclic Process] and Fig. 5.5. The regions C_{y,i,j} are explicitly stated and the appropriate cases of Theorem 5.25 are indicated in brackets. To help simplify the manipulations, we use the notation

K_{ij}(y) = \frac{y - r(u_j)t}{r(u_i) - r(u_j)}   (L.1)

and

\rho_{ijk} = \frac{r(u_k) - r(u_i)}{r(u_j) - r(u_i)}.   (L.2)

Let y \in [r(u_0)t, r(u_1)t) (thus, l = 1):

i = 1 = n - 2 (i.e., checking for 1-resolvability):

j = n = 3:

C_{y,1,3} = \left( \frac{y - r(u_2)t}{r(u_3) - r(u_2)}, \frac{y - r(u_0)t}{r(u_3) - r(u_0)} \right]   [case b-i)]   (L.3)

and since y < r(u_2)t:

C_{y,1,3} = \left[ 0, \frac{y - r(u_0)t}{r(u_3) - r(u_0)} \right] = [0, K_{30}(y)]   (L.4)
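Before wading into the symbolic integration, the shorthand of Eqs. (L.1)-(L.2) can be exercised numerically. The sketch below (helper names and all parameter values are ours, purely for illustration) evaluates the interval-1 probability by a midpoint Riemann sum over the region with v_3 in [0, K_{30}(y)], v_2 in [0, K_{20}(y) - \rho_{023} v_3], and v_1 in [0, K_{10}(y) - \rho_{012} v_2 - \rho_{013} v_3], doing the innermost v_1 integral in closed form, and cross-checks the result against simulation of the four-state chain u3 -> u2 -> u1 -> u0:

```python
import math
import random

# Assumed parameters for illustration only (not from the dissertation).
lam1, lam2, lam3 = 0.5, 0.9, 1.4     # exit rates of u1, u2, u3
r = [0.0, 1.0, 2.0, 4.0]             # reward rates r(u0) < ... < r(u3)
t = 1.5                              # utilization period

def K(i, j, y):
    """K_ij(y) = (y - r(u_j) t) / (r(u_i) - r(u_j)), Eq. (L.1)."""
    return (y - r[j] * t) / (r[i] - r[j])

def rho(i, j, k):
    """rho_ijk = (r(u_k) - r(u_i)) / (r(u_j) - r(u_i)), Eq. (L.2)."""
    return (r[k] - r[i]) / (r[j] - r[i])

y = 1.2                              # a point in [r(u0)t, r(u1)t) = [0, 1.5)

# Midpoint Riemann sum for the triple integral; innermost v1 integral closed-form.
n = 120
h3 = K(3, 0, y) / n
total = 0.0
for i3 in range(n):
    v3 = (i3 + 0.5) * h3
    top2 = K(2, 0, y) - rho(0, 2, 3) * v3
    h2 = top2 / n
    for i2 in range(n):
        v2 = (i2 + 0.5) * h2
        top1 = K(1, 0, y) - rho(0, 1, 2) * v2 - rho(0, 1, 3) * v3
        inner = 1.0 - math.exp(-lam1 * top1)
        total += lam3 * math.exp(-lam3 * v3) * lam2 * math.exp(-lam2 * v2) * inner * h2 * h3

# Cross-check against direct simulation of u3 -> u2 -> u1 -> u0 over [0, t].
random.seed(7)
m = 200_000
hits = 0
for _ in range(m):
    rem, reward = t, 0.0
    for state, lam in ((3, lam3), (2, lam2), (1, lam1)):
        s = random.expovariate(lam)      # sojourn in the current state
        if s >= rem:
            reward += r[state] * rem
            rem = 0.0
            break
        reward += r[state] * s
        rem -= s
    reward += r[0] * rem                 # any time left is spent in u0
    hits += reward <= y
print(abs(total - hits / m) < 0.01)      # quadrature agrees with simulation
```

The comparison prints True. The same region bounds reappear as the limits of the triple integral below, so this numeric check is a direct (if brute-force) stand-in for the symbolic evaluation that follows.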

383

j = 2:

C_{y,1,2} = [0, K_{20}(y) - \rho_{023} v_3]

j = 1:

C_{y,1,1} = [0, K_{10}(y) - \rho_{012} v_2 - \rho_{013} v_3]

j = 0:

C_{y,1,0} = [0, \infty)

F_Y(y) = \int_{C_{y,1,3}} \int_{C_{y,1,2}} \int_{C_{y,1,1}} f_3(v_3) f_2(v_2) f_1(v_1) \, dv_1 \, dv_2 \, dv_3

384

= \int_0^{K_{30}(y)} \int_0^{K_{20}(y) - \rho_{023} v_3} \int_0^{K_{10}(y) - \rho_{012} v_2 - \rho_{013} v_3} \lambda_3 e^{-\lambda_3 v_3} \, \lambda_2 e^{-\lambda_2 v_2} \, \lambda_1 e^{-\lambda_1 v_1} \, dv_1 \, dv_2 \, dv_3   (L.13)

= \int_0^{K_{30}(y)} \int_0^{K_{20}(y) - \rho_{023} v_3} \lambda_3 e^{-\lambda_3 v_3} \, \lambda_2 e^{-\lambda_2 v_2} \left[ 1 - e^{-\lambda_1 (K_{10}(y) - \rho_{012} v_2 - \rho_{013} v_3)} \right] dv_2 \, dv_3   (L.14)

= \int_0^{K_{30}(y)} \int_0^{K_{20}(y) - \rho_{023} v_3} \lambda_3 e^{-\lambda_3 v_3} \left[ \lambda_2 e^{-\lambda_2 v_2} - \lambda_2 e^{-\lambda_1 K_{10}(y)} e^{-\lambda_1 \rho_{013} v_3} e^{-(\lambda_2 - \lambda_1 \rho_{012}) v_2} \right] dv_2 \, dv_3   (L.15)

= \int_0^{K_{30}(y)} \int_0^{K_{20}(y) - \rho_{023} v_3} \lambda_3 e^{-\lambda_3 v_3} \left[ \lambda_2 e^{-\lambda_2 v_2} - B_1'(y) e^{-\lambda_1 \rho_{013} v_3} e^{-A_1' v_2} \right] dv_2 \, dv_3   (L.16)

where

A_1' = \lambda_2 - \lambda_1 \rho_{012}   (L.17)

and

B_1'(y) = \lambda_2 e^{-\lambda_1 K_{10}(y)}.   (L.18)

Then,

F_Y(y) = \int_0^{K_{30}(y)} \left\{ \lambda_3 e^{-\lambda_3 v_3} \left[ 1 - e^{-\lambda_2 (K_{20}(y) - \rho_{023} v_3)} \right] - \frac{B_1'(y) \lambda_3}{A_1'} e^{-(\lambda_3 + \lambda_1 \rho_{013}) v_3} \left[ 1 - e^{-A_1' (K_{20}(y) - \rho_{023} v_3)} \right] \right\} dv_3   (L.19)

= ∫_0^{K30(y)} λ3 e^{-λ3 v3} - λ3 e^{-λ3 v3 - λ2(K20(y) - ρ023 v3)} - (B1'(y)λ3/A1') e^{-λ3 v3 + λ1 ρ013 v3} + (B1'(y)λ3/A1') e^{-λ3 v3 + λ1 ρ013 v3 - A1'(K20(y) - ρ023 v3)} dv3    (L.20)

= ∫_0^{K30(y)} λ3 e^{-λ3 v3} - B2''(y) e^{-A2'' v3} - B3''(y) e^{-A3'' v3} + B4''(y) e^{-A4'' v3} dv3    (L.21)

where

A2'' = λ3 - λ2 ρ023    (L.22)

A3'' = λ3 - λ1 ρ013    (L.23)

A4'' = λ3 - λ1 ρ013 - A1' ρ023    (L.24)

B2''(y) = λ3 e^{-λ2 K20(y)}    (L.25)

B3''(y) = B1'(y)λ3/A1' = (λ2 λ3/A1') e^{-λ1 K10(y)}    (L.26)

and

B4''(y) = (B1'(y)λ3/A1') e^{-A1' K20(y)} = (λ2 λ3/A1') e^{-λ1 K10(y) - A1' K20(y)}    (L.27)

Thus,

F_Y(y) = [1 - e^{-λ3 K30(y)}] - (B2''(y)/A2'')[1 - e^{-A2'' K30(y)}] - (B3''(y)/A3'')[1 - e^{-A3'' K30(y)}] + (B4''(y)/A4'')[1 - e^{-A4'' K30(y)}]    (L.28)

= 1 - B1 e^{-A1 y} - B2 e^{-A2 y} + B3 e^{-A3 y} - B4 e^{-A4 y} + B5 e^{-A5 y} + B6 e^{-A6 y} - B7 e^{-A7 y}    (L.29)

where

A1 = λ3/(r(u3) - r(u0))    (L.30)

A2 = λ2/(r(u2) - r(u0))    (L.31)

A3 = λ2/(r(u2) - r(u0)) + A2''/(r(u3) - r(u0))    (L.32)

A4 = λ1/(r(u1) - r(u0))    (L.33)

A5 = λ1/(r(u1) - r(u0)) + A3''/(r(u3) - r(u0))    (L.34)

A6 = λ1/(r(u1) - r(u0)) + A1'/(r(u2) - r(u0))    (L.35)

A7 = λ1/(r(u1) - r(u0)) + A1'/(r(u2) - r(u0)) + A4''/(r(u3) - r(u0))    (L.36)

B1 = e^{λ3 r(u0)t/(r(u3) - r(u0))}    (L.37)

B2 = (λ3/A2'') e^{λ2 r(u0)t/(r(u2) - r(u0))}    (L.38)

B3 = (λ3/A2'') e^{λ2 r(u0)t/(r(u2) - r(u0)) + A2'' r(u0)t/(r(u3) - r(u0))}    (L.39)

B4 = (λ2 λ3/(A1' A3'')) e^{λ1 r(u0)t/(r(u1) - r(u0))}    (L.40)

B5 = (λ2 λ3/(A1' A3'')) e^{λ1 r(u0)t/(r(u1) - r(u0)) + A3'' r(u0)t/(r(u3) - r(u0))}    (L.41)

B6 = (λ2 λ3/(A1' A4'')) e^{λ1 r(u0)t/(r(u1) - r(u0)) + A1' r(u0)t/(r(u2) - r(u0))}    (L.42)

B7 = (λ2 λ3/(A1' A4'')) e^{λ1 r(u0)t/(r(u1) - r(u0)) + A1' r(u0)t/(r(u2) - r(u0)) + A4'' r(u0)t/(r(u3) - r(u0))}    (L.43)

So, F_Y(y) is the sum of exponentials:

F_Y(y) = 1 - B1 e^{-A1 y} - B2 e^{-A2 y} + B3 e^{-A3 y} - B4 e^{-A4 y} + B5 e^{-A5 y} + B6 e^{-A6 y} - B7 e^{-A7 y}    (L.44)

Let y ∈ [r(u1)t, r(u2)t) (thus, l = 2):

i = 1 = n-2 (i.e., checking for 1-resolvability):

j = n = 3:

C^1_{y,3} = ( (y - r(u2)t)/(r(u3) - r(u2)), (y - r(u0)t)/(r(u3) - r(u0)) ]   [case b-i)]    (L.45)

and since y < r(u2)t:

C^1_{y,3} = [0, (y - r(u0)t)/(r(u3) - r(u0))]    (L.46)

= [0, K30(y)]    (L.47)

j = n-1 = 2:

C^1_{y,2} = ( (y - r(u1)t - (r(u3) - r(u1))v3)/(r(u2) - r(u1)), (y - r(u0)t - (r(u3) - r(u0))v3)/(r(u2) - r(u0)) ]   [case b-ii)]    (L.48)

Consider the left-hand term above and assume it is less than 0. Then

(y - r(u1)t - (r(u3) - r(u1))v3)/(r(u2) - r(u1)) < 0   ⟹   v3 > (y - r(u1)t)/(r(u3) - r(u1))    (L.49)

but y ≥ r(u1)t, so no such v3 is possible. Thus, the above term is greater than or equal to zero, and so

C^1_{y,2} = ( (y - r(u1)t - (r(u3) - r(u1))v3)/(r(u2) - r(u1)), (y - r(u0)t - (r(u3) - r(u0))v3)/(r(u2) - r(u0)) ]    (L.50)

= ( K21(y) - ρ123 v3, K20(y) - ρ023 v3 ]    (L.51)

j = 1 = i:

C^1_{y,1} = [0, (y - r(u0)t - (r(u2) - r(u0))v2 - (r(u3) - r(u0))v3)/(r(u1) - r(u0))]   [case b-iii)]    (L.52)

= [0, K10(y) - ρ012 v2 - ρ013 v3]    (L.53)

j = 0:

C^1_{y,0} = ∞   [case b-iv)]    (L.54)

∫∫∫_{C^1_{y,3} C^1_{y,2} C^1_{y,1}} f3(v3) f2(v2) f1(v1) dv1 dv2 dv3    (L.55)

= ∫_0^{K30(y)} ∫_{K21(y)-ρ123 v3}^{K20(y)-ρ023 v3} ∫_0^{K10(y)-ρ012 v2-ρ013 v3} λ3 e^{-λ3 v3} λ2 e^{-λ2 v2} λ1 e^{-λ1 v1} dv1 dv2 dv3    (L.56)

= ∫_0^{K30(y)} ∫_{K21(y)-ρ123 v3}^{K20(y)-ρ023 v3} λ3 e^{-λ3 v3} λ2 e^{-λ2 v2} [1 - e^{-λ1(K10(y) - ρ012 v2 - ρ013 v3)}] dv2 dv3    (L.57)

= ∫_0^{K30(y)} ∫_{K21(y)-ρ123 v3}^{K20(y)-ρ023 v3} λ3 e^{-λ3 v3} [λ2 e^{-λ2 v2} - B1'(y) e^{λ1 ρ013 v3} e^{-A1' v2}] dv2 dv3    (L.58)

= ∫_0^{K30(y)} λ3 e^{-λ3 v3} [ ∫_{K21(y)-ρ123 v3}^{K20(y)-ρ023 v3} λ2 e^{-λ2 v2} - B1'(y) e^{λ1 ρ013 v3} e^{-A1' v2} dv2 ] dv3    (L.59)

= ∫_0^{K30(y)} λ3 e^{-λ3 v3} { e^{-λ2(K21(y)-ρ123 v3)} - e^{-λ2(K20(y)-ρ023 v3)} - (B1'(y)/A1') e^{λ1 ρ013 v3} [ e^{-A1'(K21(y)-ρ123 v3)} - e^{-A1'(K20(y)-ρ023 v3)} ] } dv3    (L.60)

= ∫_0^{K30(y)} λ3 e^{-λ3 v3 - λ2(K21(y)-ρ123 v3)} - λ3 e^{-λ3 v3 - λ2(K20(y)-ρ023 v3)} - (B1'(y)λ3/A1') e^{-λ3 v3 + λ1 ρ013 v3 - A1'(K21(y)-ρ123 v3)} + (B1'(y)λ3/A1') e^{-λ3 v3 + λ1 ρ013 v3 - A1'(K20(y)-ρ023 v3)} dv3    (L.61)

= ∫_0^{K30(y)} B6'(y) e^{-A6' v3} - B2''(y) e^{-A2'' v3} - B8'(y) e^{-A8' v3} + B4''(y) e^{-A4'' v3} dv3    (L.62)

where

A6' = λ3 - λ2 ρ123    (L.63)

A8' = λ3 - λ1 ρ013 - A1' ρ123    (L.64)

B6'(y) = λ3 e^{-λ2 K21(y)}    (L.65)

and

B8'(y) = (B1'(y)λ3/A1') e^{-A1' K21(y)} = (λ2 λ3/A1') e^{-λ1 K10(y) - A1' K21(y)}    (L.66)

Thus,

∫∫∫_{C^1_{y,3} C^1_{y,2} C^1_{y,1}} f3(v3) f2(v2) f1(v1) dv1 dv2 dv3

= (B6'(y)/A6')[1 - e^{-A6' K30(y)}] - (B2''(y)/A2'')[1 - e^{-A2'' K30(y)}] - (B8'(y)/A8')[1 - e^{-A8' K30(y)}] + (B4''(y)/A4'')[1 - e^{-A4'' K30(y)}]    (L.67)

= B8 e^{-A8 y} - B9 e^{-A9 y} - B2 e^{-A2 y} + B3 e^{-A3 y} - B10 e^{-A10 y} + B11 e^{-A11 y} + B6 e^{-A6 y} - B7 e^{-A7 y}    (L.68)

where

A8 = λ2/(r(u2) - r(u1))    (L.69)

A9 = λ2/(r(u2) - r(u1)) + A6'/(r(u3) - r(u0))    (L.70)

A10 = λ1/(r(u1) - r(u0)) + A1'/(r(u2) - r(u1))    (L.71)

A11 = λ1/(r(u1) - r(u0)) + A1'/(r(u2) - r(u1)) + A8'/(r(u3) - r(u0))    (L.72)

B8 = (λ3/A6') e^{λ2 r(u1)t/(r(u2) - r(u1))}    (L.73)

B9 = (λ3/A6') e^{λ2 r(u1)t/(r(u2) - r(u1)) + A6' r(u0)t/(r(u3) - r(u0))}    (L.74)

B10 = (λ2 λ3/(A1' A8')) e^{λ1 r(u0)t/(r(u1) - r(u0)) + A1' r(u1)t/(r(u2) - r(u1))}    (L.75)

B11 = (λ2 λ3/(A1' A8')) e^{λ1 r(u0)t/(r(u1) - r(u0)) + A1' r(u1)t/(r(u2) - r(u1)) + A8' r(u0)t/(r(u3) - r(u0))}    (L.76)

So, ∫∫∫_{C^1_{y,3} C^1_{y,2} C^1_{y,1}} f3(v3) f2(v2) f1(v1) dv1 dv2 dv3 is the sum of exponentials:

= B8 e^{-A8 y} - B9 e^{-A9 y} - B2 e^{-A2 y} + B3 e^{-A3 y} - B10 e^{-A10 y} + B11 e^{-A11 y} + B6 e^{-A6 y} - B7 e^{-A7 y}    (L.77)

i = 2 = n-1 (i.e., checking for 2-resolvability):

j = n = 3:

C^2_{y,3} = ( (y - r(u2)t)/(r(u3) - r(u2)), (y - r(u1)t)/(r(u3) - r(u1)) ]   [case b-i)]    (L.78)

and since y < r(u2)t:

C^2_{y,3} = [0, (y - r(u1)t)/(r(u3) - r(u1))]    (L.79)

= [0, K31(y)]    (L.80)

j = n-1 = 2:

C^2_{y,2} = [0, (y - r(u1)t - (r(u3) - r(u1))v3)/(r(u2) - r(u1))]   [case b-iii)]    (L.81)

= [0, K21(y) - ρ123 v3]    (L.82)

j = 1:

C^2_{y,1} = [0, ∞)   [case b-iv)]    (L.83)

j = 0:

C^2_{y,0} = ∞   [case b-iv)]    (L.84)

∫∫_{C^2_{y,3} C^2_{y,2}} f3(v3) f2(v2) dv2 dv3 = ∫_0^{K31(y)} ∫_0^{K21(y)-ρ123 v3} λ3 e^{-λ3 v3} λ2 e^{-λ2 v2} dv2 dv3    (L.85)

= ∫_0^{K31(y)} λ3 e^{-λ3 v3} [1 - e^{-λ2(K21(y) - ρ123 v3)}] dv3    (L.86)

= ∫_0^{K31(y)} λ3 e^{-λ3 v3} - B6'(y) e^{-A6' v3} dv3    (L.87)

where, as in Eqs. L.63 and L.65,

A6' = λ3 - λ2 ρ123    (L.88)

B6'(y) = λ3 e^{-λ2 K21(y)}    (L.89)

Then,

∫∫_{C^2_{y,3} C^2_{y,2}} f3(v3) f2(v2) dv2 dv3 = [1 - e^{-λ3 K31(y)}] - (B6'(y)/A6')[1 - e^{-A6' K31(y)}]    (L.90)

= 1 - B12 e^{-A12 y} - B13 e^{-A13 y} + B14 e^{-A14 y}    (L.91)

where

A12 = λ3/(r(u3) - r(u1))    (L.92)

A13 = λ2/(r(u2) - r(u1))    (L.93)

A14 = λ2/(r(u2) - r(u1)) + A6'/(r(u3) - r(u1))    (L.94)

B12 = e^{λ3 r(u1)t/(r(u3) - r(u1))}    (L.95)

B13 = (λ3/A6') e^{λ2 r(u1)t/(r(u2) - r(u1))}    (L.96)

B14 = (λ3/A6') e^{λ2 r(u1)t/(r(u2) - r(u1)) + A6' r(u1)t/(r(u3) - r(u1))}    (L.97)

So, ∫∫_{C^2_{y,3} C^2_{y,2}} f3(v3) f2(v2) dv2 dv3 is the sum of exponentials:

= 1 - B12 e^{-A12 y} - B13 e^{-A13 y} + B14 e^{-A14 y}    (L.98)

F_Y(y) = 1 - B2 e^{-A2 y} + B3 e^{-A3 y} + B6 e^{-A6 y} - B7 e^{-A7 y} + B8 e^{-A8 y} - B9 e^{-A9 y} - B10 e^{-A10 y} + B11 e^{-A11 y} - B12 e^{-A12 y} - B13 e^{-A13 y} + B14 e^{-A14 y}    (L.99)

Let y ∈ [r(u2)t, r(u3)t) (thus, l = 3):

i = 1 = n-2 (i.e., checking for 1-resolvability):

j = n = 3:

C^1_{y,3} = ( (y - r(u2)t)/(r(u3) - r(u2)), (y - r(u0)t)/(r(u3) - r(u0)) ]   [case b-i)]    (L.100)

and since y ≥ r(u2)t:

C^1_{y,3} = ( (y - r(u2)t)/(r(u3) - r(u2)), (y - r(u0)t)/(r(u3) - r(u0)) ]    (L.101)

= ( K32(y), K30(y) ]    (L.102)

j = n-1 = 2:

C^1_{y,2} = ( (y - r(u1)t - (r(u3) - r(u1))v3)/(r(u2) - r(u1)), (y - r(u0)t - (r(u3) - r(u0))v3)/(r(u2) - r(u0)) ]   [case b-ii)]    (L.103)

and from Eq. L.49,

C^1_{y,2} = ( (y - r(u1)t - (r(u3) - r(u1))v3)/(r(u2) - r(u1)), (y - r(u0)t - (r(u3) - r(u0))v3)/(r(u2) - r(u0)) ]    (L.104)

= ( K21(y) - ρ123 v3, K20(y) - ρ023 v3 ]    (L.105)

j = 1 = i:

C^1_{y,1} = [0, (y - r(u0)t - (r(u2) - r(u0))v2 - (r(u3) - r(u0))v3)/(r(u1) - r(u0))]   [case b-iii)]    (L.106)

= [0, K10(y) - ρ012 v2 - ρ013 v3]    (L.107)

j = 0:

C^1_{y,0} = ∞   [case b-iv)]    (L.108)

∫∫∫_{C^1_{y,3} C^1_{y,2} C^1_{y,1}} f3(v3) f2(v2) f1(v1) dv1 dv2 dv3    (L.109)

= ∫_{K32(y)}^{K30(y)} ∫_{K21(y)-ρ123 v3}^{K20(y)-ρ023 v3} ∫_0^{K10(y)-ρ012 v2-ρ013 v3} λ3 e^{-λ3 v3} λ2 e^{-λ2 v2} λ1 e^{-λ1 v1} dv1 dv2 dv3    (L.110)

and following the pattern of Eqs. L.57-L.62,

= ∫_{K32(y)}^{K30(y)} B6'(y) e^{-A6' v3} - B2''(y) e^{-A2'' v3} - B8'(y) e^{-A8' v3} + B4''(y) e^{-A4'' v3} dv3    (L.111)

= (B6'(y)/A6')[e^{-A6' K32(y)} - e^{-A6' K30(y)}] - (B2''(y)/A2'')[e^{-A2'' K32(y)} - e^{-A2'' K30(y)}] - (B8'(y)/A8')[e^{-A8' K32(y)} - e^{-A8' K30(y)}] + (B4''(y)/A4'')[e^{-A4'' K32(y)} - e^{-A4'' K30(y)}]    (L.112)

= B15 e^{-A15 y} - B9 e^{-A9 y} - B16 e^{-A16 y} + B3 e^{-A3 y} - B17 e^{-A17 y} + B11 e^{-A11 y} + B18 e^{-A18 y} - B7 e^{-A7 y}    (L.113)

where

A15 = λ2/(r(u2) - r(u1)) + A6'/(r(u3) - r(u2))    (L.114)

A16 = λ2/(r(u2) - r(u0)) + A2''/(r(u3) - r(u2))    (L.115)

A17 = λ1/(r(u1) - r(u0)) + A1'/(r(u2) - r(u1)) + A8'/(r(u3) - r(u2))    (L.116)

A18 = λ1/(r(u1) - r(u0)) + A1'/(r(u2) - r(u0)) + A4''/(r(u3) - r(u2))    (L.117)

B15 = (λ3/A6') e^{λ2 r(u1)t/(r(u2) - r(u1)) + A6' r(u2)t/(r(u3) - r(u2))}    (L.118)

B16 = (λ3/A2'') e^{λ2 r(u0)t/(r(u2) - r(u0)) + A2'' r(u2)t/(r(u3) - r(u2))}    (L.119)

B17 = (λ2 λ3/(A1' A8')) e^{λ1 r(u0)t/(r(u1) - r(u0)) + A1' r(u1)t/(r(u2) - r(u1)) + A8' r(u2)t/(r(u3) - r(u2))}    (L.120)

and

B18 = (λ2 λ3/(A1' A4'')) e^{λ1 r(u0)t/(r(u1) - r(u0)) + A1' r(u0)t/(r(u2) - r(u0)) + A4'' r(u2)t/(r(u3) - r(u2))}    (L.121)

So, ∫∫∫_{C^1_{y,3} C^1_{y,2} C^1_{y,1}} f3(v3) f2(v2) f1(v1) dv1 dv2 dv3 is the sum of exponentials:

= B15 e^{-A15 y} - B9 e^{-A9 y} - B16 e^{-A16 y} + B3 e^{-A3 y} - B17 e^{-A17 y} + B11 e^{-A11 y} + B18 e^{-A18 y} - B7 e^{-A7 y}    (L.122)

i = 2 = n-1 (i.e., checking for 2-resolvability):

j = n = 3:

C^2_{y,3} = ( (y - r(u2)t)/(r(u3) - r(u2)), (y - r(u1)t)/(r(u3) - r(u1)) ]   [case b-i)]    (L.123)

and since y ≥ r(u2)t:

C^2_{y,3} = ( (y - r(u2)t)/(r(u3) - r(u2)), (y - r(u1)t)/(r(u3) - r(u1)) ]    (L.124)

= ( K32(y), K31(y) ]    (L.125)

j = n-1 = 2:

C^2_{y,2} = [0, (y - r(u1)t - (r(u3) - r(u1))v3)/(r(u2) - r(u1))]   [case b-iii)]    (L.126)

= [0, K21(y) - ρ123 v3]    (L.127)

j = 1:

C^2_{y,1} = [0, ∞)   [case b-iv)]    (L.128)

j = 0:

C^2_{y,0} = ∞   [case b-iv)]    (L.129)

∫∫_{C^2_{y,3} C^2_{y,2}} f3(v3) f2(v2) dv2 dv3 = ∫_{K32(y)}^{K31(y)} ∫_0^{K21(y)-ρ123 v3} λ3 e^{-λ3 v3} λ2 e^{-λ2 v2} dv2 dv3    (L.130)

= ∫_{K32(y)}^{K31(y)} λ3 e^{-λ3 v3} [1 - e^{-λ2(K21(y) - ρ123 v3)}] dv3    (L.131)

= ∫_{K32(y)}^{K31(y)} λ3 e^{-λ3 v3} - B6'(y) e^{-A6' v3} dv3    (L.132)

= [e^{-λ3 K32(y)} - e^{-λ3 K31(y)}] - (B6'(y)/A6')[e^{-A6' K32(y)} - e^{-A6' K31(y)}]    (L.133)

= B19 e^{-A19 y} - B12 e^{-A12 y} - B20 e^{-A20 y} + B14 e^{-A14 y}    (L.134)

where

A19 = λ3/(r(u3) - r(u2))    (L.135)

A20 = λ2/(r(u2) - r(u1)) + A6'/(r(u3) - r(u2))    (L.136)

B19 = e^{λ3 r(u2)t/(r(u3) - r(u2))}    (L.137)

B20 = (λ3/A6') e^{λ2 r(u1)t/(r(u2) - r(u1)) + A6' r(u2)t/(r(u3) - r(u2))}    (L.138)

So, ∫∫_{C^2_{y,3} C^2_{y,2}} f3(v3) f2(v2) dv2 dv3 is the sum of exponentials:

= B19 e^{-A19 y} - B12 e^{-A12 y} - B20 e^{-A20 y} + B14 e^{-A14 y}    (L.139)

i = 3 = n (i.e., checking for 3-resolvability):

j = n = 3:

C^3_{y,3} = [0, (y - r(u2)t)/(r(u3) - r(u2))]   [case a-i)]    (L.140)

j = 2:

C^3_{y,2} = [0, ∞)   [case a-ii)]    (L.141)

j = 1:

C^3_{y,1} = [0, ∞)   [case a-ii)]    (L.142)

j = 0:

C^3_{y,0} = ∞   [case a-iii)]    (L.143)

∫_{C^3_{y,3}} f3(v3) dv3 = ∫_0^{K32(y)} λ3 e^{-λ3 v3} dv3    (L.144)

= 1 - e^{-λ3 K32(y)}    (L.145)

= 1 - B19 e^{-A19 y}    (L.146)

So, ∫_{C^3_{y,3}} f3(v3) dv3 is the sum of exponentials:

∫_{C^3_{y,3}} f3(v3) dv3 = 1 - B19 e^{-A19 y}    (L.147)

If y ∈ [r(u2)t, r(u3)t):    (L.148)

F_Y(y) = 1 + B3 e^{-A3 y} - B7 e^{-A7 y} - B9 e^{-A9 y} + B11 e^{-A11 y} - B12 e^{-A12 y} + B14 e^{-A14 y} + B15 e^{-A15 y} - B16 e^{-A16 y} - B17 e^{-A17 y} + B18 e^{-A18 y} - B20 e^{-A20 y}

Finally, if y ∈ [r(u3)t, ∞):    (L.149)

F_Y(y) = 1.
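The piecewise closed form above can be sanity-checked by simulation. The sketch below (parameter names lam, r, t are illustrative assumptions, not the dissertation's notation) draws sample trajectories of the four-state acyclic, nonrecoverable process and estimates F_Y(y) empirically:

```python
import random

def sample_reward(lam, r, t, rng):
    """One realization of accumulated reward Y over [0, t].

    The process starts in u3 and degrades u3 -> u2 -> u1 -> u0, holding in
    state u_i for an exponential sojourn with rate lam[i]; u0 is absorbing."""
    y, remaining = 0.0, t
    for i in (3, 2, 1):
        v = rng.expovariate(lam[i])          # sojourn in state u_i
        dur = min(v, remaining)
        y += r[i] * dur
        remaining -= dur
        if remaining <= 0.0:
            return y
    return y + r[0] * remaining              # absorbed in u0 until t

def F_estimate(y, lam, r, t, n=100_000, seed=1):
    """Empirical estimate of F_Y(y) = P[Y <= y]."""
    rng = random.Random(seed)
    return sum(sample_reward(lam, r, t, rng) <= y for _ in range(n)) / n

lam = {3: 1.0, 2: 2.0, 1: 0.5}
r = {3: 4.0, 2: 2.0, 1: 1.0, 0: 0.0}
t = 1.0
# Y always lies in (r(u0)t, r(u3)t], so the estimate is 0 at y = r(u0)t
# and 1 at y = r(u3)t, matching the endpoints of the piecewise solution.
print(F_estimate(r[0] * t, lam, r, t))   # 0.0
print(F_estimate(r[3] * t, lam, r, t))   # 1.0
```

Intermediate values of such an estimate can then be compared against the sums of exponentials derived in this appendix.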

APPENDIX M

Recursively derived solution of a simple 3-state, acyclic, nonrecoverable process

This appendix contains the recursive derivation of the distribution of reward for a simple Markovian three-state, acyclic, nonrecoverable process, where r(u2) > r(u1) > r(u0). See Sections 5.3.10 [Recursive Formulation of F_Y|u], 5.3.9.2 [Three State Markovian Acyclic Process], and 5.3.10.1.2 [Three State Markovian Acyclic Process], and Fig. 5.4. The regions C^{1,2;r(u2)}_{y,j} are explicitly stated and the appropriate cases of Theorem 5.25 iii) are indicated in brackets. Many derivations are detailed in Appendix K [Closed-form solution of a simple 3-state, acyclic, nonrecoverable process]; numbers of the corresponding Appendix K equations are also indicated in brackets.

Let y ∈ [r(u0)t, r(u1)t) (thus, l = 1):

j = n = 2:

C^{1,2;r(u2)}_{y,2} = [0, (y - r(u0)t)/(r(u2) - r(u0))]   [Eq. K.2]    (M.1)

j = 1 = i:

C^{1,2;r(u2)}_{y,1} = [0, (y - r(u0)t - (r(u2) - r(u0))v2)/(r(u1) - r(u0))]   [case b-iii)]    (M.2)

j = 0:

C^{1,2;r(u2)}_{y,0} = [0, ∞)   [case b-iv)]    (M.3)

F^1_Y(y | (u2,u1,u0)) = F_Y(y) = ∫∫_{C^{1,2;r(u2)}_{y,2} C^{1,2;r(u2)}_{y,1}} f2(v2) f1(v1) dv1 dv2    (M.4)

= ∫_0^{(y-r(u0)t)/(r(u2)-r(u0))} ∫_0^{(y-r(u0)t-(r(u2)-r(u0))v2)/(r(u1)-r(u0))} λ2 e^{-λ2 v2} λ1 e^{-λ1 v1} dv1 dv2    (M.5)

and from Appendix K:

If y ∈ [r(u0)t, r(u1)t):    (M.6)

F^1_Y(y | (u2,u1,u0)) = F_Y(y)
= 1 - e^{-λ2(y - r(u0)t)/(r(u2) - r(u0))} - [λ2(r(u1) - r(u0)) / (λ2(r(u1) - r(u0)) - λ1(r(u2) - r(u0)))] [e^{-λ1(y - r(u0)t)/(r(u1) - r(u0))} - e^{-λ2(y - r(u0)t)/(r(u2) - r(u0))}]

Let y ∈ [r(u1)t, r(u2)t) (thus, l = 2):

j = 2:

C^{1,2;r(u2)}_{y,2} = ( (y - r(u1)t)/(r(u2) - r(u1)), (y - r(u0)t)/(r(u2) - r(u0)) ]   [Eq. K.16]    (M.7)

j = 1:

C^{1,2;r(u2)}_{y,1} = [0, (y - r(u0)t - (r(u2) - r(u0))v2)/(r(u1) - r(u0))]   [Eq. K.19]    (M.8)

j = 0:

C^{1,2;r(u2)}_{y,0} = [0, ∞)   [case b-iv)]    (M.9)

∫∫_{C^{1,2;r(u2)}_{y,2} C^{1,2;r(u2)}_{y,1}} f2(v2) f1(v1) dv1 dv2    (M.10)

= ∫_{(y-r(u1)t)/(r(u2)-r(u1))}^{(y-r(u0)t)/(r(u2)-r(u0))} ∫_0^{(y-r(u0)t-(r(u2)-r(u0))v2)/(r(u1)-r(u0))} λ2 e^{-λ2 v2} λ1 e^{-λ1 v1} dv1 dv2    (M.11)

= e^{-λ2(y - r(u1)t)/(r(u2) - r(u1))} - e^{-λ2(y - r(u0)t)/(r(u2) - r(u0))} - [λ2(r(u1) - r(u0)) / (λ2(r(u1) - r(u0)) - λ1(r(u2) - r(u0)))] [e^{-λ1(y - r(u0)t)/(r(u1) - r(u0)) - [λ2(r(u1) - r(u0)) - λ1(r(u2) - r(u0))](y - r(u1)t)/((r(u1) - r(u0))(r(u2) - r(u1)))} - e^{-λ2(y - r(u0)t)/(r(u2) - r(u0))}]    (M.12)

F^2_Y(y | (u2,u1)) = F_Y(y) = 1 - e^{-λ2(y - r(u1)t)/(r(u2) - r(u1))}    (M.13)

Therefore,

F^2_Y(y | (u2,u1,u0)) = F_Y(y)    (M.14)

= ∫∫_{C^{1,2;r(u2)}_{y,2} C^{1,2;r(u2)}_{y,1}} f2(v2) f1(v1) dv1 dv2 + F^2_Y(y | (u2,u1))

= e^{-λ2(y - r(u1)t)/(r(u2) - r(u1))} - e^{-λ2(y - r(u0)t)/(r(u2) - r(u0))} - [λ2(r(u1) - r(u0)) / (λ2(r(u1) - r(u0)) - λ1(r(u2) - r(u0)))] [e^{-λ1(y - r(u0)t)/(r(u1) - r(u0)) - [λ2(r(u1) - r(u0)) - λ1(r(u2) - r(u0))](y - r(u1)t)/((r(u1) - r(u0))(r(u2) - r(u1)))} - e^{-λ2(y - r(u0)t)/(r(u2) - r(u0))}] + 1 - e^{-λ2(y - r(u1)t)/(r(u2) - r(u1))}    (M.15)

= 1 - e^{-λ2(y - r(u0)t)/(r(u2) - r(u0))} - [λ2(r(u1) - r(u0)) / (λ2(r(u1) - r(u0)) - λ1(r(u2) - r(u0)))] [e^{-λ1(y - r(u0)t)/(r(u1) - r(u0)) - [λ2(r(u1) - r(u0)) - λ1(r(u2) - r(u0))](y - r(u1)t)/((r(u1) - r(u0))(r(u2) - r(u1)))} - e^{-λ2(y - r(u0)t)/(r(u2) - r(u0))}]    (M.16)

For y ≥ r(u2)t:

F^2_Y(y | (u2,u1,u0)) = F_Y(y) = 1    (M.17)
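As a numerical cross-check of the first-interval expression (M.6) as reconstructed here, the following sketch compares it against a direct Monte Carlo estimate of P[Y ≤ y]. All parameter values and names are illustrative, and the closed form assumes λ2(r(u1)-r(u0)) ≠ λ1(r(u2)-r(u0)):

```python
import math
import random

def F_closed(y, lam2, lam1, r2, r1, r0, t):
    """Reconstructed closed form (M.6), valid for y in [r0*t, r1*t)."""
    c = lam2 * (r1 - r0) / (lam2 * (r1 - r0) - lam1 * (r2 - r0))
    K20 = (y - r0 * t) / (r2 - r0)
    K10 = (y - r0 * t) / (r1 - r0)
    return (1.0 - math.exp(-lam2 * K20)
            - c * (math.exp(-lam1 * K10) - math.exp(-lam2 * K20)))

def F_mc(y, lam2, lam1, r2, r1, r0, t, n=200_000, seed=7):
    """Direct Monte Carlo estimate of P[Y <= y] for the 3-state process."""
    rng, hits = random.Random(seed), 0
    for _ in range(n):
        v2 = rng.expovariate(lam2)           # sojourn in u2
        v1 = rng.expovariate(lam1)           # sojourn in u1
        d2 = min(v2, t)
        d1 = min(v1, t - d2)
        Y = r2 * d2 + r1 * d1 + r0 * (t - d2 - d1)
        hits += (Y <= y)
    return hits / n

lam2, lam1, r2, r1, r0, t = 1.0, 0.7, 2.0, 1.0, 0.0, 1.0
y = 0.6                                      # inside [r0*t, r1*t)
print(abs(F_closed(y, lam2, lam1, r2, r1, r0, t)
          - F_mc(y, lam2, lam1, r2, r1, r0, t)) < 0.01)  # True
```

For these parameters both approaches give roughly 0.05, consistent with the closed-form derivation.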

APPENDIX N

Recursively derived solution of a simple 4-state, acyclic, nonrecoverable process

This appendix contains the recursive derivation of the distribution of reward for a simple Markovian four-state, acyclic, nonrecoverable process, where r(u3) > r(u2) > r(u1) > r(u0). See Sections 5.3.10 [Recursive Formulation of F_Y|u], 5.3.9.3 [Four State Markovian Acyclic Process], and 5.3.10.1.3 [Four State Markovian Acyclic Process], and Fig. 5.5. The regions C^{1,3;r(u3)}_{y,j} are explicitly stated and the appropriate cases of Theorem 5.25 iii) are indicated in brackets. Many derivations are detailed in Appendix L [Closed-form solution of a simple 4-state, acyclic, nonrecoverable process]; numbers of the corresponding Appendix L equations are also indicated in brackets. Intermediate variables such as A1 and B1 are as defined in Appendix L. To help simplify the manipulations, we use the notation:

K_{kl}(y) = (y - r(u_l)t) / (r(u_k) - r(u_l))    (N.1)

and

ρ_{jkl} = (r(u_l) - r(u_j)) / (r(u_k) - r(u_j))    (N.2)

Let y ∈ [r(u0)t, r(u1)t) (thus, l = 1):

j = n = 3:

C^{1,3;r(u3)}_{y,3} = [0, K30(y)]   [Eq. L.4]    (N.3)

j = n-1 = 2:

C^{1,3;r(u3)}_{y,2} = [0, K20(y) - ρ023 v3]   [Eq. L.7]    (N.4)

j = 1 = i:

C^{1,3;r(u3)}_{y,1} = [0, K10(y) - ρ012 v2 - ρ013 v3]   [Eq. L.10]    (N.5)

j = 0:

C^{1,3;r(u3)}_{y,0} = ∞   [case b-iv)]    (N.6)

If y ∈ [r(u0)t, r(u1)t):    (N.7)

F^1_Y(y | (u3,u2,u1,u0)) = 1 - B1 e^{-A1 y} - B2 e^{-A2 y} + B3 e^{-A3 y} - B4 e^{-A4 y} + B5 e^{-A5 y} + B6 e^{-A6 y} - B7 e^{-A7 y}    (N.8)

Let y ∈ [r(u1)t, r(u2)t) (thus, l = 2):

j = n = 3:

C^{1,3;r(u3)}_{y,3} = [0, K30(y)]   [Eq. L.47]    (N.9)

j = n-1 = 2:

C^{1,3;r(u3)}_{y,2} = ( K21(y) - ρ123 v3, K20(y) - ρ023 v3 ]   [Eq. L.51]    (N.10)

j = 1 = i:

C^{1,3;r(u3)}_{y,1} = [0, K10(y) - ρ012 v2 - ρ013 v3]   [Eq. L.53]    (N.11)

j = 0:

C^{1,3;r(u3)}_{y,0} = ∞   [case b-iv)]    (N.12)

∫∫∫_{C^{1,3;r(u3)}_{y,3} C^{1,3;r(u3)}_{y,2} C^{1,3;r(u3)}_{y,1}} f3(v3) f2(v2) f1(v1) dv1 dv2 dv3    (N.13)

= B8 e^{-A8 y} - B9 e^{-A9 y} - B2 e^{-A2 y} + B3 e^{-A3 y} - B10 e^{-A10 y} + B11 e^{-A11 y} + B6 e^{-A6 y} - B7 e^{-A7 y}

F^2_Y(y | (u3,u2,u1)) = 1 - e^{-λ3(y - r(u1)t)/(r(u3) - r(u1))} - [λ3(r(u2) - r(u1)) / (λ3(r(u2) - r(u1)) - λ2(r(u3) - r(u1)))] [e^{-λ2(y - r(u1)t)/(r(u2) - r(u1))} - e^{-λ3(y - r(u1)t)/(r(u3) - r(u1))}]    (N.14)

F^2_Y(y | (u3,u2,u1)) = 1 - B12 e^{-A12 y} - B13 e^{-A13 y} + B14 e^{-A14 y}    (N.15)

If y ∈ [r(u1)t, r(u2)t):    (N.16)

F^2_Y(y | (u3,u2,u1,u0)) = 1 - B2 e^{-A2 y} + B3 e^{-A3 y} + B6 e^{-A6 y} - B7 e^{-A7 y} + B8 e^{-A8 y} - B9 e^{-A9 y} - B10 e^{-A10 y} + B11 e^{-A11 y} - B12 e^{-A12 y} - B13 e^{-A13 y} + B14 e^{-A14 y}

Let y ∈ [r(u2)t, r(u3)t) (thus, l = 3):

j = n = 3:

C^{1,3;r(u3)}_{y,3} = ( K32(y), K30(y) ]   [Eq. L.102]    (N.17)

j = n-1 = 2:

C^{1,3;r(u3)}_{y,2} = ( K21(y) - ρ123 v3, K20(y) - ρ023 v3 ]   [Eq. L.105]    (N.18)

j = 1 = i:

C^{1,3;r(u3)}_{y,1} = [0, K10(y) - ρ012 v2 - ρ013 v3]   [Eq. L.107]    (N.19)

j = 0:

C^{1,3;r(u3)}_{y,0} = ∞   [case b-iv)]    (N.20)

∫∫∫_{C^{1,3;r(u3)}_{y,3} C^{1,3;r(u3)}_{y,2} C^{1,3;r(u3)}_{y,1}} f3(v3) f2(v2) f1(v1) dv1 dv2 dv3    (N.21)

= B15 e^{-A15 y} - B9 e^{-A9 y} - B16 e^{-A16 y} + B3 e^{-A3 y} - B17 e^{-A17 y} + B11 e^{-A11 y} + B18 e^{-A18 y} - B7 e^{-A7 y}

F^3_Y(y | (u3,u2,u1)) = 1 - e^{-λ3(y - r(u1)t)/(r(u3) - r(u1))} - [λ3(r(u2) - r(u1)) / (λ3(r(u2) - r(u1)) - λ2(r(u3) - r(u1)))] [e^{-λ2(y - r(u1)t)/(r(u2) - r(u1)) - A6'(y - r(u2)t)/(r(u3) - r(u2))} - e^{-λ3(y - r(u1)t)/(r(u3) - r(u1))}]    (N.22)

= 1 - B12 e^{-A12 y} - B20 e^{-A20 y} + B14 e^{-A14 y}    (N.23)

Therefore,

F^3_Y(y | (u3,u2,u1,u0)) = ∫∫∫_{C^{1,3;r(u3)}_{y,3} C^{1,3;r(u3)}_{y,2} C^{1,3;r(u3)}_{y,1}} f3(v3) f2(v2) f1(v1) dv1 dv2 dv3 + F^3_Y(y | (u3,u2,u1))    (N.24)

If y ∈ [r(u2)t, r(u3)t):    (N.25)

F^3_Y(y | (u3,u2,u1,u0)) = 1 + B3 e^{-A3 y} - B7 e^{-A7 y} - B9 e^{-A9 y} + B11 e^{-A11 y} - B12 e^{-A12 y} + B14 e^{-A14 y} + B15 e^{-A15 y} - B16 e^{-A16 y} - B17 e^{-A17 y} + B18 e^{-A18 y} - B20 e^{-A20 y}

Finally, if y ∈ [r(u3)t, ∞):    (N.26)

F^3_Y(y | (u3,u2,u1,u0)) = 1
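The recursive idea of this appendix, reducing an n-state problem to an (n-1)-state one, can also be exercised numerically by conditioning on the sojourn time in the top state. The sketch below is a quadrature analogue under illustrative parameters and names, not the symbolic machinery of Theorem 5.25:

```python
import math

def F_2state(y, lam, r_hi, r_lo, t):
    """P[Y <= y] for a 2-state process u_hi -> u_lo over [0, t]."""
    if y >= r_hi * t:
        return 1.0
    if y < r_lo * t:
        return 0.0
    # Y <= y iff the sojourn v in u_hi satisfies
    # v <= (y - r_lo*t)/(r_hi - r_lo)
    return 1.0 - math.exp(-lam * (y - r_lo * t) / (r_hi - r_lo))

def F_3state(y, lam2, lam1, r2, r1, r0, t, steps=4000):
    """P[Y <= y] by conditioning on the sojourn v2 in u2 (trapezoid rule)."""
    h = t / steps
    acc = 0.0
    for i in range(steps + 1):
        v2 = i * h
        w = 0.5 if i in (0, steps) else 1.0
        # Given v2 < t, the remainder is a 2-state problem on [0, t - v2]
        # with reward budget y - r2*v2.
        acc += w * lam2 * math.exp(-lam2 * v2) * F_2state(
            y - r2 * v2, lam1, r1, r0, t - v2)
    total = acc * h
    # v2 >= t: the process never left u2, so Y = r2*t.
    return total + math.exp(-lam2 * t) * (1.0 if y >= r2 * t else 0.0)

# Compare with the first-interval closed form of Appendices K/M:
lam2, lam1, r2, r1, r0, t = 1.0, 0.7, 2.0, 1.0, 0.0, 1.0
y = 0.6
c = lam2 * (r1 - r0) / (lam2 * (r1 - r0) - lam1 * (r2 - r0))
closed = (1.0 - math.exp(-lam2 * (y - r0 * t) / (r2 - r0))
          - c * (math.exp(-lam1 * (y - r0 * t) / (r1 - r0))
                 - math.exp(-lam2 * (y - r0 * t) / (r2 - r0))))
print(abs(F_3state(y, lam2, lam1, r2, r1, r0, t) - closed) < 1e-3)  # True
```

The same conditioning step applied once more, on the sojourn in u3, reduces the four-state process of this appendix to the three-state routine above.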

BIBLIOGRAPHY

[1] L. Svobodova, Computer Performance Measurement and Evaluation Methods: Analysis and Applications. New York, NY: American Elsevier, 1976.
[2] D. Ferrari, Computer Systems Performance Evaluation. Englewood Cliffs, NJ: Prentice-Hall, 1978.
[3] H. Kobayashi, Modeling and Analysis: An Introduction to System Performance Evaluation Methodology. Reading, MA: Addison-Wesley, 1978.
[4] C. H. Sauer and K. M. Chandy, Computer Systems Performance Modeling. Englewood Cliffs, NJ: Prentice-Hall, 1981.
[5] M. L. Shooman, Probabilistic Reliability: An Engineering Approach. New York, NY: McGraw-Hill, 1968.
[6] B. V. Gnedenko, Yu. K. Belyayev, and A. D. Solovyev, Mathematical Methods of Reliability Theory. New York, NY: Academic Press, 1969.
[7] R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life Testing: Probability Models. New York, NY: Holt, Rinehart and Winston, Inc., 1975.
[8] A. Coppola, "A mathematical model for the prediction of system effectiveness," in Proc. 2nd New York Conf. Electronic Reliability, Oct. 1961.
[9] Weapon System Effectiveness Industry Advisory Committee (WSEIAC), Chairman's Final Report, AFSC-TR-65-6, US Air Force Systems Command, available from Defense Documentation Center (DDC), Cameron Station, Alexandria, VA 22314 USA, Jan. 1965.
[10] F. A. Tillman, C. L. Hwang, and W. Kuo, "System effectiveness models: An annotated bibliography," IEEE Trans. Reliability, vol. R-29, pp. 295-304, Oct. 1980.
[11] J. F. Meyer, D. G. Furchtgott, and A. Movaghar, "A bibliography on formal methods for system specification, design, and validation," SEL Report No. 163, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1982.
[12] J. F. Meyer, "Computation-based reliability analysis," IEEE Trans. Comput., vol. C-25, pp. 578-584, June 1976.
[13] B. R. Borgerson and R. F. Freitas, "A reliability model for gracefully degrading and standby-sparing systems," IEEE Trans. Comput., vol. C-24, pp. 517-525, May 1975.

[14] H. B. Baskin, B. R. Borgerson, and R. Roberts, "PRIME-A modular architecture for terminal-oriented systems," in 1972 Spring Joint Computer Conf., AFIPS Conf. Proc. 40, pp. 431-437, 1972.
[15] R. Troy, "Dynamic reconfiguration: An algorithm and its efficiency evaluation," in Proc. 1977 Int. Symp. Fault-Tolerant Computing, Los Angeles, CA, pp. 44-49, June 1977.
[16] J. Losq, "Effects of failures on gracefully degradable systems," in Proc. 1977 Int. Symp. Fault-Tolerant Computing, Los Angeles, CA, pp. 29-34, June 1977.
[17] M. D. Beaudry, "Performance-related reliability measures for computing systems," IEEE Trans. Comput., vol. C-27, pp. 540-547, June 1978.
[18] H. Mine and K. Hatayama, "Performance related reliability measures for computing systems," in Proc. 1979 Int. Symp. on Fault-Tolerant Computing, Madison, WI, pp. 59-62, June 1979.
[19] F. A. Gay and M. L. Ketelsen, "Performance evaluation for gracefully degrading systems," in Proc. 1979 Int. Symp. on Fault-Tolerant Computing, Madison, WI, pp. 51-58, June 1979.
[20] T. C. K. Chou and J. A. Abraham, "Performance/availability model of shared resource multiprocessors," IEEE Trans. Reliability, vol. R-29, pp. 70-76, April 1980.
[21] X. Castillo and D. P. Siewiorek, "A performance-reliability model for computing systems," in Proc. 1980 Int. Symp. Fault-Tolerant Computing, Kyoto, Japan, pp. 187-192, Oct. 1980.
[22] J. M. De Souza, "A unified method for the benefit analysis of fault-tolerance," in Proc. 1980 Int. Symp. on Fault-Tolerant Computing, Kyoto, Japan, pp. 201-203, Oct. 1980.
[23] X. Castillo and D. P. Siewiorek, "Workload, performance, and reliability of digital computing systems," in Proc. 1981 Int. Symp. Fault-Tolerant Computing, Portland, ME, pp. 84-89, June 1981.
[24] R. Huslende, "A combined evaluation of performance and reliability for degradable systems," in ACM/SIGMETRICS Conf. on Measurement and Modeling of Computer Systems, Las Vegas, NV, pp. 157-164, Sept. 1981.
[25] A. Pedar and V. V. S. Sarma, "Phased-mission analysis for evaluating the effectiveness of aerospace computing systems," IEEE Trans. Reliability, vol. R-30, Dec. 1981.
[26] C. M. Krishna and K. G. Shin, "Performance measures for multiprocessor controllers," CRL-TR-1-82, Computing Research Laboratory, The University of Michigan, Ann Arbor, MI, Oct. 1982.

[27] J. Arlat and J. C. Laprie, "Performance-related dependability evaluation of supercomputer systems," in Proc. 1983 Int. Symp. on Fault-Tolerant Computing, Milano, Italy, June 1983.
[28] E. Gai and M. Adams, "Measures of merit for fault-tolerant systems," in Proc. Guidance and Control Conf., Gatlinburg, TN, Aug. 1983.
[29] J. F. Meyer, "On evaluating the performability of degradable computing systems," IEEE Trans. Comput., vol. C-29, pp. 720-731, Aug. 1980.
[30] J. F. Meyer, R. A. Ballance, D. G. Furchtgott, and L. T. Wu, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 3, NASA Grant NSG 1306, SEL Report No. 116, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1978.
[31] -, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 4, NASA Grant NSG 1306, SEL Report No. 121, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, July 1978.
[32] J. F. Meyer, D. G. Furchtgott, and L. T. Wu, "Performability evaluation of the SIFT computer," IEEE Trans. Comput., vol. C-29, pp. 501-509, June 1980.
[33] J. F. Meyer, "Closed-form solutions of performability," IEEE Trans. Comput., vol. C-31, pp. 648-657, July 1982.
[34] -, "A model hierarchy for evaluating the effectiveness of computing systems," in Textes des Conferences, IIe Congres National de Fiabilite, Perros-Guirec, France, pp. 539-555, Sept. 1976, Tome II.
[35] J. F. Meyer, D. G. Furchtgott, and L. T. Wu, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 1, NASA Grant NSG 1306, SEL Report No. 106-1, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Nov. 1976.
[36] J. F. Meyer, R. A. Ballance, D. G. Furchtgott, and L. T. Wu, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 2, NASA Grant NSG 1306, SEL Report No. 111, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, July 1977.
[37] R. A. Ballance and J. F. Meyer, "Functional dependence and its application to system evaluation," in Proc. of the 1978 Johns Hopkins Conf. on Information Sciences and Systems, Baltimore, MD, pp. 280-285, March 1978.

[38] J. F. Meyer, "On evaluating the performability of degradable computing systems," in Proc. 1978 Int. Symp. on Fault-Tolerant Computing, Toulouse, France, pp. 44-49, June 1978.
[39] D. G. Furchtgott and J. F. Meyer, "Performability evaluation of fault-tolerant multiprocessors," in 1978 Government Microcircuit Applications Conference Digest of Papers, Monterey, CA, pp. 362-365, Nov. 1978.
[40] J. F. Meyer, D. G. Furchtgott, and L. T. Wu, "Performability evaluation of the SIFT computer," SEL Report No. 127, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1979.
[41] D. G. Furchtgott, "METAPHOR (Version 1) Programmer's Guide," SEL Report No. 128, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1979.
[42] J. F. Meyer, D. G. Furchtgott, and L. T. Wu, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 5, NASA Grant NSG 1306, SEL Report No. 129, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1979.
[43] L. T. Wu and J. F. Meyer, "Phased models for evaluating the performability of computing systems," in Proc. of the 1979 Johns Hopkins Conf. on Information Sciences and Systems, Baltimore, MD, pp. 426-431, March 1979.
[44] J. F. Meyer, D. G. Furchtgott, and L. T. Wu, "Performability evaluation of the SIFT computer," in Proc. 1979 Int. Symp. on Fault-Tolerant Computing, Madison, WI, pp. 43-50, June 1979.
[45] L. T. Wu and J. F. Meyer, "Phased models for evaluating the performability of computing systems," SEL Report No. 135, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, July 1979.
[46] D. G. Furchtgott, "METAPHOR (Version 1) User's Guide," SEL Report No. 136, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, July 1979.
[47] J. F. Meyer, "Performability modeling with continuous accomplishment sets," SEL Report No. 137, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, July 1979.
[48] J. F. Meyer, D. G. Furchtgott, and L. T. Wu, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 6, NASA Grant NSG 1306, SEL Report No. 138, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, July 1979.

[49] -, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 7, NASA Grant NSG 1306, SEL Report No. 141, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1980.
[50] J. F. Meyer and L. T. Wu, "Evaluation of computing systems using functionals of a stochastic process," SEL Report No. 140, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1980.
[51] J. F. Meyer, "Performability models and solutions for continuous performance variables," SEL Report No. 139, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, July 1980.
[52] J. F. Meyer, D. G. Furchtgott, and L. T. Wu, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 8, NASA Grant NSG 1306, SEL Report No. 145, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, July 1980.
[53] J. F. Meyer, "Closed-form solutions of performability," SEL Report No. 147, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1981.
[54] J. F. Meyer, D. G. Furchtgott, A. Movaghar, and L. T. Wu, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 9, NASA Grant NSG 1306, SEL Report No. 148, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1981.
[55] J. F. Meyer and L. T. Wu, "Evaluation of computing systems using functionals of a Markov process," in Proc. 14th Annual Hawaii Int. Conf. on System Sciences, Honolulu, HI, pp. 74-83, Jan. 1981.
[56] J. F. Meyer, "Closed-form solutions of performability," in Proc. 1981 Int. Symp. on Fault-Tolerant Computing, Portland, ME, pp. 66-71, June 1981.
[57] J. F. Meyer, D. G. Furchtgott, and A. Movaghar, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 10, NASA Grant NSG 1306, SEL Report No. 155, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, July 1981.

[58] -, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Semi-Annual Status Report Number 11, NASA Grant NSG 1306, SEL Report No. 164, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1982.
[59] L. T. Wu, "Models for evaluating the performability of degradable computing systems," Ph.D. Thesis, The University of Michigan, Ann Arbor, MI, 1982.
[60] -, "Models for evaluating the performability of degradable computing systems," CRL-TR-7-82 (also issued as SEL Report No. 169), Computing Research Lab, The University of Michigan, Ann Arbor, MI, June 1982.
[61] J. F. Meyer, D. G. Furchtgott, and A. Movaghar, "Models and techniques for evaluating the effectiveness of aircraft computing systems," Final Report, NASA Grant NSG 1306, SEL Report No. 170, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, July 1982.
[62] L. T. Wu, "Operational models for the evaluation of degradable computing systems," in ACM/SIGMETRICS Conf. Measurement and Modeling of Computer Systems, Seattle, WA, pp. 179-185, Aug. 1982.
[63] D. G. Furchtgott and J. F. Meyer, "Performability evaluation of computing systems using reward models," CRL-TR-27-83, Computing Research Lab, The University of Michigan, Ann Arbor, MI, Aug. 1983.
[64] J. F. Meyer, "Performability modelling of distributed real-time systems," in Proc. Int. Workshop on Applied Mathematics and Performance/Reliability Models of Computer/Communication Systems, Pisa, Italy, Sept. 1983.
[65] A. Movaghar, "Models for validation of degradable systems," Thesis proposal, The University of Michigan, Ann Arbor, MI, Jan. 1982.
[66] J. F. Meyer, "Performability modeling of distributed real-time systems," CRL-TR-28-83, Computing Research Lab, The University of Michigan, Ann Arbor, MI, Aug. 1983.
[67] J. H. Wensley, L. Lamport, J. Goldberg, M. W. Green, K. N. Levitt, P. M. Melliar-Smith, R. E. Shostak, and C. B. Weinstock, "SIFT: Design and analysis of a fault-tolerant computer for aircraft control," Proc. of the IEEE, vol. 66, no. 10, pp. 1240-1255, Oct. 1978.
[68] J. H. Wensley, J. Goldberg, M. W. Green, W. H. Kautz, K. N. Levitt, M. E. Mills, R. E. Shostak, P. M. Whiting-O'Keefe, and H. M. Zeidler, "Design study of software implemented fault tolerance (SIFT) computer," Interim Technical Report No. 1, NASA Contract NAS1-13792, Stanford Research Institute, June 1978.
[69] A. L. Hopkins, Jr., T. B. Smith, III, and J. H. Lala, "FTMP-A highly reliable fault-tolerant multiprocessor for aircraft," Proc. of the IEEE, vol. 66, no. 10, pp. 1221-1239, Oct. 1978.
[70] T. B. Smith, A. L. Hopkins, W. Taylor, R. A. Ausrotas, J. H. Lala, L. D. Hanley, J. H. Martin, E. C. Hall, and J. R. Howatt, "A fault tolerant multiprocessor architecture for aircraft," vols. I-III, Technical Report, NASA Contract NAS1-13782, The Charles Stark Draper Laboratory, Inc., Cambridge, MA, July 1976, April 1977, and Nov. 1978.

[71] S. K. Kachhal and S. R. Arora, "Seeking configurational optimization in computer systems," in Proc. ACM Ann. Conf., pp. 96-101, 1975.
[72] K. S. Trivedi and T. M. Sigmon, "A performance comparison of optimally designed computer systems with and without virtual memory," in Proc. 6th Annual Int. Symp. on Computer Architecture, pp. 117-121, Apr. 1979.
[73] K. S. Trivedi, Probability and Statistics with Reliability, Queuing, and Computer Science Applications. Englewood Cliffs, NJ: Prentice-Hall, 1982.
[74] A. L. Scherr, "An analysis of time shared computer systems," Doctoral Thesis, Department of Electrical Engineering, MIT, Cambridge, MA, June 1965.
[75] V. L. Wallace and R. S. Rosenberg, "Markovian models and numerical analysis of computer system behavior," in AFIPS Conf. Proc., vol. 28, pp. 141-148, 1966.
[76] J. L. Smith, "An analysis of time sharing computer systems using Markov models," in AFIPS Conf. Proc., vol. 28, pp. 87-95, 1966.
[77] J. von Neumann, "Probabilistic logics and the synthesis of reliable organisms from unreliable components," in Automata Studies, Annals of Math Studies No. 34, C. E. Shannon and J. McCarthy, Eds. Princeton, NJ: Princeton University Press, pp. 43-98, 1956.
[78] E. F. Moore and C. E. Shannon, "Reliable circuits using less reliable relays," J. of the Franklin Institute, vol. 262, part I, pp. 191-208, and part II, pp. 281-297, 1956.
[79] Z. W. Birnbaum, J. D. Esary, and S. C. Saunders, "Multi-component systems and structures and their reliability," Technometrics, vol. 3, pp. 55-77, Feb. 1961.
[80] W. G. Bouricius, W. C. Carter, and P. R. Schneider, "Reliability modeling techniques for self-repairing computer systems," in Proc. ACM 1969 Annual Conf., pp. 295-309, Aug. 1969.
[81] W. G. Bouricius, W. C. Carter, D. C. Jessep, P. R. Schneider, and A. B. Wadia, "Reliability modeling for fault-tolerant computers," IEEE Trans. Comput., vol. C-20, no. 11, pp. 1306-1311, Nov. 1971.
[82] W. M. Hirsch, M. Meisner, and C. Boll, "Cannibalization in multicomponent systems and the theory of reliability," Naval Research Logistics Quarterly, vol. 15, no. 3, pp. 331-359, 1968.
[83] V. Postelnicu, "Nondichotomic multi-component structures," Bull. Math. de la Soc. Sci. Math. de la R. S. de Roumanie, vol. 14, no. 2, pp. 209-217, 1970.

[84] J. D. Murchland, "Fundamental concepts and relations for reliability analysis of multistate systems," in Reliability and Fault-Tree Analysis, R. E. Barlow, J. B. Fussell, and N. D. Singpurwalla, Ed. Philadelphia, PA: SIAM, 1975.
[85] F. A. Tillman, C. H. Lie, and C. L. Hwang, "Analysis of pseudo-reliability of a combat tank system and its optimal design," IEEE Trans. Reliability, vol. R-25, pp. 239-242, Oct. 1976.
[86] J. C. Laprie and K. Medhaffer-Kanoun, "Dependability modeling of safety systems," in Proc. 1980 Int. Symp. on Fault-Tolerant Computing, Kyoto, Japan, Oct. 1980.
[87] X. Castillo and D. P. Siewiorek, "A workload dependent software reliability prediction model," in Proc. 1982 Int. Symp. on Fault-Tolerant Computing, Los Angeles, CA, pp. 279-286, June 1982.
[88] H. S. Winokur, Jr. and L. J. Goldstein, "Analysis of mission-oriented systems," IEEE Trans. Reliability, vol. R-18, no. 4, Nov. 1969.
[89] D. G. Furchtgott, "February Monthly Report," Internal memorandum, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Feb. 1977.
[90] M. D. Beaudry, "Performance related reliability measures for computing systems," in Proc. 1977 Int. Symp. on Fault-Tolerant Computing, Los Angeles, CA, pp. 16-21, June 1977.
[91] F. A. Gay, "Performance Modeling for Gracefully Degrading Systems," Ph.D. Dissertation, Northwestern University, Evanston, IL, June 1979.
[92] G. N. Cherkesov, "Semi-Markovian models of reliability of multichannel systems with unreplenishable reserve of time," Engineering Cybernetics, vol. 18, March 1981.
[93] P. J. Denning and J. P. Buzen, "The operational analysis of queueing network models," Computing Surveys, vol. 10, pp. 225-261, Sept. 1978.
[94] D. F. Haasl, "Advanced concepts in fault tree analysis," System Safety Symposium, The University of Washington and The Boeing Company, Seattle, WA, June 1965.
[95] S. A. Lapp and G. J. Powers, "Computer-aided synthesis of fault trees," IEEE Trans. Reliability, vol. R-26, pp. 2-13, April 1977.
[96] G. E. Apostolakis, S. L. Salem, and J. S. Wu, "CAT: A computer code for the automated construction of fault trees," Rep. EPRI NP-705, Electric Power Research Institute, March 1978.

[97] J. D. Esary and H. Ziehms, "Reliability of phased missions," in Reliability and Fault Tree Analysis. Philadelphia, PA: SIAM, pp. 213-236, 1975.
[98] H. Kumamoto and E. J. Henley, "Top-down algorithm for obtaining prime implicant sets of non-coherent fault trees," IEEE Trans. Reliability, vol. R-27, pp. 242-249, Oct. 1978.
[99] R. L. Williams and W. Y. Gateley, "GO methodology-an overview," EPRI NP-765, Electric Power Research Institute, May 1978.
[100] L. Caldarola, "Fault tree analysis of multi-state systems with multistate components," in Proc. American Nuclear Society Topical Meeting on Probabilistic Analysis of Nuclear Reactor Safety, Los Angeles, CA, vol. VIII-Paper 1, May 1978.
[101] D. G. Furchtgott, "June Monthly Report," Internal memorandum, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, June 1981.
[102] -, "August Monthly Report," Internal memorandum, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Aug. 1981.
[103] E. R. Woodcock, "The calculation of reliability of systems: The program NOTED," Authority Health and Safety Branch, U.K.A.E.A., 11 Charles II Street, London SW1, England, 1968.
[104] Y. H. Kim, K. E. Case, and P. M. Ghare, "A method for computing complex system reliability," IEEE Trans. Reliability, vol. R-21, pp. 215-219, Nov. 1972.
[105] K. P. Parker and E. J. McCluskey, "Probabilistic treatment of general combinational networks," IEEE Trans. Comput., vol. C-24, pp. 668-670, 1975.
[106] A. Satyanarayana and A. Prabhakar, "New topological formula and rapid algorithm for reliability analysis of complex networks," IEEE Trans. Reliability, vol. R-27, pp. 82-100, June 1978.
[107] J. M. Hammersley and D. C. Handscomb, Monte Carlo Methods. London, England: Methuen and Co., Ltd., 1964.
[108] J. B. Fussell and J. S. Arendt, "System reliability engineering methodology: A discussion of the state of the art," Nucl. Safety, vol. 20, 1979.
[109] N. J. McCormick, Reliability and Risk Analysis. New York, NY: Academic Press, 1981.
[110] H. E. Kongso, "RELY4: A Monte Carlo computer program for system reliability analysis," Report RISO-M-1500, Danish Atomic Energy Commission, June 1972.

[111] M. O. Locks, "Monte Carlo Bayesian system reliability- and MTBF-confidence assessment," AFFDL-TR-75-144, Air Force Flight Dynamics Laboratory, Wright-Patterson AFB, OH, available from NTIS, Springfield, VA 22161, 1975.
[112] "Reactor Safety Study: An assessment of accident rates in US commercial nuclear power plants," WASH-1400 (NUREG 75/014), US Nuclear Regulatory Commission, available from NTIS, Springfield, VA 22161, Oct. 1975.
[113] S. D. Matthews, "MOCARS: A Monte Carlo simulation code for determining the distribution and simulation limits," ERDA Rep. TREE-1138, EG&G Idaho, July 1977.
[114] R. C. Erdmann, et al., "Probabilistic safety analysis III," EPRI NP-749, Electric Power Research Institute, April 1978.
[115] P. A. Jensen and M. Bellmore, "An algorithm to determine the reliability of a complex system," IEEE Trans. Reliability, vol. R-18, pp. 169-174, Nov. 1969.
[116] W. E. Vesely and R. E. Narum, "PREP and KITT: Computer codes for the automatic evaluation of a fault tree," USAEC Rep. IN-1349, Idaho Nuclear Corporation, available from NTIS, Springfield, VA 22151, Aug. 1970.
[117] R. H. Blazek, R. E. Thomas, R. K. Thatcher, and J. L. Easterday, "TASRA: TAbular System Reliability Analysis," AFFDL-TR-71-128, Battelle, Columbus Lab., 1971.
[118] S. N. Semanderes, "ELRAFT-A computer program for the efficient logic reduction analysis of fault trees," IEEE Trans. Nucl. Sci., vol. NS-18, pp. 481-487, Feb. 1971.
[119] J. B. Fussell, E. B. Henry, and N. H. Marshall, "MOCUS-A computer program to obtain minimal sets from fault trees," USAEC Rep. ANCR-1156, Aerojet Nuclear Company, Aug. 1974.
[120] Boeing Commercial Airplane Co., "ARMM: Automatic Reliability Mathematical Model," Boeing Document No. D6A-10500-1.
[121] P. K. Pande, M. E. Spector, and P. Chatterjee, "Computerized fault tree analysis: TREEL and MICSUP," Rep. ORC-75-3 (AD-A010 146), Operations Research Center, Univ. of California, Berkeley, CA, April 1975.
[122] W. J. Van Slyke and D. E. Griffing, "ALLCUTS-A fast comprehensive fault tree analysis code," ERDA Rep. ARH-ST-112, Atlantic Richfield Hanford Company, July 1975.

[123] B. E. Bjurman, G. M. Jenkins, C. J. Masreliez, K. L. McClellan, and J. E. Templeman, "CARSRA: A reliability estimation tool for redundant systems," in Airborne Advanced Reconfigurable Computer System (ARCS), August 1976.
[124] G. R. Burdick, N. H. Marshall, and J. R. Wilson, "COMCAN-A computer program for common cause failure analysis," Rep. ANCR-1314, Aerojet Nuclear Company, May 1976.
[125] F. L. Leverenz and H. Kirch, "Users guide for the WAM-BAM computer code," Electric Power Research Institute Rep. 217-2-5, Jan. 1976.
[126] O. Platz and J. V. Olsen, "FAUNET: A program package for evaluation of fault trees and networks," Report RISO-348, Danish Atomic Energy Commission, Sept. 1976.
[127] C. L. Cate and J. B. Fussell, "BACFIRE-A computer code for common cause failure analysis," Rep. NERS-77-02, Nuclear Engineering Department, University of Tennessee, Knoxville, TN, Feb. 1977.
[128] J. B. Fussell, D. M. Rasmuson, and D. Wagner, "SUPERPOCUS-A computer program for calculating system probabilistic reliability and safety characteristics," Report NERS-77-01, Nuclear Engineering Department, Univ. Tennessee, Knoxville, TN, May 1977.
[129] H. E. Lambert and F. M. Gilman, "The IMPORTANCE computer code," ERDA Rep. UCRL-79269, Lawrence Livermore Laboratory, Univ. of California, March 1977.
[130] J. Olmos and L. Wolf, "A modular approach to fault tree and reliability analysis," Rep. MITNE-209, Dept. Nuclear Engineering, Massachusetts Institute of Technology, Cambridge, MA, Aug. 1977.
[131] P. J. Pelto and W. L. Purcell, "MFAULT: A computer program for analyzing fault trees," USDOE Report BNWL-2145, Battelle Pacific Northwest Laboratories, NTIS, Nov. 1977.
[132] W. E. Vesely and F. F. Goldberg, "FRANTIC-A computer code for time-dependent unavailability analysis," U. S. Nuclear Regulatory Commission Rep. NUREG-0193, Oct. 1977.
[133] D. B. Wheeler, J. S. Hsuan, R. R. Duersch, and G. M. Roe, "Fault tree analysis using bit manipulation," IEEE Trans. Reliability, vol. R-26, pp. 95-99, Jun. 1977.
[134] R. C. Erdmann, F. L. Leverenz, and H. Kirch, "WAMCUT: A computer code for fault tree evaluation," Electric Power Research Institute Rep. EPRI NP-803, June 1978.

[135] W. Y. Gateley and R. L. Williams, "GO Methodology-System reliability assessment and computer code manual," EPRI NP-766, Electric Power Research Institute, May 1978.
[136] D. M. Rasmuson, et al., "COMCAN-II: A computer program for common cause failure analysis," Rep. TREE-1289, Idaho National Engineering Laboratory, Sept. 1978.
[137] D. M. Rasmuson and N. H. Marshall, "FATRAM-A core efficient cut-set algorithm," IEEE Trans. Reliability, vol. R-27, pp. 250-253, Oct. 1978.
[138] R. E. Worrell and D. W. Stack, "A SETS user's manual for the fault tree analyst," Rep. SAND-77-2051, Sandia Laboratories, Nov. 1978.
[139] J. P. Roth, W. G. Bouricius, W. C. Carter, and P. R. Schneider, "Phase II of an architectural study of a self-repairing computer," SAMSO Report, no. TR-67-106, Los Angeles, CA, Nov. 1967.
[140] J. L. Fleming, "RELCOMP: A computer program for calculating system reliability and MTBF," IEEE Trans. Rel., vol. R-20, no. 3, pp. 102-107, Aug. 1971.
[141] F. P. Mathur and A. Avizienis, "Reliability analysis and architecture of a hybrid-redundant digital system: Generalized triple modular redundancy with self-repair," in 1970 Spring Joint Computer Conf., AFIPS Conf. Proc., vol. 36, pp. 375-383, 1970.
[142] D. A. Rennels and A. Avizienis, "RMS: A reliability modeling system for self-repairing computers," in Proc. 3rd Int. Symp. Fault-Tolerant Computing, pp. 131-135, June 1973.
[143] J. J. Stiffler, L. Bryant, and L. J. Guccione, "An engineering treatise on the CARE II dual mode and coverage models," Final Report, NASA Contract No. L-18084, Nov. 1975.
[144] J. J. Stiffler, L. A. Bryant, and L. J. Guccione, "CARE III final report," Phase I, Vol. I and II, Raytheon Co., prepared for the NASA Langley Research Center, Nov. 1979.
[145] Y. W. Ng and A. Avizienis, "ARIES 76 User's Guide," Tech. Rep. No. UCLA-ENG-7894, Comp. Sci. Dept., Univ. of California at Los Angeles, Los Angeles, CA, Dec. 1978.
[146] S. V. Makam and A. Avizienis, in Proc. 1982 Int. Symp. on Fault-Tolerant Computing, Los Angeles, CA, pp. 267-274, June 1982.
[147] A. Costes, J. E. Doucet, C. Landrault, and J. C. Laprie, "SURF: A program for dependability evaluation of complex fault-tolerant computing systems," in Proc. 1981 Int. Symp. on Fault-Tolerant Computing, Portland, ME, pp. 72-78, June 1981.

[148] J. B. Fussell, G. J. Powers, and R. G. Bennetts, "Fault-trees: A state of the art discussion," IEEE Trans. Reliability, vol. R-23, pp. 51-55, Apr. 1974.
[149] J. D. Esary and F. Proschan, "A reliability bound for systems of maintained, interdependent components," J. Amer. Statist. Assoc., vol. 65, pp. 329-338, 1970.
[150] J. L. Bricker, "A unified method for analyzing mission reliability for fault tolerant computer systems," IEEE Trans. Reliability, vol. R-22, no. 2, June 1973.
[151] A. Pedar, "Reliability modeling and architecture optimization of aerospace computing systems," Ph.D. Thesis, Indian Institute of Science, Bangalore 560 012, India, 1981.
[152] J. C. Laprie, "Reliability and availability of repairable structures," in Proc. 1975 Int. Symp. Fault-Tolerant Computing, Paris, France, pp. 87-92, June 1975.
[153] Y.-W. Ng and A. Avizienis, "A reliability model for gracefully degrading and repairable fault-tolerant systems," in Proc. 1977 Int. Symp. Fault-Tolerant Computing, Los Angeles, CA, pp. 22-28, June 1977.
[154] A. Costes, C. Landrault, and J. C. Laprie, "Reliability and availability models for maintained systems featuring hardware failures and design faults," IEEE Trans. Comput., vol. C-27, pp. 548-560, June 1978.
[155] R. A. Howard, Dynamic Probabilistic Systems, Vol. II: Semi-Markov and Decision Processes. New York, NY: Wiley, 1971.
[156] E. F. Hitt, M. S. Bridgman, and A. C. Robinson, "Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems," NASA contract NAS1-15760, NASA Contractor Report 159358, Battelle Columbus Laboratories, Columbus, OH, June 1980.
[157] "Models and techniques for evaluating the effectiveness of aircraft computing systems," Proposal for NASA Grant NSG 1306, submitted to NASA Langley Research Center, March 1976.
[158] T. F. Arnold, "The concept of coverage and its effect on the reliability of a repairable system," IEEE Trans. Rel., vol. R-22, no. 3, pp. 251-254, March 1973.
[159] "Models and techniques for evaluating the effectiveness of aircraft computing systems," Proposal for extension of NASA Grant NSG 1306, submitted to NASA Langley Research Center, March 1977.
[160] D. G. Furchtgott, "January Monthly Report," Internal memorandum, Systems Engineering Lab, The University of Michigan, Ann Arbor, MI, Jan. 1977.

[161] J. F. Meyer and A. W. Naylor, "Logically consistent task sets for system evaluation," in Proc. Seminaire sur l'Approche Systemes, ENSAE, Toulouse, France, vol. I, pp. 65-99, November 1973.
[162] P. E. Pfeiffer, Concepts of Probability Theory. New York, NY: McGraw-Hill, 1965.
[163] H. L. Royden, Real Analysis. New York, NY: Macmillan, 1968.
[164] E. Wong, Stochastic Processes in Information and Dynamical Systems. New York, NY: McGraw-Hill, 1971.
[165] M. Davio, J. P. Deschamps, and A. Thayse, Discrete and Switching Functions. New York, NY: McGraw-Hill, 1978.
[166] E. Phibbs and S. H. Kuwamoto, "An efficient map method for processing multistate logic trees," IEEE Trans. Reliability, vol. R-23, pp. 93-98, May 1972.
[167] R. G. Bennetts, "On the analysis of fault trees," IEEE Trans. Reliability, vol. R-24, pp. 175-185, Aug. 1975.
[168] N. K. Nanda, "Application of a Boolean identity for fault trees," IEEE Trans. Reliability, vol. R-29, p. 12, April 1980.
[169] C. L. Hwang, F. A. Tillman, and M. H. Lee, "System-reliability evaluation techniques for complex/large systems-A review," IEEE Trans. Reliability, vol. R-30, pp. 416-423, Dec. 1981.
[170] L. Fratta and U. G. Montanari, "A Boolean algebra method for computing the terminal reliability in a communication network," IEEE Trans. Circuit Theory, vol. CT-20, pp. 203-211, May 1973.
[171] K. Gopal, K. K. Aggarwal, and J. S. Gupta, "Reliability analysis of multistate device networks," IEEE Trans. Reliability, vol. R-27, pp. 233-236, Aug. 1978.
[172] J. A. Abraham, "An improved algorithm for network reliability," IEEE Trans. Reliability, vol. R-28, pp. 58-61, April 1979.
[173] M. O. Locks, "Recursive disjoint products, inclusion-exclusion, and min-cut approximations," IEEE Trans. Reliability, vol. R-29, pp. 361-367, Dec. 1980.
[174] R. K. Tiwari and M. Verma, "An algebraic technique for reliability evaluation," IEEE Trans. Reliability, vol. R-29, pp. 311-313, Oct. 1980.
[175] K. E. Iverson, A Programming Language. New York, NY: Wiley, 1962.

[176] B. W. Kernighan and D. M. Ritchie, The C Programming Language. Englewood Cliffs, NJ: Prentice-Hall, 1978.
[177] R. Miller, Switching Theory, Volume I: Combinational Circuits. New York, NY: Wiley, 1965.
[178] Z. Kohavi, Switching and Finite Automata Theory. New York, NY: McGraw-Hill, 1970.
[179] F. Preparata and R. Yeh, Introduction to Discrete Structures for Computer Science and Engineering. Reading, MA: Addison-Wesley, 1973.
[180] E. Veitch, "A chart method for simplifying truth functions," Proc. ACM, pp. 127-133, 1952.
[181] M. Karnaugh, "The map method for synthesis of combinational logic circuits," Trans. AIEE, Part I, Communication and Electronics, vol. 72, pp. 593-599, 1953.
[182] G. Gratzer, Lattice Theory: First Concepts and Distributive Lattices. San Francisco, CA: W. H. Freeman and Co., 1971.
[183] S. MacLane and G. Birkhoff, Algebra. New York, NY: Macmillan, 1967.
[184] C. Shannon, "The synthesis of two-terminal switching circuits," Bell System Tech. J., vol. 28, 1949.
[185] W. V. Quine, "The problem of simplifying truth functions," Am. Math. Monthly, vol. 59, Oct. 1952.
[186] E. J. McCluskey, "Minimization of Boolean functions," Bell System Tech. J., vol. 35, Nov. 1956.
[187] P. Tison, "Generalization of consensus theory and application to the minimization of Boolean functions," IEEE Trans. Electron. Comput., vol. EC-16, 1967.
[188] P. L. Tison, "An Algebra for Logic Systems-Switching Circuits Application," IEEE Trans. Comput., vol. C-20, pp. 339-351, Nov. 1971.
[189] I. Koren and M. Berg, "A module replacement policy for dynamic redundancy fault-tolerant computing systems," in Proc. 1981 Int. Symp. Fault-Tolerant Computing, Portland, ME, pp. 90-95, June 1981.
[190] S. V. Makam and A. Avizienis, "Modeling and analysis of periodically renewed closed fault-tolerant systems," in Proc. 1981 Int. Symp. Fault-Tolerant Computing, Portland, ME, pp. 134-141, June 1981.

[191] T. Nakagawa, K. Yasui, and S. Osaki, "Optimum maintenance policies for a computer system with restart," in Proc. 1981 Int. Symp. Fault-Tolerant Computing, Portland, ME, pp. 148-150, June 1981.
[192] Y. Oda, Y. Tohma, and K. Furuya, "Reliability and performance evaluation of self-reconfigurable systems with periodic maintenance," in Proc. 1981 Int. Symp. Fault-Tolerant Computing, Portland, ME, pp. 142-147, June 1981.
[193] R. Huslende, "Optimal cost/reliability allocation in communication networks," in Proc. 1983 Int. Symp. on Fault-Tolerant Computing, Milano, Italy, June 1983.
[194] J. A. Munarin, "Dynamic workload model for performance/reliability analysis of gracefully degrading systems," in Proc. 1983 Int. Symp. on Fault-Tolerant Computing, Milano, Italy, pp. 290-295, June 1983.
[195] E. Cinlar, Introduction to Stochastic Processes. Englewood Cliffs, NJ: Prentice-Hall, 1975.
[196] V. A. Ditkin and A. P. Prudnikov, Operational Calculus in Two Variables and its Applications. New York, NY: Pergamon Press, 1962.
[197] D. Gross and C. M. Harris, Fundamentals of Queueing Theory. New York, NY: Wiley, 1974.
[198] L. Kleinrock, Queueing Systems, Volume I: Theory. New York, NY: Wiley, 1975.
[199] H. Raiffa, Decision Analysis: Introductory Lectures on Choices under Uncertainty. Reading, MA: Addison-Wesley, 1968.
[200] J. Moses, "Symbolic integration: The stormy decade," Comm. ACM, vol. 14, Aug. 1971.
[201] A. C. Hearn, "REDUCE 2: A system and language for algebraic manipulation," in Proc. Second Symp. Symbolic and Algebraic Manipulation.