THE UNIVERSITY OF MICHIGAN
COMPUTING RESEARCH LABORATORY

DEMAND DRIVEN EVALUATION WITH EQUATIONS

Satish Thatte

CRL-TR-34-84

August 1984

Room 1079, East Engineering Building, Ann Arbor, Michigan 48109 USA
Tel: (313) 763-8000

Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the funding agency.

Demand Driven Evaluation with Equations

Satish Thatte
Department of Electrical Engineering and Computer Science
The University of Michigan
Ann Arbor, Michigan 48109

1. Introduction

Term rewriting systems, such as the λ-calculus (Barendregt, 1981), combinatory logic (Curry and Feys, 1958), and the FP system (Backus, 1978), are fundamental to applicative programming. A recent trend is to permit a programmer to describe term rewriting systems using sets of first-order equations (Burstall et al., 1980, Goguen and Tardo, 1979, Hoffmann and O'Donnell, 1982, Turner, 1976). These systems are the focus of this paper, and we refer to them as First-Order Equational Systems (FOES). FOES expand the expressive power of applicative programming in many ways. They can be used to define primitive functions on tree-like data structures directly (Burstall et al., 1980). Hoffmann and O'Donnell (1982) describe their use in succinct definitions of interpreters for languages such as LISP and LUCID (Ashcroft and Wadge, 1977). Their reliance on pattern matching in defining functions largely eliminates the need for the explicit use of selectors and predicates such as LISP's CAR, CDR and NULL, and greatly improves clarity and readability. FOES readily admit a many-sorted interpretation (Burstall et al., 1980, Goguen and Tardo, 1979) and polymorphism (MacQueen, 1981, Milner, 1978), and hence a very sophisticated form of strict static type

checking. A lazy interpretation of FOES extends the notion of computation with infinite structures to all tree-like structures, and indirectly permits applicative definitions of circular data structures such as doubly-linked lists, which can be viewed as cyclically infinite binary trees.

An examination of the operational semantics of existing term rewriting systems reveals that two approaches are possible. The evaluation of an expression in any rewriting system amounts to repeated application of the rewrite rules until an irreducible (normal) form is reached. Most expressions admit many alternative routes for the rewriting process, not all of which may terminate, and those that do terminate may not all reach the same normal form. Some systems determine the correct route, or computation rule, by decree. For instance, HOPE (Burstall et al., 1980) uses innermost-first evaluation except for a special variant of the cons constructor. FP (Backus, 1978) and OBJ (Goguen and Tardo, 1979) uniformly use the innermost-first rule. The advantage of choosing a rule on operational grounds is that efficiency and simplicity of implementation can be built into the choice. Often, restrictions on the permissible combinations of equations are also avoided.

The other commonly used approach is to treat the rewrite rules as the basis for a congruence relation and declare congruence classes to be the meaning of the terms they contain. The correctness or safeness of a computation rule

is judged by the requirement that it must reduce each term to its unique normal form equivalent, if one exists. This more "mathematical" approach has been used for the λ-calculus, combinatory logic, and some FOES (see, e.g., Hoffmann and O'Donnell, 1982). Apart from the obvious appeal of semantic elegance, the mathematical approach coincides with lazy evaluation, which has been shown to be an attractive way to deal with many programming problems (Friedman and Wise, 1976, Henderson and Morris, 1976, Kahn and MacQueen, 1977). One clear limitation of the approach is that it can only be applied to systems which satisfy the confluence property (Huet, 1977). More troublesome is the need to find a safe and efficient computation rule for each system. The standard example of a safe and optimal computation rule is the "normal order" rule for the λ-calculus (Vuillemin, 1974). The trouble with FOES is that they are meant to be user-defined, so it is impossible to hand-pick an appropriate computation rule for each individual system. The ideal solution would be a rule that is safe and optimal for all confluent FOES. The "full substitution" rule of Vuillemin (1974) is safe but not optimal, since it requires all outermost redices in an expression to be reduced, including, in general, many unnecessary ones. One way to approach optimality is to look for a safe sequential rule, i.e., a rule that selects exactly one redex at each step. Such a strategy turns out to be optimal in the number of

reduction steps in the so-called noncopying implementations, in which expressions are represented by directed acyclic graphs and all residuals of a given subexpression share the same representation (O'Donnell, 1977, Vuillemin, 1974). Safe sequential evaluation is identical in spirit to the less specific notion of demand driven evaluation, and we use the two terms interchangeably throughout this paper. Unfortunately, not only is a safe sequential rule for all confluent FOES not known, it almost certainly does not exist, since taking account of the right-hand sides of equations amounts to unbounded look-ahead. The reported work on the subject therefore confines itself to sequentiality based only on the left-hand sides of equations (Hoffmann and O'Donnell, 1984, Huet and Levy, 1979). Our goal is to extend the notion of demand driven evaluation to confluent FOES, i.e., to find a computation rule for FOES that is both safe and sequential, even when the right-hand sides of equations are disregarded.

The most commonly studied class of confluent FOES is the class called regular by Hoffmann and O'Donnell (1984), and nonambiguous linear by Huet and Levy (1979). We shall adopt the former term. In this paper, we study a subclass of regular FOES that is practically important and substantially easier to deal with. The defining characteristic of members of this class is that constructors and defined functions are represented by disjoint sets of symbols. This restriction is used in several actual languages incorporating FOES

(Burstall et al., 1980, Subrahmanyam and You, 1983, Turner, 1976), and is analogous to the separation of function and predicate symbols in logic programming (Kowalski, 1979). We call this subclass constructor FOES to emphasize that argument patterns in equations of this class contain only constructor symbols and variables. It turns out that only a subset of the class of constructor FOES admits the kind of safe sequential rule we need. Along with the rule, we therefore need a decision procedure to identify members of this subset. Section 2 informally describes our approach to the problem, Section 3 contains the mathematical results, Section 4 discusses possible applications and extensions of the results, Section 5 describes related work, and Section 6 contains the conclusions.

2. Informal Description

Let us start with an example to show that the leftmost-outermost-first rule is not safe for all FOES.

Example 1.
    pair_int(X,nil) = nil
    pair_int(X,A.L) = <X,A>.pair_int(X,L)
    append(nil,L)   = L
    append(A.L1,L2) = A.append(L1,L2)
    f = f
    evaluate: pair_int(f,append(nil,nil))

The syntactic convention in Example 1 and subsequent

examples is that identifiers beginning with upper-case letters are variables and the rest are function symbols. The symbol cons is represented by the infix operator "." for brevity. It is clear that although both arguments of pair_int in the expression are redices, choosing the leftmost first leads to divergence, while choosing the rightmost one quickly leads to the normal form nil for the entire expression. The example suggests, first, that it is necessary to infer the strictness or otherwise of a defined function with respect to each of its arguments, and second, that a function is nonstrict with respect to an argument when that argument is represented by a variable in some equation. Actually, the example is too simple to bring out the full complexity of the situation. The distinction between divergent and nondivergent arguments is not sharp. For instance, the second argument of pair_int needs only as much constructor structure as is necessary to match one of the two patterns available for it, as Example 2 illustrates.

Example 2.
    head(nil) = error
    head(A.L) = A
    evaluate: head(pair_int(1,append(1.nil,f)))

Although an attempt to completely evaluate the second argument of pair_int in the expression would diverge, any outermost-first strategy yields the normal form <1,1> for the entire expression.
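Because lazy functional languages implement essentially the demand driven discipline sketched here, Examples 1 and 2 can be transcribed almost literally. The following minimal Haskell sketch is ours and is not part of the formal development; the names pairInt, append and f are simply transliterations of the example.

    -- Examples 1 and 2, transliterated into Haskell.  Lazy pattern matching
    -- demands only as much of an argument as the patterns require.
    pairInt :: a -> [b] -> [(a, b)]
    pairInt _ []      = []
    pairInt x (a : l) = (x, a) : pairInt x l

    append :: [a] -> [a] -> [a]
    append []       l2 = l2
    append (a : l1) l2 = a : append l1 l2

    f :: a
    f = f                                   -- the divergent equation f = f

    ex1 :: [(Int, Int)]
    ex1 = pairInt f (append [] [])          -- Example 1: evaluates to []

    ex2 :: (Int, Int)
    ex2 = head (pairInt 1 (append [1] f))   -- Example 2: evaluates to (1,1)

In ex1 only the outermost constructor of the second argument is ever demanded, and in ex2 only its first cons cell, so the divergent f is never touched in either case.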

Example 3 shows that even this degree of strictness may be conditional.

Example 3.
    pair_list(X,nil)     = nil
    pair_list(A.L1,B.L2) = <A,B>.pair_list(L1,L2)
    pair_list(nil,B.L2)  = <-1,B>.pair_list(nil,L2)
    evaluate: (1) head(pair_list(f,nil))
              (2) head(pair_list(f,1.nil))

Expression (1) leads to the normal form "error" while (2) is divergent, although both contain exactly the same divergent subexpression in the same argument position. In other words, pair_list tolerates a divergent first argument only if the second argument is nil. Once the second argument reveals sufficient constructor structure to preclude this possibility, the structure of the first argument is needed to make further progress.
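For comparison, here is a Haskell transcription of Example 3 (our names, not part of the formal development). Haskell commits to one particular sequential strategy, trying clauses top to bottom and matching arguments left to right; with the clauses in this order that strategy happens to coincide with the safe one, examining the second argument first and demanding the first argument only after the second reveals a cons cell.

    pairList :: [Int] -> [b] -> [(Int, b)]
    pairList _        []       = []
    pairList (a : l1) (b : l2) = (a, b)  : pairList l1 l2
    pairList []       (b : l2) = (-1, b) : pairList [] l2

    diverge :: a
    diverge = diverge

    ex3a, ex3b :: (Int, Int)
    ex3a = head (pairList diverge [])    -- (1): pairList yields [], so head fails
                                         --     finitely, like head(nil) = error
    ex3b = head (pairList diverge [1])   -- (2): the second clause now needs the
                                         --     first argument, so this diverges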

The outlines of the overall evaluation strategy we need have now emerged. As in any other demand driven strategy, applications of defined functions are evaluated from the outside inwards, as needed. Only the outermost applications are needed a priori. The reason for selecting any other application is that it stands in such an argument position for a needed outer application that its evaluated (constructor) structure must be made manifest to some degree, and the only way to reveal constructor structure is to reduce the application. In order to turn the application into a redex, we must decide which equation is applicable to it. We therefore choose an argument whose structure is critical to that decision. When sufficient structure has been revealed to narrow the choice down as far as possible, we choose another argument to narrow it down still further, and so on, until a single equation remains. Note that we concern ourselves only with the left-hand sides of equations. The structure of these must be such that we are able to choose a "critical argument" at each step. If we cannot, the strategy fails. Fortunately, it turns out that in such a situation no strategy can succeed if it relies only on the structure of the left-hand sides.

This somewhat simplified description has glossed over two important details. First, in all our examples so far, the argument patterns have consisted of at most one constructor. The approach generalizes to deeper patterns with a little care, as illustrated by Example 4. The example uses two equations for the dual cases of AVL tree rebalancing by single rotations (see Knuth, 1973, p. 454).

Example 4.
    rotate(node(Alpha,d_plus,node(Beta,plus,Gamma))) =
        node(node(Alpha,balance,Beta),balance,Gamma)
    rotate(node(node(Alpha,minus,Beta),d_minus,Gamma)) =
        node(Alpha,balance,node(Beta,balance,Gamma))

Let us ignore the real significance of the balance factors d_plus, etc., and consider an arbitrary application of rotate. After ensuring that the outermost constructor of the argument is "node", neither the left nor the right

subtree of the node can be safely evaluated, since one of the equations has a "don't care" attitude in each case. The next critical structure is the balance factor of the node. If this is d_plus, then the second equation does not apply while the first may, and vice versa for d_minus. The reader will have recognized that the selection process taking place here is commonly done manually in languages like LISP, where the outermost structure of an argument is first queried using a predicate such as NULL. The argument may then be taken apart using selectors, and the pieces further queried selectively, and so on. It is instructive to rewrite the rotate function using conditionals, selectors, and predicates, in order to appreciate the gain in clarity brought about by using equations. The safe sequential computation rule we are seeking can be interpreted as an automatic version of the selection process, based on the implicit logical necessity described by the equations.
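To make that comparison concrete, here is one possible such rewriting, sketched in Haskell with hypothetical Tree and Bal types standing in for the constructors of Example 4; rotate is the equational version and rotate' is the same function in the conditional/selector/predicate style.

    data Bal  = DPlus | Plus | Balance | Minus | DMinus  deriving (Eq, Show)
    data Tree = Leaf | Node Tree Bal Tree                deriving Show

    -- Equational style (Example 4); like the equations, it is not exhaustive.
    rotate :: Tree -> Tree
    rotate (Node alpha DPlus (Node beta Plus gamma)) =
      Node (Node alpha Balance beta) Balance gamma
    rotate (Node (Node alpha Minus beta) DMinus gamma) =
      Node alpha Balance (Node beta Balance gamma)

    -- Selector/predicate style: every query and every selection is spelled out.
    left, right :: Tree -> Tree
    left  (Node l _ _) = l
    right (Node _ _ r) = r
    bal :: Tree -> Bal
    bal (Node _ b _) = b

    rotate' :: Tree -> Tree
    rotate' t
      | bal t == DPlus,  bal (right t) == Plus =
          Node (Node (left t) Balance (left (right t))) Balance (right (right t))
      | bal t == DMinus, bal (left t) == Minus =
          Node (left (left t)) Balance (Node (right (left t)) Balance (right t))

Note that the guards of rotate' query the node's own balance factor before looking at either subtree, which is exactly the selection order the equations dictate.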

The second detail to note is that not all functions need be exhaustively defined, i.e., none of the equations may be applicable for certain combinations of arguments. In such an expression, which we call "root-stable" following Hoffmann and O'Donnell (1984), the outer function symbol plays the role of a constructor. All applications immediately subordinate to a root-stable expression are therefore needed a priori, as in Example 5.

Example 5.
    evaluate: rotate(node(head(e1),balance,rotate(e1,e2,e3)))

The outermost application of rotate cannot be dealt with using either of the equations in Example 4, hence the applications of head and rotate immediately subordinate to it become "outermost". Another way to describe the situation is that the outermost rotate will be a part of any eventual normal form for the expression.

We conclude this section with two further examples. Example 6 shows that not all confluent FOES safely admit a sequential evaluation rule. The example uses the "parallel" logical or function provided as a primitive in certain applicative languages.

Example 6.
    or(X,true)      = true
    or(true,X)      = true
    or(false,false) = false
    evaluate: or(g,h)
    Case 1: g = true, h = h.    Case 2: g = g, h = true.

It is impossible to choose either argument for evaluation: choosing the first argument is unsafe if g and h are defined by the equations labeled Case 2, and choosing the second is unsafe if Case 1 is used.
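Example 6 is exactly the situation a sequential language cannot express. In the Haskell sketch below (our clause order, hypothetical names), whichever argument the first clause examines is forced first, so one of the two cases of the example necessarily diverges; swapping the first two clauses merely swaps which case diverges.

    por :: Bool -> Bool -> Bool
    por _     True  = True     -- this clause forces the second argument first
    por True  _     = True
    por False False = False

    loop :: Bool
    loop = loop

    case1, case2 :: Bool
    case1 = por True loop      -- Case 1 of Example 6: diverges, loop is forced
    case2 = por loop True      -- Case 2 of Example 6: evaluates to True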

Finally, we would like to emphasize that FOES for which our approach fails may actually turn out to be sequential if the right-hand sides are taken into account, as in the case of the "funny append" in Example 7:

Example 7.
    append(L,nil)   = L
    append(nil,L)   = L
    append(A.L1,L2) = A.append(L1,L2)

Although the first equation appears to add generality, it is in fact redundant, since lack of structure in the first argument still means lack of structure for an application of append. Therefore, it is still safe to evaluate the first argument first.

3. Mathematical Results

3.1 Preliminaries and Definitions

3.1.1 Preliminaries

Most of the material in this section is a review of terminology and constructions commonly used in the literature on term rewriting systems. A reduction system operates over a non-empty ranked alphabet Σ which contains all symbols in the system. T_Σ denotes the set of all ground Σ-terms over the alphabet. Σ-terms in general may contain variables drawn from a countable set X. A Σ-term is said to be linear iff no variable occurs more than once in it. We drop the prefix signifying the alphabet and speak simply of terms whenever this is possible without confusion.

A path p is a possibly empty string of integers. We say that p reaches the subterm t/p in the term t. The empty string Λ reaches the term itself, the string 'k' reaches the kth argument, 'km' reaches the mth argument of the kth argument,

etc. Given paths p and q, p.q denotes their concatenation. The symbols ≤ and < denote the prefix and proper prefix relations on paths, respectively. Paths p and q are said to be independent iff neither p ≤ q nor q ≤ p. Paths(t) denotes the set of all paths that reach some subterm in t. Paths(t) is partitioned into XPaths(t) and CPaths(t), where XPaths(t) is the subset that reaches variables in t. We frequently refer to a path p ∈ Paths(t) as an occurrence of the subterm t/p in t. The expression t[p ← w] denotes the term obtained by replacing t/p at p by w.

A substitution is a map from variables to terms. The meaning of a substitution extends naturally to a map from terms to terms. The application of a substitution σ to a term t is conventionally denoted by tσ, where tσ is the instance of t produced by simultaneously substituting σ(x) for every variable x in t. We use the notation t ≤ u to denote that u is an instance of t; t < u means t ≤ u and t ≠ u. If neither t ≤ u nor u ≤ t then t and u are said to be independent. The relation ≤ is clearly a partial order. The usual first-order unification algorithm of Robinson (1965) is denoted by UNIFY. If two terms t and u have no common instance, then UNIFY(t,u) fails; otherwise it succeeds and returns a substitution σ such that tσ = uσ is the least common instance of t and u, i.e., in order-theoretic terminology, their join.

Since we shall consider safety of computations independently of the right-hand sides of equations, it is

natural for us to treat the collection of left-hand sides as an independent entity with its own properties. We refer to this collection as a base for a reduction system.

Definition 1: A base L for a reduction system in the alphabet Σ is a finite set {l_i, 1 ≤ i ≤ m} of linear Σ-terms such that:
(1) i ≠ j implies l_i and l_j are independent.
(2) There is a t ∈ T_Σ that is not an instance of any l_i, 1 ≤ i ≤ m.
(3) If l_1, l_2 ∈ L and p ∈ CPaths(l_1), then UNIFY(l_1/p, l_2) fails unless p = Λ and l_1 = l_2.
Each member of L is called a (redex) pattern. ∎

Condition (1) states that there are no redundant left-hand sides, (2) states that there are normal forms, and (3) states that there are no "critical pairs" (Knuth and Bendix, 1970); the last condition is used to ensure that any reduction system based on L will be confluent (Huet, 1977).

Definition 2: A reduction system S in the alphabet Σ is a finite set {<l_i, r_i>, 1 ≤ i ≤ m} of pairs of Σ-terms such that:
(1) L = {l_i, 1 ≤ i ≤ m} is a valid base.
(2) Each variable in r_i also appears in l_i, 1 ≤ i ≤ m. ∎

Throughout the following we shall deal with a fixed set Σ of function symbols, a fixed base L, and a fixed reduction system S based on L, unless mentioned otherwise. In many definitions the entities being defined are qualified with the subscript L or S signifying the context. These subscripts are dropped whenever the base or system concerned

is the fixed one.

Any u ∈ T_Σ such that l ≤ u for some l ∈ L is called a redex. If t/p is a redex for some p ∈ Paths(t), then p is called a redex occurrence in t. The set of all redex occurrences in t is denoted by RO_L(t). A simple reduction t →_s u occurs in S iff t/s = lσ for some l ∈ L, <l,r> ∈ S, and u = t[s ← rσ]. We write t → u iff t →_s u for some s ∈ RO(t). Finally, t →* u is called a reduction sequence, where →* is the reflexive transitive closure of →. We use the notation A: t →_s u in order to attach a name (in this case A) to a simple reduction, and similarly B: t →* u for sequences. Λ ambiguously denotes all empty reduction sequences. As with paths, we use the notation A.B to denote the concatenation of sequences A and B. Such concatenation is meaningful only if the last term in A is identical to the first term in B. The set NF_L ⊆ T_Σ is the set of normal forms: t ∈ NF_L iff RO_L(t) = ∅. If t →* u in S and u ∈ NF_L, then t is said to normalize to u in S, written u = NORM_S(t).

P ⊆ Paths(t) is said to be an independent set of paths in t iff all pairs of distinct paths in P are independent. If P ⊆ RO_L(t) then P is called an independent set of redex occurrences, written P ∈ ISRO_L(t). The significance of independent sets of redices is that the order in which they are reduced is immaterial to the result obtained by reducing them all. This permits us to introduce the notion of a multireduction.
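The machinery of this subsection is easy to make concrete. The following Haskell sketch (our representation, not the paper's) fixes a term datatype and implements paths, subterm access and replacement, matching of a linear pattern, and the set RO(t) of redex occurrences; the other sketches later in the paper reuse these definitions.

    import           Data.Maybe (isJust)
    import qualified Data.Map as M

    -- Terms over a ranked alphabet; variables occur only in patterns.
    data Term = Var String | App String [Term]  deriving (Eq, Show)

    -- A path is a string of (1-based) argument positions; [] plays the role of Λ.
    type Path = [Int]

    -- t/p : the subterm of t reached by p, if p ∈ Paths(t).
    subterm :: Term -> Path -> Maybe Term
    subterm t          []       = Just t
    subterm (App _ ts) (k : ks)
      | k >= 1 && k <= length ts = subterm (ts !! (k - 1)) ks
    subterm _ _                  = Nothing

    -- t[p <- w] : replace the subterm at p by w.
    replaceAt :: Term -> Path -> Term -> Term
    replaceAt _          []       w = w
    replaceAt (App f ts) (k : ks) w =
      App f [ if i == k then replaceAt ti ks w else ti | (i, ti) <- zip [1 ..] ts ]
    replaceAt t _ _ = t

    type Subst = M.Map String Term

    -- Match a linear pattern against a ground term; linearity means the
    -- bindings collected from the arguments can simply be unioned.
    match :: Term -> Term -> Maybe Subst
    match (Var x)    t = Just (M.singleton x t)
    match (App f ps) (App g ts)
      | f == g && length ps == length ts =
          fmap M.unions (sequence (zipWith match ps ts))
    match _ _ = Nothing

    isRedex :: [Term] -> Term -> Bool           -- the base is given as its patterns
    isRedex base t = any (\l -> isJust (match l t)) base

    -- RO(t): all paths whose subterm is a redex.
    redexOccurrences :: [Term] -> Term -> [Path]
    redexOccurrences base = go []
      where
        go p u@(App _ ts) = [ p | isRedex base u ]
                            ++ concat [ go (p ++ [i]) ti | (i, ti) <- zip [1 ..] ts ]
        go _ (Var _)      = []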

Definition 3: Let Q ∈ ISRO(t), and let {q_1,..,q_m} be an arbitrary enumeration of Q, where t/q_i = l_iσ_i, <l_i,r_i> ∈ S, 1 ≤ i ≤ m. Let t_1 = t, and t_{i+1} = t_i[q_i ← r_iσ_i], 1 ≤ i ≤ m. We say that the multireduction t →_Q t_{m+1} occurs in S. The term t_{m+1} is clearly independent of the enumeration of Q. ∎

We treat a multireduction as a sequence of simple reductions whenever convenient. Although the notion of reduction sequence generalizes naturally to that of a multisequence, we do not need the generalization. In fact, our only serious use of multireductions occurs in Proposition 35, where it is necessary to deal with the forward migration of residual redices in rearranging a reduction sequence.

The notion of residuals, which is introduced next, is crucial in analysing the behavior of sequences of reductions. Residuals describe the way in which different parts of an expression are affected by a simple reduction. The left-hand side pattern in the redex is destroyed. Parts of the expression that are reached by extensions of the redex occurrence going beyond the pattern are rearranged according to the right-hand side. A possible rearrangement is disappearance, which is how outermost reductions sometimes make divergent inner expressions disappear. The rest of the expression is essentially unaffected.

Definition 4: Suppose A: t →_a u, where t/a = lσ, <l,r> ∈ S. Let q ∈ Paths(t). The set q\A, called the residuals of q after A, is defined as follows:

(1) If a ≰ q then q\A = {q}.
(2) If a = q then q\A = ∅.
(3) Otherwise, let q = a.s. If s ∈ CPaths(l) then q\A = ∅. Otherwise there is an s' ∈ XPaths(l) such that s' ≤ s. Let s = s'.w, x = l/s', and R = {v | r/v = x}. Then q\A = {a.v.w | v ∈ R}. ∎

Proposition 5: If A: t →_a t' and q ∈ Paths(t), then q\A is an independent set of paths.
Proof: Obvious from the definition of q\A. ∎

The following Proposition is a succinct statement of the reason for excluding "critical pairs" in Definition 1.

Proposition 6: q ∈ RO(t) and A: t → u imply that q\A ⊆ RO(u).
Proof: Straightforward by condition (3) in Definition 1. ∎

Suppose A: t → u and Q ⊆ Paths(t). Then the set Q\A is simply the union of the q\A over q ∈ Q. This is extended to sequences of reductions by composition, i.e.,
    Q\Λ = Q
    Q\(B.C) = (Q\B)\C
If q ∈ Paths(t), we write q\B instead of {q}\B.

Proposition 7: If P ∈ ISRO(t) and A: t → t', then P\A ∈ ISRO(t').
Proof: Simple consequence of Propositions 5 and 6. ∎

We conclude this section by stating a fundamental property of our class of confluent FOES. This property is called "closure" by O'Donnell (1977), and the "parallel moves lemma" by Huet and Levy (1979). Another appropriate name would be the "one-step confluence" property.
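Definition 4 is mechanical enough to transcribe directly. The sketch below computes q\A for a reduction at occurrence a with rule (l, r), reusing the Term and Path types from the sketch above; it is our reading of the definition, not code from the paper.

    import Data.List (isPrefixOf)

    -- q \ A, where A reduces the redex at path a using the rule l -> r.
    residuals :: (Term, Term) -> Path -> Path -> [Path]
    residuals (l, r) a q
      | not (a `isPrefixOf` q) = [q]     -- case (1): q is outside or above the redex
      | q == a                 = []      -- case (2): the reduced occurrence itself
      | otherwise              =         -- case (3): q = a.s lies inside the redex
          case varPrefix l (drop (length a) q) of
            Nothing     -> []                               -- s is inside the destroyed pattern
            Just (x, w) -> [ a ++ v ++ w | v <- occs x r ]  -- copied under each occurrence of x in r
      where
        -- the variable position s' of l with s' <= s, if one exists (unique by linearity)
        varPrefix (Var x)    s         = Just (x, s)
        varPrefix (App _ ts) (k : ks)
          | k >= 1 && k <= length ts   = varPrefix (ts !! (k - 1)) ks
        varPrefix _          _         = Nothing

        -- the occurrences of a variable in the right-hand side
        occs x (Var y) | x == y = [[]]
        occs x (App _ ts)       = concat [ map (i :) (occs x ti) | (i, ti) <- zip [1 ..] ts ]
        occs _ _                = []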

Lemma 8: Let A: t →_P u and B: t →_Q v be multireductions. Let R = P\B and S = Q\A, and suppose u →_S w. Then v →_R w.
Proof: By Proposition 7, S ∈ ISRO(u) and R ∈ ISRO(v). For the rest, see the proof of Lemma 11 in Huet (1977) or the proofs of Lemma 12 and Theorem 17 in O'Donnell (1977). ∎

3.1.2 Definitions

The first concept in need of a definition is the concept of sequential evaluation itself. We are interested in sequential evaluation because we believe it embodies the prerequisites for efficient evaluation strategies for FOES. As such, our definition excludes the possibility of using look-ahead and/or memory in choosing redices. A (redex) selection algorithm A is a function of a base L and a ground term t such that A(L,t) either fails or returns some q ∈ RO_L(t). If L is fixed, the specialized or "curried" version of A is denoted by A_L.

Definition 9: Given a selection algorithm A and t ∈ T_Σ, the evaluation sequence for t produced by A is the reduction sequence t_0 → t_1 → .. → t_n → .. such that:
(1) t_0 = t;
(2) if A_L(t_n) fails then the sequence terminates in t_n, else A_L(t_n) returns r and t_n →_r t_{n+1}.
The sequence is denoted by O_{A,S}(t). If the sequence terminates in t_n for some n, then we write O_{A,S}(t)↓ = t_n, else we write O_{A,S}(t)↑. ∎

Definition 10: A selection algorithm A is said to be safe for a system S based on L iff given any u ∈ NF_L and any t such

that u = NORM_S(t), O_{A,S}(t)↓ = u. ∎

Definition 11: A base L is said to be sequential iff there is a selection algorithm A which is safe for every S based on L. In this case, A is said to sequentialize L. ∎

So far we have been speaking of confluent FOES in general. We now introduce the class of constructor FOES which forms the subject of our study in this paper.

Definition 12: Suppose L = {l_i, 1 ≤ i ≤ m}. Let l_i = f_i(u_i1,..,u_in), 1 ≤ i ≤ m, Γ_L = {f_i, 1 ≤ i ≤ m}, and Δ_L = Σ - Γ_L. Δ_L is called the set of constructors of L. A term f(u_1,..,u_m) is said to be a Δ_L-pattern iff f ∈ Γ_L and each u_i is a Δ_L-term, 1 ≤ i ≤ m. L is said to be a constructor base iff every l ∈ L is a Δ_L-pattern. A system based on such an L is called a constructor system. ∎

Notice that, by definition, applications of constructors are always root-stable. In the following discussion we assume that our fixed base L is a constructor base, and that the alphabet Σ is accordingly partitioned into Γ and Δ, where Δ is the set of constructors. We now have a precise statement of our problem. We need
(1) a decision procedure to identify sequential constructor bases, and
(2) a redex selection algorithm ("computation rule") that sequentializes all sequential constructor bases.
We still need a precise formulation of the demand driven approach outlined in Section 2. The critical notion is that

of a necessary redex, which is intuitively a redex that helps narrow down the set of equations compatible with the given term. The identification of compatible equations is based on the known structure of the given term, which is inferred using the fact that applications of constructors are root-stable. Given a ground term t, u = KNOWN(t) is the largest linear Δ-term or Δ-pattern such that u ≤ t. Note that the set of all linear Δ-terms or Δ-patterns w such that w ≤ t is nonempty and finite, and hence contains a largest member by the properties of UNIFY. The variables in KNOWN(t) are assumed to be new. For a ground term t, define
    L_t = {l ∈ L | UNIFY(l, KNOWN(t)) succeeds},
the set of equations compatible with t. Let PDNO(t) = XPaths(KNOWN(t)). Each p ∈ PDNO(t) is said to be potentially directly necessary for t. If, in addition, p ∈ CPaths(l) for each l ∈ L_t, then p is said to be directly necessary for t, written p ∈ DNO_L(t). The notion of necessary redex occurrences essentially iterates directly necessary occurrences until a redex is reached. The exception is the situation noted in Section 2, when root-stable outermost applications of defined functions must be taken into account. The structure of the definition below caters more to the needs of later proofs by structural induction than to intuitive transparency.

Definition 13: p ∈ RO_L(t) is said to be necessary for t, written p ∈ NRO_L(t), iff one of the following holds:

(1) p = Λ or p ∈ DNO_L(t);
(2) q ∈ DNO_L(t), p = q.r, and r ∈ NRO_L(t/q);
(3) there is a q ∈ DNO_L(t) such that RO_L(t/q) = ∅, and p = r.s with r ∈ PDNO(t) and s ∈ NRO_L(t/r). ∎

Example 8 illustrates these definitions.

Example 8.
    L = {f(X,a,b), f(b,X,a), f(a,b,X)}
    t = f(a,f(b,a,f(a,a,b)),f(b,b,a))
    u = t/2 = f(b,a,f(a,a,b))

    KNOWN(t) = f(a,Y1,Y2)        KNOWN(u) = f(b,a,Y3)
    PDNO(t)  = {2,3}             PDNO(u)  = {3}
    DNO(t)   = {2}               DNO(u)   = {3}
    RO(t)    = {2.3, 3}          RO(u)    = {3}
    NRO(t)   = {2.3}             NRO(u)   = {3}

Our whole approach rests on the assumption that a necessary redex can always be found in a reducible expression. This is clearly not the case for all FOES, and our conjecture is that only those FOES which satisfy this assumption are sequential in the sense of Definition 11. We say that the bases of such FOES are strictly sequential.

Definition 14: L is said to be strictly sequential iff whenever RO_L(t) is nonempty, NRO_L(t) is also nonempty. ∎

It is clearly straightforward to write an algorithm that, given a strictly sequential base and a reducible expression, will always return a necessary redex occurrence.
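The functions KNOWN, PDNO and DNO are also straightforward to realize. The sketch below (our code, reusing Term, Path and subterm from the earlier sketch) takes the set Γ of defined symbols and the base as explicit arguments; fresh variable names are manufactured from the path, which is enough because KNOWN(t) is linear. Applied to Example 8 it reproduces KNOWN(t) = f(a,Y1,Y2), PDNO(t) = {2,3} and DNO(t) = {2}, up to the choice of variable names.

    -- KNOWN(t): keep the root and all constructor structure, cutting at every
    -- (non-root) application of a defined symbol.
    known :: [String] -> Term -> Term
    known defined = go []
      where
        go p (App f ts)
          | null p || f `notElem` defined =
              App f [ go (p ++ [i]) ti | (i, ti) <- zip [1 ..] ts ]
        go p _ = Var ("Y" ++ concatMap show p)          -- unknown structure

    -- XPaths of a term: the positions of its variables.
    varPaths :: Term -> [Path]
    varPaths (Var _)    = [[]]
    varPaths (App _ ts) = concat [ map (i :) (varPaths ti) | (i, ti) <- zip [1 ..] ts ]

    -- Two linear terms with disjoint variables unify iff they never disagree
    -- on a function symbol, so compatibility reduces to a simple walk.
    compatible :: Term -> Term -> Bool
    compatible (Var _)    _          = True
    compatible _          (Var _)    = True
    compatible (App f ts) (App g us) =
      f == g && length ts == length us && and (zipWith compatible ts us)

    -- DNO(t): the variable positions of KNOWN(t) at which every compatible
    -- pattern demands constructor structure.
    dno :: [String] -> [Term] -> Term -> [Path]
    dno defined base t = [ p | p <- varPaths k, all (demandsAt p) compat ]
      where
        k      = known defined t
        compat = [ l | l <- base, compatible l k ]
        demandsAt p l = case subterm l p of
                          Just (App _ _) -> True
                          _              -> False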

It remains to show that:
(1) strict sequentiality is decidable, and
(2) ordinary and strict sequentiality coincide for constructor bases.

3.2 The Decidability of Strict Sequentiality

Our decision procedure Check is given in Algorithm 1. Check uses the notation that, for each f ∈ Γ with arity k, z_f = f(x_1,..,x_k) where all x_i ∈ X are new, L_f = {l ∈ L | z_f ≤ l}, and P_f = {1,..,k}. As the structure of Check indicates, strict sequentiality is a property of the group of equations defining each individual function. In fact, fcheck is simply an abstract version of a translator that transforms each such group into a single equation involving nested conditionals, selectors, and predicates. The path p chosen in each call of fcheck corresponds to the argument, or part of an argument, whose structure needs to be queried next, given that the structure of the application of f discovered so far corresponds to z. The argument z plays no part in the decision procedure; its presence is solely a device to facilitate statements and proofs of the properties of Check. An example of a base rejected by Check is found in Example 9 below.

We first prove a few simple technical facts about Check. A call fcheck(L,P,z) is said to be legitimate iff it results from the call Check(L).

Proposition 15: In any legitimate call fcheck(L,P,z) the

    Function Check(L)
        For each f ∈ Γ, let s_f = fcheck(L_f, P_f, z_f)
        Return the conjunction of the s_f, f ∈ Γ

    Function fcheck(L, P, z)
        If |L| ≤ 1 Then Return True
        Else
            Let Q = {q ∈ P | q ∉ XPaths(l) for any l ∈ L}
            If Q = ∅ Then Return False
            Else
                Choose any p ∈ Q
                Partition L into the sets L_c, c ∈ Δ, where l ∈ L_c iff
                    l/p ≥ c(x_1,..,x_k), k ≥ 0, and each x_i is a new variable
                Correspondingly, for each c, let
                    P_c = (P - {p}) ∪ {p.j | 1 ≤ j ≤ k}
                    z_c = z[p ← c(y_1,..,y_k)], where each y_i is a new variable
                Return the conjunction of fcheck(L_c, P_c, z_c), c ∈ Δ

                                 Algorithm 1

following statements hold:
(1) P ⊆ Paths(l) for each l ∈ L.
(2) z ≤ l for every l ∈ L.
(3) For each pattern l of the full base that is not in L, UNIFY(l,z) fails.
(4) P = XPaths(z).
(5) |L| > 1 implies P ≠ ∅.
(6) If fcheck(L',P',z') is also a legitimate call, then

either UNIFY(z,z') fails, or z and z' are not independent.
Proof: Assertion (5) follows from (2) and (4). The proofs of the rest proceed by induction on the length of the calling chain resulting in the call fcheck(L,P,z).
Basis: The length is 1, and the call is fcheck(L_f,P_f,z_f) for some f. All assertions except (6) are obviously true. For (6), it is sufficient to note that if a call fcheck(L_1,P_1,z_1) results eventually from a call fcheck(L_2,P_2,z_2), then z_1 ≥ z_2, and also that UNIFY(z_f,z_g) fails whenever f ≠ g.
Induction: The inductive assumption is that the assertions hold for all calls with calling chains of length n ≥ 1. Suppose the call fcheck(L,P,z) has a chain of length n+1. Clearly, the call results immediately from another call with chain length n, and all assertions hold for the latter by the inductive assumption. The inductive step is now straightforward for all assertions except (6). For (6), the same observations as in the basis case suffice. ∎

Proposition 16: Check(L) terminates.
Proof: Let zlength(z) be the sum of length(p) over p ∈ CPaths(z). By (2) in Proposition 15, zlength(z) has a finite upper bound in any legitimate call fcheck(L,P,z) unless L = ∅. Moreover, zlength(z_c) > zlength(z) by (4) in Proposition 15. Hence each legitimate call fcheck(L,P,z) terminates, either because L = ∅ or because each chain of recursive calls issuing from it is bounded by the limit on the monotonically increasing quantity zlength(z). ∎
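For concreteness, here is a Haskell rendering of Check (our code, reusing Term, Path and subterm from the earlier sketches). The argument z of fcheck is dropped, since it serves only the proofs; the choice of p is resolved by taking the first eligible position, which is enough for a yes/no answer, and constructors with empty L_c are omitted because they contribute True to the conjunction anyway. On the base of Example 8 this returns False, as Example 9 requires.

    import Data.List (nub)

    checkBase :: [Term] -> Bool
    checkBase base = and [ fcheck ls [ [i] | i <- [1 .. arity ls] ] | ls <- groups ]
      where
        roots  = nub [ f | App f _ <- base ]
        groups = [ [ l | l@(App g _) <- base, g == f ] | f <- roots ]
        arity (App _ ts : _) = length ts
        arity _              = 0

    fcheck :: [Term] -> [Path] -> Bool
    fcheck ls ps
      | length ls <= 1 = True
      | null qs        = False
      | otherwise      =
          and [ fcheck [ l | l <- ls, rootAt l == Just c ]
                       (filter (/= p) ps ++ [ p ++ [j] | j <- [1 .. arityAt c] ])
              | c <- nub [ c | Just c <- map rootAt ls ] ]
      where
        qs = [ q | q <- ps, all (\l -> isCon (subterm l q)) ls ]
        p  = head qs                              -- "choose any p in Q"
        isCon (Just (App _ _)) = True
        isCon _                = False
        rootAt l  = case subterm l p of
                      Just (App c _) -> Just c
                      _              -> Nothing
        arityAt c = head [ length ts | l <- ls, Just (App c' ts) <- [subterm l p], c' == c ]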

The reason Check guarantees strict sequentiality is that, whenever we have a term t that is neither irreducible nor a redex, the path p chosen in the call fcheck(L,P,z) with the largest z compatible with t reaches a directly necessary subterm of t. This is the intuitive idea behind the next Proposition.

Proposition 17: Check(L), t ∉ NF_L, and Λ ∉ RO(t) together imply DNO(t) ≠ ∅.
Proof: Suppose the antecedents hold. If L_t = ∅, then the condition on L_t in the definition of DNO(t) is vacuous, and it is easy to see that in that case RO(t) ≠ ∅ implies DNO(t) ≠ ∅. Suppose L_t ≠ ∅. Let Z be the set of Δ-patterns u such that u ≤ t and fcheck(M,R,u) is a legitimate call for some M and R. Since L_t ≠ ∅, Z is clearly nonempty, and also finite. By (6) in Proposition 15, Z is linearly ordered, and hence contains a maximal element z, which occurs in some call fcheck(L,P,z). Let Q = {q ∈ P | q ∉ XPaths(l) for any l ∈ L}. By (3) in Proposition 15, L = L_t. By the maximality of z, Q ⊆ XPaths(KNOWN(t)) = PDNO(t). By (1) in Proposition 15 and the above considerations, we have Q ⊆ DNO(t). It remains to show that Q ≠ ∅. If |L| > 1 then, since Check(L) holds, we have Q ≠ ∅. Since L_t ≠ ∅, the only other possibility is that L = {l}. Since Λ ∉ RO(t), l ≰ KNOWN(t). Since P ⊆ Paths(l) and P = XPaths(z) by Proposition 15, if Q = ∅ then the only way z ≤ l is if z = l. However, by (2) in Proposition 15, z ≤ l, hence Q = ∅ implies

z = l and l ≤ t, contradicting the assumption that Λ ∉ RO(t). ∎

It only remains to iterate this fact by structural induction.

Lemma 18: Check(L) implies that L is strictly sequential.
Proof: We must show that whenever RO(t) is nonempty, NRO(t) is also nonempty. The proof proceeds by induction on the structure of t.
Basis: t ∈ Γ is a constant, in which case RO(t) = NRO(t) = {Λ}.
Induction: The inductive assumption is that the required property holds for all proper subterms of t. If t is a redex then Λ ∈ NRO(t). Suppose it is not, and let r ∈ RO(t). By Proposition 17 above we have DNO(t) ≠ ∅. Suppose p ∈ DNO(t). If RO(t/p) ≠ ∅ then by the inductive assumption there is a q ∈ NRO(t/p), and p.q ∈ NRO(t). If RO(t/p) = ∅ then let s ∈ Paths(t) be the nonempty path such that r = s.w and s ∈ PDNO(t). By the inductive assumption NRO(t/s) ≠ ∅, and therefore, by (3) in Definition 13, NRO(t) ≠ ∅. ∎

We show that Check is a complete decision procedure by defining a function "strange" which constructs a counterexample showing that L is not strictly sequential whenever Check(L) fails.

Definition 19: If a legitimate call fcheck(L,P,z) directly returns false, being unable to find a suitable p ∈ P, then strange(L,P,z) is a term defined as follows. Let L = {l_0,..,l_m}, m ≥ 1. Partition P into P_0,..,P_m such that P_i ⊆ XPaths(l_i); this is possible by the failure condition in fcheck. Let V_i = {z/p | p ∈ P_i}. By (4) in Proposition 15,

each V_i ⊆ X. Let nx(i) = (i+1) mod (m+1). Define a substitution σ such that
    σ(x) = if x ∈ V_i then a simple ground instance of l_nx(i), else x,
where a simple ground instance of a term is an instance in which each variable is replaced by a normal form. Note that σ is well-defined since the V_i are disjoint by the construction of z. strange(L,P,z) =def zσ. ∎

Example 9.
    Let L be as in Example 8. Clearly, fcheck(L_f,P_f,z_f) fails directly, and thus strange(L_f,P_f,z_f) is well-defined. We have
        P = P_f = {1,2,3},   z = z_f = f(Z0,Z1,Z2)
        l_0 = f(X,a,b),   l_1 = f(b,X,a),   l_2 = f(a,b,X)
    Therefore,
        P_0 = {1},   P_1 = {2},   P_2 = {3}
        V_0 = {Z0},  V_1 = {Z1},  V_2 = {Z2}
    With straightforward choices for simple ground instances,
        strange(L,P,z) = f(f(b,a,a),f(a,b,a),f(a,a,b)).

Proposition 20: Whenever w = strange(L,P,z) is well-defined,
(1) w is a ground term;
(2) P = RO(w) ≠ ∅;
(3) DNO(w) = ∅;
(4) NRO(w) = ∅.
Proof: (1) is a consequence of (4) in Proposition 15. The P = RO(w) part of (2) merely asserts that Λ ∉ RO(w). To see this, note that since w is well-defined, |L| > 1. Moreover,

Λ ∈ RO(w) would mean that there is an l ∈ L such that l ≤ z, and by (2) in Proposition 15, l = z. This would imply, again by (2) in Proposition 15, that L is not an independent set, contradicting (1) in Definition 1. P ≠ ∅ follows from (5) in Proposition 15, since |L| > 1. By the construction of w, KNOWN(w) = z, hence by (2) and (3) in Proposition 15, L = L_w. Each q ∈ PDNO(w) = XPaths(z) reaches a variable in some l ∈ L = L_w by the construction of w, hence DNO(w) = ∅. Since PDNO(w) = XPaths(z) = P = RO(w) by (4) in Proposition 15 and (2) above, NRO(w) = ∅ is obvious. ∎

Lemma 21: Check(L) = false implies that L is not strictly sequential.
Proof: Check(L) = false implies that some legitimate call fcheck(L,P,z) fails directly, and therefore w = strange(L,P,z) is well-defined. Assertions (2) and (4) in Proposition 20 then imply that L is not strictly sequential. ∎

This concludes the demonstration of the decidability of strict sequentiality.

Theorem 22: Check(L) iff L is strictly sequential, i.e., Check is a decision procedure for strict sequentiality of constructor bases.
Proof: Immediate consequence of Lemmas 18 and 21. ∎

3.3 The Equivalence of Ordinary and Strict Sequentiality

The first part of this proof shows that every strictly sequential system is sequential. The proof proceeds in two steps. The first step is to show that all demand driven computations for normalizable terms are essentially the

same. The second step is to show that the normal form equivalent of each normalizable term can be obtained by some demand driven computation. Together, the two steps show that any demand driven computation obtains the normal form equivalent of a normalizable term. We first need to establish certain important properties of necessary redex occurrences:
(1) a necessary redex is always an outermost redex (Proposition 25),
(2) necessary redex occurrences persist as such until they are reduced (Lemma 29), and
(3) in a strictly sequential system, the last redex to be reduced before reaching normal form is always a necessary one (Lemma 31).
Note that (3) is really a corollary of (2). In proving (1) and (2), we start with properties of DNO, and extend them to NRO by structural induction.

Proposition 23: Λ ∈ RO(t) implies DNO(t) = ∅.
Proof: Λ ∈ RO(t) implies l ≤ KNOWN(t) for some l ∈ L_t, and therefore PDNO(t) ∩ CPaths(l) is clearly empty. ∎

Corollary 24: Λ ∈ RO(t) implies NRO(t) = {Λ}. ∎

Proposition 25: If p ∈ NRO(t) and s ∈ RO(t), then s is not a proper prefix of p.
Proof: By induction on the structure of t. The basis case, in which t is a constant, is trivial. For the inductive step, assume that the proposition holds for all proper subterms of t. There are three cases, according to Definition 13.
Case 1: p = Λ or p ∈ DNO(t). Recall that if p ∈ DNO(t) then, by

Proposition 23, Λ ∉ RO(t), hence s ≠ Λ. The rest follows from the fact that p ∈ PDNO(t).
Case 2: q ∈ DNO(t), p = q.r, and r ∈ NRO(t/q). We have Λ ∉ RO(t) as before. By Corollary 24, if q ∈ RO(t) then p = q. Therefore s ≠ Λ and s ≠ q. The rest follows by the inductive assumption.
Case 3: Similar to Case 2. ∎

Corollary 26: NRO(t) ∈ ISRO(t). ∎

Proposition 27: Suppose p ∈ DNO(t), A: t →_r u, and p ≠ r. Then p\A = {p} and p ∈ DNO(u).
Proof: Since DNO(t) ≠ ∅, Proposition 23 gives r ≠ Λ. By the antecedents and the definition of DNO, r ≤ p is impossible, hence p\A = {p}. Moreover, since r ≠ Λ, L_u ⊆ L_t, and p ∈ DNO(u). ∎

Proposition 28: p ∈ NRO(t), A: t →_s u, and p ≠ s together imply that p\A = {p} and p ∈ NRO(u).
Proof: By Proposition 25 and the assumption p ≠ s, s ≤ p is impossible, therefore p\A = {p}; moreover, s ≠ Λ. That p ∈ NRO(u) is then straightforward by analysing the cases of Definition 13, using Proposition 27. ∎

A sequence B: t →* u is said to preserve p ∈ RO(t) iff p\B = {p}.

Lemma 29: Suppose A: t →* u and r ∈ NRO(t). Then either A preserves r and r ∈ NRO(u), or A = A1.A2.A3, where A1: t →* v preserves r, A2: v →_r w, and A3: w →* u.
Proof: By induction on |A|. The basis |A| = 0 is trivial. Suppose |A| = n+1. Let A = A1.A2, where A1: t →_s t_1 is the first reduction in the sequence. By Proposition 28, either r = s, or r\A1 = {r} and r ∈ NRO(t_1). The rest follows by the

inductive assumption. ∎

Corollary 30: If q ∈ NRO(t), A: t →_Q u is a multireduction, and q ∉ Q, then q\A = {q} and q ∈ NRO(u). ∎

Lemma 31: In a strictly sequential system, if A: t →_r u and u ∈ NF_L, then r ∈ NRO(t).
Proof: By strict sequentiality, NRO(t) is not empty. By Lemma 29, each member of NRO(t) is either reduced or preserved, and since u is irreducible, it must be the former. ∎

A reduction sequence t_0 → t_1 → .. → t_i → .. is said to be normal iff each simple reduction in it is a necessary one, i.e., iff in each reduction t_i →_s t_{i+1} in the sequence, s ∈ NRO(t_i). Λ is normal by convention. Normal sequences are simply the formal version of demand driven computations in our context. Using the properties of necessary redex occurrences, we now establish the properties of normal sequences.

Proposition 32: If A: t →* u is normal, u ∈ NF_L, r ∈ NRO(t), and t →_r t', then there is a normal sequence B: t' →* u such that |B| = |A| - 1.
Proof: Since u ∈ NF_L, A does not preserve r. Therefore, by Lemma 29, A = A1.A2.A3 such that A1: t →* v preserves r and A2: v →_r w. Proceed by induction on |A1|.
Basis: |A1| = 0. Trivial.
Induction: Assume the result for |A1| = n, and suppose |A1| = n+1. Let A1 = A11.A12, where A11: t →* v' and A12: v' →_s v. Clearly, |A11| = n. Since A1 preserves r, r ∈ NRO(v'), and since A1 is normal, s ∈ NRO(v'); therefore r and s are independent. The order of

reductions using r and s may therefore be reversed without affecting the result, i.e., v' →_r v'' →_s w. The rest follows by applying the inductive assumption to A11. ∎

Proposition 33: If A: t →* u and B: t →* w are normal sequences, u ∈ NF_L, and |A| = |B|, then u = w.
Proof: Straightforward by induction on |A| using Proposition 32. ∎

Lemma 34: If A: t →* u and B: t →* w are normal sequences in a strictly sequential system, and u ∈ NF_L, then there is a normal sequence D = B.C such that |D| = |A| and D: t →* u.
Proof: By Proposition 33, |B| > |A| is impossible, and whenever |B| < |A|, w ∉ NF_L, hence NRO(w) ≠ ∅. B can therefore be extended as a normal sequence to the same length as A. Let this extension be C, and D = B.C. By Proposition 33, D: t →* u. ∎

Lemma 34 shows that all normal sequences for normalizable terms are essentially the same. The next Proposition contains our only real use of multireductions and the so-called parallel moves lemma (Lemma 8). The Proposition states that, in a strictly sequential system, any sequence which starts with an arbitrary multireduction and reaches normal form by a demand driven computation thereafter can be rearranged into an entirely demand driven computation. The essential idea is to migrate the redices in the initial multireduction forward as residuals until they either become necessary or disappear. The parallel moves lemma guarantees that the migration can take place

without changing the final result. The upper bound on the length of terminating normal sequences is crucial in ensuring that the migration process terminates.

Proposition 35: In a strictly sequential system, if A1: t →_S u is a multireduction, A2: u →* y is normal, and y ∈ NF_L, then there is a normal sequence B: t →* y.
Proof: The proof proceeds by induction on |A2|.
Basis: |A2| = 0. In this case, since S is an independent set, and hence may be reduced in any order, Lemma 31 implies that S ⊆ NRO(t), and B = A1.
Induction: Suppose |A2| = n+1. We may assume without loss of generality that S ∩ NRO(t) = ∅, since necessary redices in S could be included in A2. By strict sequentiality, NRO(t) ≠ ∅. Let r ∈ NRO(t). By Corollary 30, r\A1 = {r} and r ∈ NRO(u). By Proposition 32, if A3: u →_r u', then there is A4: u' →* y such that A4 is normal and |A4| = n. Let A5: t →_r t', R = S\A5, P = R ∩ NRO(t'), and Q = R - P. By Lemma 8, t' →_R u'. Therefore t' →_P t'' and A6: t'' →_Q u'. The rest follows by applying the inductive assumption to A6 and A4. ∎

Lemma 36: In a strictly sequential constructor system, if u = NORM(t) then there is a normal sequence B: t →* u.
Proof: Suppose A: t →* u. If |A| = 0 then B = A. Suppose |A| > 0. Let A2 be the longest normal suffix of A, and A = A1.A2. By Lemma 31, |A2| > 0. Proceed by induction on |A1|.
Basis: |A1| = 0; B = A2 = A.
Induction: Assume the result for |A1| = n, and suppose |A1| = n+1. Let A1 = A11.A12, where A11: t →* v and A12: v →_r w. By Proposition 35,

there is a normal sequence A3: v →* u. Applying the inductive assumption to A11 yields the rest. ∎

N is said to be a normal selection algorithm iff N(L,t) ∈ NRO_L(t) whenever NRO_L(t) ≠ ∅, and N(L,t) fails otherwise.

Lemma 37: Any normal selection algorithm N sequentializes all strictly sequential constructor bases.
Proof: Suppose L is strictly sequential, S is based on L, and u = NORM_S(t). By Lemma 36, there is a normal sequence B: t →* u. From Lemma 34 it follows that |O_{N,S}(t)| = |B| and O_{N,S}(t)↓ = u. ∎

Finally, if the decision procedure for strict sequentiality rejects a base, that base is necessarily nonsequential. The function strange is again used in constructing the necessary counterexample.

Lemma 38: If L is a constructor base and Check(L) = false, then L is not sequential.
Proof (sketch): Check(L) = false implies that w = strange(L,P,z) is well-defined for some L = {l_0,..,l_m}, P = P_0 ∪ .. ∪ P_m, and z, where m ≥ 1. Now suppose A sequentializes L. Clearly, w can be normalized in some system based on L, hence A(L,w) must succeed and return some path q ∈ P. Suppose q ∈ P_i. We now construct a system S based on L for which A is not safe. Let k = (i+1) mod (m+1), and let <l_k,l_k> ∈ S. Since w/q is an instance of l_k, O_{A,S}(w)↑. By the construction of w, q ∈ XPaths(l_i). Let <l_i,r> ∈ S, where r is any normal form. Clearly, w can be made to reduce to an instance of l_i by suitably

34 "filling in" S (including a suitable choice for r), since all instances of lk in w are reached by paths that reach variables in li, and the right-hand sides corresponding to left-hand sides other than li and lk are unconstrained given (3) in Definition 1. Thus, r=NORMs(w), and for A to be safe for S, we must have oA S(w)+=r. * Example 10 illustrates the construction suggested in the proof above. Example 10. Let w=strange(L,P,z) as constructed in Example 9. RO(w) = {1,2,3}. Suppose AL(w)=1ePO. Then i=0 and k=1. The required counterexample S is: f(X,a,b) = b, f(b,X,a) = f(b,X,a), f(a,b,X) = a Theotem 39: A constructor base is sequential iff it is strictly sequential. Ptoof: Straightforward consequence of Theorem 22, and Lemmas 37 and 38. * CoLottatc 40: Check is a decision procedure for the sequentiality of constructor bases. * CototLtar 41: Any normal selection algorithm sequentializes all sequential constructor bases. a 4. Applications The obvious application of the results of the previous section is an efficient evaluator for sequential FOES. A first effort in that direction is the (nondeterministic) normal selection algorithm Select given in Algorithm 2. Although Select can be used to produce normal sequences

which are optimal in the abstract world where only the reduction steps of a noncopying reducer count, its actual use is obviously very expensive, since the cost of finding a necessary redex far outweighs the cost of the reduction step itself.

    Function Select(L, t)
        Return get(∅, {Λ}, t)

    Function get(L, P, t)
        Let Q = {q ∈ P | q ∉ XPaths(l) for any l ∈ L}
        If Q = ∅ Then Fail
        Else
            Choose any p ∈ Q
            Let u = t/p
            If u is a redex Then Return p
            Else If u = g(u_1,..,u_k), k ≥ 0, g ∈ Γ Then
                Let q = get(L_g, P_g, u)
                If get is successful Then Return p.q
                Else Return get(L, P - {p}, t)
            Else (u = c(u_1,..,u_k), k ≥ 0, c ∈ Δ)
                Return get(L_c, P_c, t),
                    where L_c and P_c are defined as in fcheck

                                 Algorithm 2

The search cost can be reduced substantially if the search does not begin at the root of the entire expression each time. Instead, once a directly necessary subexpression is identified, it should be worked on until it is reduced to a root-stable form. Such a strategy reduces search cost by

increasingly localizing the search for the next redex. The function Evaluate in Algorithm 3 realizes this modification. Note that Evaluate is not a selection algorithm; it is an actual evaluator that returns the normal form equivalent of the given expression. The sequential system involved is S.

    Function Evaluate(t)
        Let w = Root_Stabilize(t)
        Let Q = PDNO(w)
        For each q ∈ Q Do w := w[q ← Evaluate(w/q)]
        Return w

    Function Root_Stabilize(t)
        Let t = f(t_1,..,t_n)
        Return Spin(L_f, P_f, t)

    Function Spin(L, P, t)
        Let Q = {q ∈ P | q ∉ XPaths(l) for any l ∈ L}
        If Q = ∅ Then
            t = lσ for some <l,r> ∈ S (see Proposition 17)
            Return Root_Stabilize(rσ)
        Else
            Choose some q ∈ Q
            Let w = Root_Stabilize(t/q) = c(w_1,..,w_m)
            Let v = t[q ← w]
            If c ∈ Γ Then Return v
            Else Return Spin(L_c, P_c, v),
                where L_c and P_c are defined as in fcheck

                                 Algorithm 3
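To round off the earlier sketches, the fragment below (again our code, reusing Term, Path, Subst, subterm, replaceAt and match from the sketch in Section 3.1.1) supplies the missing reduction step: applying a matched rule at a chosen occurrence, together with a naive driver that keeps reducing whatever a given selection function picks. It is not the paper's Evaluate; plugging in any normal selection algorithm merely gives an executable, if very inefficient, evaluator in the sense of Definition 9.

    applySubst :: Subst -> Term -> Term
    applySubst s (Var x)    = M.findWithDefault (Var x) x s
    applySubst s (App f ts) = App f (map (applySubst s) ts)

    -- Reduce the redex at path p with the first rule whose left-hand side matches.
    reduceAt :: [(Term, Term)] -> Term -> Path -> Maybe Term
    reduceAt rules t p = do
      u            <- subterm t p
      (sigma, rhs) <- firstMatch u
      return (replaceAt t p (applySubst sigma rhs))
      where
        firstMatch u = case [ (s, r) | (l, r) <- rules, Just s <- [match l u] ] of
                         m : _ -> Just m
                         []    -> Nothing

    -- Keep reducing whatever the selection function picks, until it fails.
    normalizeWith :: (Term -> Maybe Path) -> [(Term, Term)] -> Term -> Term
    normalizeWith select rules t =
      case select t >>= reduceAt rules t of
        Just t' -> normalizeWith select rules t'
        Nothing -> t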

The nondeterministic choice of an occurrence from the set Q in both Root_Stabilize and Evaluate suggests the possibility of safe parallel evaluation. Safe parallelism is usually associated with the notion of strictness. Conventionally, the strictness of a function with respect to an argument is understood to mean that it is safe to reduce that argument to normal form before applying the function. In our case, a finer-grained notion of strictness is appropriate: a function is strict with respect to an argument in this sense if it is safe to reduce that argument to root-stable form before attempting to apply the function. This notion of strictness is applicable to parts of arguments as well. We believe that our notion of strictness, and the parallelism it offers, may be practically significant, since root-stabilization is clearly a much more substantial operation than single-step reduction, coinciding with complete evaluation in the case of scalar types.

Other possible applications include translation of groups of equations into a single, logically equivalent one, in preparation for optimization as described by Hudak (1984), and safe translation of lazy equational programs into logic programming languages such as PROLOG (Subrahmanyam and You, 1983). Our results can also be used to derive a complete "semantic unification algorithm" for use in integrating logic and functional programming, as suggested by Subrahmanyam and You (1984). Although the formal results presented in Section 3 are

applicable to constructor FOES only, they can be extended to cover all (regular) bases using the simulation technique described by Thatte (1984). The basic idea of the technique is to treat the outermost function symbol in every root-stable term as a constructor. It is clearly possible to extend our Evaluate function to accommodate this change.

5. Related Work

Safe and optimal computations with recursive equations have been extensively investigated (Vuillemin, 1974, Berry and Levy, 1979, Cadiou, 1972, Wadsworth, 1971). Confluence and other properties of FOES are discussed by Huet (1977), O'Donnell (1977), and Rosen (1973). Much of this work has been influenced by the seminal study of confluence by Knuth and Bendix (1970). The issue of sequential evaluation with confluent FOES has been addressed by Hoffmann and O'Donnell (1979, 1984) and by Huet and Levy (1979). Hoffmann and O'Donnell's methods are applicable to a restricted class of FOES for which a preorder examination of redices without backtracking is safe. Huet and Levy have independently arrived at results similar to ours for regular FOES, using a much more abstract approach. In contrast to their work, we have chosen to tackle the simpler problem of constructor FOES using a direct constructive approach which we feel is more easily amenable to efficient implementation.

6. Conclusions

We have presented abstract algorithms for checking sequentiality, and for safe sequential, i.e., demand driven,

evaluation of expressions, in the context of constructor FOES. Clearly, much work remains to be done on methods of efficient implementation before the approach becomes a serious practical alternative. We believe that a suitable concrete implementation of sequential FOES must also be amenable to smooth integration with an implementation of a language with higher-order functions, since the two paradigms bring complementary strengths to applicative programming. We are currently investigating the possibility of using a generalization of the implementation technique based on combinators due to Turner (1979). More work is also needed to explore the applications discussed in Section 4, especially the possibility of using our refined notion of strictness in parallel evaluation. Finally, our results need to be extended to cover all regular FOES.

I am grateful to John Wiersba for suggesting a crucial element of the current version of the decision procedure Check.

References

Ashcroft, E.A., and Wadge, W.W. (1977), "Lucid, a Nonprocedural Language with Iteration," Communications of the ACM 20(7).

Backus, J. (1978), "Can Programming be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs," Communications of the ACM 21(8).

Barendregt, H.P. (1981), "The Lambda Calculus: Its Syntax and Semantics," North-Holland, Amsterdam.

Berry, G., and Levy, J-J. (1979), "Minimal and Optimal Computations of Recursive Programs," Journal of the ACM 26(1).

Burstall, R.M., MacQueen, D.B., and Sannella, D.T. (1980), "HOPE: An Experimental Applicative Language," Tech. Rep. CSR-62-80, Univ. of Edinburgh, Scotland.

Cadiou, J-M. (1972), "Recursive Definitions of Partial Functions and Their Computations," Ph.D. Dissertation, Stanford Univ.

Curry, H.B., and Feys, R. (1958), "Combinatory Logic," North-Holland, Amsterdam.

Friedman, D.P., and Wise, D.S. (1976), CONS Should Not Evaluate Its Arguments, in "Proc. 3rd International Colloquium on Automata, Languages and Programming," Univ. of Edinburgh, Scotland.

Goguen, J.A., and Tardo, J. (1979), An Introduction to OBJ: A Language for Writing and Testing Software Specifications, in "Proc. IEEE Conf. on Specifications of Reliable Software," pp. 170-189.

Henderson, P., and Morris, J.M. (1976), A Lazy Evaluator, in "Proc. 3rd ACM Symp. on the Principles of Programming Languages," Atlanta.

Hoffmann, C.M., and O'Donnell, M.J. (1979), An Interpreter Generator Using Tree Pattern Matching, in "Proc. 6th ACM Symp. on the Principles of Programming Languages," San Antonio.

Hoffmann, C.M., and O'Donnell, M.J. (1982), "Programming with Equations," ACM TOPLAS 4(1).

Hoffmann, C.M., and O'Donnell, M.J. (1984), Implementation of an Interpreter for Abstract Equations, in "Proc. 11th ACM Symp. on the Principles of Programming Languages," Salt Lake City.

Hudak, P., and Kranz, D. (1984), A Combinator-based Compiler for a Functional Language, in "Proc. 11th ACM Symp. on the Principles of Programming Languages," Salt Lake City.

Huet, G. (1977), Confluent Reductions: Abstract Properties and Applications to Term Rewriting Systems, in "Proc. 18th IEEE Symp. on Foundations of Computer Science," Providence, RI.

Huet, G., and Levy, J-J. (1979), "Computations in Nonambiguous Linear Term Rewriting Systems," Tech. Rep. 359, INRIA, Le Chesnay, France.

Kahn, G., and MacQueen, D.B. (1977), Coroutines and Networks of Parallel Processes, in "IFIP 77" (B. Gilchrist, Ed.), North-Holland, Amsterdam.

Knuth, D.E., and Bendix, P.B. (1970), Simple Word Problems in Universal Algebras, in "Computational Problems in Abstract Algebra" (J. Leech, Ed.), Pergamon Press.

Knuth, D.E. (1973), "The Art of Computer Programming, Vol. 3: Sorting and Searching," Addison-Wesley.

Kowalski, R. (1979), "Logic for Problem Solving," North-Holland, Amsterdam.

MacQueen, D.B. (1981), Structure and Parameterization in a Typed Functional Language, in "Conf. Record of the Symposium on Functional Languages and Computer Architecture," Lab. for Programming Methodology, Univ. of Goeteborg, Sweden.

Milner, R. (1978), "A Theory of Type Polymorphism in Programming," Journal of Computer and System Sciences, Vol. 17, pp. 348-374.

O'Donnell, M.J. (1977), "Computing in Systems Described by Equations," Lecture Notes in Computer Science 58, Springer-Verlag.

Robinson, J.A. (1965), "A Machine-oriented Logic Based on the Resolution Principle," Journal of the ACM 12, pp. 23-41.

Rosen, B.K. (1973), "Tree-manipulating Systems and Church-Rosser Theorems," Journal of the ACM 20(1).

Subrahmanyam, P.A., and You, J-H. (1983), "FUNLOG = Functions + Logic: A Computational Model Integrating Logic Programming and Functional Programming," Tech. Rep. UTEC-83-040, Univ. of Utah.

Subrahmanyam, P.A., and You, J-H. (1984), Pattern Driven Lazy Reduction: A Unifying Evaluation Mechanism for Functional and Logic Programs, in "Proc. 11th ACM Symp. on the Principles of Programming Languages," Salt Lake City.

Thatte, S.R. (1984), "On the Correspondence Between Two Classes of Reduction Systems," to appear in Information Processing Letters.

Turner, D.A. (1976), "SASL Language Manual," Tech. Rep., St. Andrews Univ., UK.

Turner, D.A. (1979), "A New Implementation Technique for Applicative Languages," Software Practice and Experience, Vol. 9.

Vuillemin, J. (1974), "Correct and Optimal Implementations of Recursion in a Simple Programming Language," Journal of Computer and System Sciences, Vol. 9.

Wadsworth, C. (1971), "The Semantics and Pragmatics of the Lambda Calculus," D.Phil. Dissertation, Oxford University, England.