THE UNIVERSITY OF MICHIGAN COLLEGE OF ENGINEERING Department of Nuclear Engineering

Technical Report

HALF-SPACE MULTIGROUP TRANSPORT THEORY IN PLANE GEOMETRY

John K. Shultis
Sergej Pahor
Paul F. Zweifel, Project Director

ORA Project 01046

supported by: NATIONAL SCIENCE FOUNDATION GRANT NO. GK-1713 WASHINGTON, D.C.

administered through: OFFICE OF RESEARCH ADMINISTRATION, ANN ARBOR

April 1968

ACKNOWLEDGMENTS

To the members of my doctoral committee I owe much, for it is from them I have acquired my knowledge of transport theory. By their encouragement and assistance they have made this thesis possible. In particular I am indebted to Professor Paul Zweifel, who has been far more than just a committee chairman. My most sincere appreciation is also given to Dr. S. Pahor, who has been the guiding influence of the present research. Finally, I would like to express my gratitude to the other members of my committee, Professors K. M. Case, G. C. Summerfield, and F. C. Shure, for their continued interest and encouragement.

This thesis has been made possible by the financial support in the form of fellowships from the Phoenix Memorial Project and the Rackham Graduate School. The National Science Foundation also provided research support during the final stages of this thesis (Grant GK-1713). To all of the above I will always be grateful.

In conclusion, I must also extend my appreciation to my wife, Susan, for her understanding and encouragement, and to my parents, without whose love and help I would not have progressed this far.

TABLE OF CONTENTS

Page
LIST OF TABLES v
LIST OF FIGURES vi
ABSTRACT vii

Chapter
I. INTRODUCTION 1
II. MULTIGROUP TRANSPORT EQUATION EIGENFUNCTIONS 8
  2.1 Infinite Medium Eigenvectors and Adjoint Eigenvectors 9
  2.2 Full-Range Completeness 15
  2.3 Orthogonality and Normalization of Eigenvectors 20
  2.4 Infinite Medium Green's Function 22
III. THE ALBEDO PROBLEM 24
  3.1 Generalized S-Function 25
  3.2 The Adjoint Albedo Problem 31
  3.3 Reduction of the Nonlinear U and V Equations 34
  3.4 Fredholm Equations for the U and V-Functions 41
  3.5 An Approximate Equation for U and V 45
IV. SYMMETRIC TRANSFER 48
  4.1 Multigroup Equations for Thermal Neutrons 49
  4.2 Discrete Eigenvalue Spectrum 55
  4.3 Simplification of the U and V Equations 58
  4.4 Uniqueness of Solution of the U-Equation 60
V. VARIOUS HALF-SPACE PROBLEMS 64
  5.1 The Milne Problem 65
  5.2 The Albedo Problem 71
  5.3 Half-Space Green's Function 73
VI. NUMERICAL RESULTS 76
  6.1 Iterative Solution for the U and V-Functions 77
  6.2 Milne Problem 84
VII. CONCLUSIONS 95

TABLE OF CONTENTS (Concluded)

Appendix Page
A. DISCRETE EIGENVALUES FOR SYMMETRIC TRANSFER 99
B. COMPUTER CODE LISTINGS 103
  B.1 Program MILNE 103
  B.2 Program GENUV 111
  B.3 Subroutine: AVEC(A,NUO,C,SIG,IG,AA,AX,LLL,LLLL) 114
  B.4 Subroutine: ROOT(FN,C,SIG,IG,XST,XEND,EPS1,EPS2,DSCR,BSCR) 116
  B.5 External Function: DIS(C,SIG,IG,Y,A,TN) 118
  B.6 Subroutine: PRT(U,V,IG,ITER) 119
  B.7 External Function: GAUX(AA,BB) 120
  B.8 Subroutine: MINV(A,N,D,L,MNC) 122
  B.9 Program FRED 125
REFERENCES 128

LIST OF TABLES

Table Page
I. Three-Group Macroscopic Cross Sections for Water (cm-1) 78
II. Six-Group Uranium Cross Sections 79
III. Discrete Eigenvalues 86
IV. Extrapolation Lengths 93

LIST OF FIGURES

Figure Page
6.1. Convergence of U-V iteration schemes. 83
6.2. Emergent Milne distribution (normalized to unit density) for Case I. 88
6.3. Emergent Milne distribution (normalized to unit density) for Case II. 89
6.4. Emergent Milne distribution (normalized to unit density) for Case III. 90
6.5. Emergent Milne distribution (normalized to unit density) for Case IV. 91
6.6. Normalized emergent Milne distribution for Case V. 92

ABSTRACT

In this dissertation exact solutions to the isotropic multigroup neutron transport equation for various half-space problems in plane geometry are obtained. These solutions are obtained by combining the techniques of Chandrasekhar's principle of invariance and Case's singular eigenfunction expansion, and extending them to the multigroup case. The main advantage of the present method over previous half-space techniques is that it is less complicated, and numerical evaluation, particularly for emergent distributions, is far simpler.

The albedo problem is found to be of fundamental importance for half-space problems, since the solutions of all the other problems can be expressed in terms of the albedo solution. The albedo problem is solved in two distinct steps. First the emergent distribution is calculated. By considering a singular eigenfunction expansion for the albedo problem, an inhomogeneous Fredholm equation for this emergent distribution is obtained. This equation is, however, difficult to evaluate numerically since it involves computation of the N-group eigenfunctions. By using Chandrasekhar's invariance principle, a nonlinear integral equation for the emergent distribution can be obtained. This equation is easily solved by numerical iteration. Once this emergent distribution is known, the known full-range completeness and orthogonality relations of the eigenfunctions are used to obtain the angular flux inside the half-space.

From the albedo problem solution, the solutions of the other half-space problems are obtained. As an example, a computer code has been developed which gives the emergent distribution to the Milne problem and its extrapolation length for any number of energy groups. Finally, for the special case of a symmetric transfer matrix (as is found in thermal neutron transport problems and certain radiative transfer cases) it is shown how the results for a general transfer matrix can be greatly simplified.
For this case, uniqueness of solution of the Fredholm equation, reality of the eigenvalues of the transport equation, and half-range completeness of the eigenfunctions can be easily proven.

CHAPTER I

INTRODUCTION

During the past three decades, much effort has been expended in investigating the neutron transport equation. The interest and importance of the properties of the transport equation are reflected by the vast number of papers and many comprehensive books on the subject. Yet, despite all the extensive research, exact solutions have been obtained only for specially idealized situations. Even with the present generation of high-speed computers, it has not been possible to solve numerically the neutron transport equation in its full generality.

Very few realistic problems are described well by a version of the transport equation which is sufficiently simplified to permit an exact analytical solution. For instance, reactor design requires the solution of an energy-dependent transport equation in a medium with often rapidly varying material properties; on the other hand, exact solutions of the transport equation have been obtained primarily for the one-speed approximation and physical situations no more complicated than adjacent half-spaces.

In the past few years the energy-dependent transport equation has begun to receive much attention. The present situation in the energy-dependent theory, however, is far less satisfactory than in the one-speed case. Analytic solutions have been obtained only for a very few idealized situations and for grossly oversimplified physical models. Whenever these analytic

methods are applied to realistic problems of practical technical interest, they usually become intractable or so time-consuming as to become worthless as a practical computational technique. As a result engineers have come to rely upon very crude approximations together with elaborate computational schemes. For example, reactor designers use almost exclusively the multigroup diffusion approximation for finding the neutron flux distribution in fast and intermediate reactors.

However, investigation of the energy-dependent transport equation does have some valid justification, aside from its own inherent beauty. The exact solutions of idealized problems serve as useful standards against which the rougher methods used in engineering can be compared. Further, the analytic study of the transport equation in a form which only roughly models some physical situation may give some previously unknown information about the physics of the actual problem. This is particularly true of problems which traditionally are treated by techniques that "hide" the inherent physics (e.g., computer simulations, Monte Carlo calculations, etc.). Finally, there is always the hope that one of the exact methods can in some approximate way be extended to an area of practical importance.

One of the most successful techniques in the monoenergetic treatment of the transport equation is the singular eigenfunction expansion method of Case.7 The generalization of this method to the continuous energy-dependent case in plane geometry has been done by Bednarz and Mika,8 and while quite general, it is highly formalistic and limited to infinite medium problems. To extend the

singular eigenfunction method to energy-dependent half-space problems, two different approximations have come into wide use. The first maintains the continuous energy dependence in the transport equation but assumes very special forms of the energy dependence of the cross sections. Exact solutions have so far been limited to non-multiplying media with degenerate scattering kernels9,10 or to the constant cross section case.11 For these cases, with two exceptions, the analytic solutions and the reduction of these solutions to numerics are highly non-trivial. These two special half-space problems are the constant cross section case and the completely separable kernel model.12-14 Both of these problems can be handled in a manner similar to the one-speed case.

An alternative approach for solving half-space problems is to use the multigroup formulation, which splits the continuous energy range into N distinct regions over which the cross sections are assumed constant. The eigenfunctions of the N-group isotropic transport equation and their "full-range completeness" property (whereby the solution of any infinite medium problem can be expanded uniquely in terms of these eigenfunctions) have been known for several years.15,16 Recently their full-range orthogonality relations and the solution of the infinite medium Green's function have been obtained for the two-group17 and N-group16 cases.

Until now, however, the solution of half-space problems has been limited to special cases. Several two-group problems have been investigated.18,19 In a paper on radiative transfer by Siewert and Zweifel, full-range and "half-range completeness" of the eigenfunctions (whereby the solution to any half-space problem can be expanded uniquely in terms of only half of the eigenfunctions)

for the N-group case were proved, with the specific restriction that the determinant of the transfer matrix, C, and all its minors vanished.20 Finally, Leonard and Ferziger have shown that the N-term degenerate kernel for thermal neutrons in a nonmultiplying medium can be reduced to the N-group transport equation with a symmetric transfer matrix.9,10 For this symmetric C case half-range completeness can be proved and, in principle, closed form solutions to half-range problems can be obtained, although thus far none have been.

The purpose of this work, therefore, is to consider half-space problems of the N-group isotropic transport equation with a completely arbitrary transfer matrix. All previous energy-dependent half-space investigations have depended fundamentally upon a half-range completeness theorem. Furthermore this theorem has always been constructive in the sense that it explicitly demonstrated how the expansion coefficients could be calculated from the given boundary conditions. For energy-dependent problems, such constructive theorems tend to be exceedingly complex. In view of this situation several authors have tried various alternative schemes to avoid the half-range formalism. Fuchs and Collatz21 and Zelazny22 have favoured an iteration scheme for slab problems based on full-range rather than on half-range expansions. Case also has developed a very general method which, when applied to half-space problems, completely avoids eigenfunction expansions altogether.23 In this method all that is required is a knowledge of the infinite medium Green's function.

Recently a different approach has been used by Pahor to circumvent the half-range expansion difficulties.24,25 By applying the invariance principles of Ambarzumian26 and Chandrasekhar27 to the singular eigenfunction expansion of the thermal

neutron degenerate kernel case, he derived the emergent angular distribution for half-space problems from a nonlinear integral equation. Once the emergent distribution is known, then the simpler full-range singular eigenfunction expansions can be used to obtain the complete solution. It is this approach which here is extended to the general multigroup situation.

The plan of the present research is as follows: Chapter II reviews the known results of the N-group infinite medium eigenfunctions and the adjoint eigenfunctions as developed by Zelazny15 and Yoshimura.16 The full-range orthogonality and completeness properties of these eigenfunctions are then used to solve the infinite medium Green's function problem.

The next two chapters deal with the emergent distribution of the albedo problem. Consideration of the N different albedo problems (one for source neutrons belonging to each energy group) leads to a generalized S-function. This matrix function assumes the same important role in multigroup theory as does the scalar Ambarzumian-Chandrasekhar S-function in the one-speed situation.27 From use of the principle of invariance, a non-linear integral equation for the S-matrix is derived. This equation demonstrates that the S-matrix, a function of two variables, can be decomposed into a product of two other matrices, U and V, each of which is a function of only one variable. The nonlinear integral equations for these U and V matrices, which are analogous to the H-function in the one-speed case, are amenable to solution by numerical means. Once the U and V matrices are known, the emergent distribution for the albedo problem can then easily be found.
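The one-speed H-function just mentioned shows, in miniature, why direct numerical iteration of such nonlinear equations is attractive. The sketch below is a minimal one-speed illustration, not the thesis's multigroup code; the value c = 0.5, the 64-node Gauss-Legendre grid, and the tolerances are arbitrary choices. It iterates Chandrasekhar's equation H(mu) = 1 + mu H(mu) Integral_0^1 (c/2) H(mu')/(mu + mu') dmu' to a fixed point and checks the classical zeroth-moment identity (c/2) Integral_0^1 H(mu) dmu = 1 - (1 - c)^{1/2}.

```python
import numpy as np

def h_function(c, n_nodes=64, tol=1e-12, max_iter=5000):
    """Solve Chandrasekhar's H-equation for isotropic scattering,
        H(mu) = 1 + mu*H(mu) * Int_0^1 (c/2)*H(mu')/(mu + mu') dmu',
    by direct fixed-point (Picard) iteration on a Gauss-Legendre grid."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    mu = 0.5 * (x + 1.0)          # map nodes from (-1,1) to (0,1)
    w = 0.5 * w                   # weights now sum to 1 on (0,1)
    H = np.ones(n_nodes)
    for _ in range(max_iter):
        # integral_i = sum_j (c/2) w_j H_j / (mu_i + mu_j)
        integral = ((c / 2.0) * w * H / (mu[:, None] + mu[None, :])).sum(axis=1)
        H_new = 1.0 + mu * H * integral
        if np.max(np.abs(H_new - H)) < tol:
            return mu, w, H_new
        H = H_new
    raise RuntimeError("H-iteration did not converge")

mu, w, H = h_function(c=0.5)
# Zeroth-moment identity: (c/2) * Int_0^1 H(mu) dmu = 1 - sqrt(1 - c)
alpha0 = (w * H).sum()
```

The multigroup programs GENUV and MILNE listed in Appendix B carry out the corresponding iteration for the U and V matrices.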

An alternative approach to the albedo problem is then considered. From a full-range eigenfunction expansion of the albedo problem and the adjoint albedo problem, a pair of regular inhomogeneous Fredholm equations for the U and V matrices are obtained. These Fredholm equations, along with associated singular integral equations, have been solved analytically only for the one-speed case.23 Nevertheless standard numerical techniques can be applied to solve for U and V. For a certain class of problems, namely for systems near criticality, these equations can yield a good analytic approximation for the U and V functions.

Chapter IV shows how, for the special case of symmetric transfer, as is found in certain thermal systems for example, the results of the previous chapter are greatly simplified. In particular, the V-matrix becomes simply the transpose of the U-matrix. Hence only one nonlinear integral equation or Fredholm equation need be solved. For symmetric C other very useful results can be derived. It can be shown that all the eigenvalues are real (a requirement which, while physically necessary, had to be assumed for the general case). The reality of the eigenvalues then permits one to prove the existence and uniqueness of solution of the Fredholm equation for U. This uniqueness property implies that the eigenvectors are half-range complete, a result recently proved by Leonard and Ferziger in a much more complicated fashion.10

In Chapter V it is demonstrated how the emergent distributions of other half-space problems (Milne's and the Green's function) can be expressed directly in terms of the U and V functions (or equivalently the generalized S-function). Then once the emergent distributions have been calculated, the complete solutions

inside the half-space are readily obtained from application of the full-range orthogonality and completeness of the eigenfunctions.

To demonstrate the usefulness of the U and V-matrices and the ease with which they are computed from the nonlinear integral equations, Chapter VI presents several numerical examples. The Milne problem is considered for several different media, and the emergent distributions and extrapolation lengths are explicitly evaluated. To this end, a series of computer programs for general N and arbitrary C have been written. The listings of these programs are given in the Appendix.

To summarize, it is the purpose of this work to demonstrate how various half-space problems encountered in multigroup theory can be solved without recourse to a half-range completeness property of the infinite medium eigenfunctions. The overall approach has been to show that these problems may be solved in two distinct steps. First the emergent distribution can be expressed with the help of two fundamental matrix functions; and then full-range expansions can be used to generate the complete angular density.

CHAPTER II

MULTIGROUP TRANSPORT EQUATION EIGENFUNCTIONS

The time-independent multigroup neutron transport equation for isotropic scattering in plane geometry can be written in the form

    \mu \frac{\partial}{\partial x} \psi(x,\mu) + \Sigma \psi(x,\mu) = C \int_{-1}^{1} d\mu' \, \psi(x,\mu').    (2.1)

The vector \psi(x,\mu) is an N-component vector (where N is the number of energy groups), of which the ith component, \psi_i(x,\mu), is the angular flux of the ith group. The components of the matrix \Sigma are given by \sigma_i \delta_{ij}, \sigma_i being the total interaction cross section for the ith group. Finally, the elements, c_{ij}, of the transfer matrix, C, describe the transfer of neutrons from the jth group to the ith group. For an isotropically scattering and fissioning medium the c's are given by

    c_{ij} = \frac{1}{2} \sigma^s_{j \to i} + \frac{1}{2} \chi_i \nu_j \sigma^f_j,    (2.2)

where \sigma^s_{j \to i} is the scattering cross section (both elastic and inelastic) for the transfer of neutrons from the jth group to the ith group, \sigma^f_j is the fission cross section for the jth group, \nu_j the number of fission neutrons produced by an incident jth-group neutron, and \chi_i is the fission spectrum fraction of the ith group. It is always possible to order the groups such that (see Section 4.1)

    \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_N,    (2.3)

and by dividing Eq. (2.1) by \sigma_N and measuring distance in units of the mean free path 1/\sigma_N, one may set \sigma_N = 1. In the following discussion it will be assumed that the normalizing factor 1/\sigma_N is included in the \sigma_i's and c_{ij}'s.

In this chapter the infinite medium eigenfunctions and eigenvectors of the transport equation (2.1), and of its adjoint, are reviewed. The orthogonality and full-range completeness properties of these eigenvectors are then discussed. Finally, as an example of the application of these eigenvectors, the infinite medium Green's function problem is solved.

2.1 INFINITE MEDIUM EIGENVECTORS AND ADJOINT EIGENVECTORS

Using the analogy of the one-speed problem, a set of eigenfunction solutions, \psi(\nu,x,\mu), to Eq. (2.1) of the form

    \psi(\nu,x,\mu) = e^{-x/\nu} \phi(\nu,\mu)    (2.4)

is sought. Substituting this ansatz into Eq. (2.1), the following equation for the eigenvectors, \phi(\nu,\mu), is obtained:

    \left( \Sigma - \frac{\mu}{\nu} E \right) \phi(\nu,\mu) = C \int_{-1}^{1} d\mu' \, \phi(\nu,\mu'),    (2.5)

where E is the unit matrix. Different but equivalent forms for these eigenvectors have been obtained by several authors.16-18 In this thesis the simple notation (with slight changes) and approach of Yoshimura16 will be used. The eigenvectors can be written in the form

    \phi(\nu,\mu) = P F(\nu,\mu) b(\nu) + G(\nu,\mu) \lambda(\nu),    (2.6)

where P indicates the Cauchy principal value. The matrices F and G are defined as

    [F(z,\mu)]_{ij} = \frac{z}{\sigma_i z - \mu} \delta_{ij}    (2.7)

and

    [G(z,\mu)]_{ij} = \delta(\sigma_i z - \mu) \delta_{ij},    (2.8)

and the vector b(\nu), which has yet to be determined, satisfies

    b(\nu) = C \int_{-1}^{1} d\mu \, \phi(\nu,\mu).    (2.9)

Substitution of Eq. (2.6) into Eq. (2.9) yields the following simultaneous equation for the unknown vectors b(\nu) and \lambda(\nu):

    \Omega(\nu) b(\nu) = \int_{-1}^{1} d\mu \, G(\nu,\mu) \lambda(\nu),    (2.10)

where

    \Omega(z) = C^{-1} - P \int_{-1}^{1} F(z,\mu) \, d\mu.    (2.11)

To solve for b(\nu) and \lambda(\nu) it is necessary to divide the eigenvalue spectrum into two regions.

Region I: \nu \notin (-1,1) (discrete spectrum)

On this region the eigenvalues are denoted by \nu_0, and Eq. (2.10) immediately yields

    \Omega(\nu_0) b(\nu_0) = 0.    (2.12)
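A nontrivial b(\nu_0) in Eq. (2.12) requires that \Omega(\nu_0) be singular. Since the discrete roots lie outside the continuum, the principal value in Eq. (2.11) is unnecessary there and the diagonal integral evaluates in closed form, \int_{-1}^{1} z \, d\mu/(\sigma_i z - \mu) = z \ln[(\sigma_i z + 1)/(\sigma_i z - 1)], so det \Omega(z) is cheap to scan for sign changes. The Python sketch below is in the spirit of the thesis's Fortran routines ROOT and DIS (Appendix B); the two-group cross-section numbers are invented purely for illustration. In the one-group limit with C = [c/2] and \sigma = 1 it recovers the familiar one-speed dispersion relation 1 = c \nu_0 \tanh^{-1}(1/\nu_0), whose root for c = 0.9 is \nu_0 \approx 1.9032.

```python
import numpy as np

def omega(z, C, sig):
    """Dispersion matrix of Eq. (2.11) for real |z| > 1 (no principal value):
       Omega(z) = C^{-1} - diag( z * ln((sig_i*z + 1)/(sig_i*z - 1)) )."""
    d = np.diag([z * np.log((s * z + 1.0) / (s * z - 1.0)) for s in sig])
    return np.linalg.inv(C) - d

def discrete_roots(C, sig, z_max=20.0, n_scan=4000):
    """Locate zeroes of det Omega(z) on (1, z_max): bracket by sign change, bisect."""
    def det_at(z):
        return np.linalg.det(omega(z, C, sig))
    zs = np.linspace(1.0 + 1e-8, z_max, n_scan)
    f = [det_at(z) for z in zs]
    roots = []
    for a, b, fa, fb in zip(zs[:-1], zs[1:], f[:-1], f[1:]):
        if fa * fb < 0.0:
            for _ in range(100):              # plain bisection
                m = 0.5 * (a + b)
                fm = det_at(m)
                if fa * fm <= 0.0:
                    b = m
                else:
                    a, fa = m, fm
            roots.append(0.5 * (a + b))
    return roots

# Hypothetical two-group medium with transfer matrix built as in Eq. (2.2)
# (all numbers invented for illustration only):
sig_s = np.array([[0.60, 0.05],        # sigma^s_{j->i}: column j scatters into row i
                  [0.25, 0.80]])
chi = np.array([1.0, 0.0])             # fission spectrum fractions chi_i
nu_sf = np.array([0.30, 0.10])         # nu_j * sigma^f_j
C2 = 0.5 * sig_s + 0.5 * np.outer(chi, nu_sf)
roots2 = discrete_roots(C2, [1.3, 1.0])
```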

For this equation to have a nontrivial solution, the following condition must be satisfied:

    \det \Omega(\nu_0) = 0.    (2.13)

Equation (2.13) is the general N-group dispersion relation, which gives an even number, say 2M, of discrete eigenvalues. For simplicity it will be assumed all these eigenvalues are distinct. For each root, \nu_0, the vector b(\nu_0) is determined from Eq. (2.12) and a normalization condition |b(\nu_0)| = 1. The \lambda(\nu_0) vector is completely arbitrary, with the provision that it have a finite norm. For convenience the null vector will be chosen for \lambda(\nu_0). Thus, in component form, the discrete eigenvectors may be written as

    \phi_i(\nu_0,\mu) = \frac{\nu_0 b_i(\nu_0)}{\sigma_i \nu_0 - \mu},    i = 1, ..., N,  |\nu_0| > 1.    (2.14)

From Eq. (2.11) it is easily verified that if \nu_0 is an eigenvalue then -\nu_0 and \nu_0^* (complex conjugate) are eigenvalues, with

    b(\nu_0) = b(-\nu_0) = b^*(\nu_0).    (2.15)

For the special case of symmetric C, it can be proved (Chapter IV) that all the eigenvalues are real or imaginary, and in Appendix A it is shown that for any subcritical system the eigenvalues are always real. For a general system, on the other hand, there is no a priori reason to suspect that all the discrete eigenvalues are necessarily real. However, on physical grounds, a subcritical infinite medium must not have any imaginary discrete roots, and the dominant root (defined as the root with the largest real part) must be real. An eigenfunction expansion for a realistic problem which has imaginary

eigenvalues or a complex dominant root would produce an oscillatory behavior at large source distances and hence negative fluxes. For the purposes of this thesis, it will be assumed that the medium is subcritical, and that all the discrete eigenvalues are nonmultiple and finite.

Region II: \nu \in (-1,1) (continuum spectrum)

This region is divided into N subintervals, \nu_j, j = 1, ..., N, such that for \nu \in \nu_j, \eta_{j-1} < |\nu| \leq \eta_j, where \eta_j is defined as 1/\sigma_j and \eta_0 = 0. Without loss of generality, one may consider only the jth subinterval. First decompose the \Omega(\nu) matrix and the b(\nu) and \lambda(\nu) vectors as shown below:

    \Omega(\nu) = \begin{pmatrix} \Omega_1 & \Omega_2 \\ \Omega_3 & \Omega_4 \end{pmatrix},    b(\nu) = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix},    \lambda(\nu) = \begin{pmatrix} \lambda_1 \\ \lambda_2 \end{pmatrix}.    (2.16)

The vectors b_1 and \lambda_1 are (j-1)-component vectors, b_2 and \lambda_2 are (N-j+1)-component vectors, and \Omega_1 and \Omega_4 are square matrices of size (j-1) and (N-j+1), respectively. The matrices \Omega_2 and \Omega_3 are rectangular, of size (j-1) x (N-j+1) and (N-j+1) x (j-1), respectively. With these definitions, Eq. (2.10) can be written as

    \Omega_1(\nu) b_1(\nu) + \Omega_2(\nu) b_2(\nu) = 0    (2.16a)

and

    \Omega_3(\nu) b_1(\nu) + \Omega_4(\nu) b_2(\nu) = \lambda_2(\nu).    (2.16b)

This system of N equations has (2N-j+1) unknowns [(N-j+1) \lambda's and N b's]. Thus one must specify the values of (N-j+1) of these unknowns to obtain a solution. Let these be the (N-j+1) components of the vector b_2(\nu). But there are (N-j+1) linearly independent choices of b_2(\nu) which satisfy Eqs. (2.16a) and (2.16b), and hence there is an (N-j+1)-fold degeneracy in the eigenvectors for the jth region.16 An obvious choice for the (N-j+1) b_2's is

    b_2^m(\nu) = e_{m-j+1},    m = j, ..., N,    (2.17)

where e_i is an (N-j+1)-dimensional vector all of whose components are zero, except the ith, which is equal to unity. Corresponding to each vector b_2^m there is a vector \lambda^m(\nu). From Eqs. (2.16a) and (2.16b) one obtains

    b_1^m(\nu) = -\Omega_1^{-1}(\nu) \Omega_2(\nu) b_2^m(\nu)    (2.18)

and

    \lambda_2^m(\nu) = [\Omega_4(\nu) - \Omega_3(\nu) \Omega_1^{-1}(\nu) \Omega_2(\nu)] b_2^m(\nu),    (2.19)

where it is assumed \Omega_1^{-1}(\nu) exists for \nu \in (-1,1). Since \lambda_1(\nu) can be chosen arbitrarily, it is set equal to the null vector. Then, from Eqs. (2.16a) and (2.16b), \lambda^m(\nu) is determined completely from

    \lambda^m(\nu) = \Omega(\nu) b^m(\nu).    (2.20)
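Equations (2.18) and (2.19) are the standard block (Schur-complement) reduction of the partitioned system \Omega b = (0, \lambda_2), and are easy to verify numerically. The sketch below uses an arbitrary 3x3 matrix in place of \Omega(\nu) (N = 3, j = 2, so \Omega_1 is 1x1 and \Omega_4 is 2x2); it illustrates only the block algebra, not an actual evaluation of \Omega(\nu).

```python
import numpy as np

# Illustrative stand-in for Omega(nu) at some fixed nu (numbers arbitrary):
Om = np.array([[ 2.0, 0.3, -0.1],
               [ 0.4, 1.5,  0.2],
               [-0.2, 0.1,  1.8]])
j = 2
Om1, Om2 = Om[:j-1, :j-1], Om[:j-1, j-1:]    # (j-1)x(j-1) and (j-1)x(N-j+1)
Om3, Om4 = Om[j-1:, :j-1], Om[j-1:, j-1:]    # (N-j+1)x(j-1) and (N-j+1)x(N-j+1)

b2 = np.array([1.0, 0.0])                    # unit choice e_1, Eq. (2.17)
b1 = -np.linalg.solve(Om1, Om2 @ b2)         # Eq. (2.18)
lam2 = (Om4 - Om3 @ np.linalg.solve(Om1, Om2)) @ b2    # Eq. (2.19)

# Full system Omega * b = (0, lam2), with lambda_1 chosen null:
b = np.concatenate([b1, b2])
rhs = np.concatenate([np.zeros(j - 1), lam2])
residual = Om @ b - rhs                      # should vanish to rounding error
```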

Thus for the jth subinterval, the eigenvectors, in component form, may be written as

    [\phi_j^m(\nu,\mu)]_i = P \frac{\nu [b^m(\nu)]_i}{\sigma_i \nu - \mu} + \delta(\sigma_i \nu - \mu) [\lambda^m(\nu)]_i,    j = 1, ..., N,  m = j, ..., N,    (2.21)

where b^m(\nu) and \lambda^m(\nu) are determined from Eqs. (2.18) and (2.20), respectively. In passing, it should be noted that b^m(\nu) and \lambda^m(\nu) are even functions of \nu and hence the eigenvectors \phi_j^m(\nu,\mu) have the property

    \phi_j^m(\nu,\mu) = \phi_j^m(-\nu,-\mu).    (2.22)

Before discussing the orthogonality and completeness properties of the infinite medium eigenfunctions, it is necessary to introduce the adjoint eigenvectors.16,17 The adjoint equation of Eq. (2.1) is defined to be

    -\mu \frac{\partial}{\partial x} \psi^\dagger(x,\mu) + \Sigma \psi^\dagger(x,\mu) = \tilde{C} \int_{-1}^{1} d\mu' \, \psi^\dagger(x,\mu'),    (2.23)

where \psi^\dagger(x,\mu) is the adjoint angular flux and the tilde denotes the transpose matrix operator. As before, the adjoint eigenfunctions, \psi^\dagger(\nu,x,\mu), are defined as

    \psi^\dagger(\nu,x,\mu) = e^{x/\nu} \phi^\dagger(\nu,\mu).    (2.24)

The adjoint eigenvector equation becomes

    \left( \Sigma - \frac{\mu}{\nu} E \right) \phi^\dagger(\nu,\mu) = \tilde{C} \int_{-1}^{1} d\mu' \, \phi^\dagger(\nu,\mu').    (2.25)
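That the adjoint problem shares the direct problem's spectrum follows from the fact that a matrix and its transpose have equal determinants: replacing C by \tilde{C} simply transposes \Omega(z), since the integral term in Eq. (2.11) is diagonal, so the dispersion determinant is unchanged. A quick numerical confirmation for real z outside the continuum (the non-symmetric transfer matrix and cross sections below are invented for illustration):

```python
import numpy as np

def omega(z, C, sig):
    # Omega(z) = C^{-1} - diag(z * ln((sig_i*z + 1)/(sig_i*z - 1))), real z > 1
    d = np.diag([z * np.log((s * z + 1.0) / (s * z - 1.0)) for s in sig])
    return np.linalg.inv(C) - d

C = np.array([[0.40, 0.02],      # deliberately non-symmetric transfer matrix
              [0.30, 0.45]])     # (illustrative numbers only)
sig = [1.2, 1.0]

# Omega(z; C-transpose) = Omega(z; C)^T, so the direct and adjoint
# dispersion relations have identical roots:
pairs = [(np.linalg.det(omega(z, C, sig)), np.linalg.det(omega(z, C.T, sig)))
         for z in (1.5, 3.0, 10.0)]
```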

This eigenvector equation is precisely the same as Eq. (2.5) with C replaced by \tilde{C}. The adjoint equation therefore has exactly the same spectrum as that obtained for the infinite medium direct eigenvalues. Thus one has

    \phi^\dagger(\nu,\mu;C) = \phi(\nu,\mu;\tilde{C}),    (2.26)

i.e., the adjoint eigenvectors are obtained by simply replacing c_{ij} by c_{ji} in the ordinary infinite medium eigenvectors. In this work these adjoint eigenvectors are denoted by

    \phi_i^\dagger(\nu_0,\mu) = \frac{\nu_0 b_i^\dagger(\nu_0)}{\sigma_i \nu_0 - \mu},    \nu_0 \notin (-1,1),    (2.27)

and

    [\phi_j^{\dagger m}(\nu,\mu)]_i = P \frac{\nu [b^{\dagger m}(\nu)]_i}{\sigma_i \nu - \mu} + \delta(\sigma_i \nu - \mu) [\lambda^{\dagger m}(\nu)]_i,    \nu \in (-1,1).    (2.28)

Their normalizations are written as

    a^\dagger(\nu_0) = \int_{-1}^{1} d\mu \, \phi^\dagger(\nu_0,\mu)    (2.29a)

and

    a_j^{\dagger m}(\nu) = \int_{-1}^{1} d\mu \, \phi_j^{\dagger m}(\nu,\mu).    (2.29b)

Finally, one can verify that these adjoint eigenvectors satisfy the relationship

    \phi^\dagger(\nu,\mu) = \phi^\dagger(-\nu,-\mu).    (2.29c)

2.2 FULL-RANGE COMPLETENESS

The eigenvectors, \phi(\nu,\mu), discussed in Section 2.1 have the very useful

property that they are "full-range complete." This property is stated in the form of the following theorem.

Theorem. The set of functions \phi(\nu,\mu), for all \nu belonging to the eigenvalue spectrum, is complete, in the sense that an arbitrary function \psi(\mu) defined for \mu \in [-1,1] can be expanded in the form

    \psi(\mu) = \sum_{s=1}^{M} a_{+s} \phi(\nu_{0s},\mu) + \sum_{s=1}^{M} a_{-s} \phi(-\nu_{0s},\mu) + \sum_{j=1}^{N} \sum_{m=j}^{N} \int_{\nu_j} d\nu \, A_j^m(\nu) \phi_j^m(\nu,\mu),    (2.30)

where a_{+s}, a_{-s}, and A_j^m(\nu) are uniquely determined expansion coefficients.

This theorem has been proved in various forms by several authors. Zelazny and Kuszell18 tried to prove it by obtaining a set of Fredholm equations for the expansion coefficients. However it was not shown that the solution of these Fredholm equations existed or was unique. Special proofs of completeness for particular cases of the transfer matrix have been obtained, e.g., symmetric C,9 the two-group case,17 and det|C| = 0.20 A proof by the method of construction for general N-dimensional C has been recently obtained by Yoshimura,16 and it is his proof, with some modification and correction, which will be given here.

First consider the vector \psi'(\mu) defined as

    \psi'(\mu) = \psi(\mu) - \sum_{s=1}^{M} [a_{+s} \phi(\nu_{0s},\mu) + a_{-s} \phi(-\nu_{0s},\mu)],    (2.31)

where M is the number of discrete pairs of roots. To prove this theorem it is sufficient to show that the function \psi'(\mu) can be expanded solely in terms of the continuum modes, i.e.,

    \psi'(\mu) = \sum_{j=1}^{N} \sum_{m=j}^{N} \int_{\nu_j} d\nu \, A_j^m(\nu) \phi_j^m(\nu,\mu).    (2.32)

If the coefficients A_j^m(\nu) exist, then one can define the two vectors \beta(\nu) and \zeta(\nu) such that, for \nu \in \nu_j,

    \beta(\nu) = \sum_{m=j}^{N} A_j^m(\nu) b^m(\nu)    (2.33)

and

    \zeta(\nu) = \sum_{m=j}^{N} A_j^m(\nu) \lambda^m(\nu).    (2.34)

From Eq. (2.20) one has

    \zeta(\nu) = \Omega(\nu) \beta(\nu),    (2.35)

and, using the explicit form of \phi_j^m(\nu,\mu), Eq. (2.32) becomes

    \psi_i'(\mu) = \sum_{j=1}^{N} \int_{\nu_j} d\nu \left[ P \frac{\nu \beta_i(\nu)}{\sigma_i \nu - \mu} + \delta(\sigma_i \nu - \mu) \zeta_i(\nu) \right],    (2.36)

or equivalently

    \psi_i'(\mu) = \int_{-1}^{1} d\nu \left[ P \frac{\nu \beta_i(\nu)}{\sigma_i \nu - \mu} + \delta(\sigma_i \nu - \mu) \zeta_i(\nu) \right].    (2.36a)

Replacement of \mu by \sigma_i \mu' in Eq. (2.36a) yields

    \sigma_i \psi_i'(\sigma_i \mu') = P \int_{-1}^{1} d\nu \, \frac{\nu \beta_i(\nu)}{\nu - \mu'} + \zeta_i(\mu'),    -\frac{1}{\sigma_i} < \mu' < \frac{1}{\sigma_i},    (2.37)

or in vector form

    \Sigma \psi'(\Sigma \mu') = P \int_{-1}^{1} d\nu \, \frac{\nu \beta(\nu)}{\nu - \mu'} + \zeta(\mu').    (2.38)

Now introduce the vector

    N(z) = \frac{1}{2\pi i} \int_{-1}^{1} d\nu \, \frac{\nu \beta(\nu)}{\nu - z},    (2.39)

which has the following three properties:28

    (i) N(z) is analytic in the complex plane cut from (-1,1);    (2.40)

    (ii) N(z) vanishes like 1/z as |z| \to \infty;    (2.41)

    (iii) N^\pm(\mu') = \frac{1}{2\pi i} P \int_{-1}^{1} d\nu \, \frac{\nu \beta(\nu)}{\nu - \mu'} \pm \frac{1}{2} \mu' \beta(\mu'),    -1 < \mu' < 1.    (2.42)

If one could solve for N(z), then the vector \beta(\mu) could be found from Eq. (2.42), and the expansion for \psi'(\mu) would be determined. With this in mind, define the matrix

    \Omega'(z) = C^{-1} - \int_{-1}^{1} d\mu \, F(z,\mu).    (2.43)

(Note that \Omega'(z) differs from the \Omega(z) of Eq. (2.11), in which the integral is a principal value.) Applying the Plemelj formulae29 to \Omega'(z) one finds

    \Omega'^\pm(\mu') = \Omega(\mu') \pm i\pi \mu' E,    -1 < \mu' < 1.    (2.44)

Upon substitution from Eqs. (2.42) and (2.44), Eq. (2.38) may now be reduced to a Hilbert equation for the unknown vector N(z),

    \mu' \Sigma \psi'(\Sigma \mu') = \Omega'^+(\mu') N^+(\mu') - \Omega'^-(\mu') N^-(\mu').    (2.45)

Since the vector \Omega'(z) N(z) is analytic in the complex plane cut from (-1,1) and is required to vanish like 1/z as |z| \to \infty, the Cauchy integral theorem can be applied to prove that

    N(z) = -\frac{1}{2\pi i z} [\Omega'(z)]^{-1} \int_{-1}^{1} d\mu \, \mu F(z,\mu) \psi'(\mu).    (2.46)

This function N(z) has the required behavior except for z = \pm\nu_{0s}, s = 1, ..., M, where [\Omega'(z)]^{-1} does not exist. One can write

    [\Omega'(z)]^{-1} = \frac{\hat{\Omega}'(z)}{\det|\Omega'(z)|},    (2.47)

where \hat{\Omega}'(z) is the matrix whose elements are the cofactors of \Omega'(z). Then

    N(z) = -\frac{1}{2\pi i z} \frac{\hat{\Omega}'(z)}{\det|\Omega'(z)|} \int_{-1}^{1} d\mu \, \mu F(z,\mu) \psi'(\mu),    (2.48)

and to remove the difficulty at \pm\nu_{0s}, s = 1, ..., M, where \det|\Omega'(z)| has single zeroes, one requires that the numerator also vanish. In component form this requirement is

    \left[ \hat{\Omega}'(\pm\nu_{0s}) \int_{-1}^{1} d\mu \, \mu F(\pm\nu_{0s},\mu) \psi'(\mu) \right]_i = 0,    i = 1, ..., N,  s = 1, ..., M.    (2.49)

Of course this condition is not true for an arbitrary \psi'(\mu); however, there are 2M unspecified constants in \psi'(\mu), namely the 2M a_{\pm s}, s = 1, ..., M. It can be verified that Eq. (2.49) will be satisfied if one chooses

    a_{\pm s} = \frac{\int_{-1}^{1} d\mu \, \mu \tilde{\phi}^\dagger(\pm\nu_{0s},\mu) \psi(\mu)}{\int_{-1}^{1} d\mu \, \mu \tilde{\phi}^\dagger(\pm\nu_{0s},\mu) \phi(\pm\nu_{0s},\mu)}.    (2.50)

Hence N(z) as given by Eq. (2.48) has all the required analytic properties. Therefore \beta(\mu) exists and can be determined from Eq. (2.42).

2.3 ORTHOGONALITY AND NORMALIZATION OF EIGENVECTORS

The eigenvectors and adjoint eigenvectors are orthogonal to each other in the sense

    \int_{-1}^{1} d\mu \, \mu \tilde{\phi}^\dagger(\nu',\mu) \phi(\nu,\mu) = 0,    \nu \neq \nu'.    (2.51)

This result immediately follows from Eqs. (2.5) and (2.25). Multiply Eq. (2.5) from the left by \tilde{\phi}^\dagger(\nu',\mu) and the transpose of Eq. (2.25), evaluated at \nu', from the right by \phi(\nu,\mu). Integrating these two scalar equations and subtracting, one obtains

    \left( \frac{1}{\nu} - \frac{1}{\nu'} \right) \int_{-1}^{1} d\mu \, \mu \tilde{\phi}^\dagger(\nu',\mu) \phi(\nu,\mu) = 0,    (2.52)

from which Eq. (2.51) follows. Since this orthogonality property of the eigenvectors is to be used to determine the expansion coefficients in an eigenfunction expansion, one must determine the normalization integrals. For Region I, \nu \notin (-1,1), the normalization is defined as

    \int_{-1}^{1} d\mu \, \mu \tilde{\phi}^\dagger(\pm\nu_{0s},\mu) \phi(\pm\nu_{0s'},\mu) = N_{\pm s} \delta_{ss'}.    (2.53)

Straightforward integration of Eqs. (2.14) and (2.27) yields

    N_{\pm s} = \pm \nu_{0s}^2 \sum_{i=1}^{N} b_i^\dagger(\nu_{0s}) b_i(\nu_{0s}) \left[ \frac{2\sigma_i \nu_{0s}}{\sigma_i^2 \nu_{0s}^2 - 1} - \ln \frac{\sigma_i \nu_{0s} + 1}{\sigma_i \nu_{0s} - 1} \right].    (2.54)

For Region II, \nu \in \nu_j, defining

    \int_{-1}^{1} d\mu \, \mu \tilde{\phi}_j^{\dagger m}(\nu',\mu) \phi_j^{m'}(\nu,\mu) = N_j^{mm'}(\nu) \delta(\nu - \nu')    (2.55)

and using the Poincare-Bertrand formula, one obtains (after much algebra)

    N_j^{mm'}(\nu) = \nu \tilde{b}_2^{\dagger m}(\nu) \{ \pi^2 \nu^2 E + [\Omega_4(\nu) - \Omega_3(\nu) \Omega_1^{-1}(\nu) \Omega_2(\nu)]^2 \} b_2^{m'}(\nu),    (2.56)

or

    N_j^{mm'}(\nu) = \nu \tilde{b}_2^{\dagger m}(\nu) M(\nu) b_2^{m'}(\nu),    m, m' = j, ..., N,    (2.57)

where the (N-j+1)-dimensional matrix M(\nu) is defined as

    M(\nu) = \pi^2 \nu^2 E + [\Omega_4(\nu) - \Omega_3(\nu) \Omega_1^{-1}(\nu) \Omega_2(\nu)]^2.    (2.58)

Thus for the jth subinterval there are (N-j+1) different N_j^{mm'}(\nu), since the (N-j+1)-fold degenerate eigenvectors of the jth region are not mutually orthogonal.

It would be convenient for theoretical purposes if these degenerate eigenvectors could be made mutually orthogonal. If the (N-j+1) eigenvectors of the matrix M(\nu) had been chosen for the b_2's and b_2^\dagger's, instead of those given by Eq. (2.17), one would have

    N_j^{mm'}(\nu) = N_j^m(\nu) \delta_{mm'}.    (2.59)

For the remainder of this work, it will be assumed that the degenerate "continuum" eigenvectors, \phi_j^m(\nu,\mu), are also orthogonal.

2.4 INFINITE MEDIUM GREEN'S FUNCTION

An immediate application of the full-range completeness and orthogonality properties of the infinite medium eigenfunctions is to solve the infinite medium Green's function problem. Consider a unit plane source at the origin emitting q_i ith-group neutrons per unit area per unit time in the direction \mu_0. The angular density G(\mu_0;x,\mu) \equiv G(x,\mu) satisfies the homogeneous transport equation (2.1) except at the origin, where the source can be replaced by the "jump condition"

    \mu [G(0^+,\mu) - G(0^-,\mu)] = \delta(\mu - \mu_0) q,    (2.60)

where q is the vector with components q_i. If the system is subcritical one has the boundary condition

    \lim_{|x| \to \infty} G(x,\mu) = 0.    (2.61)

Expansion of the solution in terms of the eigenfunctions which satisfy boundary condition (2.61) gives

    G(\mu_0;x,\mu) = \sum_{s=1}^{M} a_s^{\pm}\,\phi(\pm\nu_{os},\mu)\,e^{\mp x/\nu_{os}} + \sum_{j=1}^{N}\int_{\tau_{j-1}}^{\tau_j} d\nu\,e^{\mp x/\nu} \sum_{m=j}^{N} A_j^{m\pm}(\nu)\,\phi_j^{m}(\pm\nu,\mu), \qquad x \gtrless 0.   (2.62)

Application of Eq. (2.60) yields

    \sum_{s=1}^{M}\left[a_s^{+}\,\phi(\nu_{os},\mu) + a_s^{-}\,\phi(-\nu_{os},\mu)\right] + \sum_{j=1}^{N}\int_{\tau_{j-1}}^{\tau_j} d\nu \sum_{m=j}^{N}\left[A_j^{m+}(\nu)\,\phi_j^{m}(\nu,\mu) + A_j^{m-}(\nu)\,\phi_j^{m}(-\nu,\mu)\right] = \frac{\delta(\mu-\mu_0)}{4\pi|\mu|}\,q,   (2.63)

which is simply a full-range expansion of (δ(μ−μ_0)/4π|μ|) q. For simplicity, it will be assumed that the \phi_j^m for m = j ~ N have been constructed such that they are mutually orthogonal. Then the orthogonality results of the previous section immediately give for the expansion coefficients

    a_s^{\pm} = \pm\,\frac{\phi^{\dagger T}(\pm\nu_{os},\mu_0)\,q}{4\pi N_{os}}   (2.64)

and

    A_j^{m\pm}(\nu) = \pm\,\frac{\phi_j^{m\dagger T}(\pm\nu,\mu_0)\,q}{4\pi N_j^{m}(\nu)}.   (2.65)
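The discrete eigenvalues ν_os entering expansions such as Eq. (2.62) are roots of the dispersion function. In the one-speed limit (N = 1, σ = 1) this reduces to the familiar condition Λ(ν_0) = 1 − cν_0 tanh^{-1}(1/ν_0) = 0. The following sketch (illustrative only; the bracketing interval and tolerance are arbitrary choices, not values from this report) finds the root by bisection:

```python
import math

def dispersion_root(c, lo=1.0 + 1e-12, hi=100.0, tol=1e-12):
    """Bisect for the one-speed discrete eigenvalue nu_0 > 1 satisfying
    Lambda(nu) = 1 - c*nu*atanh(1/nu) = 0 for a subcritical medium, 0 < c < 1.
    Lambda -> -infinity as nu -> 1+ and Lambda -> 1 - c > 0 as nu -> infinity,
    so the sign change brackets exactly one root."""
    f = lambda nu: 1.0 - c * nu * math.atanh(1.0 / nu)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid        # root lies above the midpoint
        else:
            hi = mid        # root lies below the midpoint
    return 0.5 * (lo + hi)

nu0 = dispersion_root(0.9)  # for c = 0.9 the root is nu_0 ~ 1.903
```

As c → 1 the root ν_0 grows without bound, which is the "large discrete root" situation discussed in Chapter III.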

CHAPTER III

THE ALBEDO PROBLEM

In many problems in transport theory, only the angular flux at the surface of a medium is needed. To this end, the half-space albedo problem is considered in this chapter, since the emergent distribution of this particular problem turns out to be of fundamental importance in determining the emergent distributions of all other half-space problems. Once the emergent fluxes have been found, the interior distributions, if desired, can be obtained from these surface quantities by applying the full-range completeness and orthogonality properties of the infinite medium eigenfunctions which were obtained in Chapter II.

The emergent distribution of the multigroup albedo problem can be expressed directly in terms of a generalized S-function matrix. To determine this S-function, two different approaches can be used to obtain equations suitable for its evaluation. The first method derives a nonsingular, nonlinear integral equation for S by using Chandrasekhar's "principle of invariance."27 This equation demonstrates how the S-matrix, which is a function of two angular variables, can be decomposed into a product of two matrices, U and V, each of which is a function of only one variable. The second technique obtains, from an eigenfunction expansion of the albedo problem, a Fredholm and a singular integral equation for the S-matrix.

From the decomposition of the S-matrix and the introduction of the adjoint albedo problem, Fredholm and nonlinear integral equations for the U and V matrices are obtained. These U and V integral equations yield readily to numerical solution, and hence the emergent albedo distribution can easily be obtained.

3.1 GENERALIZED S-FUNCTION

Consider an albedo problem for which the incident neutron beam belongs to the ith energy group. The angular flux of this "ith albedo problem," ψ_i(μ_0;x,μ), is the solution of Eq. (2.1) with the boundary conditions

    (i)\quad \psi_i(\mu_0;0,\mu) = e_i\,\delta(\mu-\mu_0), \qquad \mu > 0,\ \mu_0 > 0,   (3.1)

and

    (ii)\quad \lim_{x\to\infty} \psi_i(\mu_0;x,\mu) = 0.   (3.2)

The N distinct albedo problems (one for each group) can be handled collectively by introducing the "albedo matrix" Ψ(μ_0;x,μ) defined as

    \Psi(\mu_0;x,\mu) = \left[\,\psi_1(\mu_0;x,\mu)\ \ \psi_2(\mu_0;x,\mu)\ \cdots\ \psi_N(\mu_0;x,\mu)\,\right].   (3.3)

This matrix is the solution of the transport equation

    \left(\mu\,\frac{\partial}{\partial x}\,E + \Sigma\right)\Psi(\mu_0;x,\mu) = C \int_{-1}^{1} d\mu'\,\Psi(\mu_0;x,\mu')   (3.4)

with the boundary conditions

    (i)\quad \Psi(\mu_0;0,\mu) = E\,\delta(\mu-\mu_0), \qquad \mu > 0,\ \mu_0 > 0,   (3.5)

and

    (ii)\quad \lim_{x\to\infty} \Psi(\mu_0;x,\mu) = 0.   (3.6)

One can think of the albedo matrix as a type of Green's function for a delta-like incident distribution. If the incident distribution is given as ψ^{inc}(μ), 0 ≤ μ ≤ 1, then the angular flux ψ(x,μ) everywhere in the half-space is simply

    \psi(x,\mu) = \int_0^1 d\mu'\,\Psi(\mu';x,\mu)\,\psi^{inc}(\mu').   (3.7)

In particular, if the generalized S-function is defined as

    S(\mu',\mu) = \mu'\,\Psi(\mu';0,-\mu), \qquad 0 < \mu,\ \mu' \le 1,   (3.8)

the reflected distribution at the interface is

    \psi(0,-\mu) = \int_0^1 \frac{d\mu'}{\mu'}\,S(\mu',\mu)\,\psi^{inc}(\mu').   (3.9)

In the one-speed case this S-matrix degenerates to the well-known Ambarzumian-Chandrasekhar S-function (except for a normalization factor of 2).26,27

With these definitions of the albedo matrix and the S-function, a nonsingular, nonlinear integral equation for the latter quantity will be derived. By using the "principle of invariance,"27 it is possible to express, at some point x inside the half-space, the outwardly moving flux vector in terms of the inwardly directed flux vector and the albedo matrix. The principle of invariance states that the reflected flux from a half-space is unchanged by the addition (or subtraction) of layers of arbitrary thickness to (or from) the medium. This means that at any point x inside the half-space, neutrons moving in the positive x direction are reflected in

the same manner as if they were incident upon the surface. If ψ(x,μ) is the flux at a distance x inside the half-space, the outwardly moving flux can then be expressed in terms of the inward flux as

    \psi(x,-\mu) = \int_0^1 d\mu'\,\Psi(\mu';0,-\mu)\,\psi(x,\mu').   (3.10)

In particular, Eq. (3.10) gives for the ith albedo problem

    \psi_i(\mu_0;x,-\mu) = \int_0^1 d\mu'\,\Psi(\mu';0,-\mu)\,\psi_i(\mu_0;x,\mu').   (3.11)

Finally, if the N albedo problems are treated collectively, Eq. (3.11) yields the following important relation for the albedo matrix:

    \Psi(\mu_0;x,-\mu) = \int_0^1 d\mu'\,\Psi(\mu';0,-\mu)\,\Psi(\mu_0;x,\mu'), \qquad \mu > 0.   (3.12)

From this equation and Eqs. (3.4)-(3.6) a nonlinear integral equation for Ψ(μ';0,−μ) can be obtained. Differentiation of Eq. (3.12) with respect to x at x = 0 yields

    \frac{\partial}{\partial x}\Psi(\mu_0;x,-\mu)\Big|_{x=0} = \int_0^1 d\mu'\,\Psi(\mu';0,-\mu)\,\frac{\partial}{\partial x}\Psi(\mu_0;x,\mu')\Big|_{x=0}.   (3.13)

Using the transport equation (3.4) to evaluate these derivatives, together with the boundary condition (3.5), Eq. (3.13) becomes

    \frac{1}{\mu}\,\Sigma\,\Psi(\mu_0;0,-\mu) + \frac{1}{\mu_0}\,\Psi(\mu_0;0,-\mu)\,\Sigma = \frac{1}{\mu_0}\left[E + \int_0^1 d\mu'\,\Psi(\mu';0,-\mu)\right] C \left[E + \int_0^1 d\mu''\,\Psi(\mu_0;0,-\mu'')\right].   (3.14)

Multiplication by μ_0 and the use of the S-matrix definition gives the result

    \frac{1}{\mu}\,\Sigma\,S(\mu_0,\mu) + \frac{1}{\mu_0}\,S(\mu_0,\mu)\,\Sigma = \left[E + \int_0^1 \frac{d\mu'}{\mu'}\,S(\mu',\mu)\right] C \left[E + \frac{1}{\mu_0}\int_0^1 d\mu'\,S(\mu_0,\mu')\right].   (3.15)

Since each bracket on the right hand side of this equation is a function of only one angular variable, Eq. (3.15) can be written as

    \frac{1}{\mu}\,\Sigma\,S(\mu_0,\mu) + \frac{1}{\mu_0}\,S(\mu_0,\mu)\,\Sigma = U(\mu)\,C\,V(\mu_0),   (3.16)

where

    U(\mu) = E + \int_0^1 \frac{d\mu'}{\mu'}\,S(\mu',\mu)   (3.17)

and

    V(\mu) = E + \frac{1}{\mu}\int_0^1 d\mu'\,S(\mu,\mu').   (3.18)

For the special case N = 1, Eq. (3.15) is just Chandrasekhar's S-equation,27 and both U and V become the well-known H-function. Unfortunately Σ and S do not commute, and to obtain an equation more amenable to numerical solution, the definition of a "matrix direct product" is introduced. If D is the direct product (denoted by the symbol *) of A and B, i.e., D = A * B, then in component form

    D_{ij} = A_{ij}\,B_{ij}, \qquad i,j = 1 \sim N.   (3.19)

This direct product is commutative and associative with other direct products. However, it is neither associative nor distributive with the conventional matrix product; e.g.,

    (A\,B)*C \ne A\,(B*C).   (3.20)

Equation (3.15) can then be written as

    R(\mu_0,\mu) = \left[E + \mu\int_0^1 d\mu'\,R(\mu',\mu)*\Lambda(\mu',\mu)\right] C \left[E + \int_0^1 d\mu'\,\mu'\,R(\mu_0,\mu')*\Lambda(\mu_0,\mu')\right],   (3.21)

where the matrix Λ is given by

    \left[\Lambda(\mu,\mu')\right]_{ij} = \frac{1}{\mu\,\sigma_i + \mu'\,\sigma_j},   (3.22)

and the associated R-matrix is related to the S-matrix by

    S(\mu_0,\mu) = \mu_0\,\mu\,\Lambda(\mu_0,\mu)*R(\mu_0,\mu).   (3.23)

Further, the U and V matrices expressed in terms of the R-matrix are

    U(\mu) = E + \mu\int_0^1 d\mu'\,R(\mu',\mu)*\Lambda(\mu',\mu)   (3.24)

and

    V(\mu) = E + \int_0^1 d\mu'\,\mu'\,R(\mu,\mu')*\Lambda(\mu,\mu').   (3.25)

To obtain a system of equations equivalent to Eq. (3.21) involving only U and V, take the direct product of Eq. (3.21) with μΛ(μ_0,μ) and integrate with respect to μ. The result is

    V(\mu) = E + \int_0^1 d\mu'\,\mu'\,\Lambda(\mu,\mu')*\left[U(\mu')\,C\,V(\mu)\right].   (3.26)

Similarly, by taking the direct product of Eq. (3.21) with μ'Λ(μ',μ) and integrating with respect to μ_0, an equation for U is obtained:

    U(\mu) = E + \mu\int_0^1 d\mu'\,\Lambda(\mu',\mu)*\left[U(\mu)\,C\,V(\mu')\right].   (3.27)

Equations (3.26) and (3.27) are two simultaneous nonlinear integral equations for the U and V matrix functions. They correspond to Chandrasekhar's one-speed nonlinear H-function equation.27 In fact, if one takes the one-speed limit (N = 1) and lets H(μ) = U(μ) = V(μ), the nonlinear integral H-function equation, which is so often used, is obtained; namely,

    H(\mu) = 1 + c\,\mu\,H(\mu)\int_0^1 d\mu'\,\frac{H(\mu')}{\mu + \mu'}.   (3.28)

The equations for U and V, as they stand, can be solved numerically by the method of successive iterations, which is commonly called the "power method."6 A computer code using this technique has been written to solve these equations (see Chapter VI). However, it is possible to cast the U and V equations into a different form whose iterative solution converges faster than that of Eqs. (3.26) and (3.27). This modification is discussed in Section 3.3.

From Eq. (3.16) a useful identity for the U and V matrices can be obtained. Integrating this equation over μ from 0 to 1, and using the definitions of U(μ) and V(μ), Eqs. (3.17) and (3.18), one obtains

    \Sigma\int_0^1 \frac{d\mu}{\mu}\,S(\mu_0,\mu) + \left[V(\mu_0) - E\right]\Sigma = \left[\int_0^1 d\mu\,U(\mu)\right] C\,V(\mu_0).   (3.29)

Rearrangement of this equation gives

    \Sigma\int_0^1 \frac{d\mu}{\mu}\,S(\mu_0,\mu) = \left[\int_0^1 d\mu\,U(\mu)\right] C\,V(\mu_0) - \left[V(\mu_0) - E\right]\Sigma.   (3.30)

Finally, integrating this result over μ_0, rearranging, and using Eq. (3.17), the following relation is obtained:

    \left[\int_0^1 d\mu\,U(\mu)\right]\Sigma + \Sigma\left[\int_0^1 d\mu\,V(\mu)\right] - 2\Sigma = \left[\int_0^1 d\mu\,U(\mu)\right] C \left[\int_0^1 d\mu\,V(\mu)\right].   (3.31)

This identity for U and V is most useful in giving a measure of the convergence of the iterated U and V equations. It can be easily and quickly evaluated in any computer code used to obtain a numerical solution for U and V.

3.2 THE ADJOINT ALBEDO PROBLEM

In the previous section it was found that the generalized S-function could be decomposed into two matrices of a single variable, U and V. However, these matrices are not independent; they are related in a simple manner through corresponding adjoint equations.

Consider now the adjoint albedo matrix, Ψ†(−μ_0;x,μ), which is a solution of the adjoint transport equation

    \left(-\mu\,\frac{\partial}{\partial x}\,E + \Sigma\right)\Psi^{\dagger}(-\mu_0;x,\mu) = C^{T}\int_{-1}^{1} d\mu'\,\Psi^{\dagger}(-\mu_0;x,\mu')   (3.32)

with the adjoint boundary conditions

    (i)\quad \Psi^{\dagger}(-\mu_0;0,-\mu) = E\,\delta(\mu-\mu_0), \qquad \mu > 0,\ \mu_0 > 0,   (3.33)

and

    (ii)\quad \lim_{x\to\infty} \Psi^{\dagger}(-\mu_0;x,\mu) = 0.   (3.34)

The ith column of the adjoint albedo matrix is simply the ith adjoint albedo problem solution ψ_i†(−μ_0;x,μ), which is uniquely defined by taking the ith column of the matrix equations (3.32)-(3.34). A generalized S†-function may be defined in a manner analogous to the S-function, namely,

    S^{\dagger}(\mu_0,\mu) = \mu\,\Psi^{\dagger}(-\mu_0;0,\mu), \qquad 0 < \mu,\ \mu_0 \le 1.   (3.35)

The adjoint albedo matrix gives the response of an adjoint transport solution to a forced delta-like emergent distribution. In particular, if a solution of the adjoint equation has a forced emergent distribution ψ^{†em}(−μ), μ > 0, the resultant incident distribution must be

    \psi^{\dagger inc}(\mu) = \int_0^1 d\mu'\,\Psi^{\dagger}(-\mu';0,\mu)\,\psi^{\dagger em}(\mu'),   (3.36)

or, in terms of the adjoint S†-matrix,

    \psi^{\dagger inc}(\mu) = \frac{1}{\mu}\int_0^1 d\mu'\,S^{\dagger}(\mu',\mu)\,\psi^{\dagger em}(\mu').   (3.37)

To find the relationship between the emergent distribution of the albedo matrix, Ψ(μ_0;0,−μ), μ > 0, and the incident distribution of the adjoint albedo matrix, Ψ†(−μ_0;0,μ), first multiply Eq. (3.4) from the left by the transpose of Ψ†(−μ_0;x,μ). Then multiply the transpose of Eq. (3.32) from the right by Ψ(μ_0;x,μ). Subtraction of these two results and integration over μ from −1 to 1, and over x from 0 to ∞, yields the identity

    \int_0^{\infty} dx\,\frac{d}{dx}\int_{-1}^{1} d\mu\,\mu\,\Psi^{\dagger T}(-\mu_0;x,\mu)\,\Psi(\mu_0';x,\mu) = 0.   (3.39)

Use of the boundary conditions of Eqs. (3.5), (3.6), (3.33), and (3.34), together with integration of the above equation, gives

    \int_{-1}^{1} d\mu\,\mu\,\Psi^{\dagger T}(-\mu_0;0,\mu)\,\Psi(\mu_0';0,\mu) = 0.   (3.40)

Finally, from the definitions of the S-function and the adjoint S†-function, the above identity immediately shows the relationship between these two functions, i.e.,

    \mu\,S^{\dagger T}(\mu_0,\mu) = \mu_0\,S(\mu,\mu_0).   (3.41)

A nonlinear integral equation for S†(μ_0,μ) may be written by taking the transpose of Eq. (3.15) and using the identity (3.41). The result is

    \frac{1}{\mu}\,\Sigma\,S^{\dagger}(\mu_0,\mu) + \frac{1}{\mu_0}\,S^{\dagger}(\mu_0,\mu)\,\Sigma = U^{\dagger}(\mu)\,C^{T}\,V^{\dagger}(\mu_0),   (3.42)

where the adjoint U†(μ) and V†(μ) matrices are defined as

    U^{\dagger}(\mu) = E + \int_0^1 \frac{d\mu'}{\mu'}\,S^{\dagger}(\mu',\mu)   (3.43)

and

    V^{\dagger}(\mu) = E + \frac{1}{\mu}\int_0^1 d\mu'\,S^{\dagger}(\mu,\mu').   (3.44)

Finally, from (3.17), (3.18), and (3.41), the relationship between the U and V matrices is obtained, namely

    U^{\dagger}(\mu) = V^{T}(\mu); \qquad V^{\dagger}(\mu) = U^{T}(\mu).   (3.45)

These U†- and V†-functions will be used later to derive a Fredholm equation for V†(μ) and therefore, in view of the above relation, a Fredholm equation for U(μ).

3.3 REDUCTION OF THE NONLINEAR U AND V EQUATIONS

It is possible to put the nonlinear equations for the U and V matrices, Eqs. (3.26) and (3.27), into a form which does not involve the direct matrix products. This modification yields a system of equations which not only simplifies the computational scheme, but improves the iteration convergence.

Consider first the V-equation (3.26),

    V(\mu) = E + \int_0^1 d\mu'\,\mu'\,\Lambda(\mu,\mu')*\left[U(\mu')\,C\,V(\mu)\right].   (3.46)

Because of the direct product, the term V(μ) cannot be factored outside of the integral in the above equation. Substituting explicitly for Λ(μ,μ'), and denoting summation by the repeated index convention (a repeated lower case Greek subscript signifies summation of the indexed quantity from 1 to N), the integrand in Eq. (3.46) can be written in component form as

    \left\{\Lambda(\mu,\mu')*\left[U(\mu')\,C\,V(\mu)\right]\right\}_{ij} = \frac{1}{\mu\,\sigma_i + \mu'\,\sigma_j}\,U_{i\alpha}(\mu')\,c_{\alpha\beta}\,V_{\beta j}(\mu).   (3.47)

Now define the matrix V_k(μ), all of whose elements are zero except the kth column, which equals the kth column of the V(μ) matrix, i.e.,

    \left[V_k(\mu)\right]_{ij} = V_{ik}(\mu)\,\delta_{jk} = \begin{pmatrix} 0 & \cdots & 0 & V_{1k} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & V_{2k} & 0 & \cdots & 0 \\ & & & \vdots & & & \\ 0 & \cdots & 0 & V_{Nk} & 0 & \cdots & 0 \end{pmatrix}.   (3.48)

With this definition the integrand (3.47) becomes

    \sum_{k=1}^{N} D_k(\mu,\mu')\,U(\mu')\,C\,V_k(\mu),   (3.49)

where the diagonal matrix D_k(μ,μ') is defined as

    \left[D_k(\mu,\mu')\right]_{ij} = \frac{\delta_{ij}}{\mu\,\sigma_i + \mu'\,\sigma_k}.   (3.50)

Using this notation, Eq. (3.46) can be written in the form

    V(\mu) = \sum_{i=1}^{N} V_i(\mu) = E + \int_0^1 d\mu'\,\mu' \sum_{i=1}^{N} D_i(\mu,\mu')\,U(\mu')\,C\,V_i(\mu).*   (3.51)

*For example, with N = 2 the integrand is D_1(μ,μ')U(μ')CV_1(μ) + D_2(μ,μ')U(μ')CV_2(μ), where V_1(μ) carries the first column of V(μ) and V_2(μ) the second.

Similarly, the transpose of the U-equation, Eq. (3.27), can be written without the direct product. The equation for U(μ) is

    U(\mu) = E + \mu\int_0^1 d\mu'\,\Lambda(\mu',\mu)*\left[U(\mu)\,C\,V(\mu')\right].   (3.52)

By explicitly expanding the integrand of this equation, it can be verified to be equal to

    \sum_{k=1}^{N} U_k(\mu)\,C\,V(\mu')\,D_k(\mu,\mu'),   (3.53)

where the matrix U_k is defined by

    \left[U_k(\mu)\right]_{ij} = U_{kj}(\mu)\,\delta_{ik} = \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ U_{k1} & \cdots & U_{kN} \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{pmatrix}.   (3.54)

With this notation, Eq. (3.52) becomes

    U(\mu) = \sum_{i=1}^{N} U_i(\mu) = E + \mu\int_0^1 d\mu' \sum_{i=1}^{N} U_i(\mu)\,C\,V(\mu')\,D_i(\mu,\mu').   (3.55)

The two new matrix equations for U and V, Eqs. (3.51) and (3.55), can be reduced to systems of N vector equations. This is possible because of the particularly simple forms of the matrices V_k and U_k. If one defines the vectors v_i and u_i as

    v_i(\mu) = \begin{pmatrix} V_{1i}(\mu) \\ V_{2i}(\mu) \\ \vdots \\ V_{Ni}(\mu) \end{pmatrix} \qquad \text{and} \qquad u_i(\mu) = \begin{pmatrix} U_{i1}(\mu) \\ U_{i2}(\mu) \\ \vdots \\ U_{iN}(\mu) \end{pmatrix},   (3.56)

then Eqs. (3.51) and (3.55) become (for i = 1 ~ N)

    v_i(\mu) = e_i + \int_0^1 d\mu'\,\mu'\,D_i(\mu,\mu')\,U(\mu')\,C\,v_i(\mu)   (3.57)

and

    u_i(\mu) = e_i + \mu\int_0^1 d\mu'\,D_i(\mu,\mu')\,V^{T}(\mu')\,C^{T}\,u_i(\mu).   (3.58)

This set of vector equations is exactly equivalent to the U and V matrix equations which involved direct products (Eqs. (3.26) and (3.27)). The convergence of an iterated solution of Eqs. (3.57) and (3.58) will be exactly the same as that of Eqs. (3.26) and (3.27). So far nothing new has been gained, save some practice in matrix manipulation. However, these vector equations can yield a better iteration scheme, which should converge faster and require less storage space in the computer. Equations (3.57) and (3.58) may now be written, upon factoring v_i(μ) or u_i(μ), as

    e_i = \left[E - \int_0^1 d\mu'\,\mu'\,D_i(\mu,\mu')\,U(\mu')\,C\right] v_i(\mu)   (3.59)

and

    e_i = \left[E - \mu\int_0^1 d\mu'\,D_i(\mu,\mu')\,V^{T}(\mu')\,C^{T}\right] u_i(\mu).   (3.60)

Then, solving for u_i and v_i, one obtains

    v_i(\mu) = \left[F_i(\mu)\right]^{-1} e_i   (3.61)

and

    u_i(\mu) = \left[G_i(\mu)\right]^{-1} e_i, \qquad i = 1 \sim N,   (3.62)

where the matrices F_i(μ) and G_i(μ) are defined as

    F_i(\mu) = E - \int_0^1 d\mu'\,\mu'\,D_i(\mu,\mu')\,U(\mu')\,C   (3.63)

and

    G_i(\mu) = E - \mu\int_0^1 d\mu'\,D_i(\mu,\mu')\,V^{T}(\mu')\,C^{T}.   (3.64)

Several three-group cases for different values of the transfer matrix C have been solved numerically for U(μ) and V(μ), both from the above equations and from Eqs. (3.26) and (3.27). In all instances, not only was the time required for the convergence of U(μ) and V(μ) less for Eqs. (3.61) and (3.62), but the "convergence rate," defined as the negative of the logarithm of the asymptotic reduction in error per iteration, was better by a factor of 3 or 4. These differences and the method of solution are discussed in detail in Chapter VI.
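To make the successive-iteration ("power method") scheme concrete, the sketch below applies it to the one-speed limit, Eq. (3.28), and uses the one-speed form of the identity (3.31), c∫H dμ = 1 − (1 − 2c)^{1/2}, as the convergence check suggested earlier. This is an illustrative sketch only: the quadrature order and iteration count are arbitrary choices, and in the normalization of Eq. (3.28) the quantity 2c plays the role of the mean number of secondaries per collision, so c < 1/2 for a subcritical medium.

```python
import numpy as np

def h_function(c, n_nodes=64, n_iter=3000):
    """Solve the one-speed limit of the U,V equations, Eq. (3.28),
        H(mu) = 1 + c*mu*H(mu) * int_0^1 H(mu')/(mu + mu') dmu',
    by successive substitution (the "power method"), starting from H = 1."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    mu, w = 0.5 * (x + 1.0), 0.5 * w          # Gauss nodes mapped to (0, 1)
    H = np.ones(n_nodes)
    for _ in range(n_iter):
        # quadrature approximation of int_0^1 H(mu')/(mu + mu') dmu'
        integral = ((w * H)[None, :] / (mu[:, None] + mu[None, :])).sum(axis=1)
        H = 1.0 + c * mu * H * integral
    return mu, w, H

mu, w, H = h_function(0.45)
# one-speed form of the identity (3.31), used as the convergence check:
#   c * int_0^1 H(mu) dmu = 1 - sqrt(1 - 2c)
moment = 0.45 * np.sum(w * H)
```

Starting the iteration from H = 1 produces a monotonically increasing sequence, and the moment identity selects the physical root of the (nonunique) nonlinear equation, which is why such a check is valuable in practice.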

Finally, before ending this section, the question of the uniqueness of solution of the nonlinear integral equation for the S-function (and hence for the U and V-functions) should be discussed. As previously mentioned, Eq. (3.16) for the generalized S-function corresponds to Chandrasekhar's one-speed S-function equation. Since this nonlinear equation for Chandrasekhar's S-function does not have a unique solution,31-32 one suspects that Eq. (3.16) (and its associated U and V equations) also may not have a unique solution. It is possible, however, to give a set of necessary conditions which the required S-function must satisfy.

The discrete eigenfunctions φ(ν_os,μ)e^{−x/ν_os}, s = 1 ~ M, are solutions to the transport equation; and since they tend to zero for large x, they are solutions to half-space albedo problems with incident distributions given by φ(ν_os,μ), μ > 0. Thus, from Eq. (3.9), the S-function must satisfy

    \phi(\nu_{os},-\mu) = \int_0^1 \frac{d\mu'}{\mu'}\,S(\mu',\mu)\,\phi(\nu_{os},\mu'), \qquad \mu > 0,\ s = 1 \sim M.   (3.67)

Integrating this condition over μ from 0 to 1 and using the definition of the V-matrix, Eq. (3.18), one finds

    \int_0^1 d\mu\,\phi(\nu_{os},-\mu) + \int_0^1 d\mu\,\phi(\nu_{os},\mu) = \int_0^1 d\mu'\,V(\mu')\,\phi(\nu_{os},\mu'),   (3.68)

or, from Eq. (2.23),

    \int_{-1}^{1} d\mu\,\phi(\nu_{os},\mu) = \int_0^1 d\mu'\,V(\mu')\,\phi(\nu_{os},\mu').   (3.69)

Similarly, consideration of the discrete adjoint eigenfunctions φ†(−ν_os,μ)e^{x/ν_os}, s = 1 ~ M, as half-space adjoint albedo problem solutions, through Eq. (3.37), gives as a necessary condition for the S†-matrix

    \phi^{\dagger}(-\nu_{os},\mu) = \frac{1}{\mu}\int_0^1 d\mu'\,S^{\dagger}(\mu',\mu)\,\phi^{\dagger}(-\nu_{os},-\mu'), \qquad \mu > 0,\ s = 1 \sim M.   (3.70)

Using the relation between S and S†, Eq. (3.41), the above condition becomes

    \mu^{2}\,\phi^{\dagger}(-\nu_{os},\mu) = \int_0^1 d\mu'\,\mu'\,S^{T}(\mu,\mu')\,\phi^{\dagger}(-\nu_{os},-\mu').   (3.71)

Integration over μ from 0 to 1, together with definition (3.17), gives a condition on U:

    \int_0^1 d\mu\,\mu\,\phi^{\dagger}(-\nu_{os},\mu) + \int_0^1 d\mu\,\mu\,\phi^{\dagger}(-\nu_{os},-\mu) = \int_0^1 d\mu\,U^{T}(\mu)\,\mu\,\phi^{\dagger}(-\nu_{os},-\mu).   (3.72)

Equations (3.67) and (3.71) are 2M conditions which the generalized S-function must satisfy. Equations (3.69) and (3.72) similarly place necessary conditions on the U and V matrices. In the one-speed case Eqs. (3.67) and (3.71) become identical, and it has been proved that these equations are a sufficient condition to specify uniquely the real physical S-function of the nonlinear S-equation.24,34 Also, for the degenerate kernel approximation, Pahor, using a corresponding set of discrete eigenfunction conditions, proved that these conditions were sufficient for uniquely specifying his generalized S-function.25 Although it has not been possible to show that the discrete root conditions for the multigroup case are a set of sufficient conditions, it is felt that they are a severe restriction on the possible solutions (if

indeed there be more than one solution), and in all likelihood they are sufficient.

3.4 FREDHOLM EQUATIONS FOR THE U AND V-FUNCTIONS

In this section Fredholm equations are derived for the S, U, and V functions. These equations, while not soluble in closed form, are of a type which usually does not suffer from the shortcomings of nonlinear integral equations (e.g., possible nonuniqueness of solution).

A Fredholm equation for S(μ',μ) can easily be obtained by considering an eigenvector expansion for the ith albedo problem. Since the eigenvectors are full-range complete, the solution to the ith albedo problem can be expanded in terms of the eigenfunctions which satisfy the boundary condition at infinity, Eq. (3.2):

    \psi_i(\mu_0;x,\mu) = \sum_{s=1}^{M} a_s(\mu_0)\,\phi(\nu_{os},\mu)\,e^{-x/\nu_{os}} + \sum_{j=1}^{N}\int_{\tau_{j-1}}^{\tau_j} d\nu \sum_{m=j}^{N} A_j^{m}(\nu)\,\phi_j^{m}(\nu,\mu)\,e^{-x/\nu}.   (3.73)

Here it has been assumed that the half-space has such a composition that all the roots ν_os, s = 1 ~ M, are finite and distinct. Setting x = 0 in Eq. (3.73) and using the full-range orthogonality relations plus boundary condition (3.1), the expansion coefficients are:

    a_s(\mu_0) = \frac{1}{N_{os}}\left[\mu_0\,\phi^{\dagger T}(\nu_{os},\mu_0)\,e_i - \int_0^1 d\mu\,\mu\,\phi^{\dagger T}(\nu_{os},-\mu)\,\psi_i(\mu_0;0,-\mu)\right]   (3.74)

and

    A_j^{m}(\nu) = \frac{1}{N_j^{m}(\nu)}\left[\mu_0\,\phi_j^{m\dagger T}(\nu,\mu_0)\,e_i - \int_0^1 d\mu\,\mu\,\phi_j^{m\dagger T}(\nu,-\mu)\,\psi_i(\mu_0;0,-\mu)\right].   (3.75)

When these coefficients are substituted into Eq. (3.73), with x = 0, the following nonhomogeneous Fredholm equation for the emergent distribution is obtained:

    \psi_i(\mu_0;0,-\mu) = \mu_0\,F(\mu_0,\mu)\,e_i - \int_0^1 d\mu'\,\mu'\,K(\mu',\mu)\,\psi_i(\mu_0;0,-\mu'), \qquad \mu > 0.   (3.76)

The matrices F(μ_0,μ) and K(μ',μ) have been defined as:

    F(\mu_0,\mu) = \sum_{s=1}^{M} \frac{1}{N_{os}}\,\phi(\nu_{os},-\mu)\,\phi^{\dagger T}(\nu_{os},\mu_0) + \sum_{j=1}^{N}\int_{\tau_{j-1}}^{\tau_j} d\nu \sum_{m=j}^{N} \frac{1}{N_j^{m}(\nu)}\,\phi_j^{m}(\nu,-\mu)\,\phi_j^{m\dagger T}(\nu,\mu_0)   (3.77)

and

    K(\mu',\mu) = \sum_{s=1}^{M} \frac{1}{N_{os}}\,\phi(\nu_{os},-\mu)\,\phi^{\dagger T}(\nu_{os},-\mu') + \sum_{j=1}^{N}\int_{\tau_{j-1}}^{\tau_j} d\nu \sum_{m=j}^{N} \frac{1}{N_j^{m}(\nu)}\,\phi_j^{m}(\nu,-\mu)\,\phi_j^{m\dagger T}(\nu,-\mu').   (3.78)

Both of these matrices can be verified to be continuous functions when their arguments are positive.

It is also possible to obtain a singular integral equation for the emergent distribution by considering the incident distribution as given by Eqs. (3.73), (3.74), and (3.75); explicitly,

    \delta(\mu-\mu_0)\,e_i = \mu_0\,F(\mu_0,-\mu)\,e_i - \int_0^1 d\mu'\,\mu'\,K(\mu',-\mu)\,\psi_i(\mu_0;0,-\mu'), \qquad \mu > 0.   (3.79)
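Equation (3.76) is a (matrix) Fredholm equation of the second kind, and the standard numerical approach to such equations is the Nyström method: replace the integral by a quadrature rule and solve the resulting linear system for the unknown at the quadrature nodes. The following sketch is not from this report; it uses a scalar, degenerate test kernel chosen purely so that the answer can be checked in closed form:

```python
import numpy as np

def nystrom_solve(kernel, f, lam, n=32):
    """Nystrom method for a Fredholm equation of the second kind,
        u(x) = f(x) + lam * int_0^1 K(x, y) u(y) dy:
    approximate the integral by Gauss-Legendre quadrature and solve the
    resulting n x n linear system for u at the quadrature nodes."""
    x, w = np.polynomial.legendre.leggauss(n)
    x, w = 0.5 * (x + 1.0), 0.5 * w           # map quadrature to (0, 1)
    A = np.eye(n) - lam * kernel(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(A, f(x))

# hypothetical test kernel K(x,y) = x*y (degenerate), for which the exact
# solution of u = f + lam*int K u with f(x) = x is u(x) = x / (1 - lam/3)
x, u = nystrom_solve(lambda x, y: x * y, lambda x: x, 0.5)
```

A sign convention like the minus in Eq. (3.76) is absorbed into the parameter lam; for the matrix problem the same construction applies with block rows and columns in place of scalar entries.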

Recently Case has proposed a new method for obtaining solutions to transport problems,25 with which he derives the same pair of equations for the emergent distribution, expressed in terms of the infinite medium Green's function. When the explicit expression for the Green's function (Chapter II, Sec. 4) is substituted into his equations, Eqs. (3.76) and (3.79) are obtained.

In the one-speed case, the singular integral equation (3.79) and the Fredholm equation (3.76) may be solved together in closed form.23 However, for the multigroup situation no closed form solutions have been obtained, and to determine the emergent distribution numerical procedures must be used.

Finally, the N albedo problems may be treated collectively by using the generalized S-function. From the definition of the S-matrix, Eq. (3.8), the Fredholm equation (3.76) yields

    S(\mu_0,\mu) = \mu_0^{2}\,F(\mu_0,\mu) - \int_0^1 d\mu'\,\mu'\,K(\mu',\mu)\,S(\mu_0,\mu'), \qquad \mu_0 > 0.   (3.80)

This equation may be solved numerically by standard techniques. However, a simpler set of Fredholm equations may be obtained by decomposing the S-function into the U and V functions.

The Fredholm equation for the U-function can be obtained from Eq. (3.80) very readily. Multiply (3.80) by 1/μ_0, integrate over μ_0 from 0 to 1, and use the definition of U, Eq. (3.17); the result is

    U(\mu) = E + \sum_{s=1}^{M} \frac{1}{N_{os}}\,\phi(\nu_{os},-\mu)\left[\int_0^1 d\mu_0\,\mu_0\,\phi^{\dagger T}(\nu_{os},\mu_0) + \int_0^1 d\mu'\,\mu'\,\phi^{\dagger T}(\nu_{os},-\mu')\right] + \sum_{j=1}^{N}\int_{\tau_{j-1}}^{\tau_j} d\nu \sum_{m=j}^{N} \frac{1}{N_j^{m}(\nu)}\,\phi_j^{m}(\nu,-\mu)\left[\int_0^1 d\mu_0\,\mu_0\,\phi_j^{m\dagger T}(\nu,\mu_0) + \int_0^1 d\mu'\,\mu'\,\phi_j^{m\dagger T}(\nu,-\mu')\right] - \int_0^1 d\mu'\,\mu'\,K(\mu',\mu)\,U(\mu').   (3.81)

Combining the terms in the square brackets, Eq. (3.81) becomes

    U(\mu) = E + \int_0^1 d\mu'\,\mu'\left[F(\mu',\mu) + K(\mu',\mu)\right] - \int_0^1 d\mu'\,\mu'\,K(\mu',\mu)\,U(\mu').   (3.82)

To find a corresponding Fredholm equation for V(μ), the adjoint albedo problem must be considered. We expand the ith adjoint albedo problem in terms of the decaying adjoint eigenfunctions (which are also full-range complete) as

    \psi_i^{\dagger}(-\mu_0;x,\mu) = \sum_{s=1}^{M} a_s^{\dagger}(\mu_0)\,\phi^{\dagger}(\nu_{os},\mu)\,e^{-x/\nu_{os}} + \sum_{j=1}^{N}\int_{\tau_{j-1}}^{\tau_j} d\nu \sum_{m=j}^{N} A_j^{m\dagger}(\nu)\,\phi_j^{m\dagger}(\nu,\mu)\,e^{-x/\nu}.   (3.83)

Proceeding as before in the ordinary albedo problem, one solves for the expansion coefficients, uses the boundary condition (3.33), and treats the N adjoint albedo problems collectively. The result expressed in terms of the adjoint S†-function is

    S^{\dagger}(\mu_0,\mu) = \mu_0^{2}\,F^{\dagger}(\mu_0,\mu) - \int_0^1 d\mu'\,\mu'\,K^{\dagger}(\mu',\mu)\,S^{\dagger}(\mu_0,\mu'), \qquad \mu_0 > 0,   (3.84)

where

    F^{\dagger}(\mu_0,\mu) = \sum_{s=1}^{M} \frac{1}{N_{os}}\,\phi^{\dagger}(\nu_{os},-\mu)\,\phi^{T}(\nu_{os},\mu_0) + \sum_{j=1}^{N}\int_{\tau_{j-1}}^{\tau_j} d\nu \sum_{m=j}^{N} \frac{1}{N_j^{m}(\nu)}\,\phi_j^{m\dagger}(\nu,-\mu)\,\phi_j^{mT}(\nu,\mu_0)   (3.85)

and K†(μ',μ) = Kᵀ(μ,μ'). Multiplying by 1/μ_0 and integrating over μ_0 from 0 to 1, an equation for U†(μ) is obtained. Since U†(μ) = Vᵀ(μ), this last result can be written as

    V(\mu) = E + \int_0^1 d\mu'\,\mu'\left[F^{\dagger T}(\mu',\mu) + K(\mu,\mu')\right] - \int_0^1 d\mu'\,\mu'\,V(\mu')\,K(\mu,\mu').   (3.86)

Equations (3.82) and (3.86) are a pair of nonsingular Fredholm integral equations which could, in principle, be used to evaluate the U and V matrices by the standard methods of Fredholm theory.35 However, the matrix K(μ,μ') involved in these equations is quite difficult to evaluate numerically, since all the eigenvectors must first be calculated. Once K(μ,μ') has been evaluated, though, its norm is readily found, and the uniqueness of solution and the uniform convergence of the Neumann series of successive approximations can be assured by a check of the sufficient convergence condition35

    \max_{1 \le i \le N}\ \sum_{j=1}^{N} \left[\int_0^1\!\!\int_0^1 d\mu\,d\mu'\,K_{ij}^{2}(\mu,\mu')\right]^{1/2} < 1.   (3.87)

3.5 AN APPROXIMATE EQUATION FOR U AND V

In the previous section two Fredholm equations were derived for the U(μ) and V(μ) functions. From these equations a first order approximation for U and V can be obtained by considering only the asymptotic or dominant eigenfunction in the matrix K(μ,μ'). Eqs. (3.82) and (3.86) become, upon retaining

only the terms containing the largest eigenvalue ν_01,

    U(\mu) \approx E + \frac{1}{N_{o1}}\,\phi(\nu_{01},-\mu)\left[\int_0^1 d\mu_0\,\mu_0\,\phi^{\dagger T}(\nu_{01},\mu_0) + \int_0^1 d\mu'\,\mu'\,\phi^{\dagger T}(\nu_{01},-\mu')\right] - \frac{1}{N_{o1}}\,\phi(\nu_{01},-\mu)\int_0^1 d\mu'\,\mu'\,\phi^{\dagger T}(\nu_{01},-\mu')\,U(\mu')   (3.88)

and

    V(\mu) \approx E + \frac{1}{N_{o1}}\left[\int_0^1 d\mu_0\,\mu_0\,\phi(\nu_{01},\mu_0) + \int_0^1 d\mu'\,\mu'\,\phi(\nu_{01},-\mu') - \int_0^1 d\mu'\,\mu'\,V(\mu')\,\phi(\nu_{01},-\mu')\right]\phi^{\dagger T}(\nu_{01},-\mu).   (3.89)

First consider Eq. (3.88). If the constant row vector h is defined as

    h = \int_0^1 d\mu'\,\mu'\,\phi^{\dagger T}(\nu_{01},-\mu')\,U(\mu'),   (3.90)

multiplication of Eq. (3.88) from the left by μφ†ᵀ(ν_01,−μ) and integration over μ from 0 to 1 yields for h

    h = \frac{\int_0^1 d\mu\,\mu\,\phi^{\dagger T}(\nu_{01},-\mu)\left[E + \frac{1}{N_{o1}}\,\phi(\nu_{01},-\mu)\,a^{\dagger}(\nu_{01})\right]}{1 + \frac{1}{N_{o1}}\int_0^1 d\mu\,\mu\,\phi^{\dagger T}(\nu_{01},-\mu)\,\phi(\nu_{01},-\mu)}.   (3.91)

Substituting this quantity into Eq. (3.88), collecting terms, and recalling

    a^{\dagger}(\nu_{01}) = \int_0^1 d\mu\,\mu\,\phi^{\dagger T}(\nu_{01},\mu) + \int_0^1 d\mu\,\mu\,\phi^{\dagger T}(\nu_{01},-\mu),   (3.92)

one finds

    U(\mu) \approx E + \frac{\phi(\nu_{01},-\mu)\int_0^1 d\mu'\,\mu'\,\phi^{\dagger T}(\nu_{01},\mu')}{N_{o1} + \int_0^1 d\mu'\,\mu'\,\phi^{\dagger T}(\nu_{01},-\mu')\,\phi(\nu_{01},-\mu')}.   (3.93)

However, the full-range normalization integral gives

    N_{o1} + \int_0^1 d\mu\,\mu\,\phi^{\dagger T}(\nu_{01},-\mu)\,\phi(\nu_{01},-\mu) = \int_0^1 d\mu\,\mu\,\phi^{\dagger T}(\nu_{01},\mu)\,\phi(\nu_{01},\mu).   (3.94)

Thus Eq. (3.93) becomes

    U(\mu) \approx E + \frac{\phi(\nu_{01},-\mu)\int_0^1 d\mu'\,\mu'\,\phi^{\dagger T}(\nu_{01},\mu')}{\int_0^1 d\mu'\,\mu'\,\phi^{\dagger T}(\nu_{01},\mu')\,\phi(\nu_{01},\mu')}.   (3.95)

In an entirely similar manner, an approximate expression for V(μ) is obtained:

    V(\mu) \approx E + \frac{\left[\int_0^1 d\mu'\,\mu'\,\phi(\nu_{01},\mu')\right]\phi^{\dagger T}(\nu_{01},-\mu)}{\int_0^1 d\mu'\,\mu'\,\phi^{\dagger T}(\nu_{01},\mu')\,\phi(\nu_{01},\mu')}.   (3.96)

These approximations for U and V have been found to work well only for very large discrete roots.* Large discrete roots are thought to occur in media which are close to being critical, i.e., media in which either the absorption is very weak (pure water) or the fission almost compensates for the absorption. And it is precisely for those systems characterized by large discrete roots that the iterative solution of the nonlinear integral equations for U and V is very slow in converging. Thus it is important that the initial guess for the starting iteration be a "good" one; Eqs. (3.95) and (3.96) provide such a starting point.

*Recall that in the one-speed case the nonlinear integral equation was very slow in converging for 2c (≡ σ_s/σ) → 1. But taking the limit of Eq. (3.95) as ν_01 → ∞ yields for the one-speed case H(μ) → 1 + 2μ. This approximate expression has a maximum deviation from the correct value of the H-function of less than 3.5%.

CHAPTER IV

SYMMETRIC TRANSFER

In this chapter the case of a symmetric transfer matrix is considered. This particular form of C is not so restrictive as it may appear at first glance. For instance, all two-group problems may be transformed into such a case. Also, the N-group thermal neutron problem and the thermal N-term degenerate kernel approximation for a nonmultiplying medium can be cast into a form with a symmetric transfer matrix. This symmetric C also appears in the special astrophysical situation of radiative transfer with local thermodynamic equilibrium, the generalized picket fence model for the absorption coefficient, and isotropic scattering.20

A symmetric transfer matrix greatly simplifies the results of the preceding chapter. In particular, all the equations for U and V become self-adjoint, and thus there is no need to consider an associated adjoint problem. It is also possible to show that for this case all the discrete eigenvalues are either real or imaginary, never complex. Finally, a proof is given in Section 4.4 that the solution of the resultant Fredholm equation for the U(μ) (or V(μ)) matrix exists and that this solution is unique. This uniqueness proof in turn can be shown to imply the half-range completeness of the infinite medium eigenvectors.

4.1 MULTIGROUP EQUATIONS FOR THERMAL NEUTRONS

Before discussing the properties of the multigroup transport equation, a brief derivation of this equation for the case of thermal neutrons in a nonmultiplying medium will be given. Thermal neutron transport theory is usually handled in two ways. The first method, which is probably more familiar to the reactor physicist, involves defining effective group constants and directly reducing the energy dependent transport equation to a set of "multigroup" equations without any explicit energy dependence. The second approach involves an expansion of the energy dependence of the flux into an N-term polynomial expansion.36 This equation in energy polynomials can then be cast into the form of a multigroup equation with a symmetric transfer matrix.

For the case of plane geometry and isotropic scattering, the source-free energy-dependent Boltzmann equation can be written as follows:

    \left(\mu\frac{\partial}{\partial x} + \Sigma_t(E)\right)\psi(x,\mu,E) = \frac{1}{2}\int_{-1}^{1} d\mu' \int_0^{\infty} dE'\,\Sigma_s(E' \to E)\,\psi(x,\mu',E'),   (4.1)

where Σ_s(E'→E) is the scattering kernel for scattering from energy E' to E. Using the usual multigroup technique,6,37 the energy variable is split into N regions. Integrating Eq. (4.1) over the ith region, the ith multigroup equation is obtained:

    \left(\mu\frac{\partial}{\partial x} + \sigma_i\right)\psi_i(x,\mu) = \sum_{j=1}^{N} c_{ij}\int_{-1}^{1} d\mu'\,\psi_j(x,\mu'),   (4.2)

where the group parameters are defined as

    \psi_i(x,\mu) = \int_{\Delta E_i} dE\,\psi(x,\mu,E),   (4.3)

    \sigma_i = \frac{1}{\psi_i(x,\mu)}\int_{\Delta E_i} dE\,\Sigma_t(E)\,\psi(x,\mu,E),   (4.4)

    c_{ij} = \frac{1}{2\,\psi_j(x,\mu')}\int_{\Delta E_i} dE \int_{\Delta E_j} dE'\,\Sigma_s(E' \to E)\,\psi(x,\mu',E').   (4.5)

To make the multigroup constants σ_i and c_ij independent of x and μ, it is usual to assume that the energy dependence of the angular flux is separable. Further, for a system in thermal equilibrium a good first approximation is to assume this energy dependence is Maxwellian with an effective temperature T. With these assumptions the multigroup parameters are given by

    c_{ij} = \frac{1}{2\,\alpha_j}\int_{\Delta E_i} dE \int_{\Delta E_j} dE'\,\Sigma_s(E' \to E)\,M(E',T),   (4.6)

    \sigma_i = \frac{1}{\alpha_i}\int_{\Delta E_i} dE\,\Sigma_t(E)\,M(E,T),   (4.7)

    \alpha_i = \int_{\Delta E_i} dE\,M(E,T).   (4.8)

The cross section Σ_s(E'→E) must obey the detailed balance relation38

    \Sigma_s(E' \to E)\,M(E',T) = \Sigma_s(E \to E')\,M(E,T),   (4.9)

or

    \alpha_j\,c_{ij} = \alpha_i\,c_{ji}.   (4.10)

Finally, defining the symmetric matrix

    \left[A\right]_{ij} = \alpha_j\,c_{ij},   (4.11)

the transfer matrix can be written in the special form

    C = A\,D,   (4.12)

where D is a diagonal matrix with elements α_i^{-1}.

To show that this form of the multigroup equation can be reduced to the required form (i.e., ordered Σ matrix and symmetric C), consider the case where the transfer matrix C is written as C = D_1 A D_2, where D_1 and D_2 are diagonal matrices with positive diagonal elements and A is a symmetric matrix. Clearly this is a slightly more general case than that for thermal neutrons derived above. The elements of the σ-matrix generally will not be ordered, but will be arranged as

    \sigma_k \ge \sigma_l \ge \cdots \ge \sigma_m > 0, \qquad 1 \le k, l, \ldots, m \le N.   (4.13)

To order them, construct a permutation matrix P such that

    P_{1i} = \delta_{ik}, \quad P_{2i} = \delta_{il}, \quad \ldots, \quad P_{Ni} = \delta_{im}, \qquad i = 1 \sim N.   (4.14)

Multiplying Eq. (4.2) from the left by P, one obtains

    \left(\mu\frac{\partial}{\partial x}\,E + \Sigma'\right)\psi'(x,\mu) = D_1' A' D_2' \int_{-1}^{1} d\mu'\,\psi'(x,\mu'),   (4.15)

where

    \psi'(x,\mu) = P\,\psi(x,\mu), \qquad \Sigma' = P\,\Sigma\,P^{-1}, \qquad A' = P\,A\,P^{-1}, \qquad D_i' = P\,D_i\,P^{-1}, \quad i = 1,2.   (4.16)

Since P^{-1} = Pᵀ, it follows by inspection that Σ' is diagonal with ordered elements

    \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_N.   (4.17)

Further, D_1' and D_2' are diagonal matrices with positive diagonal elements, and A' is symmetric.

Now define the diagonal matrices D_i'^{\pm 1/2} as

    \left[D_i'^{1/2}\right]_{jk} = \left[D_i'\right]_{jj}^{1/2}\,\delta_{jk}, \qquad i = 1,2.   (4.18)

Multiplying Eq. (4.15) from the left by D_1'^{-1/2} D_2'^{1/2}, the desired form of the transport equation is obtained:

    \left(\mu\frac{\partial}{\partial x}\,E + \Sigma'\right)\psi''(x,\mu) = A'' \int_{-1}^{1} d\mu'\,\psi''(x,\mu'),   (4.19)

where

    \psi''(x,\mu) = D_1'^{-1/2} D_2'^{1/2}\,\psi'(x,\mu), \qquad A'' = D_1'^{1/2} D_2'^{1/2}\,A'\,D_2'^{1/2} D_1'^{1/2}.   (4.20)

Since diagonal matrices commute and A' is symmetric, A'' is symmetric. Thus the thermal neutron problem can be put into a form with a symmetric transfer matrix.
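A quick numerical check of this reduction (a sketch only; the matrix size and the randomly generated entries are arbitrary choices) confirms that the transformation of Eq. (4.20) both symmetrizes C = D_1 A D_2 and, being a similarity transformation, preserves its eigenvalue spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
d1 = rng.uniform(0.5, 2.0, N)         # positive diagonal of D1
d2 = rng.uniform(0.5, 2.0, N)         # positive diagonal of D2
A = rng.standard_normal((N, N))
A = 0.5 * (A + A.T)                   # symmetric A
C = np.diag(d1) @ A @ np.diag(d2)     # transfer matrix C = D1 A D2

# multiply by T = D1^(-1/2) D2^(1/2): a similarity transformation, so the
# spectrum of C is preserved; because diagonal matrices commute, the result
# equals D1^(1/2) D2^(1/2) A D2^(1/2) D1^(1/2), which is symmetric
T = np.diag(d1 ** -0.5 * d2 ** 0.5)
A2 = T @ C @ np.linalg.inv(T)
```

Since A'' is similar to C, the reduction also shows at once that a transfer matrix of the form D_1 A D_2 has a purely real spectrum.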

If, instead of using the multigroup approach, the energy dependence of ψ(x,μ,E) in Eq. (4.1) is approximated by a finite sum of Laguerre polynomials of order one, i.e.,

    \psi(x,\mu,E) \approx M(E,T) \sum_{i=1}^{N} f_i(x,\mu)\,L_i(E),   (4.21)

a symmetric transfer matrix form of the multigroup equation can also be obtained. Substitution of (4.21) into Eq. (4.1) yields

    \sum_{i=1}^{N}\left(\mu\frac{\partial}{\partial x} + \Sigma_t(E)\right) M(E,T)\,L_i(E)\,f_i(x,\mu) = \frac{1}{2}\sum_{i=1}^{N}\int_{-1}^{1} d\mu'\,f_i(x,\mu') \int_0^{\infty} dE'\,\Sigma_s(E' \to E)\,M(E',T)\,L_i(E').   (4.22)

The explicit appearance of the energy variable is now removed by multiplying Eq. (4.22) by L_j(E) and integrating the result over energy. Assuming the normalization

    \int_0^{\infty} dE\,M(E)\,L_i(E)\,L_j(E) = \delta_{ij},   (4.23)

the result is

    \sum_{i=1}^{N}\left(\delta_{ij}\,\mu\frac{\partial}{\partial x} + V_{ij}\right) f_i(x,\mu) = \sum_{i=1}^{N} A_{ij}\int_{-1}^{1} d\mu'\,f_i(x,\mu'),   (4.24)

where

    V_{ij} = V_{ji} = \int_0^{\infty} dE\,M(E)\,\Sigma_t(E)\,L_i(E)\,L_j(E),   (4.25)

and

    A_{ij} = \frac{1}{2}\int_0^{\infty} dE\,L_j(E) \int_0^{\infty} dE'\,\Sigma_s(E' \to E)\,M(E')\,L_i(E').   (4.26)

By virtue of detailed balance (which Σ_s(E'→E) is assumed to satisfy), A_ij = A_ji. Eq. (4.24) may be written in the matrix form

    \left(\mu\frac{\partial}{\partial x}\,E + V\right) f(x,\mu) = A \int_{-1}^{1} d\mu'\,f(x,\mu'),   (4.27)

where the matrices V and A have elements V_ij and A_ij respectively, and the vector f(x,μ) has components f_i(x,μ). The real symmetric V-matrix can be diagonalized by an orthogonal transformation O as39

    O\,V\,O^{T} = \begin{pmatrix} \sigma_1 & & 0 \\ & \ddots & \\ 0 & & \sigma_N \end{pmatrix} = \Sigma,   (4.28)

where Oᵀ is the transpose (and inverse) of O, and the σ_i are the eigenvalues of V. Since Σ_t(E) > 0 for all energies, it can be shown that σ_i > 0, i = 1 ~ N, and for finite N they are bounded from above because all the V_ij are finite. Multiplying Eq. (4.27) by O, one then has

    \left(\mu\frac{\partial}{\partial x}\,E + \Sigma\right)\psi(x,\mu) = \bar{A} \int_{-1}^{1} d\mu'\,\psi(x,\mu'),   (4.29)

where

    \psi(x,\mu) = O\,f(x,\mu),   (4.30)

    \bar{A} = O\,A\,O^{T},   (4.31)

and Ā is again symmetric.
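The orthogonal diagonalization of Eq. (4.28) is routine numerically; the sketch below (illustrative only, using a randomly generated positive definite stand-in for the V-matrix) applies a symmetric eigensolver and verifies the properties claimed above:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
B = rng.standard_normal((N, N))
V = B @ B.T + N * np.eye(N)       # symmetric positive definite stand-in for V

# orthogonal diagonalization O V O^T = diag(sigma_1, ..., sigma_N), Eq. (4.28)
sigma, vecs = np.linalg.eigh(V)   # columns of vecs are orthonormal eigenvectors
O = vecs.T                        # so O V O^T is diagonal and O^T = O^(-1)
D = O @ V @ O.T
```

A symmetric eigensolver is the right tool here: it guarantees real eigenvalues and an exactly orthogonal transformation, which a general eigensolver does not.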

55

Finally, the σᵢ's may be ordered in exactly the same manner as was done in Eqs. (4.15)-(4.17). Thus once again one has to solve a multigroup equation with a symmetric transfer matrix.

Before ending this section, it is noted that any two-group problem (even those including fission) can always be reduced to the symmetric C situation. The similarity transformation

S = diag(1, √(c₁₂/c₂₁)) (4.32)

will symmetrize any strictly positive C and leave Σ diagonal. On the other hand, if one or both off-diagonal elements of C are zero, the resultant multigroup equations can be solved consecutively by applying one-speed theory.

4.2 DISCRETE EIGENVALUE SPECTRUM

Although it is suspected from physical requirements that all the discrete eigenvalues, ν₀s, s = 1 ~ M, of any subcritical medium must necessarily be real, it has not been possible to prove mathematically that they are never complex. However, for the case of symmetric C, it can be shown that the discrete eigenvalues are always real.

To prove this result, multiply the eigenvector equation for ψ(ν₀s,μ), Eq. (2.5), by ψ*(ν₀s,μ) and integrate over μ from −1 to 1. In this manner the following equation involving ν₀s is obtained:

56

∫₋₁¹ dμ ψ*ᵀ(ν₀s,μ)Σψ(ν₀s,μ) − (1/ν₀s) ∫₋₁¹ dμ μ ψ*ᵀ(ν₀s,μ)ψ(ν₀s,μ)
= ∫₋₁¹ dμ ψ*ᵀ(ν₀s,μ) C ∫₋₁¹ dμ' ψ(ν₀s,μ'), s = 1 ~ M. (4.33)

The right-hand side of Eq. (4.33) is equal to

RHS = Σᵢ₌₁^N Σⱼ₌₁^N cᵢⱼ [∫₋₁¹ dμ ψᵢ*(ν₀s,μ)][∫₋₁¹ dμ' ψⱼ(ν₀s,μ')]. (4.34)

Since cᵢⱼ = cⱼᵢ, the terms of this expansion combine in pairs of complex conjugates; hence every term in the above expansion must be real, regardless of ν₀s. The first integral on the left-hand side of Eq. (4.33) is likewise real, since Σ is diagonal and real. It therefore follows from Eq. (4.33) that

(1/ν₀s) ∫₋₁¹ dμ μ ψ*ᵀ(ν₀s,μ)ψ(ν₀s,μ) is real. (4.35)

The integral in Eq. (4.35), which in view of Eq. (2.14) can be written as

∫₋₁¹ dμ μ ψ*ᵀ(ν₀s,μ)ψ(ν₀s,μ) = ν₀s ν₀s* Σᵢ₌₁^N bᵢ(ν₀s)bᵢ*(ν₀s) ∫₋₁¹ dμ μ / [(ν₀sσᵢ − μ)(ν₀s*σᵢ − μ)], (4.36)

is also real, since it is a sum of products of complex conjugate terms. If the integral of Eq. (4.36) is not zero, it follows then that the eigenvalue, ν₀s, must be real. It will now be shown that this integral can vanish only for purely imaginary eigenvalues. Assume, for the sake of argument, that ν₀s is complex and Re[ν₀s] > 0. It is easily verified that in this case

57

0 < (ν₀sσᵢ − μ)(ν₀s*σᵢ − μ) < (ν₀sσᵢ + μ)(ν₀s*σᵢ + μ), 0 < μ ≤ 1, i = 1 ~ N. (4.37)

Hence each integral in the sum on the right-hand side of Eq. (4.36) is strictly positive, and since at least one of the terms bᵢ(ν₀s)bᵢ*(ν₀s) is also strictly positive, the sum is strictly positive for Re{ν₀s} > 0. Similarly, it can be proved that for Re{ν₀s} < 0 the sum is strictly negative. Thus the integral of Eq. (4.36) never vanishes if Re{ν₀s} ≠ 0. However, if ν₀s is purely imaginary, one has

(ν₀sσᵢ − μ)(ν₀s*σᵢ − μ) = (ν₀sσᵢ + μ)(ν₀s*σᵢ + μ), (4.38)

and each integral on the right-hand side of Eq. (4.36) is zero. Thus one concludes that the discrete eigenvalues, ν₀s, s = 1 ~ M, lie solely on the real or imaginary axis. However, in Appendix A it is proved that for a finite medium to have a stationary solution (i.e., be subcritical) there can be no imaginary eigenvalues. This result is apparent also from the following physical argument. A transport equation solution for an infinite subcritical medium with some source (not at infinity) must physically tend towards zero at large distances from the source. Since this solution could be expanded in terms of all the eigenfunctions, the imaginary roots would lead to modes which oscillate at large distances from the source and in turn lead to negative fluxes. Thus it must be concluded that, for a subcritical infinite medium, the eigenvalues must be real. If a particular choice of C and Σ yields an imaginary (or infinite) eigenvalue, ν₀, then a stationary solution for a medium characterized by this C and Σ does

58

not exist, i.e., the system must be critical or supercritical. For the remainder of this chapter it will be assumed that any medium under consideration has only real eigenvalues.

4.3 SIMPLIFICATION OF THE U AND V EQUATIONS

It is not surprising that the results of the preceding chapters are greatly simplified for the symmetric transfer case, since many of the equations become self-adjoint. In particular, the eigenvector equation, Eq. (2.5), becomes self-adjoint, and from Eq. (2.27)

ψ†(ν,μ) = ψ(ν,μ), ν ∈ (−1,1) or ν = ν₀s, s = 1 ~ M. (4.39)

Using this simplification, the Fredholm equation for the U-matrix, Eq. (3.81), becomes

U(μ) = E + μ ∫₀¹ dμ' K^S(μ',μ) − μ ∫₀¹ dμ' K^S(μ',μ)U(μ'), (4.40)

where

K^S(μ',μ) = Σₛ₌₁^M (1/Nₛ) ψ(ν₀s,−μ)ψᵀ(ν₀s,−μ')
+ Σⱼ₌₁^N ∫_{1/σⱼ₋₁}^{1/σⱼ} dν Σₘ₌ⱼ^N (1/Nⱼᵐ(ν)) ψⱼᵐ(ν,−μ)ψⱼᵐᵀ(ν,−μ'), (4.41)

with 1/σ₀ ≡ 0. The transpose of the Fredholm equation for V(μ), Eq. (3.86), becomes, for symmetric C, identical to Eq. (4.40). It will be shown in the next section that the solution of Eq. (4.40) is unique, and hence from Eq. (3.45) the relation between the U and V matrices becomes quite direct, namely

59

U(μ) = Ṽ(μ). (4.42)

This result demonstrates that for this symmetric case there is only one fundamental matrix quantity, U(μ), which need be computed to obtain the S-function. Thus only one equation (either a Fredholm or a nonlinear integral equation for the U-matrix) need be evaluated. The generalized S-function, in view of Eqs. (3.21)-(3.25) and (4.42), is then given in component form as

(σᵢ/μ + σⱼ/μ₀) Sᵢⱼ(μ₀,μ) = [U(μ)C Ũ(μ₀)]ᵢⱼ. (4.43)

The effect on the S-function of interchanging its variables is easily determined from Eq. (4.43) as

Sᵢⱼ(μ₀,μ) = Sⱼᵢ(μ,μ₀). (4.44)

The symmetric C approximation also greatly reduces the computational work for the solution of the nonlinear integral equation. From Eq. (3.58) the nonlinear integral equation for U(μ) becomes, employing the vectors uᵢ of Eq. (3.56),

uᵢ(μ) = eᵢ + μ U(μ)C ∫₀¹ dμ' D(μ,μ')uᵢ(μ'), i = 1 ~ N. (4.45)

As is expected, the nonlinear integral equation for V(μ) becomes equal to the transpose of Eq. (4.45), and once again there is only one equation to solve.

For the one-speed case, Eq. (4.44) immediately yields the important result that the S(μ₀,μ) function is symmetric with respect to the interchange of its

60

arguments. Ambarzumian assumed this result in the original derivation of the S-equation (the one-speed version of Eq. (3.15)). The result was first proved by Minnaert⁴⁰ using physical arguments and later by Chandrasekhar.²⁷ This invariance result is important in decomposing the one-speed S-function into a product of two identical functions (i.e., U → H and V → H).

4.4 UNIQUENESS OF SOLUTION OF THE U-EQUATION

In this section it will be shown that the solution of the Fredholm equation for U(μ), Eq. (4.40), exists and is unique. This uniqueness for U(μ) in turn implies that the eigenvectors are half-range complete.

Consider first the homogeneous equation

U(μ) = −μ ∫₀¹ dμ' K^S(μ',μ)U(μ'). (4.46)

Assume, for the sake of argument, that the above equation has a nontrivial solution. Eq. (4.46) can be written as a system of vector equations using the vector uᵢ(μ), which is the ith column of the U(μ) matrix (see Eq. (3.56)), namely

uᵢ(μ) = −μ ∫₀¹ dμ' K^S(μ',μ)uᵢ(μ'), i = 1 ~ N. (4.47)

Define now the vector Xᵢ(μ) by uᵢ(μ) = μXᵢ(μ). Multiplying Eq. (4.47) by Xᵢ*ᵀ(μ) and integrating over μ, one obtains, upon explicit substitution of K^S(μ',μ),

61

∫₀¹ dμ μ Xᵢ*ᵀ(μ)Xᵢ(μ) = − Σₛ₌₁^M (1/Nₛ) [∫₀¹ dμ μ ψᵀ(ν₀s,−μ)Xᵢ(μ)] [∫₀¹ dμ' μ' ψᵀ(ν₀s,−μ')Xᵢ(μ')]*
− Σⱼ₌₁^N ∫_{1/σⱼ₋₁}^{1/σⱼ} dν Σₘ₌ⱼ^N (1/Nⱼᵐ(ν)) [∫₀¹ dμ μ ψⱼᵐᵀ(ν,−μ)Xᵢ(μ)] [∫₀¹ dμ' μ' ψⱼᵐᵀ(ν,−μ')Xᵢ(μ')]*. (4.48)

Since all the eigenvalues are real, all the eigenvectors ψ(ν,−μ) are also real. Hence both sides of Eq. (4.48) are composed of terms which are products of complex conjugates. Thus there is an apparent contradiction, in that the right-hand side of Eq. (4.48) must be real and negative, while the left-hand side is strictly positive. This result proves that Xᵢ(μ), i = 1 ~ N, must be identically zero, or equivalently that the homogeneous Eq. (4.47) has only the trivial null vector as a solution.

The inhomogeneous Eq. (4.40) for the U(μ)-matrix can also be written as a vector equation for uᵢ(μ),

uᵢ(μ) = kᵢ(μ) − μ ∫₀¹ dμ' K^S(μ',μ)uᵢ(μ'), i = 1 ~ N, (4.49)

where kᵢ(μ) is the ith column of the matrix E + μ ∫₀¹ dμ' K^S(μ',μ). Such a vector Fredholm equation can be reduced to the usual scalar Fredholm equation³⁵

f(μ) = g(μ) + ∫ₐᵇ dμ' K(μ',μ)f(μ'). (4.50)

Consider the variables μ and μ' ranging over the interval (0,N). Then define the scalar functions f(μ), g(μ), and K(μ',μ) in the new interval by the following prescription:

62

f(μ) = [uᵢ(μ − (j−1))]ⱼ, (4.51)

g(μ) = [kᵢ(μ − (j−1))]ⱼ, (4.52)

K(μ',μ) = [K^S(μ' − (l−1), μ − (j−1))]ⱼₗ, (4.53)

for μ and μ' in the intervals

j−1 < μ < j, (4.54)

and

l−1 < μ' < l. (4.55)

Here the subscripts j and l refer to the various components of the vectors uᵢ and kᵢ and of the matrix K^S(μ',μ). With these definitions the vector equation, Eq. (4.49), takes the form of the scalar equation, Eq. (4.50), with a = 0 and b = N.

One of the well-known properties of Fredholm integral equations of the form of Eq. (4.50) may be stated as follows: the inhomogeneous equation for any arbitrary g(μ) has one and only one solution, f(μ), whenever the corresponding homogeneous equation (g(μ) ≡ 0) has only the trivial solution f(μ) = 0.⁴¹ Thus the solution U(μ) of Eq. (4.40) exists and is unique, since, as was shown above, the corresponding homogeneous equation has only the trivial zero solution.

An immediate consequence of the uniqueness of the solution of Eq. (4.40) for U(μ) is that the coefficients of the ith albedo problem eigenvector expansion in Eq. (3.73) are also uniquely determined. This in turn implies that the

63

eigenvectors ψ(ν,μ), ν > 0, are half-range complete in the sense of Case. This half-range completeness has also recently been proved by Leonard and Ferziger by using a more complicated technique.
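The row-stacking prescription of Eqs. (4.51)-(4.55) has a direct discrete analogue: after quadrature, the N coupled vector Fredholm equations become one linear system whose kernel is the block kernel laid out as a single matrix. A sketch (Python; the kernel and source are random, hypothetical data, not transport quantities):

```python
import numpy as np

# N coupled Fredholm equations, u_j(mu) = k_j(mu)
#   + sum_l int_0^1 K_{jl}(mu', mu) u_l(mu') dmu',
# discretized with m midpoint nodes per component.  Stacking the
# component index alongside the quadrature index turns the block
# kernel into one (N*m) x (N*m) scalar kernel -- the discrete analogue
# of laying the components along the extended interval (0, N).
N, m = 3, 8
rng = np.random.default_rng(1)
w = np.full(m, 1.0 / m)                  # midpoint weights on (0, 1)

K = 0.05 * rng.random((N, m, N, m))      # block kernel K[j, a, l, b]
k = rng.random((N, m))                   # inhomogeneous term k_j(mu_a)

# Stacked scalar solve: (I - K.W) u = k
M = (K * w).reshape(N * m, N * m)        # weights absorbed into kernel
u = np.linalg.solve(np.eye(N * m) - M, k.reshape(-1)).reshape(N, m)

# Residual of the original coupled (blockwise) system
res = u - (k + np.einsum('jalb,b,lb->ja', K, w, u))
```

The stacked solution satisfies the original coupled system blockwise, which is the content of the equivalence between Eqs. (4.49) and (4.50).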

CHAPTER V

VARIOUS HALF-SPACE PROBLEMS

One common technique for obtaining the complete solution of a particular half-space problem is to expand the angular flux in terms of singular eigenfunctions. Generally the emergent distribution of the half-space problem is unknown, and to determine the expansion coefficients uniquely, half-range completeness of the eigenfunctions must be employed. For the degenerate kernel approximation and the symmetric C case, such a half-range completeness proof, which shows explicitly how to obtain the coefficients, has been obtained. However, applying such theorems to obtain closed-form solutions is considerably more difficult than in the one-speed case, and calculating explicit numerical results is highly nontrivial.⁹

To evade such half-range formalism, a different approach can be used. The solution of any problem is obtained in two distinct steps. First, the emergent distribution at the interface is obtained. It will be shown that the emergent distributions for all half-space problems can be expressed directly in terms of the generalized S-function of the preceding chapters. Once the emergent distribution is known, the use of full-range completeness and the orthogonality properties of the eigenvectors readily yields the coefficients of an eigenfunction expansion of the flux. In this chapter the complete solutions to the half-space Milne, albedo, and Green's function problems are obtained in terms of the S-function or the U and V matrices.

64

65

5.1 THE MILNE PROBLEM

For every positive eigenvalue ν ∈ (0,1) or ν = ν₀s, s = 1 ~ M, a Milne problem can be defined. Denoting its solution by ψ_ν(x,μ), it is defined as the solution of the transport equation, Eq. (2.1), with the following boundary conditions:

(i) ψ_ν(0,μ) = 0, μ > 0, (5.1)

(ii) lim_{x→∞} ψ_ν(x,μ) = ψ(−ν,μ)e^{x/ν}, (5.2)

where ψ(−ν,μ) may be any of the eigenvectors, regular or singular. The first step in obtaining the solution is to find the emergent distribution, ψ_ν(0,−μ), μ > 0. Consider a solution of the transport equation, ψ(x,μ), defined as

ψ(x,μ) = ψ_ν(x,μ) + ψₐ(x,μ), (5.3)

where ψₐ(x,μ) is an albedo problem solution with the boundary conditions

(i) ψₐ(0,μ) = ψ(−ν,μ), μ > 0, (5.4)

(ii) lim_{x→∞} ψₐ(x,μ) = 0. (5.5)

Hence from Eq. (5.3), ψ(x,μ) must have the boundary conditions:

(i) ψ(0,μ) = ψ(−ν,μ), μ > 0, (5.6)

(ii) lim_{x→∞} ψ(x,μ) = ψ(−ν,μ)e^{x/ν}. (5.7)

66

Clearly the unique solution for ψ(x,μ) is

ψ(x,μ) = ψ(−ν,μ)e^{x/ν}. (5.8)

Equations (5.3) and (5.8) then yield for the emergent Milne distribution

ψ_ν(0,−μ) = ψ(−ν,−μ) − ψₐ(0,−μ), μ > 0. (5.9)

The emergent albedo distribution, ψₐ(0,−μ), can be expressed in terms of the S-function. From Eqs. (3.9) and (5.4),

ψₐ(0,−μ) = (1/μ) ∫₀¹ dμ' S(μ',μ)ψ(−ν,μ'), (5.10)

and hence the emergent Milne distribution in terms of the S-function is

ψ_ν(0,−μ) = ψ(−ν,−μ) − (1/μ) ∫₀¹ dμ' S(μ',μ)ψ(−ν,μ'). (5.11)

Once the S-function has been determined, this equation could be used to obtain numerical values for the emergent Milne distribution. However, in any computational scheme only the U and V-functions would be obtained, and thus this emergent distribution should be expressed in terms of these single-variable functions. This reduction of Eq. (5.11) leads to a far simpler equation for numerical evaluation.

By comparing Eqs. (3.17), (3.24), (3.51) and (3.53), it is seen that the S(μ',μ) matrix may be written in the form

S(μ',μ) = μ U_k(μ)C V(μ')D_k(μ,μ'), (5.12)

67

where the double-index notation is again used to denote summation. Recall also that the eigenvector ψ(ν,−μ'), ν > 0, μ' > 0, in view of Eqs. (2.6) and (2.9), can be expressed as

ψ(ν,−μ') = F(ν,−μ')C a(ν), ν > 0, μ' > 0. (5.13)

If the diagonal matrix M_k(ν,μ,μ') is defined as

ν M_k(ν,μ,μ') = D_k(μ,μ')F(ν,−μ'), (5.14)

then the integrand of (5.11) is

(1/μ) S(μ',μ)ψ(ν,−μ') = ν U_k(μ)C V(μ')M_k(ν,μ,μ')C a(ν). (5.15)

This expression can be considerably simplified by considering the explicit form of M_k(ν,μ,μ'). Substitution of F and D from Eqs. (2.7) and (3.50) yields (in component form)

[M_k(ν,μ,μ')]ᵢⱼ = μ' δᵢⱼ / [(σᵢμ + σₖμ')(σᵢν + μ')]. (5.16)

The identity

σₖ(σᵢν + μ') − (σᵢμ + σₖμ') = σᵢ(σₖν − μ) (5.17)

may be written as

1/[(σᵢμ + σₖμ')(σᵢν + μ')] = (1/σᵢ) [1/(σₖν − μ)] [σₖ/(σᵢμ + σₖμ') − 1/(σᵢν + μ')]. (5.18)

This result transforms Eq. (5.16) to

68

[M_k(ν,μ,μ')]ᵢⱼ = (μ'/σᵢ) [1/(σₖν − μ)] [σₖ/(σᵢμ + σₖμ') − 1/(σᵢν + μ')] δᵢⱼ, (5.19)

and since

t/(b + dt) = (1/d)[1 − b/(b + dt)], (5.20)

Eq. (5.19) yields

[M_k(ν,μ,μ')]ᵢⱼ = [1/(σₖν − μ)] { (σₖ/σᵢ)[D_k(μ,μ')]ᵢⱼ − (1/σᵢ)δᵢⱼ + [F(ν,−μ')]ᵢⱼ }, (5.21)

or, equivalently,

M_k(ν,μ,μ') = [1/(σₖν − μ)] { σₖΣ⁻¹D_k(μ,μ') − Σ⁻¹ + F(ν,−μ') }. (5.22)

Substitution of this result into (5.15) and use of (5.13) gives

(1/μ) S(μ',μ)ψ(ν,−μ') = [Pν/(σₖν − μ)] U_k(μ)C V(μ'){ σₖΣ⁻¹D_k(μ,μ') − Σ⁻¹ + F(ν,−μ') }C a(ν). (5.23)

Thus the emergent distribution is

ψ_ν(0,−μ) = ψ(−ν,−μ) − [Pν/(σₖν − μ)] U_k(μ)C ∫₀¹ dμ' V(μ'){ σₖΣ⁻¹D_k(μ,μ') − Σ⁻¹ + F(ν,−μ') }C a(ν). (5.24)

This last term may be further simplified by considering the nonlinear integral equation for the U-function. The transpose of Eq. (3.55) is

U(μ) = E + μ U(μ)C ∫₀¹ dμ' V(μ')D(μ,μ'), (5.25)

69

or, solely in terms of U_k(μ),

U_k(μ) = E_k + μ U_k(μ)C ∫₀¹ dμ' V(μ')D_k(μ,μ'), (5.26)

where [E_k]ᵢⱼ = δᵢⱼδᵢₖ. Hence the emergent distribution is

ψ_ν(0,−μ) = ψ(−ν,−μ) − P F(ν,μ)U(μ)C ∫₀¹ dμ' V(μ')ψ(ν,−μ') + [Pν/(σₖν − μ)][U_k(μ) − E_k]C a(ν). (5.27)

However, from Eq. (2.6),

ψ(ν,μ) = P F(ν,μ)C a(ν) + G(ν,μ)λ(ν), (5.28)

or, substituting for F(ν,μ),

ψ(ν,μ) = [Pν/(σₖν − μ)]E_k C a(ν) + G(ν,μ)λ(ν). (5.29)

Combining the last term in (5.27) with ψ(−ν,−μ) = ψ(ν,μ), the emergent distribution simplifies to

ψ_ν(0,−μ) = G(ν,μ)λ(ν) + [Pν/(σₖν − μ)]U_k(μ)C a(ν) − P F(ν,μ)U(μ)C ∫₀¹ dμ' V(μ')ψ(ν,−μ'), (5.30)

or, from (5.13),

ψ_ν(0,−μ) = G(ν,μ)λ(ν) + P F(ν,μ)U(μ)C [E − ∫₀¹ dμ' V(μ')F(ν,−μ')C] a(ν). (5.31)

70

Finally, writing this equation completely in terms of the matrix U(μ), the emergent distribution of the generalized Milne problem is given by the very simple equation

ψ_ν(0,−μ) = G(ν,μ)λ(ν) + P F(ν,μ)U(μ)h(ν), ν > 0, μ > 0, (5.32)

where the constant vector h(ν) is

h(ν) = C [E − ∫₀¹ dμ' V(μ')F(ν,−μ')C] a(ν). (5.33)

Now that the emergent distribution for the generalized Milne problem has been obtained, the angular flux inside the half-space can be obtained. The solution may be expanded in the eigenfunctions which satisfy boundary condition (5.2) as

ψ_ν(x,μ) = ψ(−ν,μ)e^{x/ν} + Σₛ₌₁^M α(ν₀s)ψ(ν₀s,μ)e^{−x/ν₀s}
+ Σⱼ₌₁^N ∫_{1/σⱼ₋₁}^{1/σⱼ} dν' Σₘ₌ⱼ^N Aⱼᵐ(ν')ψⱼᵐ(ν',μ)e^{−x/ν'}. (5.34)

Full-range orthogonality immediately gives the expansion coefficients:

α(ν₀s) = (1/N₀s) ∫₋₁¹ dμ μ ψᵀ(ν₀s,μ)ψ_ν(0,μ)
= −(1/N₀s) ∫₀¹ dμ μ ψᵀ(ν₀s,−μ){ G(ν,μ)λ(ν) + P F(ν,μ)U(μ)h(ν) }, (5.35)

and

Aⱼᵐ(ν') = −(1/Nⱼᵐ(ν')) ∫₀¹ dμ μ ψⱼᵐᵀ(ν',−μ){ G(ν,μ)λ(ν) + P F(ν,μ)U(μ)h(ν) }. (5.36)
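The coefficient extraction above is a quadrature-evaluated orthogonality projection. As a stand-in illustration (Python; Legendre polynomials replace the transport eigenvectors purely for demonstration, and the expansion coefficients are hypothetical):

```python
import numpy as np
from numpy.polynomial import legendre

# Expand f(mu) = sum_l c_l P_l(mu) and recover each c_l from the
# full-range orthogonality int_{-1}^{1} P_l P_m dmu = 2/(2l+1) delta_lm,
# with the integral evaluated by Gauss quadrature -- the same pattern
# used to extract the coefficients from the full-range relations.
c_true = np.array([0.5, -1.0, 0.25, 2.0])   # hypothetical coefficients
x, w = legendre.leggauss(16)
f = legendre.legval(x, c_true)              # f at the quadrature nodes

c_rec = np.empty(len(c_true))
for l in range(len(c_true)):
    Pl = legendre.legval(x, np.eye(len(c_true))[l])   # P_l at nodes
    c_rec[l] = (2 * l + 1) / 2.0 * np.sum(w * Pl * f)
```

Because the 16-point rule integrates polynomials of this degree exactly, the coefficients are recovered to machine precision; the transport case differs only in the weight μ and the normalization integrals N₀s and Nⱼᵐ(ν).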

71

Often the Milne problem of most interest is the one related to the largest discrete eigenvalue, ν°. The asymptotic behavior of this particular problem for large x is

ψ_as(x,μ) = ψ(−ν°,μ)e^{x/ν°} + α(ν°)ψ(ν°,μ)e^{−x/ν°}. (5.37)

A quantity of interest for this problem is the extrapolated end point, x₀, defined as the point at which the asymptotic density vanishes:

ρ_as(x₀) = ∫₋₁¹ dμ ψ_as(x₀,μ) = a(−ν°)e^{x₀/ν°} + α(ν°)a(ν°)e^{−x₀/ν°} = 0. (5.38)

Solving for x₀ and substituting for α(ν°) from Eq. (5.35), the extrapolated end point is

x₀ = (ν°/2) ln[−α(ν°)] = (ν°/2) ln{ (1/N₀) ∫₀¹ dμ μ ψᵀ(ν°,−μ)ψ_ν°(0,−μ) }, (5.39)

or, in terms of the U(μ) matrix,

x₀ = (ν°/2) ln{ (1/N₀) ∫₀¹ dμ μ aᵀ(ν°)C F(ν°,−μ)F(ν°,μ)U(μ)h(ν°) }. (5.40)

5.2 THE ALBEDO PROBLEM

In Chapter III the albedo problem was discussed, and the complete solution obtained. For the sake of completeness the general results are briefly reviewed here. The generalized S-function was defined in terms of the emergent distributions of the N albedo problems. From Eqs. (3.1), (3.3), and (3.9), the emergent distribution for the ith problem, ψᵢ(0,μ₀;0,−μ), is

ψᵢ(0,μ₀;0,−μ) = (1/μ) S(μ₀,μ)eᵢ, μ > 0. (5.41)

72

The complete solution, ψᵢ(0,μ₀;x,μ), can be expanded in an eigenfunction expansion as was done in Eq. (3.75):

ψᵢ(0,μ₀;x,μ) = Σₛ₌₁^M α(ν₀s)ψ(ν₀s,μ)e^{−x/ν₀s}
+ Σⱼ₌₁^N ∫_{1/σⱼ₋₁}^{1/σⱼ} dν Σₘ₌ⱼ^N Aⱼᵐ(ν)ψⱼᵐ(ν,μ)e^{−x/ν}. (5.42)

From Eqs. (5.41) and (3.1), full-range orthogonality gives for these expansion coefficients

α(ν₀s) = (1/N₀s){ μ₀ ψ†ᵀ(ν₀s,μ₀) − ∫₀¹ dμ ψ†ᵀ(ν₀s,−μ)S(μ₀,μ) } eᵢ, (5.43)

and

Aⱼᵐ(ν) = (1/Nⱼᵐ(ν)){ μ₀ ψⱼᵐ†ᵀ(ν,μ₀) − ∫₀¹ dμ ψⱼᵐ†ᵀ(ν,−μ)S(μ₀,μ) } eᵢ. (5.44)

These results can be expressed more simply when the transfer matrix of the problem is symmetric. In this case the expansion coefficients can be expressed in terms of the emergent distribution of the generalized Milne problem. The transpose of this emergent distribution is, from Eq. (5.11),

ψ_νᵀ(0,−μ) = ψᵀ(ν,μ) − (1/μ) ∫₀¹ dμ' ψᵀ(ν,−μ')S̃(μ',μ), (5.45)

or, in view of Eq. (4.44),

ψ_νᵀ(0,−μ) = ψᵀ(ν,μ) − (1/μ) ∫₀¹ dμ' ψᵀ(ν,−μ')S(μ,μ'). (5.46)

Comparison of this equation with Eqs. (5.43) and (5.44) shows that the expansion coefficients can be written in the simple form

73

α(ν₀s) = (μ₀/N₀s) ψ_ν₀sᵀ(0,−μ₀) eᵢ, (5.47)

and

Aⱼᵐ(ν) = (μ₀/Nⱼᵐ(ν)) ψ_ν,ⱼᵐᵀ(0,−μ₀) eᵢ, (5.48)

where ψ_ν,ⱼᵐ(x,μ) is the generalized Milne problem solution associated with the continuum eigenvector ψⱼᵐ(−ν,μ). Finally, using Eqs. (5.32) and (4.42), these coefficients in terms of the U(μ) matrix are

α(ν₀s) = (μ₀/N₀s) hᵀ(ν₀s)Ũ(μ₀)F(ν₀s,μ₀) eᵢ, (5.49)

and

Aⱼᵐ(ν) = (μ₀/Nⱼᵐ(ν)) [G(ν,μ₀)λⱼᵐ(ν) + F(ν,μ₀)U(μ₀)hⱼᵐ(ν)]ᵀ eᵢ, (5.50)

with h(ν) given by Eq. (5.33). For the nonsymmetric C situation, the expansion coefficients for the albedo problem could be expressed in terms of a generalized adjoint Milne problem. However, since this latter problem has no immediate physical meaning, this approach will not be pursued further.

5.3 HALF-SPACE GREEN'S FUNCTION

As a final example of the use of the generalized S-function technique, the half-space Green's function problem will be solved. This problem is the most general half-space problem in the sense that all other half-space problems can, in principle, be generated from this function. The half-space Green's function, with the source neutrons belonging solely to the ith energy group, Gᵢ(x₀,μ₀;x,μ), is defined by the equation

74

[μ ∂/∂x + Σ] Gᵢ(x₀,μ₀;x,μ) = C ∫₋₁¹ dμ' Gᵢ(x₀,μ₀;x,μ') + δ(μ−μ₀)δ(x−x₀)eᵢ, x₀ > 0, (5.51)

with the boundary conditions

(i) Gᵢ(x₀,μ₀;0,μ) = 0, μ > 0, (5.52)

(ii) lim_{x→∞} Gᵢ(x₀,μ₀;x,μ) = 0. (5.53)

The first step towards obtaining the solution is to determine the emergent distribution, Gᵢ(x₀,μ₀;0,−μ), μ > 0. Consider the Green's function to be composed of two parts:

Gᵢ(x₀,μ₀;x,μ) = G̃ᵢ(x₀,μ₀;x,μ) + ψₐ(x,μ), x₀ > 0, (5.54)

where G̃ᵢ(x₀,μ₀;x,μ) is the known infinite-medium Green's function discussed in Section 2.4. The albedo problem solution, ψₐ(x,μ), satisfies the homogeneous transport equation with the boundary conditions

(i) ψₐ(0,μ) = −G̃ᵢ(x₀,μ₀;0,μ), μ > 0,

(ii) lim_{x→∞} ψₐ(x,μ) = 0. (5.55)

Clearly Gᵢ(x₀,μ₀;x,μ) defined by Eq. (5.54) satisfies Eq. (5.51) and has the required boundary conditions. The emergent distribution of the albedo solution, ψₐ(0,−μ), can be expressed in terms of its incident distribution from Eq. (3.9). Hence from Eq. (5.54) the emergent distribution Gᵢ(x₀,μ₀;0,−μ) is

75

Gᵢ(x₀,μ₀;0,−μ) = G̃ᵢ(x₀,μ₀;0,−μ) − (1/μ) ∫₀¹ dμ' S(μ',μ)G̃ᵢ(x₀,μ₀;0,μ'). (5.56)

Since the angular flux for the half-space Green's function is now known at x = 0 for all μ, the complete solution can be found by using the full-range completeness and orthogonality properties of the infinite-medium eigenfunctions. Explicitly,

Gᵢ(x₀,μ₀;x,μ) = G̃ᵢ(x₀,μ₀;x,μ) + Σₛ₌₁^M α(ν₀s)ψ(ν₀s,μ)e^{−x/ν₀s}
+ Σⱼ₌₁^N ∫_{1/σⱼ₋₁}^{1/σⱼ} dν Σₘ₌ⱼ^N Aⱼᵐ(ν)ψⱼᵐ(ν,μ)e^{−x/ν}, (5.57)

where

α(ν₀s) = −(1/N₀s){ ∫₀¹ dμ μ ψ†ᵀ(ν₀s,μ)G̃ᵢ(x₀,μ₀;0,μ) − ∫₀¹ dμ ψ†ᵀ(ν₀s,−μ) ∫₀¹ dμ' S(μ',μ)G̃ᵢ(x₀,μ₀;0,μ') }, (5.58)

and

Aⱼᵐ(ν) = −(1/Nⱼᵐ(ν)){ ∫₀¹ dμ μ ψⱼᵐ†ᵀ(ν,μ)G̃ᵢ(x₀,μ₀;0,μ) − ∫₀¹ dμ ψⱼᵐ†ᵀ(ν,−μ) ∫₀¹ dμ' S(μ',μ)G̃ᵢ(x₀,μ₀;0,μ') }. (5.59)

CHAPTER VI

NUMERICAL RESULTS

In most analytical treatments of transport theory, the extension of results to numerical evaluation is far from trivial. To obtain numerical values for a particular half-space solution, the singular eigenfunction expansion coefficients must be evaluated. This evaluation can be quite difficult since it involves principal value integrals and functions which vary rapidly. Even to calculate the emergent distribution (which often is all that is required), methods based on half-range eigenfunction expansions still require evaluation of the expansion coefficients. However, the emergent distributions are readily evaluated from the present S-function formulation without recourse to complicated numerical techniques.

When the solution is needed inside the half-space, the present method is still thought to be superior to half-range methods even though expansion coefficients must also be evaluated. The difference is that the S-function method uses a full-range expansion, which allows the use of the known full-range orthogonality relations, while half-space methods must use the more complicated half-range orthogonality properties or constructive half-range completeness theorems to obtain the coefficients.

To demonstrate the feasibility of solving multigroup problems in terms of the generalized S-function, five different examples are considered. The first four are for light-water thermal systems. The first case, Case I, is for ordinary water, and Cases II, III and IV are for borated water with

76

77

concentrations of 1.025, 2.99 and 6.35 barns per hydrogen atom, respectively. Three broad energy groups were chosen and defined as

Group 1: 0 < E₁ < .0253 eV
Group 2: .0253 < E₂ < .5520 eV (6.1)
Group 3: .5520 < E₃ < 2.58 eV

The thermal spectrum evaluation and cross-section averaging to determine the group constants were performed by the INCITE code, using the McMurry-Russell H₂O kernel at room temperature (295°K). The group constants for the four cases are listed in Table I.

As a final example, Case V, a half-space of uranium-238 enriched with 2% uranium-235, is considered. This case, which includes fission in the transfer matrix, was broken into six energy groups. The group constants for this 2% enriched case were calculated from the 6-group constant tables originally developed for highly enriched pure uranium systems such as Godiva and Topsy.⁷ The energy groups and the group constants for Case V are listed in Table II.

6.1 ITERATIVE SOLUTION FOR THE U AND V-FUNCTIONS

In Chapter III two sets of nonlinear integral equations for the U(μ) and V(μ) matrices were derived. Both of these sets, Eqs. (3.26)-(3.27) and Eqs. (3.61)-(3.62), can be solved readily by numerical methods. In this section the numerical procedures and results of the computational schemes for both sets are discussed briefly.

78

TABLE I

THREE-GROUP MACROSCOPIC CROSS SECTIONS FOR WATER (cm⁻¹)

CASE I: Pure Water
σ₁ = 4.8824, σ₂ = 3.2345, σ₃ = 1.7467

     ( 3.8180       .35242     .012285 )
2C = ( 1.0326      2.8669      .65299  )
     ( 1.145×10⁻⁹   .0002069   1.07789 )

CASE II: Borated Water, 1.025 barns/hydrogen atom
σ₁ = 4.9270, σ₂ = 3.1692, σ₃ = 1.7493

     ( 3.7953       .32387     .012239 )
2C = ( 1.045       2.8005      .65012  )
     ( 1.143×10⁻⁹   .0005813   1.0763  )

CASE III: Borated Water, 2.99 barns/hydrogen atom
σ₁ = 5.0914, σ₂ = 3.0720, σ₃ = 1.7696

     ( 3.7659       .27047     .012204 )
2C = ( 1.0454      2.6828      .64697  )
     ( 1.140×10⁻⁹   .001379    1.0796  )

CASE IV: Borated Water, 6.35 barns/hydrogen atom
σ₁ = 5.3220, σ₂ = 2.9760, σ₃ = 1.7853

     ( 3.7901       .21636     .012019 )
2C = ( 1.0481      2.5341      .63509  )
     ( 1.132×10⁻⁹   .002291    1.0737  )

79

TABLE II

SIX-GROUP URANIUM CROSS SECTIONS (98% U-238, 2% U-235)

Group   Energy    σᵢ        Transfer Cross Sections (cm⁻¹)
  i     (MeV)    (cm⁻¹)    σᵢ₁      σᵢ₂      σᵢ₃      σᵢ₄      σᵢ₅      σᵢ₆
  1     5.0–∞    .1917    .0761    .0125    .00165   .000584  .000686  .00112
  2     1.4–5.0  .2105    .0448    .1079    .00276   .000982  .00115   .00188
  3     0.9–1.4  .2164    .0356    .0267    .1399    .000480  .000564  .000916
  4     0.4–0.9  .2512    .0528    .0565    .0395    .2165    .000605  .000985
  5     0.1–0.4  .5919    .0528    .0558    .0266    .0240    .5797    .000492
  6     0–0.1    .5655    .00452   .00512   .00487   .00587   .00587   .5452

N(U-235) = 9.56 × 10²⁰ atoms/cm³.
N(U-238) = 4.69 × 10²² atoms/cm³.
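The angular integrations described in the following section all use a 16-point Gauss quadrature. As a minimal modern sketch of such a rule (in Python rather than the thesis's FORTRAN IV; the test integrand is arbitrary, not a transport quantity):

```python
import numpy as np

def gauss_rule(a, b, n=16):
    """n-point Gauss-Legendre ordinates and Christoffel numbers
    (weights) mapped from (-1, 1) onto the interval (a, b)."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

# Apply the quadrature sum to a smooth test integrand on (0, 1)
mu, w = gauss_rule(0.0, 1.0)
approx = np.sum(w * mu**3 * np.exp(-mu))
exact = 6.0 - 16.0 / np.e        # int_0^1 x^3 e^{-x} dx, in closed form
```

For smooth integrands like this one, the 16-point rule is accurate to machine precision, which is why a fixed small node count suffices for the kernels of the U and V equations.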

80

In any numerical scheme, integration of some function f(μ) over μ must be approximated by a summation procedure. In all of the present work a 16-point Gauss quadrature technique was employed, i.e.,

∫ₐᵇ dμ f(μ) ≈ Σₖ₌₁¹⁶ w(μₖ)f(μₖ), (6.2)

where the μₖ are the quadrature ordinates for the range (a,b), and the w(μₖ) are the corresponding Christoffel numbers. With this integration approximation, the iterative scheme for the first set of nonlinear integral equations, Eqs. (3.26)-(3.27), can be written as

V⁽ⁿ⁺¹⁾(μₖ) = E + μₖ Σₗ₌₁¹⁶ w(μₗ) A(μₗ,μₖ) * [U⁽ⁿ⁾(μₗ)C V⁽ⁿ⁾(μₖ)], k = 1 ~ 16, (6.3)

and

U⁽ⁿ⁺¹⁾(μₖ) = E + μₖ Σₗ₌₁¹⁶ w(μₗ) A(μₖ,μₗ) * [U⁽ⁿ⁾(μₖ)C V⁽ⁿ⁺¹⁾(μₗ)], k = 1 ~ 16, (6.4)

where the superscripts on U(μ) and V(μ) are the iteration index. Similarly, the iteration scheme for the equations for U(μ) and V(μ) without direct products, Eqs. (3.61)-(3.64), can be represented as

V⁽ⁿ⁺¹⁾(μₖ) = [E − μₖ Σₗ₌₁¹⁶ w(μₗ)Dᵢ(μₗ,μₖ)U⁽ⁿ⁾(μₗ)C]⁻¹, k = 1 ~ 16, (6.5)

and

U⁽ⁿ⁺¹⁾(μₖ) = [E − μₖ Σₗ₌₁¹⁶ w(μₗ)Dᵢ(μₖ,μₗ)V⁽ⁿ⁺¹⁾(μₗ)C]⁻¹, k = 1 ~ 16. (6.6)

For both of these schemes a good starting point is to take U⁽⁰⁾(μ) = V⁽⁰⁾(μ) = E.

For a large number of groups, Eqs. (6.5) and (6.6) as they stand require considerably more computation per iteration than Eqs. (6.3) and (6.4). This

81

follows from the fact that evaluating the matrix inverse in Eq. (6.5) or Eq. (6.6) involves finding N² cofactors of an N × N matrix. A more efficient, but equivalent, technique is to use Cramer's method. From Eqs. (3.54) and (3.60) one obtains

[V⁽ⁿ⁺¹⁾(μₖ)]ᵢⱼ = det|Fⱼ(μₖ)| / det|F(μₖ)|, (6.7)

and

[U⁽ⁿ⁺¹⁾(μₖ)]ᵢⱼ = det|Gⱼ(μₖ)| / det|G(μₖ)|, (6.8)

where the N × N matrices F and G are given by

F(μₖ) = E − μₖ Σₗ₌₁¹⁶ w(μₗ)Dᵢ(μₗ,μₖ)U⁽ⁿ⁾(μₗ)C, (6.9)

and

G(μₖ) = E − μₖ Σₗ₌₁¹⁶ w(μₗ)Dᵢ(μₖ,μₗ)V⁽ⁿ⁺¹⁾(μₗ)C. (6.10)

The matrices Fⱼ(μₖ) and Gⱼ(μₖ) are obtained by replacing the jth row by the vector eᵢ. Because eᵢ has a particularly simple form, evaluation of Eq. (6.7) or (6.8) involves calculating only N cofactors of an N × N matrix. For a large number of groups this savings can be appreciable.

Two computer programs in the FORTRAN IV language have been written to solve Eqs. (6.3)-(6.4) and Eqs. (6.7)-(6.10) by iteration for the U(μ) and V(μ) matrices.⁴⁷ These programs, called GENUV and MILNE respectively, are listed in Appendix B. To investigate the properties of these two computational schemes,

82

the four three-group cases of water (discussed at the beginning of this chapter) were solved for U(μ) and V(μ) on The University of Michigan's IBM-360 system.

Instead of storing successive iterations of U(μ) and V(μ) to measure convergence, identity (3.31) is used to provide a measure of the error in U(μ) and V(μ). Every five iterations the matrix

(1/2) Σₗ₌₁¹⁶ w(μₗ){ U⁽ⁿ⁾(μₗ) + V⁽ⁿ⁾(μₗ) − U⁽ⁿ⁾(μₗ)C Σₘ₌₁¹⁶ w(μₘ)V⁽ⁿ⁾(μₘ) } (6.11)

is evaluated. The maximum deviation of the elements of (6.11) from the elements of the known Σ matrix is then used as the measure of the error for that iteration. With this definition of the error in U⁽ⁿ⁾(μ) and V⁽ⁿ⁾(μ) at the nth iteration, a plot of error versus iteration index can be made.

In Fig. 6.1 the iteration error versus the iteration index is given for both iteration schemes for the four cases of water. It is quickly seen that as the absorption decreases, the convergence rate correspondingly decreases. This result is not too surprising, for in the one-speed case it is well known that iteration of the nonlinear integral equation (3.28) for H(μ), which is a specialization of the U(μ) and V(μ) equations, converges extremely slowly as c → 1/2 (the angular averaging factor of 2 has been absorbed into the present definition of c).²⁷

However, as Fig. 6.1 shows, the convergence of the "Cramer's" scheme (Eqs. (6.7)-(6.10)) is far superior to that of the direct scheme of Eqs. (6.3) and (6.4). In fact, the convergence rate of the Cramer's scheme is a factor of 3 to 3½ times greater.
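The one-speed behavior cited above is easy to reproduce. The following sketch (Python) applies direct iteration to the one-speed H-equation, H(μ) = 1 + cμH(μ)∫₀¹ H(μ')dμ'/(μ+μ'), written with the factor of 2 absorbed into c so that c = 1/2 is critical, as in the text. The two c values are illustrative only, not the thesis cases:

```python
import numpy as np

def solve_H(c, n=16, tol=1e-10, max_iter=200000):
    """Direct (Picard) iteration of the one-speed H-equation.

    Starting from H = 1 the iterates increase monotonically toward the
    physical solution for subcritical c < 1/2; the iteration count
    grows sharply as the critical value c = 1/2 is approached."""
    x, wt = np.polynomial.legendre.leggauss(n)
    mu = 0.5 * (x + 1.0)              # Gauss nodes mapped onto (0, 1)
    wt = 0.5 * wt
    H = np.ones(n)
    for it in range(1, max_iter + 1):
        # integ[i] = sum_j wt_j H_j / (mu_i + mu_j)
        integ = ((wt * H) / (mu[:, None] + mu[None, :])).sum(axis=1)
        H_new = 1.0 + c * mu * H * integ
        if np.abs(H_new - H).max() < tol:
            return H_new, it
        H = H_new
    return H, max_iter

H1, it1 = solve_H(0.25)   # strong absorption: converges quickly
H2, it2 = solve_H(0.45)   # weak absorption: many more iterations
```

Comparing the two iteration counts reproduces the trend seen in Fig. 6.1: the closer c is to its critical value, the slower the direct iteration converges.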

83

[Figure: iteration error versus number of iterations (logarithmic scale, 10⁻² down to 10⁻¹⁸, 0 to 120 iterations), comparing the general direct iteration (Eqs. (6.3)-(6.4)) with Cramer's iteration (Eqs. (6.7)-(6.10)) for Case I (pure water) and Cases II-IV (borated water, 1.025, 2.99, and 6.35 b/H-atom).]

Fig. 6.1. Convergence of U-V iteration schemes.
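The determinant-ratio evaluation at the heart of the Cramer's scheme can be checked against a direct linear solve. A small sketch (Python; the matrix and right-hand side are random, hypothetical data; the column-replacement form of Cramer's rule is shown, whereas the thesis replaces rows, which is the same device applied to the transposed system):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
F = np.eye(N) + 0.1 * rng.random((N, N))   # well-conditioned test matrix
b = rng.random(N)

# Cramer's rule: x_j = det(F with column j replaced by b) / det(F)
detF = np.linalg.det(F)
x_cramer = np.empty(N)
for j in range(N):
    Fj = F.copy()
    Fj[:, j] = b
    x_cramer[j] = np.linalg.det(Fj) / detF

x_solve = np.linalg.solve(F, b)            # standard route, same answer
```

Because the replacement vector here is a unit vector eᵢ in the thesis's scheme, only N cofactors are actually needed per column, which is the source of the savings described above.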

84

From Fig. 6.1 it may appear that even this Cramer's method is slow to converge. But it should be noted that the four cases presented as examples were all relatively weakly absorbing situations. The INCITE code, which generated the group parameters for these cases, also gave effective one-group constants. The effective one-speed multiplication factors, 2c, for Cases I through IV are .995, .984, .958, and .920, respectively. For other cases which have more absorption, or are farther removed from criticality, the Cramer's iteration scheme converges quite rapidly. In those situations which are close to criticality, the approximations to the Fredholm equations for U(μ) and V(μ), given by Eqs. (3.42) and (3.95), can be used to provide a good initial starting point for the iteration scheme. A program to calculate these Fredholm approximations, called FRED, is listed in Appendix B.

Although the Cramer's scheme does converge relatively quickly, the computational time required for a large number of groups is appreciable. For example, the six-group calculation required approximately ten minutes on the IBM-360 Model 70 computer. To improve the convergence rate when a large number of groups is involved, various well-known schemes for accelerating the convergence could be used (e.g., successive overrelaxation, residual polynomials, etc.).⁶,³⁰

6.2 MILNE PROBLEM

To demonstrate the ease with which emergent distributions for half-space problems can be evaluated numerically from the U(μ) and V(μ) functions, the classical Milne problem (i.e., ν = ν°, the largest of the ν₀s, s = 1 ~ M) is considered for the five multigroup cases previously mentioned. This particular problem

85

first requires knowledge of the discrete eigenvalue spectrum, and a method by which the discrete eigenvector, ψ(ν°,μ), can be evaluated. Once these quantities have been found, straightforward evaluation of Eqs. (5.32) and (5.39) gives the emergent flux and extrapolated endpoint. Unlike the computations done by others for this problem, no difficult mathematical techniques such as principal value integrals are encountered in the present technique. In this section the computer programs used to evaluate this problem are briefly described, and the results presented. These programs, written in FORTRAN IV for the IBM 360, are all listed in Appendix B.

To find the discrete eigenvalues of the dispersion relation, Eq. (2.15), a series of three subprograms were written: DISP, MINV, and ROOT. The first evaluates the matrix Λ(1/γ) for any γ ∈ (0,1), and MINV then evaluates the determinant. The program ROOT then searches for the values γ₀ for which det|Λ(1/γ₀)| = 0. This search is accomplished by calculating the quantity det|Λ(1/γ)| for successively decreasing values of γ and watching for a change in sign of det|Λ(1/γ)|. The use of increasingly finer grids whenever a change of sign is encountered allows the zero to be evaluated as accurately as desired. In the present case all zeros were calculated to within an accuracy of one part in 10⁷, although ROOT allows accuracy to within one part in 10¹⁶. The only major limitations of the above procedure are that it will not always successfully find multiple zeros, and that, if the initial search grid is too coarse, it may pass over zeros which are very closely spaced. This latter restriction is easily circumvented for a difficult situation by making the search grid finer.
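The search strategy described for ROOT (scan for a sign change, then refine the grid) is sketched below in Python. The function g is a hypothetical stand-in for det|Λ(1/γ)| with known zeros; it is not the thesis dispersion function:

```python
import numpy as np

def root_scan(f, grid, tol=1e-7):
    """Locate zeros of a scalar function by scanning a grid for sign
    changes and then bisecting each bracketing interval.  Like ROOT,
    this can miss multiple zeros and zeros spaced more closely than
    the initial grid."""
    roots = []
    for a, b in zip(grid[:-1], grid[1:]):
        fa, fb = f(a), f(b)
        if fa == 0.0:
            roots.append(a)
            continue
        if fa * fb > 0.0:
            continue                      # no sign change in (a, b)
        while b - a > tol:                # successive grid refinement
            m = 0.5 * (a + b)
            if fa * f(m) <= 0.0:
                b = m
            else:
                a, fa = m, f(m)
        roots.append(0.5 * (a + b))
    return roots

# Hypothetical stand-in for det|Lambda(1/gamma)| with zeros at 0.2, 0.7
g = lambda y: (y - 0.2) * (y - 0.7)
zeros = root_scan(g, np.linspace(0.01, 0.99, 50))
```

Each bisection step preserves the sign-change bracket, so every reported root is located to within the requested tolerance.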

For all five cases, two discrete roots were found. The first four cases for water were also investigated recently by Metcalf,19 who used two-group constants for the same four cases. The results for all five cases are tabulated below along with Metcalf's two-group results. For the classical Milne problem ν = ν₀₁.

TABLE III

DISCRETE EIGENVALUES

Case                        ν₀₁ (cm)    ν₀₂ (cm)    Two-Group Result,19 ν₀₁ (cm)
I    Pure water             2.22151     .638781     2.2221
II   Borated water          1.31951     .636656     1.3190
III  Borated water          .847391     .625854     .84534
IV   Borated water          .659514     .610169     .65105
V    2% enriched uranium    12.3793     5.21725

The evaluation of the discrete eigenvector φ(ν₀,μ) and its normalization a(ν₀) is performed by the subprogram AVEC. From Eqs. (2.9), (2.11), and (2.12) the normalization vector is given by

[E - T(ν₀)C]a(ν₀) = 0,                                    (6.12)

where T(ν₀) is the diagonal matrix

[T(ν₀)]_ij = δ_ij 2ν₀ tanh^{-1}(1/σ_iν₀).                 (6.13)

Equation (6.12) is readily solved by matrix inversion (MINV) by using only the first N-1 equations in (6.12) and setting a_N(ν₀) = 1. The subroutine entry

EIGEN of this subroutine allows calculation of φ(ν₀,μₖ) or φ(ν₀,-μₖ) for the positive Gaussian integration ordinates μₖ, k = 1-16, for an input parameter of ν₀ or -ν₀, respectively. From these evaluations of the discrete eigenvectors, the full-range orthogonality normalization, N₊, is readily calculated by

N₊ = ∫_{-1}^{1} dμ μ φ̃(ν₀,μ)·φ(ν₀,μ) ≈ Σ_{k=1}^{16} μₖwₖ[φ̃(ν₀,μₖ)·φ(ν₀,μₖ) - φ̃(ν₀,-μₖ)·φ(ν₀,-μₖ)].   (6.14)

Once the quantities ν₀ and a(ν₀), together with the U(μ) and V(μ) matrices, have been calculated, the emergent Milne problem distribution ψν₀(0,-μ) is readily evaluated from Eq. (5.32) for ν = ν₀, i.e.,

ψν₀(0,-μ) = F(ν₀,-μ)U(μ)C[E - ∫_0^1 dμ′ V(μ′)F(ν₀,-μ′)C]a(ν₀).   (6.15)

The above calculation is performed by the main program MILNE, which uses Cramer's scheme for finding U(μ) and V(μ). This program is listed in Appendix B. Since the vector a(ν₀) was arbitrarily normalized in the subroutine AVEC such that a_N(ν₀) = 1, it is not possible to compare directly results for different C's and Σ's. To aid comparison the program MILNE also gives the emergent distribution normalized to unit density, namely,

Σ_{i=1}^{N} ∫_0^1 dμ [ψν₀(0,-μ)]_i = 1.   (6.16)

In Figs. 6.2 to 6.5 the emergent distributions for the Milne problem corresponding to the largest discrete eigenvalue are shown for the four three-group water cases. Fig. 6.6 is a polar plot of the normalized emergent Milne distribution

[Figure omitted: relative emergent density vs. -μ, pure water.]

Fig. 6.2. Emergent Milne distribution (normalized to unit density) for Case I.

[Figure omitted: relative emergent density vs. -μ, borated water, 1.202 barns/hydrogen atom.]

Fig. 6.3. Emergent Milne distribution (normalized to unit density) for Case II.

[Figure omitted: relative emergent density vs. -μ, borated water, 2.99 barns/hydrogen atom.]

Fig. 6.4. Emergent Milne distribution (normalized to unit density) for Case III.

[Figure omitted: relative emergent density vs. -μ, borated water, 6.35 barns/hydrogen atom.]

Fig. 6.5. Emergent Milne distribution (normalized to unit density) for Case IV.

[Figure omitted: polar plot of the emergent distribution, Groups I-VI, at the vacuum-medium interface for 2% enriched uranium, six groups.]

Fig. 6.6. Normalized emergent Milne distribution for Case V.

for the six-group uranium case, Case V. In this latter figure only half the emergent distribution is plotted since the distribution is symmetric about θ = 0.

In the four cases for pure and borated water, two interesting results are noted. For pure water (a system close to criticality) the emergent distribution is almost linear with the cosine of the emergent angle, i.e., -μ. Then as absorption is increased the emergent flux becomes increasingly more anisotropic. This result was also noted by Metcalf.19

As a final numerical example for these five cases, the extrapolation length was calculated (performed by the MILNE program). This quantity is calculated by straightforward evaluation of Eq. (5.39). The results for the five cases are shown in Table IV, and as before the four water cases are compared with the results Metcalf obtained using a two-group analysis.

TABLE IV

EXTRAPOLATION LENGTHS

Case                        Extrapolation Length (cm)    Extrapolation Length (cm)
                            Calculated by MILNE          Calculated by Metcalf
I    Pure water             -.205947                     -.2058
II   Borated water          -.214407                     -.2140
III  Borated water          -.233944                     -.2319
IV   Borated water          -.294657                     -.2560
V    2% enriched uranium    -2.21798

These three-group calculations agree quite well with the two-group results in the first three cases. However, there is an appreciable difference for the most heavily borated water case. This is at first surprising since the discrete eigenvalues for this case agreed reasonably well. To investigate this discrepancy all the two-group calculations were performed by the MILNE

program, and Metcalf's results were confirmed. Hence one is led to believe that the introduction of the third, high-energy group has become quite significant. In fact, from Figs. 6.2 and 6.5 it is seen that the flux at the interface for the third group is one hundred times larger for Case IV than for the pure water situation.

As a final check on the accuracy of the emergent distribution, MILNE calculates the coefficient of the asymptotically increasing mode. This quantity should be unity. From Eq. (5.34) this coefficient is

(1/N₊) ∫_0^1 dμ μ φ̃(ν₀,μ)·ψν₀(0,-μ) ≈ (1/N₊) Σ_{k=1}^{16} μₖwₖ φ̃(ν₀,μₖ)·ψν₀(0,-μₖ).   (6.17)

In all cases the difference from unity was less than 10⁻⁵.
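Half-range integrals such as the one in this unity check are evaluated throughout the codes by a fixed Gaussian quadrature on (0,1). A minimal sketch of the idea (Python, using a 4-point Gauss-Legendre rule in place of the 16-point rule of the thesis codes; the integrand is illustrative):

```python
# 4-point Gauss-Legendre nodes and weights on (-1, 1).
NODES = [-0.8611363115940526, -0.3399810435848563,
          0.3399810435848563,  0.8611363115940526]
WEIGHTS = [0.3478548451374538, 0.6521451548625461,
           0.6521451548625461, 0.3478548451374538]

def gauss01(f):
    """Approximate the integral of f over (0, 1) by mapping the
    Gauss-Legendre rule from (-1, 1); the 4-point rule is exact
    for polynomials of degree <= 7."""
    return sum(0.5 * w * f(0.5 * (x + 1.0))
               for x, w in zip(NODES, WEIGHTS))

# Example: the integral of mu^3 over (0, 1) is exactly 1/4.
val = gauss01(lambda mu: mu ** 3)
```

In the thesis codes the analogous ordinates and weights are supplied once by GAUX and reused for every μ-integration.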

CHAPTER VII

CONCLUSIONS

In this work the solutions of various half-space N-group transport problems in plane geometry were found. These solutions were obtained by combining the techniques of Chandrasekhar's principle of invariance and Case's singular eigenfunction expansions, and extending them to the multigroup case. In particular, it was shown that the expansion coefficients of a multigroup eigenfunction expansion for any given problem could be expressed directly in terms of a "generalized S-function."

To obtain equations for this generalized S-function, which is closely related to the emergent distribution of the half-space albedo problem, two different approaches were used. The first obtained, from Chandrasekhar's principle of invariance, a nonsingular, nonlinear integral equation for this function. Furthermore, this method demonstrated that the S-function, which has two angular arguments, could be decomposed into two auxiliary functions, U and V, each of which depends on only a single variable. Although it was not possible to solve analytically the nonlinear integral equation for the S-function (or the corresponding nonlinear integral equations for the U and V functions), numerical solution by iteration was found to be quite straightforward. However, this approach does appear to have one deficiency. By analogy to the one-speed equations, it is suspected that the nonlinear integral equations may not have a unique solution. As a result, a set of necessary conditions for

S (and for U and V) were derived; and, again from the one-speed analogy, it is expected that these necessary conditions may indeed be sufficient. However, a proof of sufficiency is still lacking.

The second approach obtained, from an eigenfunction expansion of the albedo and adjoint albedo problems, Fredholm equations for the functions S, U, and V. These equations also could not be solved analytically; and although they generally do not have the shortcomings of the nonlinear integral equations (i.e., possible nonuniqueness), their solution by numerical techniques, while also straightforward, is more difficult since all the infinite medium eigenfunctions must first be evaluated to calculate the Fredholm kernel. In the specific situation of a near-critical system, the Fredholm equations gave a good approximate analytic expression for the U and V functions by neglecting all but the dominant mode.

Throughout this research great emphasis was placed on techniques amenable to numerical evaluation. This reduction of analytic results to numerics has, until quite recently, been decidedly neglected for energy-dependent problems. This work demonstrated how emergent distributions for any half-space problem are readily calculated. Although evaluation of the eigenfunction expansion coefficients (from which the interior distribution may be calculated) is still a difficult problem, because the explicit forms of the eigenfunctions must first be calculated, the method described here is still felt to be superior to other half-space methods. The simplification arises from the fact that full-range expansions can be employed rather than the more difficult half-range techniques.

As an example of the ease with which the nonlinear integral equations for the U and V functions can be computed, a series of computer codes were written in the widely used FORTRAN IV language to solve for the emergent distribution of the general N-group Milne problem. These programs are designed for any number of groups and arbitrary cross section data (with the proviso that the data represent a subcritical system).

Finally, for the special case of symmetric transfer (as is found in thermal neutron transport problems) it was shown how the results for a general transfer matrix are greatly simplified. For this case many interesting results can be demonstrated: uniqueness of solution of the Fredholm equations, reality of the eigenvalues, and half-range completeness of the infinite medium eigenfunctions.

As in any research of this nature, many related extensions and areas of further investigation suggest themselves. Among those areas which deserve further attention are the nonlinear integral equations for the U and V functions. Although these equations are readily evaluated numerically for the U and V functions, the determination of their uniqueness (or of sufficient conditions for uniqueness) would put the present theory on a much firmer mathematical basis. For the symmetric transfer case it was shown that subcritical systems must have real eigenvalues. In the general case, however, this was not possible, and an investigation of the relation between the group parameters, subcriticality, and the eigenvalues would be quite interesting.

It is felt that the present method for solving multigroup half-space problems with isotropic scattering can be readily extended to other transport problems. In particular, the anisotropic scattering situation seems quite amenable to the generalized S-function approach. More useful, perhaps, would be an extension to slab problems. Again using the analogy of the one-speed case, it should be possible to derive nonlinear matrix integral equations for the reflected and transmitted flux. Thus, while the main effort of this work has been devoted to multigroup, isotropic, half-space transport problems, it is expected that the techniques explored and developed here will be found useful in both the numerical and analytic treatments of other energy-dependent transport problems.

APPENDIX A

DISCRETE EIGENVALUES FOR SYMMETRIC TRANSFER

In Chapter IV, the eigenvalues ±ν₀ₛ, s = 1-M, for symmetric transfer were shown to be either purely real or purely imaginary. From physical arguments, it was then demonstrated that for subcritical systems ν₀ₛ could not be imaginary. This appendix will present a more rigorous derivation of this fact.

Consider a half-space which at time t = 0 has some arbitrary angular flux distribution, ψ(x,μ,0). The resultant flux for t > 0, ψ(x,μ,t), is given by the time-dependent multigroup equation

(∂/∂t + μ ∂/∂x + Σ) ψ(x,μ,t) = C ∫_{-1}^{1} dμ′ ψ(x,μ′,t).   (A.1)

If the system is "subcritical," this flux must tend to zero after a long time, i.e., lim(t→∞) ψ(x,μ,t) = 0. This requirement will be shown to imply that the discrete eigenvalues ν₀ₛ cannot be imaginary.

Taking the Laplace transform of Eq. (A.1) one obtains

(s + μ ∂/∂x + Σ) ψ̄(x,μ,s) = C ∫_{-1}^{1} dμ′ ψ̄(x,μ′,s) + ψ(x,μ,0),   (A.2)

where

ψ̄(x,μ,s) = ∫_0^∞ dt e^{-st} ψ(x,μ,t).   (A.3)

Then taking the Fourier transform of (A.2) one has

(s + ikμ + Σ) ψ̂(k,μ,s) = C h(k,s) + χ(k,μ,0),   (A.4)

where

ψ̂(k,μ,s) = ∫_{-∞}^{∞} dx e^{-ikx} ψ̄(x,μ,s),   (A.5)

χ(k,μ,0) = ∫_{-∞}^{∞} dx e^{-ikx} ψ(x,μ,0),   (A.6)

and

h(k,s) = ∫_{-1}^{1} dμ′ ψ̂(k,μ′,s).   (A.7)

Integration of Eq. (A.4) with respect to μ yields for h(k,s)

h(k,s) = [E - A(k,s)]^{-1} C^{-1} χ(k,μ,0).   (A.8)

The matrix A(k,s) is defined by

A(k,s) = ∫_{-1}^{1} dμ′ (s + ikμ′ + Σ)^{-1} C.   (A.9)

From Eqs. (A.4) and (A.8) the Fourier-transformed flux, ψ̂(k,μ,s), is

ψ̂(k,μ,s) = (s + ikμ + Σ)^{-1} C {[E - A(k,s)]^{-1} + E} C^{-1} χ(k,μ,0).   (A.10)

This equation may be simplified by noting

{[E - A(k,s)]^{-1} + E} = [E - A(k,s)]^{-1}[2E - A(k,s)] = M′(k,s)[2E - A(k,s)] / det|E - A(k,s)|,   (A.11)

where M′(k,s) is the matrix formed by the cofactors of the matrix [E - A(k,s)]. If another matrix M(k,s) is defined as

M(k,s) = M′(k,s)[2E - A(k,s)],   (A.12)

then Eq. (A.10) becomes

ψ̂(k,μ,s) = [det|E - A(k,s)|]^{-1} (s + ikμ + Σ)^{-1} C M(k,s) C^{-1} χ(k,μ,0).   (A.13)

To obtain the Laplace transform of the flux, ψ̄(x,μ,s), the inverse Fourier transform of Eq. (A.13) is taken,

ψ̄(x,μ,s) = (1/2π) ∫_{-∞}^{∞} dk e^{ikx} ψ̂(k,μ,s)
          = (1/2π) ∫_{-∞}^{∞} dk e^{ikx} [det|E - A(k,s)|]^{-1} (s + ikμ + Σ)^{-1} C M(k,s) C^{-1} χ(k,μ,0).   (A.14)

From this quantity the asymptotic time behaviour of the angular flux may be determined by application of the "final value theorem" of Laplace transforms. This theorem states

lim(t→∞) ψ(x,μ,t) = lim(s→0) s ψ̄(x,μ,s).   (A.15)

Hence from Eq. (A.14) the asymptotic flux is

lim(t→∞) ψ(x,μ,t) = lim(s→0) (1/2π) ∫_{-∞}^{∞} dk e^{ikx} (ikμ + Σ)^{-1} C M(k,0) C^{-1} χ(k,μ,0) [s / det|E - A(k,s)|].   (A.16)

For k belonging to the real axis, the matrix M(k,0) never vanishes; and hence the subcriticality requirement that the asymptotic flux be zero depends on whether the following quantity vanishes:

lim(s→0) s / det|E - A(k,s)|.

To find the behaviour of this quantity for small s, consider the time-independent transport equation, Eq. (2.1), and look for solutions of the form e^{ikx} φ(k,μ). These solutions are identical to the eigenvectors of Chapter II, φ(ν,μ), with ν replaced by 1/ik. The dispersion relation for the discrete eigenvalues ν₀ₛ, Eq. (2.13), in terms of the k parameter (denoted by k₀), becomes in view of Eq. (A.9)

det|E - A(k₀,0)| = 0,   k₀ ∉ (-i∞,-i] ∪ [i,i∞).   (A.17)

Further, from Chapter IV, these discrete eigenvalues k₀ are either real or imaginary. Since the root ν₀ₛ (and hence k₀) was assumed nonmultiple, a Taylor series expansion of det|E - A(k₀,s)| for small s gives

det|E - A(k₀,s)| = a₁s + a₂s² + ...,   (A.18)

where a₁, a₂, ... are constants independent of s. Thus if k₀ₛ is real (ν₀ₛ imaginary), the quantity (A.16) does not vanish as s → 0, and the asymptotic flux does not decay to zero. Therefore for imaginary eigenvalues ν₀ₛ the system cannot be subcritical. If, on the other hand, the k₀'s are confined to the imaginary axis (ν₀ₛ on the real axis), Eq. (A.16) goes to zero as required for subcriticality.
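The final value theorem invoked above can be checked numerically with a small sketch (Python; the truncation point T, the step count, and the two sample "fluxes" are illustrative choices, not from the thesis): for a decaying f(t) the product sF(s) is small and tends to the zero long-time limit as s → 0, while for a non-decaying f(t) it tends to the nonzero limit.

```python
import math

def laplace(f, s, T=200.0, n=100000):
    """Crude truncated Laplace transform F(s) of f over [0, T] by the
    trapezoid rule; adequate for this illustration when s*T >> 1."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 0.05
sF_decay = s * laplace(lambda t: math.exp(-0.5 * t), s)  # limit of e^{-t/2} is 0
sF_const = s * laplace(lambda t: 1.0, s)                 # limit of 1 is 1
```

Here sF_const comes out near unity, mirroring s·(1/s) = 1, while sF_decay ≈ s/(s + 1/2), which goes to zero as s is pushed toward zero.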

APPENDIX B

COMPUTER CODE LISTINGS

In this appendix several programs which were used to calculate the U and V functions and the emergent Milne problem distribution are listed. All these codes were written in the FORTRAN IV language for the IBM-360 computer. In listing these codes, attempts have been made to organize the programs and to include pertinent "comment" cards so that the program flow is readily apparent. It is not expected that anyone familiar with FORTRAN will have difficulty in understanding these codes. Throughout these codes, "double precision" length variables have been used (which allow accuracies of up to sixteen significant figures). Finally, each listing is prefaced by a brief description of the program's purpose and restrictions and by a list of the principal symbols.

B.1 PROGRAM MILNE

Purpose

This main program first calculates the U and V matrices from the Cramer's scheme for given C and Σ matrices. Then, after all the discrete roots ν₀ₛ, s = 1-M, are found, the emergent distribution for the Milne problem corresponding to the largest discrete eigenvalue is evaluated. Finally, the extrapolation length is calculated, and the emergent distribution normalized to unit density.

Restrictions

(i) This program calls the subroutines GAUX, ROOT, DIS, MINV, AVEC, and PRT, and these must be supplied.

(ii) The dimension statements 3-7 must be changed for problems with a different number of groups.

List of Principal Symbols

Program Symbol      Definition
IG                  N (number of groups)
IPR                 Printing parameter for output every 5 iterations
EE(IG,IG)           E (unit matrix)
X(16)               μₖ, k = 1-16 (Gaussian ordinates)
SIG(I)              σᵢ
C(I,J)              cᵢⱼ
RT(IG)              Holds discrete eigenvalues ν₀ₛ in descending size
IMU, IMUP           Counters used to specify integration ordinates
U(16,IG,IG)         U(μₖ)
V(16,IG,IG)         V(μₖ)
ITER                Iteration counter
IEQ                 Denotes subscript of uᵢ(μ) and vᵢ(μ)
SMAX                Maximum deviation of Eq. (3.31) from the Σ-matrix
NUO                 ν₀₁ (largest discrete eigenvalue)
EIG(K,I)            φᵢ(ν₀,μₖ)
EIGN(K,I)           φᵢ(ν₀,-μₖ)
ADEIG(K,I)          φ̃ᵢ(ν₀,μₖ)
ADEIGN(K,I)         φ̃ᵢ(ν₀,-μₖ)
NPLUS               N₊
EXTRAP              Extrapolation length

105 MILNE C THIS PROGRAM SOLVES THE U AND V EQUATIONS BY CRAMMER'S METHOD AND THEN C SOLVES FOR THE EMERGENT MILNE DISTRIBUTION USING SIMPLE APPROACH C NO DISCRETE CONDITONS ARE USED — SOLVES ONLY THE GENERAL EQUATIONS n'." O 1~. IMPLICIT REAL*8(A-HM-Z).n_ 2 _ __ _ COMMON/GAU/X(16) C THE FOLLOWING FIVE (5) CARDS ARE TO BE CHANGED WHEN THE NUMBER OF C GROUPS ARE CHANGED'00n3-'REAL* 8 AA(16,3,3),8(3,3),SIG(3),C(3,3),U(163,3),V(16,3,3)'~0:4 REAL*8 EE(3,3)/9*O.ODO/,ASCR(3),BSCR(3),S(16,16,3,3),VET(16,3) 0% *5 REAL*8 D(3,3),DSCR(3,3),RT(3) ~0-:J 6 ___ _ REAL*8 EIG(16,3),EIGN(16,3)tADEIG( 16,3),ADEIGN( 163),VEC( 16)!0''-7 - DIMENSION L L (3), LLLLL 3) sn -~r08 EXTERNAL DIS C CALCULATION OF THE MU ORDINATES FOR INTEGRATION FROM 0 TO 1 ^00n9 SUM=GAUX (0. OD, 1.D0 ) 2r!n0 5 FORMAT(ICTHF ORDINATES FOR THE MU INTEGRATION ARE'/('',8G15.7)) ~"-11 _ ___ WRITE(6,5) (X(I),I=1,16) C READ IN THE INPUT DATA 01 2 1 FORMAT(I2)?'13 ~ 35 READ(5,1) IG c:n14 IF (IG.EQ;.) RETURN -'"l. 5 9 8 J=1,IG nl 6 8 EE(J,J)=1.D0 n\'~ 7 IPR=1 2 FORMAT(8G10.7) -01. 9 ~ READ(5,2) (SIG(J),J=1,IG) ~n2' READ(5,2) ((C(J,K),K =1IG),J=1,IG)'i:::?l tWRITE(6,311) n22? 311 FORMAT('INOW NORMALIZE THE SIGMA AND C MATRICES') C?23 9D 9 J=1,IG'".24 DO 7 K=1,IG "~25C ~ C( J,K)=C (J,K )/ (2. 
DO*SIG( G ) e 2 6 7 ) (J,K)=C(JK) n'27 9 SIG( J)=SIG(J)/SIG(IG) C PRINT THE INPUT DATA o'0 P3 FORMAT('OTHE SIGMA MATRIX IS'/('',8G15.7)) ^029 WRITE(6,3) (SIG(J),J=1,IG) ~^,^'~ "4 FORMAT('OTHE C MATRIX IS'/('',8G15.7))'" 031 3WFWRITE(6,4) ((C(J,K),K=l,IG),J=l IG) C CALCULATE THE DISCRETE ROOT 0n32 00 33 J=IIG -33 33 RT(J)=1.D0 34 DELX=..1D ~n ~035 DO 34 J=1,IG n036 IF (J-1) 39,39,30 03 7 30 JJ=J-1:" 38 XST=1.DO/RT(JJ) +.lD-7 "039 IF ( (XST+.1D-7).GTi.DOO) GO TO 38 0n04n DEL=1.DO-(1.DO/RT(JJ)) 0041 IF (OEL.LE..1DO) DELX=DEL/2.00 0042 GO TO 34.043 39 XST=.1D-5 ),4 _+ _34 RT(J)=1.DOn/ROOT(DIS,C,SIG, IGXST,1.DO,DELX,.1D-7,DSCR,BSCR) 0n45 38 WRITE(6,36) (RT(J),J=1tIG) n046 36 FORMAT{'OTHE DISCRETE ROOTS FOR THE ABOVE'C AND SIGMA MATRICES ARE

106 MILNE 1'/(6GI5.8) C START WITH APPROXIMATION ONLY FOR U NAMELY U=EE "147 DO 6 IMU=I,16 nr48 DO 6 J=1,IG',^49 DO 6 K=1,IG.'5..6 U( I MU J, K=EE( J,K) 551 WRITE(6,11) -052 I FORMAT(I1THE ZERO-TH APPROXIMATION FOR THE U AND V MATRICES IS THE 1 UNIT MATRIX') C ITERATE THE V EQUATION FIRST —-INVOLVES ONLY THE U MATRIX 0n53 ITER=1'n054 1 D00 17 IMU=1,16 r,', 5 5 AMU= X (IMU) C CALCULATE U.C=S(1,IMU,JK) rn56 D 12 IMUP=1,16 i"57 D0 12 J=1,IG 058 DO 12 K=1,IG 0 59 SUM=n. OD 6n D^00 13 L=1,IG -n06 I 13 SUM=SUM+U( IMUPJ,L)*C( LK)'n62 12 S(1,IMUP,J,K)=SUM C IF IEQ=I WE DO THE I-TH EQUATION if6-' l00 17 IEQ=1, IG C NOW EVALUATE O(MU,MUP)=F(IEQ).U.C==AA(IMUPJ,K)'n64 D0 14 IMUP=1,16 r655 AMUP=X(IMUP) t066h 0DO 14 J=1,IG:067 90 14 K=1,IG 0t0688 A(U=14 AA(IMUPJ,KIS(1,IMUP,JK)*AMU/(SIG(J)*AMU+SIG(IEQ)*AMUP) C DO INTEGRAL OF F(IEQ).U.C OVER MUP, AND CALCULATE (EE-INTEGRAL) " 69 SUM=GAUMAT(AA,DSCR,IG) n7 D00 15 J=1,IG 0071 D0 15 K=1,IG ^n72 15 DS JK)EE(JK SCR(JK)E -SCR(J,K) C NOW CALCULATE DETERMINANT OF DSCR, AND SOLVE THE V EQUATIONS C SOLVE THE V EQUATIONS 00n73 ( CALL MINV(OSCR,IG,OETD,LLLLLLLLIG) n074 IF (DETD) 19,16, 19'0.75 16 WRITE(6,18) ITER 076 GO TO 35 00-'7-7 7.18 FORMAT ('OA SINGULAR MATRIX HAS BEEN ENCOUNTERED AT ITERATION',15) n^078 19 DO 17 J=1,IG'"79 17 V(IMU,J,IEQ)=DSCR(JIEQ) C CTHE BEGINNING OF THE U EQUATION CALCULATION ^080 2? no 27 IMU=1,16 i 81 AMU=X(IMU) C FIRST CALCULATE S(2,IMUPJ,K)=(TRANS V(MUP)).(TRANS C) nr8? D0 22 IMUP=116 00830 DO 22 J=1,IG.084 00 22 K=1,IG "O85 SUM=O.ODO In086 DO 23 L=1 IG 0087 23 SUM=SUM+V(IMUP,LJ)*C(K,L)

107 MILNE'088 22 S(2,IMUP,J,K)-SUM C CIF IEQ=I, WE 00 THE I-TH U EQUATION 0Q89 DO 27 IEQ1,#IG C NOW CALCULATE D(MUMUP)=F(IEQ).(TRANS U).(TRANS C!,0090 DO 24 IMUP=1,16:091 AMUP=X(IMUP) "092 DO 24 J=1,IG 93 00DO 24 K=1,IG "094 24 AA(IMUP,J,K)=S(2,IMUPJ,K)*AMU/(SIG(J)*AMU+SIG(IEQ)*AMUP) C DO INTEGRAL OF D(MU,MUP) OVER MUP 0O95 SUM=GAUMAT AA, SCR, IG) 1096 DO 25 J=1,IG ->-0n97 DO?5 K=1,IG _-'0098 25 DSCR(JK)=EE(J,K)-DSCR(J,K) C NOW CALCULATE DETERMINANT OF DSCR, AND SOLVE FOR THE U EQUATIONS C SOLVE THE U EQUATIONS l099 CALL MINV(DSCRIG,DETD,LLLLLLLLLG) q1l' nIF (DETO) 29,26,29 "I1 26 WRITE(6,18) ITER 1"2 GO TO 35 1"3 29 DO 27 J=l,IG 0 1.4S 27 U(IMU,IEQ,J)=DSCR(J,IEQ).....-1-5. IF (IPR-5) 32,31,32.1.'"6 31 WRITE(6,77) ITER 1107 77 FORMAT('OWE HAVE JUST FINISHED ITERATION NUMBER',15) -'1'8 IPR=O C C TEST ON U AND V -"^109-UI NT=GAUMAT(V,8 IG)!11! n VINT=GAUMAT(U,D, IG) "111 DO 71 J=1,IG 112 DO 71 K=1,IG n- 133 OSCR (J,K)=0.D0 -114 DO 71 L=1,IG 11 5 71 OSCR (JK )= CSCR( JK )+C(J J, L8 (L, K) n116 DO 72 J=1,IG -117 00 72 K=1,IG l11 8 VET(J,K)=0.D00 "119 DO 72 L=1,IG 12"r 72 VET(J,K)=VET(J,K)+D(J,L)*DSCR(L,K) ^121 00 74 J=1,IG "122. DO 74 K=1,IG 1123 B(J,K)=SIG(J)*B(J,K) 0124 D(J,K)=SIG( K)*D(J,K) ^125 74 8(J,K)=.500*(B(J,K)+D(J,K)-VET(J,K ) 126 WRITE( 6,75 ) (8(tJ,K),K=l IG ),J=1 IG) 0127 75 FORMAT (t CHECK ON U AND V'/(# 1 8G15.7)) C CONVERGENCE TEST 0128 SMAX=O.ODO 0129 00 62 J=1,IG n13n DO 63 K=lIG 0131 63 DSCR(JK)=C.ODO "132 OSCR(JJ)=SIG(J).133 DO 62 K=1,IG

o08 MILNE 0134 OEV=DABS(DSCR(J,K)-B(J,K)) "1 35 IF (DEV.GT.SMAX) SMAX=DEV 1136 62 CONTINUE 0137 WRITE(6,67) SMAX "138 67 FORMAT('OMAXIMUM DEVIATION FROM THE SIGMA MATRIX IS',G15.7) "139 IF (SMAX.LT..1D-8) GO TO 40 C CHECK ON THE NUMBER OF ITERATIONS —IF GREATER THAN 50 IT DOES NOT C CONVERGE. THUS GO ON THE NEXT DATA SET "'14 - t32 IPR=IPR+1 "141 IF (ITER.EQ.75) GO TO 64 114? ITER=ITER+1 "143 GO TO 10 r144 64 WRITE(6,65) <'14'5 65 FORMAT('?U AND V HAVE NOT CONVERGED AFTER 75 ITERATICNS-THUS GO ON 1 TO THE NEXT DATA SFT') 146 66 GO TO 35 C C CALCULATON OF THE EMERGENT DISTRIBUTION FOR MILNE PRGBLEM LSING SIMPLE C TECHNIQUE "147 40 NUO=RT(1)'148 CALL PRT(U,V,IG,ITER) r'149 WRITE(6,67) SMAX:151 WRITE(6,97) NUC 1!51. 97 FORMAT'I1THE LARGEST DISCRETE ROOT IS',G15.8) C CALCULATE THE INTEGRAL OF THE DISCRETE EIGENFUNCTION SECTOR A!15? CALL AVEC(ASCR,NUO,C,S IG, I G,DSCR,8SCRLLLLLLL) 1153 WRITE(6,28) (ASCR(J),J=I,IG) "154 28 FORMAT(I^INTEGRAL OF DISCRETE EIGENVECTOR FOR LAKGEST NUO'/!('',8615.7)) C CALCULATE THE CONSTANT VECTOR B m155 DO 81 J=1,IG'156 BSCR(J )=O.ODD "57 00 81 L=1,IG "158 81 BSCR( J )=SCR(J)+C(J,L)*ASCR(L) "5159 DO 84 JMUP=1,16'16'" ODC 83 J=1,IG ^161 nO 83 K=1,IG "'162 83 AA(JMUPJ,K)=V(JMUP,J,K)/(SIG(K)*NUO+X(JMUP) ) ^163 DO 84 J=l,IG "164 VET(JMUP,J)=:.ODO "165 DO 84 L=1,IG "1i66 84 VET( JMUP,J)=AA(JMUPJ,L)*BSCR(L+VET(JMUPJ) 1157 VINT=GAUVEC(VET,ASCR, IG) 1168 00 82 J=1,IG 0169 0( 1, J)=0.ICOI "17 00D 89 L=1,IG M171 89 D(1,J)=D(1,J)+C(J,L)*ASCR(L) "172 82 BSCR(J)=(BSCR(J)-NUO*D(1,J))*NUO n173 WRITE(6,85) (BSCR(J),J=1,IG) "1'74 85 FORMAT ('.CONSTANT VECTOR B IS',(4G15.7)) C CALCULATED THE EMERGENT DISTRIBUTION "175 00 88 JMU=1,16 ^176 DO 88 J=1,IG 0177 VET(JMU,J)=0.ODO

109 MILNE 178 00DO 87 L=1GIG 0179 87 VET(JMU,J)=VET(JMU +U(JMU J T M, J, L *BSCR(L) 0180 88 VET(JMU,J)=VET(JMU,J)/(SIG(J)*NUO-X(JMU)) C PRINT OUT THE UNNORMALIZED EMERGENT DISTRIBUTION 0181 WRITE(6,45) _ _ _ __ _ n182 45 FORMAT('OTHE UNNORMALIZED MILNE EMERGENT DISTRIBUTION,PSI(Ot-MUU, 1 FOR ASCENDING MU IS'/) 0183 00 47 J=1,IG 0184 WRITE(6,46) J 0185 47 WRITE(6,48) (VET(JMUJ),JMU=1,16) 0186 46 FORMAT ('OGROUP NUMBER IS',i3) 0187 48 FORMAT ('',8G15.7) C C CALCULATION OF THE DISCRETE EIGENFUNCTIONS 0188 CALL AVEC(ASCR,NUO,CSIG,I GDSCR,BSCRLLLLLLL) "189 CALL EIGEN(EIGNUO,IG) 0190 CALL EIGEN(EIGN,-NUO,IG) 0191 00 117 J=1,IG 0192 DO 117 K=1,IG -193 117 D(J,K)=C(K,J) 0194 CALL AVEC(ASCR,NUOD,SIG,IG,DS BS,BSCRLLL,LLLL 0195 CALL EIGEN(ADEIG,NUO,IG) 0196 CALL E IGEN(ADEIGN,-NJO, IG) ~197 WRITE(6,100) 0198 WRITE(6,102)((EIG(JMUJ),JMU=1,16),J=1,IG ^199 WRITE(6,102) ( ( E IGN(JMU J,JMU=1,16),J=, IG) 02-00 WRITE(6,102 ) ((ADEIG(JMUJ),JMU=1,16),J=1, IG 0201 WRITE(6,102)((ADEIGN(JMU,J),JMU=1,16),J=1,IG) n022 100 FORMAT('ITHE EIGENFUNCTIONS FOR ASCENDING MU ARE') 0203 102 FORMAT (2'',8G15.7/)/2('',8G15.7/)/) C CALCULATION OF THE NORMALIZATION BY INTEGRATION 02.4 00D 105 JMU=1,16 02-5 FST=O.ODO n20 6 DO 104 J=1 IG 2207 104 FST=FST+EIG( JMU, J)*ADEIG(JMU,J) 1208 105 VEC(JMUI=X(JMU)*FST 0209 FST=GAUS(VEC,IG) 0211 DO 106 JMU=1,16 0211 TST=O.ODO 0212 DO 107 J=1,IG 0213 107 TST=TST+ADEIGN(JMUJ *E IGN(JMUJ) 0214 106 VEC(JMU)=X(JMU)*TST 215 T ST=GAUS(VEC, IG) 0216 NPLUS=FST-TST 217 _ WRITE(6, 09) NPLUS _ 0218 109 FORMAT ({ONPLUS BY DIRECT INTEGRATION IS',G15.7} C CALCULATION OF ASYMPTOTIC COEFFICIENT 0219 DO 110 JMU=l,16 220 SS=O.ODO n221 DO 11 J=1,IG 0222 111 SS=SS+VET ( JMU,J)*ADEIG(JMU,J). 0223 110 VEC(JMU)=X(JMU)*SS 0 224 SS=GAUS( VEC, IG) /NPLUS 0225 WRITE(6,112) SS 0226 112 FORMAT('OASYMPTOTIC COEFFICIENT IN MILNE PROBLEM EIGENVECTOR EXPAN.............

110 MILNE ISION IS (SHOULD BE UNITY)',G15.8) C CALCULATION OF THE EXTRAPOLATION ENGTH n277 DO 113 JMU=1,16 "'22 8 S S=:O.":DO "229 DO 114 J=1,IG -'?2-3' 114 SS=SS+VET JMU,J)*ADEIGN(JMU,J)?23 113 VEC(JMU)=SS*X(JMU)'232 EXTRAP=-GAUS(VEC,IG)/NPLUS 2233 EXTRAP=. 5OO*NUO*DLG (-EXTRAP)'234 WRITE(6,115) EXTRAP "235 115 FORMAT ('0EXTRAPOLATION LENGTH IS',G15.7) C NOW NORMALIZE EMERGENT DISTRIBUTION TO UNIT DENSITY:236 VINT=GAUVEC(VET,BSCR,IG)'237 WRITE(6,48) (BSCR(J) J=,IG) q238 NORM=O.ODO'-239 DO 116 J=1,IG ~"'24 116 NORM=NORM+BSCR(J) a241 DO 51 J=1,IG'247 DO 51 JMIJ=1,16 "243 51 VET( JMU,J)=VET( JMU,J)NORM C PRINT OUT NORMALIZED FLUX'244 WRITE(6,52) "' 245 52 FORMAT('1THF EMERGENT DISTRIBUT ION,NORMALIZED TC UNIT DENSITY AT T IHE INTERFACE, IS'/'246 0D 53 J=lIG ^747 WRITE( 646) J'248 53 WRITE(6,48) (VET(JMU,J),JMU=1,16),249 GO TO 35 025n 999 FORMAT(1HO) ^251 END

B.2 PROGRAM GENUV

Purpose

This main program iterates Eqs. (6.3) and (6.4) for U(μ) and V(μ) for arbitrary C and Σ matrices. This method has previously been called the "direct iteration" scheme (see Section 6.1).

Restrictions

(i) This program calls the subroutines PRT and GAUX, and they must be supplied.

(ii) The dimension statements 3-5 must be changed if the number of groups is changed.

List of Principal Symbols

Program Symbol      Definition
IG                  N (number of groups)
IPR                 Printing iteration counter
EE(IG,IG)           E (unit matrix)
SIG(I)              σᵢ
C(I,J)              cᵢⱼ
V(I,J,K)            [V(μᵢ)]ⱼₖ
U(I,J,K)            [U(μᵢ)]ⱼₖ
VINT                ∫_0^1 dμ V(μ)
UINT                ∫_0^1 dμ U(μ)
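Direct iteration of this kind can be illustrated in miniature (Python; the scalar model map is a hypothetical stand-in for the coupled matrix equations, and the relaxation factor ω illustrates the acceleration schemes mentioned in Section 6.1):

```python
def fixed_point(f, x0, omega=1.0, tol=1e-12, max_iter=100000):
    """Relaxed fixed-point iteration x <- (1 - omega)*x + omega*f(x).

    omega = 1 recovers plain direct iteration; a well-chosen omega
    shrinks the contraction factor and so accelerates convergence."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = (1.0 - omega) * x + omega * f(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("iteration did not converge")

# Model map (hypothetical, contraction factor 0.99): fixed point is x = 1.
f = lambda x: 0.99 * x + 0.01
x_plain, n_plain = fixed_point(f, 0.0)              # slow: thousands of sweeps
x_relax, n_relax = fixed_point(f, 0.0, omega=50.0)  # contraction 0.5: fast
```

The relaxed map has slope 1 - 0.01ω, so ω = 50 turns a near-unit contraction into a factor of one half per sweep.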

112 GENUV ~ THiS PAO4RAM CALCULATES VARIOUS TWO GROUP PROBLEMS.... —I —TIbSCRlt GNIT-ION"-iS —US'ED-SOLVES -ONLY -THE GENERAL — EQUATIONS C0G01 IMPLICIT REAL*8(A-HtM-ZJ -J7T02 - -.COM0ti4ON/GAU/X 1 6) C THE FOLLOWING CARD IS TO BE CHANGED WHEN THE NUMBER OF GROUPS IS CHANGED 9X':3-.. REAL*8 AA( 16,2,2),B(2.2),SIG(21,C(2,2),U(16,2 2),V(16,2,2 ) ~0004 REAL#8 EE(2,2)/4O0.00/, ASCR(2), BSCR(2), S(16,16,2,2), VET(16,2) -30 — ~ —-— xAtorte i2i, — C READ IN THE INPUT DATA GCC~. - - 1 — FORMAT(-i2-) - --.-407 35 REAL05, l) IG O- --— 8 - -- -rIF:-trG.-.QO -' RETURN C:~ 9 1 PR.=I ~C,.O --— UU 8 J=liG ic11 8 EE4 J-,J=l1.D.0-3 2. -. 2 FORMATSG/I0. 7 ) 0013 READ45,2) (SIG(J),J=1,IG).4-......... -- — READ4 5, 2} 1 (Ci,K) K=1,IG) *JS=, IG) ~ PRINT THE INPUT DATA ~:~O.T5 I3- FORMAT'ITHE SIGMA MATRIX IS'/('',8G15.7)) 01OGA6 RITE(6,3) tSIG(JI,J=1,IG) 001T. -. 4 FORMAT('OTHE C MATRIX IS/('',8G15.7)).~-Oa8 WRIE (6b,4 (C(JK),K=IIG) J=l 1G) C CALCULATION OF THE MU ORDINATES FOR INTEGRATION FROM 0 TO 1 G:19 SU)=AU X (0.000,1.D00).... ~; 5 FORMAT'OTHE ORDINATES FOR THE MU INTEGRATION ARE'/('',8G15.7)) 0021 WRITE(6,51 (X(I),I=I,16). INITIAL4ZE U AND V TO THE UNIT MATRIX 0022 DO 6 IMU-1L,16 -023- D00 6 J=1- IG ~024 00 6 K=1,IG ":Xt-25- -.- 1/(I'Mi u,-J,K i E'' J, K I.C026 6 U( It J,K=E ( JK) C ITERATE THE U AND V EQUATIONS 0027 ITERR 1 X-C028 10 DO 17 IKU=1,16:.0029 AMU=. ( IM U) C THE V EQUJATION —-FIRST CALCULATE C.V(MU),0030 DO 12 J=1,IG.0.3. r.D..DO 12 K=1IIG ~0032 SUm=O.0DO -3 —......- 00 — 13 L=1,IG 0034 13 SUM=SUN+ C(J,L)*V(IMU,L,K)..-J.t357~ 12 B (JAt^ =SUH M C NOW CALCULATE AA(MUPRIME)=B.Ui(UPRIME).C.V(MU).0036 — - DO 15 I-IUP=1, 16 - C37 AMJP=X (IMUP) 3003& D00 — 15 J-IIGS ~C39 00 15 K=1IG -00.1l DO'1 L=_1,IG RiOZ- --...14- SU —SUM+ U( I MUPJ, L) *B[ —t-,K) 0043 15 AA(IMIUP,J,K)=SUM4AMU/(SIG (J)*AMU+SIG(K )*AMUP) -..-.C NOM INTEGRATE OVER NUPRIME AND GET V(MU).044 SU UGAUMAT AA. I G

115 GENUV 0C45 DO LI 4=:1,1G 00 17 J=11lG ~-~-ED —..... — IG0047 17 V(.IMkJ, K)=B(J,K)+EE(JK) - --.. DO THEJ — EQUATION- FIRST CALCULATE U(NU).C tC48 00 30 IMU=1,16..C..... -9 A =XI'I.U 0050 00 22 J=l, IG ~51-' - --..'-" 2 —K' G- G. -..,0052 SU=0.000..... ~530D 23 L=1,IG 0054 23 SUM=SUM+ UIIMUIJL*C(L,K)....00552- -22 8(JAX=S UM C NOW CALCULATE A.U(NUJ.C.V(MUPRIME)..5... ~.......0..Z5 UP-1, 16 00 5 7 AMUP=X( IUUP) 0058 - Do 25- J=1, IG 0059 00 25 K=1,IG -0001 DO 2.- L=1IG ~-0 —2.24- S.'UIsUM-IU B 8 JL)* V ( IMUPL, K OOb3 25 AA, 1UP,.J, K )=SUM*AMU/(S G(J ) *AMUP+S IG( K )*AMU).. C NOC. INTEGRATE OVER MUPRIME AND GET U(MU),464 S-UX=GAU4A T(AA, 8,LG) -00-'-:.5- - DO 27 Jl, IG:00.6 00 27 K=1,IG....-.27 U( If,J,K)=Bi.J,K)+EE(J,K) 0068 30 CONTINUE...:iS.........I 1F i4PR-5J 32,31,32 0070 31 CALL PRTiU,V,IG,ITER) -CC 71 -. IPR=0'TEST ON U AND V 0072- Z VINT.GAUMAT ( V'B, I G )'0 -C13 U NTGAUnAT U,D,1G) -0074 WRITE6,81)} tB(J,K),K=1,2),J=1,2).0015 iRITE(6,81{ tD(.J,K ),K=1,2),J=1,2) OC7-6 -' DO 7,1 J=1, 1G iCC07 00 71 K=.,IG1 -~ v0-"A78.. A AAt 1, 4J','K.K=0.000 -0.7 900 DO 71 L=.IIG 0 80 71 AA( 1,JK3=AA1 J, K)+C(JL)*B(L,K) SC61 DO 73 J=-1,IG CCe2 DO 7.3 K=lIG:0083 VET i,K )=0.000.8- 4. - - D-71 = G,CC65 73 VETJdKJ)=VET JK)+D(J,L)*AAtI L,K) >. C86...i..dRTTE-6*I 1 Y((VET JK), K=1,2i,J=1,2) ~087 81 FORMAT (0:' G,415.7) C0{8.-. D — -- -- T — t-hK I G. —---------- —. ~0.089 DO 71 J=1, IG.C~000-B-i.{,:jl = S iGTK'B'fTKj, —J0091 DI.K,.Jj= SIGIJ)*DOK,J) JUZ- — 74 8-(-is 50-* (8 BIK,,J +DO, J)-V T ( K, J) ) 00C93 idRITE(6,975i ((BK,Jt),J=1,IGJK=1,fG) 0054.- - 75 -FORMAT —( —. CHECK ON U AND V'/(','8Gl5.7)).OS5 32.IPR=PR+l C THERE iS NO CONVERGENCE TEST —JUST WHETHER U AND V HAVE BEEN C' —ITER-TE- ENT TES- - CC96 IF 4TEREQ.100) GO TO 35,.0.9: lt.~R!TER+.l 0098 o0 TIO 10 - b.~C-1 raENO

B.3 SUBROUTINE: AVEC(A,NUO,C,SIG,IG,AA,AX,LLL,LLLL)

Purpose

This subroutine calculates the normalization of a discrete eigenvector,

a(ν₀) = ∫_{-1}^{1} dμ φ(ν₀,μ),

by evaluating Eq. (6.12).

Dummy Variables

Program Symbol      Definition
A(IG)               Vector where a(ν₀) is stored
NUO                 Discrete eigenvalue, ν₀
C(IG,IG)            C matrix
SIG(IG)             Stores matrix elements σᵢ
IG                  Number of groups
AA(IG,IG), AX(IG), LLL(IG), LLLL(IG)    Temporary storage for quantities generated by the subroutine

ENTRY EIGEN(EIG,NUO,IG)

Purpose

This entry calculates the discrete eigenvectors φ(ν₀,±μₖ) for the positive Gaussian integration ordinates μₖ, k = 1-16.

Dummy Variables

Program Symbol      Definition
EIG(K,I)            φᵢ(ν₀,±μₖ)
NUO                 ±ν₀
IG                  Number of groups

Restrictions

(i) |ν₀| must be greater than unity.

(ii) AVEC must be called before EIGEN is used.

(iii) The subroutine MINV must be supplied.

(iv) The external function GAUX must be called before using AVEC to generate the correct Gaussian ordinates.

AVEC

      SUBROUTINE AVEC(A,NUO,C,SIG,IG,AA,AX,LLL,LLLL)
      IMPLICIT REAL*8(A-H,M-Z)
      REAL*8 A(1),NUO,C(IG,IG),SIG(1)
      REAL*8 AA(IG,IG),AX(1)
      DIMENSION LLL(1),LLLL(1)
      T(X)=DLOG((1.D0+X*NUO)/(X*NUO-1.D0))*NUO
      IN=IG-1
      DO 13 J=1,IN
      SG=SIG(J)
      SG=T(SG)
      DO 12 K=1,IN
   12 AA(J,K)=C(J,K)*SG
      AA(J,J)=AA(J,J)-1.D0
   13 AX(J)=C(J,IG)*SG
      CALL MINV(AA,IN,DET,LLL,LLLL,IG)
      DO 14 J=1,IN
      A(J)=0.0D0
      DO 14 L=1,IN
   14 A(J)=A(J)-AA(J,L)*AX(L)
      A(IG)=1.D0
      RETURN
      ENTRY EIGEN(EIG,NUO,IG)
      REAL*8 EIG(16,IG)
      COMMON/GAU/X(16)
      DO 16 J=1,IG
      AX(J)=0.0D0
      DO 16 L=1,IG
   16 AX(J)=AX(J)+C(J,L)*A(L)
      DO 19 JMU=1,16
      AMU=X(JMU)
      DO 19 J=1,IG
   19 EIG(JMU,J)=NUO*AX(J)/(SIG(J)*NUO-AMU)
      RETURN
      END
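The solve performed by AVEC, namely fixing the last component at unity, moving that column to the right-hand side, and solving the first N-1 equations, can be paraphrased in modern terms as follows (Python, with an explicit Gaussian elimination standing in for MINV; the singular test matrix is hypothetical, not thesis data):

```python
def normalize_null(B):
    """Given a singular N x N matrix B (so that B a = 0 has a
    nontrivial solution), set a_N = 1 and solve the first N-1
    equations for the remaining components."""
    n = len(B)
    m = n - 1
    A = [row[:m] for row in B[:m]]        # coefficients of a_1..a_{N-1}
    rhs = [-B[i][m] for i in range(m)]    # a_N = 1 moved to the right side
    for col in range(m):                  # forward elimination with pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            fac = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= fac * A[col][c]
            rhs[r] -= fac * rhs[col]
    a = [0.0] * m
    for r in range(m - 1, -1, -1):        # back substitution
        s = rhs[r] - sum(A[r][c] * a[c] for c in range(r + 1, m))
        a[r] = s / A[r][r]
    return a + [1.0]

# Hypothetical singular matrix whose null vector is proportional to (1, 2, 3):
B = [[1.0, 1.0, -1.0], [3.0, 0.0, -1.0], [0.0, 3.0, -2.0]]
a = normalize_null(B)   # -> [1/3, 2/3, 1]
```

Any nonzero last component could serve as the normalization; unity is simply the convenient choice made in AVEC.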

B.4 SUBROUTINE: ROOT(FN,C,SIG,IG,XST,XEND,EPS1,EPS2,DSCR,BSCR)

Purpose

This subroutine finds the simple zero, x0, of an arbitrary function f(x) in the interval x1 to x2. The method of calculation used in this subroutine is discussed in detail in Section 6.2.

Dummy Variables

Program Symbol    Definition
FN                f(x).
C(IG,IG)          C.
SIG(IG)           Σ.
IG                N (number of groups).
XST               x1.
XEND              x2.
EPS1              Rough grid size.
EPS2              Allowable error in x0.
DSCR,BSCR         Scratch arrays used for temporary storage.

Restrictions

(i) There must be only one zero between XST and XEND.

(ii) |XST + EPS1| must be less than |XEND|.

(iii) EPS1 and EPS2 must have the same sign.

(iv) An external function, FN, is required.

ROOT

      REAL FUNCTION ROOT*8(FN,C,SIG,IG,XST,XEND,EPS1,EPS2)
      IMPLICIT REAL*8 (A-H,M-Z)
      REAL*8 FN,XST,XEND,EPS1,EPS2,C(IG,IG),SIG(IG)
      EP=EPS1
      S=DABS(EP)/EP
      Y=XST
   10 Y1=FN(C,SIG,IG,Y)
   11 Y2=FN(C,SIG,IG,Y+EP)
      IF (Y1*Y2) 12,15,14
   14 IF (S*(Y+2.D0*EP).GE.XEND*S) GO TO 12
      Y=Y+EP
      Y1=Y2
      GO TO 11
   12 IF (S*EP.LT.EPS2*S) GO TO 13
      EP=EP/10.D0
      GO TO 11
   15 IF (Y2.EQ.0.D0) Y=Y+EP
   13 ROOT=Y
      RETURN
      END
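ROOT's strategy (march across the interval in steps of EPS1; when f changes sign, shrink the step tenfold and rescan the bracketing cell, stopping once the step falls below EPS2) can be paraphrased in Python. The name `find_root` is a hypothetical stand-in, and this sketch assumes a positive step and omits ROOT's exact-zero special case.

```python
def find_root(f, x1, x2, eps1, eps2):
    """Grid-scan root finder in the style of ROOT (Section 6.2):
    step by eps1 until f changes sign, then refine the bracketing
    cell with a step 10 times smaller, until the step is below eps2."""
    step, y = eps1, x1
    fy = f(y)
    while abs(step) >= eps2:
        f2 = f(y + step)
        if fy * f2 < 0.0:
            step /= 10.0          # sign change: rescan this cell more finely
        else:
            y, fy = y + step, f2  # same sign: advance one cell
            if y > x2:
                raise ValueError("no sign change found in (x1, x2)")
    return y

root = find_root(lambda x: x * x - 2.0, 0.0, 3.0, 0.5, 1e-10)
```

On exit the zero lies within ten final steps of the returned point, so EPS2 controls the error in x0 exactly as the description above states.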

B.5 EXTERNAL FUNCTION: DIS(C,SIG,IG,Y,A,TN)

Purpose

This subroutine calculates the determinant of the dispersion matrix for any value of the argument.

Dummy Variables

Program Symbol    Definition
C(IG,IG)          C.
SIG(IG)           Σ.
IG                N (number of groups).
Y                 1/ν (argument of dispersion matrix).
A(IG,IG)          Λ(ν) (dispersion matrix).
TN(IG)            Scratch vector used for temporary storage.
DIS               det Λ(ν).

Restriction

(i) The subroutine MINV is called and must be supplied.

DIS

      REAL FUNCTION DIS*8(C,SIG,IG,Y,A,TN)
      IMPLICIT REAL*8 (A-H,M-Z)
      REAL*8 Y,C(IG,IG),SIG(IG),A(IG,IG),TN(IG)
C     CHANGE THE FOLLOWING CARD WHEN THE NUMBER OF GROUPS IS CHANGED
      DIMENSION LLL(3),LLLL(3)
C     CONSTRUCT THE DISPERSION MATRIX
      DO 11 J=1,IG
      TN(J)=DLOG((SIG(J)+Y)/(SIG(J)-Y))
      DO 11 K=1,IG
   11 A(J,K)=C(J,K)*TN(J)/Y
      DO 12 J=1,IG
   12 A(J,J)=A(J,J)-1.D0
      CALL MINV(A,IG,DETD,LLL,LLLL,IG)
      DIS=DETD
      RETURN
      END
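DIS's construction of the dispersion matrix translates almost line for line into NumPy. The name `dispersion_det` is illustrative, and the sign and factor conventions below simply follow the listing: with y = 1/ν, the matrix is A_jk = c_jk log((Σ_j + y)/(Σ_j - y))/y - δ_jk.

```python
import numpy as np

def dispersion_det(C, sig, y):
    # Build the dispersion matrix exactly as DIS does (y = 1/nu):
    # A[j,k] = C[j,k]*log((sig[j] + y)/(sig[j] - y))/y - delta_jk,
    # then take its determinant (the FORTRAN obtains it from MINV).
    t = np.log((sig + y) / (sig - y)) / y
    A = C * t[:, None] - np.eye(len(sig))
    return np.linalg.det(A)

# One-group check: C = [[0.5]], sig = [1], y = 0.5 gives
# det = 0.5*log(3)/0.5 - 1 = log(3) - 1.
d = dispersion_det(np.array([[0.5]]), np.array([1.0]), 0.5)
```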

B.6 SUBROUTINE: PRT(U,V,IG,ITER)

Purpose

This subroutine prints out the values of the U- and V-matrices.

Dummy Variables

Program Symbol    Definition
U(K,I,J)          [U(μ_k)]_ij.
V(K,I,J)          [V(μ_k)]_ij.
IG                N (number of groups).
ITER              Iteration index for U and V.

PRT

      SUBROUTINE PRT(U,V,IG,ITER)
      IMPLICIT REAL*8(A-H,M-Z)
      REAL*8 U(16,IG,IG),V(16,IG,IG)
   10 FORMAT('1AT ITERATION NUMBER',I4,' VALUES OF V(MU,J,K) AND U(MU,J,K)
     1 ARE LISTED BELOW'/)
      WRITE(6,10) ITER
   11 FORMAT(8G15.7)
   12 FORMAT('0U(MU,',I2,',',I2,') FOR ASCENDING VALUES OF MU')
   13 FORMAT('0V(MU,',I2,',',I2,') FOR ASCENDING VALUES OF MU')
      DO 20 J=1,IG
      DO 20 K=1,IG
      WRITE(6,12) J,K
      WRITE(6,11) (U(I,J,K),I=1,16)
      WRITE(6,13) J,K
   20 WRITE(6,11) (V(I,J,K),I=1,16)
      RETURN
      END

B.7 EXTERNAL FUNCTION: GAUX(AA,BB)

Purpose

This initial entry calculates the sixteen Gaussian integration ordinates, μ_k, k = 1,...,16, for the interval (AA,BB). The resultant ordinates are stored in the COMMON block /GAU/ vector X(16). The following entry points then integrate matrix functions, vector functions, and scalar functions.

Entries

(1) GAUMAT(A,B,IG)

Program Symbol    Definition
A(K,I,J)          [A(μ_k)]_ij, k = 1,...,16 (A(μ) is some arbitrary matrix function).
B(I,J)            The integral over (AA,BB) of [A(μ)]_ij, approximated by Σ_k w(μ_k)[A(μ_k)]_ij, k = 1,...,16.
IG                N (dimension of A and B).

(2) GAUVEC(AV,BV,IG)

Program Symbol    Definition
AV(K,I)           [AV(μ_k)]_i, k = 1,...,16 (AV(μ) is some arbitrary vector function).
BV(I)             The integral over (AA,BB) of [AV(μ)]_i, approximated by Σ_k w(μ_k)[AV(μ_k)]_i, k = 1,...,16.
IG                N (dimension of vectors AV and BV).

(3) GAUS(AS,IG)

Program Symbol    Definition
AS(K)             f(μ_k), where f(μ) is an arbitrary scalar function.
GAUS              The integral over (AA,BB) of f(μ), approximated by Σ_k w(μ_k)f(μ_k), k = 1,...,16.
IG                Any integer.

Restriction

(i) GAUX must be called before any of the entry points are called.

GAUX

      REAL FUNCTION GAUX*8(AA,BB)
      IMPLICIT REAL*8(A-H,M-Z)
      REAL*8 AA,BB
      COMMON/GAU/XI(16)
      REAL*8 ZI(8)/.0950125098,.2816035508,.4580167777,.6178762444,
     1.7554044084,.8656312024,.9445750231,.9894009350/,
     2WT(8)/.1894506105,.1826034150,.1691565194,.1495959888,
     3.1246289713,.0951585117,.0622535239,.0271524594/
      C=.5D0*(BB-AA)
      D=.5D0*(BB+AA)
      DO 9 J=1,8
      K=9-J
    9 XI(J)=-C*ZI(K)+D
      DO 8 J=9,16
      K=J-8
    8 XI(J)=C*ZI(K)+D
      GAUX=XI(1)
      RETURN
C
      ENTRY GAUMAT(A,B,IG)
      REAL*8 A(16,IG,IG),B(IG,IG)
      DO 20 K=1,IG
      DO 20 J=1,IG
      SUM=0.0D0
      DO 21 I=1,8
      L=9-I
   21 SUM=SUM+A(L,J,K)*WT(I)
      DO 22 I=9,16
      L=I-8
   22 SUM=SUM+A(I,J,K)*WT(L)
   20 B(J,K)=SUM*C
      GAUMAT=1.D0
      RETURN
C
      ENTRY GAUVEC(AV,BV,IG)
      REAL*8 AV(16,IG),BV(1)
      DO 30 K=1,IG
      SUM=0.0D0
      DO 31 I=1,8
      L=9-I
   31 SUM=SUM+AV(L,K)*WT(I)
      DO 32 I=9,16
      L=I-8
   32 SUM=SUM+AV(I,K)*WT(L)
   30 BV(K)=SUM*C
      GAUVEC=1.D0
      RETURN
C
      ENTRY GAUS(AS,IG)
      REAL*8 AS(16)
      GAUS=0.0D0
      DO 41 I=1,8
      L=9-I
   41 GAUS=GAUS+AS(L)*WT(I)
      DO 42 I=9,16
      L=I-8
   42 GAUS=GAUS+AS(I)*WT(L)
      GAUS=GAUS*C
      RETURN
      END
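The tabulated abscissae and weights in GAUX are the standard 16-point Gauss-Legendre values for (-1,1), mapped onto (AA,BB) by x = cz + d with c = (BB-AA)/2 and d = (BB+AA)/2. The same mapping in NumPy (the function name is illustrative):

```python
import numpy as np

def gauss_ordinates(a, b, n=16):
    # Gauss-Legendre nodes/weights on (-1, 1), mapped to (a, b)
    # exactly as GAUX maps its eight tabulated symmetric values.
    z, w = np.polynomial.legendre.leggauss(n)
    c, d = 0.5 * (b - a), 0.5 * (b + a)
    return c * z + d, c * w

x, w = gauss_ordinates(0.0, 1.0)
# 16-point Gauss-Legendre quadrature is exact for polynomials up to
# degree 31, so the integral of mu**3 over (0, 1) comes out exactly 1/4.
approx = np.sum(w * x ** 3)
```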

B.8 SUBROUTINE: MINV(A,N,D,L,M,NC)

Purpose

This subroutine calculates the determinant and inverse of an arbitrary square matrix.

Dummy Variables

Program Symbol    Definition
A(N,N)            Arbitrary square matrix, which is replaced by its inverse.
N                 Dimension of matrix A.
D                 Determinant of A.
L,M               Scratch vectors used in calculating the inverse.
NC                The declared dimension of the array containing A. Taking N less than NC allows evaluation of the determinant and inverse of a minor of A; for example, with N = NC - 1 the inverse and determinant of the leading (NC-1)x(NC-1) minor are calculated.

MINV

      SUBROUTINE MINV(A,N,D,L,M,NC)
      DIMENSION A(1),L(1),M(1)
      DOUBLE PRECISION A,D,BIGA,HOLD
C     SPECIAL CASE FOR A DEGENERATE MATRIX N=1
      IF (N.GT.1) GO TO 11
      D=A(1)
      A(1)=1.D0/D
      RETURN
C     INITIALIZATION
   11 ICOUNT=0
      D=1.D0
      DO 10 I=1,N
   10 L(I)=I
      K=1
C     SEARCH FOR LARGEST ELEMENT IN THE RESIDUAL MATRIX
   15 BIGA=0.0
      DO 30 J=K,N
      IJBASE=NC*(L(J)-1)
      DO 30 I=K,N
      IJ=IJBASE+L(I)
      IF (DABS(BIGA)-DABS(A(IJ))) 20,30,30
   20 BIGA=A(IJ)
      IZ=L(I)
      JZ=L(J)
      KSUB=J
   30 CONTINUE
      IF (BIGA) 50,40,50
   40 D=0.0
      RETURN
C     PERFORM ROW INTERCHANGE AND MODIFY RESIDUAL SUBSCRIPTS
   50 L(KSUB)=L(K)
      IF (IZ-JZ) 60,80,60
   60 ICOUNT=ICOUNT+1
      L(ICOUNT)=IZ
      M(ICOUNT)=JZ
      DO 70 J=1,N
      HOLD=A(IZ)
      A(IZ)=A(JZ)
      A(JZ)=HOLD
      IZ=IZ+NC
   70 JZ=JZ+NC
      JZ=M(ICOUNT)
      D=-D
C     REPLACE PIVOT BY RECIPROCAL AND DIVIDE PIVOT ROW
   80 IZ=NC*(JZ-1)
      A(IZ+JZ)=1.0
      I=JZ
      DO 90 J=1,N
      A(I)=A(I)/BIGA
   90 I=I+NC
C     REDUCE THE MATRIX
      DO 110 I=1,N
      IF (I-JZ) 100,110,100
  100 HOLD=-A(IZ+I)
      A(IZ+I)=0.
      IJ=I
      KSUB=JZ
      DO 110 J=1,N
      A(IJ)=A(IJ)+HOLD*A(KSUB)
      IJ=IJ+NC
      KSUB=KSUB+NC
  110 CONTINUE
C
      D=D*BIGA
      K=K+1
      IF (N-K) 120,15,15
C     PERFORM COMPENSATING COLUMN INTERCHANGES
  120 IF (ICOUNT) 150,150,130
  130 IZ=NC*(L(ICOUNT)-1)
      JZ=NC*(M(ICOUNT)-1)
      DO 140 I=1,N
      IZ=IZ+1
      JZ=JZ+1
      HOLD=A(IZ)
      A(IZ)=A(JZ)
  140 A(JZ)=HOLD
      ICOUNT=ICOUNT-1
      GO TO 120
  150 RETURN
      END
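MINV's contract (replace A by its inverse and return the determinant) has a direct modern equivalent. The sketch below is a stand-in using NumPy's LU-based routines rather than a re-implementation of the Gauss-Jordan full-pivoting algorithm above, and the function name is illustrative.

```python
import numpy as np

def inverse_and_det(A):
    # Same two outputs as MINV: the inverse of A and det A.
    # (MINV produces them in place by Gauss-Jordan elimination with
    # full pivoting; NumPy uses an LU factorization internally.)
    return np.linalg.inv(A), np.linalg.det(A)

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
Ainv, det = inverse_and_det(A)   # det = 4*6 - 7*2 = 10
```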

B.9 PROGRAM FRED

Purpose

This program evaluates the analytic approximations, Eqs. (3.95) and (3.96), for the U- and V-functions. The program symbols are the same as those in MILNE and hence will not be repeated here.

FRED

C     THIS PROGRAM TESTS THE FREDHOLM EQUATION APPROXIMATION
      IMPLICIT REAL*8 (A-H,M-Z)
      COMMON/GAU/X(16)
C     WHEN PUTTING INTO MAIN MUST INCLUDE NEWLY DEFINED MATRICES ESCR,CSCR
      REAL*8 U(16,2,2),V(16,2,2),AA(16,2,2),EE(2,2)/4*0.0D0/
      REAL*8 EIG(16,2),ADEIG(16,2),EIGN(16,2),ADEIGN(16,2)
      REAL*8 VEC(16),ESCR(2),ASCR(2),BSCR(2),C(2,2),D(2,2),DSCR(2,2)
      REAL*8 SIG(2),CSCR(2)
      DIMENSION LLL(2),LLLL(2)
C     CALCULATE THE MU ORDINATES AND READ IN INPUT DATA
      SUM=GAUX(0.D0,1.D0)
    5 FORMAT('0THE ORDINATES FOR THE MU INTEGRATION ARE',/(' ',8G15.7))
      WRITE(6,5) (X(I),I=1,16)
    4 FORMAT(I2,G18.8)
    6 READ(5,4) IG,NUO
    2 FORMAT(8G10.7)
      READ(5,2) (SIG(J),J=1,IG)
      IF (SIG(1).EQ.0.0D0) RETURN
      READ(5,2) ((C(J,K),K=1,IG),J=1,IG)
      DO 8 J=1,IG
      EE(J,J)=1.D0
      DO 8 K=1,IG
    8 D(J,K)=C(K,J)
    3 FORMAT('1NEXT SET OF DATA')
      WRITE(6,3)
C     CALCULATION OF THE EIGENFUNCTION NORMALIZATION--VECTOR A
      CALL AVEC(ASCR,NUO,C,SIG,IG,DSCR,BSCR,LLL,LLLL)
      WRITE(6,201) (ASCR(J),J=1,IG)
  201 FORMAT(' INTEGRAL OF DISCRETE EIGENFUNCTION IS',(' ',5G15.8))
      CALL EIGEN(EIG,NUO,IG)
      CALL EIGEN(EIGN,-NUO,IG)
      CALL AVEC(BSCR,NUO,D,SIG,IG,DSCR,CSCR,LLL,LLLL)
      WRITE(6,202) (BSCR(J),J=1,IG)
  202 FORMAT(' INTEGRAL OF ADJOINT EIGENFUNCTION IS',(' ',5G15.8))
      CALL EIGEN(ADEIG,NUO,IG)
      CALL EIGEN(ADEIGN,-NUO,IG)
      WRITE(6,102) ((EIG(JMU,J),JMU=1,16),J=1,IG)
      WRITE(6,102) ((EIGN(JMU,J),JMU=1,16),J=1,IG)
      WRITE(6,102) ((ADEIG(JMU,J),JMU=1,16),J=1,IG)
      WRITE(6,102) ((ADEIGN(JMU,J),JMU=1,16),J=1,IG)
  102 FORMAT(2(' ',8G15.7/)/2(' ',8G15.7/)/)
C     CALCULATION OF THE DISCRETE EIGENFUNCTION NORMALIZATION--NPLUS
      DO 105 JMU=1,16
      FST=0.0D0
      DO 104 J=1,IG
  104 FST=FST+EIG(JMU,J)*ADEIG(JMU,J)
  105 VEC(JMU)=X(JMU)*FST
      FST=GAUS(VEC,IG)
      DO 106 JMU=1,16
      TST=0.0D0
      DO 107 J=1,IG
  107 TST=TST+ADEIGN(JMU,J)*EIGN(JMU,J)
  106 VEC(JMU)=X(JMU)*TST
      TST=GAUS(VEC,IG)
      NPLUS=FST-TST
      WRITE(6,109) NPLUS
  109 FORMAT('0NPLUS BY DIRECT INTEGRATION IS',G15.8)
C     CALCULATE U AND V APPROXIMATIONS
C     CALCULATE THE SCALAR DIVISOR
      DO 212 JMU=1,16
      SUM=0.0D0
      DO 211 J=1,IG
  211 SUM=SUM+ADEIG(JMU,J)*EIG(JMU,J)
  212 VEC(JMU)=SUM*X(JMU)
      SCALAR=GAUS(VEC,IG)
      WRITE(6,213)
  213 FORMAT('0THE SCALAR INTEGRAND AND THE RESULTANT INTEGRAL ARE')
      WRITE(6,217) (VEC(J),J=1,16)
  217 FORMAT(' ',8G15.7)
      WRITE(6,218) SCALAR
  218 FORMAT(G16.8)
      CALL GAUVEC(EIG,ESCR,IG)
      CALL GAUVEC(ADEIG,CSCR,IG)
      WRITE(6,214) (CSCR(J),J=1,IG),(ESCR(J),J=1,IG)
  214 FORMAT('0INTEGRALS OF EIGN AND ADEIGN ARE',(' ',2G15.7))
C     CALCULATE U AND V
      DO 215 JMU=1,16
      DO 215 J=1,IG
      DO 216 K=1,IG
      U(JMU,J,K)=(X(JMU)*EIGN(JMU,J)*CSCR(K))/SCALAR
  216 V(JMU,J,K)=(X(JMU)*ESCR(J)*ADEIGN(JMU,K))/SCALAR
      U(JMU,J,J)=U(JMU,J,J)+1.D0
  215 V(JMU,J,J)=V(JMU,J,J)+1.D0
      CALL PRT(U,V,IG,7)
      GO TO 6
      END

REFERENCES

1. Case, K. M., and Zweifel, P. F., Linear Transport Theory, Addison-Wesley, Reading, Mass. (1967).
2. Mendelson, M. R., Doctoral Thesis, University of Michigan, Ann Arbor (1964).
3. Kuscer, I., McCormick, N. J., and Summerfield, G. C., Ann. Phys., 30 (1964) 411.
4. Mitsis, G. J., Argonne National Laboratory Report, ANL-6787 (1963).
5. Kuscer, I., Proc. Symposium on Neutron Thermalization and Reactor Spectra, Ann Arbor (1967) (to be published).
6. Ferziger, J., and Zweifel, P. F., The Theory of Neutron Slowing Down in Nuclear Reactors, MIT Press, Cambridge, Mass. (1966).
7. Case, K. M., Ann. Phys., 9 (1960) 1.
8. Bednarz, R. J., and Mika, J. R., J. Math. Phys., 4 (1963) 1285.
9. Leonard, A., and Ferziger, J. H., Nucl. Sci. Eng., 26 (1966) 170.
10. Leonard, A., and Ferziger, J. H., Nucl. Sci. Eng., 26 (1966) 181.
11. Leonard, A., and Ferziger, J. H., Ann. Phys., 22 (1965) 192.
12. Mika, J. R., Nucl. Sci. Eng., 22 (1965) 235.
13. Stewart, J. C., Kuscer, I., and McCormick, N. J., Ann. Phys., 40 (1966) 321.
14. McCormick, N. J., and Kuscer, I., Trans. Amer. Nucl. Soc., 9 (1966) 444.
15. Zelazny, R., and Kuszell, A., Proc. Seminar on the Physics of Fast and Intermediate Reactors, Vol. II, IAEA, Vienna (1962).
16. Yoshimura, T., (submitted to Nucl. Sci. Eng.).
17. Siewert, C. E., and Shieh, P. S., J. Nucl. Energy, 21 (1967) 383.
18. Zelazny, R., and Kuszell, A., Ann. Phys., 16 (1961) 81.
19. Metcalf, D., Doctoral Thesis, The University of Michigan (1968).

20. Siewert, C. E., and Zweifel, P. F., J. Math. Phys., 7 (1966) 2092.
21. Fuchs, K., and Collatz, S., Kernenergie, 7 (1964) 386.
22. Zelazny, R., Proc. 3rd UN Int. Conf. PUAE (1964).
23. Case, K. M., Proc. Symp. on Transport Theory, Am. Math. Soc., April (1967) (to be published).
24. Pahor, S., Nucl. Sci. Eng., 26 (1966) 192.
25. Pahor, S., Nucl. Sci. Eng., 29 (1967) 248.
26. Ambarzumian, V. A., Astr. J. Sov. Union, 19 (1942) 30.
27. Chandrasekhar, S., Radiative Transfer, Dover, New York (1960).
28. Gel'fand, I. M., and Shilov, G. E., Generalized Functions, Academic Press, New York (1964).
29. Muskhelishvili, N. I., Singular Integral Equations, Noordhoff, Groningen, Holland (1946).
30. Wachspress, E. L., Iterative Solution of Elliptic Systems, Prentice-Hall, Englewood Cliffs (1966).
31. Mullikin, T. W., Astrophys. J., 136 (1962) 627.
32. Mullikin, T. W., Astrophys. J., 139 (1964) 379.
33. Mullikin, T. W., Astrophys. J., 139 (1964) 1267.
34. Pahor, S., and Kuscer, I., Astrophys. J., 143 (1966) 888.
35. Mikhlin, S. G., Integral Equations, Pergamon, New York (1964).
36. Shapiro, C., and Corngold, N., Trans. Am. Nucl. Soc., 6 (1963) 26.
37. Reactor Physics Constants, 2nd ed., Argonne National Laboratory Report, ANL-5800 (1963).
38. Williams, M. M. R., The Slowing Down and Thermalization of Neutrons, Interscience, New York (1966).
39. Perlis, S., Theory of Matrices, Addison-Wesley, New York (1962).

40. Minnaert, M., Astrophys. J., 95, 405.
41. Courant, R., and Hilbert, D., Methods of Mathematical Physics, Interscience, New York (1953) 116.
42. Curtis, R. L., and Grimesey, R. A., "INCITE: A Fortran IV Program to Generate Neutron Spectra and Multigroup Constants Using Arbitrary Scattering Kernels," IN-1062, Idaho Nuclear Corporation (to be published).
43. McMurry, H. L., Russell, G. J., and Brugger, R. M., Nucl. Sci. Eng., 25 (1966) 248.
44. Hansen, G. E., "Properties of Elementary Fast-Neutron Critical Assemblies," Proc. Geneva Conf. (1958).
45. Abramowitz, M., and Stegun, I. A., Handbook of Mathematical Functions, National Bureau of Standards, AMS 55 (1964) 916.
46. Margenau, H., and Murphy, G. M., Mathematics of Physics and Chemistry, Van Nostrand, New York (1956).
47. FORTRAN IV Language, IBM Systems Library, C28-6515 (1968).
48. Shure, F. C., and Natelson, M., Ann. Phys., 26 (1964) 274.
49. Case, K. M., (private communication) February (1968).
50. Savant, C. J., Fundamentals of the Laplace Transformation, McGraw-Hill, New York (1962).
