Quasi-Monte Carlo Approaches to Option Pricing
John R. Birge
Department of Industrial and Operations Engineering
The University of Michigan, Ann Arbor, MI 48109-2117
Technical Report 94-19
Revised June 1995

Abstract: Complications such as varying interest rates and complex contingencies can prohibit analytical computation of option and other derivative prices. A typical approach in these cases is to rely on Monte Carlo simulation and to use statistical properties of assumed random sequences. This approach has two drawbacks. First, the validity of a pseudo-random sequence for statistical randomness is somewhat questionable. Second, even assuming randomness, the error from central limit arguments decreases slowly, inversely proportional to the square root of the number of observations. Quasi-Monte Carlo sequences, on the other hand, do not rely on a randomness assumption and have order-of-magnitude better asymptotic error rates. We show how quasi-Monte Carlo sequences can be used in option pricing, give some analytical justification for their preference over crude or pseudo-Monte Carlo methods, and present some empirical evidence based on simple call option models.

Keywords: simulation, option pricing, quasi-random

1. Introduction

The landmark paper of Black and Scholes [1973] spawned many other approaches to option pricing and to the evaluation of general contingent claims (see, e.g., Jarrow and Rudd [1983]). Many of these approaches try to find simple analytical calculations as in the Black-Scholes formula. When the contingencies become complex, interest rates vary, or other simplifying assumptions are invalid, analytical formulas may either contain excessive errors or may not be available. In these cases, Monte Carlo simulation can be used to represent many random phenomena that can influence the option price (see, e.g., Boyle [1977]).

* Based on work supported by the National Science Foundation under Grants ECS 8815101 and 92-15921.
The author thanks Sanjeev Acharya and Jim Wartinbee for assistance in the computational implementations.

The Monte Carlo method (see Hammersley and Handscomb [1964]) has its basis in statistical theory, in particular the central limit theorem, which allows the calculation of confidence intervals about the mean value of a random variable based on a sequence of independent random observations. Practically, the validity of this approach depends on the ability to generate sequences of random numbers. Thus, random number generation is a critical element in using the Monte Carlo method (see Deak [1990]). The general practice is to use pseudo-random sequences that share properties with random sequences, such as the frequency of increasing or decreasing runs of various lengths. These pseudo-random sequences are constructed from a variety of techniques involving simple operations on previous numbers in the sequence. Their use and statistical justification in the Monte Carlo method have, however, been questioned (see, e.g., Zaremba [1968]). Even assuming the statistical basis, the error in Monte Carlo estimates decreases at an asymptotic rate that is inversely proportional to the square root of the number of observations, O(1/√N). An alternative is to rely on numerical properties of the sequence and on how closely a sequence fills in each region in the support of the random variables. This measure, called the discrepancy, can be reduced by constructing sequences that more deliberately fill in the random support. These sequences are often called quasi-random numbers or low-discrepancy sequences. Their use in approximating integrals or expectations is called the quasi-Monte Carlo method (see Niederreiter [1978] for a survey and comparison with pseudo-random numbers). The great advantage of quasi-random sequences is that the asymptotic error generally reduces to nearly O(1/N) (within logarithmic factors). The difficulty, however, is that the constants that can be calculated for this order represent a worst case and may be quite large. In practice, however, the error may be much less. We give some indication of this below.
Calculating the error for any individual example may be difficult, but another recent discovery gives even greater potential for the use of quasi-Monte Carlo sequences. Wozniakowski [1991] has shown that the expected error of using particular quasi-Monte Carlo sequences is approximately O(1/N) for integrating over a general class of continuous functions in many dimensions. This observation effectively suggests that, on average, crude Monte Carlo requires the square of the number of observations required by quasi-Monte Carlo to achieve the same error. This conquering of the curse of dimensionality (at least in

expectation) has led to renewed interest in improved numerical integration and simulation (see Yam [1991], Guterl [1994]). We suggest some direct uses of this observation below, although it does not directly apply to simple options, which correspond only to a small subset of all functions considered in Wozniakowski's result. The general asymptotic error bounds in quasi-Monte Carlo methods do, however, indicate that they can improve on option pricing with pseudo-Monte Carlo. This paper presents both analytical and empirical evidence for this claim. In Section 2, we present our basic approaches for option pricing, which we interpret as integration over a region with dimension equal to the number of distinct periods of price changes. Section 3 gives some analytical comparisons of quasi- and pseudo-random sequence approaches. Section 4 presents our empirical observations. Section 5 gives conclusions.

2. Monte Carlo Option Pricing

The Monte Carlo method generally refers to procedures for approximating the integral, I = ∫_R f(x) dx, of a function, f, over a region, R, where f: ℝ^n → ℝ and R ⊂ ℝ^n. The basic idea is to choose a sequence of points, {x_k}, k = 1,...,N, uniformly distributed in R and to estimate I by

Î(N) = (1/N) Σ_{k=1}^N f(x_k).   (1)

If the sequence {x_k} indeed represents independent random observations, then the {f(x_k)} values can also be considered observations of independent random variables with mean I. To distinguish the random variables from their observations, we use bold-face notation, e.g., x_k. Assuming I is finite and f has a finite second moment, N Î(N) can be interpreted as the sum of independent random variables. The central limit theorem can be invoked to determine the asymptotic normal distribution of Î(N) and to obtain a confidence interval on I using an observation of Î(N) (for derivations, see, for example, Feller [1971]).
The fundamental result is that Î(N) is asymptotically normally distributed with mean I and variance (1/N) V_f, where V_f = ∫_R f(x)² dx − I², the variance of f(x). Thus, √(N/V_f) (Î(N) − I) is asymptotically standard normal with zero mean and unit variance. If Φ denotes the distribution function of the standard normal, then a level-α confidence interval can be

obtained from an observation î(N) of Î(N) by noting that

α = P[−Φ^{−1}((1+α)/2) ≤ √(N/V_f) (Î(N) − I) ≤ Φ^{−1}((1+α)/2)],

so that, with probability α,

I ∈ [Î(N) − √(V_f/N) Φ^{−1}((1+α)/2), Î(N) + √(V_f/N) Φ^{−1}((1+α)/2)].   (2)

The two primary difficulties in using this method are that deterministic sequences generally replace the random variables in Î(N) and that the interval in (2) only shrinks proportionally to 1/√N. The deterministic sequences are pseudo-random sequences that pass tests for randomness, but then the applicability of (2) is rather tenuous. A further implementation difficulty is that single pseudo-random streams are frequently used to obtain each component of the n-dimensional variable x. The result may be that a sequence that appears quite uniform in one dimension may be distributed with substantial collinearity (or lattice structure) when applied sequentially for higher dimensions. In these cases, generation of more points simply fills in each line (or hyperplane in higher dimensions). The error can never improve beyond the error in integrating along the lines. This error may be especially high if the integrand concerns the sum of the components, as in option pricing. To overcome the difficulty of collinearity, pseudo-random vector generation schemes are possible (see, for example, Niederreiter [1991]). Another option is to use quasi-random numbers that explicitly consider the full dimension of the problem. These sequences avoid the randomness assumption and attempt to achieve low errors based purely on numerical (not statistical) analysis. In the next section, we give some of the analytical error properties of these sequences. Assuming streams of pseudo- or quasi-random vectors, Î(N) can be constructed by simply evaluating f at each point. We will assume that R is the unit hypercube, [0,1]^n, in ℝ^n, and that f is constructed so that the components of x are independently distributed. In this way, we need only generate uniform points in the unit hypercube and then compute f.
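As an illustration, the estimator (1) and the confidence interval (2) can be sketched in a few lines. The sketch below is ours, in Python rather than the FORTRAN used for the experiments in this report; the integrand and the 95% level are chosen purely for illustration.

```python
import math
import random

def mc_estimate(f, n_dim, n_samples):
    """Crude Monte Carlo: average f over pseudo-random points in [0,1]^n_dim,
    returning the estimate (1) and an approximate 95% confidence interval (2)."""
    total = total_sq = 0.0
    for _ in range(n_samples):
        y = f([random.random() for _ in range(n_dim)])
        total += y
        total_sq += y * y
    mean = total / n_samples
    # The sample variance estimates V_f; 1.96 approximates Phi^{-1}(0.975).
    var = max(total_sq / n_samples - mean * mean, 0.0)
    half_width = 1.96 * math.sqrt(var / n_samples)
    return mean, (mean - half_width, mean + half_width)

# Example: integrate f(x) = x_1 + x_2 over the unit square (true value 1).
random.seed(0)
est, (lo, hi) = mc_estimate(lambda x: x[0] + x[1], 2, 100_000)
```

Note that the interval shrinks only like 1/√(n_samples), which is the slow error decrease discussed above.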
To illustrate the approach, we assume a simple model of a call option (without early exercise) and find the appropriate f. We suppose that n is the number of periods into which the time to maturity is divided. In practice, the size of the periods might vary, and their endpoints might correspond to potential interest rate changes, dividend dates, or possible exercise points in an American option. We could include correlations among periods, but, for our illustrations, we assume independence of price changes across periods.

Now, assuming that stock returns have a lognormal distribution, we can simply invert the uniform components of x to obtain n normally distributed random values, sum these values, and exponentiate to obtain the price appreciation after n periods. Using the standard risk-neutrality argument (see, e.g., Cox and Ross [1976]), the payoff can then be discounted by a riskless bond rate to obtain the call option price. To state this more formally, assume that S_T(x) is the random stock price at the maturity date, T, C_t is the call price at time t, K is the strike price, and r is a riskless return (assumed constant for this discussion). Making the additional assumptions of frictionless markets and no early exercise, we can write C_t as

C_t = e^{−r(T−t)} ∫_{[0,1]^n} (S_T(x) − K)^+ dx.   (3)

Hence, in (1), we have f(x) = e^{−r(T−t)} (S_T(x) − K)^+. We just need to define S_T. In general, we might consider many independent factors that all combine to affect the price at time T. In the simplest case, each x_i corresponds to the quantile of the proportional price change in period i, where the price changes, S_{t_i}/S_{t_{i−1}}, i = 1,...,n, among periods are independent. In this way, we let δ_i(x_i) = S_{t_i}/S_{t_{i−1}} and then

S_T(x_1,...,x_n) = S_t Π_{i=1}^n δ_i(x_i).   (4)

We also assume, for ease of exposition, that the proportional price changes are lognormally distributed with mean return (r − σ²/2)(t_i − t_{i−1}) and standard deviation σ √(t_i − t_{i−1}), so that

(log δ_i(x_i) − (r − σ²/2)(t_i − t_{i−1})) / (σ √(t_i − t_{i−1})) = Φ^{−1}(x_i).   (5)

Combining (3), (4) and (5), we have

C_t = e^{−r(T−t)} ∫_{[0,1]^n} (S_t Π_{i=1}^n e^{σ √(t_i − t_{i−1}) Φ^{−1}(x_i) + (r − σ²/2)(t_i − t_{i−1})} − K)^+ dx.   (6)

Now, Monte Carlo or quasi-Monte Carlo approaches to evaluating (6) only require finding sequences of points, x_k, k = 1,...,N, as in (1), and using the error results that accompany these sequences. We could use more complicated integrands than those in (6), but we choose the simple form above because analytical results are so readily available.
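A short sketch of the mapping in (4)-(5) from a uniform point x to a terminal price S_T follows. The sketch is ours (in Python), assumes equal-length periods, and the function name `terminal_price` is illustrative.

```python
import math
from statistics import NormalDist

def terminal_price(x, s_t, r, sigma, tau):
    """Map x in [0,1]^n to S_T via (4)-(5), with n equal periods of length
    tau/n and independent lognormal price changes under the risk-neutral drift."""
    n = len(x)
    dt = tau / n
    inv = NormalDist().inv_cdf  # Phi^{-1}
    log_growth = 0.0
    for x_i in x:
        # Each period contributes one lognormal factor delta_i(x_i).
        log_growth += (r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * inv(x_i)
    return s_t * math.exp(log_growth)
```

At the midpoint x = (0.5,...,0.5), every Φ^{−1}(x_i) vanishes, so the price grows at exactly the drift (r − σ²/2)(T − t).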
Since we assume the price changes in each period are independent and lognormally distributed,

their product, S_T, is also lognormally distributed with mean return (r − σ²/2)(T − t) and standard deviation σ √(T − t). This observation leads to the Black-Scholes formula for C_t:

C_t = S_t Φ((log(S_t/K) + (r + σ²/2)(T − t)) / (σ √(T − t)))
      − K e^{−r(T−t)} Φ((log(S_t/K) + (r − σ²/2)(T − t)) / (σ √(T − t))).   (7)

Of course, we do not need to use Monte Carlo approaches to evaluate (6) if (7) or other analytical representations are available. However, when more complicated types of integrals are included, such straightforward analytical solutions may not be available. We use the simple form here to allow us to evaluate the accuracy of pseudo- and quasi-random schemes against a known analytical value. In the next section, we present some analytical results about the accuracy of these methods.

3. Analytical Error Analysis of Quasi-Monte Carlo Schemes

The main basis for error analysis of pseudo-random schemes is the use of the confidence interval in (2). In the option pricing case presented here, the error in this estimate is proportional to the standard deviation of the integrand divided by √N. If this quantity is unknown, a sample variance can be used to estimate it, with a t-distribution in place of the normal. The result allows asymptotic error bounds with confidence level α (assuming true randomness). The 1/√N proportionality in the error decrease means, for example, that achieving a confidence interval that is 1/10 of a standard deviation requires 100 observations, but that achieving an error of 1/100 of a standard deviation requires 10,000 observations (assuming the asymptotics apply). In contrast to this slow error decrease, quasi-random numbers may provide better asymptotic error bounds that are proportional to 1/N (up to logarithmic factors) instead of 1/√N. We will consider two such sequences here. The basic motivation for them is to achieve low discrepancy, the difference between the fraction of sequence points within any region of the unit hypercube and the volume of that region.
The discrepancy should decrease to zero as N increases. One example of a quasi-random sequence is constructed from irrationals, such as the square roots of the first n primes. We will denote this sequence as {α_k}, where component i of the kth sample is

α_k(i) = k √P(i) − ⌊k √P(i)⌋,   (8)

where P(i) is the ith prime number, i = 1,...,n, and ⌊k √P(i)⌋ denotes the greatest integer less than or equal to k √P(i). Hammersley and Handscomb [1964] show the following (which we present without proof).

Theorem 1. If the Fourier series of f is absolutely convergent, then there exists a constant c₁ such that:

|∫_{[0,1]^n} f(x) dx − (1/N) Σ_{k=1}^N f(α_k)| ≤ c₁ (1 + log N)/N.   (9)

Thus, there exists a constant c₁ for any f with these properties that achieves an error bound, with probability one, proportional to (1 + log N)/N. In contrast to the straight Monte Carlo approach, the growth in the required number of iterations to achieve a given accuracy is much less than quadratic. For example, reducing the error bound in (9) to 1/10 of c₁ requires 49 points, while reducing the error bound to 1/100 of c₁ requires 647 points. While these analytical results for the quasi-random sequence appear quite good, they depend on the constant c₁, which is not easily estimated. They do, however, indicate that asymptotic results may be much better with quasi-random than with pseudo-random sequences.

Another sequence we shall use, called the Halton sequence (Halton [1960]), is based on representing integers in terms of primes. We suppose that k has a representation in base P(i) as k = a₁ P(i)^{m−1} + a₂ P(i)^{m−2} + ... + a_m, and let

φ_k(i) = Σ_{j=1}^m a_{m−j+1} P(i)^{−j}.

The sequence is then β_k = (φ_k(1),...,φ_k(n)). Halton shows that this sequence also has low discrepancy. Using a standard result on functions of bounded variation, we can state this as the following.

Theorem 2. If f is of bounded variation over [0,1]^n, then there exists a constant c₂ such that:

|∫_{[0,1]^n} f(x) dx − (1/N) Σ_{k=1}^N f(β_k)| ≤ c₂ (log N)^n / N.   (10)

Comparisons between Theorems 1 and 2 are again difficult because of the difficulty in estimating the constants. In general, c₂ is much less than c₁. Another advantage of the Halton sequence may be in computation, since the {α_k} construction requires many digits of √P(i) when k is high.
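Both sequences are straightforward to generate. The following sketch (ours, in Python) produces the {α_k} points of (8) and the Halton points via the radical-inverse construction; note the comment on precision, which is the digit issue just mentioned.

```python
import math

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # first 10 primes; extend as needed

def alpha_point(k, n):
    """k-th point of the {alpha_k} sequence (8): fractional part of k*sqrt(P(i)).
    Double precision loses accuracy here for large k (many digits of sqrt(P(i)))."""
    return [math.fmod(k * math.sqrt(p), 1.0) for p in PRIMES[:n]]

def radical_inverse(k, base):
    """Reflect the base-`base` digits of k about the radix point (phi_k(i))."""
    result, scale = 0.0, 1.0 / base
    while k > 0:
        k, digit = divmod(k, base)
        result += digit * scale
        scale /= base
    return result

def halton_point(k, n):
    """k-th Halton point beta_k = (phi_k(1), ..., phi_k(n))."""
    return [radical_inverse(k, p) for p in PRIMES[:n]]
```

For example, k = 3 in base 2 is 11, so its radical inverse is binary 0.11, i.e. 0.75.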
With these caveats, we can still find the worst-case reductions of

error from c₂ in (10). For three dimensions (n = 3), with the Halton sequence, reducing the error bound to 1/10 of c₂ requires 6910 points, while reducing the error bound to 1/100 of c₂ requires 176,000 points. A somewhat better comparison is that the additional digit of accuracy requires 25 times more samples, compared to 13 times as many samples using the {α_k} sequence and 100 times as many with standard Monte Carlo. We use the Halton sequence given above because it does not require the total number of observations N a priori. Knowing N (or fixing N at certain values) can reduce the error order to (log N)^{n−1}/N using, for example, the Hammersley sequence (see Hammersley and Handscomb [1964]). We use methods for variable N, however, to observe the errors directly as N increases. Another sequence, proposed by Faure [1982], has demonstrated smaller errors for the same number of observations (Fox [1986]) with the same asymptotic error rate as in (10), but with generally additional computation time. In our computational comparison, we use this sequence as well. We also include a sequence due to Sobol [1967] (see also Antonov and Saleev [1979]) with the same asymptotic rate and different practical computational characteristics.

The Hammersley points have found further justification in a construction by Wozniakowski [1991]. He shows that shifting the Hammersley points leads to a sequence of points that achieves the best average-case error for numerical integration over continuous real functions on the unit hypercube equipped with the Wiener sheet measure (the mean over all functions at any point is zero, and the covariance between values at two points is the product of the minimum coordinate values of the points). Wozniakowski shows that achieving an error of ε requires, on average for functions in this class, O(ε^{−1} (log(1/ε))^{(n−1)/2}) function evaluations. This contrasts with Monte Carlo, which requires O(ε^{−2}) observations, and grid points, which require O(ε^{−n}) evaluations.
The average-case asymptotic advantage over Monte Carlo and any grid point scheme has generated much speculation about the potential improvements in integration over Monte Carlo methods. The average-case result would not extend directly to the simple option example we consider here, but, for path-determined derivatives (e.g., options on weighted average values of securities over a fixed time horizon or CMOs (see Paskov and Traub [1995] for an implementation)), the class of functions may fit quite closely (each component in determining the price over time is a Wiener process). This theoretical basis

remains of interest, but the results are only for average cases, and, again, asymptotic constants may be large. Our goal in the next section is to show that the error of quasi-Monte Carlo may indeed be much better than these analytical results for functions that arise in option pricing.

4. Empirical Observations

The results in Section 3 provide some motivation for the use of quasi-random points over pseudo-random points in evaluating integrals such as the option price in (3). A difficulty in using the analytical results with this model is, however, the estimation of the constants in the error analyses. Indeed, the function with the inverse normal integrand is not even of bounded variation. Nevertheless, the low discrepancy of the points in {α_k} and {β_k} leads to low errors in calculating these integrals. All computer runs in this section were performed using the FORTRAN compiler on an IBM RS6000 workstation at the University of Michigan. For a pseudo-random number generator, we used the portable FORTRAN generator by Schrage [1979] because of its general acceptance (see, e.g., Law and Kelton [1991]), ease of use, and efficiency. For the option pricing model, we tested ten-, thirty-, and one-hundred-eighty-period models. We used identical distributions in each period to make the analytical calculations easier. The initial price (S_t) and strike price (K) were first both set to 40, and then K was varied to 35 and 45. The annual risk-free interest rate was r = 0.1 with annual volatilities of σ = 0.1, 0.3 and 0.6. The three horizons, T − t = 10, T − t = 30 and T − t = 180 days, were chosen to give some indication of changes from relatively low to high dimensions. The analytical values from (7) range from C_t(T − t = 10, σ = 0.1) = 0.322 to C_t(T − t = 180, σ = 0.6) = 7.52. Figures 1-13 give indications of the relative accuracy of each sequence scheme. We then discuss the computational burden of each method and its relevance in practice.
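To make the setup concrete, a condensed sketch of one such comparison appears below, in Python rather than the FORTRAN used for the report's runs. It prices the ten-period, at-the-money case (S_t = K = 40, r = 0.1, σ = 0.3, T − t = 10 days) with a Halton stream and a pseudo-random stream and compares both against the Black-Scholes value (7). The function names, sample size, and seed are illustrative, not the report's.

```python
import math
import random
from statistics import NormalDist

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def black_scholes_call(s, k, r, sigma, tau):
    """Closed-form call price (7), with tau = T - t in years."""
    phi = NormalDist().cdf
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return s * phi(d1) - k * math.exp(-r * tau) * phi(d2)

def radical_inverse(k, base):
    """Reflect the base-`base` digits of k about the radix point (Halton)."""
    result, scale = 0.0, 1.0 / base
    while k > 0:
        k, digit = divmod(k, base)
        result += digit * scale
        scale /= base
    return result

def call_estimate(points, s, k, r, sigma, tau, n_periods):
    """Average the discounted payoff in (6) over a stream of points in [0,1]^n,
    assuming n_periods equal-length periods."""
    inv = NormalDist().inv_cdf
    dt = tau / n_periods
    drift = (r - 0.5 * sigma ** 2) * tau
    payoffs = [max(s * math.exp(drift + sigma * math.sqrt(dt)
                                * sum(inv(xi) for xi in x)) - k, 0.0)
               for x in points]
    return math.exp(-r * tau) * sum(payoffs) / len(payoffs)

s0, strike, r, sigma, tau, n = 40.0, 40.0, 0.1, 0.3, 10.0 / 365.0, 10
N = 20000
halton = [[radical_inverse(j, p) for p in PRIMES[:n]] for j in range(1, N + 1)]
random.seed(1)
pseudo = [[random.random() for _ in range(n)] for _ in range(N)]

exact = black_scholes_call(s0, strike, r, sigma, tau)
err_halton = abs(call_estimate(halton, s0, strike, r, sigma, tau, n) - exact)
err_pseudo = abs(call_estimate(pseudo, s0, strike, r, sigma, tau, n) - exact)
```

The closed form reproduces the analytical values quoted above (0.322 for T − t = 10, σ = 0.1 and 7.52 for T − t = 180, σ = 0.6), so either stream's error is directly observable.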
Figures 1 to 3 give the relative error in percentage for the ten-, thirty-, and one-hundred-eighty-period models, S_t = K = 40 and σ = 0.3, for each of the pseudo-random sequence (pseudo) and the sequence using prime roots (alpha, {α_k}), plus the Halton, Faure and Sobol sequences, as the number of samples, N, increases. In Figure 1 (T − t = 10), Sobol consistently has the lowest error until Faure reaches the same level at around 28,000 iterations. The alpha sequence error is also below 0.5% after 5,000 iterations. Halton shows steady

improvement, but pseudo displays significant oscillation. In Figure 2, alpha performs best up to 28,500 iterations, where Faure then becomes consistently the most accurate. Sobol's error is quite low but shows some oscillation. Halton is again consistent but not very accurate, while pseudo suffers once more from oscillation, particularly at low iteration numbers. In Figure 3, Halton and Faure are not present because their errors never reached 2% or less. The alpha sequence generally has the lowest errors. The pseudo sequence has significant oscillation, while Sobol has less pronounced oscillation. Some general observations are that the alpha and Sobol sequences achieve relatively low errors quite quickly. The pseudo-random sequence shows significant oscillation in errors. The Halton and Faure sequences are the most consistent in error improvement, but they are not competitive in the higher dimensions in these trials. Figures 4 and 5 give the option values obtained for T − t = 180, σ = 0.3 under each sequence to illustrate the performance of Halton and Faure. Note that the values from these sequences are converging at relatively slow rates from values far from the actual value. Figure 5 shows that pseudo and Sobol are biased somewhat above the true value, while alpha obtains values below the true value. (This tendency is not, however, uniformly true, but it does indicate that other variance reduction techniques, such as antithetic variates, may be useful.) The effect of varying volatilities is explored in Figures 6 (σ = 0.1) and 7 (σ = 0.6) for T − t = 180. In general, the results here are quite similar to those for σ = 0.3, except that errors are higher for greater volatilities. Again, Halton and Faure never achieve low errors in 100,000 iterations, pseudo has significant oscillations, and alpha produces slightly more consistent values than Sobol. Figures 8 and 9 demonstrate the effect of varying the strike price to out-of-the-money and in-the-money values with K = 35 and K = 45.
The results are again quite similar to the previous examples for T − t = 180. The Sobol sequence, however, appears slightly more consistent than alpha for K = 45 and more than 70,000 iterations. The advantage of the quasi-Monte Carlo procedures depends on terminating a simulation with an accurate result earlier than the pseudo-Monte Carlo method. The pseudo-Monte Carlo method's benefit depends on the extent to which one can apply the confidence

interval as in (2) to set a priori run lengths for given desired errors. As we saw above, it is not clear how (2) is interpreted with this deterministic sequence. In our case, (2) is often violated for the sample chosen. The meaning of the α-level confidence is not clear with the fixed sequence. The use of sample variances may also lead to significant errors when variances are large but not well detected in the standard Monte Carlo scheme (see, e.g., Fox [1986]). We consider various measures of termination efficiency in Figures 10-13. These results give the ratio of the stopping times from the quasi-Monte Carlo procedures to the time for pseudo. Each trial's result (given as T − t, σ, K (if not 40)) is displayed. Figure 10 shows this result for one alternative, suggested in Fox [1986]: to compare values at 2N with values at N observations and to stop if these values are within some relative tolerance (0.5% in Figure 10) of each other. Note that alpha terminates within half the time of pseudo in all but the 180, 0.6 case. Sobol has somewhat less consistency but terminates earlier than pseudo in all but the K = 45 case. Halton terminates earlier than pseudo for T − t = 10 and T − t = 30 but has difficulties for T − t = 180, as mentioned above. Faure also has difficulties for T − t = 30 in addition to the 180-day trials in this test. Another approach, similar to statistical tests based on mean and range, is to consider the range of observations over some interval compared to the overall mean value. We consider this measure for an interval of 5000 observations and an error of 0.5% in Figure 11. In this test, alpha and Sobol consistently require fewer observations than pseudo, with alpha approaching termination as much as ten times faster than pseudo. Halton and Faure again terminate earlier than pseudo for the ten- and thirty-day models but not for the 180-day models.
To consider other stopping conditions, we consider the last iteration with errors above some tolerance in Figure 12 for an error of 1% and Figure 13 for an error of 0.5%. For the 1% error, alpha achieves this result at least twice as fast as pseudo in every case and is ten times faster in several cases. Sobol terminates in this approach faster than pseudo in each case and is almost 50 times faster in the ten-day models. Halton and Faure again have difficulties for the higher dimensions. For the 0.5% error in Figure 13, alpha again achieves earlier termination than pseudo but is not as consistent as with the larger error. Sobol is, however, more consistent in improvements relative to pseudo in this case. The results for Halton and Faure are as in the other tests.

A possible advantage of the pseudo-random scheme is in efficiency of calculation time. In general, we would anticipate the use of these techniques only in problems which are too complicated for analytical solutions and in which each function evaluation is much more costly than the random number generation time. For this simple example, function evaluation was quite fast, so the times vary slightly. The results for ratios of sequence generation times to the pseudo generation times appear in Table 1 below.

Method   Ratio to Pseudo
Alpha    1.2
Halton   2.5
Faure    5-10
Sobol    9-10

Table 1. Ratio of computation times over all trials to pseudo-random generation times.

Our results are consistent with Fox [1986] and Bratley and Fox [1988]. In general, they report Halton times approximately twice their pseudo times and Faure and Sobol times approximately 5-6 times the values for pseudo. Bratley and Fox [1988], however, report much improved (by an order of magnitude) timings for Sobol with a nonstandard XOR operator in the compiler. Since many derivative evaluations would generally require much greater computation times than number generation times, the results in Table 1 have less relevance. Overall, the results here indicate that the alpha sequence may have the most consistency for 1% accuracy. Sobol may have greater consistency for smaller error values, but this accuracy should be weighed against computation time. Faure in these trials is quite accurate in low dimensions but not effective in higher dimensions. Halton does not appear as effective as the other methods. These results are consistent with Paskov [1994] and Paskov and Traub [1995], who applied the Sobol and Halton sequences to CMO evaluations. In their studies, Sobol achieves greater accuracy than pseudo-random schemes as well as the Halton sequence.

5. Conclusions

We have presented quasi-Monte Carlo approaches to option pricing.
We demonstrated some of their analytical advantages over the use of pseudo-random streams in standard

Monte Carlo. We also gave some empirical evidence that quasi-random streams can produce more accurate results in fewer iterations than standard Monte Carlo. Quasi-Monte Carlo also offers the advantage of less fluctuation than standard Monte Carlo and makes on-line error approximations possible. The analyses in this paper concentrated on crude pseudo-Monte Carlo and quasi-Monte Carlo schemes. In practice, variance reduction techniques are used to lower the error in either crude scheme. Additional variance reduction techniques should have roughly equivalent effects on pseudo- or quasi-Monte Carlo approaches (see, for example, Fox [1986]). In the quasi-Monte Carlo case, however, the reduction in error is caused by reducing discrepancy rather than variance. These results show that quasi-Monte Carlo may have an advantage in a simple model where analytical results are available. The advantage may be even greater in more complex models (such as the CMO evaluations in Paskov and Traub [1995]) which require substantial computation for each function evaluation. Quasi-Monte Carlo methods may also be applicable to evaluating path-based options due to their excellent average-case behavior, which is an order of magnitude better than that of standard Monte Carlo.

References

Antonov, I.A., and V.M. Saleev. "An economic method of computing LP-tau sequences." USSR Computational Mathematics and Mathematical Physics, 19 (1979), 252-256.

Black, F., and M. Scholes. "The pricing of options and corporate liabilities." Journal of Political Economy, 81 (1973), 637-659.

Boyle, P. "Options: A Monte Carlo approach." Journal of Financial Economics, 4 (1977), 323-338.

Bratley, P., and B.L. Fox. "Algorithm 659: Implementing Sobol's quasirandom sequence generator." ACM Transactions on Mathematical Software, 14 (1988), 88-100.

Cox, J., and S. Ross. "The valuation of options for alternative stochastic processes." Journal of Financial Economics, 3 (1976), 145-166.

Deak, I. Random Number Generators and Simulation, Budapest: Akademiai Kiado (1990).

Faure, H. "Discrépance de suites associées à un système de numération (en dimension s)." Acta Arithmetica, XLI (1982), 337-351.

Feller, W. An Introduction to Probability Theory and Its Applications, Volume II, New York: Wiley (1971).

Fox, B.L. "Implementation and relative efficiency of quasirandom sequence generators." ACM Transactions on Mathematical Software, 12 (1986), 362-376.

Guterl, F. "Suddenly, number theory makes sense to industry." Business Week, June 20 (1994), 172-174.

Halton, J.H. "On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals." Numerische Mathematik, 2 (1960), 84-90.

Hammersley, J.M., and D.C. Handscomb. Monte Carlo Methods, London: Methuen (1964).

Jarrow, R.A., and A. Rudd. Option Pricing, Homewood, Illinois: Irwin (1983).

Law, A.M., and W.D. Kelton. Simulation Modeling and Analysis, New York: McGraw-Hill (1991).

Niederreiter, H. "Quasi-Monte Carlo methods and pseudo-random numbers." Bulletin of the American Mathematical Society, 84 (1978), 957-1041.

Niederreiter, H. "Recent trends in random number and random vector generation." Annals of Operations Research, 30 (1991), 323-346.

Paskov, S.H. "New methodologies for valuing derivatives." Technical Report, Department of Computer Science, Columbia University, New York (October 1994).

Paskov, S.H., and J. Traub. "Faster valuation of financial derivatives." Technical Report, Department of Computer Science, Columbia University, New York (January 1995).

Schrage, L. "A more portable random number generator." ACM Transactions on Mathematical Software, 5 (1979), 132-138.

Sobol, I.M. "On the distribution of points in a cube and the approximate evaluation of integrals." USSR Computational Mathematics and Mathematical Physics, 7 (1967), 236-242.

Wozniakowski, H. "Average case complexity of multivariate integration." Bulletin of the American Mathematical Society (new series), 24 (1991), 185-194.

Yam, P. "Math exorcise." Scientific American, July 1991, 29.

Zaremba, S.K. "The mathematical basis of Monte Carlo and quasi-Monte Carlo methods." SIAM Review, 10 (1968), 303-314.

[Figure 1. Percent error vs. number of iterations, T − t = 10, σ = 0.3.]

[Figure 2. Percent error vs. number of iterations, T − t = 30, σ = 0.3.]

[Figure 3. Percent error vs. number of iterations, T − t = 180, σ = 0.3.]

[Figure 4. Option values vs. number of iterations, T − t = 180, σ = 0.3.]

[Figure 5. Option values vs. number of iterations, T − t = 180, σ = 0.3.]

[Figure 6. Percent error vs. number of iterations, T − t = 180, σ = 0.1.]

[Figure 7. Percent error vs. number of iterations, T − t = 180, σ = 0.6.]

[Figure 8. Percent error vs. number of iterations, T − t = 180, K = 35.]

[Figure 9. Percent error vs. number of iterations, T − t = 180, K = 45.]

[Figure 10. Stopping time ratios: |Î(2N) − Î(N)| < 0.5% Î(2N).]

[Figure 11. Stopping ratios: range/mean < 0.5%.]

[Figure 12. Stopping ratios: last N with error > 1%.]

[Figure 13. Stopping ratios: last N with error > 0.5%.]