Results 1-10 of 23
The rendering equation
 Computer Graphics
, 1986
"... ABSTRACT. We present an integral equation which generallzes a variety of known rendering algorithms. In the course of discussing a monte carlo solution we also present a new form of variance reduction, called Hierarchical sampling and give a number of elaborations shows that it may be an efficient n ..."
Abstract

Cited by 708 (0 self)
We present an integral equation which generalizes a variety of known rendering algorithms. In the course of discussing a Monte Carlo solution we also present a new form of variance reduction, called hierarchical sampling, and give a number of elaborations showing that it may be an efficient new technique for a wide variety of Monte Carlo procedures. The resulting rendering algorithm extends the range of optical phenomena which can be effectively simulated.
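The variance-reduction idea the abstract refers to can be illustrated with its simplest classical relative, stratified sampling (this sketch is not Kajiya's hierarchical sampling itself; the integrand and the interval [0, 1] are illustrative choices, not from the paper):

```python
import random

def plain_mc(f, n, rng):
    # Plain Monte Carlo estimate of the integral of f over [0, 1].
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_mc(f, n, rng):
    # One jittered sample per stratum: split [0, 1] into n equal
    # sub-intervals and sample uniformly inside each.  For smooth f this
    # reduces variance well below plain Monte Carlo at the same cost.
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

f = lambda x: x * x                      # exact integral over [0, 1] is 1/3
print(plain_mc(f, 1000, random.Random(0)))
print(stratified_mc(f, 1000, random.Random(0)))
```

With the same number of integrand evaluations, the stratified estimate is typically orders of magnitude closer to 1/3 than the plain one.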
Quasirandom methods for estimating integrals using relatively small samples
 SIAM Review
, 1994
"... Abstract. Much of the recent work dealing with quasirandom methods has been aimed at establishing the best possible asymptotic rates of convergence to zero of the error resulting when a finitedimensional integral is replaced by a finite sum of integrand values. In contrast with this perspective to ..."
Abstract

Cited by 43 (1 self)
Much of the recent work dealing with quasirandom methods has been aimed at establishing the best possible asymptotic rates of convergence to zero of the error resulting when a finite-dimensional integral is replaced by a finite sum of integrand values. In contrast with this perspective, which concentrates on asymptotic convergence rates, this paper emphasizes quasirandom methods that are effective for all sample sizes. Throughout the paper, the problem of estimating finite-dimensional integrals is used to illustrate the major ideas, although much of what is done applies equally to the problem of solving certain Fredholm integral equations. Some new techniques, based on error-reducing transformations of the integrand, are described that have been shown to be useful both in estimating high-dimensional integrals and in solving integral equations. These techniques illustrate the utility of carrying over to the quasi-Monte Carlo method certain devices that have proven to be very valuable in statistical (pseudorandom) Monte Carlo applications. Key words: quasi-Monte Carlo, asymptotic rate of convergence, numerical integration
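A minimal sketch of the quasirandom idea, using the base-2 van der Corput sequence, one standard low-discrepancy construction (the integrand here is illustrative and not taken from the paper):

```python
import math

def van_der_corput(i, base=2):
    # Radical inverse of the integer i in the given base: the i-th point of
    # a one-dimensional low-discrepancy (quasirandom) sequence in [0, 1).
    x, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

def qmc_estimate(f, n):
    # Quasi-Monte Carlo: average f over the first n quasirandom points.
    return sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n

print(qmc_estimate(math.sin, 1024))      # exact value is 1 - cos(1)
```

The error of such an estimate shrinks roughly like (log n)/n rather than the 1/sqrt(n) of pseudorandom Monte Carlo.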
Hypercube Sampling and the Propagation of Uncertainty in Analyses of Complex Systems
, 2002
"... ..."
Computing the Maximum Bichromatic Discrepancy, with applications to Computer Graphics and Machine Learning
 Journal of Computer and System Sciences
, 1996
"... Computing the maximum bichromatic discrepancy is an interesting theoretical problem with important applications in computational learning theory, computational geometry and computer graphics. In this paper we give algorithms to compute the maximum bichromatic discrepancy for simple geometric ranges, ..."
Abstract

Cited by 39 (8 self)
Computing the maximum bichromatic discrepancy is an interesting theoretical problem with important applications in computational learning theory, computational geometry and computer graphics. In this paper we give algorithms to compute the maximum bichromatic discrepancy for simple geometric ranges, including rectangles and halfspaces. In addition, we give extensions to other discrepancy problems.
1. Introduction
The main theme of this paper is to present efficient algorithms that solve the problem of computing the maximum bichromatic discrepancy for axis-oriented rectangles. This problem arises naturally in different areas of computer science, such as computational learning theory, computational geometry and computer graphics ([Ma], [DG]), and has applications in all these areas. In computational learning theory, the problem of agnostic PAC-learning with simple geometric hypothese...
(Footnote 1: The research work of these authors was supported by NSF Grant CCR-9301254 and the Geometry Center.)
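As a baseline for the problem statement (not the paper's efficient algorithms), a brute-force search suffices, since a maximizing axis-aligned rectangle can always be assumed to have each side supported by some point coordinate:

```python
def max_bichromatic_discrepancy(red, blue):
    # Brute-force baseline, roughly O(n^5): enumerate all axis-aligned
    # rectangles whose sides pass through point coordinates and report the
    # largest imbalance |#red - #blue| inside any of them.  The paper's
    # algorithms achieve far better bounds; this only fixes the definition.
    pts = red + blue
    xs = sorted({x for x, _ in pts})
    ys = sorted({y for _, y in pts})
    best = 0
    for i, x1 in enumerate(xs):
        for x2 in xs[i:]:
            for j, y1 in enumerate(ys):
                for y2 in ys[j:]:
                    r = sum(x1 <= x <= x2 and y1 <= y <= y2 for x, y in red)
                    b = sum(x1 <= x <= x2 and y1 <= y <= y2 for x, y in blue)
                    best = max(best, abs(r - b))
    return best

print(max_bichromatic_discrepancy([(0, 0), (1, 1)], [(2, 2)]))
```

Here the rectangle spanning both red points while excluding the blue one attains the maximum of 2.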
Computing the Discrepancy with Applications to Supersampling Patterns
 ACM TRANSACTIONS ON GRAPHICS
, 1996
"... Patterns used for supersampling in graphics have been analyzed from statistical and signalprocessing viewpoints. We present an analysis based on a type of isotropic discrepancyhow good patterns are at estimating the area in a region of defined type. We present algorithms for computing discrepanc ..."
Abstract

Cited by 22 (3 self)
Patterns used for supersampling in graphics have been analyzed from statistical and signal-processing viewpoints. We present an analysis based on a type of isotropic discrepancy: how good patterns are at estimating the area in a region of defined type. We present algorithms for computing discrepancy relative to regions that are defined by rectangles, half-planes, and higher-dimensional figures. Experimental evidence shows that popular supersampling patterns have discrepancies with better asymptotic behavior than random sampling, which is not inconsistent with theoretical bounds on discrepancy.
New Methodologies for Valuing Derivatives
, 1997
"... Highdimensional integrals are usually solved with Monte Carlo algorithms although theory suggests that low discrepancy algorithms are sometimes superior. We report on numerical testing which compares low discrepancy and Monte Carlo algorithms on the evaluation of financial derivatives. The testing ..."
Abstract

Cited by 20 (1 self)
High-dimensional integrals are usually solved with Monte Carlo algorithms although theory suggests that low discrepancy algorithms are sometimes superior. We report on numerical testing which compares low discrepancy and Monte Carlo algorithms on the evaluation of financial derivatives. The testing is performed on a Collateralized Mortgage Obligation (CMO) which is formulated as the computation of ten integrals of dimension up to 360. We tested two low discrepancy algorithms (Sobol and Halton) and two randomized algorithms (classical Monte Carlo and Monte Carlo combined with antithetic variables). We conclude that for this CMO the Sobol algorithm is always superior to the other algorithms. We believe that it will be advantageous to use the Sobol algorithm for many other types of financial derivatives. Our conclusion regarding the superiority of the Sobol algorithm also holds when a rather small number of sample points are used, an important case in practice. We have built a software sy...
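Of the techniques the abstract names, antithetic variables is the simplest to sketch (the integrand below is illustrative; it is not the CMO computation):

```python
import math
import random

def mc_antithetic(f, n, rng):
    # Monte Carlo with antithetic variables: pair every uniform draw u with
    # its reflection 1 - u.  For monotone integrands the two halves are
    # negatively correlated, which cancels much of the variance.
    total = 0.0
    for _ in range(n // 2):
        u = rng.random()
        total += f(u) + f(1.0 - u)
    return total / n

print(mc_antithetic(math.exp, 10_000, random.Random(1)))   # exact: e - 1
```

The same n function evaluations are spent as in plain Monte Carlo, so any variance reduction comes for free.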
Sequential Monte Carlo Techniques for the Solution of Linear Systems
 Journal of Scientific Computing
, 1994
"... Given a linear system Ax = b, where x is an mvector, direct numerical methods, such as Gaussian elimination, take time O(m 3) to find x. Iterative numerical methods, such as the GaussSeidel method or SOR, reduce the system to the form whence x = a + Hx, x = ∑r=0ּHra; and then apply the iterations ..."
Abstract

Cited by 18 (1 self)
Given a linear system Ax = b, where x is an m-vector, direct numerical methods, such as Gaussian elimination, take time O(m^3) to find x. Iterative numerical methods, such as the Gauss-Seidel method or SOR, reduce the system to the form x = a + Hx, whence x = Σ_{r=0..∞} H^r a, and then apply the iterations x_0 = a, x_{s+1} = a + H x_s, until sufficient accuracy is achieved; this takes time O(m^2) per iteration. They generate the truncated sums x_s = Σ_{r=0..s} H^r a. The usual plain Monte Carlo approach uses independent “random walks” to give an approximation to the truncated sum x_s, taking time O(m) per random step. Unfortunately, millions of random steps are typically needed to achieve reasonable accuracy (say, 1% r.m.s. error). Nevertheless, this is what has had to be done, if m is itself of the order of a million or more. The alternative presented here is to apply a sequential Monte Carlo method, in which the sampling scheme is iteratively improved. Simply put, if x = y + z, where y is a current estimate of x, then its correction, z, satisfies z = d + Hz, where d = a + Hy − y. At each stage, one uses plain Monte Carlo to estimate z, and so, the new estimate y. If the sequential computation of d is itself approximated, numerically or stochastically, then the expected time for this process to reach a given accuracy is again O(m) per random step; but the number of steps is dramatically reduced [improvement factors of about 5,000, 26,000, and 700 have been obtained in preliminary
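The deterministic skeleton the abstract builds on can be sketched for a small system; the Jacobi splitting below is one standard way to reach the fixed-point form x = a + Hx, and the Monte Carlo layer is omitted:

```python
def jacobi_split(A, b):
    # Rewrite Ax = b in the fixed-point form x = a + Hx via the Jacobi
    # splitting: a = D^{-1} b and H = I - D^{-1} A, with D = diag(A).
    m = len(A)
    a = [b[i] / A[i][i] for i in range(m)]
    H = [[(1.0 if i == j else 0.0) - A[i][j] / A[i][i] for j in range(m)]
         for i in range(m)]
    return a, H

def iterate(a, H, steps):
    # x_0 = a, x_{s+1} = a + H x_s; converges when the spectral radius of H
    # is below one (true here because A is diagonally dominant).
    x = a[:]
    m = len(a)
    for _ in range(steps):
        x = [a[i] + sum(H[i][j] * x[j] for j in range(m)) for i in range(m)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 7.0]
a, H = jacobi_split(A, b)
print(iterate(a, H, 50))       # approaches the solution x = (20/11, 19/11)
```

The sequential method in the paper replaces the exact matrix-vector products by random-walk estimates, re-centered around the current estimate y at each stage.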
Computing High Dimensional Integrals with Applications to Finance
 Joint Summer Research Conference on Continuous Algorithms and Complexity, Mount
, 1994
"... Highdimensional integrals are usually solved with Monte Carlo algorithms although theory suggests that low discrepancy algorithms are sometimes superior. We report on numerical testing which compares low discrepancy and Monte Carlo algorithms on the evaluation of financial derivatives. The testing ..."
Abstract

Cited by 9 (0 self)
High-dimensional integrals are usually solved with Monte Carlo algorithms although theory suggests that low discrepancy algorithms are sometimes superior. We report on numerical testing which compares low discrepancy and Monte Carlo algorithms on the evaluation of financial derivatives. The testing is performed on a Collateralized Mortgage Obligation (CMO) which is formulated as the computation of ten integrals of dimension up to 360. We tested two low discrepancy algorithms (Sobol and Halton) and two randomized algorithms (classical Monte Carlo and Monte Carlo combined with antithetic variables). We conclude that for this CMO the Sobol algorithm is always superior to the other algorithms. We believe that it will be advantageous to use the Sobol algorithm for many other types of financial derivatives. Our conclusion regarding the superiority of the Sobol algorithm also holds when a rather small number of sample points are used, an important case in practice. We built a software system ...
A weighted error metric and optimization method for antialiasing patterns
 Computer Graphics Forum (Eurographics)
, 2006
"... Displaying a synthetic image on a computer display requires determining the colors of individual pixels. To avoid aliasing, multiple samples of the image can be taken per pixel, after which the color of a pixel may be computed as a weighted sum of the samples. The positions and weights of the sample ..."
Abstract

Cited by 9 (1 self)
Displaying a synthetic image on a computer display requires determining the colors of individual pixels. To avoid aliasing, multiple samples of the image can be taken per pixel, after which the color of a pixel may be computed as a weighted sum of the samples. The positions and weights of the samples play a major role in the resulting image quality, especially in real-time applications where usually only a handful of samples can be afforded per pixel. This paper presents a new error metric and an optimization method for antialiasing patterns used in image reconstruction. The metric is based on comparing the pattern against a given reference reconstruction filter in the spatial domain and it takes into account psychovisually measured angle-specific acuities for sharp features. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation – Antialiasing
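The "weighted sum of samples" step can be sketched with a tent filter standing in for the reference reconstruction filter (an assumed filter choice for illustration, not the paper's optimized patterns or metric):

```python
def reconstruct_pixel(samples, center, radius=1.0):
    # Pixel color as a normalized weighted sum of samples, where each
    # sample's weight comes from a tent (linear) filter centered on the
    # pixel; samples is a list of ((x, y), color) pairs.
    cx, cy = center
    total_w = total_c = 0.0
    for (x, y), color in samples:
        w = (max(0.0, 1.0 - abs(x - cx) / radius)
             * max(0.0, 1.0 - abs(y - cy) / radius))
        total_w += w
        total_c += w * color
    return total_c / total_w if total_w > 0.0 else 0.0

samples = [((0.25, 0.25), 1.0), ((0.75, 0.75), 0.0)]
print(reconstruct_pixel(samples, (0.5, 0.5)))
```

The two samples sit symmetrically around the pixel center, so they receive equal weight and the reconstructed color is their average.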
Handling Uncertain Data in Array Database Systems
"... Scientific and intelligence applications have special data handling needs. In these settings, data does not fit the standard model of short coded records that had dominated the data management area for three decades. Array database systems have a specialized architecture to address this problem. Si ..."
Abstract

Cited by 8 (1 self)
Scientific and intelligence applications have special data handling needs. In these settings, data does not fit the standard model of short coded records that has dominated the data management area for three decades. Array database systems have a specialized architecture to address this problem. Since the data is typically an approximation of reality, it is important to be able to handle imprecision and uncertainty in an efficient and provably accurate way. We propose a discrete approach for value distributions and adopt a standard metric (i.e., variation distance) in probability theory to measure the quality of a result distribution. We then propose a novel algorithm that has a provable upper bound on the variation distance between its result distribution and the “ideal” one. Complementary to that, we advocate the usage of a “statistical mode” suitable for the results of many queries and applications, which is also much more efficient for execution. We show how the statistical mode also presents interesting predicate evaluation strategies. In addition, extensive experiments are performed on real-world datasets to evaluate our algorithms.
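The quality metric the abstract adopts, variation distance, is straightforward to compute for discrete distributions (the example distributions below are illustrative):

```python
def variation_distance(p, q):
    # Total variation distance between two discrete distributions, given as
    # value -> probability dicts: half the L1 distance between them.
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

exact  = {0: 0.5, 1: 0.5}
approx = {0: 0.6, 1: 0.4}
print(variation_distance(exact, approx))
```

A provable upper bound on this quantity, as the paper claims for its algorithm, bounds how far any event's probability under the result distribution can be from its probability under the ideal one.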