Results 1–10 of 30
Recent Advances in Randomized Quasi-Monte Carlo Methods
Abstract
Cited by 59 (12 self)
We survey some of the recent developments on quasi-Monte Carlo (QMC) methods, which, in their basic form, are a deterministic counterpart to the Monte Carlo (MC) method. Our main focus is the applicability of these methods to practical problems that involve the estimation of a high-dimensional integral. We review several QMC constructions and different randomizations that have been proposed to provide unbiased estimators and to enable error estimation. Randomizing QMC methods allows us to view them as variance reduction techniques. New and old results on this topic are used to explain how these methods can improve over the MC method in practice. We also discuss how this methodology can be coupled with clever transformations of the integrand in order to reduce the variance further. Additional topics included in this survey are the description of figures of merit used to measure the quality of the constructions underlying these methods, and other related techniques for multidimensional integration.
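The randomization idea the survey describes can be sketched with a Cranley-Patterson random shift of a rank-1 lattice rule: each shifted copy of the lattice gives an unbiased estimate, and the spread across independent shifts gives an error estimate. The Fibonacci generating vector and toy integrand below are illustrative choices, not taken from the paper.

```python
import numpy as np

def shifted_lattice(n, z, shift):
    """Rank-1 lattice points {i*z/n + shift} mod 1 (Cranley-Patterson shift)."""
    i = np.arange(n).reshape(-1, 1)
    return (i * np.asarray(z, dtype=float) / n + shift) % 1.0

def rqmc_estimate(f, n, z, n_reps=16, seed=0):
    """Average f over several independently shifted copies of one lattice.
    Each replicate is unbiased; their spread yields an error estimate."""
    rng = np.random.default_rng(seed)
    reps = [f(shifted_lattice(n, z, rng.random(len(z)))).mean()
            for _ in range(n_reps)]
    return float(np.mean(reps)), float(np.std(reps, ddof=1) / np.sqrt(n_reps))

# Toy integrand f(x, y) = x*y on [0,1]^2, exact value 1/4; n and z form
# the classical two-dimensional Fibonacci lattice (n = F_16, z = (1, F_15)).
mean, stderr = rqmc_estimate(lambda u: u[:, 0] * u[:, 1], n=987, z=[1, 610])
```

Averaging over shifts both restores unbiasedness and turns the deterministic rule into a variance reduction technique, exactly the viewpoint the survey takes.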
A Hilbert space embedding for distributions
In Algorithmic Learning Theory: 18th International Conference, 2007
Abstract
Cited by 53 (26 self)
We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a reproducing kernel Hilbert space. Applications of this technique can be found in two-sample tests, which are used for determining whether two sets of observations arise from the same distribution, covariate shift correction, local learning, measures of independence, and density estimation. Kernel methods are widely used in supervised learning [1, 2, 3, 4]; however, they are much less established in the areas of testing, estimation, and analysis of probability distributions, where information-theoretic approaches [5, 6] have long been dominant. Recent examples include [7] in the context of construction of graphical models, [8] in the context of feature extraction, and [9] in the context of independent component analysis. These methods by and large share a common issue: to compute quantities such as the mutual information, entropy, or Kullback-Leibler divergence, we require sophisticated space partitioning and/or ...
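As a concrete sketch of the embedding idea: with a characteristic kernel such as the Gaussian, the RKHS distance between the empirical mean embeddings of two samples (the maximum mean discrepancy, MMD) is computable from kernel evaluations alone, with no density estimation. The kernel bandwidth and sample sizes below are illustrative assumptions.

```python
import numpy as np

def gaussian_gram(X, Y, sigma=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD: the squared RKHS distance between
    the empirical mean embeddings of the two samples."""
    return (gaussian_gram(X, X, sigma).mean()
            - 2.0 * gaussian_gram(X, Y, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))
Y = rng.normal(0.0, 1.0, size=(200, 1))   # same distribution as X
Z = rng.normal(2.0, 1.0, size=(200, 1))   # shifted distribution
```

A two-sample test then rejects when the statistic is large relative to its null distribution; here the shifted sample produces a much larger MMD than the matched one.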
Extensible Lattice Sequences for Quasi-Monte Carlo Quadrature
SIAM Journal on Scientific Computing, 1999
Abstract
Cited by 29 (5 self)
Integration lattices are one of the main types of low-discrepancy sets used in quasi-Monte Carlo methods. However, they have the disadvantage of being of fixed size. This article describes the construction of an infinite sequence of points, the first b^m of which form a lattice for any nonnegative integer m. Thus, if the quadrature error using an initial lattice is too large, the lattice can be extended without discarding the original points. Generating vectors for extensible lattices are found by minimizing a loss function based on some measure of discrepancy or nonuniformity of the lattice. The spectral test used for finding pseudorandom number generators is one important example of such a discrepancy. The performance of the extensible lattices proposed here is compared to that of other methods for some practical quadrature problems.
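A minimal sketch of the extensibility property, assuming base b = 2 and an illustrative (not optimized) generating vector: indexing the lattice through the radical-inverse function means the first b^m points always form a lattice, and extending the sequence never discards earlier points.

```python
def radical_inverse(i, b=2):
    """Van der Corput radical inverse of i in base b."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        x += (i % b) * f
        i //= b
        f /= b
    return x

def extensible_lattice(n, z, b=2):
    """First n points x_i = {radical_inverse(i) * z} mod 1 of an
    extensible lattice sequence (generating vector z is illustrative)."""
    return [[(radical_inverse(i, b) * zj) % 1.0 for zj in z]
            for i in range(n)]

pts8 = extensible_lattice(8, z=[1, 5])
pts16 = extensible_lattice(16, z=[1, 5])
```

Because the radical inverse of 0, ..., b^m - 1 is a permutation of {k / b^m}, the first b^m points coincide with an ordinary rank-1 lattice, which is the property the paper exploits.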
On Resampling Algorithms for Particle Filters
Nonlinear Statistical Signal Processing Workshop, 2006
Abstract
Cited by 16 (2 self)
In this paper a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to make it possible to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms with respect to their resampling quality and computational complexity. The theoretical results are verified using extensive Monte Carlo simulations. It is found that systematic resampling is favourable, both in terms of resampling quality and computational complexity.
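Systematic resampling, the scheme the paper finds favourable, draws a single uniform and places N equally spaced positions through the cumulative weights, so it costs one random number and O(N) work. A short numpy-based sketch (the weight vectors below are illustrative):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: one uniform u, stratified positions
    (k + u)/N for k = 0..N-1, mapped through the cumulative weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), positions)

rng = np.random.default_rng(1)
idx = systematic_resample([0.25, 0.25, 0.25, 0.25], rng)   # equal weights
idx2 = systematic_resample([0.7, 0.1, 0.1, 0.1], rng)      # one heavy particle
```

With equal weights every particle survives exactly once (zero resampling noise), while a heavy particle is duplicated roughly in proportion to its weight, which is the low-variance behaviour the comparison highlights.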
Deterministic Design for Neural Network Learning: An Approach Based on Discrepancy
Abstract
Cited by 8 (5 self)
The general problem of reconstructing an unknown function from a finite collection of samples is considered, in the case where the position of each input vector in the training set is not fixed beforehand but is part of the learning process. In particular, the consistency of the Empirical Risk Minimization (ERM) principle is analyzed when the points in the input space are generated by employing a purely deterministic algorithm (deterministic learning). When the ...
Variance and Discrepancy with Alternative Scramblings
2002
Abstract
Cited by 5 (1 self)
This paper analyzes some schemes for reducing the computational burden of digital scrambling. Some such schemes have been shown not to affect the mean squared L² discrepancy. This paper shows that some discrepancy-preserving alternative scrambles can change the variance in scrambled net quadrature. Even the rate of convergence can be adversely affected by alternative scramblings. Finally, some alternatives reduce the computational burden and can also be shown to improve the rate of convergence for the variance, at least in dimension 1.
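One of the cheaper alternatives in this family is a base-2 digital (XOR) shift: it permutes the points of a digital net and so preserves the net structure at far lower cost than full nested scrambling, even though, as the paper shows, such simplifications can still change the quadrature variance. A minimal sketch, assuming a fixed digit precision:

```python
def digital_shift(x, shift, nbits=16):
    """XOR the first nbits base-2 digits of x in [0,1) with a fixed
    digit vector 'shift' (an integer below 2**nbits)."""
    xi = int(x * (1 << nbits)) & ((1 << nbits) - 1)
    return (xi ^ shift) / (1 << nbits)

# Shifting the 3-bit point set {k/8} by any 3-bit vector permutes it,
# so equidistribution properties at that resolution are preserved.
net = [k / 8 for k in range(8)]
shifted = [digital_shift(x, shift=0b101, nbits=3) for x in net]
```

The shift costs one XOR per point, versus the per-digit random permutations of full scrambling; the paper's point is that this saving is not always free in variance terms.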
A Deterministic Learning Approach Based on Discrepancy
In Proceedings of WIRN’03, 2003
Abstract
Cited by 4 (0 self)
The general problem of reconstructing an unknown function from a finite collection of samples is considered, in the case where the position of each input vector in the training set is not fixed beforehand but is part of the learning process. In particular, the consistency of the Empirical Risk Minimization (ERM) principle is analyzed when the points in the input space are generated by employing a purely deterministic algorithm (deterministic learning). When the output generation is not subject to noise, classical number-theoretic results, involving discrepancy and variation, allow us to establish a sufficient condition for the consistency of the ERM principle. In addition, the adoption of low-discrepancy sequences permits achieving a learning rate of O(1/L), where L is the size of the training set. An extension to the noisy case is discussed.
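To make the setting concrete: "deterministic learning" here means the training inputs themselves form a low-discrepancy sequence rather than i.i.d. draws. A minimal sketch generating such a design with a Halton sequence (the choice of Halton, rather than whichever sequence the paper uses, is an illustrative assumption):

```python
def radical_inverse(i, b):
    """Van der Corput radical inverse of i in base b."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        x += (i % b) * f
        i //= b
        f /= b
    return x

def halton(n, bases=(2, 3)):
    """First n points of a Halton sequence in [0,1)^d: coordinate j uses
    the radical inverse in the j-th (pairwise coprime) base."""
    return [[radical_inverse(i, b) for b in bases] for i in range(1, n + 1)]

# Deterministic, well-spread training inputs for a 2-D regression problem.
train_inputs = halton(64)
```

The O(1/L) rate quoted above rests on such sequences having discrepancy O(log^d L / L), much smaller than the O(L^{-1/2}) spread of random points.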
Quasi-Monte Carlo algorithms for unbounded, weighted integration problems
Journal of Complexity, 2004
Abstract
Cited by 4 (1 self)
In this article we investigate quasi-Monte Carlo methods for multidimensional improper integrals with respect to a measure other than the uniform distribution. Additionally, the integrand is allowed to be unbounded at the lower boundary of the integration domain. We establish convergence of the quasi-Monte Carlo estimator to the value of the improper integral under conditions involving both the integrand and the sequence used. Furthermore, we suggest a modification of an approach proposed by Hlawka and Mück for the creation of low-discrepancy sequences with regard to a given density, which are suited for singular integrands. Key words: quasi-Monte Carlo integration, weighted integration, nonuniformly distributed low-discrepancy sequences. This paper is devoted to quasi-Monte Carlo (QMC) techniques for weighted integration problems of the form ...
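The Hlawka and Mück construction itself is more involved, but the basic device of producing nonuniformly distributed low-discrepancy points, pushing a low-discrepancy sequence through an inverse CDF, can be sketched simply. Here the Exponential law stands in as an illustrative weight, and the transformed integrand is unbounded at one end of the unit interval, the kind of singularity the paper studies:

```python
import math

def vdc(i, b=2):
    """Van der Corput radical inverse: a 1-D low-discrepancy sequence."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        x += (i % b) * f
        i //= b
        f /= b
    return x

def exp_lowdisc(n, lam=1.0):
    """Map a low-discrepancy sequence through the inverse CDF of the
    Exponential(lam) distribution; the transform -log(1-u)/lam blows up
    as u -> 1, so the integrand is unbounded on the unit interval."""
    return [-math.log(1.0 - vdc(i)) / lam for i in range(n)]

# QMC estimate of the exponential mean E[X] = 1/lam, here with lam = 1.
xs = exp_lowdisc(4096)
est = sum(xs) / len(xs)
```

Convergence here is not automatic: it requires exactly the kind of joint condition on the integrand's growth and the sequence's behaviour near the singularity that the paper establishes.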
Adaptive Quasi-Monte Carlo Integration Based on MISER and VEGAS
Abstract
Cited by 3 (0 self)
Quasi-Monte Carlo (QMC) routines are among the most common techniques for solving integration problems in high dimensions. However, their efficiency degrades if the variation of the integrand is concentrated in small areas of the integration domain. Adaptive algorithms cope with this situation by adjusting the flow of computation based on previous integrand evaluations. We explore ways to modify the Monte Carlo based adaptive algorithms MISER and VEGAS so that low-discrepancy point sets are used instead of random samples. Experimental results show that the proposed algorithms outperform plain QMC as well as the original adaptive integration routines for certain classes of test cases.
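A drastically simplified, hypothetical one-dimensional sketch of the MISER side of this idea (not the authors' algorithm): recursively bisect the domain, spend a small pilot sample estimating the spread of the integrand in each half, allocate the remaining budget proportionally, and sample every stratum with a van der Corput sequence instead of pseudorandom points.

```python
import math

def vdc(i, b=2):
    """Van der Corput radical inverse (1-D low-discrepancy sequence)."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        x += (i % b) * f
        i //= b
        f /= b
    return x

def qmc_integral(f, a, b, n):
    """Plain QMC estimate of the integral of f over [a, b]."""
    n = max(n, 1)
    return (b - a) * sum(f(a + (b - a) * vdc(i)) for i in range(n)) / n

def miser_qmc(f, a, b, n, depth=3):
    """MISER-style recursion with QMC strata: bisect, spend n//8 points
    per half on a pilot spread estimate, split the rest proportionally."""
    if depth == 0 or n < 32:
        return qmc_integral(f, a, b, n)
    mid = 0.5 * (a + b)
    pilot = max(n // 8, 2)

    def spread(lo, hi):
        vals = [f(lo + (hi - lo) * vdc(i)) for i in range(pilot)]
        mu = sum(vals) / len(vals)
        return math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals)) + 1e-12

    sl, sr = spread(a, mid), spread(mid, b)
    rest = n - 2 * pilot
    nl = int(rest * sl / (sl + sr))
    return (miser_qmc(f, a, mid, nl, depth - 1)
            + miser_qmc(f, mid, b, rest - nl, depth - 1))

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
approx = miser_qmc(lambda x: x * x, 0.0, 1.0, n=2048)
```

The allocation mirrors MISER's variance-proportional splitting, while the within-stratum samples inherit the faster QMC convergence, which is the combination the paper investigates (in higher dimensions and with VEGAS-style importance sampling as well).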