Results 1–10 of 80
Quasi-Randomized Path Planning
 In Proc. IEEE Int’l Conf. on Robotics and Automation
, 2001
Abstract

Cited by 78 (8 self)
We propose the use of quasi-random sampling techniques for path planning in high-dimensional configuration spaces. Following similar trends from related numerical computation fields, we show several advantages offered by these techniques in comparison to random sampling. Our ideas are evaluated in the context of the probabilistic roadmap (PRM) framework. Two quasi-random variants of PRM-based planners are proposed: 1) a classical PRM with quasi-random sampling, and 2) a quasi-random Lazy PRM. Both have been implemented, and are shown through experiments to offer some performance advantages in comparison to their randomized counterparts. 1 Introduction Over two decades of path planning research have led to two primary trends. In the 1980s, deterministic approaches provided both elegant, complete algorithms for solving the problem, and also useful approximate or incomplete algorithms. The curse of dimensionality due to high-dimensional configuration spaces motivated researchers from the 199...
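The quasi-random sampling the abstract above refers to is typically realized with a low-discrepancy sequence such as Halton's. The sketch below generates Halton points for a hypothetical 3-D configuration space and compares their spread against pseudorandom points; the `max_gap` measure and the comparison setup are illustrative choices, not the paper's experiments.

```python
import random

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def halton_point(i, bases):
    """i-th Halton point in [0,1]^d, one prime base per dimension."""
    return [radical_inverse(i, b) for b in bases]

def max_gap(points, dim=0):
    """Largest gap between consecutive sample coordinates along one axis."""
    xs = sorted(p[dim] for p in points)
    return max(b - a for a, b in zip(xs, xs[1:]))

# Hypothetical 3-D configuration space: first three primes as bases.
bases = [2, 3, 5]
qr = [halton_point(i, bases) for i in range(1, 201)]
random.seed(1)
pr = [[random.random() for _ in range(3)] for _ in range(200)]

print("halton max gap:", round(max_gap(qr), 4))
print("random max gap:", round(max_gap(pr), 4))
```

The Halton points leave no large empty intervals along any axis, which is the dispersion advantage the abstract alludes to; pseudorandom points typically exhibit noticeably larger gaps.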
Range counting over multidimensional data streams
 Discrete & Computational Geometry
, 2004
Abstract

Cited by 32 (0 self)
We consider the problem of approximate range counting over streams of d-dimensional points. In the data stream model, the algorithm makes a single scan of the data, which is presented in an arbitrary order, and computes a compact summary (called a sketch). The sketch, whose size depends on the approximation parameter ε, can be used to count the number of points inside a query range within additive error εn, where n is the size of the stream. We present several results, deterministic and randomized, for both rectangle and halfplane ranges. 1 Introduction Data streams have emerged as an important paradigm for processing data that arrives and needs to be processed continuously. For instance, telecom service providers routinely monitor packet flows through their networks to infer usage patterns and signs of attack, or to optimize their routing tables. Financial markets, banks, web servers, and news organizations also generate rapid and continuous data streams.
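As a simple illustrative baseline for the setting above — not the paper's deterministic constructions — a one-pass reservoir sample already yields a sketch whose range-count estimates have small additive error; the grid-shaped stream and the rectangle query are made up for the example.

```python
import random

def reservoir_sketch(stream, k, seed=0):
    """One-pass reservoir sample of k points; returns (sample, n)."""
    rng = random.Random(seed)
    sample, n = [], 0
    for p in stream:
        n += 1
        if len(sample) < k:
            sample.append(p)
        else:
            j = rng.randrange(n)       # classic Algorithm R replacement rule
            if j < k:
                sample[j] = p
    return sample, n

def approx_range_count(sample, n, lo, hi):
    """Estimate # stream points inside the axis-aligned rectangle [lo, hi]."""
    inside = sum(all(l <= x <= h for x, l, h in zip(p, lo, hi))
                 for p in sample)
    return inside * n / len(sample)

# Hypothetical 2-D stream: a 10 x 10 grid of points.
stream = [(i % 10, i // 10) for i in range(100)]
sk, n = reservoir_sketch(stream, 100)          # k >= n: sketch is exact
est = approx_range_count(sk, n, (0, 0), (4, 4))
print(est)  # → 25.0
```

With k below the stream size the estimate carries additive error roughly εn for k on the order of 1/ε²; the paper's sketches achieve much smaller sizes for the same guarantee.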
A randomized quasi-Monte Carlo simulation method for Markov chains
 Operations Research
, 2007
Abstract

Cited by 27 (8 self)
Abstract. We introduce and study a randomized quasi-Monte Carlo method for estimating the state distribution at each step of a Markov chain. The number of steps in the chain can be random and unbounded. The method simulates n copies of the chain in parallel, using a (d + 1)-dimensional highly uniform point set of cardinality n, randomized independently at each step, where d is the number of uniform random numbers required at each transition of the Markov chain. This technique is effective in particular to obtain a low-variance unbiased estimator of the expected total cost up to some random stopping time, when state-dependent costs are paid at each step. It is generally more effective when the state space has a natural order related to the cost function. We provide numerical illustrations where the variance reduction with respect to standard Monte Carlo is substantial. The variance can be reduced by factors of several thousands in some cases. We prove bounds on the convergence rate of the worst-case error and variance for special situations. In line with what is typically observed in randomized quasi-Monte Carlo contexts, our empirical results indicate much better convergence than what these bounds guarantee.
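A heavily simplified version of the scheme described above can be sketched for a toy chain with one uniform number per transition (d = 1): n copies are sorted by state at each step and driven by a randomly shifted one-dimensional grid as the highly uniform point set. The chain X_t = X_{t−1} + U_t and the choice of the state itself as the per-step cost are hypothetical, chosen so the exact answer T(T+1)/4 is known.

```python
import random

def rqmc_markov_cost(n, T, seed=0):
    """Estimate E[sum_{t=1..T} X_t] for X_t = X_{t-1} + U_t, U_t ~ U(0,1),
    X_0 = 0, via a simplified array-RQMC scheme: n chains in parallel,
    each step sorted by state and fed a freshly shifted grid {(i+0.5)/n}."""
    rng = random.Random(seed)
    states = [0.0] * n
    total = 0.0
    for _ in range(T):
        shift = rng.random()              # independent randomization per step
        states.sort()                     # order chains by state
        us = [((i + 0.5) / n + shift) % 1.0 for i in range(n)]
        states = [x + u for x, u in zip(states, us)]
        total += sum(states) / n          # average cost paid this step
    return total

est = rqmc_markov_cost(512, 4)
print(est, "vs exact", 4 * 5 / 4)         # exact value T(T+1)/4 = 5.0
```

Because each shifted grid has mean within 1/(2n) of 1/2, the estimate here is deterministically within T(T+1)/(4n) of the truth; the sorting step matters more for nonlinear cost functionals, which is where the paper's variance-reduction factors arise.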
From Discrepancy to Declustering: Near-optimal multidimensional declustering strategies for range queries (Extended Abstract)
, 2001
Abstract

Cited by 22 (2 self)
Declustering schemes allocate data blocks among multiple disks to enable parallel retrieval. Given a declustering scheme D, its response time with respect to a query Q, rt(Q), is defined to be the maximum number of disk blocks of the query stored by the scheme in any one of the disks. If |Q| is the number of data blocks in Q and M is the number of disks, then rt(Q) is at least |Q|/M. One way to evaluate the performance of D with respect to a set of queries Q is to measure its additive error: the maximum difference between rt(Q) and |Q|/M over all range queries Q ∈ Q. In this paper, we consider the problem of designing declustering schemes for uniform multidimensional data arranged in a d-dimensional grid so that their additive errors with respect to range queries are as small as possible. It has been shown that such declustering schemes will have an additive error of Ω(log M) when d = 2 and Ω(log^{(d−1)/2} M) when d > 2 with respect to range queries. Asymptotically optimal declustering schemes exist for 2-dimensional data. For data in larger dimensions, however, the best bound for additive errors is O(M^{d−1}), which is extremely large. In this paper, we propose two declustering schemes based on low-discrepancy points in d dimensions. When d is fixed, both schemes have an additive error of O(log^{d−1} M) with respect to range queries provided certain conditions are satisfied: the first scheme requires d ≥ 3 and M to be a power of a prime where the prime is at least d, while the second scheme requires the size of the data to grow within some polynomial of M, with no restriction on
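The definitions above can be made concrete with the classic "disk modulo" scheme — a simple baseline, not the paper's low-discrepancy construction — by brute-forcing rt(Q) and the additive error over all 2-D range queries on a small grid.

```python
from itertools import product

def disk_modulo(i, j, M):
    """Classic 'disk modulo' declustering: block (i, j) goes to disk (i+j) mod M."""
    return (i + j) % M

def additive_error(scheme, N, M):
    """Max over all range queries on an N x N grid of rt(Q) - |Q|/M."""
    worst = 0.0
    for r1, r2 in product(range(N), repeat=2):
        if r2 < r1:
            continue
        for c1, c2 in product(range(N), repeat=2):
            if c2 < c1:
                continue
            counts = [0] * M               # blocks of Q landing on each disk
            for i in range(r1, r2 + 1):
                for j in range(c1, c2 + 1):
                    counts[scheme(i, j, M)] += 1
            size = (r2 - r1 + 1) * (c2 - c1 + 1)
            worst = max(worst, max(counts) - size / M)
    return worst

err = additive_error(disk_modulo, 8, 4)
print("additive error on 8x8 grid, M=4:", err)
```

Even this toy measurement shows the error is a small additive quantity rather than a constant fraction of |Q|/M; the paper's schemes bound it by O(log^{d−1} M) in higher dimensions.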
Mergeable Summaries
Abstract

Cited by 21 (7 self)
We study the mergeability of data summaries. Informally speaking, mergeability requires that, given two summaries on two data sets, there is a way to merge the two summaries into a single summary on the union of the two data sets, while preserving the error and size guarantees. This property means that the summaries can be merged in a way like other algebraic operators such as sum and max, which is especially useful for computing summaries on massive distributed data. Several data summaries are trivially mergeable by construction, most notably all the sketches that are linear functions of the data sets. But some other fundamental ones, like those for heavy hitters and quantiles, are not (known to be) mergeable. In this paper, we demonstrate that these summaries are indeed mergeable or can be made mergeable after appropriate modifications. Specifically, we show that for ε-approximate heavy hitters, there is a deterministic mergeable summary of size O(1/ε); for ε-approximate quantiles, there is a deterministic summary of size O((1/ε) log(εn)) that has a restricted form of mergeability, and a randomized one of size O((1/ε) log^{3/2}(1/ε)) with full mergeability. We also extend our results to geometric summaries such as ε-approximations and ε-kernels. We also achieve two results of independent interest: (1) we provide the best known randomized streaming bound for ε-approximate quantiles that depends only on ε, of size O((1/ε) log^{3/2}(1/ε)), and (2) we demonstrate that the MG and the SpaceSaving summaries for heavy hitters are isomorphic.
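Mergeability can be illustrated with the MG (Misra-Gries) heavy-hitters summary mentioned above. The sketch below keeps at most k counters and merges two summaries by adding counters and pruning with the (k+1)-st largest count; the streams are made up for the example.

```python
from collections import Counter

def mg_update(summary, item, k):
    """Misra-Gries update: at most k counters; a query for any item's
    frequency is an undercount by at most n/(k+1)."""
    if item in summary or len(summary) < k:
        summary[item] = summary.get(item, 0) + 1
    else:
        for key in list(summary):          # decrement all, drop zeros
            summary[key] -= 1
            if summary[key] == 0:
                del summary[key]
    return summary

def mg_merge(s1, s2, k):
    """Merge two MG summaries: add counters, subtract the (k+1)-st largest
    count from all, keep positives. At most k counters survive."""
    merged = Counter(s1) + Counter(s2)
    if len(merged) <= k:
        return dict(merged)
    cut = sorted(merged.values(), reverse=True)[k]
    return {x: c - cut for x, c in merged.items() if c - cut > 0}

k = 3
s1, s2 = {}, {}
for x in ['a'] * 30 + ['b', 'c', 'd', 'e']:
    mg_update(s1, x, k)
for x in ['a'] * 20 + ['x'] * 5:
    mg_update(s2, x, k)
merged = mg_merge(s1, s2, k)
print(merged)  # → {'a': 49, 'e': 1, 'x': 5}
```

The pruning step removes at least (k+1)·cut total count, so cut ≤ n/(k+1); this is the key step behind the paper's result that the merged summary keeps the same error guarantee as a summary built on the combined stream.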
Shape Fitting on Point Sets with Probability Distributions
Abstract

Cited by 15 (6 self)
Abstract. We consider problems on data sets where each data point has uncertainty described by an individual probability distribution. We develop several frameworks and algorithms for calculating statistics on these uncertain data sets. Our examples focus on geometric shape fitting problems. We prove approximation guarantees for the algorithms with respect to the full probability distributions. We then empirically demonstrate that our algorithms are simple and practical, solving for a constant hidden by asymptotic analysis so that a user can reliably trade speed and size for accuracy.
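The Monte Carlo flavor of this framework can be sketched: instantiate each uncertain point from its distribution, fit a shape statistic, and repeat to approximate the statistic's distribution. Here each point is a hypothetical isotropic Gaussian and the "shape" is just the centroid, a stand-in for the paper's more refined fitting problems and guarantees.

```python
import random

def sample_instantiation(dists, rng):
    """Draw one concrete point set; each uncertain point is ((mx, my), sigma)."""
    return [(rng.gauss(mx, s), rng.gauss(my, s)) for (mx, my), s in dists]

def centroid(pts):
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def shape_fit_distribution(dists, trials, seed=0):
    """Monte Carlo over the input distributions: instantiate, fit, collect."""
    rng = random.Random(seed)
    return [centroid(sample_instantiation(dists, rng)) for _ in range(trials)]

# Five hypothetical uncertain points with sigma = 0.1.
dists = [((float(i), float(i % 2)), 0.1) for i in range(5)]
fits = shape_fit_distribution(dists, 200)
cx = sum(x for x, _ in fits) / len(fits)
print("mean fitted centroid x:", round(cx, 3))
```

The empirical distribution of `fits` is what one then summarizes (mean, quantiles, spread) to report a statistic with respect to the full input distributions.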
Asymptotically optimal declustering schemes for range queries
 in 8th International Conference on Database Theory, Lecture Notes In Computer Science
, 2001
Abstract

Cited by 12 (2 self)
... Ω(log^{(d−1)/2} M) for d-dimensional schemes and to Ω(log M) for 2-dimensional schemes, thus proving that the ...
Control variates for quasi-Monte Carlo
, 2003
Abstract

Cited by 11 (3 self)
Quasi-Monte Carlo (QMC) methods have begun to displace ordinary Monte Carlo (MC) methods in many practical problems. It is natural and obvious to combine QMC methods with traditional variance reduction techniques used in MC sampling, such as control variates. There can,
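The combination the abstract describes can be sketched in a few lines: estimate E[f(U)] over QMC points, subtracting a control variate g with known mean, with the coefficient β fitted on the same points. All choices here (van der Corput points, f = exp, g(x) = x with mean 1/2) are illustrative.

```python
import math

def vdc(i):
    """Base-2 van der Corput radical inverse."""
    inv, f = 0.0, 0.5
    while i:
        inv += (i & 1) * f
        i >>= 1
        f *= 0.5
    return inv

def qmc_control_variate(f, g, g_mean, n):
    """QMC estimate of E[f(U)], U ~ Uniform(0,1), with control variate g
    of known mean g_mean; beta fitted by least squares on the QMC points."""
    xs = [vdc(i) for i in range(1, n + 1)]
    fs = [f(x) for x in xs]
    gs = [g(x) for x in xs]
    fbar, gbar = sum(fs) / n, sum(gs) / n
    cov = sum((a - fbar) * (b - gbar) for a, b in zip(fs, gs)) / n
    var = sum((b - gbar) ** 2 for b in gs) / n
    beta = cov / var
    return fbar - beta * (gbar - g_mean)

est = qmc_control_variate(math.exp, lambda x: x, 0.5, 1023)
print(est, "vs exact", math.e - 1)
```

Fitting β from the same QMC points is one of the subtleties this line of work examines: with deterministic points the usual MC variance argument behind control variates no longer applies directly.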
A weighted error metric and optimization method for antialiasing patterns
 COMPUTER GRAPHICS FORUM
, 2006
Abstract

Cited by 11 (1 self)
Displaying a synthetic image on a computer display requires determining the colors of individual pixels. To avoid aliasing, multiple samples of the image can be taken per pixel, after which the color of a pixel may be computed as a weighted sum of the samples. The positions and weights of the samples play a major role in the resulting image quality, especially in real-time applications where usually only a handful of samples can be afforded per pixel. This paper presents a new error metric and an optimization method for antialiasing patterns used in image reconstruction. The metric is based on comparing the pattern against a given reference reconstruction filter in the spatial domain, and it takes into account psychovisually measured angle-specific acuities for sharp features.
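The weighted-sum reconstruction step mentioned above is straightforward to write down; this hypothetical helper (not the paper's metric or optimizer) computes a pixel's RGB color from per-sample colors and pattern weights.

```python
def reconstruct_pixel(samples, weights):
    """Pixel color as a normalized weighted sum of per-sample RGB colors.

    samples: list of (r, g, b) tuples, one per sample position in the pattern
    weights: matching list of filter weights for those sample positions
    """
    total = sum(weights)
    return tuple(sum(w * s[c] for w, s in zip(weights, samples)) / total
                 for c in range(3))

# Two samples: pure red weighted 1, pure green weighted 3.
px = reconstruct_pixel([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], [1.0, 3.0])
print(px)  # → (0.25, 0.75, 0.0)
```

The paper's contribution is choosing the sample positions and these weights so that the resulting filter matches a reference reconstruction filter under a psychovisually weighted error.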