Results 1–10 of 74
Markov chains for exploring posterior distributions
Annals of Statistics, 1994
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 1123 (6 self)
Universal Limit Laws for Depths in Random Trees
SIAM Journal on Computing, 1998
"... Random binary search trees, bary search trees, medianof(2k+1) trees, quadtrees, simplex trees, tries, and digital search trees are special cases of random split trees. For these trees, we o#er a universal law of large numbers and a limit law for the depth of the last inserted point, as well as a ..."
Abstract

Cited by 60 (10 self)
Random binary search trees, b-ary search trees, median-of-(2k+1) trees, quadtrees, simplex trees, tries, and digital search trees are special cases of random split trees. For these trees, we offer a universal law of large numbers and a limit law for the depth of the last inserted point, as well as a law of large numbers for the height.
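The law of large numbers for the depth of the last inserted point can be checked empirically for the binary-search-tree special case, where the depth divided by ln n tends to 2. A minimal simulation sketch (the helper `bst_insert_depth` is hypothetical, not from the paper):

```python
import math
import random

def bst_insert_depth(keys):
    """Insert keys in order into a plain (unbalanced) BST and
    return the depth of the last inserted key (root = depth 0)."""
    root = None            # node layout: [key, left, right]
    depth = 0
    for k in keys:
        if root is None:
            root = [k, None, None]
            depth = 0
            continue
        node, depth = root, 0
        while True:
            depth += 1
            idx = 1 if k < node[0] else 2
            if node[idx] is None:
                node[idx] = [k, None, None]
                break
            node = node[idx]
    return depth

random.seed(0)
n, trials = 4000, 30
avg = sum(bst_insert_depth(random.sample(range(n), n)) for _ in range(trials)) / trials
ratio = avg / math.log(n)   # the limit law predicts this ratio approaches 2
```

For n = 4000 the averaged ratio lands near 2, consistent with the universal law specialized to random binary search trees.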
Derivative-free optimization: A review of algorithms and comparison of software implementations
"... ..."
Improving Hit-and-Run for global optimization
J. Global Optim., 1993
"... Abstract. Improving HitandRun is a random search algorithm for global optimization that at each iteration generates a candidate point for improvement that is uniformly distributed along a randomly chosen direction within the feasible region. The candidate point is accepted as the next iterate if i ..."
Abstract

Cited by 25 (6 self)
Abstract. Improving Hit-and-Run is a random search algorithm for global optimization that at each iteration generates a candidate point for improvement, uniformly distributed along a randomly chosen direction within the feasible region. The candidate point is accepted as the next iterate if it offers an improvement over the current iterate. We show that for positive definite quadratic programs, the expected number of function evaluations needed to approximate the optimal solution arbitrarily well is at most O(n^{5/2}), where n is the dimension of the problem. Improving Hit-and-Run, when applied to global optimization problems, can therefore be expected to converge polynomially fast as it approaches the global optimum. Key words: random search, Monte Carlo optimization, algorithm complexity, global optimization.
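The iteration described in the abstract can be sketched in a few lines: a uniform direction, a uniform candidate on the feasible chord, and acceptance only on improvement. This is a minimal illustration over a ball-shaped feasible region, not the paper's analysis setting; all names here are hypothetical:

```python
import math
import random

def improving_hit_and_run(f, x, radius, iters=2000, seed=1):
    """Minimize f by Improving Hit-and-Run inside the ball ||x|| <= radius:
    pick a uniformly random direction, pick a uniform candidate on the
    feasible chord in that direction, and accept only improvements."""
    rng = random.Random(seed)
    n = len(x)
    for _ in range(iters):
        # uniformly random direction on the unit sphere (Gaussian trick)
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        nrm = math.sqrt(sum(c * c for c in d))
        d = [c / nrm for c in d]
        # chord [t_lo, t_hi] where x + t*d stays in the ball:
        # solve t^2 + b t + c = 0 from ||x + t d||^2 = radius^2
        b = 2.0 * sum(xi * di for xi, di in zip(x, d))
        c = sum(xi * xi for xi in x) - radius * radius
        disc = math.sqrt(b * b - 4.0 * c)
        t = rng.uniform((-b - disc) / 2.0, (-b + disc) / 2.0)
        cand = [xi + t * di for xi, di in zip(x, d)]
        if f(cand) < f(x):          # accept only improving candidates
            x = cand
    return x

# positive definite quadratic with minimum at the origin
quad = lambda v: sum((i + 1) * vi * vi for i, vi in enumerate(v))
xopt = improving_hit_and_run(quad, [2.0, -1.5, 1.0], radius=4.0)
```

Starting from f = 11.5, a couple of thousand iterations drive the quadratic objective close to its minimum of 0, in line with the polynomial-rate claim.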
APPROXIMATE VOLUME AND INTEGRATION FOR BASIC SEMI-ALGEBRAIC SETS
"... Given a basic compact semialgebraic set K ⊂ R n, we introduce a methodology that generates a sequence converging to the volume of K. This sequence is obtained from optimal values of a hierarchy of either semidefinite or linear programs. Not only the volume but also every finite vector of moments o ..."
Abstract

Cited by 17 (11 self)
Given a basic compact semi-algebraic set K ⊂ R^n, we introduce a methodology that generates a sequence converging to the volume of K. This sequence is obtained from the optimal values of a hierarchy of either semidefinite or linear programs. Not only the volume but also every finite vector of moments of the probability measure that is uniformly distributed on K can be approximated as closely as desired, which permits approximating the integral on K of any given polynomial; an extension to integration against certain weight functions is also provided. Finally, some numerical issues associated with the algorithms involved are briefly discussed.
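The semidefinite hierarchy itself requires an SDP solver; as a point of contrast (explicitly not the paper's method), the same volumes are classically estimated by hit-or-miss Monte Carlo over a bounding box. A minimal sketch, with all names hypothetical:

```python
import random

def mc_volume(constraints, box, samples=200_000, seed=0):
    """Hit-or-miss Monte Carlo estimate of the volume of
    K = {x in box : g(x) >= 0 for every g in constraints}.
    Baseline contrast only, not the SDP/LP hierarchy of the paper."""
    rng = random.Random(seed)
    box_vol = 1.0
    for lo, hi in box:
        box_vol *= hi - lo
    hits = 0
    for _ in range(samples):
        x = [rng.uniform(lo, hi) for lo, hi in box]
        if all(g(x) >= 0.0 for g in constraints):
            hits += 1
    return box_vol * hits / samples

# K = unit disk {x^2 + y^2 <= 1}, a basic semi-algebraic set; vol(K) = pi
disk = [lambda x: 1.0 - x[0] ** 2 - x[1] ** 2]
vol = mc_volume(disk, box=[(-1.0, 1.0), (-1.0, 1.0)])
```

Unlike the hierarchy, this estimator gives no deterministic convergence guarantee, which is part of the paper's motivation.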
POSITIVITY OF HIT-AND-RUN AND RELATED ALGORITHMS
"... Abstract. We prove positivity of the Markov operators that correspond to the hitandrun algorithm, random scan Gibbs sampler, slice sampler and an Metropolis algorithm with positive proposal. In all of these cases the positivity is independent of the state space and the stationary distribution. In ..."
Abstract

Cited by 14 (1 self)
Abstract. We prove positivity of the Markov operators that correspond to the Hit-and-Run algorithm, the random scan Gibbs sampler, the slice sampler, and a Metropolis algorithm with positive proposal. In all of these cases the positivity is independent of the state space and the stationary distribution. In particular, the results show that it is not necessary to consider the lazy versions of these Markov chains. The proof relies on a well-known lemma which relates the positivity of the product MTM*, for some operators M and T, to the positivity of T. It then remains to find such a representation of the Markov operator with a positive operator T.
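The lemma mentioned in the abstract is a one-line computation: if T is a positive (self-adjoint) operator on a Hilbert space and M is any bounded operator, then for every f,

```latex
\langle M T M^{*} f,\, f \rangle \;=\; \langle T (M^{*} f),\, M^{*} f \rangle \;\ge\; 0,
```

so MTM* inherits positivity from T; the work in the paper is exhibiting such a factorization for each Markov operator.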
Probabilistic temporal logic falsification of cyber-physical systems
 ACM Transactions on Embedded Computing Systems
"... We present a MonteCarlo optimization technique for finding system behaviors that falsify a Metric Temporal Logic (MTL) property. Our approach performs a random walk over the space of system inputs guided by a robustness metric defined by the MTL property. Robustness is guiding the search for a fal ..."
Abstract

Cited by 14 (12 self)
We present a Monte Carlo optimization technique for finding system behaviors that falsify a Metric Temporal Logic (MTL) property. Our approach performs a random walk over the space of system inputs, guided by a robustness metric defined by the MTL property. Robustness guides the search for a falsifying behavior by steering exploration toward trajectories with smaller robustness values. The resulting testing framework can be applied to a wide class of Cyber-Physical Systems (CPS). We show through experiments on complex system models that our framework can help falsify properties automatically and with more consistency than other means such as uniform sampling.
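The robustness-guided random walk can be sketched as a Metropolis-style search that minimizes robustness and stops once it turns negative. This is an illustrative stand-in, not the paper's exact sampler; the robustness function and step proposal below are hypothetical toys:

```python
import math
import random

def falsify(robustness, step, x0, iters=5000, temp=0.3, seed=0):
    """Robustness-guided stochastic search: propose a perturbed input,
    prefer inputs with smaller robustness, and report a falsifying
    input once robustness goes negative."""
    rng = random.Random(seed)
    x, rx = x0, robustness(x0)
    for _ in range(iters):
        y = step(x, rng)                    # random-walk proposal
        ry = robustness(y)
        # downhill moves always accepted; uphill moves accepted with
        # probability exp(-(increase)/temp) to escape plateaus
        if ry < rx or rng.random() < math.exp((rx - ry) / temp):
            x, rx = y, ry
        if rx < 0:                          # property falsified
            return x, rx
    return x, rx

# toy stand-in for a system trace: robustness is the distance of a 2-D
# input from a small "bad" region around (3, -2); negative = falsified
rob = lambda v: math.hypot(v[0] - 3.0, v[1] + 2.0) - 0.2
walk = lambda v, rng: [vi + rng.gauss(0.0, 0.3) for vi in v]
x_bad, r_bad = falsify(rob, walk, [0.0, 0.0])
```

The search drifts toward low-robustness inputs and terminates at a counterexample, mirroring how the MTL robustness metric shapes the walk in the paper.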
DIRECTION CHOICE FOR ACCELERATED CONVERGENCE IN HIT-AND-RUN SAMPLING
, 1994
"... HitandRun algorithms are Monte Carlo procedures for generating points that are asymptotically distributed according to general absolutely continuous target distributions G over open bounded regions S. Applications include nonredundant constraint identification, global optimization, and Monte Carlo ..."
Abstract

Cited by 13 (0 self)
Hit-and-Run algorithms are Monte Carlo procedures for generating points that are asymptotically distributed according to general absolutely continuous target distributions G over open bounded regions S. Applications include non-redundant constraint identification, global optimization, and Monte Carlo integration. These algorithms are reversible random walks that commonly apply uniformly distributed step directions. We investigate non-uniform direction choice and show that under minimal restrictions on the region S and target distribution G, there exists a unique direction choice distribution, characterized by necessary and sufficient conditions depending on S and G, which optimizes a bound on the rate of convergence. We provide computational results demonstrating greatly accelerated convergence for this optimizing direction choice.
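The baseline the paper improves upon, Hit-and-Run with uniformly distributed directions, is short to state. A minimal sketch for uniform sampling over an axis-aligned box (a convenient S; all names hypothetical):

```python
import math
import random

def hit_and_run(box, x0, steps=10_000, seed=0):
    """Hit-and-Run with uniform direction choice over an axis-aligned box:
    draw a random direction, intersect the line through the current point
    with the box, then jump to a uniform point on that chord."""
    rng = random.Random(seed)
    x, n = list(x0), len(x0)
    out = []
    for _ in range(steps):
        # uniform direction via normalized Gaussians
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        nrm = math.sqrt(sum(c * c for c in d))
        d = [c / nrm for c in d]
        # chord [t_lo, t_hi] along which x + t*d stays inside the box
        t_lo, t_hi = -math.inf, math.inf
        for (lo, hi), xi, di in zip(box, x, d):
            if abs(di) > 1e-12:
                a, b = (lo - xi) / di, (hi - xi) / di
                t_lo, t_hi = max(t_lo, min(a, b)), min(t_hi, max(a, b))
        t = rng.uniform(t_lo, t_hi)
        x = [xi + t * di for xi, di in zip(x, d)]
        out.append(x)
    return out

pts = hit_and_run(box=[(0.0, 1.0), (0.0, 1.0)], x0=[0.5, 0.5])
mean_x = sum(p[0] for p in pts) / len(pts)    # should approach 0.5
```

The paper's contribution is replacing the uniform direction draw above with an optimized direction distribution tailored to S and G.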
Near-optimal Batch Mode Active Learning and Adaptive Submodular Optimization
"... Active learning can lead to a dramatic reduction in labeling effort. However, in many practical implementations (such as crowdsourcing, surveys, highthroughput experimental design), it is preferable to query labels for batches of examples to be labelled in parallel. While several heuristics have be ..."
Abstract

Cited by 11 (1 self)
Active learning can lead to a dramatic reduction in labeling effort. However, in many practical implementations (such as crowdsourcing, surveys, and high-throughput experimental design), it is preferable to query labels for batches of examples to be labelled in parallel. While several heuristics have been proposed for batch-mode active learning, little is known about their theoretical performance. We consider batch-mode active learning and more general information-parallel stochastic optimization problems that exhibit adaptive submodularity, a natural diminishing-returns condition. We prove that for such problems, a simple greedy strategy is competitive with the optimal batch-mode policy. In some cases, surprisingly, the use of batches incurs competitively low cost, even when compared to a fully sequential strategy. We demonstrate the effectiveness of our approach on batch-mode active learning tasks, where it outperforms the state of the art, as well as on the novel problem of multi-stage influence maximization in social networks.
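The greedy strategy analyzed in the abstract is the classical one for monotone submodular objectives: repeatedly pick the element with the largest marginal gain. A minimal sketch on set coverage, a standard submodular example (not the paper's active-learning objective; names are hypothetical):

```python
def greedy_max_coverage(sets, k):
    """Greedy selection for a monotone submodular objective (set coverage):
    repeatedly add the set with the largest marginal gain over what is
    already covered."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda name: len(sets[name] - covered))
        if not sets[best] - covered:
            break                      # no remaining marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

universe_sets = {
    "a": {1, 2, 3, 4},
    "b": {3, 4, 5},
    "c": {5, 6},
    "d": {1, 6},
}
chosen, covered = greedy_max_coverage(universe_sets, k=2)
# greedy picks "a" (gain 4), then "c" (gain 2), covering all six elements
```

Adaptive submodularity is what lets the paper extend this style of greedy guarantee from fixed sets to batch-mode policies that observe labels between batches.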