Results 1–10 of 28
Markov chains for exploring posterior distributions
Annals of Statistics, 1994
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 751 (6 self)
 Add to MetaCart
Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at
Universal Limit Laws for Depths in Random Trees
SIAM Journal on Computing, 1998
"... Random binary search trees, bary search trees, medianof(2k+1) trees, quadtrees, simplex trees, tries, and digital search trees are special cases of random split trees. For these trees, we o#er a universal law of large numbers and a limit law for the depth of the last inserted point, as well as a ..."
Abstract

Cited by 50 (8 self)
 Add to MetaCart
Random binary search trees, bary search trees, medianof(2k+1) trees, quadtrees, simplex trees, tries, and digital search trees are special cases of random split trees. For these trees, we o#er a universal law of large numbers and a limit law for the depth of the last inserted point, as well as a law of large numbers for the height.
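For random binary search trees, the law of large numbers in this abstract specializes to D_n / ln n → 2 in probability, where D_n is the depth of the last inserted key. A small simulation sketch (plain Python; the node layout, n, and trial count are my own illustrative choices, not from the paper):

```python
import math
import random

def last_insert_depth(n, rng):
    """Insert n distinct keys in random order into a plain binary search
    tree and return the depth (root = 0) at which the last key lands."""
    keys = list(range(n))
    rng.shuffle(keys)
    root = [keys[0], None, None]        # node = [key, left, right]
    depth_last = 0
    for k in keys[1:]:
        node, depth = root, 0
        while True:
            slot = 1 if k < node[0] else 2   # go left or right
            if node[slot] is None:
                node[slot] = [k, None, None]
                depth_last = depth + 1
                break
            node, depth = node[slot], depth + 1
    return depth_last

rng = random.Random(0)
n, trials = 2000, 200
avg = sum(last_insert_depth(n, rng) for _ in range(trials)) / trials
ratio = avg / math.log(n)               # should be near 2 for large n
```

Averaging over many trials, the ratio drifts toward the limiting constant 2, though convergence in ln n is slow.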
Improving Hit-and-Run for global optimization
J. Global Optim., 1993
"... Abstract. Improving HitandRun is a random search algorithm for global optimization that at each iteration generates a candidate point for improvement that is uniformly distributed along a randomly chosen direction within the feasible region. The candidate point is accepted as the next iterate if i ..."
Abstract

Cited by 20 (6 self)
 Add to MetaCart
Abstract. Improving HitandRun is a random search algorithm for global optimization that at each iteration generates a candidate point for improvement that is uniformly distributed along a randomly chosen direction within the feasible region. The candidate point is accepted as the next iterate if it offers an improvement over the current iterate. We show that for positive definite quadratic programs, the expected number of function evaluations needed to arbitrarily well approximate the optimal solution is at most O(n 5~2) where n is the dimension of the problem. Improving HitandRun when applied to global optimization problems can therefore be expected to converge polynomially fast as it approaches the global optimum. Key words. Random search, Monte Carlo optimization, algorithm complexity, global optimization. 1.
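The iteration described in this abstract can be sketched as follows (a minimal sketch: the box feasible region and toy quadratic are my own illustrative choices; the paper's complexity analysis covers positive definite quadratic programs):

```python
import numpy as np

def improving_hit_and_run(f, lo, hi, x0, iters=2000, seed=0):
    """Sketch of Improving Hit-and-Run on a box [lo, hi]^n.

    Each iteration: pick a uniform random direction, find the feasible
    chord through the current point, sample a candidate uniformly on
    that chord, and keep it only if it improves f.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(iters):
        d = rng.standard_normal(n)
        d /= np.linalg.norm(d)          # uniform direction on the sphere
        # feasible step range [t_min, t_max] so that lo <= x + t*d <= hi
        with np.errstate(divide="ignore"):
            t1 = (lo - x) / d
            t2 = (hi - x) / d
        t_min = np.max(np.minimum(t1, t2))
        t_max = np.min(np.maximum(t1, t2))
        t = rng.uniform(t_min, t_max)   # uniform point on the chord
        y = x + t * d
        fy = f(y)
        if fy < fx:                      # accept only improvements
            x, fx = y, fy
    return x, fx

# toy positive definite quadratic in dimension 5
quad = lambda z: float(z @ z)
x_best, f_best = improving_hit_and_run(
    quad, -np.ones(5), np.ones(5), np.full(5, 0.9))
```

Because only improving candidates are accepted, the sequence of objective values is monotone; the paper's O(n^{5/2}) bound concerns how many candidates must be generated.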
APPROXIMATE VOLUME AND INTEGRATION FOR BASIC SEMIALGEBRAIC SETS
"... Given a basic compact semialgebraic set K ⊂ R n, we introduce a methodology that generates a sequence converging to the volume of K. This sequence is obtained from optimal values of a hierarchy of either semidefinite or linear programs. Not only the volume but also every finite vector of moments o ..."
Abstract

Cited by 7 (5 self)
 Add to MetaCart
Given a basic compact semialgebraic set K ⊂ R n, we introduce a methodology that generates a sequence converging to the volume of K. This sequence is obtained from optimal values of a hierarchy of either semidefinite or linear programs. Not only the volume but also every finite vector of moments of the probability measure that is uniformly distributed on K can be approximated as closely as desired, and so permits to approximate the integral on K of any given polynomial; extension to integration against some weight functions is also provided. Finally, some numerical issues associated with the algorithms involved are briefly discussed.
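For context on the problem being solved, a naive Monte Carlo baseline (not the paper's SDP/LP hierarchy, which gives deterministic convergent bounds) estimates vol(K) by sampling a bounding box; the unit-disk example is my own:

```python
import random

def mc_volume(indicator, box_lo, box_hi, n_samples=200_000, seed=1):
    """Naive Monte Carlo baseline for vol(K): sample uniformly from a
    bounding box B containing K and scale the hit fraction by vol(B)."""
    rng = random.Random(seed)
    box_vol = 1.0
    for lo, hi in zip(box_lo, box_hi):
        box_vol *= hi - lo
    hits = 0
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in zip(box_lo, box_hi)]
        if indicator(x):
            hits += 1
    return box_vol * hits / n_samples

# K = unit disk {x^2 + y^2 <= 1}, a basic semialgebraic set; vol(K) = pi
disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
est = mc_volume(disk, [-1.0, -1.0], [1.0, 1.0])
```

The Monte Carlo error decays only as O(n_samples^{-1/2}), which is one motivation for the deterministic hierarchy proposed in the paper.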
Adaptive Search with Stochastic Acceptance Probabilities for Global Optimization
"... We present an extension of continuous domain Simulated Annealing. Our algorithm employs a globally reaching candidate generator, adaptive stochastic acceptance probabilities, and converges in probability to the optimal value. An application to simulationoptimization problems with asymptotically dim ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
We present an extension of continuous domain Simulated Annealing. Our algorithm employs a globally reaching candidate generator, adaptive stochastic acceptance probabilities, and converges in probability to the optimal value. An application to simulationoptimization problems with asymptotically diminishing errors is presented. Numerical results on a noisy proteinfolding problem are included.
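A generic simulated-annealing skeleton in the spirit of this abstract (a sketch only: the uniform global proposal illustrates a "globally reaching" candidate generator, but the paper's specific adaptive acceptance schedule is not reproduced here, and the Rastrigin test function is my own choice):

```python
import math
import random

def annealing(f, lo, hi, iters=5000, seed=2):
    """Simulated annealing with a globally reaching candidate generator:
    candidates are uniform over the whole box, so any region can be
    proposed from any state, and acceptance is temperature-dependent."""
    rng = random.Random(seed)
    x = [rng.uniform(a, b) for a, b in zip(lo, hi)]
    fx = f(x)
    best, f_best = list(x), fx
    for k in range(1, iters + 1):
        temp = 1.0 / math.log(k + 1)     # slowly cooling temperature
        y = [rng.uniform(a, b) for a, b in zip(lo, hi)]
        fy = f(y)
        # accept improvements always, uphill moves with decaying probability
        if fy < fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy
        if fx < f_best:
            best, f_best = list(x), fx
    return best, f_best

# multimodal toy objective with many local minima
rastrigin = lambda z: 10 * len(z) + sum(
    v * v - 10 * math.cos(2 * math.pi * v) for v in z)
sol, val = annealing(rastrigin, [-5.12, -5.12], [5.12, 5.12])
```

The global reach of the proposal is what prevents the chain from being permanently trapped in a single basin, which is the property the convergence-in-probability result relies on.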
A noninformative Bayesian approach to finite population sampling using auxiliary variables
, 2008
"... ..."
Statistical Validation for Uncertainty Models
Lecture Notes in Control and Information Sciences, 1994
"... Statistical model validation is treated for a class of parametric uncertainty models and also for a more general class of nonparametric uncertainty models. We show that, in many cases of interest, this problem reduces to computing relative weighted volumes of convex sets in R N (where N is the num ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
Statistical model validation is treated for a class of parametric uncertainty models and also for a more general class of nonparametric uncertainty models. We show that, in many cases of interest, this problem reduces to computing relative weighted volumes of convex sets in R N (where N is the number of uncertain parameters) for parametric uncertainty models, and to computing the limit of a sequence (Vk ) 1 1 of relative weighted volumes of convex sets in R k for nonparametric uncertainty models. We then present and discuss a randomized algorithm based on gas kinetics for probable approximate computation of these volumes. We also review the existing HitandRun family of algorithms for this purpose. Finally, we introduce the notion of testability to describe uncertainty models that can be statistically validated with arbitrary reliability using inputoutput data records of sufficient (finite) length. It is then shown that some common nonparametric uncertainty models, such as thos...
On Statistical Model Validation
Journal of Dynamic Systems, Measurement, and Control, 1994
"... In this paper we formulate a particular statistical model validation problem in which we wish to determine the probability that a certain hypothesized parametric uncertainty model is consistent with a given inputoutput data record. Using a Bayesian approach and ideas from the field of hypothesis te ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
In this paper we formulate a particular statistical model validation problem in which we wish to determine the probability that a certain hypothesized parametric uncertainty model is consistent with a given inputoutput data record. Using a Bayesian approach and ideas from the field of hypothesis testing, we show that in many cases of interest this problem reduces to computing relative weighted volumes of convex sets in R N (where N is the number of uncertain parameters). We also present and discuss a randomized algorithm based on gas kinetics, as well as the existing HitandRun family of algorithms, for probable approximate computation of these volumes. 1 Introduction Motivated by the desire to produce identified models that are compatible with modern robust control design methodologies, many researchers have recently been working in the area of controloriented system identification (see for example [5, 6, 7, 11, 12, 16, 17, 19, 23, 24, 25, 27, 28] and the references cited therein...
DIRECTION CHOICE FOR ACCELERATED CONVERGENCE IN HIT-AND-RUN SAMPLING
, 1994
"... HitandRun algorithms are Monte Carlo procedures for generating points that are asymptotically distributed according to general absolutely continuous target distributions G over open bounded regions S. Applications include nonredundant constraint identification, global optimization, and Monte Carlo ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
HitandRun algorithms are Monte Carlo procedures for generating points that are asymptotically distributed according to general absolutely continuous target distributions G over open bounded regions S. Applications include nonredundant constraint identification, global optimization, and Monte Carlo integration. These algorithms are reversible random walks which commonly apply uniformly distributed step directions. We investigate nonuniform direction choice and show that under minimal restrictions on the region S and target distribution G, there exists a unique direction choice distribution, characterized by necessary and sufficient conditions depending on S and G, which optimizes a bound on the rate of convergence. We provide computational results demonstrating greatly accelerated convergence for this optimizing direction choice and We consider the Monte Carlo problem of generating a sample of points according to a given probability distribution G over an open, bounded region S in ℜn. After motivating the problem through several applications, this section discusses the limitations of exact sampling
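The baseline that this paper accelerates — Hit-and-Run with uniformly distributed step directions — can be sketched for a uniform target over a convex region given only by a membership oracle (a sketch under assumptions of my own: the bisection chord search assumes S is convex, and the unit-ball example is illustrative; nonuniform targets G would require sampling the 1-D conditional of G along the chord instead of a uniform point):

```python
import numpy as np

def hit_and_run_uniform(inside, x0, radius, n_steps=1000, seed=3):
    """Hit-and-Run with uniform directions, targeting the uniform
    distribution over a bounded convex region S given by the membership
    oracle `inside`.  The chord through the current point is located by
    bisection on each side of the point."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)           # uniform step direction
        ts = []
        for sign in (+1.0, -1.0):
            lo_t, hi_t = 0.0, 2.0 * radius   # 2*radius certainly exits S
            for _ in range(40):              # bisect to the boundary
                mid = 0.5 * (lo_t + hi_t)
                if inside(x + sign * mid * d):
                    lo_t = mid
                else:
                    hi_t = mid
            ts.append(sign * lo_t)
        t = rng.uniform(ts[1], ts[0])    # uniform point on the chord
        x = x + t * d
        samples.append(x.copy())
    return np.array(samples)

# S = open unit ball in R^3; start the chain at the center
ball = lambda p: float(np.dot(p, p)) < 1.0
chain = hit_and_run_uniform(ball, np.zeros(3), radius=1.0)
```

The paper's contribution is to replace the uniform direction distribution with an optimized nonuniform one; the chord-and-resample structure above is unchanged.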
Discrete hit-and-run for sampling points from arbitrary distributions over subsets of integer hyperrectangles
 Operations Research
"... We consider the problem of sampling a point from an arbitrary distribution π over an arbitrary subset S of an integer hyperrectangle. Neither the distribution π nor the support set S are assumed to be available as explicit mathematical equations but may only be defined through oracles and in partic ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
We consider the problem of sampling a point from an arbitrary distribution π over an arbitrary subset S of an integer hyperrectangle. Neither the distribution π nor the support set S are assumed to be available as explicit mathematical equations but may only be defined through oracles and in particular computer programs. This problem commonly occurs in blackbox discrete optimization as well as counting and estimation problems. The generality of this setting and highdimensionality of S precludes the application of conventional random variable generation methods. As a result, we turn to Markov Chain Monte Carlo (MCMC) sampling, where we execute an ergodic Markov chain that converges to π so that the distribution of the point delivered after sufficiently many steps can be made arbitrarily close to π. Unfortunately, classical Markov chains such as the nearest neighbor random walk or the coordinate direction random walk fail to converge to π as they can get trapped in isolated regions of the support set. To surmount this difficulty, we propose Discrete HitandRun (DHR), a Markov chain motivated by the HitandRun algorithm known to be the most efficient method for sampling from logconcave distributions over convex bodies in Rn. We prove that the limiting distribution of DHR is π as desired, thus enabling us to sample approximately from π by delivering the last iterate of a sufficiently
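A much-simplified sketch of the line-based idea in the oracle setting (this is not the paper's exact DHR chain, whose candidate-generation step differs; here a candidate is drawn uniformly among all points of S on a full axis line through the current state, so the proposal can jump over gaps that trap a nearest-neighbor walk, and a Metropolis filter targets π):

```python
import random

def line_metropolis(pi, in_S, lo, hi, x0, steps=2000, seed=4):
    """Metropolis chain over S with line proposals: choose a coordinate,
    propose uniformly among ALL points of S on that axis line through
    the current state.  pi and in_S are black-box oracles.  Returns the
    full trajectory."""
    rng = random.Random(seed)
    x = list(x0)
    traj = [list(x)]
    n = len(x)
    for _ in range(steps):
        i = rng.randrange(n)
        # every feasible value along coordinate i (includes the current one)
        line = [v for v in range(lo[i], hi[i] + 1)
                if in_S(x[:i] + [v] + x[i + 1:])]
        y = x[:i] + [rng.choice(line)] + x[i + 1:]
        # proposal is symmetric among points on the same line,
        # so the Metropolis ratio is just pi(y)/pi(x)
        if rng.random() < min(1.0, pi(y) / pi(x)):
            x = y
        traj.append(list(x))
    return traj

# S = {0..9}^2 minus the band 4 <= x0 <= 5: two components for a
# nearest-neighbor walk, but line proposals jump the gap
split_S = lambda p: 0 <= p[0] <= 9 and 0 <= p[1] <= 9 and not (4 <= p[0] <= 5)
traj = line_metropolis(lambda p: 1.0, split_S, [0, 0], [9, 9], [0, 0])
```

On this example the trajectory visits both components of S, which is exactly what the nearest-neighbor and coordinate-direction walks cited in the abstract fail to do.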