Results 1–10 of 53
The geometry of log-concave functions and an O∗(n³) sampling algorithm
Cited by 38 (12 self)
The class of log-concave functions in R^n is a common generalization of Gaussians and of indicator functions of convex sets. Motivated by the problem of sampling from a log-concave density function, we study their geometry and introduce a technique for “smoothing” them out. This leads to an efficient sampling algorithm (by a random walk) with no assumptions on the local smoothness of the density function. After appropriate preprocessing, the algorithm produces a point from approximately the right distribution in time O∗(n⁴), and in amortized time O∗(n³) if many sample points are needed (where the asterisk indicates that dependence on the error parameter and factors of log n are not shown).
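The smoothing-based sampler itself is involved, but the basic primitive the abstract refers to, a random walk with a Metropolis filter driven only by evaluations of the density, can be sketched as follows (a minimal illustration, not the paper's algorithm; `logf`, `delta`, and `steps` are illustrative names):

```python
import numpy as np

def ball_walk_sample(logf, x0, delta=0.25, steps=5000, rng=None):
    """Metropolis ball walk targeting a density proportional to exp(logf).

    Needs only evaluations of logf (no smoothness assumptions), which is
    the spirit of the oracle model in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # Propose a uniform random point in the delta-ball around x.
        d = rng.standard_normal(x.size)
        d = d / np.linalg.norm(d) * delta * rng.random() ** (1.0 / x.size)
        y = x + d
        # Metropolis filter: accept with probability min(1, f(y)/f(x)).
        if np.log(rng.random()) < logf(y) - logf(x):
            x = y
    return x
```

For a standard Gaussian (`logf = lambda v: -0.5 * v @ v`) the chain drifts toward the bulk of the distribution; the O∗ guarantees in the paper depend on the preprocessing and step sizes analyzed there.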
Geometric random walks: a survey
In Combinatorial and Computational Geometry, 2005
Cited by 36 (4 self)
The developing theory of geometric random walks is outlined here. Three aspects are discussed: general methods for estimating convergence (the “mixing” rate); isoperimetric inequalities in R^n and their intimate connection to random walks; and algorithms for fundamental problems (volume computation and convex optimization) that are based on sampling by random walks.
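As a concrete instance of the walks the survey covers, here is a sketch of the ball walk for approximately uniform sampling from a convex body given only a membership oracle (`mem`, `delta`, and `steps` are illustrative names; the survey's mixing bounds dictate the right choices in practice):

```python
import numpy as np

def ball_walk_uniform(mem, x0, delta=0.1, steps=5000, rng=None):
    """Ball walk on a convex body: from x, propose a uniform point in the
    delta-ball around x and move there only if it stays in the body."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        d = rng.standard_normal(x.size)
        y = x + d / np.linalg.norm(d) * delta * rng.random() ** (1.0 / x.size)
        if mem(y):  # stay put if the proposal leaves the body
            x = y
    return x
```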
The Markov Chain Monte Carlo Revolution
Cited by 28 (1 self)
The use of simulation for high-dimensional intractable computations has revolutionized applied mathematics. Designing, improving and understanding the new tools leads to (and leans on) fascinating mathematics, from representation theory through microlocal analysis.
Query by committee made real
In Advances in Neural Information Processing Systems 18, 2005
Cited by 27 (2 self)
Training a learning algorithm is a costly task. A major goal of active learning is to reduce this cost. In this paper we introduce a new algorithm, KQBC, which is capable of actively learning large-scale problems by using selective sampling. The algorithm overcomes the costly sampling step of the well-known Query By Committee (QBC) algorithm by projecting onto a low-dimensional space. KQBC also enables the use of kernels, providing a simple way of extending QBC to the nonlinear scenario. Sampling the low-dimensional space is done using the hit-and-run random walk. We demonstrate the success of this novel algorithm by applying it to both artificial and real-world problems.
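The committee step that KQBC accelerates can be illustrated with the simplest possible stand-in: sample two hypotheses consistent with the labeled data (here by naive rejection sampling, where the paper uses hit-and-run in a projected low-dimensional space) and query a point only if they disagree. All names are illustrative:

```python
import numpy as np

def qbc_should_query(X, y, x_new, rng, tries=500):
    """Query-by-committee (sketch): query x_new iff two random linear
    separators consistent with (X, y) disagree on its label."""
    def sample_consistent():
        # Naive rejection sampling from the version space; KQBC replaces
        # this costly step with a hit-and-run walk in a low-dim projection.
        for _ in range(tries):
            w = rng.standard_normal(X.shape[1])
            if np.all(np.sign(X @ w) == y):
                return w
        raise RuntimeError("no consistent hypothesis found")
    w1, w2 = sample_consistent(), sample_consistent()
    return np.sign(x_new @ w1) != np.sign(x_new @ w2)
```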
Fast algorithms for log-concave functions: sampling, rounding, integration and optimization
In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, 2006
Cited by 26 (7 self)
We prove that the hit-and-run random walk is rapidly mixing for an arbitrary log-concave distribution starting from any point in the support. This extends the work of [26], where this was shown for an important special case, and settles the main conjecture formulated there. From this result, we derive asymptotically faster algorithms in the general oracle model for sampling, rounding, integration and maximization of log-concave functions, improving or generalizing the main results of [24, 25, 1] and [16] respectively. The algorithms for integration and optimization both use sampling and are surprisingly similar.
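The hit-and-run walk analyzed here is easy to state: pick a uniformly random line through the current point and resample from the target density restricted to that line. Below is a minimal sketch with the one-dimensional step done by discretizing the chord; the paper's analysis assumes exact line sampling, and `span` and `grid` are illustrative parameters:

```python
import numpy as np

def hit_and_run(logf, x0, steps=1000, span=10.0, grid=201, rng=None):
    """Hit-and-run for a density proportional to exp(logf)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    ts = np.linspace(-span, span, grid)
    for _ in range(steps):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                      # uniform random direction
        w = np.exp([logf(x + t * d) for t in ts])   # density along the chord
        x = x + rng.choice(ts, p=w / w.sum()) * d   # 1-D resampling step
    return x
```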
Mathematical aspects of mixing times in Markov chains
Foundations and Trends in Theoretical Computer Science, 2006
Efficient Markov Chain Monte Carlo methods for decoding population spike trains
To appear in Neural Computation, 2010
Cited by 22 (13 self)
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log-concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly non-Gaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for Gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the “hit-and-run” algorithm performed better than other MCMC methods. Using these ...
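The hybrid/Hamiltonian Monte Carlo method the paper tunes can be sketched generically: a leapfrog integration of Hamiltonian dynamics followed by a Metropolis correction. This is textbook HMC, not the paper's GLM-specific version; `eps` (step size) and `L` (leapfrog steps) are illustrative tuning parameters:

```python
import numpy as np

def hmc_step(logp, grad_logp, x, eps=0.1, L=20, rng=None):
    """One HMC transition for a target with log-density logp."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    p = rng.standard_normal(x.size)              # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_logp(x_new)        # leapfrog: half momentum step
    for _ in range(L - 1):
        x_new += eps * p_new
        p_new += eps * grad_logp(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_logp(x_new)        # final half momentum step
    # Metropolis accept/reject on the change in the Hamiltonian.
    h_old = -logp(x) + 0.5 * float(p @ p)
    h_new = -logp(x_new) + 0.5 * float(p_new @ p_new)
    return x_new if np.log(rng.random()) < h_old - h_new else x
```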
Enumerative lattice algorithms in any norm via M-ellipsoid coverings
In FOCS, 2011
Cited by 15 (12 self)
We give a novel algorithm for enumerating lattice points in any convex body, and give applications to several classic lattice problems, including the Shortest and Closest Vector Problems (SVP and CVP, respectively) and Integer Programming (IP). Our enumeration technique relies on a classical concept from asymptotic convex geometry known as the M-ellipsoid, and uses as a crucial subroutine the recent algorithm of Micciancio and Voulgaris (STOC 2010) for lattice problems in the ℓ2 norm. As a main technical contribution, which may be of independent interest, we build on the techniques of Klartag (Geometric and Functional Analysis, 2006) to give an expected 2^O(n)-time algorithm for computing an M-ellipsoid for any n-dimensional convex body. As applications, we give deterministic 2^O(n)-time and -space algorithms for solving exact SVP, and exact CVP when the target point is sufficiently close to the lattice, on n-dimensional lattices in any (semi-)norm given an M-ellipsoid of the unit ball. In many norms of interest, including all ℓp norms, an M-ellipsoid is computable in deterministic poly(n) time, in which case these algorithms are fully deterministic. Here our approach may be seen as a derandomization of the “AKS sieve” for exact SVP and CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). As a further application of our SVP algorithm, we derive an expected O(f∗(n))^n-time algorithm for Integer Programming, where f∗(n) denotes the optimal bound in the so-called “flatness theorem,” which satisfies f∗(n) = O(n^(4/3) polylog(n)) and is conjectured to be f∗(n) = Θ(n). Our runtime improves upon the previous best of O(n^2)^n by Hildebrand and Köppe (2010).
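For intuition about the lattice problems involved, here is the naive baseline that such enumeration techniques improve on exponentially: brute-force SVP over a box of integer coefficients. The `bound` cutoff is illustrative and not provably sufficient in general; the paper's M-ellipsoid covering is precisely what replaces such a box with a shape adapted to the norm:

```python
import itertools
import numpy as np

def shortest_vector_bruteforce(B, bound=3):
    """Brute-force SVP: shortest nonzero vector c @ B over integer
    coefficient vectors c in [-bound, bound]^k (B has one basis row
    per lattice dimension)."""
    best, best_len = None, np.inf
    for c in itertools.product(range(-bound, bound + 1), repeat=B.shape[0]):
        if not any(c):
            continue                      # skip the zero vector
        v = np.asarray(c, dtype=float) @ B
        n = np.linalg.norm(v)
        if n < best_len:
            best, best_len = v, n
    return best
```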
Probabilistic temporal logic falsification of cyber-physical systems
 ACM Transactions on Embedded Computing Systems
Cited by 13 (11 self)
We present a Monte-Carlo optimization technique for finding system behaviors that falsify a Metric Temporal Logic (MTL) property. Our approach performs a random walk over the space of system inputs guided by a robustness metric defined by the MTL property. The robustness value guides the search for a falsifying behavior by steering it toward trajectories with smaller robustness values. The resulting testing framework can be applied to a wide class of Cyber-Physical Systems (CPS). We show through experiments on complex system models that using our framework can help automatically falsify properties with more consistency as compared to other means such as uniform sampling.
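The robustness-guided search can be illustrated by a generic Metropolis-style minimization over an input box: propose a perturbed input, keep it if it lowers the robustness value (occasionally accepting uphill moves), and stop once robustness goes negative, which witnesses a violation. This is a sketch of the general idea, not the paper's tool; all names are illustrative:

```python
import numpy as np

def falsify(robustness, lo, hi, iters=2000, temp=1.0, rng=None):
    """Monte-Carlo search for an input with negative MTL robustness."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    x = rng.uniform(lo, hi)
    r = robustness(x)
    for _ in range(iters):
        if r < 0:                         # property falsified
            break
        y = np.clip(x + 0.1 * (hi - lo) * rng.standard_normal(x.size), lo, hi)
        ry = robustness(y)
        # Accept downhill moves always, uphill moves with Metropolis prob.
        if ry < r or rng.random() < np.exp((r - ry) / temp):
            x, r = y, ry
    return x, r
```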
Testing Geometric Convexity
Cited by 8 (2 self)
We consider the problem of determining whether a given set S in R^n is approximately convex, i.e., whether there is a convex set K ⊆ R^n such that the volume of their symmetric difference is at most ε·vol(S) for some given ε. When the set is presented only by a membership oracle and a random oracle, we show that the problem can be solved with high probability using poly(n)(c/ε)^n oracle calls and computation time. We complement this result with an exponential lower bound for the natural algorithm that tests convexity along “random” lines. We conjecture that a simple 2-dimensional version of this algorithm has polynomial complexity.
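The line-based tester has a natural midpoint variant that is easy to state: draw two points of S from the random oracle and check that their midpoint passes the membership oracle; any failure certifies non-convexity of S. A sketch with illustrative names:

```python
import numpy as np

def looks_convex(mem, sample, trials=200, rng=None):
    """Midpoint convexity test: reject iff some sampled pair of points
    of S has its midpoint outside S (per the membership oracle)."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(trials):
        x, y = sample(rng), sample(rng)
        if not mem(0.5 * (x + y)):
            return False                  # witness of non-convexity
    return True
```

For the unit ball this always passes, and for a union of two far-apart balls it fails quickly; the paper's lower bound shows that on harder sets such line-based tests can need exponentially many samples.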