Results 1–10 of 34
Geometric random walks: a survey
Combinatorial and Computational Geometry, 2005
Cited by 28 (5 self)
Abstract. The developing theory of geometric random walks is outlined here. Three aspects are discussed: general methods for estimating convergence (the "mixing" rate); isoperimetric inequalities in R^n and their intimate connection to random walks; and algorithms for fundamental problems (volume computation and convex optimization) that are based on sampling by random walks.
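For intuition, the simplest geometric random walk of the kind surveyed here is the ball walk: step to a uniform point in a small ball around the current point, staying put whenever the proposal leaves the body. A minimal sketch assuming only a membership oracle (the function names, step size, and example body are illustrative, not from the survey):

```python
import numpy as np

def ball_walk(in_body, x0, delta=0.1, steps=1000, rng=None):
    """Ball walk: a basic geometric random walk for sampling a convex
    body presented only by a membership oracle `in_body`."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # Propose a uniform point in the ball of radius delta around x.
        u = rng.standard_normal(x.shape)
        u *= delta * rng.random() ** (1.0 / x.size) / np.linalg.norm(u)
        y = x + u
        if in_body(y):          # stay put if the proposal leaves the body
            x = y
    return x

# Example: sample (approximately) uniformly from the cube [-1, 1]^5.
cube = lambda p: np.all(np.abs(p) <= 1.0)
sample = ball_walk(cube, np.zeros(5), delta=0.3, steps=2000, rng=0)
```

The walk never leaves the body by construction; the survey's mixing-rate machinery is what controls how many steps are needed before `sample` is close to uniform.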
Fast algorithms for log-concave functions: sampling, rounding, integration and optimization
Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, 2006
Cited by 24 (7 self)
We prove that the hit-and-run random walk is rapidly mixing for an arbitrary log-concave distribution starting from any point in the support. This extends the work of [26], where this was shown for an important special case, and settles the main conjecture formulated there. From this result, we derive asymptotically faster algorithms in the general oracle model for sampling, rounding, integration and maximization of log-concave functions, improving or generalizing the main results of [24, 25, 1] and [16] respectively. The algorithms for integration and optimization both use sampling and are surprisingly similar.
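The hit-and-run step itself is easy to state: pick a uniform random direction, then resample along the chord through the current point according to the restricted density. A rough sketch for a log-concave density on a box, with the 1-D restriction crudely discretized on a grid (an approximation for illustration; the exact step samples the restriction continuously, and the names and grid size are invented):

```python
import numpy as np

def hit_and_run_step(logf, x, lo, hi, rng, grid=256):
    """One hit-and-run step for a density proportional to exp(logf)
    supported on the box [lo, hi]^n. The 1-D restriction along a random
    line is discretized on a grid - a crude but serviceable stand-in."""
    d = rng.standard_normal(x.size)
    d /= np.linalg.norm(d)
    # Chord of the box along direction d through x: solve lo <= x + t*d <= hi.
    with np.errstate(divide="ignore"):
        t1 = (lo - x) / d
        t2 = (hi - x) / d
    t_min = np.max(np.minimum(t1, t2))
    t_max = np.min(np.maximum(t1, t2))
    ts = np.linspace(t_min, t_max, grid)
    logw = np.array([logf(x + t * d) for t in ts])
    w = np.exp(logw - logw.max())          # stable normalization
    t = rng.choice(ts, p=w / w.sum())
    return x + t * d

# Example: a standard Gaussian (log-concave) restricted to [-1, 1]^3.
rng = np.random.default_rng(1)
lo, hi = -np.ones(3), np.ones(3)
logf = lambda p: -0.5 * p @ p              # log-density up to a constant
x = np.zeros(3)
for _ in range(500):
    x = hit_and_run_step(logf, x, lo, hi, rng)
```

The result proved here is what justifies starting such a chain "from any point in the support" rather than from a warm start.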
Query by committee made real
In Advances in Neural Information Processing Systems 18, 2005
Cited by 21 (1 self)
Training a learning algorithm is a costly task. A major goal of active learning is to reduce this cost. In this paper we introduce a new algorithm, KQBC, which is capable of actively learning large-scale problems by using selective sampling. The algorithm overcomes the costly sampling step of the well-known Query By Committee (QBC) algorithm by projecting onto a low-dimensional space. KQBC also enables the use of kernels, providing a simple way of extending QBC to the nonlinear scenario. Sampling the low-dimensional space is done using the hit-and-run random walk. We demonstrate the success of this novel algorithm by applying it to both artificial and real-world problems.
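The selective-sampling idea behind QBC can be illustrated in one dimension, far from the paper's kernelized setting: draw a committee of hypotheses consistent with the labels seen so far and query a label only where the committee disagrees. A toy sketch for threshold functions, where the version space is just an interval (all names and the stream setup are invented for illustration):

```python
import numpy as np

def qbc_stream(xs, oracle, rng):
    """Toy Query-by-Committee for 1-D thresholds: maintain the interval
    (lo, hi) of thresholds consistent with past answers, draw a committee
    of two random consistent thresholds, query only on disagreement."""
    lo, hi = 0.0, 1.0
    queries = 0
    for x in xs:
        t1, t2 = rng.uniform(lo, hi, 2)     # committee of two hypotheses
        if (x >= t1) != (x >= t2):          # disagreement -> pay for a label
            queries += 1
            if oracle(x):                   # true label: x above threshold
                hi = min(hi, x)
            else:
                lo = max(lo, x)
    return (lo + hi) / 2, queries

rng = np.random.default_rng(0)
xs = rng.random(300)                        # unlabeled stream
true_t = 0.37
est, queries = qbc_stream(xs, lambda x: x >= true_t, rng)
```

In KQBC the interval is replaced by a high-dimensional convex version space, which is why an efficient sampler such as hit-and-run becomes the crucial ingredient.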
Mathematical aspects of mixing times in Markov chains
Foundations and Trends in Theoretical Computer Science, 2006
The Markov chain Monte Carlo revolution
Cited by 18 (1 self)
Abstract. The use of simulation for high-dimensional intractable computations has revolutionized applied mathematics. Designing, improving and understanding the new tools leads to (and leans on) fascinating mathematics, from representation theory through microlocal analysis.
Efficient Markov Chain Monte Carlo methods for decoding population spike trains
To appear, Neural Computation, 2010
Cited by 16 (11 self)
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log-concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly non-Gaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for Gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the "hit-and-run" algorithm performed better than other MCMC methods.
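For reference, a bare-bones hybrid/Hamiltonian Monte Carlo step of the kind compared in this paper looks as follows. This is a generic textbook leapfrog sketch, not the authors' tuned implementation; the step size, trajectory length, and Gaussian example are arbitrary choices:

```python
import numpy as np

def hmc_step(grad_logp, logp, x, rng, eps=0.1, n_leap=20):
    """One HMC step: resample momentum, simulate Hamiltonian dynamics
    with the leapfrog integrator, then Metropolis-correct."""
    p = rng.standard_normal(x.size)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_logp(x_new)      # initial half momentum step
    for _ in range(n_leap):
        x_new += eps * p_new                   # full position step
        p_new += eps * grad_logp(x_new)        # full momentum step
    p_new -= 0.5 * eps * grad_logp(x_new)      # trim back to a half step
    # Accept/reject on the joint (position, momentum) energy.
    log_accept = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
    return x_new if np.log(rng.random()) < log_accept else x

# Example: a standard 2-D Gaussian posterior, started far from the mode.
rng = np.random.default_rng(0)
logp = lambda z: -0.5 * z @ z
grad = lambda z: -z
x = np.array([3.0, -3.0])
for _ in range(200):
    x = hmc_step(grad, logp, x, rng)
```

The gradient information is what lets HMC take long, informed moves in the smooth Gaussian-prior case; at sharp edges and corners the gradient is uninformative, which is consistent with hit-and-run winning there.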
Enumerative lattice algorithms in any norm via M-ellipsoid coverings
In FOCS, 2011
Cited by 8 (7 self)
We give a novel algorithm for enumerating lattice points in any convex body, and give applications to several classic lattice problems, including the Shortest and Closest Vector Problems (SVP and CVP, respectively) and Integer Programming (IP). Our enumeration technique relies on a classical concept from asymptotic convex geometry known as the M-ellipsoid, and uses as a crucial subroutine the recent algorithm of Micciancio and Voulgaris (STOC 2010) for lattice problems in the ℓ2 norm. As a main technical contribution, which may be of independent interest, we build on the techniques of Klartag (Geometric and Functional Analysis, 2006) to give an expected 2^O(n)-time algorithm for computing an M-ellipsoid for any n-dimensional convex body. As applications, we give deterministic 2^O(n)-time and -space algorithms for solving exact SVP, and exact CVP when the target point is sufficiently close to the lattice, on n-dimensional lattices in any (semi-)norm, given an M-ellipsoid of the unit ball. In many norms of interest, including all ℓp norms, an M-ellipsoid is computable in deterministic poly(n) time, in which case these algorithms are fully deterministic. Here our approach may be seen as a derandomization of the "AKS sieve" for exact SVP and CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). As a further application of our SVP algorithm, we derive an expected O(f*(n))^n-time algorithm for Integer Programming, where f*(n) denotes the optimal bound in the so-called "flatness theorem," which satisfies f*(n) = O(n^{4/3} polylog(n)) and is conjectured to be f*(n) = Θ(n). Our runtime improves upon the previous best of O(n^2)^n by Hildebrand and Köppe (2010).
Testing Geometric Convexity
Cited by 7 (2 self)
Abstract. We consider the problem of determining whether a given set S in R^n is approximately convex, i.e., whether there is a convex set K ⊆ R^n such that the volume of their symmetric difference is at most ε·vol(S) for some given ε. When the set is presented only by a membership oracle and a random oracle, we show that the problem can be solved with high probability using poly(n)(c/ε)^n oracle calls and computation time. We complement this result with an exponential lower bound for the natural algorithm that tests convexity along "random" lines. We conjecture that a simple 2-dimensional version of this algorithm has polynomial complexity.
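The "random lines" tester the lower bound speaks to can be caricatured in a few lines: draw two random points of S and check that their midpoint is still in S; a convex set never fails, while frequent failures witness non-convexity. A sketch under invented oracles (the membership test, sampler, and L-shaped example are all illustrative):

```python
import numpy as np

def segment_convexity_test(member, sampler, trials=500, rng=None):
    """Estimate how often a random chord leaves the set: draw two random
    points of S and test whether their midpoint also lies in S."""
    rng = np.random.default_rng(rng)
    fails = 0
    for _ in range(trials):
        p, q = sampler(rng), sampler(rng)
        if not member(0.5 * (p + q)):
            fails += 1
    return fails / trials

# Example: an L-shaped (non-convex) region in the plane.
member = lambda z: (0 <= z[0] <= 2 and 0 <= z[1] <= 1) or \
                   (0 <= z[0] <= 1 and 0 <= z[1] <= 2)

def sampler(rng):                       # rejection-sample a point of S
    while True:
        z = rng.random(2) * 2
        if member(z):
            return z

failure_rate = segment_convexity_test(member, sampler, rng=0)
```

The paper's lower bound says the converse direction is the hard one: a set can pass such chord tests with high probability while still being far from convex in volume.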
Random walks on polytopes and an affine interior point method for linear programming
Proceedings of the ACM Symposium on Theory of Computing, 2009
Cited by 4 (2 self)
Let K be a polytope in R^n defined by m linear inequalities. We give a new Markov chain algorithm to draw a nearly uniform sample from K. The underlying Markov chain is the first to have a mixing time that is strongly polynomial when started from a "central" point x_0. If s is the supremum, over all chords pq passing through x_0, of |p − x_0|/|q − x_0|, and ε is an upper bound on the desired total variation distance from the uniform distribution, it is sufficient to take O(mn(n log(sm) + log(1/ε))) steps of the random walk. We use this result to design an affine interior point algorithm that does a single random walk to solve linear programs approximately. More precisely, suppose Q = {z : Bz ≤ 1} contains a point z such that c^T z ≥ d and r := sup_{z∈Q} ‖Bz‖ + 1, where B is an m × n matrix. Then, after τ = O(mn(n ln(mr/ε) + ln(1/δ))) steps, the random walk is at a point x_τ for which c^T x_τ ≥ d(1 − ε) with probability greater than 1 − δ. The fact that this algorithm has a runtime that is provably polynomial is notable, since the analogous deterministic affine algorithm analyzed by Dikin has no known polynomial guarantees.
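The walk analyzed here is barrier-based: steps are drawn from the ellipsoid of the log-barrier Hessian at the current point (the Dikin ellipsoid) and filtered by a determinant ratio. A rough sketch of such a walk follows; the radius, filter details, and square example are illustrative, not the paper's exact algorithm or constants:

```python
import numpy as np

def dikin_walk(A, b, x0, r=0.4, steps=500, rng=None):
    """Sketch of a Dikin walk on the polytope {x : Ax <= b}: propose a
    uniform point in the Dikin ellipsoid at x, Metropolis-filter by the
    ratio of ellipsoid volumes to make the walk reversible."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, float)
    n = x.size

    def hessian(z):
        s = b - A @ z                      # slacks of the m inequalities
        return (A / s[:, None] ** 2).T @ A # A^T diag(s)^{-2} A

    def in_ellipsoid(center, H, z):
        d = z - center
        return d @ H @ d <= r * r

    for _ in range(steps):
        Hx = hessian(x)
        L = np.linalg.cholesky(Hx)
        u = rng.standard_normal(n)
        u *= rng.random() ** (1 / n) / np.linalg.norm(u)  # uniform in unit ball
        y = x + r * np.linalg.solve(L.T, u)               # map into D_x
        if np.any(A @ y >= b):             # proposal left the polytope
            continue
        Hy = hessian(y)
        if not in_ellipsoid(y, Hy, x):     # require the move be reversible
            continue
        # vol(D_x)/vol(D_y) = sqrt(det Hy / det Hx)
        log_ratio = 0.5 * (np.linalg.slogdet(Hy)[1] - np.linalg.slogdet(Hx)[1])
        if np.log(rng.random()) < min(0.0, log_ratio):
            x = y
    return x

# Example: sample the square [-1, 1]^2 written as 4 inequalities.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.ones(4)
pt = dikin_walk(A, b, np.zeros(2), rng=0)
```

Because the ellipsoid shrinks automatically near the boundary, the walk needs no membership bisection along chords, which is what makes the strongly polynomial mixing bound possible.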
Enumerative Algorithms for the Shortest and Closest Lattice Vector Problems in Any Norm via M-Ellipsoid Coverings
, 2010
Cited by 3 (1 self)
We give an algorithm for solving the exact Shortest Vector Problem in n-dimensional lattices, in any norm, in deterministic 2^O(n) time (and space), given poly(n)-sized advice that depends only on the norm. In many norms of interest, including all ℓp norms, the advice is efficiently and deterministically computable, and in general we give a randomized algorithm to compute it in expected 2^O(n) time. We also give an algorithm for solving the exact Closest Vector Problem in 2^O(n) time and space, when the target point is within any constant factor of the minimum distance of the lattice. Our approach may be seen as a derandomization of 'sieve' algorithms for exact SVP and CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002), and uses as a crucial subroutine the recent deterministic algorithm of Micciancio and Voulgaris (STOC 2010) for lattice problems in the ℓ2 norm. Our main technique is to reduce the enumeration of lattice points in an arbitrary convex body K to enumeration in 2^O(n) copies of an M-ellipsoid of K, a classical concept in asymptotic convex geometry. Building on the techniques of Klartag (Geometric and Functional Analysis, 2006), we also give an expected 2^O(n)-time algorithm to compute an M-ellipsoid covering of any convex body, which may be of independent interest.
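By contrast with the covering-based enumeration developed here, the naive approach to exact SVP simply tries every small integer coefficient vector against the basis. A toy sketch, workable only in tiny dimension (the skewed basis and coefficient bound are invented for illustration):

```python
import itertools
import numpy as np

def shortest_vector_bruteforce(B, bound=3):
    """Toy exact SVP: enumerate all integer coefficient vectors with
    entries in [-bound, bound] and keep the shortest nonzero lattice
    vector. Real enumeration prunes this search with geometric bodies
    (here nothing is pruned, so the cost is (2*bound + 1)^n)."""
    n = B.shape[1]
    best, best_len = None, np.inf
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        if not any(coeffs):
            continue                       # skip the zero vector
        v = B @ np.array(coeffs)
        norm_v = np.linalg.norm(v)
        if norm_v < best_len:
            best, best_len = v, norm_v
    return best, best_len

# Example: a skewed 2-D lattice basis (columns are basis vectors).
B = np.array([[1.0, 0.9],
              [0.0, 0.1]])
v, norm_v = shortest_vector_bruteforce(B)   # shortest vector is b2 - b1
```

The exponential blow-up of this brute force in n is exactly what the M-ellipsoid covering and the Micciancio–Voulgaris subroutine are used to tame to 2^O(n).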