Results 1–10 of 30
The geometry of logconcave functions and an O∗(n³) sampling algorithm
Abstract

Cited by 60 (17 self)
The class of logconcave functions in R^n is a common generalization of Gaussians and of indicator functions of convex sets. Motivated by the problem of sampling from a logconcave density function, we study their geometry and introduce a technique for “smoothing” them out. This leads to an efficient sampling algorithm (by a random walk) with no assumptions on the local smoothness of the density function. After appropriate preprocessing, the algorithm produces a point from approximately the right distribution in time O∗(n⁴), and in amortized time O∗(n³) if many sample points are needed (where the asterisk indicates that dependence on the error parameter and factors of log n are not shown).
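The random walk referred to here is a ball walk with a Metropolis filter. A minimal sketch of that idea, assuming nothing beyond the abstract: the function and parameter names below are illustrative, and this toy sampler omits the preprocessing and the mixing-time analysis that are the paper's actual contribution.

```python
import numpy as np

def ball_walk_metropolis(log_f, x0, delta, steps, rng=None):
    """Ball walk with a Metropolis filter targeting the density proportional
    to exp(log_f(x)), where log_f is concave (i.e. f is log-concave).
    delta is the step radius; all names here are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(steps):
        # propose a uniform random point in the ball of radius delta around x
        d = rng.normal(size=n)
        d *= delta * rng.random() ** (1.0 / n) / np.linalg.norm(d)
        y = x + d
        # Metropolis filter: accept with probability min(1, f(y)/f(x));
        # the proposal is symmetric, so no Hastings correction is needed
        if np.log(rng.random()) < log_f(y) - log_f(x):
            x = y
    return x

# sample from a standard Gaussian in R^3 (log-density -||x||^2/2 is concave)
sample = ball_walk_metropolis(lambda x: -0.5 * x @ x, np.zeros(3), delta=0.8, steps=2000)
```

Note that the filter needs only the ratio f(y)/f(x), so f may be known up to an unknown normalizing constant, which is exactly the setting of sampling from an unnormalized logconcave density.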
Faster Mixing via average Conductance
, 1999
Abstract

Cited by 58 (3 self)
The notion of conductance introduced by Jerrum and Sinclair [JS] has been widely used to prove rapid mixing of Markov chains. Here we introduce a variant of this: instead of measuring the conductance of the worst subset of states, we show that it is enough to bound a certain weighted average conductance (where the average is taken over subsets of states of different sizes). In the case of convex bodies, we show that this average conductance is better than the known bounds for the worst case; this helps us save a factor of O(n) which is incurred in all proofs as a "penalty" for a "bad start" (i.e., because the starting distribution may be arbitrary).
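For intuition, the worst-case conductance that this averaging idea refines can be computed by brute force on a tiny chain. The helper names below are illustrative, and the exhaustive search over subsets is only feasible for toy examples:

```python
import itertools
import numpy as np

def conductance(P, pi, S):
    """Conductance of subset S: probability flow out of S under stationarity,
    divided by min(pi(S), pi(S complement))."""
    S = list(S)
    Sc = [i for i in range(len(pi)) if i not in S]
    flow = sum(pi[i] * P[i, j] for i in S for j in Sc)
    return flow / min(pi[S].sum(), pi[Sc].sum())

def worst_case_conductance(P, pi):
    """Jerrum-Sinclair style bottleneck ratio: minimize conductance over all
    nonempty proper subsets (exhaustive, so only for tiny chains)."""
    n = len(pi)
    best = np.inf
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            best = min(best, conductance(P, pi, S))
    return best

# lazy random walk on a 4-cycle; the stationary distribution is uniform
P = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
pi = np.full(4, 0.25)
phi = worst_case_conductance(P, pi)  # worst cut: two adjacent vertices
```

The abstract's point is that bounding the mixing time by this single worst value is pessimistic: small subsets typically expand much better than large ones, and a size-weighted average of the conductance over subset sizes yields a sharper bound.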
Fast algorithms for logconcave functions: sampling, rounding, integration and optimization
 Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science
, 2006
Abstract

Cited by 44 (12 self)
We prove that the hit-and-run random walk is rapidly mixing for an arbitrary logconcave distribution starting from any point in the support. This extends the work of [26], where this was shown for an important special case, and settles the main conjecture formulated there. From this result, we derive asymptotically faster algorithms in the general oracle model for sampling, rounding, integration and maximization of logconcave functions, improving or generalizing the main results of [24, 25, 1] and [16] respectively. The algorithms for integration and optimization both use sampling and are surprisingly similar.
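A hit-and-run step picks a uniformly random line through the current point and resamples along that line. The sketch below handles only the uniform-over-a-convex-body special case with a membership oracle (the paper treats general logconcave distributions, where the one-dimensional restriction must be sampled instead of drawn uniformly); the function names and the bisection-based chord search are illustrative assumptions.

```python
import numpy as np

def hit_and_run_uniform(in_body, x0, steps, r_max=10.0, rng=None):
    """Hit-and-run for the uniform distribution over a convex body given by a
    membership oracle in_body(x) -> bool. Chord endpoints are located by
    bisection; r_max must bound the body's radius. Illustrative sketch."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)

    def chord_extent(x, u):
        # bisection for the boundary crossing along direction u
        lo, hi = 0.0, r_max
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            if in_body(x + mid * u):
                lo = mid
            else:
                hi = mid
        return lo

    for _ in range(steps):
        u = rng.normal(size=x.size)
        u /= np.linalg.norm(u)                    # uniform random direction
        t_plus, t_minus = chord_extent(x, u), chord_extent(x, -u)
        # move to a uniform point on the chord through x in direction u
        x = x + rng.uniform(-t_minus, t_plus) * u
    return x

# approximate uniform sample from the unit ball in R^3
pt = hit_and_run_uniform(lambda z: z @ z <= 1.0, np.zeros(3), steps=500)
```

Because the step length is chosen adaptively from the full chord, hit-and-run can escape corners that trap a fixed-radius ball walk, which is one intuition behind its mixing from an arbitrary starting point.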
Efficient algorithms for universal portfolios
 Proceedings of the 41st Annual Symposium on the Foundations of Computer Science
, 2000
Abstract

Cited by 41 (8 self)
A constant rebalanced portfolio is an investment strategy that keeps the same distribution of wealth among a set of stocks from day to day. There has been much work on Cover's Universal algorithm, which is competitive with the best constant rebalanced portfolio determined in hindsight [3, 9, 2, 8, 16, 4, 5, 6]. While this algorithm has good performance guarantees, all known implementations are exponential in the number of stocks, restricting the number of stocks used in experiments [9, 4, 2, 5, 6]. We present an efficient implementation of the Universal algorithm that is based on nonuniform random walks that are rapidly mixing [1, 14, 7]. This same implementation also works for nonfinancial applications of the Universal algorithm, such as data compression [6] and language modeling [11].
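The two objects in play can be sketched directly: the wealth of a constant rebalanced portfolio, and the Universal portfolio as a wealth-weighted average of all such portfolios. The naive Monte Carlo estimate below (with a uniform prior on the simplex) is an illustrative stand-in; the paper's contribution is replacing exactly this kind of naive averaging with a rapidly mixing nonuniform random walk, and the function names here are assumptions.

```python
import numpy as np

def crp_wealth(b, X):
    """Wealth of a constant rebalanced portfolio b after the price-relative
    matrix X, where X[t, i] is the day-t price ratio of stock i."""
    return float(np.prod(X @ b))

def universal_portfolio_mc(X_so_far, n_samples=10000, rng=None):
    """Monte Carlo estimate of the Universal portfolio for the next day: the
    wealth-weighted average of constant rebalanced portfolios under a uniform
    prior on the simplex. Naive sampling, for illustration only."""
    rng = np.random.default_rng(0) if rng is None else rng
    m = X_so_far.shape[1]
    B = rng.dirichlet(np.ones(m), size=n_samples)   # uniform on the simplex
    w = np.prod(X_so_far @ B.T, axis=0)             # wealth of each sampled CRP
    return (B * w[:, None]).sum(axis=0) / w.sum()

# two stocks: one flat, one alternating x2 / x0.5 (ends where it started)
X = np.array([[1.0, 2.0], [1.0, 0.5], [1.0, 2.0], [1.0, 0.5]])
b_next = universal_portfolio_mc(X)
```

On this toy sequence both stocks end at their starting price, so buy-and-hold gains nothing, yet the 50/50 constant rebalanced portfolio grows by a factor of about 1.27, which is the effect Cover's algorithm competes with.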
Logconcave Functions: Geometry and Efficient Sampling Algorithms
Abstract

Cited by 21 (1 self)
The class of logconcave functions in R^n is a common generalization of Gaussians and of indicator functions of convex sets. Motivated by the problem of sampling from a logconcave density function, we study their geometry and introduce an analysis technique for "smoothing" them out. This leads to efficient sampling algorithms with no assumptions on the local smoothness of the density function. After appropriate preprocessing, both the ball walk (with a Metropolis filter) and a generalization of hit-and-run produce a point from approximately the right distribution in time O*(n⁴), and in amortized time O*(n³) if many sample points are needed (where the asterisk indicates that dependence on the error parameter and factors of log n are not shown). The bounds are optimal in terms of a "roundness" parameter and match the best-known bounds for the special case of the uniform density over a convex set.
Blocking conductance and mixing in random walks
, 2005
Abstract

Cited by 17 (10 self)
The notion of conductance introduced by Jerrum and Sinclair [8] has been widely used to prove rapid mixing of Markov chains. Here we introduce a bound that extends this in two directions. First, instead of measuring the conductance of the worst subset of states, we bound the mixing time by a formula that can be thought of as a weighted average of the Jerrum-Sinclair bound (where the average is taken over subsets of states of different sizes). Furthermore, instead of just the conductance, which in graph theory terms measures edge expansion, we also take into account node expansion. Our bound is related to the logarithmic Sobolev inequalities, but it appears to be more flexible and easier to compute. In the case of random walks in convex bodies, we show that this new bound is better than the known bounds for the worst case. This saves a factor of O(n) in the mixing time bound, which is incurred in all proofs as a "penalty" for a "bad start". We show that in a convex body in R^n with diameter D, the random walk with steps in a ball of radius δ mixes in O*(nD²/δ²) time (if idle steps at the boundary are not counted). This gives an O*(n³) sampling algorithm after appropriate preprocessing, improving the previous bound of O*(n⁴). The application of the general conductance bound in the geometric setting depends on an improved isoperimetric inequality for convex bodies.
Hit-and-Run is Fast and Fun
, 2003
Abstract

Cited by 16 (1 self)
... In this paper we study a natural extension of the hit-and-run algorithm to sampling from a logconcave distribution in n dimensions. After appropriate preprocessing, hit-and-run produces a point from approximately the right distribution in amortized time O*(n³).
Dynamics in congestion games
 In ACM SIGMETRICS/Performance
, 2010
Abstract

Cited by 15 (0 self)
Game theoretic modeling and equilibrium analysis of congestion games have provided insights into the performance of Internet congestion control, road transportation networks, etc. Despite this long history, very little is known about their transient (non-equilibrium) performance. In this paper, we seek answers to questions such as how long it takes to reach equilibrium, and whether the system operates near equilibrium in the presence of dynamics, e.g., nodes joining or leaving. In this pursuit, we provide three contributions. First, a novel probabilistic model that captures realistic behaviors of agents, allowing for the possibility of arbitrariness in conjunction with rationality. Second, evaluation of (a) the time to converge to equilibrium under this behavior model and (b) the distance to Nash equilibrium. Finally, determination of the tradeoff between the rate of dynamics and the quality of performance (distance to equilibrium), which leads to an interesting uncertainty principle. The novel technical ingredients involve analysis of the logarithmic Sobolev constant of a Markov process with a time-varying state space; methodologically, this should be of broader interest in the context of dynamical systems.
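The behavior model combining rationality with arbitrariness can be illustrated by noisy (logit) best-response dynamics in a toy congestion game; everything below, from the two-link setup to the parameter values, is an illustrative assumption rather than the paper's model.

```python
import numpy as np

def noisy_best_response(n_agents=20, rounds=500, beta=2.0, rng=None):
    """Toy congestion game: each agent picks one of two identical links whose
    cost equals its load. Each round one random agent updates via a logit
    (noisy) best response: mostly rational, occasionally arbitrary.
    beta controls rationality (beta -> inf is pure best response)."""
    rng = np.random.default_rng(1) if rng is None else rng
    choice = rng.integers(0, 2, size=n_agents)
    for _ in range(rounds):
        i = rng.integers(n_agents)
        load = np.bincount(choice, minlength=2)
        load[choice[i]] -= 1                  # loads excluding agent i
        cost = load + 1                       # agent i's cost on each link
        p = np.exp(-beta * cost)
        p /= p.sum()                          # logit response probabilities
        choice[i] = rng.choice(2, p=p)
    return np.bincount(choice, minlength=2)

loads = noisy_best_response()  # near-balanced loads: close to Nash equilibrium
```

The Nash equilibrium here is a 10/10 split; the logit noise keeps the process hovering near it rather than at it, which is the flavor of "distance to equilibrium" the abstract quantifies.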