Results 1–10 of 54
A Combinatorial, Primal-Dual Approach to Semidefinite Programs
"... Semidefinite programs (SDP) have been used in many recent approximation algorithms. We develop a general primaldual approach to solve SDPs using a generalization of the wellknown multiplicative weights update rule to symmetric matrices. For a number of problems, such as Sparsest Cut and Balanced S ..."
Abstract

Cited by 62 (11 self)
Semidefinite programs (SDP) have been used in many recent approximation algorithms. We develop a general primal-dual approach to solve SDPs using a generalization of the well-known multiplicative weights update rule to symmetric matrices. For a number of problems, such as Sparsest Cut and Balanced Separator in undirected and directed weighted graphs, and the Min UnCut problem, this yields combinatorial approximation algorithms that are significantly more efficient than interior point methods. The design of our primal-dual algorithms is guided by a robust analysis of rounding algorithms used to obtain integer solutions from fractional ones.
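The "well-known multiplicative weights update rule" that this abstract generalizes can be sketched in a few lines. This is a minimal illustration, not the paper's matrix version: a toy two-expert run with made-up losses and learning rate.

```python
# Classic multiplicative weights over "experts": each round, play the
# weighted distribution, then shrink each expert's weight in proportion
# to the loss it incurred. Parameters here are purely illustrative.

def multiplicative_weights(losses, eta=0.1):
    """Run MW over a sequence of per-expert loss vectors (entries in [0, 1]).

    Returns the algorithm's total expected loss and the final weights.
    """
    n = len(losses[0])
    weights = [1.0] * n
    total_loss = 0.0
    for round_losses in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        # Expected loss of the randomized strategy this round.
        total_loss += sum(p * l for p, l in zip(probs, round_losses))
        # Multiplicative update: penalize experts in proportion to their loss.
        weights = [w * (1.0 - eta) ** l for w, l in zip(weights, round_losses)]
    return total_loss, weights

# Toy run: expert 0 is always right, expert 1 is always wrong, 50 rounds.
losses = [[0.0, 1.0]] * 50
alg_loss, w = multiplicative_weights(losses)
```

The bad expert's weight decays geometrically, so the algorithm's cumulative loss stays within an additive O(log n / eta + eta T) of the best expert's, which is the regret bound the method's applications rely on.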
Fast algorithms for approximate semidefinite programming using the multiplicative weights update method
 In FOCS
, 2005
"... Semidefinite programming (SDP) relaxations appear in many recent approximation algorithms but the only general technique for solving such SDP relaxations is via interior point methods. We use a Lagrangianrelaxation based technique (modified from the papers of Plotkin, Shmoys, and Tardos (PST), and ..."
Abstract

Cited by 29 (6 self)
Semidefinite programming (SDP) relaxations appear in many recent approximation algorithms but the only general technique for solving such SDP relaxations is via interior point methods. We use a Lagrangian-relaxation based technique (modified from the papers of Plotkin, Shmoys, and Tardos (PST), and Klein and Lu) to derive faster algorithms for approximately solving several families of SDP relaxations. The algorithms are based upon some improvements to the PST ideas — which lead to new results even for their framework — as well as improvements in approximate eigenvalue computations by using random sampling.
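The approximate eigenvalue computations this abstract mentions are typically built on power iteration; the sketch below shows the basic deterministic version on a made-up 2×2 matrix (the paper's contribution adds random sampling on top of this kind of routine).

```python
# Power iteration: repeatedly apply the matrix and renormalize; the iterate
# converges to the dominant eigenvector, and the Rayleigh quotient estimates
# the top eigenvalue. Matrix and iteration count here are illustrative.

def power_iteration(matrix, iters=200):
    n = len(matrix)
    v = [1.0 / n] * n  # deterministic start; a random start is more typical
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient v^T M v (with ||v|| = 1) estimates the top eigenvalue.
    mv = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(vi * mi for vi, mi in zip(v, mv))

A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric; eigenvalues are 3 and 1
lam = power_iteration(A)       # converges to the top eigenvalue, 3
```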
A multiplicative weights mechanism for privacy-preserving data analysis
 In FOCS
, 2010
"... Abstract—We consider statistical data analysis in the interactive setting. In this setting a trusted curator maintains a database of sensitive information about individual participants, and releases privacypreserving answers to queries as they arrive. Our primary contribution is a new differentiall ..."
Abstract

Cited by 19 (4 self)
We consider statistical data analysis in the interactive setting. In this setting a trusted curator maintains a database of sensitive information about individual participants, and releases privacy-preserving answers to queries as they arrive. Our primary contribution is a new differentially private multiplicative weights mechanism for answering a large number of interactive counting (or linear) queries that arrive online and may be adaptively chosen. This is the first mechanism with worst-case accuracy guarantees that can answer large numbers of interactive queries and is efficient (in terms of the runtime’s dependence on the data universe size). The error is asymptotically optimal in its dependence on the number of participants, and depends only logarithmically on the number of queries being answered. The running time is nearly linear in the size of the data universe. As a further contribution, when we relax the utility requirement and require accuracy only for databases drawn from a rich class of databases, we obtain exponential improvements in running time. Even in this relaxed setting we continue to guarantee privacy for any input database. Only the utility requirement is relaxed. Specifically, we show that when the input database is drawn from a smooth distribution — a distribution that does not place too much weight on any single data item — accuracy remains as above, and the running time becomes polylogarithmic in the data universe size. The main technical contributions are the application of multiplicative weights techniques to the differential privacy setting, a new privacy analysis for the interactive setting, and a technique for reducing data dimensionality for databases drawn from smooth distributions.
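The core idea can be illustrated in miniature. This is a loose sketch, not the paper's exact mechanism: a single update step in which a public synthetic distribution over the data universe is reweighted multiplicatively toward a noisy query answer. The universe, query, noise scale, and learning rate are all made up.

```python
import math
import random

random.seed(0)  # fixed only to make the sketch reproducible

def laplace_noise(scale):
    """Sample Laplace noise via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

universe_size = 10
query = [1.0 if x < 3 else 0.0 for x in range(universe_size)]  # counting query
true_answer = 0.9   # fraction of the (hidden) database satisfying the query

synthetic = [1.0 / universe_size] * universe_size  # public synthetic data
old_answer = sum(q * p for q, p in zip(query, synthetic))  # 0.3 here

# Release a Laplace-noised answer for privacy, then multiplicatively reweight
# the synthetic distribution toward that noisy answer and renormalize.
noisy = true_answer + laplace_noise(0.05)
eta = 0.5
synthetic = [p * math.exp(eta * q * (noisy - old_answer))
             for p, q in zip(synthetic, query)]
total = sum(synthetic)
synthetic = [p / total for p in synthetic]
new_answer = sum(q * p for q, p in zip(query, synthetic))
```

In the actual mechanism the update is only triggered when the synthetic answer disagrees substantially with the noisy one, which is what bounds the number of updates and hence the privacy cost.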
Breaking the multicommodity flow barrier for O(√log n)-approximations to sparsest cut
 In FOCS
, 2009
"... This paper ties the line of work on algorithms that find an O ( √ log n)approximation to the sparsest cut together with the line of work on algorithms that run in subquadratic time by using only singlecommodity flows. We present an algorithm that simultaneously achieves both goals, finding an O ..."
Abstract

Cited by 16 (0 self)
This paper ties together the line of work on algorithms that find an O(√log n)-approximation to the sparsest cut with the line of work on algorithms that run in subquadratic time by using only single-commodity flows. We present an algorithm that simultaneously achieves both goals, finding an O(√(log n/ε))-approximation using O(n^ε log^O(1) n) max-flows. The core of the algorithm is a stronger, algorithmic version of Arora et al.’s structure theorem, where we show that the matching-chaining argument at the heart of their proof can be viewed as an algorithm that finds good augmenting paths in certain geometric multicommodity flow networks. By using that specialized algorithm in place of a black-box solver, we are able to solve those instances much more efficiently. We also show that the cut-matching game framework cannot achieve an approximation any better than Ω(log n / log log n) without rerouting flow.
Multiplicative Updates Outperform Generic No-Regret ...
, 2009
"... We study the outcome of natural learning algorithms in atomic congestion games. Atomic congestion games have a wide variety of equilibria often with vastly differing social costs. We show that in almost all such games, the wellknown multiplicativeweights learning algorithm results in convergence to ..."
Abstract

Cited by 15 (4 self)
We study the outcome of natural learning algorithms in atomic congestion games. Atomic congestion games have a wide variety of equilibria, often with vastly differing social costs. We show that in almost all such games, the well-known multiplicative-weights learning algorithm results in convergence to pure equilibria. Our results show that natural learning behavior can avoid bad outcomes predicted by the price of anarchy in atomic congestion games such as the load-balancing game introduced by Koutsoupias and Papadimitriou, which has super-constant price of anarchy and has correlated equilibria that are exponentially worse than any mixed Nash equilibrium. Our results identify a set of mixed Nash equilibria that we call weakly stable equilibria. Our notion of weakly stable is defined game-theoretically, but we show that this property holds whenever a stability criterion from the theory of dynamical systems is satisfied. This allows us to show that in every congestion game, the distribution of play converges to the set of weakly stable equilibria. Pure Nash equilibria are weakly stable, and we show using techniques from algebraic geometry that the converse is true with probability 1 when congestion costs are selected at random independently on each edge (from any monotonically parametrized distribution). We further extend our results to show that players can use algorithms with different (sufficiently small) learning rates, i.e., they can trade off convergence speed and long-term average regret differently.
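The convergence phenomenon is easy to see in the simplest load-balancing instance. The toy below is an expected-cost simulation with made-up parameters, not the paper's randomized dynamics: two players run multiplicative weights over two identical links whose cost equals their load, and a slightly asymmetric start is driven to the pure equilibrium where the players split across the links.

```python
# Two players, two links; cost of a link = number of players on it.
# Each player runs a multiplicative-weights update on its two strategies,
# using expected costs given the other player's current mixed strategy.

def mw_congestion(p1, p2, eta=0.1, rounds=2000):
    """p1, p2: probability that player 1 / player 2 picks link 1."""
    for _ in range(rounds):
        # Expected link costs for each player, scaled into [0, 1]
        # by dividing by the maximum possible cost of 2.
        c1_link1, c1_link2 = (1.0 + p2) / 2, (2.0 - p2) / 2
        c2_link1, c2_link2 = (1.0 + p1) / 2, (2.0 - p1) / 2
        # Multiplicative update on the two-strategy weights, then renormalize.
        w1a = p1 * (1 - eta) ** c1_link1
        w1b = (1 - p1) * (1 - eta) ** c1_link2
        w2a = p2 * (1 - eta) ** c2_link1
        w2b = (1 - p2) * (1 - eta) ** c2_link2
        p1, p2 = w1a / (w1a + w1b), w2a / (w2a + w2b)
    return p1, p2

# Slightly asymmetric start: player 1 leans to link 1, player 2 to link 2.
p1, p2 = mw_congestion(0.6, 0.4)
```

Note that a perfectly symmetric start (0.5, 0.5) is a fixed point of these dynamics, which matches the abstract's point that the fully mixed equilibrium is only weakly stable: small perturbations push play to a pure equilibrium.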
Near Optimal Online Algorithms and Fast Approximation Algorithms for Resource Allocation Problems
, 2011
"... We present algorithms for a class of resource allocation problems both in the online setting with stochastic input and in the offline setting. This class of problems contains many interesting special cases such as the Adwords problem. In the online setting we introduce a new distributional model cal ..."
Abstract

Cited by 12 (2 self)
We present algorithms for a class of resource allocation problems both in the online setting with stochastic input and in the offline setting. This class of problems contains many interesting special cases such as the Adwords problem. In the online setting we introduce a new distributional model called the adversarial stochastic input model, which is a generalization of the i.i.d. model with unknown distributions, where the distributions can change over time. In this model we give a 1 − O(ε) approximation algorithm for the resource allocation problem, with almost the weakest possible assumption: the ratio of the maximum amount of resource consumed by any single request to the total capacity of the resource, and the ratio of the profit contributed by any single request to the optimal profit, is at most ε^2/(log n + log(1/ε)), where n is the number of resources available. There are instances where this ratio is ε^2/log n such that no randomized algorithm can have a competitive ratio of 1 − o(ε) even in the i.i.d. model. The upper bound on the ratio that we require improves on the previous upper bound for the i.i.d. case by a factor of n. Our proof technique also gives a very simple proof that the greedy algorithm has a competitive ratio of 1 − 1/e for the Adwords problem in the i.i.d. model with unknown distributions, and more generally in the adversarial stochastic input model, when there is no bound on the bid to budget ratio; all the previous proofs assume such a bound. A full version of this paper, with all the proofs, is available online.
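For concreteness, the greedy algorithm whose competitive ratio the abstract analyzes is just "give each arriving request to the bidder offering the most, subject to budgets." The instance below is made up; this is a sketch of greedy allocation, not of the paper's analysis.

```python
def greedy_adwords(budgets, requests):
    """Assign each arriving request to the bidder with the highest
    effective bid, charging at most the bidder's remaining budget."""
    remaining = dict(budgets)
    revenue = 0.0
    for bids in requests:           # bids: {bidder: bid for this request}
        best, best_bid = None, 0.0
        for bidder, bid in bids.items():
            effective = min(bid, remaining[bidder])  # can't exceed budget
            if effective > best_bid:
                best, best_bid = bidder, effective
        if best is not None:
            remaining[best] -= best_bid
            revenue += best_bid
    return revenue

# Toy instance: two bidders with unit budgets, two requests.
budgets = {"a": 1.0, "b": 1.0}
requests = [{"a": 1.0, "b": 0.5}, {"b": 1.0}]
revenue = greedy_adwords(budgets, requests)  # 1.0 to "a", then 1.0 to "b"
```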
Electrical Flows, Laplacian Systems, and Faster Approximation of Maximum Flow in Undirected Graphs
, 2010
"... We introduce a new approach to computing an approximately maximum st flow in a capacitated, undirected graph. This flow is computed by solving a sequence of electrical flow problems. Each electrical flow is given by the solution of a system of linear equations in a Laplacian matrix, and thus may be ..."
Abstract

Cited by 11 (1 self)
We introduce a new approach to computing an approximately maximum s-t flow in a capacitated, undirected graph. This flow is computed by solving a sequence of electrical flow problems. Each electrical flow is given by the solution of a system of linear equations in a Laplacian matrix, and thus may be approximately computed in nearly-linear time. Using this approach, we develop the fastest known algorithm for computing approximately maximum s-t flows. For a graph having n vertices and m edges, our algorithm computes a (1 − ε)-approximately maximum s-t flow in time Õ(m n^(1/3) ε^(−11/3)). A dual version of our approach computes a (1 + ε)-approximately minimum s-t cut in time Õ(m + n^(4/3) ε^(−16/3)), which is the fastest known algorithm for this problem as well. Previously, the best dependence on m and n was achieved by the algorithm of Goldberg and Rao (J. ACM 1998), which can be used to compute approximately maximum s-t flows in time Õ(m √n ε^(−1)) and approximately minimum s-t cuts in time Õ(m + n^(3/2) ε^(−3)). Research partially supported by NSF grant CCF-0843915.
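The building block the abstract describes, computing an electrical flow by solving a Laplacian linear system for vertex potentials, can be shown on a tiny example. The graph, injection pattern, and dense solver below are illustrative; the paper's point is that such systems can be solved approximately in nearly-linear time.

```python
# Electrical flow on a unit-resistance path 0 - 1 - 2: inject one unit of
# current at vertex 0 and extract it at vertex 2. Potentials solve L*phi = b;
# we "ground" vertex 2 (fix phi = 0) to make the system nonsingular.

def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

edges = [(0, 1), (1, 2)]
n = 3
lap = [[0.0] * n for _ in range(n)]           # graph Laplacian
for u, v in edges:
    lap[u][u] += 1.0; lap[v][v] += 1.0
    lap[u][v] -= 1.0; lap[v][u] -= 1.0
demand = [1.0, 0.0]                           # current in at 0, out at 2
reduced = [row[:2] for row in lap[:2]]        # delete grounded row/column
phi = solve(reduced, demand) + [0.0]          # vertex potentials
flows = {(u, v): phi[u] - phi[v] for u, v in edges}  # Ohm's law, unit resistance
```

On the path this yields one unit of flow across each edge, and Kirchhoff's current law (flow conservation at vertex 1) holds by construction.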
Beating simplex for fractional packing and covering linear programs
 In FOCS
, 2007
"... We give an approximation algorithm for packing and covering linear programs (linear programs with nonnegative coefficients). Given a constraint matrix with n nonzeros, r rows, and c columns, the algorithm (with high probability) computes feasible primal and dual solutions whose costs are within a f ..."
Abstract

Cited by 11 (2 self)
We give an approximation algorithm for packing and covering linear programs (linear programs with nonnegative coefficients). Given a constraint matrix with n nonzeros, r rows, and c columns, the algorithm (with high probability) computes feasible primal and dual solutions whose costs are within a factor of 1 + ε of OPT (the optimal cost) in time O(n + (r + c)log(n)/ε^2). For dense problems (with r, c = O(√n)) the time is O(n + √n log(n)/ε^2) — linear even as ε → 0. In comparison, previous Lagrangian-relaxation algorithms generally take at least Ω(n log(n)/ε^2) time, while (for small ε) the Simplex algorithm typically takes at least Ω(n min(r, c)) time.
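By LP duality, packing/covering pairs are equivalent to zero-sum games, and Lagrangian-relaxation methods of the kind this abstract improves on are often illustrated there. A sketch with a made-up payoff matrix (not this paper's algorithm): the row player runs multiplicative weights while the column player best-responds, and the average payoff approaches the game value.

```python
# Zero-sum game: the row player (minimizer) pays A[i][j]. For this
# matching-pennies-style matrix the game value is 0.5. MW on the rows
# against best-responding columns drives the average payoff to the value.

A = [[0.0, 1.0], [1.0, 0.0]]
eta, rounds = 0.05, 2000
w = [1.0, 1.0]                 # row player's weights
total = 0.0
for _ in range(rounds):
    z = sum(w)
    p = [x / z for x in w]
    # Column player best-responds to the row player's mixed strategy.
    j = max(range(2), key=lambda c: p[0] * A[0][c] + p[1] * A[1][c])
    total += p[0] * A[0][j] + p[1] * A[1][j]
    # Row player downweights rows that suffered high loss this round.
    w = [x * (1.0 - eta) ** A[i][j] for i, x in enumerate(w)]
avg_payoff = total / rounds
```

The MW regret bound guarantees the average payoff exceeds the game value by at most roughly eta + ln(2)/(eta * rounds), which is the same mechanism these LP solvers use to certify near-optimal primal and dual solutions.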
Efficient Algorithms Using The Multiplicative Weights Update Method
, 2006
"... Abstract Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more eff ..."
Abstract

Cited by 10 (2 self)
Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial-time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more efficient algorithms is important for practical impact. In this thesis, we explore applications of the Multiplicative Weights method in the design of efficient algorithms for various optimization problems. This method, which was repeatedly discovered in quite diverse fields, is an algorithmic technique which maintains a distribution on a certain set of interest, and updates it iteratively by multiplying the probability mass of elements by suitably chosen factors based on feedback obtained by running another algorithm on the distribution. We present a single meta-algorithm which unifies all known applications of this method in a common framework. Next, we generalize the method to the setting of symmetric matrices rather than real numbers. We derive the following applications of the resulting Matrix Multiplicative Weights algorithm: 1. The first truly general, combinatorial, primal-dual method for designing efficient algorithms for semidefinite programming. Using these techniques, we obtain significantly faster algorithms for obtaining O(√log n) approximations to various graph partitioning problems, such as Sparsest Cut and Balanced Separator in both directed and undirected weighted graphs, and constraint satisfaction problems such as Min UnCut and Min 2CNF Deletion. 2. An Õ(n^3) time derandomization of the Alon–Roichman construction of expanders using Cayley graphs. The algorithm yields a set of O(log n) elements which generates an expanding Cayley graph in any group of n elements. 3. An Õ(n^3) time deterministic O(log n) approximation algorithm for the quantum hypergraph covering problem. 4. An alternative proof of a result of Aaronson that the γ-fat-shattering dimension of quantum states on n qubits is O(n/γ^2).
DYNAMICS OF BAYESIAN UPDATING WITH DEPENDENT DATA AND MISSPECIFIED MODELS
, 2009
"... Recent work on the convergence of posterior distributions under Bayesian updating has established conditions under which the posterior will concentrate on the truth, if the latter has a perfect representation within the support of the prior, and under various dynamical assumptions, such as the data ..."
Abstract

Cited by 9 (2 self)
Recent work on the convergence of posterior distributions under Bayesian updating has established conditions under which the posterior will concentrate on the truth, if the latter has a perfect representation within the support of the prior, and under various dynamical assumptions, such as the data being independent and identically distributed or Markovian. Here I establish sufficient conditions for the convergence of the posterior distribution in nonparametric problems even when all of the hypotheses are wrong, and the data-generating process has a complicated dependence structure. The main dynamical assumption is the generalized asymptotic equipartition (or “Shannon–McMillan–Breiman”) property of information theory. I derive a kind of large deviations principle for the posterior measure, and discuss the advantages of predicting using a combination of models known to be wrong. An appendix sketches connections between the present results and the “replicator dynamics” of evolutionary theory.
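Bayesian updating is itself a multiplicative reweighting: each model's weight is multiplied by the likelihood of the new observation. A toy illustration (data and models made up, not from the paper) of concentration under misspecification: the data behave like a coin with heads-rate 0.7, every hypothesized model is wrong, and the posterior concentrates on the one closest in KL divergence.

```python
# Three misspecified coin models; the posterior after Bernoulli data with
# empirical heads-rate 0.7 concentrates on p = 0.6, the KL-closest model.
import math

models = [0.2, 0.5, 0.6]        # hypothesized heads probabilities
data = [1] * 70 + [0] * 30      # deterministic stand-in for 0.7-coin data

log_post = [0.0] * len(models)  # log of a uniform prior, up to a constant
for x in data:
    for k, p in enumerate(models):
        # Multiplicative update in log space: add the log-likelihood.
        log_post[k] += math.log(p if x == 1 else 1.0 - p)

# Normalize in log space for numerical stability.
m = max(log_post)
unnorm = [math.exp(lp - m) for lp in log_post]
z = sum(unnorm)
posterior = [u / z for u in unnorm]
```

This is the "replicator dynamics" connection in miniature: weights grow or shrink multiplicatively according to relative fitness (here, likelihood), so the best-fitting wrong model takes over even though the truth is outside the support of the prior.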