Results 1–10 of 21
The multiplicative weights update method: a meta-algorithm and applications
, 2005
"... Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analysis are usually very similar and rely on an exponential potential function. We present a simple meta algorithm that unifies ..."
Abstract

Cited by 146 (14 self)
Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. We present a simple meta-algorithm that unifies these disparate algorithms and derives them as simple instantiations of the meta-algorithm.
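The update rule this abstract describes fits in a few lines. Below is a minimal sketch of the meta-algorithm in the classic experts setting, with illustrative names and toy costs of our own (not code from the paper): each expert's weight is multiplied by (1 − η·cost) after every round.

```python
def mwu(cost_rounds, eta=0.1):
    """Multiplicative weights over n 'experts'.

    cost_rounds: a list of rounds, each a list of per-expert costs in [0, 1].
    Returns the final weights and the total expected cost of playing the
    normalized weight distribution in each round.
    """
    n = len(cost_rounds[0])
    weights = [1.0] * n          # uniform start: one unit of weight per expert
    expected_cost = 0.0
    for costs in cost_rounds:
        total = sum(weights)
        # expected cost of sampling an expert from the current distribution
        expected_cost += sum(w / total * c for w, c in zip(weights, costs))
        # the multiplicative update: penalize each expert by its cost
        weights = [w * (1.0 - eta * c) for w, c in zip(weights, costs)]
    return weights, expected_cost
```

With two experts, one always incurring cost 0 and the other cost 1, the bad expert's weight decays geometrically, so the algorithm's total expected cost stays bounded even as the horizon grows.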
Computing correlated equilibria in multi-player games
 STOC'05
, 2005
"... We develop a polynomialtime algorithm for finding correlated equilibria (a wellstudied notion of rationality due to Aumann that generalizes the Nash equilibrium) in a broad class of succinctly representable multiplayer games, encompassing essentially all known kinds, including all graphical games, ..."
Abstract

Cited by 95 (6 self)
We develop a polynomial-time algorithm for finding correlated equilibria (a well-studied notion of rationality due to Aumann that generalizes the Nash equilibrium) in a broad class of succinctly representable multiplayer games, encompassing essentially all known kinds, including all graphical games, polymatrix games, congestion games, scheduling games, local effect games, as well as several generalizations. Our algorithm is based on a variant of the existence proof due to Hart and Schmeidler [11], and employs linear programming duality, the ellipsoid algorithm, Markov chain steady-state computations, as well as application-specific methods for computing multivariate expectations.
Optimal Hierarchical Decompositions for Congestion Minimization in Networks
, 2008
"... Hierarchical graph decompositions play an important role in the design of approximation and online algorithms for graph problems. This is mainly due to the fact that the results concerning the approximation of metric spaces by tree metrics (e.g. [10, 11, 14, 16]) depend on hierarchical graph decompo ..."
Abstract

Cited by 63 (2 self)
Hierarchical graph decompositions play an important role in the design of approximation and online algorithms for graph problems. This is mainly due to the fact that the results concerning the approximation of metric spaces by tree metrics (e.g. [10, 11, 14, 16]) depend on hierarchical graph decompositions. In this line of work a probability distribution over tree graphs is constructed from a given input graph, in such a way that the tree distances closely resemble the distances in the original graph. This makes it possible to solve many problems with a distance-based cost function on trees, and then transfer the tree solution to general undirected graphs with only a logarithmic loss in the performance guarantee. The results about oblivious routing [30, 22] in general undirected graphs are based on hierarchical decompositions of a different type, in the sense that they aim to approximate the bottlenecks in the network (instead of the point-to-point distances). We call such decompositions cut-based decompositions. It has been shown that they, too, can be used to design approximation and online algorithms for a wide variety of different problems, but at the current state of the art the performance guarantee goes down by an O(log^2 n log log n) factor when making the transition from tree networks to general graphs. In this paper we show how to construct cut-based decompositions that only result in a logarithmic loss in performance, which is asymptotically optimal. Remarkably, one major ingredient of our proof is a distance-based decomposition scheme due to Fakcharoenphol, Rao and Talwar [16]. This shows an interesting relationship between these seemingly different decomposition techniques. The main applications of the new decomposition are an optimal O(log n)-competitive algorithm for oblivious routing in general undirected graphs, and an O(log n)-approximation for Minimum Bisection, which improves on the previous O(log^1.5 n)-approximation.
Efficient Algorithms Using The Multiplicative Weights Update Method
, 2006
"... Abstract Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more eff ..."
Abstract

Cited by 29 (1 self)
Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more efficient algorithms is important for practical impact. In this thesis, we explore applications of the Multiplicative Weights method in the design of efficient algorithms for various optimization problems. This method, which was repeatedly discovered in quite diverse fields, is an algorithmic technique which maintains a distribution on a certain set of interest, and updates it iteratively by multiplying the probability mass of elements by suitably chosen factors based on feedback obtained by running another algorithm on the distribution. We present a single meta-algorithm which unifies all known applications of this method in a common framework. Next, we generalize the method to the setting of symmetric matrices rather than real numbers. We derive the following applications of the resulting Matrix Multiplicative Weights algorithm:
1. The first truly general, combinatorial, primal-dual method for designing efficient algorithms for semidefinite programming. Using these techniques, we obtain significantly faster algorithms for obtaining O(√log n) approximations to various graph partitioning problems, such as Sparsest Cut and Balanced Separator in both directed and undirected weighted graphs, and constraint satisfaction problems such as Min UnCut and Min 2CNF Deletion.
2. An Õ(n^3) time derandomization of the Alon-Roichman construction of expanders using Cayley graphs. The algorithm yields a set of O(log n) elements which generates an expanding Cayley graph in any group of n elements.
3. An Õ(n^3) time deterministic O(log n) approximation algorithm for the quantum hypergraph covering problem.
4. An alternative proof of a result of Aaronson that the γ-fat-shattering dimension of quantum states on n qubits is O(n/γ^2).
Vertex sparsifiers: New results from old techniques
 In 13th International Workshop on Approximation, Randomization, and Combinatorial Optimization, volume 6302 of Lecture Notes in Computer Science
, 2010
"... Given a capacitated graph G = (V, E) and a set of terminals K ⊆ V, how should we produce a graph H only on the terminals K so that every (multicommodity) flow between the terminals in G could be supported in H with low congestion, and vice versa? (Such a graph H is called a flowsparsifier for G.) ..."
Abstract

Cited by 15 (6 self)
Given a capacitated graph G = (V, E) and a set of terminals K ⊆ V, how should we produce a graph H only on the terminals K so that every (multicommodity) flow between the terminals in G could be supported in H with low congestion, and vice versa? (Such a graph H is called a flow-sparsifier for G.) What if we want H to be a "simple" graph? What if we allow H to be a convex combination of simple graphs? Improving on results of Moitra [FOCS 2009] and Leighton and Moitra [STOC 2010], we give efficient algorithms for constructing: (a) a flow-sparsifier H that maintains congestion up to a factor of O(log k / log log k), where k = |K|; (b) a convex combination of trees over the terminals K that maintains congestion up to a factor of O(log k); (c) for a planar graph G, a convex combination of planar graphs that maintains congestion up to a constant factor. This requires us to give a new algorithm for the 0-extension problem, the first one in which the preimages of each terminal are connected in G. Moreover, this result extends to minor-closed families of graphs. Our bounds immediately imply improved approximation guarantees for several terminal-based cut and ordering problems.
Approximation schemes for packing with item fragmentation
 Theory Comput. Syst.
"... We consider two variants of the classical bin packing problem in which items may be fragmented. This can potentially reduce the total number of bins needed for packing the instance. However, since fragmentation incurs overhead, we attempt to avoid it as much as possible. In bin packing with size inc ..."
Abstract

Cited by 14 (3 self)
We consider two variants of the classical bin packing problem in which items may be fragmented. This can potentially reduce the total number of bins needed for packing the instance. However, since fragmentation incurs overhead, we attempt to avoid it as much as possible. In bin packing with size-increasing fragmentation (BPSIF), fragmenting an item increases the input size (due to a header/footer of fixed size that is added to each fragment). In bin packing with size-preserving fragmentation (BPSPF), there is a bound on the total number of fragmented items. These two variants of bin packing capture many practical scenarios, including message transmission in community TV networks, VLSI circuit design and preemptive scheduling on parallel machines with setup times/setup costs. While neither BPSPF nor BPSIF belongs to the class of problems that admit a polynomial time approximation scheme (PTAS), we show in this paper that both problems admit a dual PTAS and an asymptotic PTAS. We also develop for each of the problems a dual asymptotic fully polynomial time approximation scheme (AFPTAS). Our AFPTASs are based on a nonstandard transformation of the mixed packing and covering linear program formulations of our problems into pure covering programs, which enables us to solve these programs efficiently.
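As a toy illustration of the BPSIF model described above (not the paper's approximation schemes), the next-fit-style sketch below packs divisible items into unit bins, where every extra fragment created by a split pays a fixed header; the function name, parameters, and values are ours.

```python
def next_fit_fragment(items, cap=1.0, header=0.05):
    """Pack item sizes into bins of capacity cap, splitting an item across
    bins when it does not fit; each split charges a fixed header to the
    leftover fragment (the BPSIF-style overhead). Returns bin fill levels."""
    bins = [0.0]
    for size in items:
        remaining = size
        while remaining > 1e-12:
            space = cap - bins[-1]
            if space <= header:
                bins.append(0.0)       # leftover space too small to be useful
                continue
            if remaining <= space:
                bins[-1] += remaining  # whole remainder fits here
                remaining = 0.0
            else:
                bins[-1] = cap         # fill the bin with a fragment...
                remaining += header - space  # ...and charge a header to the rest
                bins.append(0.0)
    return bins
```

On three items of size 0.6 this uses two bins instead of the three that plain next-fit without splitting would need, at the price of one header's worth of extra volume.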
Fractional covering with upper bounds on the variables: solving LPs with negative entries
 In Proc. 12th European Symposium on Algorithms (ESA), LNCS 3321
, 2004
"... We present a Lagrangian relaxation technique to solve a class of linear programs with negative coefficients in the objective function and the constraints. We apply this technique to solve (the dual of) covering linear programs with upper bounds on the variables: min{c ⊤ x  Ax ≥ b, x ≤ u, x ≥ 0} wh ..."
Abstract

Cited by 7 (1 self)
We present a Lagrangian relaxation technique to solve a class of linear programs with negative coefficients in the objective function and the constraints. We apply this technique to solve (the dual of) covering linear programs with upper bounds on the variables: min{c^T x | Ax ≥ b, x ≤ u, x ≥ 0}, where c, u ∈ R^m_+ and b ∈ R^n_+, and A ∈ R^{n×m}_+ has nonnegative entries. We obtain a strictly feasible, (1 + ε)-approximate solution by making O(m ε^{-2} log m + min{n, log log C}) calls to an oracle that finds the most-violated constraint. Here C is the largest entry in c or u, m is the number of variables, and n is the number of covering constraints. Our algorithm follows naturally from the algorithm for the fractional packing problem and improves the previous best bound of O(m ε^{-2} log(mC)) given by Fleischer [1]. Also, for fixed ε, if the number of covering constraints is polynomial, our algorithm makes a number of oracle calls that is strongly polynomial.
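The only access the algorithm needs to the LP is through the most-violated-constraint oracle. The sketch below shows that oracle for the covering form min{c^T x | Ax ≥ b, x ≤ u, x ≥ 0}, paired with a deliberately naive greedy loop that patches the returned row; this illustrates the oracle interface only, not the paper's Lagrangian relaxation scheme, and all names are ours.

```python
def most_violated(A, b, x):
    """Oracle: index of the covering row with the largest violation
    b_i - (Ax)_i, or -1 if every constraint is already satisfied."""
    best_i, best_v = -1, 0.0
    for i, row in enumerate(A):
        v = b[i] - sum(a * xj for a, xj in zip(row, x))
        if v > best_v:
            best_i, best_v = i, v
    return best_i

def naive_cover(A, b, c, u, step=0.01):
    """Repeatedly ask the oracle for a violated row and raise the most
    cost-effective variable in that row, respecting the upper bounds u."""
    x = [0.0] * len(c)
    while True:
        i = most_violated(A, b, x)
        if i < 0:
            return x   # all covering constraints satisfied
        # cheapest coverage per unit: minimize c_j / A[i][j] over usable vars
        j = min((j for j in range(len(c)) if A[i][j] > 0 and x[j] < u[j]),
                key=lambda j: c[j] / A[i][j])
        x[j] = min(u[j], x[j] + step)
```

A real solver would replace the fixed step with the multiplicative weight updates analyzed in the paper; the point here is only that the LP data A, b is touched exclusively inside `most_violated`.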
Approximate convex optimization by online game playing
 CoRR
"... Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain a ε approximate solution is proportional to 1 ε2. Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for frac ..."
Abstract

Cited by 6 (2 self)
Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an ε-approximate solution is proportional to 1/ε^2. Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs which runs in 1/ε iterations. The latter algorithm requires solving a convex quadratic program in every iteration, an optimization subroutine which dominates the theoretical running time. We give an algorithm for convex programs with strictly convex constraints which runs in time proportional to 1/ε. The algorithm does not require solving any quadratic program, but uses gradient steps and elementary operations only. Problems which have strictly convex constraints include maximum entropy frequency estimation, portfolio optimization with loss risk constraints, and various computational problems in signal processing. As a side product, we also obtain a simpler version of Bienstock and Iyengar's result for general linear programming, with similar running time. We derive these algorithms using a new framework for deriving convex optimization algorithms from online game playing algorithms, which may be of independent interest.
An Approximation Algorithm for the General Mixed Packing and Covering Problem
 in ESCAPE
"... Abstract. We present a pricedirective decomposition algorithm to compute an approximate solution of the mixed packing and covering problem; it either finds x ∈ B such that f(x) ≤ c(1 + ɛ)a and g(x) ≥ (1 − ɛ)b/c or correctly decides that {x ∈ Bf(x) ≤ a, g(x) ≥ b} = ∅. Heref,g are vectors of M ..."
Abstract

Cited by 4 (1 self)
We present a price-directive decomposition algorithm to compute an approximate solution of the mixed packing and covering problem; it either finds x ∈ B such that f(x) ≤ c(1 + ε)a and g(x) ≥ (1 − ε)b/c, or correctly decides that {x ∈ B | f(x) ≤ a, g(x) ≥ b} = ∅. Here f, g are vectors of M ≥ 2 convex and concave functions, respectively, which are nonnegative on the convex compact set ∅ ≠ B ⊆ R^N; B can be queried by a feasibility oracle or block solver, a, b ∈ R^M_{++}, and c is the block solver's approximation ratio. The algorithm needs only O(M(ln M + ε^{-2} ln ε^{-1})) iterations, a runtime bound independent of c and the input data. Our algorithm is a generalization of [16] and also approximately solves the fractional packing and covering problem where f, g are linear and B is a polytope; there, a width-independent runtime bound is obtained.