Results 11 – 20 of 49
On the unique games conjecture
In FOCS, 2005
"... This article surveys recently discovered connections between the Unique Games Conjecture and computational complexity, algorithms, discrete Fourier analysis, and geometry. 1 ..."
Abstract

Cited by 13 (0 self)
This article surveys recently discovered connections between the Unique Games Conjecture and computational complexity, algorithms, discrete Fourier analysis, and geometry.
Graphs with tiny vector chromatic numbers and huge chromatic numbers
In SIAM J. Comput.
"... Abstract. Karger, Motwani, and Sudan [J. ACM, 45 (1998), pp. 246–265] introduced the notion of a vector coloring of a graph. In particular, they showed that every kcolorable graph is also vector kcolorable, and that for constant k, graphs that are vector kcolorable can be colored by roughly ∆ 1−2 ..."
Abstract

Cited by 13 (2 self)
Abstract. Karger, Motwani, and Sudan [J. ACM, 45 (1998), pp. 246–265] introduced the notion of a vector coloring of a graph. In particular, they showed that every k-colorable graph is also vector k-colorable, and that for constant k, graphs that are vector k-colorable can be colored by roughly ∆^{1−2/k} colors. Here ∆ is the maximum degree in the graph and is assumed to be of the order of n^δ for some 0 < δ < 1. Their results play a major role in the best approximation algorithms used for coloring and for maximum independent sets. We show that for every positive integer k there are graphs that are vector k-colorable but do not have independent sets significantly larger than n/∆^{1−2/k} (and hence cannot be colored with significantly fewer than ∆^{1−2/k} colors). For k = O(log n / log log n) we show vector k-colorable graphs that do not have independent sets of size (log n)^c, for some constant c. This shows that the vector chromatic number does not approximate the chromatic number within factors better than n/polylog(n). As part of our proof, we analyze “property testing” algorithms that distinguish between graphs that have an independent set of size n/k, and graphs that are “far” from having such an independent set. Our bounds on the sample size improve previous bounds of Goldreich, Goldwasser, and Ron [J. ACM, 45 (1998), pp. 653–750] for this problem.
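The vector coloring condition in the abstract is easy to state concretely: a vector k-coloring assigns a unit vector to each vertex so that vectors of adjacent vertices have inner product at most −1/(k−1). A minimal sketch checking this condition (our own illustration; the function name is made up, not from the paper):

```python
import math

def is_vector_k_coloring(vectors, edges, k, tol=1e-9):
    """Check the Karger-Motwani-Sudan condition: every vector is a unit
    vector, and <v_i, v_j> <= -1/(k-1) for every edge (i, j)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    if any(abs(dot(v, v) - 1.0) > tol for v in vectors.values()):
        return False
    bound = -1.0 / (k - 1)
    return all(dot(vectors[i], vectors[j]) <= bound + tol for i, j in edges)

# A triangle is 3-colorable, hence vector 3-colorable: three unit
# vectors at 120-degree angles have pairwise inner product -1/2.
tri = {i: (math.cos(2 * math.pi * i / 3), math.sin(2 * math.pi * i / 3))
       for i in range(3)}
print(is_vector_k_coloring(tri, [(0, 1), (1, 2), (0, 2)], k=3))  # True
```

The same three vectors fail the k = 2 test (−1/2 > −1), matching the fact that a triangle is not 2-colorable.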
Bounds On Contention Management Algorithms
"... We present two new algorithms for contention management in transactional memory, the deterministic algorithm CommitRounds and the randomized algorithm RandomizedRounds. Our randomized algorithm is efficient: in some notorious problem instances (e.g., dining philosophers) it is exponentially faster t ..."
Abstract

Cited by 12 (6 self)
We present two new algorithms for contention management in transactional memory: the deterministic algorithm CommitRounds and the randomized algorithm RandomizedRounds. Our randomized algorithm is efficient: in some notorious problem instances (e.g., dining philosophers) it is exponentially faster than prior work from a worst-case perspective. Both algorithms are (i) local and (ii) starvation-free. Our algorithms are local because they do not use global synchronization data structures (e.g., a shared counter); hence they do not introduce additional resource conflicts which eventually might limit scalability. Our algorithms are starvation-free because each transaction is guaranteed to complete. Prior work sometimes features either (i) or (ii), but not both. To analyze our algorithms (from a worst-case perspective) we introduce a new measure of complexity that depends on the number of actual conflicts only. In addition, we show that even a non-constant approximation of the length of an optimal (shortest) schedule of a set of transactions is NP-hard – even if all transactions are known in advance and do not alter their resource requirements. Furthermore, in case the needed resources of a transaction vary over time, such that for a transaction the number of conflicting transactions increases by a factor k, the competitive ratio of any contention manager is Ω(k) for k < √m, where m denotes the number of cores.
Coloring Unstructured Wireless Multi-Hop Networks
In PODC, 2009
"... We present a randomized coloring algorithm for the unstructured radio network model, a model comprising autonomous nodes, asynchronous wakeup, no collision detection and an unknown but geometric network topology. The current stateoftheart coloring algorithm needs with high probability O(∆·log n) ..."
Abstract

Cited by 10 (5 self)
We present a randomized coloring algorithm for the unstructured radio network model, a model comprising autonomous nodes, asynchronous wake-up, no collision detection, and an unknown but geometric network topology. The current state-of-the-art coloring algorithm needs with high probability O(∆ · log n) time and uses O(∆) colors, where n and ∆ are the number of nodes in the network and the maximum degree, respectively; this algorithm requires knowledge of a linear bound on n and ∆. We improve this result in three ways. Firstly, we improve the time complexity: instead of the logarithmic factor we just need a polylogarithmic additive term; more specifically, our time complexity is O(∆ + log ∆ · log n) given an estimate of n and ∆, and O(∆ + log^2 n) without knowledge of ∆. Secondly, our vertex coloring algorithm needs ∆ + 1 colors only. Thirdly, our algorithm manages to do a distance-d coloring with asymptotically optimal O(∆) colors for a constant d.
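For comparison, ∆ + 1 colors is also what the classic sequential greedy algorithm achieves in a centralized setting, since a vertex of degree at most ∆ always has a free color among ∆ + 1 candidates. A toy sketch of that baseline (our own illustration, not the distributed algorithm of the paper):

```python
def greedy_coloring(adj):
    """Sequential greedy: each vertex takes the smallest color not used
    by an already-colored neighbor; with maximum degree D this never
    needs more than D + 1 colors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A 5-cycle has maximum degree 2, so greedy uses at most 3 colors.
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring = greedy_coloring(cycle5)
print(max(coloring.values()) + 1)  # 3
```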
Approximability of sparse integer programs
 In Proc. 17th ESA
, 2009
"... The main focus of this paper is a pair of new approximation algorithms for sparse integer programs. First, for covering integer programs {min cx: Ax ≥ b,0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a kapproximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ..."
Abstract

Cited by 8 (0 self)
The main focus of this paper is a pair of new approximation algorithms for sparse integer programs. First, for covering integer programs {min cx : Ax ≥ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ε > 0, unless P = NP this ratio cannot be improved to k − 1 − ε, and under the unique games conjecture this ratio cannot be improved to k − ε. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max cx : Ax ≤ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per column, we give a 2^k · k^2-approximation algorithm. This is the first polynomial-time approximation algorithm for this problem with approximation ratio depending only on k, for any k > 1. Our approach starts from iterated LP relaxation, and then uses probabilistic and greedy methods to recover a feasible solution. Note added after publication: this version includes subsequent developments: an O(k^2)-approximation for the latter problem using the iterated rounding framework, and several literature reference updates including an O(k)-approximation for the same problem by Bansal et al.
On Agnostic Learning of Parities, Monomials and Halfspaces
, 2006
"... We study the learnability of several fundamental concept classes in the agnostic learning framework of Haussler [Hau92] and Kearns et al. [KSS94]. We show that under the uniform distribution, agnostically learning parities reduces to learning parities with random classification noise, commonly refer ..."
Abstract

Cited by 7 (1 self)
We study the learnability of several fundamental concept classes in the agnostic learning framework of Haussler [Hau92] and Kearns et al. [KSS94]. We show that under the uniform distribution, agnostically learning parities reduces to learning parities with random classification noise, commonly referred to as the noisy parity problem. Together with the parity learning algorithm of Blum et al. [BKW03], this gives the first non-trivial algorithm for agnostic learning of parities. We use similar techniques to reduce learning of two other fundamental concept classes under the uniform distribution to learning of noisy parities. Namely, we show that learning of DNF expressions reduces to learning noisy parities of just a logarithmic number of variables, and learning of k-juntas reduces to learning noisy parities of k variables. We give essentially optimal hardness results for agnostic learning of monomials over {0, 1}^n and halfspaces over Q^n. We show that for any constant ε, finding a monomial (halfspace) that agrees with an unknown function on a 1/2 + ε fraction of examples is NP-hard even when there exists a monomial (halfspace) that agrees with the unknown function on a 1 − ε fraction of examples. This resolves an open question due to Blum and significantly improves on a number of previous hardness results for these problems. We extend these results to ε = 2^{−log^{1−λ} n} (ε = 2^{−√log n} in the case of halfspaces) for any constant λ > 0 under stronger complexity assumptions.
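To make the concept class concrete: a parity χ_S(x) is the XOR of the input bits indexed by S ⊆ [n]. In the noise-free, non-agnostic setting a consistent parity can even be recovered by exhaustive search over all 2^n subsets; a toy sketch of ours (this is neither the reduction nor the Blum et al. algorithm the abstract refers to):

```python
from itertools import combinations, product

def parity(S, x):
    """chi_S(x): XOR of the bits of x indexed by S."""
    b = 0
    for i in S:
        b ^= x[i]
    return b

def brute_force_learn_parity(samples, n):
    """Return a parity agreeing with the most samples, by trying all 2^n
    subsets -- exponential, just to make the concept class concrete."""
    best_S, best_agree = (), -1
    for r in range(n + 1):
        for S in combinations(range(n), r):
            agree = sum(parity(S, x) == y for x, y in samples)
            if agree > best_agree:
                best_S, best_agree = S, agree
    return best_S

# Noise-free labels from the hidden parity over bits {0, 2}: exhaustive
# search recovers it exactly from the full truth table on n = 4 bits.
target = (0, 2)
data = [(x, parity(target, x)) for x in product((0, 1), repeat=4)]
print(brute_force_learn_parity(data, 4))  # (0, 2)
```

With random classification noise the "agree with the most samples" criterion is exactly what makes the noisy parity problem hard.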
Inapproximability Results for Equations over Finite Groups
, 2002
"... An equation over a finite group G is an expression of form w_1 w_2... w_k = 1_G, where each w_i is a variable, an inverted variable, or a constant from G; such an equation is satisfiable if there is a setting of the variables to values in G so that the equality is realized. We study the problem of s ..."
Abstract

Cited by 6 (0 self)
An equation over a finite group G is an expression of the form w_1 w_2 ... w_k = 1_G, where each w_i is a variable, an inverted variable, or a constant from G; such an equation is satisfiable if there is a setting of the variables to values in G so that the equality is realized. We study the problem of simultaneously satisfying a family of equations over a finite group G and show that it is NP-hard to approximate the number of simultaneously satisfiable equations to within |G| − ε for any ε > 0. This generalizes results of Håstad (2001, J. ACM, 48 (4)), who established similar bounds under the added condition that the group G is Abelian.
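For tiny groups the optimization problem is easy to state operationally; the sketch below (our own encoding and names, not from the paper) brute-forces the maximum number of simultaneously satisfiable equations over Z_3, written additively:

```python
from itertools import product

def max_satisfiable(equations, elements, mul, inv, identity):
    """Try every assignment of variables to group elements and return the
    maximum number of equations satisfied simultaneously.  Each equation
    is a list of terms ('var', x), ('inv', x), or ('const', g)."""
    variables = sorted({x for eq in equations
                        for kind, x in eq if kind != 'const'})
    best = 0
    for vals in product(elements, repeat=len(variables)):
        env = dict(zip(variables, vals))
        sat = 0
        for eq in equations:
            acc = identity
            for kind, x in eq:
                g = env[x] if kind == 'var' else (
                    inv[env[x]] if kind == 'inv' else x)
                acc = mul[acc][g]
            sat += acc == identity
        best = max(best, sat)
    return best

# Z_3 written additively: the equations x + y = 0 and x + x = 0 are
# simultaneously satisfied by x = y = 0.
mul = {a: {b: (a + b) % 3 for b in range(3)} for a in range(3)}
inv = {a: (-a) % 3 for a in range(3)}
eqs = [[('var', 'x'), ('var', 'y')], [('var', 'x'), ('var', 'x')]]
print(max_satisfiable(eqs, range(3), mul, inv, 0))  # 2
```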
Hypergraph list coloring and Euclidean Ramsey Theory
, 2010
"... A hypergraph is simple if it has no two edges sharing more than a single vertex. It is slist colorable (or schoosable) if for any assignment of a list of s colors to each of its vertices, there is a vertex coloring assigning to each vertex a color from its list, so that no edge is monochromatic. W ..."
Abstract

Cited by 4 (2 self)
A hypergraph is simple if it has no two edges sharing more than a single vertex. It is s-list colorable (or s-choosable) if for any assignment of a list of s colors to each of its vertices, there is a vertex coloring assigning to each vertex a color from its list, so that no edge is monochromatic. We prove that for every positive integer r, there is a function d_r(s) such that no r-uniform simple hypergraph with average degree at least d_r(s) is s-list-colorable. This extends a similar result for graphs, due to the first author, but does not give as good estimates of d_r(s) as are known for d_2(s), since our proof only shows that for each fixed r ≥ 2, d_r(s) ≤ 2^{c_r s^{r−1}}. We use the result to prove that for any finite set of points X in the plane, and for any integer s, one can assign a list of s distinct colors to each point of the plane, so that any coloring of the plane that colors each point by a color from its list contains a monochromatic isometric copy of X.
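The list-coloring condition itself is straightforward to check by brute force on small instances; a sketch of ours (names and encoding are illustrative, not from the paper):

```python
from itertools import product

def list_colorable(vertices, edges, lists):
    """Brute force: is there a choice of one color from each vertex's
    list that leaves no hyperedge monochromatic?"""
    for choice in product(*(lists[v] for v in vertices)):
        col = dict(zip(vertices, choice))
        if all(len({col[v] for v in e}) > 1 for e in edges):
            return True
    return False

# A single 3-uniform edge whose three vertices share the same 1-element
# list is forced to be monochromatic; one differing list fixes it.
print(list_colorable('abc', [('a', 'b', 'c')],
                     {'a': [1], 'b': [1], 'c': [1]}))  # False
print(list_colorable('abc', [('a', 'b', 'c')],
                     {'a': [1], 'b': [1], 'c': [2]}))  # True
```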
Cache-to-Cache: Could ISPs Cooperate to Decrease Peer-to-peer Content Distribution Costs?
, 2009
"... We consider whether cooperative caching may reduce the transit traffic costs of Internet service providers (ISPs) due to peertopeer (P2P) content distribution systems. We formulate two game theoretic models for cooperative caching, one in which ISPs follow their selfish interests, and one in which ..."
Abstract

Cited by 4 (0 self)
We consider whether cooperative caching may reduce the transit traffic costs that Internet service providers (ISPs) incur due to peer-to-peer (P2P) content distribution systems. We formulate two game-theoretic models for cooperative caching, one in which ISPs follow their selfish interests, and one in which they act altruistically. We show the existence of pure-strategy Nash equilibria for both games, and evaluate the gains of cooperation on various network topologies, among them the AS-level map of Northern Europe, using measured traces of P2P content popularity. We find that cooperation can lead to significant improvements in cache efficiency with little communication overhead, even if ISPs follow their selfish interests.