Results 1–10 of 15
Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?
, 2005
Abstract
Cited by 173 (24 self)
In this paper we show a reduction from the Unique Games problem to the problem of approximating MAX-CUT to within a factor of α_GW + ε, for all ε > 0; here α_GW ≈ .878567 denotes the approximation ratio achieved by the Goemans–Williamson algorithm [25]. This implies that if the Unique Games …
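The Goemans–Williamson ratio quoted above comes from rounding a semidefinite-programming relaxation with a random hyperplane. A minimal sketch of just the rounding step (the SDP solve is skipped, and the hand-picked circular embedding and all names here are our own illustration, not the paper's):

```python
import numpy as np

def hyperplane_round(vectors, rng):
    # Pick a random hyperplane through the origin (normal g) and put
    # vertex i on the side given by sign(<g, v_i>).
    g = rng.standard_normal(vectors.shape[1])
    return np.sign(vectors @ g)

def cut_value(edges, sides):
    # Number of edges crossing the partition.
    return sum(1 for u, v in edges if sides[u] != sides[v])

# Toy instance: the 5-cycle, whose maximum cut is 4. We skip the SDP
# solve and hand-pick a circular embedding that makes adjacent
# vertices nearly antipodal (144 degrees apart).
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
angles = 2 * np.pi * 2 * np.arange(n) / n
vectors = np.column_stack([np.cos(angles), np.sin(angles)])

rng = np.random.default_rng(0)
best = max(cut_value(edges, hyperplane_round(vectors, rng))
           for _ in range(100))
print(best)  # the rounding finds the optimal cut of 4
```

On this embedding every hyperplane already yields a 2–3 split of the cycle that cuts 4 edges, which is why the toy run succeeds deterministically.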
Conditional hardness for approximate coloring
 In STOC 2006
, 2006
Abstract
Cited by 38 (12 self)
We study the APPROXIMATE-COLORING(q, Q) problem: Given a graph G, decide whether χ(G) ≤ q or χ(G) ≥ Q (where χ(G) is the chromatic number of G). We derive conditional hardness for this problem for any constant 3 ≤ q < Q. For q ≥ 4, our result is based on Khot’s 2-to-1 conjecture [Khot’02]. For q = 3, we base our hardness result on a certain ‘⊲<-shaped’ variant of his conjecture. We also prove that the problem ALMOST-3-COLORING_ε is hard for any constant ε > 0, assuming Khot’s Unique Games conjecture. This is the problem of deciding, for a given graph, between the case where one can 3-color all but an ε fraction of the vertices without monochromatic edges, and the case where the graph contains no independent set of relative size at least ε. Our result is based on bounding various generalized noise-stability quantities using the invariance principle of Mossel et al. [MOO’05].
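For scale, the gap problem is trivial on tiny graphs by brute force; the point of the conditional hardness above is that no efficient algorithm can distinguish the two cases on large graphs. A small reference sketch (our own illustration) of exact chromatic-number computation:

```python
from itertools import product

def chromatic_number(n, edges):
    # Brute-force chi(G): the smallest q such that some assignment of
    # q colors to the n vertices leaves no monochromatic edge.
    # Exponential time -- usable only for very small graphs.
    for q in range(1, n + 1):
        for coloring in product(range(q), repeat=n):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return q
    return n

# The 5-cycle: odd cycles are not 2-colorable, so chi = 3.
edges = [(i, (i + 1) % 5) for i in range(5)]
print(chromatic_number(5, edges))  # 3
```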
A brief introduction to Fourier analysis on the Boolean cube
 Theory of Computing Library – Graduate Surveys
, 2008
Abstract
Cited by 21 (3 self)
We give a brief introduction to the basic notions of Fourier analysis on the …
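The survey's central object is the Fourier expansion f = Σ_S f̂(S) χ_S over {−1, 1}^n. A sketch (our own, not the survey's code) computing the coefficients of 3-bit majority by direct enumeration:

```python
from itertools import product, combinations

def fourier_coefficients(f, n):
    # f_hat(S) = E_x[f(x) * chi_S(x)], where chi_S(x) = prod_{i in S} x_i
    # and x is uniform over {-1, 1}^n.
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            total = 0
            for x in points:
                chi = 1
                for i in S:
                    chi *= x[i]
                total += f(x) * chi
            coeffs[S] = total / len(points)
    return coeffs

def maj3(x):
    return 1 if sum(x) > 0 else -1

coeffs = fourier_coefficients(maj3, 3)
print(coeffs[(0,)])       # each degree-1 coefficient of Maj3 is 0.5
print(coeffs[(0, 1, 2)])  # the top coefficient is -0.5
# Parseval: for Boolean-valued f the squared coefficients sum to 1.
assert abs(sum(c * c for c in coeffs.values()) - 1) < 1e-12
```

This recovers the well-known expansion Maj3 = (1/2)(x_1 + x_2 + x_3) − (1/2) x_1 x_2 x_3.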
Testing halfspaces
 In Proc. 20th Annual Symposium on Discrete Algorithms (SODA)
, 2009
Abstract
Cited by 19 (9 self)
This paper addresses the problem of testing whether a Boolean-valued function f is a halfspace, i.e. a function of the form f(x) = sgn(w·x − θ). We consider halfspaces over the continuous domain R^n (endowed with the standard multivariate Gaussian distribution) as well as halfspaces over the Boolean cube {−1, 1}^n (endowed with the uniform distribution). In both cases we give an algorithm that distinguishes halfspaces from functions that are ε-far from any halfspace using only poly(1/ε) queries, independent of the dimension n. Two simple structural results about halfspaces are at the heart of our approach for the Gaussian distribution: the first gives an exact relationship between the expected value of a halfspace f and the sum of the squares of f’s degree-1 Hermite coefficients, and the second shows that any function that approximately satisfies this relationship is close to a halfspace. We prove analogous results for the Boolean cube {−1, 1}^n (with Fourier coefficients in place of Hermite coefficients) for balanced halfspaces in which all degree-1 Fourier coefficients are small. Dealing with general halfspaces over {−1, 1}^n poses significant additional complications and requires other ingredients. These include “cross-consistency” versions of the results mentioned above for pairs of halfspaces with the same weights but different thresholds; new structural results relating the largest degree-1 Fourier coefficient and the largest weight in unbalanced halfspaces; and algorithmic techniques from recent work on testing juntas [FKR+02].
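The connection between a halfspace and its degree-1 Fourier coefficients can be probed empirically from samples. A hedged Monte Carlo sketch (our illustration, not the paper's tester) estimating Σ_i f̂({i})^2 for the 5-bit majority halfspace:

```python
import random

def estimate_degree1_weight(f, n, samples, rng):
    # Monte Carlo estimate of sum_i f_hat({i})^2, where
    # f_hat({i}) = E_x[f(x) * x_i] over uniform x in {-1, 1}^n.
    sums = [0.0] * n
    for _ in range(samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        fx = f(x)
        for i in range(n):
            sums[i] += fx * x[i]
    return sum((s / samples) ** 2 for s in sums)

# A balanced halfspace: 5-bit majority. Each degree-1 coefficient
# equals C(4, 2) / 2^4 = 0.375, so the true degree-1 weight is
# 5 * 0.375^2 = 0.703125; the estimate should land nearby.
n = 5
maj5 = lambda x: 1 if sum(x) > 0 else -1
rng = random.Random(1)
w = estimate_degree1_weight(maj5, n, 20000, rng)
print(w)
```

With 20,000 samples each coefficient estimate is accurate to within a few thousandths, so the estimated weight lands close to 0.703.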
On the Noise Sensitivity of Monotone Functions
, 2003
Abstract
Cited by 8 (4 self)
It is known that for all monotone functions f : {0, 1}^n → {0, 1}, if x ∈ {0, 1}^n is chosen uniformly at random and y is obtained from x by flipping each of the bits of x independently with probability ε = n^−α, then P[f(x) ≠ f(y)] < cn^(−α+1/2), for some c > 0. Previously, the best construction of monotone functions satisfying P[f_n(x) ≠ f_n(y)] ≥ δ, where 0 < δ < 1/2, required ε ≥ c(δ)n^−α, where α = 1 − ln 2/ln 3 = 0.36907... and c(δ) > 0. We improve this result by achieving, for every 0 < δ < 1/2, P[f_n(x) ≠ f_n(y)] ≥ δ, with:
• ε = c(δ)n^−α for any α < 1/2, using the recursive majority function with arity k = k(α);
• ε = c(δ)n^(−1/2) log^t n for t = log_2 √(π/2) = .3257..., using an explicit recursive majority function with increasing arities; and
• ε = c(δ)n^(−1/2), non-constructively, following a probabilistic CNF construction due to Talagrand.
We also study the problem of achieving the best dependence on δ in the case that the noise rate ε is at least a small constant; the results we obtain are tight to within logarithmic factors.
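The recursive majority construction is easy to simulate. A hedged Monte Carlo sketch (our own, with small illustrative parameters) estimating the noise sensitivity P[f(x) ≠ f(y)] of height-4 ternary recursive majority:

```python
import random

def rec_maj3(x):
    # Recursive 3-wise majority on n = 3^h bits: repeatedly replace
    # each consecutive triple by its majority until one bit remains.
    while len(x) > 1:
        x = [1 if a + b + c > 0 else -1
             for a, b, c in zip(x[0::3], x[1::3], x[2::3])]
    return x[0]

def noise_sensitivity(f, n, eps, trials, rng):
    # Monte Carlo estimate of P[f(x) != f(y)], where x is uniform in
    # {-1, 1}^n and y flips each bit of x independently with prob eps.
    disagree = 0
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        y = [-b if rng.random() < eps else b for b in x]
        disagree += f(x) != f(y)
    return disagree / trials

rng = random.Random(0)
n = 3 ** 4  # height-4 recursive majority on 81 bits
ns = noise_sensitivity(rec_maj3, n, 0.1, 5000, rng)
print(ns)
```

Iterating the one-level flip recurrence p → (3/2)p(1 − p) + p^3 from p = 0.1 four times gives ≈ 0.274, so the estimate should land near that value.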
Quantitative Relation Between Noise Sensitivity and Influences
, 2010
Abstract
A Boolean function f : {0, 1}^n → {0, 1} is said to be noise sensitive if inserting a small random error in its argument makes the value of the function almost unpredictable. Benjamini, Kalai and Schramm [BKS99] showed that if the sum of squares of influences of f is close to zero then f must be noise sensitive. We show a quantitative version of this result which does not depend on n, and prove that it is tight for certain parameters. Our results hold also for a general product measure µ_p on the discrete cube, as long as log 1/p ≪ log n. We note that in [BKS99], a quantitative relation between the sum of squares of the influences and the noise sensitivity was also shown, but only when the sum of squares is bounded by n^−c for a constant c. Our results require a generalization of a lemma of Talagrand on the Fourier coefficients of monotone Boolean functions. In order to achieve it, we present a considerably shorter proof of Talagrand’s lemma, which easily generalizes in various directions, including non-monotone functions.
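Influences are cheap to compute exactly for small n. The sketch below (our illustration) contrasts 3-bit majority, where every influence is 1/2, with a dictator, whose single influence of 1 keeps the sum of squares large:

```python
from itertools import product

def influences(f, n):
    # Inf_i(f) = P_x[f(x) != f(x with bit i flipped)],
    # with x uniform over {-1, 1}^n.
    points = list(product([-1, 1], repeat=n))
    infs = []
    for i in range(n):
        flips = 0
        for x in points:
            y = list(x)
            y[i] = -y[i]
            flips += f(x) != f(y)
        infs.append(flips / len(points))
    return infs

maj3 = lambda x: 1 if sum(x) > 0 else -1
dictator = lambda x: x[0]
print(influences(maj3, 3))      # [0.5, 0.5, 0.5] -> sum of squares 0.75
print(influences(dictator, 3))  # [1.0, 0.0, 0.0] -> sum of squares 1.0
```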
THE CHOW PARAMETERS PROBLEM
Abstract
function is uniquely determined by its degree-0 and degree-1 Fourier coefficients. These numbers became known as the Chow Parameters. Providing an algorithmic version of Chow’s Theorem—i.e., efficiently constructing a representation of a threshold function given its Chow Parameters—has remained open ever since. This problem has received significant study in the fields of circuit complexity, game theory and the design of voting systems, and learning theory. In this paper we effectively solve the problem, giving a randomized PTAS with the following behavior: Given the Chow Parameters of a Boolean threshold function f over n bits and any constant ε > 0, the algorithm runs in time O(n^2 log^2 n) and with high probability outputs a representation of a threshold function f′ which is ε-close to f. Along the way we prove several new results of independent interest about Boolean threshold functions. In addition to various structural results, these include Õ(n^2)-time learning algorithms for threshold functions under the uniform distribution in the following models: (i) the Restricted Focus of Attention model, answering an open question of Birkendorf et al.; (ii) an agnostic-type model, which contrasts with recent results of Guruswami and Raghavendra, who show NP-hardness for the problem under general distributions; and (iii) the PAC model, with constant ε. Our Õ(n^2)-time algorithm substantially improves on the previous best known running time and nearly matches the Ω(n^2) bits of training data that any successful learning algorithm must use. Key words. Chow Parameters, threshold functions, approximation, learning theory
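The Chow Parameters are exactly the degree-0 and degree-1 Fourier coefficients. The sketch below (weights chosen by us for illustration) computes them by enumeration and shows that two different weight vectors realizing the same threshold function share the same parameters, as Chow's Theorem requires:

```python
from itertools import product

def chow_parameters(f, n):
    # The Chow Parameters of f: {-1, 1}^n -> {-1, 1} are
    # (E[f], E[f(x) x_1], ..., E[f(x) x_n]) over uniform x.
    points = list(product([-1, 1], repeat=n))
    chow = [0.0] * (n + 1)
    for x in points:
        fx = f(x)
        chow[0] += fx
        for i in range(n):
            chow[i + 1] += fx * x[i]
    return [c / len(points) for c in chow]

def threshold(w, theta):
    # The threshold function sgn(w.x - theta) (ties mapped to -1).
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else -1

# Weights (3, 2, 2) and (1, 1, 1) with theta = 0 both realize 3-bit
# majority, so their Chow Parameters coincide -- and by Chow's Theorem
# the parameters pin down the function among all threshold functions.
p1 = chow_parameters(threshold([3, 2, 2], 0), 3)
p2 = chow_parameters(threshold([1, 1, 1], 0), 3)
print(p1)  # [0.0, 0.5, 0.5, 0.5]
assert p1 == p2
```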