Finding and Certifying a Large Hidden Clique in a Semi-Random Graph
, 1999
Abstract

Cited by 68 (12 self)
Alon, Krivelevich and Sudakov (Random Structures and Algorithms, 1998) designed an algorithm based on spectral techniques that almost surely finds a clique of size Ω(√n) hidden in an otherwise random graph. We show that a different algorithm, based on the Lovász theta function, almost surely both finds the hidden clique and certifies its optimality. Our algorithm has the additional advantage of being more robust: it also works in a semi-random hidden clique model, in which an adversary can remove edges from the random portion of the graph.

1 Introduction. A clique in a graph G is a subset of the vertices, every two of which are connected by an edge. The maximum clique problem, that is, finding a clique of maximum size in a graph, is fundamental in the area of combinatorial optimization, and is closely related to the independent set problem (a clique in the edge-complement graph Ḡ), the vertex cover problem (the vertex complement of an independent set) and chromatic...
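The planted-clique model behind this abstract is easy to demonstrate. The numpy sketch below is an illustration only, not the paper's theta-function algorithm nor the exact Alon-Krivelevich-Sudakov procedure: it plants a clique of size k = 6√n in G(n, 1/2), centers the adjacency matrix, and reads a candidate clique off the top eigenvector. The parameters n, k and the thresholding rule are choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 400, 120  # k = 6*sqrt(n), well inside the Omega(sqrt(n)) regime

# Sample G(n, 1/2) and plant a clique on the first k vertices.
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T
clique = np.arange(k)
A[np.ix_(clique, clique)] = 1
np.fill_diagonal(A, 0)

# Center the matrix: +1 for an edge, -1 for a non-edge. The planted clique
# contributes a rank-one signal of strength ~k, versus ~2*sqrt(n) spectral
# noise from the random part, so the top eigenvector leans on the clique.
M = (2 * A - 1).astype(float)
np.fill_diagonal(M, 0)

eigvals, eigvecs = np.linalg.eigh(M)
v = eigvecs[:, np.argmax(eigvals)]

# Guess the clique as the k coordinates with the largest eigenvector weight.
candidates = set(np.argsort(-np.abs(v))[:k].tolist())
overlap = len(candidates & set(clique.tolist()))
```

With k well above √n the candidate set shares most of its vertices with the planted clique; the spectral step is typically followed by a cleanup phase that the sketch omits.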
Algorithmic barriers from phase transitions, preprint
Abstract

Cited by 53 (4 self)
For many random Constraint Satisfaction Problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. Yet, to date, no algorithm is known that uses (2 − ɛ)χ colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of {1, ..., k}^n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ≥ 2χ, but like an error-correcting code for k ≤ (2 − ɛ)χ. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring, and that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated one-step Replica-Symmetry-Breaking hypothesis of statistical physics for random CSPs.
Testing k-wise and almost k-wise independence
 In 39th Annual ACM Symposium on Theory of Computing
, 2007
Abstract

Cited by 32 (11 self)
In this work, we consider the problems of testing whether a distribution over {0, 1}^n is k-wise (resp. (ɛ, k)-wise) independent using samples drawn from that distribution. For the problem of distinguishing k-wise independent distributions from those that are δ-far from k-wise independence in statistical distance, we upper bound the number of required samples by Õ(n^k / δ^2) and lower bound it by Ω(n^((k−1)/2) / δ) (these bounds hold for constant k, and essentially the same bounds hold for general k). To achieve these bounds, we use Fourier analysis to relate a distribution's distance from k-wise independence to its biases, a measure of the parity imbalance it induces on a set of variables. The relationships we derive are tighter than previously known, and may be of independent interest. To distinguish (ɛ, k)-wise independent distributions from those that are δ-far from (ɛ, k)-wise independence in statistical distance, we upper bound the number of required samples by O(k log n / (δ^2 ɛ^2)) and lower bound it by ...
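The "biases" this abstract refers to are the quantities |E[(-1)^(Σ_{i∈S} x_i)]| over small index sets S. A small self-contained sketch (the dimensions and sample counts are arbitrary choices for illustration): a uniform distribution has every parity bias near zero, while a distribution in which one bit copies another has maximal bias on that pair, certifying that it is far from 2-wise independent.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 2  # bits per sample, and the order of independence being tested

def bias(samples, S):
    """Empirical bias of the parity over index set S: |E[(-1)^(sum_{i in S} x_i)]|."""
    parities = (-1) ** samples[:, list(S)].sum(axis=1)
    return abs(parities.mean())

# Uniform samples over {0, 1}^n: every parity bias should be near zero.
uniform = rng.integers(0, 2, size=(20000, n))
max_bias_uniform = max(
    bias(uniform, S)
    for r in range(1, k + 1)
    for S in itertools.combinations(range(n), r)
)

# A distribution far from 2-wise independence: bit 1 copies bit 0, so the
# parity x0 XOR x1 is constant and its bias on {0, 1} is exactly 1.
corr = rng.integers(0, 2, size=(20000, n))
corr[:, 1] = corr[:, 0]
bias_corr = bias(corr, (0, 1))
```

A tester in this style draws samples, estimates all biases up to order k, and rejects when any of them is large; the abstract's bounds concern how many samples that takes.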
The probable value of the Lovász-Schrijver relaxations for maximum independent set
 SIAM Journal on Computing
, 2003
"... independent set ..."
Complexity theoretic lower bounds for sparse principal component detection
 In COLT 2013 – The 26th Conference on Learning Theory
, 2013
Abstract

Cited by 28 (4 self)
In the context of sparse principal component detection, we bring evidence towards the existence of a statistical price to pay for computational efficiency. We measure the performance of a test by the smallest signal strength that it can detect, and we propose a computationally efficient method based on semidefinite programming. We also prove that the statistical performance of this test cannot be strictly improved by any computationally efficient method. Our results can be viewed as complexity-theoretic lower bounds, conditional on the assumption that some instances of the planted clique problem cannot be solved in randomized polynomial time.
Public Key Cryptography from Different Assumptions
, 2008
Abstract

Cited by 24 (4 self)
We construct a new public-key encryption scheme based on two assumptions: 1. One can obtain a pseudorandom generator with small locality by connecting the outputs to the inputs using any sufficiently good unbalanced expander. 2. It is hard to distinguish between a random graph that is such an expander and a random graph where a (planted) random logarithmic-sized subset S of the outputs is connected to fewer than |S| inputs. The validity and strength of the assumptions raise interesting new algorithmic and pseudorandomness questions, and we explore their relation to the current state of the art.
Finding Hidden Cliques in Linear Time with High Probability
Abstract

Cited by 20 (0 self)
We are given a graph G with n vertices, where a random subset of k vertices has been made into a clique, and the remaining edges are chosen independently with probability 1/2. This random graph model is denoted G(n, 1/2, k). The hidden clique problem is to design an algorithm that finds the k-clique in polynomial time with high probability. An algorithm due to Alon, Krivelevich and Sudakov [3] uses spectral techniques to find the hidden clique with high probability when k = c√n for a sufficiently large constant c > 0. Recently, an algorithm that solves the same problem was proposed by Feige and Ron [14]. It has the advantages of being simpler and more intuitive, and of an improved running time of O(n^2). However, the analysis in [14] gives a success probability of only 2/3. In this paper we present a new algorithm for finding hidden cliques that both runs in time O(n^2) and has a failure probability that is less than polynomially small.
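The G(n, 1/2, k) model can be exercised with a toy peeling heuristic, a simplified cousin of the iterative-removal idea in [14] rather than the paper's actual algorithm (n, k and the stopping rule are choices made for this sketch): repeatedly delete a minimum-degree vertex until the survivors form a clique. In this regime every clique vertex keeps roughly k/2 more neighbors than a non-clique vertex, so the peeling strips the random part first.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 400, 120

# Sample G(n, 1/2, k): a random graph with a clique planted on k random vertices.
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T
planted = rng.choice(n, size=k, replace=False)
A[np.ix_(planted, planted)] = 1
np.fill_diagonal(A, 0)

# Greedy peeling: drop a minimum-degree vertex until the rest is a clique.
alive = np.ones(n, dtype=bool)
while True:
    idx = np.flatnonzero(alive)
    sub = A[np.ix_(idx, idx)]
    m = len(idx)
    if sub.sum() == m * (m - 1):  # all pairs adjacent: remaining set is a clique
        break
    alive[idx[np.argmin(sub.sum(axis=1))]] = False

recovered = set(idx.tolist())
```

For k this far above √n the peeling typically halts exactly on the planted clique; the interest of the papers above is the much harder regime where k is close to c√n.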