Results 1 - 10 of 2,466
Uniform hardness amplification in NP via monotone codes
- Electronic Colloquium on Computational Complexity, 2006
"... We consider the problem of amplifying uniform average-case hardness of languages in NP, where hardness is with respect to BPP algorithms. We introduce the notion of monotone errorcorrecting codes, and show that hardness amplification for NP is essentially equivalent to constructing efficiently local ..."
Cited by 6 (1 self)
Loopy belief propagation for approximate inference: An empirical study
- In Proceedings of Uncertainty in AI, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" -the use of Pearl's polytree algorithm in a Bayesian network with loops -can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performanc ..."
Cited by 676 (15 self)
The task of calculating posterior marginals on nodes in an arbitrary Bayesian network is known to be NP-hard. In this paper we investigate the approximation performance of "loopy belief propagation". This refers to using the well-known Pearl polytree algorithm [12] on a Bayesian network ...
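The snippet describes "loopy" belief propagation as simply running Pearl's sum-product (polytree) algorithm on a graph that contains cycles and reading off the resulting beliefs as approximate marginals. Below is a minimal sketch of that idea for a pairwise binary model; the data layout, the undamped sequential updates, and the convergence test are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def loopy_bp(unary, pairwise, edges, n_iters=50, tol=1e-6):
    """Sum-product loopy BP on a pairwise binary model.
    unary[i]    -- length-2 potential for variable i
    pairwise[e] -- 2x2 potential for edge e = (i, j), indexed [state_i, state_j]
    edges       -- list of (i, j) pairs; the graph may contain loops
    """
    msgs, pot = {}, {}
    neighbors = {i: [] for i in range(len(unary))}
    for e, (i, j) in enumerate(edges):
        msgs[(i, j)] = np.ones(2)               # directed messages, uniform init
        msgs[(j, i)] = np.ones(2)
        pot[(i, j)] = np.asarray(pairwise[e], dtype=float)
        pot[(j, i)] = pot[(i, j)].T
        neighbors[i].append(j)
        neighbors[j].append(i)

    for _ in range(n_iters):
        max_diff = 0.0
        for (i, j) in list(msgs):
            # unary potential times all incoming messages except the one from j
            prod = np.asarray(unary[i], dtype=float).copy()
            for k in neighbors[i]:
                if k != j:
                    prod = prod * msgs[(k, i)]
            new = pot[(i, j)].T @ prod          # marginalise out x_i
            new = new / new.sum()               # normalise for numerical stability
            max_diff = max(max_diff, float(np.abs(new - msgs[(i, j)]).max()))
            msgs[(i, j)] = new
        if max_diff < tol:                      # convergence is NOT guaranteed on loopy graphs
            break

    # approximate posterior marginals ("beliefs")
    beliefs = []
    for i in range(len(unary)):
        b = np.asarray(unary[i], dtype=float).copy()
        for k in neighbors[i]:
            b = b * msgs[(k, i)]
        beliefs.append(b / b.sum())
    return beliefs

# Toy usage: a 3-cycle with attractive pairwise potentials.
unary = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.4, 0.6])]
same = np.array([[2.0, 1.0], [1.0, 2.0]])
print(loopy_bp(unary, [same, same, same], edges=[(0, 1), (1, 2), (2, 0)]))
```

On tree-structured graphs this reduces to Pearl's exact algorithm; on loopy graphs the fixed point, when one is reached, is only an approximation, which is exactly the regime the paper studies empirically.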
On uniform amplification of hardness in NP
- In Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, 2005
"... We continue the study of amplification of average-case complexity within NP, and we focus on the uniform case. We prove that if every problem in NP admits an efficient uniform algorithm that (averaged over random inputs and over the internal coin tosses of the algorithm) succeeds with probability at ..."
Cited by 24 (3 self)
Iterative hard thresholding for compressed sensing
- Appl. Comp. Harm. Anal.
"... Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery probl ..."
Cited by 329 (18 self)
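The algorithm named in the title is the iteration x_{n+1} = H_s(x_n + Φᵀ(y − Φ x_n)), where H_s keeps the s largest-magnitude entries and zeroes the rest. A toy sketch follows; the unit step size, the stopping rule, and the random test instance are simplifying assumptions, and the recovery guarantees analysed in the paper depend on properties of Φ (restricted-isometry-type conditions) that this sketch does not verify.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    if s > 0:
        idx = np.argpartition(np.abs(x), -s)[-s:]
        out[idx] = x[idx]
    return out

def iht(Phi, y, s, n_iters=200, tol=1e-8):
    """Recover an s-sparse x from measurements y ~= Phi @ x via iterative hard thresholding."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        x_new = hard_threshold(x + Phi.T @ (y - Phi @ x), s)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: random Gaussian measurements of a sparse vector.
rng = np.random.default_rng(0)
n, m, s = 256, 100, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)      # roughly normalised measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x_true
x_hat = iht(Phi, y, s)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```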
Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms
- J. ACM, 1999
"... In this paper, we establish max-flow min-cut theorems for several important classes of multicommodity flow problems. In particular, we show that for any n-node multicommodity flow problem with uniform demands, the max-flow for the problem is within an O(log n) factor of the upper bound implied by ..."
Cited by 357 (6 self)
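Written out, the quantitative claim in that sentence is the following (f and c are symbols introduced here only for readability: the achievable max-flow and the upper bound implied by the minimum cut, respectively):

```latex
\[
  \frac{c}{O(\log n)} \;\le\; f \;\le\; c
  \qquad\text{for any $n$-node uniform-demand multicommodity flow instance,}
\]
```

i.e. the flow-cut gap in the uniform-demand case is at most O(log n).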
On the complexity of hardness amplification
- In Proceedings of the 20th Annual IEEE Conference on Computational Complexity, 2005
"... We study the task of transforming a hard function f, with which any small circuit disagrees on (1 − δ)/2 fraction of the input, into a harder function f ′ , with which any small circuit disagrees on (1 − δ k)/2 fraction of the input, for δ ∈ (0, 1) and k ∈ N. We show that this process can not be car ..."
Cited by 8 (1 self)
... high complexity. Furthermore, we show that even without any restriction on the complexity of the amplification procedure, such a black-box hardness amplification must be inherently non-uniform in the following sense. Given as an oracle any algorithm which agrees with f′ on a (1 − δ^k)/2 fraction ...
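In symbols, the transformation the abstract describes moves the hardness parameter from δ to δ^k. The k-wise XOR construction is mentioned below only as the textbook example of an amplification with these parameters, not necessarily the construction analysed in the paper:

```latex
% The amplification task in the abstract's notation: every small circuit C
% disagrees with f on at least a (1-\delta)/2 fraction, and every small
% circuit C' should disagree with f' on at least a (1-\delta^k)/2 fraction.
\[
  \Pr_x\bigl[C(x) \neq f(x)\bigr] \;\ge\; \tfrac{1-\delta}{2}
  \quad\Longrightarrow\quad
  \Pr_x\bigl[C'(x) \neq f'(x)\bigr] \;\ge\; \tfrac{1-\delta^{k}}{2}.
\]
% Textbook example (Yao's XOR lemma):
% f'(x_1,\dots,x_k) = f(x_1) \oplus \cdots \oplus f(x_k),
% whose correlation with small circuits decays roughly like \delta^{k}.
```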
Hardness Amplification for Errorless Heuristics
- 2007
"... An errorless heuristic is an algorithm that on all inputs returns either the correct answer or the special symbol ⊥, which means “I don’t know. ” A central question in average-case complexity is whether every distributional decision problem in NP has an errorless heuristic scheme: This is an algorit ..."
Cited by 3 (0 self)
... This is an algorithm that, for every δ > 0, runs in time polynomial in the instance size and 1/δ and answers ⊥ only on a δ fraction of instances. We study the question from the standpoint of hardness amplification and show that: • If every problem in (NP, U) has errorless heuristic circuits that output the correct ...
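Operationally, such a scheme is an algorithm that may refuse to answer but never answers incorrectly. The toy budgeted SAT search below illustrates only that interface; whether an algorithm answers ⊥ on at most a δ fraction of inputs is a statement about the input distribution (the U in (NP, U)), which is the part the paper's hardness-amplification results are actually about.

```python
from itertools import product
from typing import List, Optional

BOTTOM = None  # stands for the special symbol ⊥, "I don't know"

Clause = List[int]  # e.g. [1, -2, 3] means (x1 or not x2 or x3)

def errorless_sat(clauses: List[Clause], n_vars: int, budget: int) -> Optional[bool]:
    """Try at most `budget` assignments; whenever the answer is not ⊥, it is correct."""
    tried = 0
    for bits in product([False, True], repeat=n_vars):
        if tried >= budget:
            return BOTTOM                      # give up rather than guess
        tried += 1
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True                        # found a satisfying assignment
    return False                               # exhausted every assignment: unsatisfiable

# Example: (x1 or x2) and (not x1 or x2) with a generous budget.
print(errorless_sat([[1, 2], [-1, 2]], n_vars=2, budget=10))  # True
```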
The PCP theorem by gap amplification
- In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, 2006
"... The PCP theorem [3, 2] says that every language in NP has a witness format that can be checked probabilistically by reading only a constant number of bits from the proof. The celebrated equivalence of this theorem and inapproximability of certain optimization problems, due to [12], has placed the PC ..."
Cited by 166 (8 self)
... the PCP theorem at the heart of the area of inapproximability. In this work we present a new proof of the PCP theorem that draws on this equivalence. We give a combinatorial proof for the NP-hardness of approximating a certain constraint satisfaction problem, which can then be reinterpreted to yield ...
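For reference, the theorem being reproved is usually stated as follows (the bracketed citations in the snippet refer to the paper's own bibliography):

```latex
% PCP theorem: NP proofs can be verified with O(log n) random bits while
% reading only O(1) bits of the proof.
\[
  \mathsf{NP} \;=\; \mathsf{PCP}\bigl[O(\log n),\, O(1)\bigr].
\]
```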
Texts in Computational Complexity: Amplification of Hardness
- 2006
"... The existence of natural computational problems that are (or seem to be) infeasible to solve is usually perceived as bad news, because it means that we cannot do things we wish to do. But these bad news have a positive side, because hard problem can be "put to work " to our benefit ..."
... Much of the current chapter is devoted to this issue, which is known by the term hardness amplification. Summary: We consider two conjectures that are related to P ≠ NP. The first conjecture is that there are problems that are solvable in exponential time but are not solvable by (non-uniform ...
Proofs of retrievability via hardness amplification
- In TCC, 2009
"... Proofs of Retrievability (PoR), introduced by Juels and Kaliski [JK07], allow the client to store a file F on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client’s data. Constructions of PoR schemes attempt to minimize the cl ..."
Cited by 84 (4 self)
... of Shacham and Waters [SW08]. • Build the first bounded-use scheme with information-theoretic security. The main insight of our work comes from a simple connection between PoR schemes and the notion of hardness amplification, extensively studied in complexity theory. In particular, our improvements come from ...
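As a way to fix intuitions about what an audit protocol does, here is a generic spot-checking audit: the client tags each file block with an index-bound MAC and later challenges random positions. This is only an illustration of the client/server interface, not the scheme constructed in the paper; actual PoR constructions additionally add redundancy (e.g. erasure coding) so that passing audits implies the whole file is retrievable, and they optimise the client's storage and the audit bandwidth.

```python
import hashlib, hmac, os, random

BLOCK = 1024  # bytes per file block (arbitrary choice for this sketch)

def setup(key: bytes, data: bytes):
    """Client: split the file into blocks and tag each one with an index-bound MAC."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return blocks, tags            # both are handed to the (untrusted) server

def audit(key: bytes, n_blocks: int, server, n_challenges: int = 20) -> bool:
    """Client: challenge random block indices and verify the returned (block, tag) pairs."""
    for i in random.sample(range(n_blocks), min(n_challenges, n_blocks)):
        block, tag = server(i)     # server returns its stored block and tag
        expected = hmac.new(key, str(i).encode() + block, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False           # server failed to produce an authentic block
    return True

# Toy run with an honest in-memory "server".
key = os.urandom(32)
data = os.urandom(10 * BLOCK)
blocks, tags = setup(key, data)

def honest_server(i):
    return blocks[i], tags[i]

print(audit(key, len(blocks), honest_server))  # True
```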