Results 11 - 20 of 33
Hardness Amplification within NP against Deterministic Algorithms
- IEEE Conference on Computational Complexity
, 2008
Cited by 5 (0 self)
We study the average-case hardness of the class NP against algorithms in P. We prove that there exists some constant µ > 0 such that if there is some language L in NP for which no deterministic polynomial-time algorithm can decide L correctly on a 1 − (log n)^{−µ} fraction of inputs of length n, then there is a language L′ in NP for which no deterministic polynomial-time algorithm can decide L′ correctly on a 3/4 + (log n)^{−µ} fraction of inputs of length n. In coding-theoretic terms, we give a construction of a monotone code that can be uniquely decoded up to error rate 1/4 by a deterministic local decoder.
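As background on what "uniquely decoded up to error rate 1/4 by a local decoder" means, here is a textbook sketch for the Hadamard code (an illustration only; it is not the monotone code constructed in the paper, and this decoder is randomized rather than deterministic):

```python
import random

def inner(x, a, n):
    # Inner product mod 2 of bit-vector x with the low n bits of integer a.
    return sum(x[i] & ((a >> i) & 1) for i in range(n)) % 2

def hadamard_encode(x):
    # Codeword: the table of <x, a> mod 2 for every a in {0,1}^n.
    n = len(x)
    return [inner(x, a, n) for a in range(2 ** n)]

def local_decode_bit(table, n, i, trials=101):
    # Two queries per trial: table[a] xor table[a xor e_i] equals x_i when
    # both queried positions are uncorrupted.  At error rate rho each trial
    # errs with probability at most 2*rho, so majority voting over the
    # trials succeeds with high probability whenever rho < 1/4.
    votes = 0
    for _ in range(trials):
        a = random.randrange(2 ** n)
        votes += table[a] ^ table[a ^ (1 << i)]
    return int(2 * votes > trials)
```

For example, encoding x = [1, 0, 1, 1] and flipping a single entry of the 16-entry table still lets every bit of x be recovered with overwhelming probability, since the error rate 1/16 is well below 1/4.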
Locally Testing Direct Products in the Low Error Range
Cited by 5 (2 self)
Given a function f : X → Σ, its ℓ-wise direct product is the function F = f^ℓ : X^ℓ → Σ^ℓ defined by F(x1, ..., xℓ) = (f(x1), ..., f(xℓ)). We are interested in the local testability of the direct product encoding (the mapping f ↦ f^ℓ). Namely, given an arbitrary function F : X^ℓ → Σ^ℓ, we wish to determine how close it is to f^ℓ for some f : X → Σ by making two random queries into F. In this work we analyze the case of low acceptance probability of the test. We show that even if the test passes with only small probability ε > 0, F must already have non-trivial structure; in particular, it must agree with some f^ℓ on nearly an ε fraction of the domain. Moreover, we give a structural characterization of all functions F on which the test passes with probability ε. Our results can be viewed as a combinatorial analog of the low-error ‘low degree test’ used in PCP constructions.
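To make the object concrete, here is a small sketch of the direct product encoding together with a two-query consistency test of the general flavor discussed in the abstract (for simplicity the shared coordinates sit in the first positions; the test analyzed in the paper samples them differently, so this is an illustration, not the exact test):

```python
import random

def direct_product(f, tup):
    # F = f^ell: apply f coordinate-wise to an ell-tuple.
    return tuple(f(x) for x in tup)

def two_query_consistency_test(F, domain, ell, overlap):
    # Sample two ell-tuples sharing `overlap` coordinates (placed first,
    # for simplicity) and accept iff the two answers of F agree there.
    shared = [random.choice(domain) for _ in range(overlap)]
    t1 = tuple(shared + [random.choice(domain) for _ in range(ell - overlap)])
    t2 = tuple(shared + [random.choice(domain) for _ in range(ell - overlap)])
    return F(t1)[:overlap] == F(t2)[:overlap]
```

An honest F = f^ℓ passes such a test with probability 1; the abstract's question is what structure an arbitrary F must have when it passes with only small probability ε.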
Local list decoding with a constant number of queries
, 2010
Cited by 5 (0 self)
Recently Efremenko showed locally decodable codes of sub-exponential length. That result showed that these codes can handle up to a 1/3 fraction of errors. In this paper we show that the same codes can be locally unique-decoded from error rate 1/2 − α for any α > 0 and locally list-decoded from error rate 1 − α for any α > 0, with only a constant number of queries and a constant alphabet size. This gives the first sub-exponential codes that can be locally list-decoded with a constant number of queries.
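The two error-rate regimes can be illustrated with a toy repetition code (this is only an analogy for the thresholds 1/2 − α and 1 − α; Efremenko's codes are an entirely different construction):

```python
from collections import Counter

def unique_decode_repetition(word):
    # n-fold repetition of a single bit: majority vote decodes uniquely
    # whenever the error rate is below 1/2 (the "1/2 - alpha" regime).
    return int(2 * sum(word) > len(word))

def list_decode_repetition(word, alpha):
    # Over a larger alphabet, any symbol occupying at least an alpha
    # fraction of the positions is a candidate message (the "1 - alpha"
    # regime); the returned list has size at most 1/alpha.
    threshold = alpha * len(word)
    return sorted(s for s, c in Counter(word).items() if c >= threshold)
```

Below error rate 1/2 the correct bit is the unique majority; above it, only a short list of candidates can be guaranteed, which is exactly the unique- versus list-decoding distinction in the abstract.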
On Heuristic Time Hierarchies
, 2006
Cited by 5 (0 self)
We study the existence of time hierarchies for heuristic algorithms. We prove that a time hierarchy exists for heuristic algorithms in syntactic classes such as NP and coNP, and also in the semantic classes AM and MA. Earlier, Fortnow and Santhanam (FOCS’04) proved the existence of a time hierarchy for heuristic algorithms in BPP. We present an alternative approach and give a simpler proof.
New direct-product testers and 2-query PCPs
- In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing
, 2009
Cited by 4 (1 self)
The “direct product code” of a function f gives its values on all k-tuples: (f(x1), ..., f(xk)). This basic construct underlies “hardness amplification” in cryptography, circuit complexity and PCPs. Goldreich and Safra [GS00] pioneered its local testing and its PCP application. A recent result by Dinur and Goldenberg [DG08] enabled for the first time testing proximity to this important code in the “list-decoding” regime. In particular, they give a 2-query test which works for polynomially small success probability 1/k^α, and show that no such test works below success probability 1/k. Our main result is a 3-query test which works for exponentially small success probability exp(−k^α). Our techniques (based on recent simplified decoding algorithms for the same code [IJKW08]) also allow us to considerably simplify the analysis of the 2-query test of [DG08]. We then show how to derandomize their test, achieving a code of polynomial rate, independent of k, and success probability 1/k^α. Finally we show the applicability of the new tests to PCPs. Starting with a 2-query PCP over an alphabet Σ and with soundness error 1 − δ, Rao [Rao08] (building on Raz’s (k-fold) ...
Lower bounds on the query complexity of non-uniform and adaptive reductions showing hardness amplification
, 2012
Cited by 3 (1 self)
Hardness amplification results show that for every Boolean function f there exists a Boolean function Amp(f) such that the following holds: if every circuit of size s computes f correctly on at most a 1 − δ fraction of inputs, then every circuit of size s′ computes Amp(f) correctly on at most a 1/2 + ϵ fraction of inputs. All hardness amplification results in the literature suffer from “size loss”, meaning that s′ ≤ ϵ · s. In this paper we show that proofs using “non-uniform reductions” must suffer from such size loss. To the best of our knowledge, all proofs in the literature are by non-uniform reductions. Our result is the first lower bound that applies to non-uniform reductions that are adaptive. A reduction is an oracle circuit R^(·) such that when given oracle access to any function D that computes Amp(f) correctly on a 1/2 + ϵ fraction of inputs, R^D computes f correctly on a 1 − δ fraction of inputs. A non-uniform reduction is allowed to also receive a short advice string that may depend on both f and D in an arbitrary way. The well-known connection between hardness amplification and list-decodable error-correcting codes implies that reductions showing hardness amplification cannot be uniform for δ, ϵ < 1/4. A reduction is non-adaptive if it makes non-adaptive queries to its oracle. Shaltiel and Viola (SICOMP 2010) showed lower bounds on the number of queries made by non-uniform ...
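A classic instance of such a map Amp is the one in Yao's XOR lemma, Amp(f)(x1, ..., xk) = f(x1) ⊕ ... ⊕ f(xk). A minimal sketch of this map (an illustration of the kind of amplifier the abstract refers to, not the specific reductions the paper lower-bounds):

```python
from functools import reduce
from operator import xor

def xor_amplify(f, k):
    # Amp(f)(x_1, ..., x_k) = f(x_1) xor ... xor f(x_k).  Mild hardness of
    # f is amplified: to get the XOR right, a circuit must be wrong about
    # an even number of the k independent coordinates.
    def amp(xs):
        assert len(xs) == k
        return reduce(xor, (f(x) for x in xs), 0)
    return amp
```

The "size loss" phenomenon above says that any circuit-size guarantee proved for such an Amp(f) by a non-uniform reduction degrades from s to roughly ϵ · s.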
A note on amplifying the error-tolerance of locally decodable codes
- Electronic Colloquium on Computational Complexity
, 2010
Cited by 2 (0 self)
Trevisan [Tre03] suggested a transformation that amplifies the error rate a code can handle. We observe that this transformation, which was suggested in the non-local setting, works also in the local setting and thus gives a generic, simple way to amplify the error-tolerance of locally decodable codes. Specifically, this shows how to transform a locally decodable code that can tolerate a constant fraction of errors into a locally decodable code that can recover from a much higher error rate, and how to transform such locally decodable codes into locally list-decodable codes. The transformation of [Tre03] involves a simple composition with an approximately locally (list-)decodable code. Using a construction of such codes by Impagliazzo et al. [IJKW10], the transformation incurs only a negligible growth in the length of the code and in the query complexity.
Deterministic Hardness Amplification via Local GMD Decoding
- Electronic Colloquium on Computational Complexity, Report No. 89
, 2007
Cited by 1 (1 self)
We study the average-case hardness of the class NP against deterministic polynomial-time algorithms. We prove that there exists some constant µ > 0 such that if there is some language L in NP for which no deterministic polynomial-time algorithm can decide L correctly on a 1 − (log n)^{−µ} fraction of inputs of length n, then there is a language L′ in NP for which no deterministic polynomial-time algorithm can decide L′ correctly on a 3/4 + (log n)^{−µ} fraction of inputs of length n. In coding-theoretic terms, we give a construction of a monotone code that can be uniquely decoded up to error rate 1/4 by a deterministic local decoder.