Results 1–6 of 6
Approximate list-decoding of direct product . . .
"... Given a message msg ∈ {0, 1} N, its k-wise direct product encoding is the sequence of k-tuples (msg(i1),..., msg(ik)) over all possible k-tuples of indices (i1,..., ik) ∈ {1,..., N} k. We give an efficient randomized algorithm for approximate local list-decoding of direct product codes. That is, gi ..."
Abstract (Cited by 33, 8 self):
Given a message msg ∈ {0,1}^N, its k-wise direct product encoding is the sequence of k-tuples (msg(i1), ..., msg(ik)) over all possible k-tuples of indices (i1, ..., ik) ∈ {1, ..., N}^k. We give an efficient randomized algorithm for approximate local list-decoding of direct product codes. That is, given oracle access to a word which agrees with a k-wise direct product encoding of some message msg ∈ {0,1}^N in at least an ε ≥ poly(1/k) fraction of positions, our algorithm outputs a list of poly(1/ε) strings that contains at least one string msg′ which is equal to msg in all but at most a k^(−Ω(1)) fraction of positions. The decoding is local in that our algorithm outputs a list of Boolean circuits so that the jth bit of the ith output string can be computed by running the ith circuit on input j. The running time of the algorithm is polynomial in log N and 1/ε. In general, when ε > e^(−k^α) for a sufficiently small constant α > 0, we get a randomized approximate list-decoding algorithm that runs in time quasi-polynomial in 1/ε, i.e., (1/ε)^(poly log(1/ε)). As an application of our decoding algorithm, we get uniform hardness amplification for P^(NP‖), the class of languages reducible to NP through one round of parallel oracle queries: if there is a language in P^(NP‖) that cannot be decided by any BPP algorithm on more than a 1 − 1/n^(Ω(1)) fraction of inputs, then there is another language in P^(NP‖) that cannot be decided by any BPP algorithm on more than a 1/2 + 1/n^(ω(1)) fraction of inputs.
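To make the encoding concrete, here is a minimal Python sketch of the k-wise direct product code as defined above. The function names are illustrative; note that the sketch materializes all N^k positions, whereas the paper's decoder is local and only ever queries the received word at a few positions.

```python
# Minimal sketch of the k-wise direct product encoding (illustrative names).
from itertools import product

def direct_product_encode(msg, k):
    """Map each k-tuple of indices (i1, ..., ik) to the k-tuple of
    message bits (msg[i1], ..., msg[ik]). Produces all N^k positions,
    so this is only feasible for tiny N and k."""
    n = len(msg)
    return {idx: tuple(msg[i] for i in idx) for idx in product(range(n), repeat=k)}

def agreement(word, codeword):
    """Fraction of positions on which a (possibly corrupted) word agrees
    with a codeword -- the quantity the list-decoder's epsilon refers to."""
    return sum(word[p] == codeword[p] for p in codeword) / len(codeword)

msg = [1, 0, 1, 1]
enc = direct_product_encode(msg, k=2)
assert enc[(0, 2)] == (1, 1)     # (msg[0], msg[2])
assert agreement(enc, enc) == 1.0
```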
Noise-resilient group testing: Limitations and constructions
In Proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT), 2009
"... We study combinatorial group testing schemes for learning d-sparse boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately ..."
Abstract (Cited by 7, 2 self):
We study combinatorial group testing schemes for learning d-sparse Boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we take this barrier to our advantage and show that approximate reconstruction (within a satisfactory degree of approximation) allows us to break the information-theoretic lower bound of Ω̃(d^2 log n) that is known for exact reconstruction of d-sparse vectors of length n via non-adaptive measurements, by a multiplicative factor of Ω̃(d). Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with m = O(d log n) measurements, that allow efficient reconstruction of d-sparse vectors up to O(d) false positives even in the presence of δm false positives and O(m/d) false negatives within the measurement outcomes, for any constant δ < 1. We show that, information-theoretically, none of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit constructions, in particular one matching the randomized trade-off but using m = O(d^(1+o(1)) log n) measurements. We also obtain explicit constructions that allow fast reconstruction in time poly(m), which would be sublinear in n for sufficiently sparse vectors. The main tool used in our constructions is the list-decoding view of randomness condensers and extractors. An immediate consequence of our result is an adaptive scheme that runs in only two non-adaptive rounds and exactly reconstructs any d-sparse vector using a total of O(d log n) measurements, a task that would be impossible in one round and fairly easy in O(log(n/d)) rounds.
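As a rough illustration of the measurement model, the following Python sketch implements a random disjunctive design and a naive threshold decoder that tolerates some flipped outcomes and recovers the support up to a few false positives. The design (inclusion probability 1/d) and the threshold rule are standard textbook choices used here for illustration, not the paper's explicit constructions.

```python
# Illustrative sketch of noisy non-adaptive group testing with OR measurements.
import random

def random_design(n, m, d, seed=0):
    """m random tests; each of the n items joins each test independently
    with probability 1/d (a standard choice for d-sparse inputs)."""
    rng = random.Random(seed)
    return [{j for j in range(n) if rng.random() < 1.0 / d} for _ in range(m)]

def measure(tests, support):
    """Noiseless disjunctive outcomes: a test fires iff it hits the support."""
    return [int(bool(t & support)) for t in tests]

def threshold_decode(n, tests, outcomes, tau):
    """Keep item j unless more than a tau fraction of the tests containing j
    came out 0; slack in tau buys tolerance to flipped outcomes, at the cost
    of a few false positives (approximate reconstruction, as in the abstract)."""
    kept = set()
    for j in range(n):
        mine = [i for i, t in enumerate(tests) if j in t]
        zeros = sum(outcomes[i] == 0 for i in mine)
        if mine and zeros <= tau * len(mine):
            kept.add(j)
    return kept

n, d, m = 200, 2, 60
tests = random_design(n, m, d)
outcomes = measure(tests, {3, 17})
outcomes[0] ^= 1                      # one adversarially flipped outcome
found = threshold_decode(n, tests, outcomes, tau=0.2)
# 'found' should contain 3 and 17, possibly with a few false positives.
```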
Hardness Amplification within NP against Deterministic Algorithms
In IEEE Conference on Computational Complexity, 2008
"... We study the average-case hardness of the class NP against algorithms in P. We prove that there exists some constant µ> 0 such that if there is some language in NP for which no deterministic polynomial time algorithm can decide L correctly on a 1 − (log n) −µ fraction of inputs of length n, then ..."
Abstract (Cited by 5, 0 self):
We study the average-case hardness of the class NP against algorithms in P. We prove that there exists some constant µ > 0 such that if there is some language L in NP for which no deterministic polynomial-time algorithm can decide L correctly on a 1 − (log n)^(−µ) fraction of inputs of length n, then there is a language L′ in NP for which no deterministic polynomial-time algorithm can decide L′ correctly on a 3/4 + (log n)^(−µ) fraction of inputs of length n. In coding-theoretic terms, we give a construction of a monotone code that can be uniquely decoded up to error rate 1/4 by a deterministic local decoder.
Query Complexity in Errorless Hardness Amplification
2010
"... An errorless circuit for a boolean function is one that outputs the correct answer or “don’t know ” on each input (and never outputs the wrong answer). The goal of errorless hardness amplification is to show that if f has no size s errorless circuit that outputs “don’t know ” on at most a δ fraction ..."
Abstract (Cited by 2, 1 self):
An errorless circuit for a Boolean function is one that outputs the correct answer or “don’t know” on each input (and never outputs the wrong answer). The goal of errorless hardness amplification is to show that if f has no size-s errorless circuit that outputs “don’t know” on at most a δ fraction of inputs, then some f′ related to f has no size-s′ errorless circuit that outputs “don’t know” on at most a 1 − ε fraction of inputs. Thus the hardness is “amplified” from δ to 1 − ε. Unfortunately, this amplification comes at the cost of a loss in circuit size. This is because such results are proven by reductions which show that any size-s′ errorless circuit for f′ that outputs “don’t know” on at most a 1 − ε fraction of inputs could be used to construct a size-s errorless circuit for f that outputs “don’t know” on at most a δ fraction of inputs. If the reduction makes q queries to the hypothesized errorless circuit for f′, then plugging in a size-s′ circuit yields a circuit of size ≥ qs′, and thus we must have s′ ≤ s/q. Hence it is desirable to keep the query complexity to a minimum. The first results on errorless hardness amplification were obtained by Bogdanov and Safra. They achieved query complexity O((1/δ · log(1/ε))^2 · 1/ε) ...
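The following Python sketch is a minimal illustration of the errorless notion and the size accounting in the paragraph above. The XOR construction shown is one standard choice of f′ considered in this line of work; the reduction itself, which is the paper's subject, is not reproduced here, and all names are illustrative.

```python
# Illustrative sketch: errorless functions return 0, 1, or None ("don't know").
from typing import Callable, Optional

Errorless = Callable[[int], Optional[int]]

def xor_power(f: Errorless, k: int):
    """f'(x1, ..., xk) = f(x1) XOR ... XOR f(xk). The naive errorless
    evaluator must answer "don't know" whenever any coordinate does,
    which is how "don't know" rates grow from delta toward 1 - epsilon."""
    def f_prime(xs):
        acc = 0
        for x in xs:
            v = f(x)
            if v is None:
                return None          # one unknown coordinate spoils the XOR
            acc ^= v
        return acc
    return f_prime

def size_bound(s, q):
    """The accounting above: a q-query reduction turns a size-s' circuit for
    f' into a circuit of size >= q*s' for f, so one can only rule out
    errorless circuits for f' of size s' <= s // q."""
    return s // q

f = lambda x: None if x % 5 == 0 else x % 2   # toy errorless oracle
f3 = xor_power(f, 3)
assert f3((1, 2, 3)) == (1 ^ 0 ^ 1)           # == 0
assert f3((1, 2, 5)) is None                  # coordinate 5 is "don't know"
```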
Deterministic Hardness Amplification via Local GMD Decoding
Electronic Colloquium on Computational Complexity, Report No. 89, 2007
"... We study the average-case hardness of the class NP against deterministic polynomial time algorithms. We prove that there exists some constant µ> 0 such that if there is some language in NP for which no deterministic polynomial time algorithm can decide L correctly on a 1 − (log n) −µ fraction of ..."
Abstract (Cited by 1, 1 self):
We study the average-case hardness of the class NP against deterministic polynomial-time algorithms. We prove that there exists some constant µ > 0 such that if there is some language L in NP for which no deterministic polynomial-time algorithm can decide L correctly on a 1 − (log n)^(−µ) fraction of inputs of length n, then there is a language L′ in NP for which no deterministic polynomial-time algorithm can decide L′ correctly on a 3/4 + (log n)^(−µ) fraction of inputs of length n. In coding-theoretic terms, we give a construction of a monotone code that can be uniquely decoded up to error rate 1/4 by a deterministic local decoder.
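Since the title's mechanism is GMD decoding, here is a toy Python sketch of the generalized minimum distance idea: inner decoders report a confidence along with each symbol, and the outer decoder is re-run with progressively more of the least-confident symbols erased. Both levels use repetition codes purely for illustration; the paper's deterministic local GMD decoder for monotone codes is substantially more involved.

```python
# Toy sketch of GMD decoding with repetition codes at both levels.

def inner_decode(block):
    """3-bit repetition: majority vote plus a confidence margin
    (3 if the votes are unanimous, 1 on a 2-1 split)."""
    ones = sum(block)
    return int(ones >= 2), abs(2 * ones - 3)

def outer_decode(symbols):
    """Outer repetition with erasures: majority over the unerased symbols."""
    votes = [b for b in symbols if b is not None]
    return int(2 * sum(votes) >= len(votes)) if votes else None

def gmd_decode(received, block_len=3):
    """Try erasing 0, 2, 4, ... of the least-confident inner symbols and
    outer-decode each time; a full GMD decoder would then keep the candidate
    closest to the received word."""
    blocks = [received[i:i + block_len] for i in range(0, len(received), block_len)]
    decoded = [inner_decode(b) for b in blocks]
    order = sorted(range(len(decoded)), key=lambda i: decoded[i][1])
    candidates = []
    for erasures in range(0, len(decoded) + 1, 2):
        erased = set(order[:erasures])
        symbols = [None if i in erased else decoded[i][0] for i in range(len(decoded))]
        candidates.append(outer_decode(symbols))
    return candidates

# One unanimous block, two 2-1 splits, one unanimous wrong block:
print(gmd_decode([1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0]))   # e.g. [1, 1, None]
```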
Deterministic hardness amplification via local . . .
"... We study the average-case hardness of the class NP against deterministic polynomial time algorithms. We prove that there exists some constant µ> 0 such that if there is some language in NP for which no deterministic polynomial time algorithm can decide L correctly on a 1 − (log n) −µ fraction of ..."
Abstract:
We study the average-case hardness of the class NP against deterministic polynomial-time algorithms. We prove that there exists some constant µ > 0 such that if there is some language L in NP for which no deterministic polynomial-time algorithm can decide L correctly on a 1 − (log n)^(−µ) fraction of inputs of length n, then there is a language L′ in NP for which no deterministic polynomial-time algorithm can decide L′ correctly on a 3/4 + (log n)^(−µ) fraction of inputs of length n. In coding-theoretic terms, we give a construction of a monotone code that can be uniquely decoded up to error rate 1/4 by a deterministic local decoder.