Results 21 - 30 of 33
Advice Lower Bounds for the Dense Model Theorem, 2011
"... We prove a lower bound on the amount of nonuniform advice needed by black-box reductions for the Dense Model Theorem of Green, Tao, and Ziegler, and of Reingold, Trevisan, Tulsiani, and Vadhan. The latter theorem roughly says that for every distribution D that is δ-dense in a distribution that is ǫ ..."
Abstract - Cited by 1 (1 self)
We prove a lower bound on the amount of nonuniform advice needed by black-box reductions for the Dense Model Theorem of Green, Tao, and Ziegler, and of Reingold, Trevisan, Tulsiani, and Vadhan. The latter theorem roughly says that for every distribution D that is δ-dense in a distribution that is ε′-indistinguishable from uniform, there exists a “dense model” for D, that is, a distribution that is δ-dense in the uniform distribution and is ε-indistinguishable from D. This ε-indistinguishability is with respect to an arbitrary small class of functions F. For the very natural case where ε′ ≥ Ω(εδ) and ε ≥ δ^O(1), our lower bound implies that Ω(√((1/ε) log(1/δ)) · log |F|) advice bits are necessary. There is only a polynomial gap between our lower bound and the best upper bound for this case (due to Zhang), which is O((1/ε²) log(1/δ) · log |F|). Our lower bound can be viewed as an analog of list-size lower bounds for list-decoding of error-correcting codes, but for “dense model decoding” instead.
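The prose statement above can be paraphrased in display math. This is a sketch of the theorem as the abstract describes it; the names R (the ambient distribution) and U (the uniform distribution) are introduced here for readability and do not appear in the abstract:

```latex
% If D is \delta-dense in R, and R is \epsilon'-indistinguishable
% from uniform with respect to the class F, then D has a dense model M:
\exists\, M:\quad M \text{ is } \delta\text{-dense in } U,
\quad\text{and}\quad
\bigl|\Pr_{x \sim D}[f(x)=1] - \Pr_{x \sim M}[f(x)=1]\bigr| \le \epsilon
\quad \text{for all } f \in F.
```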
Near-optimal extractors against quantum storage - ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, REPORT NO. 133 (2009)
"... We give near-optimal constructions of extractors secure against quantum bounded-storage adversaries. One instantiation gives the first such extractor to achieve an output length Θ(K − b), where K is the source’s entropy and b the adversary’s storage, depending linearly on the adversary’s amount of s ..."
Abstract - Cited by 1 (0 self)
We give near-optimal constructions of extractors secure against quantum bounded-storage adversaries. One instantiation gives the first such extractor to achieve an output length Θ(K − b), where K is the source’s entropy and b the adversary’s storage, depending linearly on the adversary’s amount of storage, together with a poly-logarithmic seed length. Another instantiation achieves a logarithmic key length, with a slightly smaller output length Θ((K − b)/K^γ) for any γ > 0. In contrast, the previous best construction [Ts09] could only extract (K/b)^(1/15) bits. Our construction follows Trevisan’s general reconstruction paradigm [Tre01], and in fact our proof of security shows that essentially all extractors constructed using this paradigm are secure against quantum storage, with optimal parameters. Our argument is based on bounds for a generalization of quantum random access codes, which we call quantum functional access codes. This is crucial as it lets us avoid the local list-decoding algorithm central to the approach in [Ts09], which was the source of the multiplicative overhead. Some of our constructions have the additional advantage that every bit of the output is a function of only a polylogarithmic number of bits from the source, which is crucial for some cryptographic applications.
Deterministic hardness amplification via local . . .
"... We study the average-case hardness of the class NP against deterministic polynomial time algorithms. We prove that there exists some constant µ> 0 such that if there is some language in NP for which no deterministic polynomial time algorithm can decide L correctly on a 1 − (log n) −µ fraction of ..."
Abstract
We study the average-case hardness of the class NP against deterministic polynomial time algorithms. We prove that there exists some constant µ > 0 such that if there is some language L in NP for which no deterministic polynomial time algorithm can decide L correctly on a 1 − (log n)^(−µ) fraction of inputs of length n, then there is a language L′ in NP for which no deterministic polynomial time algorithm can decide L′ correctly on a 3/4 + (log n)^(−µ) fraction of inputs of length n. In coding-theoretic terms, we give a construction of a monotone code that can be uniquely decoded up to error rate 1/4 by a deterministic local decoder.
Locally Testing Direct Products in the High Error, 2008
"... Given a function f: X → Σ, its ℓ-wise direct product is the function F = f ℓ: X ℓ → Σ ℓ defined by F (x1,..., xℓ) = (f(x1),..., f(xℓ)). In this paper we study the local testability of the direct product encoding (mapping f ↦ → f ℓ). Namely, given an arbitrary function F: X ℓ → Σ ℓ, we wish to deter ..."
Abstract
Given a function f: X → Σ, its ℓ-wise direct product is the function F = f^ℓ: X^ℓ → Σ^ℓ defined by F(x1, ..., xℓ) = (f(x1), ..., f(xℓ)). In this paper we study the local testability of the direct product encoding (the mapping f ↦ f^ℓ). Namely, given an arbitrary function F: X^ℓ → Σ^ℓ, we wish to determine how close it is to f^ℓ for some f: X → Σ, by making the smallest possible number of random queries into F (namely, two). This question was first studied by Goldreich and Safra, and later the following simple two-query test was studied by Dinur and Reingold: choose a random pair x, x′ ∈ X^ℓ that have m coordinates in common, and accept iff F(x) and F(x′) agree on the common coordinates. Dinur and Reingold showed that if the test accepts with sufficiently high probability (close to 1) then F is close to f^ℓ for some f. In this work we analyze the case of low acceptance probability of the test. We show that even if the test passes with small probability ε > 0, F must already have a non-trivial structure, and in particular must agree with some f^ℓ on nearly an ε fraction of the domain. Moreover, we give a structural characterization of all functions F on which the test passes with probability ε. We find a list of functions f1, ..., ft such that essentially the only way the test will accept on a pair x, x′ is if both F(x) and F(x′) agree with some fi. This is related to approximate local decoding of this code, as studied by Impagliazzo et al. Our result means that both the testing and the approximate local decoding can be done in “one shot” with the minimal possible number (only two) of queries. Our results hold for values of ε as small as ℓ^(−Ω(1)), and we show that below 1/ℓ no characterization is possible.
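The two-query test described above is concrete enough to simulate. The following is a minimal sketch, not the authors' implementation: it builds a direct product encoding and estimates the acceptance probability of the Dinur-Reingold-style test empirically (the function names are invented here for illustration).

```python
import random

def direct_product(f, xs):
    """The l-wise direct product encoding: F(x1,...,xl) = (f(x1),...,f(xl))."""
    return tuple(f(x) for x in xs)

def two_query_test(F, X, l, m, trials=1000):
    """Estimate the acceptance probability of the two-query test:
    pick random x, x' in X^l sharing m coordinates, and accept iff
    F(x) and F(x') agree on those shared coordinates."""
    accepted = 0
    for _ in range(trials):
        shared_positions = random.sample(range(l), m)
        x = [random.choice(X) for _ in range(l)]
        xp = [random.choice(X) for _ in range(l)]
        for pos in shared_positions:
            xp[pos] = x[pos]  # force agreement on the m common coordinates
        Fx, Fxp = F(tuple(x)), F(tuple(xp))
        if all(Fx[pos] == Fxp[pos] for pos in shared_positions):
            accepted += 1
    return accepted / trials
```

For a genuine direct product F = f^ℓ the test accepts with probability 1; the interesting regime studied in the abstract is arbitrary F where the empirical acceptance rate is a small ε.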
The Computational Complexity of Randomness, 2013
"... This dissertation explores the multifaceted interplay between efficient computation andprobability distributions. We organize the aspects of this interplay according to whether the randomness occurs primarily at the level of the problem or the level of the algorithm, and orthogonally according to wh ..."
Abstract
This dissertation explores the multifaceted interplay between efficient computation and probability distributions. We organize the aspects of this interplay according to whether the randomness occurs primarily at the level of the problem or the level of the algorithm, and orthogonally according to whether the output is random or the input is random. Part I concerns settings where the problem’s output is random. A sampling problem associates to each input x a probability distribution D(x), and the goal is to output a sample from D(x) (or at least get statistically close) when given x. Although sampling algorithms are fundamental tools in statistical physics, combinatorial optimization, and cryptography, and algorithms for a wide variety of sampling problems have been discovered, there has been comparatively little research viewing sampling through the lens of computational complexity. We contribute to the understanding of the power and limitations of efficient sampling by proving a time hierarchy theorem which shows, roughly, that “a little more time gives a lot more power to sampling algorithms.” Part II concerns settings where the algorithm’s output is random. Even when the specification of a computational problem involves no randomness, one can still consider randomized ...
Verifying and Decoding in Constant Depth - Shafi Goldwasser, CSAIL, MIT, and ...
"... Another, less immediate sender-receiver setting arises in considering error correcting codes. By taking the sender to be a (potentially corrupted) codeword and the receiver to be a decoder, we obtain explicit families of codes that are locally (list-)decodable by constant-depth circuits of size poly ..."
Abstract
Another, less immediate sender-receiver setting arises in considering error correcting codes. By taking the sender to be a (potentially corrupted) codeword and the receiver to be a decoder, we obtain explicit families of codes that are locally (list-)decodable by constant-depth circuits of size polylogarithmic in the length of the codeword. Using the tight connection between locally list-decodable codes and average-case complexity, we obtain a new, more efficient, worst-case to average-case reduction for languages in EXP.
Clustering in the Boolean Hypercube in a List Decoding Regime, 2013
"... We consider the following clustering with outliers problem: Given a set of points X ⊂ {−1, 1}n, such that there is some point z ∈ {−1, 1}n for which Prx∈X [〈x, z 〉 ≥ ε] ≥ δ, find z. We call such a point z a (δ, ε)-center of X. In this work we give lower and upper bounds for the task of finding a ( ..."
Abstract
We consider the following clustering with outliers problem: given a set of points X ⊂ {−1, 1}^n such that there is some point z ∈ {−1, 1}^n for which Pr_{x∈X}[⟨x, z⟩ ≥ ε] ≥ δ, find z. We call such a point z a (δ, ε)-center of X. In this work we give lower and upper bounds for the task of finding a (δ, ε)-center. We first show that for δ = 1 − ν close to 1, i.e. in the unique decoding regime, given a (1 − ν, ε)-centered set our algorithm can find a (1 − (1 + o(1))ν, (1 − o(1))ε)-center. More interestingly, we study the list decoding regime, i.e. when δ is close to 0. Our main upper bound shows that for values of ε and δ that are larger than 1/polylog(n), there exists a polynomial time algorithm that finds a (δ − o(1), ε − o(1))-center. Moreover, our algorithm outputs a list of centers explaining all of the clusters in the input. Our main lower bound shows that given a set for which there exists a (δ, ε)-center, it is hard to find even a (δ/n^c, ε)-center for some constant c, where ε = 1/poly(n) and δ = 1/poly(n).
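The (δ, ε)-center condition itself is easy to verify directly, which makes the problem definition concrete. The sketch below checks whether a candidate z is a (δ, ε)-center of a point set; it assumes the abstract's threshold ⟨x, z⟩ ≥ ε refers to the normalized correlation ⟨x, z⟩/n (a scaling assumption, since the abstract leaves it implicit), and the function names are invented for illustration.

```python
from typing import List

def inner(x: List[int], z: List[int]) -> int:
    """Inner product of two +/-1 vectors of equal length."""
    return sum(a * b for a, b in zip(x, z))

def is_center(X: List[List[int]], z: List[int], delta: float, eps: float) -> bool:
    """Check whether z is a (delta, eps)-center of X: at least a delta
    fraction of points x in X satisfy <x, z>/n >= eps (normalized
    correlation, an assumed reading of the abstract's threshold)."""
    n = len(z)
    hits = sum(1 for x in X if inner(x, z) >= eps * n)
    return hits / len(X) >= delta
```

Verification is trivial; the hardness the abstract establishes lies in *finding* such a z, especially in the list decoding regime where only a δ fraction of points correlate with it.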