The Complexity of Local List Decoding

by Dan Gutfreund, Guy N. Rothblum
Results 1 - 4 of 4

Delegating Computation Reliably: Paradigms and Constructions

by Guy N. Rothblum, 2009
"... In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for we ..."
Abstract - Cited by 7 (1 self) - Add to MetaCart
In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises an immediate concern: when and how can a weak device verify that a computational task was completed correctly? This practically motivated question touches on foundational questions in cryptography and complexity theory. The focus of this thesis is verifying the correctness of delegated computations. We construct efficient protocols (interactive proofs) for delegating computational tasks. In particular, we present:
• A protocol for delegating any computation, where the work needed to verify the correctness of the output is linear in the input length, polynomial in the computation's …
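The verification question this abstract raises has a classic small-scale illustration: Freivalds' algorithm lets a client check a delegated n×n matrix product in O(n^2) time per trial, far below the O(n^3) cost of redoing the multiplication. The sketch below is only an illustrative instance of verifying delegated work; it is not the general-purpose protocol constructed in the thesis.

    import random

    def freivalds_verify(A, B, C, trials=20):
        """Check the server's claim that C == A @ B without recomputing
        the product. Each trial costs O(n^2); a wrong C survives all
        trials with probability at most 2**(-trials)."""
        n = len(A)
        for _ in range(trials):
            r = [random.randint(0, 1) for _ in range(n)]  # random 0/1 vector
            Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
            ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
            Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
            if ABr != Cr:
                return False  # caught the server: C != A @ B
        return True           # accept: C is almost certainly correct

The asymmetry between the server's cubic work and the client's quadratic check is exactly the kind of gap that the thesis's interactive proofs generalize to arbitrary computations.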

Lower bounds on the query complexity of non-uniform and adaptive reductions showing hardness amplification

by Sergei Artemenko, Ronen Shaltiel, 2012
"... Hardness amplification results show that for every Boolean function f there exists a Boolean function Amp(f) such that the following holds: if every circuit of size s computes f correctly on at most a 1 − δ fraction of inputs, then every circuit of size s ′ computes Amp(f) correctly on at most a 1/2 ..."
Abstract - Cited by 3 (1 self) - Add to MetaCart
Hardness amplification results show that for every Boolean function f there exists a Boolean function Amp(f) such that the following holds: if every circuit of size s computes f correctly on at most a 1 − δ fraction of inputs, then every circuit of size s′ computes Amp(f) correctly on at most a 1/2 + ϵ fraction of inputs. All hardness amplification results in the literature suffer from “size loss”, meaning that s′ ≤ ϵ · s. In this paper we show that proofs using “non-uniform reductions” must suffer from such size loss. To the best of our knowledge, all proofs in the literature are by non-uniform reductions. Our result is the first lower bound that applies to non-uniform reductions that are adaptive. A reduction is an oracle circuit R^(·) such that when given oracle access to any function D that computes Amp(f) correctly on a 1/2 + ϵ fraction of inputs, R^D computes f correctly on a 1 − δ fraction of inputs. A non-uniform reduction is allowed to also receive a short advice string that may depend on both f and D in an arbitrary way. The well-known connection between hardness amplification and list-decodable error-correcting codes implies that reductions showing hardness amplification cannot be uniform for δ, ϵ < 1/4. A reduction is non-adaptive if it makes non-adaptive queries to its oracle. Shaltiel and Viola (SICOMP 2010) showed lower bounds on the number of queries made by nonuniform …
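For concreteness, the standard instantiation of Amp is Yao's XOR lemma, which defines Amp(f) on k independent inputs by XORing the k values of f. The paper's lower bound concerns the reductions used to prove statements of this shape, not the construction itself; the following is a minimal sketch of the construction only.

    from functools import reduce
    from operator import xor

    def amp_xor(f, k):
        """XOR-lemma amplifier: Amp(f)(x_1, ..., x_k) = f(x_1) ^ ... ^ f(x_k).
        Intuition: if f is mildly hard (every size-s circuit errs on a delta
        fraction of inputs), the XOR of k independent copies is strongly hard
        (error close to 1/2) -- but known proofs lose circuit size."""
        def amplified(xs):
            assert len(xs) == k
            return reduce(xor, (f(x) for x in xs))
        return amplified

    # Example with f = parity of a 3-bit string and k = 4:
    f = lambda x: x.count("1") % 2
    Amp_f = amp_xor(f, 4)
    print(Amp_f(["100", "110", "011", "001"]))  # 1 ^ 0 ^ 0 ^ 1 = 0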

Advice Lower Bounds for the Dense Model Theorem

by Thomas Watson, 2011
"... We prove a lower bound on the amount of nonuniform advice needed by black-box reductions for the Dense Model Theorem of Green, Tao, and Ziegler, and of Reingold, Trevisan, Tulsiani, and Vadhan. The latter theorem roughly says that for every distribution D that is δ-dense in a distribution that is ǫ ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
We prove a lower bound on the amount of nonuniform advice needed by black-box reductions for the Dense Model Theorem of Green, Tao, and Ziegler, and of Reingold, Trevisan, Tulsiani, and Vadhan. The latter theorem roughly says that for every distribution D that is δ-dense in a distribution that is ϵ′-indistinguishable from uniform, there exists a “dense model” for D, that is, a distribution that is δ-dense in the uniform distribution and is ϵ-indistinguishable from D. This ϵ-indistinguishability is with respect to an arbitrary small class of functions F. For the very natural case where ϵ′ ≥ Ω(ϵδ) and ϵ ≥ δ^{O(1)}, our lower bound implies that Ω(√((1/ϵ) log(1/δ)) · log|F|) advice bits are necessary. There is only a polynomial gap between our lower bound and the best upper bound for this case (due to Zhang), which is O((1/ϵ²) log(1/δ) · log|F|). Our lower bound can be viewed as an analog of list-size lower bounds for list-decoding of error-correcting codes, but for “dense model decoding” instead.
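Spelled out with the standard definitions of density and indistinguishability (the exact quantitative relation between ϵ, ϵ′, F, and the class against which D′ must be indistinguishable is in the paper), the theorem reads roughly:

    % D is \delta-dense in D' means \Pr_D[x] \le (1/\delta)\,\Pr_{D'}[x] for all x.
    % A \approx_{\epsilon,F} B means |\mathbb{E}_{x \sim A} f(x) - \mathbb{E}_{x \sim B} f(x)| \le \epsilon
    % for every f \in F.
    \[
    \Big( D \text{ is } \delta\text{-dense in } D' \ \wedge\ D' \approx_{\epsilon'} U_n \Big)
    \ \Longrightarrow\
    \exists\, M:\ M \text{ is } \delta\text{-dense in } U_n \ \wedge\ M \approx_{\epsilon, F} D .
    \]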

The Computational Complexity of Randomness

by Thomas Weir Watson, 2013
"... This dissertation explores the multifaceted interplay between efficient computation andprobability distributions. We organize the aspects of this interplay according to whether the randomness occurs primarily at the level of the problem or the level of the algorithm, and orthogonally according to wh ..."
Abstract - Add to MetaCart
This dissertation explores the multifaceted interplay between efficient computation and probability distributions. We organize the aspects of this interplay according to whether the randomness occurs primarily at the level of the problem or the level of the algorithm, and orthogonally according to whether the output is random or the input is random. Part I concerns settings where the problem’s output is random. A sampling problem associates to each input x a probability distribution D(x), and the goal is to output a sample from D(x) (or at least get statistically close) when given x. Although sampling algorithms are fundamental tools in statistical physics, combinatorial optimization, and cryptography, and algorithms for a wide variety of sampling problems have been discovered, there has been comparatively little research viewing sampling through the lens of computational complexity. We contribute to the understanding of the power and limitations of efficient sampling by proving a time hierarchy theorem which shows, roughly, that “a little more time gives a lot more power to sampling algorithms.” Part II concerns settings where the algorithm’s output is random. Even when the specification of a computational problem involves no randomness, one can still consider randomized …
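“Statistically close” here is measured by total variation (statistical) distance. As a hedged sketch of the success criterion a sampling algorithm is held to (the names sampler and D below are hypothetical, not from the dissertation):

    from collections import Counter

    def statistical_distance(p, q):
        """Total variation distance between two distributions given as
        dicts mapping outcomes to probabilities:
        TV(P, Q) = (1/2) * sum over y of |P(y) - Q(y)|."""
        support = set(p) | set(q)
        return 0.5 * sum(abs(p.get(y, 0.0) - q.get(y, 0.0)) for y in support)

    def empirical(sampler, x, trials=100000):
        """Estimate a randomized sampler's output distribution on input x."""
        counts = Counter(sampler(x) for _ in range(trials))
        return {y: c / trials for y, c in counts.items()}

    # A sampler "solves" the sampling problem D on input x if
    # statistical_distance(empirical(sampler, x), D(x)) is small.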