Results 1–10 of 15
Delegating computation: interactive proofs for muggles
In Proceedings of the ACM Symposium on the Theory of Computing (STOC), 2008
Abstract

Cited by 113 (6 self)
In this work we study interactive proofs for tractable languages. The (honest) prover should be efficient and run in polynomial time, or in other words a “muggle”. The verifier should be super-efficient and run in nearly-linear time. These proof systems can be used for delegating computation: a server can run a computation for a client and interactively prove the correctness of the result. The client can verify the result’s correctness in nearly-linear time (instead of running the entire computation itself). Previously, related questions were considered in the Holographic Proof setting by Babai, Fortnow, Levin and Szegedy, in the argument setting under computational assumptions by Kilian, and in the random oracle model by Micali. Our focus, however, is on the original interactive proof model where no assumptions are made on the computational power or adaptiveness of dishonest provers. Our main technical theorem gives a public-coin interactive proof for any language computable by a log-space uniform boolean circuit with depth d and input length n. The verifier runs in time (n + d) · polylog(n) and space O(log(n)), the communication complexity is d · polylog(n), and the prover runs in time poly(n). In particular, for languages computable by log-space uniform NC (circuits of polylog(n) depth), the prover is efficient, the verifier runs in time n · polylog(n) and space O(log(n)), and the communication complexity is polylog(n).
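A central ingredient in public-coin interactive proofs of this kind is the sum-check protocol, in which a prover convinces a verifier of the value of a sum of a low-degree polynomial over the boolean cube, one variable per round. The sketch below is a toy illustration of that round structure only (the actual GKR construction is far more involved, and the honest prover here is deliberately brute-force); all function names are illustrative.

```python
import itertools
import random

P = 2**31 - 1  # a prime modulus for the toy field

def mle(table, x, n):
    """Multilinear extension of a truth table, evaluated at a point x in F^n."""
    total = 0
    for b in itertools.product((0, 1), repeat=n):
        weight = 1
        for xi, bi in zip(x, b):
            weight = weight * ((xi * bi + (1 - xi) * (1 - bi)) % P) % P
        total = (total + table[b] * weight) % P
    return total

def sumcheck(table, n, rng=random.Random(0)):
    """Toy sum-check: prover convinces verifier of the sum of table over {0,1}^n.

    The prover work here is exponential for simplicity; real protocols make it
    polynomial by exploiting circuit structure.
    """
    claim = sum(table.values()) % P
    prefix = []  # random field elements chosen so far
    for i in range(n):
        # Prover: restrict coordinate i to 0 and 1, sum out the boolean suffix.
        suffix = n - i - 1
        g0 = g1 = 0
        for b in itertools.product((0, 1), repeat=suffix):
            g0 = (g0 + mle(table, prefix + [0] + list(b), n)) % P
            g1 = (g1 + mle(table, prefix + [1] + list(b), n)) % P
        # Verifier: the two restrictions must sum to the running claim.
        if (g0 + g1) % P != claim:
            return False
        r = rng.randrange(P)                  # public coin
        claim = (g0 + r * (g1 - g0)) % P      # degree-1 interpolation at r
        prefix.append(r)
    # Final check: evaluate the multilinear extension at the random point.
    return mle(table, prefix, n) == claim
```

Because the restricted polynomial in each round is degree 1, sending its values at 0 and 1 determines it, which is what keeps the per-round communication small.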
From secrecy to soundness: efficient verification via secure computation
In Proceedings of the 37th International Colloquium on Automata, Languages and Programming (ICALP), 2010
Abstract

Cited by 46 (4 self)
Abstract. We study the problem of verifiable computation (VC) in which a computationally weak client wishes to delegate the computation of a function f on an input x to a computationally strong but untrusted server. We present new general approaches for constructing VC protocols, as well as solving the related problems of program checking and self-correcting. The new approaches reduce the task of verifiable computation to suitable variants of secure multi-party computation (MPC) protocols. In particular, we show how to efficiently convert the secrecy property of MPC protocols into soundness of a VC protocol via the use of a message authentication code (MAC). The new connections allow us to apply results from the area of MPC towards simplifying, unifying, and improving over previous results on VC and related problems. In particular, we obtain the following concrete applications: (1) the first VC protocols for arithmetic computations which only make a black-box use of the underlying field or ring; (2) a non-interactive VC protocol for boolean circuits in the preprocessing model, conceptually simplifying and improving the online complexity of a recent protocol of Gennaro et al. (Cryptology ePrint Archive: Report 2009/547); (3) NC⁰ self-correctors for complete languages in the complexity class NC¹ and various log-space classes, strengthening previous AC⁰ correctors of Goldwasser et al. (STOC 2008).
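The MAC ingredient mentioned above can be illustrated with a standard one-time information-theoretic MAC over a prime field, tag = a·m + b. This sketch shows only that primitive, not the paper's MPC-based construction, and all names are illustrative: a server that alters a MACed value can produce a valid tag with probability at most 1/|F|.

```python
import random

P = 2**61 - 1  # prime modulus; the affine MAC works over any large field

def keygen(rng=random.Random()):
    """One-time MAC key: a random affine map over the field (a must be nonzero)."""
    return rng.randrange(1, P), rng.randrange(P)

def mac(key, m):
    """Tag a field element m with key (a, b)."""
    a, b = key
    return (a * m + b) % P

def verify(key, m, tag):
    """Accept iff the tag matches; forging for m' != m requires guessing a."""
    return mac(key, m) == tag
```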
Amplifying lower bounds by means of self-reducibility
In IEEE Conference on Computational Complexity, 2008
Abstract

Cited by 20 (6 self)
We observe that many important computational problems in NC¹ share a simple self-reducibility property. We then show that, for any problem A having this self-reducibility property, A has polynomial-size TC⁰ circuits if and only if it has TC⁰ circuits of size n^{1+ɛ} for every ɛ > 0 (counting the number of wires in a circuit as the size of the circuit). As an example of what this observation yields, consider the Boolean Formula Evaluation problem (BFE), which is complete for NC¹ and has the self-reducibility property. It follows from a lower bound of Impagliazzo, Paturi, and Saks that BFE requires depth-d TC⁰ circuits of size n^{1+ɛ_d}. If one were able to improve this lower bound to show that there is some constant ɛ > 0 such that every TC⁰ circuit family recognizing BFE has size n^{1+ɛ}, then it would follow that TC⁰ ≠ NC¹. We show that proving lower bounds of the form n^{1+ɛ} is not ruled out by the Natural Proof framework of Razborov and Rudich, and hence there is currently no known barrier for separating classes such as ACC⁰, TC⁰ and NC¹ via existing “natural” approaches to proving circuit lower bounds. We also show that problems with small uniform constant-depth circuits have algorithms that simultaneously have small space and time bounds. We then make use of known time-space tradeoff lower bounds to show that SAT requires uniform depth-d TC⁰ and AC⁰[6] circuits of size n^{1+c} for some constant c depending on d.
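The Boolean Formula Evaluation problem referenced above takes an encoded formula together with an assignment to its variables and asks for the formula's value. A minimal recursive evaluator makes the problem concrete; the nested-tuple encoding here is chosen purely for illustration:

```python
# Formula encoding (illustrative): ('and', l, r), ('or', l, r),
# ('not', sub), or ('var', i) referencing position i of the assignment.
def evaluate(formula, assignment):
    """Evaluate an encoded boolean formula under a truth assignment."""
    op = formula[0]
    if op == 'var':
        return assignment[formula[1]]
    if op == 'not':
        return not evaluate(formula[1], assignment)
    left = evaluate(formula[1], assignment)
    right = evaluate(formula[2], assignment)
    return (left and right) if op == 'and' else (left or right)
```

The self-reducibility exploited in the paper has the same flavor as this recursion: the value of a formula is determined by applying its top gate to the values of its subformulas, so large instances reduce to smaller instances of the same problem.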
Hardness amplification proofs require majority
In Proceedings of the 40th Annual ACM Symposium on the Theory of Computing (STOC), 2008
Abstract

Cited by 20 (5 self)
Hardness amplification is the fundamental task of converting a δ-hard function f: {0,1}^n → {0,1} into a (1/2 − ɛ)-hard function Amp(f), where f is γ-hard if small circuits fail to compute f on at least a γ fraction of the inputs. Typically, ɛ, δ are small (and δ = 2^{−k} captures the case where f is worst-case hard). Achieving ɛ = 1/n^{ω(1)} is a prerequisite for cryptography and most pseudorandom-generator constructions. In this paper we study the complexity of black-box proofs of hardness amplification. A class of circuits D proves a hardness amplification result if for any function h that agrees with Amp(f) on a 1/2 + ɛ fraction of the inputs there exists an oracle circuit D ∈ D such that D^h agrees with f on a 1 − δ fraction of the inputs. We focus on the case where every D ∈ D makes non-adaptive queries to h. This setting captures most hardness amplification techniques. We prove two main results: 1. The circuits in D “can be used” to compute the majority function on 1/ɛ bits. In particular, these circuits have large depth when ɛ ≤ 1/polylog(n). 2. The circuits in D must make Ω(log(1/δ)/ɛ²) oracle queries. Both our bounds on the depth and on the number of queries are tight up to constant factors.
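The hardness notions above are agreement conditions over the input space. For small n they can be checked by brute force; the sketch below (function names are illustrative) makes the definitions concrete by measuring agreement exhaustively:

```python
import itertools

def agreement(f, h, n):
    """Fraction of n-bit inputs on which h agrees with f (brute force)."""
    inputs = list(itertools.product((0, 1), repeat=n))
    return sum(f(x) == h(x) for x in inputs) / len(inputs)

def is_gamma_hard_for(f, circuits, gamma, n):
    """f is gamma-hard for a class if every member errs on >= gamma of inputs,
    i.e. agrees with f on at most a 1 - gamma fraction."""
    return all(agreement(f, h, n) <= 1 - gamma for h in circuits)
```

For example, parity is 1/2-hard for the single "constant zero" circuit, since that circuit agrees with parity on exactly half of all inputs.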
Uniform direct product theorems: Simplified, unified and derandomized
2007
Abstract

Cited by 19 (4 self)
The classical Direct-Product Theorem for circuits says that if a Boolean function f: {0,1}^n → {0,1} is somewhat hard to compute on average by small circuits, then the corresponding k-wise direct product function f^k(x1, ..., xk) = (f(x1), ..., f(xk)) (where each xi ∈ {0,1}^n) is significantly harder to compute on average by slightly smaller circuits. We prove a fully uniform version of the Direct-Product Theorem with information-theoretically optimal parameters, up to constant factors. Namely, we show that for given k and ɛ, there is an efficient randomized algorithm A with the following property. Given a circuit C that computes f^k on at least an ɛ fraction of inputs, the algorithm A outputs with probability at least 3/4 a list of O(1/ɛ) circuits such that at least one of the circuits on the list computes f on more than a 1 − δ fraction of inputs, for δ = O((log 1/ɛ)/k); moreover, each output circuit is an AC⁰ circuit (of size poly(n, k, log 1/δ, 1/ɛ)), with oracle access to the circuit C. Using the Goldreich-Levin decoding algorithm [GL89], we also get a fully uniform version of Yao’s XOR Lemma [Yao82] with optimal parameters, up to constant factors. Our results simplify and improve those in [IJK06]. Our main result may be viewed as an efficient approximate, local, list-decoding algorithm for
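The direct-product operator itself is mechanical; all of the difficulty lives in the hardness analysis. A one-line sketch of the operator:

```python
def direct_product(f, k):
    """Return f^k, mapping (x1, ..., xk) to (f(x1), ..., f(xk))."""
    def fk(*xs):
        assert len(xs) == k, "f^k takes exactly k inputs"
        return tuple(f(x) for x in xs)
    return fk
```

Intuitively, a circuit must get all k independent instances right at once, which is what drives the success probability down from 1 − δ toward ɛ.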
A (de)constructive approach to program checking
Electronic Colloquium on Computational Complexity, 2007
Abstract

Cited by 12 (2 self)
Program checking, program self-correcting and program self-testing were pioneered by [Blum and Kannan] and [Blum, Luby and Rubinfeld] in the mid-eighties as a new way to gain confidence in software, by considering program correctness on an input-by-input basis rather than full program verification. Work in the field of program checking focused on designing, for specific functions, checkers, testers and correctors that are more efficient than the best program known for the function. These were designed utilizing specific algebraic, combinatorial or completeness properties of the function at hand. In this work we introduce a novel composition methodology for improving the efficiency of program checkers. We use this approach to design a variety of program checkers that are provably more efficient, in terms of circuit depth, than the optimal program for computing the function being checked. Extensions of this methodology for the cases of program testers and correctors are also presented. In particular, we show: • For all i ≥ 1, every language in RNC^i (that is NC¹-hard under NC⁰-reductions) has a program checker in RNC^{i−1}.
Delegating computation reliably: Paradigms and Constructions
2009
Abstract

Cited by 7 (1 self)
In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises an immediate concern: when and how can a weak device verify that a computational task was completed correctly? This practically motivated question touches on foundational questions in cryptography and complexity theory. The focus of this thesis is verifying the correctness of delegated computations. We construct efficient protocols (interactive proofs) for delegating computational tasks. In particular, we present: • A protocol for delegating any computation, where the work needed to verify the correctness of the output is linear in the input length, polynomial in the computation's
The Complexity of Local List Decoding
Abstract

Cited by 4 (1 self)
We study the complexity of locally list-decoding binary error-correcting codes with good parameters (that are polynomially related to information-theoretic bounds). We show that computing majority over Θ(1/ǫ) bits is essentially equivalent to locally list-decoding binary codes from relative distance 1/2 − ǫ with list size at most poly(1/ǫ). That is, a local decoder for such a code can be used to construct a circuit of roughly the same size and depth that computes majority on Θ(1/ǫ) bits. On the other hand, there is an explicit locally list-decodable code with these parameters that has a very efficient (in terms of circuit size and depth) local decoder that uses majority gates of fan-in Θ(1/ǫ). Using known lower bounds for computing majority by constant-depth circuits, our results imply that every constant-depth decoder for such a code must have size almost exponential in 1/ǫ (this extends even to subexponential list sizes). This shows that the list-decoding radius of the constant-depth local list-decoders of Goldwasser et al. [STOC07] is essentially optimal. Using the tight connection between locally list-decodable codes and hardness amplification, we obtain similar limitations on the complexity of uniform (and even somewhat non-uniform) fully-black-box worst-case to average-case reductions. Very recently, Shaltiel and Viola [SV08] independently obtained similar limitations for completely non-uniform fully-black-box worst-case to average-case reductions, but only for the special case that the reduction is non-adaptive. Our results apply also to adaptive reductions.
Uniform Derandomization from Pathetic Lower Bounds
2009
Abstract

Cited by 4 (2 self)
A recurring theme in the literature on derandomization is that probabilistic algorithms can be simulated quickly by deterministic algorithms, if one can obtain impressive (i.e., superpolynomial, or even nearly-exponential) circuit size lower bounds for certain problems. In contrast to what is needed for derandomization, existing lower bounds seem rather pathetic (linear-size lower bounds for general circuits [IM02], nearly cubic lower bounds for formula size [Hås98], nearly n log log n size lower bounds for branching programs [BSSV03], n^{1+c_d} for depth-d threshold circuits [IPS97]). Here, we present two instances where “pathetic” lower bounds of the form n^{1+ɛ} would suffice to derandomize interesting classes of probabilistic algorithms. We show: • If the word problem over S5 requires constant-depth threshold circuits of size n^{1+ɛ} for some ɛ > 0, then any language accepted by uniform polynomial-size probabilistic threshold circuits is accepted by a uniform family of deterministic constant-depth threshold circuits of subexponential size. • If there are no constant-depth arithmetic circuits of size n^{1+ɛ} for the problem of multiplying a sequence of n 3-by-3 matrices, then for every constant d, black-box identity testing for depth-d arithmetic circuits with bounded individual degree can be performed by a uniform family of deterministic constant-depth AC⁰ circuits of subexponential size.
Lower bounds on the query complexity of non-uniform and adaptive reductions showing hardness amplification
2012
Abstract

Cited by 3 (1 self)
Hardness amplification results show that for every Boolean function f there exists a Boolean function Amp(f) such that the following holds: if every circuit of size s computes f correctly on at most a 1 − δ fraction of inputs, then every circuit of size s′ computes Amp(f) correctly on at most a 1/2 + ϵ fraction of inputs. All hardness amplification results in the literature suffer from “size loss”, meaning that s′ ≤ ϵ · s. In this paper we show that proofs using “non-uniform reductions” must suffer from such size loss. To the best of our knowledge, all proofs in the literature are by non-uniform reductions. Our result is the first lower bound that applies to non-uniform reductions that are adaptive. A reduction is an oracle circuit R^(·) such that when given oracle access to any function D that computes Amp(f) correctly on a 1/2 + ϵ fraction of inputs, R^D computes f correctly on a 1 − δ fraction of inputs. A non-uniform reduction is allowed to also receive a short advice string that may depend on both f and D in an arbitrary way. The well-known connection between hardness amplification and list-decodable error-correcting codes implies that reductions showing hardness amplification cannot be uniform for δ, ϵ < 1/4. A reduction is non-adaptive if it makes non-adaptive queries to its oracle. Shaltiel and Viola (SICOMP 2010) showed lower bounds on the number of queries made by non-uniform