Results 1–10 of 59
Worst-case to average-case reductions based on Gaussian measures
 SIAM J. on Computing
, 2004
Abstract

Cited by 128 (23 self)
We show that finding small solutions to random modular linear equations is at least as hard as approximating several lattice problems in the worst case within a factor almost linear in the dimension of the lattice. The lattice problems we consider are the shortest vector problem, the shortest independent vectors problem, the covering radius problem, and the guaranteed distance decoding problem (a variant of the well-known closest vector problem). The approximation factor we obtain is n·log^{O(1)} n for all four problems. This greatly improves on all previous work on the subject, starting from Ajtai’s seminal paper (STOC, 1996) up to the strongest previously known results by Micciancio (SIAM J. on Computing, 2004). Our results also bring us closer to the limit where the problems are no longer known to be in NP ∩ coNP. Our main tools are Gaussian measures on lattices and the high-dimensional Fourier transform. We start by defining a new lattice parameter which determines the amount of Gaussian noise that one has to add to a lattice in order to get close to a uniform distribution. In addition to yielding quantitatively much stronger results, the use of this parameter allows us to simplify many of the complications in previous work. Our technical contributions are twofold. First, we show tight connections between this new parameter and existing lattice parameters. One such important connection is between this parameter and the length of the shortest set of linearly independent vectors. Second, we prove that the distribution that one obtains after adding Gaussian noise to the lattice has the following interesting property: the distribution of the noise vector, when conditioned on the final value, behaves in many respects like the original Gaussian noise vector. In particular, its moments remain essentially unchanged.
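The smoothing phenomenon this abstract describes can be illustrated numerically in the simplest possible case, the one-dimensional lattice ℤ: how much Gaussian noise must be added before the noise, taken modulo 1, is statistically close to uniform? A minimal sketch (function names are ours, and this toy computation is only an illustration of the parameter's role, not the paper's construction):

```python
import math

def gaussian_mod1_density(x, sigma, terms=50):
    # Density at x in [0, 1) of (Gaussian noise mod 1) for the lattice Z:
    # wrap the continuous Gaussian N(0, sigma^2) around the unit torus.
    return sum(
        math.exp(-((x - k) ** 2) / (2 * sigma ** 2))
        for k in range(-terms, terms + 1)
    ) / (sigma * math.sqrt(2 * math.pi))

def distance_from_uniform(sigma, grid=1000):
    # Total variation distance between the wrapped Gaussian and the
    # uniform distribution on [0, 1), by midpoint numeric integration.
    total = 0.0
    for i in range(grid):
        x = (i + 0.5) / grid
        total += abs(gaussian_mod1_density(x, sigma) - 1.0) / grid
    return total / 2

for sigma in (0.1, 0.3, 0.5, 1.0):
    print(f"sigma={sigma}: distance from uniform ~ {distance_from_uniform(sigma):.2e}")
```

The distance decays extremely fast once sigma passes a threshold, which is exactly the kind of sharp "smoothing" cutoff the new lattice parameter captures.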
Notions of Reducibility between Cryptographic Primitives
, 2004
Abstract

Cited by 77 (8 self)
Starting with the seminal paper of Impagliazzo and Rudich [18], there has been a large body of work showing that various cryptographic primitives cannot be reduced to each other via "black-box" reductions.
Some Applications of Coding Theory in Computational Complexity
, 2004
Abstract

Cited by 69 (2 self)
Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography.
On basing one-way functions on NP-hardness
 In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing
, 2006
Abstract

Cited by 35 (1 self)
We consider the question of whether it is possible to base the existence of one-way functions on NP-hardness. That is, we study the possibility of reductions from a worst-case NP-hard decision problem to the task of inverting a polynomial-time computable function. We prove two negative results: 1. For any polynomial-time computable function f: the existence of a randomized non-adaptive reduction of worst-case NP problems to the task of average-case inverting f implies that coNP ⊆ AM. It is widely believed that coNP is not contained in AM. Thus, this result may be regarded as showing that such reductions cannot exist (unless coNP ⊆ AM). This result improves previous negative results that placed coNP in non-uniform AM. 2. For any polynomial-time computable function f for which it is possible to efficiently compute preimage sizes (i.e., |f⁻¹(y)| for a given y): the existence of a randomized reduction of worst-case NP problems to the task of inverting f implies that coNP ⊆ AM. Moreover, this is also true for functions for which it is possible to verify (via an AM protocol) the approximate size of preimages. These results hold for any reduction, including adaptive ones. The previously known negative results regarding worst-case to average-case reductions were confined to non-adaptive reductions. In the course of proving the above results, two new AM protocols emerge for proving upper bounds on the sizes of NP sets. Whereas the known lower-bound protocol on set sizes by [Goldwasser–Sipser] works for any NP set, the known upper-bound protocol on set sizes by [Aiello–Håstad] works in a setting where the verifier knows a random secret element (unknown to the prover) in the NP set. The new protocols we develop here each work under different requirements than those of [Aiello–Håstad], enlarging the settings in which it is possible to prove upper bounds on NP set sizes.
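The hashing idea behind such set-size protocols can be sketched outside the interactive setting: hash the set with a random affine map into a range of size R, and the probability that some element lands on a designated value is roughly 1 − (1 − 1/R)^|S|, so the hit frequency over many random hashes pins down |S| without counting it directly. A toy sketch of this counting trick (not the AM protocols themselves; all names are illustrative):

```python
import math
import random

def estimate_set_size(S, R=1000, trials=3000, seed=0):
    # Estimate |S| via random hashing: a random affine hash into range R
    # sends some element of S to 0 with probability about
    # 1 - (1 - 1/R)^|S|, so the empirical hit rate reveals the size of S.
    rng = random.Random(seed)
    p = 2 ** 61 - 1  # a prime larger than every element of the universe
    hits = 0
    for _ in range(trials):
        a = rng.randrange(1, p)
        b = rng.randrange(p)
        if any(((a * x + b) % p) % R == 0 for x in S):
            hits += 1
    rate = hits / trials
    # Invert the hit-rate formula: rate ~ 1 - e^(-|S|/R) for |S| << R*R.
    return -R * math.log(1 - rate)

S = set(range(1000, 1200))  # a set of size 200 known to the "prover"
print(round(estimate_set_size(S)))
```

In the actual protocols the prover exhibits witnesses for the hash collisions and the verifier checks them, which is what turns this statistical estimate into a sound interactive proof of a set-size bound.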
Using Nondeterminism to Amplify Hardness
, 2004
Abstract

Cited by 34 (6 self)
We revisit the problem of hardness amplification in NP, as recently studied by O’Donnell (STOC ’02). We prove that if NP has a balanced function f such that any circuit of size s(n) fails to compute f on a 1/poly(n) fraction of inputs, then NP has a function f′ such that any circuit of size s′(n) = s(√n)^{Ω(1)} fails to compute f′ on a 1/2 − 1/s′(n) fraction of inputs. In particular: 1. If s(n) = n^{ω(1)}, we amplify to hardness 1/2 − 1/n^{ω(1)}. 2. If s(n) = 2^{n^{Ω(1)}}, we amplify to hardness 1/2 − 1/2^{n^{Ω(1)}}. 3. If s(n) = 2^{Ω(n)}, we amplify to hardness 1/2 − 1/2^{Ω(√n)}. These improve the results of O’Donnell, which only amplified to 1/2 − 1/√n. O’Donnell also proved that no construction of a certain general form could amplify beyond 1/2 − 1/n. We bypass this barrier by using both derandomization and nondeterminism in the construction of f′. We also prove impossibility results demonstrating that both our use of nondeterminism and the hypothesis that f is balanced are necessary for “black-box” hardness amplification procedures (such as ours).
On uniform amplification of hardness in NP
 In Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing
, 2005
Abstract

Cited by 26 (3 self)
We continue the study of amplification of average-case complexity within NP, and we focus on the uniform case. We prove that if every problem in NP admits an efficient uniform algorithm that (averaged over random inputs and over the internal coin tosses of the algorithm) succeeds with probability at least 1/2 + 1/(log n)^α, then for every problem in NP there is an efficient uniform algorithm that succeeds with probability at least 1 − 1/poly(n). Above, α > 0 is an absolute constant. Previously, Trevisan (FOCS ’03) presented a similar reduction between success 3/4 + 1/(log n)^α and 1 − 1/(log n)^α. Stronger reductions, due to O’Donnell (STOC ’02) and Healy, Vadhan and Viola (FOCS ’04), are known in the non-uniform case.
Average-Case Complexity
 in Foundations and Trends in Theoretical Computer Science Volume 2, Issue 1
, 2006
Abstract

Cited by 25 (0 self)
We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy on average with respect to the uniform distribution, then all problems in NP are easy on average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different “degrees” of average-case complexity. We discuss some of these “hardness amplification” results.
Guarantees for the success frequency of an algorithm for finding Dodgson-election winners
 In Proceedings of the 31st International Symposium on Mathematical Foundations of Computer Science
, 2006
Abstract

Cited by 22 (7 self)
Dodgson’s election system elegantly satisfies the Condorcet criterion. However, determining the winner of a Dodgson election is known to be Θ^p_2-complete ([HHR97], see also [BTT89]), which implies that unless P = NP no polynomial-time solution to this problem exists, and unless the polynomial hierarchy collapses to NP the problem is not even in NP. Nonetheless, we prove that when the number of voters is much greater than the number of candidates (although the number of voters may still be polynomial in the number of candidates), a simple greedy algorithm very frequently finds the Dodgson winners in such a way that it “knows” that it has found them, and furthermore the algorithm never incorrectly declares a non-winner to be a winner.
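The Condorcet criterion itself is easy to check: a Condorcet winner is a candidate who beats every rival in a head-to-head majority contest (it is computing Dodgson scores when no such candidate exists, not this check, that is hard). A minimal sketch of the check (our own illustrative code, not the paper's greedy algorithm):

```python
def condorcet_winner(rankings):
    # rankings: one preference order per voter, best candidate first.
    # Returns the candidate who beats every other candidate in a
    # pairwise majority vote, or None if no such candidate exists.
    candidates = rankings[0]
    n = len(rankings)

    def beats(a, b):
        # True if a strict majority of voters rank a above b.
        return sum(r.index(a) < r.index(b) for r in rankings) > n / 2

    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None

print(condorcet_winner([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]))  # A
print(condorcet_winner([["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]))  # None (Condorcet cycle)
```

The second profile is the classic Condorcet paradox: every candidate loses some pairwise contest, which is precisely the situation where Dodgson's system must measure how far each candidate is from being a Condorcet winner.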
If NP languages are hard on the worst-case then it is easy to find their hard instances
 In Proceedings of the 20th Annual Conference on Computational Complexity (CCC)
, 2005
Abstract

Cited by 19 (7 self)
We prove that if NP ⊈ BPP, i.e., if some NP-complete language is worst-case hard, then for every probabilistic algorithm trying to decide the language, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all probabilistic polynomial-time algorithms, which is a prerequisite assumption needed for OWFs and cryptography (even if not a sufficient assumption). Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case). Our results are based on the following lemma, which may be of independent interest: given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst case, we can efficiently generate at most three Boolean formulas (of increasing
Hardness amplification proofs require majority
 In Proceedings of the 40th Annual ACM Symposium on the Theory of Computing (STOC
, 2008
Abstract

Cited by 19 (4 self)
Hardness amplification is the fundamental task of converting a δ-hard function f: {0, 1}^n → {0, 1} into a (1/2 − ε)-hard function Amp(f), where f is γ-hard if small circuits fail to compute f on at least a γ fraction of the inputs. Typically, ε and δ are small (and δ = 2^{−k} captures the case where f is worst-case hard). Achieving ε = 1/n^{ω(1)} is a prerequisite for cryptography and most pseudorandom-generator constructions. In this paper we study the complexity of black-box proofs of hardness amplification. A class of circuits D proves a hardness amplification result if for any function h that agrees with Amp(f) on a 1/2 + ε fraction of the inputs there exists an oracle circuit D ∈ D such that D^h agrees with f on a 1 − δ fraction of the inputs. We focus on the case where every D ∈ D makes non-adaptive queries to h. This setting captures most hardness amplification techniques. We prove two main results: 1. The circuits in D “can be used” to compute the majority function on 1/ε bits. In particular, these circuits have large depth when ε ≤ 1/polylog n. 2. The circuits in D must make Ω(log(1/δ)/ε²) oracle queries. Both our bounds on the depth and on the number of queries are tight up to constant factors.