Results 1–10 of 84
Public-key cryptosystems from the worst-case shortest vector problem
, 2008
Abstract

Cited by 152 (22 self)
We construct public-key cryptosystems that are secure assuming the worst-case hardness of approximating the length of a shortest nonzero vector in an n-dimensional lattice to within a small poly(n) factor. Prior cryptosystems with worst-case connections were based either on the shortest vector problem for a special class of lattices (Ajtai and Dwork, STOC 1997; Regev, J. ACM 2004), or on the conjectured hardness of lattice problems for quantum algorithms (Regev, STOC 2005). Our main technical innovation is a reduction from certain variants of the shortest vector problem to corresponding versions of the “learning with errors” (LWE) problem; previously, only a quantum reduction of this kind was known. In addition, we construct new cryptosystems based on the search version of LWE, including a very natural chosen-ciphertext-secure system that has a much simpler description and tighter underlying worst-case approximation factor than prior constructions.
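To make the LWE mechanism concrete, the following is a toy single-bit sketch in the spirit of Regev-style LWE encryption. It is only an illustration: the parameter names N, Q, M and the error set are ours, and the values are far too small to be secure; they are chosen so that decryption always succeeds in this sketch.

```python
import random

# Toy Regev-style single-bit encryption from LWE (illustrative, insecure).
N, Q, M = 8, 257, 30        # dimension, modulus, number of LWE samples
ERR = (-1, 0, 1)            # small error distribution

def keygen():
    s = [random.randrange(Q) for _ in range(N)]               # secret vector
    A = [[random.randrange(Q) for _ in range(N)] for _ in range(M)]
    # Each public-key entry is a noisy inner product: b_i = <a_i, s> + e_i mod Q.
    b = [(sum(a * x for a, x in zip(row, s)) + random.choice(ERR)) % Q
         for row in A]
    return s, (A, b)

def encrypt(pk, bit):
    A, b = pk
    subset = [i for i in range(M) if random.random() < 0.5]   # random subset sum
    u = [sum(A[i][j] for i in subset) % Q for j in range(N)]
    v = (sum(b[i] for i in subset) + bit * (Q // 2)) % Q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(a * x for a, x in zip(u, s))) % Q
    # The accumulated error has magnitude at most M < Q/4, so d lands
    # near 0 for bit 0 and near Q/2 for bit 1.
    return 0 if min(d, Q - d) < Q // 4 else 1
```

The subset-sum structure of `encrypt` is what ties security to the pseudorandomness of the LWE samples; the hedge `M < Q/4` guarantees correctness in this toy setting.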
The complexity of online memory checking
 In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science
, 2005
Abstract

Cited by 54 (3 self)
We consider the problem of storing a large file on a remote and unreliable server. To verify that the file has not been corrupted, a user could store a small private (randomized) “fingerprint” on his own computer. This is the setting for the well-studied authentication problem in cryptography, and the required fingerprint size is well understood. We study the problem of sublinear authentication: suppose the user would like to encode and store the file in a way that allows him to verify that it has not been corrupted, but without reading the entire file. If the user only wants to read q bits of the file, how large does the size s of the private fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between s and q when the adversary is not computationally bounded, namely: s × q = Ω(n), where n is the file size. This is an easier case of the online memory checking problem, introduced by Blum et al. in 1991, and hence the same (tight) lower bound applies also to that problem. It was previously shown that when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers. The same is also true for sublinear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the s × q = Ω(n) lower bound in a computational setting implies the existence of one-way functions.
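For contrast with the sublinear schemes the abstract studies, here is the classical full-read baseline: a constant-size keyed fingerprint (instantiated computationally with an HMAC, a standard choice, not the paper's construction) detects corruption, but verification reads all n bits of the file, i.e., q = n. The paper asks how small s can be when q must be sublinear.

```python
import hmac, hashlib, os

def fingerprint(key: bytes, data: bytes) -> bytes:
    # Keyed fingerprint kept on the user's machine; its size is constant
    # (32 bytes) regardless of the file length n.
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, tag: bytes) -> bool:
    # Verification re-reads the entire file (q = n bits) and recomputes the tag.
    return hmac.compare_digest(fingerprint(key, data), tag)

key = os.urandom(16)
blob = b"contents of a large remote file"
tag = fingerprint(key, blob)
assert verify(key, blob, tag)                  # intact file accepted
assert not verify(key, blob + b"!", tag)       # corruption detected
```

The s × q = Ω(n) bound says that, against an unbounded adversary, shrinking q below n forces the private fingerprint s to grow correspondingly.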
Pseudorandomness and average-case complexity via uniform reductions
 In Proceedings of the 17th Annual IEEE Conference on Computational Complexity
, 2002
Abstract

Cited by 51 (7 self)
Impagliazzo and Wigderson (36th FOCS, 1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP). Unlike results in the non-uniform setting, their result does not provide a continuous trade-off between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. In this paper:
◦ We obtain an optimal worst-case to average-case connection for EXP: if EXP ⊈ BPTIME(t(n)), then EXP has problems that cannot be solved on a fraction 1/2 + 1/t′(n) of the inputs by BPTIME(t′(n)) algorithms, for t′ = t^Ω(1).
◦ We exhibit a PSPACE-complete self-correctible and downward self-reducible problem. This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson, which used a #P-complete problem with these properties.
◦ We argue that the results of Impagliazzo and Wigderson, and the ones in this paper, cannot be proved via “black-box” uniform reductions.
On the Compressibility of NP Instances and Cryptographic Applications
Abstract

Cited by 38 (0 self)
 Add to MetaCart
(Show Context)
We study compression that preserves the solution to an instance of a problem rather than preserving the instance itself. Our focus is on the compressibility of NP decision problems. We consider NP problems that have long instances but relatively short witnesses. The question is, can one efficiently compress an instance and store a shorter representation that maintains the information of whether the original input is in the language or not. We want the length of the compressed instance to be polynomial in the length of the witness and polylogarithmic in the length of the original input. We discuss the differences between this notion and similar notions from parameterized complexity. Such compression enables one to succinctly store instances until a future setting allows solving them, either via a technological or algorithmic breakthrough or simply until enough time has elapsed. We give a new classification of NP with respect to compression. This classification forms a stratification of NP that we call the VC hierarchy. The hierarchy is based on a new type of reduction called W-reduction, and there are compression-complete problems for each class. Our motivation for studying this issue stems from the vast cryptographic implications compressibility has. For example, we say that SAT is compressible if there exists a polynomial p(·, ·) so that given a …
Average-case computational complexity theory
 Complexity Theory Retrospective II
, 1997
Abstract

Cited by 32 (2 self)
Being NP-complete has been widely interpreted as being computationally intractable. But NP-completeness is a worst-case concept. Some NP-complete problems are “easy on average”, but some may not be. How is one to know whether an NP-complete problem is “difficult on average”? The theory of average-case computational complexity, initiated by Levin about ten years ago, is devoted to studying this problem. This paper is an attempt to provide an overview of the main ideas and results in this important new subarea of complexity theory.
On Bounded Distance Decoding, Unique Shortest Vectors, and the Minimum Distance Problem
, 2009
Abstract

Cited by 27 (5 self)
We prove the equivalence, up to a small polynomial approximation factor √(n / log n), of the lattice problems uSVP (unique Shortest Vector Problem), BDD (Bounded Distance Decoding) and GapSVP (the decision version of the Shortest Vector Problem). This resolves a long-standing open problem about the relationship between uSVP and the more standard GapSVP, as well as the BDD problem commonly used in coding theory. The main cryptographic application of our work is the proof that the Ajtai-Dwork ([AD97]) and the Regev ([Reg04a]) cryptosystems, which were previously only known to be based on the hardness of uSVP, can be equivalently based on the hardness of worst-case GapSVP with approximation factors O(n^2.5) and O(n^2), respectively. Also, in the case of uSVP and BDD, our connection is very tight, establishing the equivalence (within a small constant approximation factor) between the two most central problems used in lattice-based public-key cryptography and coding theory.
Public Key Cryptography from Different Assumptions
, 2008
Abstract

Cited by 22 (4 self)
We construct a new public-key encryption scheme based on two assumptions: 1. One can obtain a pseudorandom generator with small locality by connecting the outputs to the inputs using any sufficiently good unbalanced expander. 2. It is hard to distinguish between a random graph that is such an expander and a random graph where a (planted) random logarithmic-sized subset S of the outputs is connected to fewer than |S| inputs. The validity and strength of the assumptions raise interesting new algorithmic and pseudorandomness questions, and we explore their relation to the current state of the art.
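The object in assumption 1 can be illustrated with a minimal small-locality generator in the style of Goldreich's local functions: each output bit reads only d seed bits, as dictated by a bipartite graph. All names and parameters below are illustrative, and the parity predicate is for demonstration only; it is linear and hence trivially distinguishable, whereas the construction in the abstract needs a sufficiently good expander graph and a more robust predicate.

```python
import random

def local_prg(seed_bits, graph, predicate):
    # Each output bit depends only on the d seed bits named by its graph row,
    # so the generator has locality d.
    return [predicate([seed_bits[i] for i in row]) for row in graph]

def parity(bits):
    # Toy predicate (XOR); real local PRGs need a nonlinear predicate.
    return sum(bits) % 2

n, m, d = 16, 32, 3                                        # seed len, output len, locality
graph = [random.sample(range(n), d) for _ in range(m)]     # random bipartite graph
seed = [random.randrange(2) for _ in range(n)]
out = local_prg(seed, graph, parity)                       # m pseudorandom-looking bits
```

The point of the expander requirement is exactly to rule out the small, poorly connected output subsets that assumption 2 asks a distinguisher to find.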
Guarantees for the success frequency of an algorithm for finding Dodgson-election winners
 In Proceedings of the 31st International Symposium on Mathematical Foundations of Computer Science
, 2006
Abstract

Cited by 20 (7 self)
Dodgson’s election system elegantly satisfies the Condorcet criterion. However, determining the winner of a Dodgson election is known to be Θ^p_2-complete ([HHR97], see also [BTT89]), which implies that unless P = NP no polynomial-time solution to this problem exists, and unless the polynomial hierarchy collapses to NP the problem is not even in NP. Nonetheless, we prove that when the number of voters is much greater than the number of candidates (although the number of voters may still be polynomial in the number of candidates), a simple greedy algorithm very frequently finds the Dodgson winners in such a way that it “knows” that it has found them, and furthermore the algorithm never incorrectly declares a non-winner to be a winner.
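This is not the paper's greedy algorithm, but a minimal sketch of the Condorcet criterion it builds on: a Condorcet winner beats every rival in a pairwise majority contest, and a candidate's Dodgson score measures how few adjacent swaps in voters' rankings would make it one.

```python
def condorcet_winner(ballots):
    """Return the candidate that beats every other candidate in a pairwise
    majority contest, or None if no such candidate exists.  Each ballot is
    a list of candidates ordered from most to least preferred."""
    candidates = ballots[0]

    def beats(a, b):
        # Count voters preferring a to b; a beats b on a strict majority.
        pref_a = sum(1 for r in ballots if r.index(a) < r.index(b))
        return pref_a > len(ballots) - pref_a

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None

# Three voters over candidates A, B, C; B beats A 2-1 and C 3-0.
ballots = [["B", "A", "C"], ["B", "C", "A"], ["A", "B", "C"]]
```

When no Condorcet winner exists (e.g., a preference cycle), the function returns None; Dodgson's system then selects whoever is "closest" to being one, which is where the hardness arises.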
If NP languages are hard on the worst-case then it is easy to find their hard instances
Abstract

Cited by 19 (6 self)
We prove that if NP ⊄ BPP, i.e., if some NP-complete language is worst-case hard, then for every probabilistic algorithm trying to decide the language, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all probabilistic polynomial-time algorithms, which is a prerequisite assumption needed for one-way functions and cryptography (even if not a sufficient assumption). Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case). Our results are based on the following lemma that may be of independent interest: given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst case, we can efficiently generate at most three Boolean formulas (of increasing …