Results 1–10 of 104
Worst-case equilibria
 In Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science
, 1999
Abstract

Cited by 853 (17 self)
In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a very simple network leads to some interesting mathematics, results, and open problems.
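As a toy illustration of the proposed ratio (our own example, not from the paper, and restricted to pure strategies for simplicity), the sketch below enumerates all assignments of jobs to identical machines, where each agent's cost is its machine's load and the social cost is the makespan, then compares the worst pure Nash equilibrium against the social optimum:

```python
from itertools import product
from fractions import Fraction

# Toy "worst Nash vs. optimum" ratio for jobs on identical machines.
# Instance and pure-strategy restriction are our own illustration.
def analyze(jobs, machines=2):
    best_opt, worst_nash = None, None
    for assign in product(range(machines), repeat=len(jobs)):
        loads = [0] * machines
        for j, mch in zip(jobs, assign):
            loads[mch] += j
        ms = max(loads)  # makespan = social cost
        best_opt = ms if best_opt is None else min(best_opt, ms)
        # Pure Nash: no job can strictly lower its cost by switching machines.
        nash = all(loads[mch] <= loads[other] + j
                   for j, mch in zip(jobs, assign)
                   for other in range(machines) if other != mch)
        if nash:
            worst_nash = ms if worst_nash is None else max(worst_nash, ms)
    return Fraction(worst_nash, best_opt)

print(analyze([2, 2, 1, 1]))  # 4/3
```

For jobs of sizes 2, 2, 1, 1 on two machines, the optimum makespan is 3, but the assignment putting both size-2 jobs on one machine is a (weak) pure Nash equilibrium with makespan 4, giving ratio 4/3.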
The Decision Diffie-Hellman Problem
, 1998
Abstract

Cited by 242 (6 self)
The Decision Diffie-Hellman assumption (DDH) is a gold mine. It enables one to construct efficient cryptographic systems with strong security properties. In this paper we survey the recent applications of DDH as well as known results regarding its security. We describe some open problems in this area. 1 Introduction. An important goal of cryptography is to pin down the exact complexity assumptions used by cryptographic protocols. Consider the Diffie-Hellman key exchange protocol [12]: Alice and Bob fix a finite cyclic group G and a generator g. They respectively pick random a, b ∈ [1, |G|] and exchange g^a, g^b. The secret key is g^{ab}. To totally break the protocol a passive eavesdropper, Eve, must compute the Diffie-Hellman function defined as dh_g(g^a, g^b) = g^{ab}. We say that the group G satisfies the Computational Diffie-Hellman assumption (CDH) if no efficient algorithm can compute the function dh_g(x, y) in G. Precise definitions are given in the next sectio...
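The key-exchange protocol described above can be sketched in a few lines. This is an illustration only: the modulus below (the Mersenne prime 2^127 − 1) and the base g = 3 are toy assumptions of ours, far too weak for real use, where standardized large groups or elliptic curves are required.

```python
import secrets

# Toy Diffie-Hellman key exchange over Z_p^* (illustrative parameters only).
p = 2**127 - 1          # prime modulus (Mersenne prime; too small for real security)
g = 3                   # fixed base, standing in for a group generator (assumption)

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)        # Alice publishes g^a
B = pow(g, b, p)        # Bob publishes g^b

k_alice = pow(B, a, p)  # Alice computes (g^b)^a = g^{ab}
k_bob   = pow(A, b, p)  # Bob computes (g^a)^b = g^{ab}
assert k_alice == k_bob  # both derive the same secret g^{ab}
```

An eavesdropper sees only g^a and g^b; computing g^{ab} from them is exactly the Diffie-Hellman function dh_g the abstract defines.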
A Sieve Algorithm for the Shortest Lattice Vector Problem
, 2001
Abstract

Cited by 212 (3 self)
We present a randomized 2^{O(n)}-time algorithm to compute a shortest nonzero vector in an n-dimensional rational lattice. The best known time upper bound for this problem was 2^{O(n log n)}.
Public-Key Cryptosystems from Lattice Reduction Problems
, 1996
Abstract

Cited by 149 (4 self)
We present a new proposal for a trapdoor one-way function, from which we derive public-key encryption and digital signatures. The security of the new construction is based on the conjectured computational difficulty of lattice-reduction problems, providing a possible alternative to existing public-key encryption algorithms and digital signatures such as RSA and DSS.
Efficient Cryptographic Schemes Provably as Secure as Subset Sum
Abstract

Cited by 91 (9 self)
We show very efficient constructions for a pseudorandom generator and for a universal one-way hash function based on the intractability of the subset sum problem for certain dimensions. (Pseudorandom generators can be used for private-key encryption and universal one-way hash functions for signature schemes.) The increase in efficiency in our constructions is due to the fact that many bits can be generated/hashed with one application of the assumed one-way function. All our constructions can be implemented in NC using an optimal number of processors.
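The core primitive behind such constructions is the subset-sum function f_a(x) = Σ_{i: x_i = 1} a_i mod 2^m for public random weights a_1, ..., a_n. A minimal sketch with toy dimensions of our own choosing (the paper's security depends on choosing n and m suitably; with m < n the function compresses, as a hash, and with m > n it expands, as a generator):

```python
import random

def subset_sum(a, m, x_bits):
    """Map an n-bit input to an m-bit output: sum the selected weights mod 2^m."""
    return sum(ai for ai, xi in zip(a, x_bits) if xi) % (1 << m)

n, m = 16, 8                                   # toy dimensions (n > m: compressing)
rng = random.Random(0)                         # seeded for reproducibility
a = [rng.randrange(1 << m) for _ in range(n)]  # public random weights
x = [rng.randrange(2) for _ in range(n)]       # input bits
print(subset_sum(a, m, x))
```

Note that one application of f_a produces m output bits at once, which is the source of the efficiency gain the abstract mentions.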
The Insecurity of the Digital Signature Algorithm with Partially Known Nonces
 Journal of Cryptology
, 2000
Abstract

Cited by 79 (18 self)
We present a polynomial-time algorithm that provably recovers the signer's secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log^{1/2} q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart, who recently introduced that topic. Our attack is based on a connection with the hidden number problem (HNP) introduced at Crypto '96 by Boneh and Venkatesan in order to study the bit-security of the Diffie-Hellman key exchange. The HNP consists, given a prime number q, of recovering a number α ∈ F_q such that for many known random t ∈ F_q ...
Quantum Computation and Lattice Problems
 Proc. 43rd Symposium on Foundations of Computer Science
, 2002
Abstract

Cited by 74 (4 self)
We present the first explicit connection between quantum computation and lattice problems. Namely, we show a solution to the unique-SVP under the assumption that there exists...
Factoring Polynomials and the Knapsack Problem.
Abstract

Cited by 56 (17 self)
Although a polynomial-time algorithm exists, the most commonly used algorithm for factoring a univariate polynomial f with integer coefficients is the Berlekamp-Zassenhaus algorithm, which has a complexity that depends exponentially on n, where n is the number of modular factors of f. This exponential time complexity is due to a combinatorial problem: the problem of choosing the right subset of these n factors. In this paper we reduce this combinatorial problem to a knapsack problem of a kind that can be solved with polynomial-time algorithms such as LLL or PSLQ. The result is a practical algorithm that can factor large polynomials even when n is large as well. 1 Introduction. Let f be a polynomial of degree N with integer coefficients, f = Σ_{i=0}^{N} a_i x^i, where a_i ∈ ℤ. Assume that f is monic (i.e. a_N = 1) and that f is squarefree (no multiple roots), so the gcd of f and f' equals 1. Let p be a prime number and let F_p = ℤ/(p) be the field with p elements. Let ℤ_p ...
Approximating Shortest Lattice Vectors is Not Harder Than Approximating Closest Lattice Vectors
Abstract

Cited by 56 (11 self)
We show that given oracle access to a subroutine which returns approximate closest vectors in a lattice, one may find in polynomial time approximate shortest vectors in a lattice. The level of approximation is maintained; that is, for any function f, the following holds: suppose that the subroutine, on input a lattice L and a target vector w (not necessarily in the lattice), outputs v ∈ L such that ‖v − w‖ ≤ f(n)·‖u − w‖ for any u ∈ L. Then our algorithm, on input a lattice L, outputs a nonzero vector v ∈ L such that ‖v‖ ≤ f(n)·‖u‖ for any nonzero vector u ∈ L. The result holds for any norm and preserves the dimension of the lattice, i.e., the closest vector oracle is called on lattices of exactly the same dimension as the original shortest vector problem. This result establishes the widely believed conjecture that the shortest vector problem is not harder than the closest vector problem. The proof can be easily adapted to establish an analogous result for the corresponding computational problems for linear codes. Key words: computational problems in integer lattices, reducibility among approximation problems, linear error-correcting codes. Partially supported by DARPA contract DABT63-96-C-0018. Preprint submitted to Elsevier, 6 July 1999.
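The shape of such a reduction can be sketched concretely (our own toy illustration in the Euclidean norm, with a brute-force exact CVP "oracle" standing in for the approximate subroutine): for each basis vector b_j, double b_j to obtain a sublattice L_j ⊂ L, ask for the vector of L_j closest to the target b_j, and take the difference, which is a nonzero vector of L; minimizing over j yields a shortest vector when the oracle is exact.

```python
from itertools import product

def cvp_bruteforce(basis, target, R=5):
    """Exact CVP 'oracle' by exhaustive search over coefficients in [-R, R]."""
    best, best_d2 = None, None
    for coeffs in product(range(-R, R + 1), repeat=len(basis)):
        v = tuple(sum(c * b[i] for c, b in zip(coeffs, basis))
                  for i in range(len(target)))
        d2 = sum((vi - ti) ** 2 for vi, ti in zip(v, target))
        if best_d2 is None or d2 < best_d2:
            best, best_d2 = v, d2
    return best

def svp_via_cvp(basis):
    """Reduce SVP to n CVP calls: double b_j, query the vector closest to b_j."""
    n = len(basis)
    best, best_n2 = None, None
    for j in range(n):
        doubled = [tuple(2 * x for x in b) if i == j else b
                   for i, b in enumerate(basis)]
        w = cvp_bruteforce(doubled, basis[j])
        # w is in L_j (even j-th coefficient), so w - b_j is a nonzero vector of L.
        cand = tuple(wi - bi for wi, bi in zip(w, basis[j]))
        n2 = sum(x * x for x in cand)
        if best_n2 is None or n2 < best_n2:
            best, best_n2 = cand, n2
    return best, best_n2

v, n2 = svp_via_cvp([(2, 0), (1, 2)])
print(v, n2)  # a shortest vector of the lattice; squared norm 4
```

The correctness hinges on the observation that a shortest vector v = Σ c_i b_i must have some odd coefficient c_j (otherwise v/2 would be a shorter lattice vector), so b_j − v lies in L_j at distance ‖v‖ from the target b_j.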