Results 1–10 of 569
Worst-case equilibria
 In Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science
, 1999
Abstract

Cited by 626 (18 self)
In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a very simple network leads to some interesting mathematics, results, and open problems.
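The ratio the abstract proposes can be worked out by hand for the simplest instance of the model: two identical links and two unit-weight tasks. A minimal sketch, assuming the fully mixed equilibrium in which each agent picks a link uniformly at random (the instance and variable names are illustrative, not from the paper):

```python
from itertools import product

# Two identical links, two unit-weight tasks. In the fully mixed Nash
# equilibrium each agent picks a link uniformly at random; the social cost
# is the expected makespan (maximum link load).
links, tasks = 2, 2
expected_makespan = 0.0
for assignment in product(range(links), repeat=tasks):
    prob = (1 / links) ** tasks                      # each assignment equally likely
    loads = [assignment.count(l) for l in range(links)]
    expected_makespan += prob * max(loads)

social_optimum = 1.0   # one task per link
ratio = expected_makespan / social_optimum           # 1.5 for this instance
```

The ratio of 3/2 for two links matches the bound the paper derives for this case; larger numbers of links lead to the harder bounds the abstract alludes to.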
Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator
Fully homomorphic encryption using ideal lattices
 In Proc. STOC
, 2009
Abstract

Cited by 279 (11 self)
We propose a fully homomorphic encryption scheme – i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result – that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable. Next, we describe a public key encryption scheme using ideal lattices that is almost bootstrappable. Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also, ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits. Unfortunately, our initial scheme is not quite bootstrappable – i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.
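Gentry's ideal-lattice construction is far too involved to reproduce here, but the additive and multiplicative homomorphisms on noisy ciphertexts that the abstract relies on can be illustrated with a deliberately insecure integer-based toy in the spirit of later somewhat-homomorphic schemes. The scheme, parameters, and noise bounds below are illustrative assumptions, not the paper's construction:

```python
import random

# Insecure toy: a ciphertext is c = m + 2r + p*q for a secret odd p and small
# noise r. Decryption is (c mod p) mod 2. Adding/multiplying ciphertexts
# adds/multiplies the hidden bits, as long as the accumulated noise stays
# well below p (here noise <= 11 per ciphertext, so one product is safe).
p = 10007   # secret key (tiny here; real noise-based schemes use huge keys)

def encrypt(m, p):
    r = random.randint(1, 5)        # small noise
    q = random.randint(100, 1000)   # random multiple of the key
    return m + 2 * r + p * q

def decrypt(c, p):
    return (c % p) % 2

c0, c1 = encrypt(0, p), encrypt(1, p)
xor_bit = decrypt(c0 + c1, p)       # homomorphic XOR of 0 and 1 -> 1
and_bit = decrypt(c0 * c1, p)       # homomorphic AND of 0 and 1 -> 0
```

Each operation grows the noise, which is exactly why the paper needs bootstrapping: once the noise nears the key, the ciphertext must be "refreshed" by homomorphically evaluating the decryption circuit itself.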
The Decision Diffie-Hellman Problem
, 1998
Abstract

Cited by 197 (6 self)
The Decision Diffie-Hellman assumption (ddh) is a gold mine. It enables one to construct efficient cryptographic systems with strong security properties. In this paper we survey the recent applications of DDH as well as known results regarding its security. We describe some open problems in this area.

1 Introduction

An important goal of cryptography is to pin down the exact complexity assumptions used by cryptographic protocols. Consider the Diffie-Hellman key exchange protocol [12]: Alice and Bob fix a finite cyclic group G and a generator g. They respectively pick random a, b ∈ [1, |G|] and exchange g^a, g^b. The secret key is g^ab. To totally break the protocol a passive eavesdropper, Eve, must compute the Diffie-Hellman function defined as dh_g(g^a, g^b) = g^ab. We say that the group G satisfies the Computational Diffie-Hellman assumption (cdh) if no efficient algorithm can compute the function dh_g(x, y) in G. Precise definitions are given in the next section...
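The key exchange described in the introduction is short enough to sketch directly. The toy group parameters below are illustrative only; real deployments use groups of cryptographic size:

```python
import random

# Textbook Diffie-Hellman key exchange over Z_p^* (toy parameters).
p = 23   # modulus defining the group (far too small for real use)
g = 5    # generator

a = random.randint(1, p - 2)   # Alice's secret exponent
b = random.randint(1, p - 2)   # Bob's secret exponent

A = pow(g, a, p)               # Alice sends g^a
B = pow(g, b, p)               # Bob sends g^b

key_alice = pow(B, a, p)       # (g^b)^a = g^ab
key_bob = pow(A, b, p)         # (g^a)^b = g^ab
```

Eve sees only g^a and g^b; CDH says she cannot compute g^ab, and the stronger DDH assumption surveyed in the paper says she cannot even distinguish g^ab from a random group element.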
Closest Point Search in Lattices
 IEEE Trans. Inform. Theory
, 2000
Abstract

Cited by 197 (1 self)
In this semi-tutorial paper, a comprehensive survey of closest-point search methods for lattices without a regular structure is presented. The existing search strategies are described in a unified framework, and differences between them are elucidated. An efficient closest-point search algorithm, based on the Schnorr-Euchner variation of the Pohst method, is implemented. Given an arbitrary point x ∈ R^m and a generator matrix for a lattice Λ, the algorithm computes the point of Λ that is closest to x. The algorithm is shown to be substantially faster than other known methods, by means of a theoretical comparison with the Kannan algorithm and an experimental comparison with the Pohst algorithm and its variants, such as the recent Viterbo-Boutros decoder. The improvement increases with the dimension of the lattice. Modifications of the algorithm are developed to solve a number of related search problems for lattices, such as finding a shortest vector, determining the kissing number, compu...
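For intuition about the problem statement (not the Schnorr-Euchner algorithm itself, which prunes the search far more cleverly), a naive closest-point search over a bounded box of coefficient vectors might look as follows. The 2-dimensional basis and query point are made up for illustration:

```python
import itertools

# Brute-force closest-point search: enumerate integer coefficient vectors u
# in a fixed box and keep the lattice point u*B nearest to x (rows of B are
# the basis vectors). Exponential in the dimension; for the baseline only.
B = [[2.0, 0.0], [1.0, 2.0]]   # generator matrix of the lattice (rows)
x = [3.3, 1.1]                 # query point

def closest_point(B, x, bound=5):
    best, best_d2 = None, float("inf")
    for u in itertools.product(range(-bound, bound + 1), repeat=len(B)):
        p = [sum(u[i] * B[i][j] for i in range(len(B))) for j in range(len(B[0]))]
        d2 = sum((pi - xi) ** 2 for pi, xi in zip(p, x))
        if d2 < best_d2:
            best, best_d2 = p, d2
    return best, best_d2

point, dist2 = closest_point(B, x)   # -> [3.0, 2.0] at squared distance 0.9
```

Sphere-decoding methods such as Pohst and Schnorr-Euchner enumerate only the coefficients whose partial distance already fits inside a shrinking search sphere, which is what makes them practical in higher dimensions.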
On Lattices, Learning with Errors, Random Linear Codes, and Cryptography
 In STOC
, 2005
Abstract

Cited by 197 (3 self)
Our main result is a reduction from worst-case lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the ‘learning from parity with error’ problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for SVP and SIVP. A main open question is whether this reduction can be made classical. We also present a (classical) public-key cryptosystem whose security is based on the hardness of the learning problem. By the main result, its security is also based on the worst-case quantum hardness of SVP and SIVP. Previous lattice-based public-key cryptosystems such as the one by Ajtai and Dwork were based only on unique-SVP, a special case of SVP. The new cryptosystem is much more efficient than previous cryptosystems: the public key is of size Õ(n^2) and encrypting a message increases its size by a factor of Õ(n) (in previous cryptosystems these values are Õ(n^4) and Õ(n^2), respectively). In fact, under the assumption that all parties share a random bit string of length Õ(n^2), the size of the public key can be reduced to Õ(n).
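A heavily simplified sketch of the kind of public-key scheme the abstract describes, with toy, deliberately insecure parameters: the public key is a list of noisy inner products, and a bit is encrypted by summing a random subset of them. The structure follows the standard description of the cryptosystem, but every constant here is an illustrative assumption:

```python
import random

# Toy LWE-style encryption. Public key: pairs (a_i, <a_i, s> + e_i mod q)
# with small errors e_i. Encrypt bit m: sum a random subset of pairs and add
# m * floor(q/2). Decrypt: subtract <a, s>; near 0 -> 0, near q/2 -> 1.
n, q, n_samples = 8, 257, 30
s = [random.randrange(q) for _ in range(n)]        # secret vector

pub = []
for _ in range(n_samples):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)                      # small error
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    pub.append((a, b))

def encrypt(bit):
    a_sum, b_sum = [0] * n, 0
    for a, b in pub:
        if random.random() < 0.5:                  # random subset
            a_sum = [(x + y) % q for x, y in zip(a_sum, a)]
            b_sum = (b_sum + b) % q
    return a_sum, (b_sum + bit * (q // 2)) % q

def decrypt(ct):
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 0 if min(d, q - d) < q // 4 else 1      # distance to 0 vs q/2
```

With these parameters the accumulated error is at most 30 · 2 = 60 < q/4 = 64, so decryption is always correct; real parameter choices balance the same noise budget against security.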
The NP-completeness column: an ongoing guide
 Journal of Algorithms
, 1985
Abstract

Cited by 189 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book “Computers and Intractability: A Guide to the Theory of NP-Completeness,” W. H. Freeman & Co., New York, 1979 (hereinafter referred to as “[G&J]”; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized, should
On Maximum-Likelihood Detection and the Search for the Closest Lattice Point
 IEEE Trans. Inform. Theory
, 2003
Abstract

Cited by 155 (3 self)
Maximum-likelihood (ML) decoding algorithms for Gaussian multiple-input multiple-output (MIMO) linear channels are considered. Linearity over the field of real numbers facilitates the design of ML decoders using number-theoretic tools for searching the closest lattice point. These decoders are collectively referred to as sphere decoders in the literature. In this paper, a fresh look at this class of decoding algorithms is taken. In particular, two novel algorithms are developed. The first algorithm is inspired by the Pohst enumeration strategy and is shown to offer a significant reduction in complexity compared to the Viterbo-Boutros sphere decoder. The connection between the proposed algorithm and the stack sequential decoding algorithm is then established. This connection is utilized to construct the second algorithm which can also be viewed as an application of the Schnorr-Euchner strategy to ML decoding. Aided with a detailed study of preprocessing algorithms, a variant of the second algorithm is developed and shown to offer significant reductions in the computational complexity compared to all previously proposed sphere decoders with a near-ML detection performance. This claim is supported by intuitive arguments and simulation results in many relevant scenarios.
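The baseline that sphere decoders accelerate is exhaustive ML detection: minimize ||y − Hx||² over all candidate symbol vectors. A brute-force sketch for a toy 2×2 real channel with BPSK symbols (the channel matrix and received vector are made-up values, not from the paper):

```python
import itertools

# Exhaustive ML detection for y = Hx + n with BPSK symbols x in {-1, +1}^2:
# try every candidate and keep the one minimizing the squared residual.
# Complexity is |alphabet|^n; sphere decoders reach the same answer while
# visiting only lattice points inside a shrinking sphere.
H = [[1.0, 0.3], [0.2, 1.0]]   # channel matrix (illustrative)
y = [1.4, -0.7]                # received vector (illustrative)

def ml_detect(H, y, alphabet=(-1, 1)):
    best, best_d2 = None, float("inf")
    for x in itertools.product(alphabet, repeat=len(H[0])):
        Hx = [sum(H[i][j] * x[j] for j in range(len(x))) for i in range(len(H))]
        d2 = sum((yi - hi) ** 2 for yi, hi in zip(y, Hx))
        if d2 < best_d2:
            best, best_d2 = x, d2
    return best

x_hat = ml_detect(H, y)        # -> (1, -1) for these values
```

The paper's contribution is making this minimization tractable at larger dimensions and constellation sizes via Pohst- and Schnorr-Euchner-style enumeration with careful preprocessing.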
Public-Key Cryptosystems from Lattice Reduction Problems
, 1996
Abstract

Cited by 122 (5 self)
We present a new proposal for a trapdoor one-way function, from which we derive public-key encryption and digital signatures. The security of the new construction is based on the conjectured computational difficulty of lattice-reduction problems, providing a possible alternative to existing public-key encryption algorithms and digital signatures such as RSA and DSS.
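The trapdoor idea can be sketched in two dimensions: the private key is a near-orthogonal "good" basis, the public key is a skewed basis of the same lattice, and decryption rounds a small error away using the good basis. A GGH-style toy with made-up integer matrices, not the paper's parameters or exact procedure:

```python
# Toy GGH-style trapdoor in 2D (row-vector convention: rows are basis vectors).
R = [[5, 1], [1, 6]]   # private "good" basis: short, near-orthogonal rows
U = [[2, 1], [1, 1]]   # unimodular transform, det(U) = 1
# Public "bad" basis of the same lattice: B = U R.
B = [[U[i][0]*R[0][j] + U[i][1]*R[1][j] for j in range(2)] for i in range(2)]

m = [2, -1]            # message encoded as coefficients in the public basis
e = [0.3, -0.4]        # small error vector
c = [m[0]*B[0][j] + m[1]*B[1][j] + e[j] for j in range(2)]   # ciphertext

# Private decryption (Babai rounding): write c in the good basis, t = c R^{-1},
# and round t to the nearest integers; the small error vanishes.
det = R[0][0]*R[1][1] - R[0][1]*R[1][0]
Rinv = [[ R[1][1]/det, -R[0][1]/det],
        [-R[1][0]/det,  R[0][0]/det]]
t = [round(c[0]*Rinv[0][j] + c[1]*Rinv[1][j]) for j in range(2)]

# Convert back to public-basis coefficients: m = t U^{-1} (det(U) = 1).
Uinv = [[ U[1][1], -U[0][1]],
        [-U[1][0],  U[0][0]]]
m_rec = [t[0]*Uinv[0][j] + t[1]*Uinv[1][j] for j in range(2)]   # recovers m
```

An attacker holding only the skewed basis B gets the wrong lattice point when rounding, which is exactly the lattice-reduction hardness the abstract conjectures.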
Solving low-density subset sum problems
 In Proceedings of the 24th Annual Symposium on Foundations of Computer Science
, 1983
Abstract

Cited by 107 (5 self)
The subset sum problem is to decide whether or not the 0-1 integer programming problem a_1 x_1 + a_2 x_2 + ... + a_n x_n = M, with x_i = 0 or 1 for all i, has a solution, where the a_i and M are given positive integers. This problem is NP-complete, and the difficulty of solving it is the basis of public-key cryptosystems of knapsack type. An algorithm is proposed that searches for a solution when given an instance of the subset sum problem. This algorithm always halts in polynomial time but does not always find a solution when one exists. It converts the problem to one of finding a particular short vector v in a lattice, and then uses a lattice basis reduction algorithm due to A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovász to attempt to find v. The performance of the proposed algorithm is analyzed. Let the density d of a subset sum problem be defined by d = n / log_2(max_i a_i). Then for “almost all” problems of density d < 0.645, the vector v we searched for is the shortest nonzero vector in the lattice. For “almost all” problems of density d < 1/n it is proved that the lattice basis reduction algorithm locates v. Extensive computational tests of the algorithm suggest that it works for densities d < d_c(n), where d_c(n) is a cutoff value that is substantially larger than 1/n. This method gives a polynomial time attack on knapsack public-key cryptosystems that can be expected to break them if they transmit information at rates below d_c(n), as n → ∞.
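The conversion to a short-vector problem can be shown concretely: embed the instance (a_i, M) into a lattice basis so that a solution x yields a short vector whose last coordinate is zero. The toy instance below is made up, and the basis shape is one standard presentation of the reduction (the LLL step itself is omitted):

```python
# Lagarias-Odlyzko-style embedding of a subset sum instance into a lattice.
# Basis rows: b_i = (e_i, N*a_i) for each weight a_i, plus b_last = (0,...,0, N*M).
# For a solution x, v = sum_i x_i b_i - b_last = (x_1,...,x_n, 0): short,
# since ||v|| <= sqrt(n), while non-solutions pay a penalty of N in the last
# coordinate. Lattice reduction (e.g. LLL) then hunts for such a vector.
a = [3, 7, 12, 31]      # knapsack weights (toy instance)
x = [1, 0, 1, 1]        # hidden solution, used here only to verify the claim
M = sum(ai * xi for ai, xi in zip(a, x))   # target sum
N = 100                 # scaling factor penalizing a nonzero last coordinate
n = len(a)

basis = [[1 if j == i else 0 for j in range(n)] + [N * a[i]] for i in range(n)]
basis.append([0] * n + [N * M])

v = [sum(x[i] * basis[i][j] for i in range(n)) - basis[n][j]
     for j in range(n + 1)]
norm2 = sum(c * c for c in v)   # squared length of the solution vector
```

Running LLL on `basis` would, for low-density instances, recover v (and hence x) without knowing the solution in advance; that is the attack the abstract analyzes.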