Results 1–10 of 11
Zero-Knowledge Sets
, 2003
Abstract

Cited by 47 (0 self)
We show how a polynomial-time prover can commit to an arbitrary finite set S of strings so that, later on, he can, for any string x, reveal with a proof whether x ∈ S or x ∉ S, without revealing any knowledge beyond the verity of these membership assertions. Our method is non-interactive. Given a public random string, the prover commits to a set by simply posting a short and easily computable message. After that, each time it wants to prove whether a given element is in the set, it simply posts another short and easily computable proof, whose correctness can be verified by anyone against the public random string. Our scheme is very efficient; no reasonable prior way to achieve our desiderata existed. Our new primitive immediately extends to providing zero-knowledge “databases.”
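The commit-and-prove interface described in this abstract can be illustrated with a Merkle-tree set commitment. This is a hypothetical sketch of the interface shape only, not the paper's construction: a plain Merkle tree gives short membership proofs but is not zero-knowledge (a proof leaks hashes of neighboring elements), and non-membership proofs would additionally require exhibiting two adjacent sorted leaves bracketing the queried string. All function names here are ours.

```python
import hashlib

def H(*parts):
    """Hash a sequence of byte/str parts with SHA-256."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p if isinstance(p, bytes) else p.encode())
    return h.digest()

def merkle_commit(items):
    """Commit to a set: return (short root commitment, full tree)."""
    level = [H("leaf:", x) for x in sorted(items)]
    tree = [level]
    while len(level) > 1:
        if len(level) % 2:                     # pad odd levels by repeating last node
            level = level + [level[-1]]
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree

def prove_membership(tree, items, x):
    """Produce a short sibling path showing x is among the committed leaves."""
    idx = sorted(items).index(x)
    path = []
    for level in tree[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[idx ^ 1], idx % 2))  # (sibling hash, am-I-right-child?)
        idx //= 2
    return path

def verify(root, x, path):
    """Check a membership proof against the short commitment only."""
    node = H("leaf:", x)
    for sib, is_right in path:
        node = H(sib, node) if is_right else H(node, sib)
    return node == root

root, tree = merkle_commit(["apple", "banana", "cherry"])
path = prove_membership(tree, ["apple", "banana", "cherry"], "banana")
print(verify(root, "banana", path))  # True
```

The commitment and each proof are short (one hash, and a logarithmic sibling path), matching the "short and easily computable message" interface, though achieving zero-knowledge requires the paper's dedicated construction.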
Improved Non-Committing Encryption with Applications to Adaptively Secure Protocols
Abstract

Cited by 11 (3 self)
Abstract. We present a new construction of non-committing encryption schemes. Unlike the previous constructions of Canetti et al. (STOC ’96) and of Damgård and Nielsen (Crypto ’00), our construction achieves all of the following properties: – Optimal round complexity. Our encryption scheme is a 2-round protocol, matching the round complexity of Canetti et al. and improving upon that in Damgård and Nielsen. – Weaker assumptions. Our construction is based on trapdoor simulatable cryptosystems, a new primitive that we introduce as a relaxation of those used in previous works. We also show how to realize this primitive based on hardness of factoring. – Improved efficiency. The amortized complexity of encrypting a single bit is O(1) public key operations on a constant-sized plaintext in the underlying cryptosystem. As a result, we obtain the first non-committing public-key encryption schemes under hardness of factoring and worst-case lattice assumptions; previously, such schemes were only known under the CDH and RSA assumptions. Combined with existing work on secure multiparty computation, we obtain protocols for multiparty computation secure against a malicious adversary that may adaptively corrupt an arbitrary number of parties under weaker assumptions than were previously known. Specifically, we obtain the first adaptively secure multiparty protocols based on hardness of factoring in both the stand-alone setting and the UC setting with a common reference string. Key words: public-key encryption, adaptive corruption, non-committing encryption, secure multiparty computation.
Cryptography meets voting
, 2005
Abstract

Cited by 5 (0 self)
We survey the contributions of the entire theoretical computer science/cryptography community during 1975–2002 that impact the question of how to run verifiable elections with secret ballots. The approach based on homomorphic encryptions is the most successful; one such scheme is sketched in detail and argued to be feasible to implement. It is explained precisely what these ideas accomplish but also what they do not accomplish, and a brief history of election fraud is included.
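The homomorphic-encryption approach the survey highlights rests on one algebraic fact: multiplying ciphertexts adds the encrypted votes, so a tally can be computed without decrypting any individual ballot. The toy "exponential ElGamal" sketch below illustrates only that fact; the parameters are deliberately small and insecure, and a real voting scheme additionally needs vetted groups, threshold decryption, and zero-knowledge proofs that each ballot encrypts 0 or 1. All names here are ours.

```python
import random

# Toy exponential-ElGamal tally (illustration only; NOT secure parameters).
p = 2**61 - 1          # a Mersenne prime standing in for a proper group modulus
g = 7                  # arbitrary group element used as the base

sk = random.randrange(2, p - 1)   # election secret key
h = pow(g, sk, p)                 # election public key

def encrypt(vote):
    """Encrypt a 0/1 vote 'in the exponent': (g^r, g^vote * h^r) mod p."""
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (pow(g, vote, p) * pow(h, r, p)) % p)

def add(c1, c2):
    """Componentwise product of ciphertexts adds the underlying votes."""
    return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)

def decrypt_tally(c, max_votes):
    """Recover g^T with the secret key, then brute-force the small exponent T."""
    gT = (c[1] * pow(c[0], p - 1 - sk, p)) % p   # c1^(p-1-sk) = c1^(-sk) mod p
    for T in range(max_votes + 1):
        if pow(g, T, p) == gT:
            return T
    raise ValueError("tally out of range")

votes = [1, 0, 1, 1, 0]
total = encrypt(votes[0])
for v in votes[1:]:
    total = add(total, encrypt(v))
print(decrypt_tally(total, len(votes)))  # 3
```

The brute-force discrete log at the end is cheap because the tally is bounded by the number of voters, which is why encoding votes "in the exponent" is standard in this line of work.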
Time Hierarchies for Sampling Distributions
, 2012
Abstract

Cited by 1 (1 self)
We prove that for every constant k ≥ 2, every polynomial time bound t, and every polynomially small ε, there exists a family of distributions on k elements that can be sampled exactly in polynomial time but cannot be sampled within statistical distance 1 − 1/k − ε in time t. Our proof involves reducing the problem to a communication problem over a certain type of noisy channel. We solve the latter problem by giving a construction of a new type of list-decodable code, for a setting where there is no bound on the number of errors but each error gives more information than an erasure.
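The distance measure in this theorem is total variation (statistical) distance, and the bound 1 − 1/k is the natural threshold: deterministically outputting a single element already gets within 1 − 1/k of the uniform distribution on k elements. A small sketch (function name ours) makes the quantity concrete:

```python
from fractions import Fraction

def statistical_distance(P, Q):
    """Total variation distance: half the L1 distance between two
    distributions, given as dicts mapping outcome -> probability."""
    support = set(P) | set(Q)
    return sum(abs(Fraction(P.get(x, 0)) - Fraction(Q.get(x, 0)))
               for x in support) / 2

# Uniform on k = 2 elements vs. a sampler stuck on one element:
# the distance is 1/2 = 1 - 1/k, exactly the theorem's threshold.
uniform = {0: Fraction(1, 2), 1: Fraction(1, 2)}
point = {0: Fraction(1)}
print(statistical_distance(uniform, point))  # 1/2
```

So the theorem says the hard distributions cannot even be approximated noticeably better than this trivial one-point strategy by any time-t sampler.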
The Computational Complexity of Randomness
, 2013
Abstract
This dissertation explores the multifaceted interplay between efficient computation and probability distributions. We organize the aspects of this interplay according to whether the randomness occurs primarily at the level of the problem or the level of the algorithm, and orthogonally according to whether the output is random or the input is random. Part I concerns settings where the problem’s output is random. A sampling problem associates to each input x a probability distribution D(x), and the goal is to output a sample from D(x) (or at least get statistically close) when given x. Although sampling algorithms are fundamental tools in statistical physics, combinatorial optimization, and cryptography, and algorithms for a wide variety of sampling problems have been discovered, there has been comparatively little research viewing sampling through the lens of computational complexity. We contribute to the understanding of the power and limitations of efficient sampling by proving a time hierarchy theorem which shows, roughly, that “a little more time gives a lot more power to sampling algorithms.” Part II concerns settings where the algorithm’s output is random. Even when the specification of a computational problem involves no randomness, one can still consider randomized ...
Open Problems on Exponential and Character Sums
, 2010
Abstract
This is a collection of mostly unrelated open questions, at various levels of difficulty, related to exponential and multiplicative character sums. One may certainly notice a large proportion of self-references in the bibliography. By no means should this be considered as an indication of anything else than ...
Generating Random Factored Gaussian Integers, Easily
, 2013
Abstract
We introduce an algorithm to generate a random Gaussian integer with the uniform distribution among those with norm at most N, along with its prime factorization. Then, we show that the algorithm runs in polynomial time. The hard part of this algorithm is determining a norm at random with a specific distribution. After that, finding the actual Gaussian integer is easy. We also consider the analogous problem for Eisenstein integers and quadratic integer rings.

1 Generating Random Factored Numbers, Easily

Consider the following problem: Given a positive integer N, generate a random integer less than or equal to N with uniform distribution, along with its factorization, in polynomial time. (In this context, polynomial time refers to a polynomial in the number of digits of N, not the size of N. So, the running time of our algorithm should be O(log^k N), for some real k.) At first glance, this seems very simple. Simply choose a random integer in the range [1, N] and factor it. However, there are no known polynomial time factorization algorithms. But, ...
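The rational-integer version of this problem is solved by Kalai's elegant algorithm (from the work this paper builds on): draw a non-increasing random sequence N ≥ s₁ ≥ s₂ ≥ … down to 1, multiply together the sᵢ that happen to be prime, and accept the product r ≤ N with probability r/N. A minimal sketch, assuming a naive trial-division primality test that is adequate only for small N:

```python
import random

def is_prime(n):
    """Naive trial-division primality test; fine for the small N in this sketch."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def random_factored(N):
    """Kalai's algorithm: a uniform random integer in [1, N] together with
    its prime factorization (as a sorted list of primes with multiplicity)."""
    while True:
        seq, s = [], N
        while s > 1:
            s = random.randint(1, s)   # next term is uniform in [1, previous term]
            seq.append(s)
        primes = [x for x in seq if is_prime(x)]
        r = 1
        for q in primes:
            r *= q
        if r <= N and random.randint(1, N) <= r:  # accept with probability r / N
            return r, sorted(primes)

n, factors = random_factored(100)
print(n, factors)
```

The rejection step is what makes the output exactly uniform: each candidate r appears with probability proportional to 1/r before rejection, and accepting with probability r/N flattens this to uniform on [1, N].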
unknown title
Abstract
Abstract — The number field sieve for factoring 1024-bit RSA keys has many steps involved, one of them being the ‘relation collection’ step. This step consists of two parts. The first part is called Sieving and is used to generate smooth random numbers (numbers whose factors are less than a given boundary), and these smooth numbers are factored in the second part by special factoring methods like ECM, p−1, and rho. The Sieving process involves high complexity in generating smooth numbers, so there is a need for a simple mechanism that would generate large smooth numbers, so that these numbers can be used as test vectors for factorization methods like ECM, p−1, and rho. In this project our aim is to generate smooth numbers so that they can be used as test vectors. We used Eric Bach’s and Adam Kalai’s algorithms, which were developed for generating factored random numbers, and made modifications to these algorithms in order to generate smooth numbers. In this paper we describe these algorithms, the modifications we made to them, and how we implemented them. We also compare these algorithms and come to a conclusion as to which method is faster in generating smooth factored random numbers.
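The smoothness notion used above is standard: an integer is B-smooth if all of its prime factors are at most the bound B. A minimal trial-division check (function names ours) shows the filtering that such test-vector generation relies on:

```python
def smooth_part(n, B):
    """Divide all prime factors <= B out of n by trial division.
    Returns (factors, remainder); n is B-smooth iff remainder == 1."""
    factors, d = [], 2
    while d <= B and d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if 1 < n <= B:        # leftover cofactor is itself a prime <= B
        factors.append(n)
        n = 1
    return factors, n

def is_smooth(n, B):
    """True iff every prime factor of n is at most B."""
    return smooth_part(n, B)[1] == 1

print(is_smooth(2 * 3 * 3 * 7, 7))   # True:  126 is 7-smooth
print(is_smooth(2 * 101, 7))         # False: 202 has the large factor 101
```

A generator in the spirit of the paper would combine a check like this with a Bach- or Kalai-style random factored-number generator, keeping only the draws whose factorizations stay below the bound B.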