Results 1 – 10 of 25
Cryptographic Limitations on Learning Boolean Formulae and Finite Automata
 PROCEEDINGS OF THE TWENTY-FIRST ANNUAL ACM SYMPOSIUM ON THEORY OF COMPUTING
, 1989
Abstract

Cited by 343 (15 self)
In this paper we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntactic form in which the learner chooses to represent its hypotheses. Our methods reduce the problems of cracking a number of well-known public-key cryptosystems to the learning problems. We prove that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory: in particular, such an algorithm could be used to break the RSA cryptosystem, factor Blum integers (composite numbers congruent to 3 modulo 4), and detect quadratic residues. The results hold even if the learning algorithm is only required to obtain a slight advantage in prediction over random guessing. The techniques used demonstrate an interesting duality between learning and cryptography. We also apply our results to obtain strong intractability results for approximating a generalization of graph coloring.
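For readers unfamiliar with the number-theoretic objects invoked here, a short sketch with deliberately tiny numbers (my illustration, not the paper's construction): a Blum integer is a product n = p·q with p ≡ q ≡ 3 (mod 4), and deciding quadratic residuosity modulo such an n is believed hard without the factors, whereas modulo a prime it is easy via Euler's criterion.

```python
# Illustrative sketch only: Euler's criterion makes quadratic residuosity
# easy modulo a prime; no comparably easy test is known modulo a Blum
# integer n = p*q (p, q prime, both 3 mod 4) without knowing p and q.

def is_blum_integer(p: int, q: int) -> bool:
    # Assumes p and q are prime; checks only the 3-mod-4 condition.
    return p % 4 == 3 and q % 4 == 3

def is_qr_mod_prime(a: int, p: int) -> bool:
    # Euler's criterion: a is a quadratic residue mod an odd prime p
    # iff a^((p-1)/2) == 1 (mod p).
    return pow(a, (p - 1) // 2, p) == 1

p, q = 7, 11                      # both 3 mod 4, so n = 77 is a Blum integer
assert is_blum_integer(p, q)
assert is_qr_mod_prime(2, 7)      # 2 = 3^2 mod 7, so 2 is a residue mod 7
assert not is_qr_mod_prime(3, 7)  # 3 is a non-residue mod 7
```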
Proving in Zero-Knowledge that a Number is the Product of Two Safe Primes
, 1998
Abstract

Cited by 144 (13 self)
This paper presents the first efficient statistical zero-knowledge protocols to prove statements such as: A committed number is a pseudoprime.
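A note on the terminology, separate from the paper's protocol (my illustration): a Fermat pseudoprime to base a is a composite n that nevertheless satisfies a^(n-1) ≡ 1 (mod n); 341 = 11 · 31 is the classic base-2 example.

```python
def fermat_test(n: int, a: int = 2) -> bool:
    # Passes iff a^(n-1) == 1 (mod n); primes always pass, but some
    # composites (the pseudoprimes to base a) pass as well.
    return pow(a, n - 1, n) == 1

assert fermat_test(341) and 341 == 11 * 31  # composite, yet passes base 2
assert not fermat_test(15)                  # 2^14 mod 15 = 4, so 15 fails
```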
Equivalence Between Two Flavours of Oblivious Transfers
, 1988
Abstract

Cited by 85 (5 self)
This paper presents a proof that these two notions are computationally equivalent. Essentially, we show a protocol for "one-out-of-two oblivious transfer", based on the existence of a protocol for the oblivious transfer problem. The reduction presented does not depend on any cryptographic assumption and works independently of the implementation of O.T. The implications of this reduction are: there exists a protocol for ANDOS [BCR] if and only if there exists a protocol for O.T.; the completeness theorem of [GMW] can be based on the existence of O.T.
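To fix the interface of the one-out-of-two primitive, here is a toy Diffie-Hellman-based instantiation in the style of later constructions (my illustration; this is emphatically not the paper's reduction from plain O.T., and the parameters are not secure): the sender holds (m0, m1), the receiver learns m_b and nothing about the other message, and the sender learns nothing about b.

```python
import hashlib, secrets

P = 2**127 - 1           # toy modulus (a Mersenne prime; not a safe choice)
G = 3

def h(x: int) -> bytes:
    return hashlib.sha256(str(x).encode()).digest()

def xor(k: bytes, m: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(k, m))

# Sender's first message
a = secrets.randbelow(P - 2) + 1
A = pow(G, a, P)

# Receiver's reply encodes its choice bit b without revealing it
b = 1
r = secrets.randbelow(P - 2) + 1
B = pow(G, r, P) if b == 0 else (A * pow(G, r, P)) % P

# Sender derives one key per message and sends both ciphertexts
k0 = h(pow(B, a, P))
k1 = h(pow(B * pow(A, -1, P) % P, a, P))
m0, m1 = b"left msg 32 bytes padded.......!", b"right msg 32 bytes padded......!"
c0, c1 = xor(k0, m0), xor(k1, m1)

# Receiver can compute only the key matching its choice
kb = h(pow(A, r, P))
recovered = xor(kb, c1 if b == 1 else c0)
assert recovered == (m1 if b == 1 else m0)
```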
A Quantum Bit Commitment Scheme Provably Unbreakable by both Parties
, 1993
Abstract

Cited by 78 (13 self)
Assume that a party, Alice, has a bit x in mind, to which she would like to be committed toward another party, Bob. That is, Alice wishes, through a procedure commit(x), to provide Bob with a piece of evidence that she has a bit x in mind and that she cannot change it. Meanwhile, Bob should not be able to tell from that evidence what x is. At a later time, Alice can reveal, through a procedure unveil(x), the value of x and prove to Bob that the piece of evidence sent earlier really corresponded to that bit. Classical bit commitment schemes (by which Alice's piece of evidence is classical information such as a bit string) cannot be secure against unlimited computing power and none have been proven secure against algorithmic sophistication. Previous quantum bit commitment schemes (by which Alice's piece of evidence is quantum information such as a stream of polarized photons) were known to be invulnerable to unlimited computing power and algorithmic sophistication, but not to arbitrary...
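The commit/unveil interface described above can be sketched classically with a hash function (my illustration; this is exactly the kind of computationally-bounded scheme the paper contrasts with, not its quantum protocol):

```python
import hashlib, secrets

def commit(x: int) -> tuple[bytes, bytes]:
    # Evidence = H(x || r) for fresh randomness r: Bob learns nothing
    # about x (hiding), and Alice is stuck with x unless she can find
    # a hash collision (binding) -- both only against bounded computation.
    r = secrets.token_bytes(32)
    return hashlib.sha256(bytes([x]) + r).digest(), r

def unveil(evidence: bytes, x: int, r: bytes) -> bool:
    return hashlib.sha256(bytes([x]) + r).digest() == evidence

c, r = commit(1)
assert unveil(c, 1, r)       # honest opening verifies
assert not unveil(c, 0, r)   # Alice cannot claim the other bit
```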
New Public-Key Schemes Based on Elliptic Curves over the Ring Z_n
, 1991
Abstract

Cited by 48 (0 self)
Three new trapdoor one-way functions are proposed that are based on elliptic curves over the ring Z_n. The first class of functions is a naive construction, which can be used only in a digital signature scheme, and not in a public-key cryptosystem. The second, preferred class of functions does not suffer from this problem and can be used for the same applications as the RSA trapdoor one-way function, including zero-knowledge identification protocols. The third class of functions has properties similar to the Rabin trapdoor one-way functions. Although the security of these proposed schemes is based on the difficulty of factoring n, like the RSA and Rabin schemes, these schemes seem to be more secure than those schemes from the viewpoint of attacks without factoring, such as low multiplier attacks.
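To see why elliptic-curve arithmetic over Z_n is entangled with factoring n, note that the chord-and-tangent addition formulas need a modular inverse, and a denominator that is not invertible mod n betrays a factor. A toy sketch (my illustration with hypothetical small numbers, not the paper's schemes):

```python
from math import gcd

def ec_add(Pt, Qt, a: int, n: int):
    # Chord-and-tangent addition on y^2 = x^3 + a*x + b over Z_n.
    # None plays the role of the point at infinity.
    if Pt is None: return Qt
    if Qt is None: return Pt
    x1, y1 = Pt
    x2, y2 = Qt
    if x1 == x2 and (y1 + y2) % n == 0:
        return None
    if Pt == Qt:
        num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n
    else:
        num, den = (y2 - y1) % n, (x2 - x1) % n
    g = gcd(den, n)
    if g > 1:                    # the inverse does not exist mod n:
        raise ValueError(g)      # we have stumbled on a factor of n
    s = num * pow(den, -1, n) % n
    x3 = (s * s - x1 - x2) % n
    return (x3, (s * (x1 - x3) - y1) % n)

# Doubling the point (1, 2) on y^2 = x^3 + 3 over Z_35 (toy modulus):
assert ec_add((1, 2), (1, 2), 0, 35) == (27, 31)
```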
The Generation of Random Numbers That Are Probably Prime
 Journal of Cryptology
, 1988
Abstract

Cited by 23 (0 self)
In this paper we make two observations on Rabin's probabilistic primality test. The first is a provocative reason why Rabin's test is so good. It turns out that a single iteration has a non-negligible probability of failing _only_ on composite numbers that can actually be split in expected polynomial time. Therefore, factoring would be easy if Rabin's test systematically failed with a 25% probability on each composite integer (which, of course, it does not). The second observation is more fundamental because it is _not_ restricted to primality testing: it has consequences for the entire field of probabilistic algorithms. The failure probability when using a probabilistic algorithm for the purpose of testing some property is compared with that when using it for the purpose of obtaining a random element hopefully having this property. More specifically, we investigate the question of how reliable Rabin's test is when used to _generate_ a random integer that is probably prime, rather than to _test_ a specific integer for primality.
Key words: factorization, false witnesses, primality testing, probabilistic algorithms, Rabin's test.
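One iteration of Rabin's test (commonly called Miller-Rabin) can be sketched as follows; per the analysis above, a composite passes a random base with probability at most 1/4, so k independent iterations shrink the error to at most 4^-k (a minimal sketch, not a production implementation):

```python
import random

def rabin_witness(n: int, a: int) -> bool:
    # Returns True if base a "witnesses" that odd n > 2 is composite.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return False
    return True

def probably_prime(n: int, k: int = 20) -> bool:
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    return not any(rabin_witness(n, random.randrange(2, n - 1))
                   for _ in range(k))

assert probably_prime(10007)              # prime
assert not probably_prime(10007 * 10039)  # composite
```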
Transparent Proofs and Limits to Approximation
, 1994
Abstract

Cited by 16 (0 self)
We survey a major collective accomplishment of the theoretical computer science community on efficiently verifiable proofs. Informally, a formal proof is transparent (or holographic) if it can be verified with large confidence by a small number of spot-checks. Recent work by a large group of researchers has shown that this seemingly paradoxical concept can be formalized and is feasible in a remarkably strong sense: every formal proof in ZF, say, can be rewritten in transparent format (proving the same theorem in a different proof system) without increasing the length of the proof by too much. This result in turn has surprising implications for the intractability of approximate solutions of a wide range of discrete optimization problems, extending the pessimistic predictions of the P ≠ NP theory to approximate solvability. We discuss the main results on transparent proofs and their implications for discrete optimization. We give an account of several links between the two subjects as well ...
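The "small number of spot-checks" idea is well illustrated by the Blum-Luby-Rubinfeld linearity test used inside such proof systems: to check that f over GF(2)^m is (close to) linear, sample pairs and verify f(x ⊕ y) = f(x) ⊕ f(y). A toy exhaustive version of the check, with hypothetical example functions (my illustration, not from the survey):

```python
def blr_violations(f, m: int) -> int:
    # Count pairs (x, y) with f(x ^ y) != f(x) ^ f(y); a linear f has
    # none, and a far-from-linear f fails many random spot-checks.
    return sum(f(x ^ y) != f(x) ^ f(y)
               for x in range(2 ** m) for y in range(2 ** m))

linear = lambda x: bin(x & 0b1011).count("1") % 2   # parity of masked bits
nonlinear = lambda x: (x & 1) & ((x >> 1) & 1)      # AND of two bits

assert blr_violations(linear, 4) == 0    # linearity survives every check
assert blr_violations(nonlinear, 4) > 0  # e.g. x=1, y=2: f(3)=1 but 0^0=0
```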
Cryptographic Secure Pseudo-Random Bits Generation: The Blum-Blum-Shub Generator
, 1999
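The generator named in this title works by repeated squaring modulo a Blum integer n = p·q, emitting the least-significant bit of each state; a minimal sketch with deliberately tiny primes (insecure, illustration only):

```python
def bbs_bits(seed: int, n: int, k: int) -> list[int]:
    # Blum-Blum-Shub: x_{i+1} = x_i^2 mod n; output each state's parity.
    x = seed * seed % n          # start from a quadratic residue
    out = []
    for _ in range(k):
        x = x * x % n
        out.append(x & 1)
    return out

p, q = 10007, 10039              # both primes are 3 mod 4 (toy sizes only)
n = p * q
stream = bbs_bits(seed=123456, n=n, k=16)
assert len(stream) == 16 and set(stream) <= {0, 1}
assert stream == bbs_bits(123456, n, 16)   # deterministic in the seed
```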
Coastal processes
Abstract

Cited by 8 (0 self)
This report describes a study of coastal processes along the Atlantic coast from Asbury Park to Manasquan, New Jersey. Numerical predictive models for storm surge, dune erosion, nearshore wave transformation, and shoreline response were used in conjunction with an intensive analysis of available physical data to assist in the design, evaluation, and implementation of comprehensive shore protection plans for this densely populated and heavily structured coastal region. The study was divided into four independent but interrelated areas: (a) deepwater wave climate analysis and nearshore wave transformation, (b) long-term shoreline response numerical modeling, (c) development of coastal stage-frequency relationships, and (d) numerical modeling of storm-induced dune erosion. The results, interrelations, and recommendations of these tasks are presented in the main body of the report together with guidance for the interpretation of (Continued)
Are 'Strong' Primes Needed for RSA?
 In The 1997 RSA Laboratories Seminar Series, Seminars Proceedings
, 1999
Abstract

Cited by 6 (1 self)
We review the arguments in favor of using so-called "strong primes" in the RSA public-key cryptosystem. There are two types of such arguments: those that say that strong primes are needed to protect against factoring attacks, and those that say that strong primes are needed to protect against "cycling" attacks (based on repeated encryption).
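The simplest instance of a "strong prime" condition is the safe-prime requirement that (p - 1)/2 itself be prime, which rules out small odd factors of p - 1 of the kind that cycling and p-1 style attacks exploit. A toy check by trial division (my illustration; real key generation uses probabilistic tests at much larger sizes):

```python
def is_prime(n: int) -> bool:
    # Trial division -- adequate for toy sizes only.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_safe_prime(p: int) -> bool:
    # p is "safe" when (p - 1) / 2 is also prime, so p - 1 has no
    # small odd factors for a cycling or p-1 style attack to use.
    return p % 2 == 1 and is_prime(p) and is_prime((p - 1) // 2)

assert is_safe_prime(23)        # (23 - 1) / 2 = 11 is prime
assert not is_safe_prime(13)    # (13 - 1) / 2 = 6 is composite
```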