Results 1 - 10 of 24
Efficient generation of shared RSA keys
Advances in Cryptology - CRYPTO '97, 1997
Cited by 124 (4 self)
Abstract
We describe efficient techniques for a number of parties to jointly generate an RSA key. At the end of the protocol an RSA modulus N = pq is publicly known. None of the parties knows the factorization of N. In addition, a public encryption exponent is publicly known, and each party holds a share of the private exponent that enables threshold decryption. Our protocols are efficient in computation and communication. All results are presented in the honest-but-curious setting (passive adversary).
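The threshold-decryption property described in the abstract can be illustrated with a toy sketch. Everything here is a hypothetical simplification: the primes are tiny and generated in one place (in the actual protocol no single party ever learns them), and a plain additive sharing d = d1 + d2 + d3 stands in for the paper's share structure:

```python
# Illustrative sketch of threshold RSA decryption with additive shares of d.
# Toy parameters only; this is NOT the paper's joint-generation protocol.
import random

p, q = 1009, 1013                     # toy primes (known here, unlike the protocol)
N = p * q
phi = (p - 1) * (q - 1)
e = 17                                # public encryption exponent
d = pow(e, -1, phi)                   # private exponent (Python 3.8+ modular inverse)

# Split d into three additive shares over the integers: d = d1 + d2 + d3.
d1 = random.randrange(d)
d2 = random.randrange(d - d1)
d3 = d - d1 - d2

m = 424242 % N
c = pow(m, e, N)                      # anyone can encrypt with the public key (N, e)

# Each party applies only its own share; multiplying the partial results
# recombines the exponent: c^(d1+d2+d3) = c^d = m (mod N).
partials = [pow(c, di, N) for di in (d1, d2, d3)]
recovered = (partials[0] * partials[1] * partials[2]) % N
print(recovered == m)                 # True
```

No party's share reveals d on its own, yet the partial decryptions combine to the plaintext, which is the sense in which the shares "enable threshold decryption".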
Parameterized Complexity: A Framework for Systematically Confronting Computational Intractability
DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1997
Cited by 72 (15 self)
Abstract
In this paper we give a programmatic overview of parameterized computational complexity in the broad context of the problem of coping with computational intractability. We give some examples of how fixed-parameter tractability techniques can deliver practical algorithms in two different ways: (1) by providing useful exact algorithms for small parameter ranges, and (2) by providing guidance in the design of heuristic algorithms. In particular, we describe an improved FPT kernelization algorithm for Vertex Cover, a practical FPT algorithm for the Maximum Agreement Subtree (MAST) problem parameterized by the number of species to be deleted, and new general heuristics for these problems based on FPT techniques. In the course of making this overview, we also investigate some structural and hardness issues. We prove that an important naturally parameterized problem in artificial intelligence, STRIPS Planning (where the parameter is the size of the plan), is complete for W[1]. As a corollary, this implies that k-Step Reachability for Petri Nets is complete for W[1]. We describe how the concept of treewidth can be applied to STRIPS Planning and other problems of logic to obtain FPT results. We describe a surprising structural result concerning the top end of the parameterized complexity hierarchy: the naturally parameterized Graph k-Coloring problem cannot be resolved with respect to XP either by showing membership in XP, or by showing hardness for XP, without settling the P = NP question one way or the other.
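The Vertex Cover kernelization mentioned in the abstract can be illustrated with the classic Buss-style kernel, a simpler relative of the improved algorithm the paper describes (the function `buss_kernel` and the instance below are illustrative, not the paper's algorithm):

```python
# Buss-style kernelization for Vertex Cover, parameterized by cover size k.
def buss_kernel(edges, k):
    """Reduce (edges, k) to an equivalent instance with at most k*k edges,
    or return None if no vertex cover of size k can exist."""
    edges = set(frozenset(e) for e in edges)
    forced = set()                     # vertices that must be in any size-k cover
    changed = True
    while changed and k > 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, dv in deg.items():
            if dv > k:                 # v covers > k edges, so v must be chosen
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if len(edges) > k * k:             # max degree <= k, so > k*k edges is a "no"
        return None
    return forced, edges, k

# A star with 5 leaves and budget k=1: the centre is forced, kernel is empty.
res = buss_kernel([(0, i) for i in range(1, 6)], 1)
print(res)                             # ({0}, set(), 0)
```

The point of the kernel is the FPT guarantee the abstract alludes to: after polynomial-time preprocessing, what remains has size bounded by a function of k alone (here at most k² edges), independent of the input graph's size.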
Knowledge, probability, and adversaries
Journal of the ACM, 1993
Cited by 72 (24 self)
Abstract
What should it mean for an agent to know or believe an assertion is true with probability .99? Different papers [FH88, FZ88a, HMT88] give different answers, choosing to use quite different probability spaces when computing the probability that an agent assigns to an event. We show that each choice can be understood in terms of a betting game. This betting game itself can be understood in terms of three types of adversaries influencing three different aspects of the game. The first selects the outcome of all nondeterministic choices in the system; the second represents the knowledge of the agent's opponent in the betting game (this is the key place where the papers mentioned above differ); the third is needed in asynchronous systems to choose the time the bet is placed. We illustrate the need for considering all three types of adversaries with a number of examples. Given a class of adversaries, we show how to assign probability spaces to agents in a way most appropriate for that class, where "most appropriate" is made precise in terms of this betting game. We conclude by showing how different assignments of probability spaces (corresponding to different opponents) yield different levels of guarantees in probabilistic coordinated attack.
Non-Transitive Transfer of Confidence: A Perfect Zero-Knowledge Interactive Protocol for SAT and Beyond
1986
Cited by 56 (5 self)
Abstract
A perfect zero-knowledge interactive proof is a protocol by which Alice can convince Bob of the truth of some theorem in a way that yields no information as to how the proof might proceed (in the sense of Shannon's information theory). We give a general technique for achieving this goal for any problem in NP (and beyond). The fact that our protocol is perfect zero-knowledge does not depend on unproved cryptographic assumptions. Furthermore, our protocol is powerful enough to allow Alice to convince Bob of theorems for which she does not even have a proof. Whenever Alice can convince herself probabilistically of a theorem, perhaps thanks to her knowledge of some trapdoor information, she can convince Bob as well, without compromising the trapdoor in any way. This results in a non-transitive transfer of confidence from Alice to Bob, because Bob will not be able to convince anyone else afterwards. Our protocol is dual to those of [GrMiWi86a, BrCr86].
Zero-Knowledge Simulation of Boolean Circuits
1987
Cited by 37 (7 self)
Abstract
A zero-knowledge interactive proof is a protocol by which Alice can convince a polynomially bounded Bob of the truth of some theorem without giving him any hint as to how the proof might proceed. Under cryptographic assumptions, we give a general technique for achieving this goal for any problem in NP. This extends to a presumably larger class, which combines the powers of nondeterminism and randomness. Our protocol is powerful enough to allow Alice to convince Bob of theorems for which she does not even have a proof. Whenever Alice can convince herself probabilistically of a theorem, perhaps thanks to her knowledge of some trapdoor information, she can convince Bob as well, without compromising the trapdoor in any way.
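The "cryptographic assumptions" that protocols of this kind rest on are usually packaged as a bit-commitment scheme: the prover commits to values up front and opens only what the verifier challenges. A minimal hash-based sketch (assuming SHA-256 behaves as a hiding and binding commitment; the paper itself does not prescribe this particular construction):

```python
# Toy bit-commitment scheme built from a hash function.
# Hiding: the commitment reveals nothing about the bit (random nonce).
# Binding: opening the commitment to a different bit would need a collision.
import hashlib
import os

def commit(bit):
    """Commit to a bit in {0, 1}; returns (commitment, opening)."""
    nonce = os.urandom(16)
    c = hashlib.sha256(nonce + bytes([bit])).digest()
    return c, (nonce, bit)

def verify(c, opening):
    """Check that an opening (nonce, bit) matches the commitment c."""
    nonce, bit = opening
    return hashlib.sha256(nonce + bytes([bit])).digest() == c

c, opening = commit(1)
print(verify(c, opening))              # True: honest opening accepted
print(verify(c, (opening[0], 0)))      # False: cannot reopen as the other bit
```

In a zero-knowledge protocol for circuits, commitments like this let the prover fix the wire values of a (randomized) evaluation before the verifier chooses what to inspect, so each round reveals only what the challenge asks for.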
An introduction to quantum complexity theory
Collected Papers on Quantum Computation and Quantum Information Theory, 2000
The Generation of Random Numbers That Are Probably Prime
Journal of Cryptology, 1988
Cited by 22 (0 self)
Abstract
In this paper we make two observations on Rabin's probabilistic primality test. The first is a provocative reason why Rabin's test is so good. It turns out that a single iteration has a non-negligible probability of failing _only_ on composite numbers that can actually be split in expected polynomial time. Therefore, factoring would be easy if Rabin's test systematically failed with a 25% probability on each composite integer (which, of course, it does not). The second observation is more fundamental, because it is _not_ restricted to primality testing: it has consequences for the entire field of probabilistic algorithms. The failure probability when using a probabilistic algorithm for the purpose of testing some property is compared with that when using it for the purpose of obtaining a random element hopefully having this property. More specifically, we investigate the question of how reliable Rabin's test is when used to _generate_ a random integer that is probably prime, rather than to _test_ a specific integer for primality.
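The generate-versus-test distinction the abstract draws can be made concrete with a standard Miller-Rabin sketch: one routine runs a single Rabin round, and another draws random candidates until one survives many rounds (function names and the choice of 20 rounds are illustrative, not the paper's recommendation):

```python
# Rabin's test, used in "generation mode": keep sampling random odd integers
# until one passes every round, and output that probable prime.
import random

def rabin_witness(a, n):
    """One Miller-Rabin round: True iff a witnesses that odd n > 2 is composite."""
    d, s = n - 1, 0
    while d % 2 == 0:                  # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False                   # a reveals nothing: n may be prime
    for _ in range(s - 1):
        x = (x * x) % n
        if x == n - 1:
            return False
    return True                        # a proves n composite

def random_probable_prime(bits, rounds=20):
    """Draw random odd bits-bit integers until one survives `rounds` rounds."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full width
        if not any(rabin_witness(random.randrange(2, n - 1), n)
                   for _ in range(rounds)):
            return n

p = random_probable_prime(64)
print(p % 2, p.bit_length())           # 1 64
```

The paper's question is precisely about the loop in `random_probable_prime`: the error probability of the generator is not simply the per-round bound of the test, because the candidates are random rather than adversarially chosen.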
Key words: factorization, false witnesses, primality testing, probabilistic algorithms, Rabin's test.
Hardness as randomness: A survey of universal derandomization
In Proceedings of the International Congress of Mathematicians, 2002
Cited by 11 (5 self)
Abstract
We survey recent developments in the study of probabilistic complexity classes. While the evidence seems to support the conjecture that probabilism can be deterministically simulated with relatively low overhead, i.e., that P = BPP, it also indicates that this may be a difficult question to resolve. In fact, proving that probabilistic algorithms have nontrivial deterministic simulations is basically equivalent to proving circuit lower bounds, either in the algebraic or Boolean models.
I don’t want to think about it now: Decision theory with costly computation
In KR'10, 2010
Cited by 6 (5 self)
Abstract
Computation plays a major role in decision making. Even if an agent is willing to ascribe a probability to all states and a utility to all outcomes, and maximize expected utility, doing so might present serious computational problems. Moreover, computing the outcome of a given act might be difficult. In a companion paper we develop a framework for game theory with costly computation, where the objects of choice are Turing machines. Here we apply that framework to decision theory. We show how well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on), belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions), and the status quo bias (people are much more likely to stick with what they already have) can be easily captured in that framework. Finally, we use the framework to define some new notions: value of computational information (a computational variant of value of information) and computational value of conversation.
Efficient Generation of Shared RSA keys (Extended Abstract)
In Kaliski [103
Cited by 5 (0 self)
Abstract
We describe efficient techniques for three (or more) parties to jointly generate an RSA key. At the end of the protocol an RSA modulus N = pq is publicly known. None of the parties knows the factorization of N. In addition, a public encryption exponent is publicly known, and each party holds a share of the private exponent that enables threshold decryption. Our protocols are efficient in computation and communication.