Results 1–10 of 57
A Method for Obtaining Digital Signatures and Public-Key Cryptosystems
 Communications of the ACM
, 1978
"... An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: 1. Couriers or other secure means are not needed to transmit keys, since a message can be enciphered usin ..."
Abstract

Cited by 2890 (30 self)
An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: 1. Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key. 2. A message can be "signed" using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in "electronic mail" and "electronic funds transfer" systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two lar...
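The encryption rule sketched in this abstract — raise the message M to the public power e modulo n — can be illustrated with a toy example (deliberately tiny, insecure primes; the real scheme uses large primes):

```python
# Toy RSA-style example of the abstract's scheme, with illustrative tiny primes.
p, q = 61, 53
n = p * q                  # public modulus n = 3233
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public encryption exponent, coprime to phi
d = pow(e, -1, phi)        # private decryption exponent (Python 3.8+ modular inverse)

M = 65                     # message represented as a number, 0 <= M < n
C = pow(M, e, n)           # encrypt: C = M^e mod n
assert pow(C, d, n) == M   # decrypt: C^d mod n recovers M
# Signing reverses the roles: S = M^d mod n, verified by checking S^e mod n == M.
```

Security rests on the difficulty of recovering d from (e, n) without knowing the factorization of n.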
Randomized Algorithms
, 1995
"... Randomized algorithms, once viewed as a tool in computational number theory, have by now found widespread application. Growth has been fueled by the two major benefits of randomization: simplicity and speed. For many applications a randomized algorithm is the fastest algorithm available, or the simp ..."
Abstract

Cited by 1863 (38 self)
Randomized algorithms, once viewed as a tool in computational number theory, have by now found widespread application. Growth has been fueled by the two major benefits of randomization: simplicity and speed. For many applications a randomized algorithm is the fastest algorithm available, or the simplest, or both. A randomized algorithm is an algorithm that uses random numbers to influence the choices it makes in the course of its computation. Thus its behavior (typically quantified as running time or quality of output) varies from
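A canonical instance of the kind of algorithm this abstract describes — one whose choices, and hence running time, depend on random numbers while its output is always correct — is randomized selection (a standard textbook example, not one drawn from this survey):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-indexed) of list a.
    A Las Vegas randomized algorithm: the answer is always correct,
    but the running time varies with the random pivot choices
    (expected linear in len(a))."""
    pivot = random.choice(a)
    lo = [x for x in a if x < pivot]
    hi = [x for x in a if x > pivot]
    eq = len(a) - len(lo) - len(hi)   # count of elements equal to the pivot
    if k < len(lo):
        return quickselect(lo, k)
    if k < len(lo) + eq:
        return pivot
    return quickselect(hi, k - len(lo) - eq)
```

Repeated runs on the same input may take different numbers of recursive calls, yet always return the same value — exactly the behavior variation the abstract refers to.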
Problems in Computational Geometry
 Packing and Covering
, 1974
"...  reproduced, stored In a retrieval system, or transmlt'ted, In any form or by any means, electronic, mechanical, photocopying, or otherwise, without the prior written permission of the author. ..."
Abstract

Cited by 452 (2 self)
reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, or otherwise, without the prior written permission of the author.
Designing Programs That Check Their Work
, 1989
"... A program correctness checker is an algorithm for checking the output of a computation. That is, given a program and an instance on which the program is run, the checker certifies whether the output of the program on that instance is correct. This paper defines the concept of a program checker. It d ..."
Abstract

Cited by 302 (17 self)
A program correctness checker is an algorithm for checking the output of a computation. That is, given a program and an instance on which the program is run, the checker certifies whether the output of the program on that instance is correct. This paper defines the concept of a program checker. It designs program checkers for a few specific and carefully chosen problems in the class FP of functions computable in polynomial time. Problems in FP for which checkers are presented in this paper include Sorting, Matrix Rank and GCD. It also applies methods of modern cryptography, especially the idea of a probabilistic interactive proof, to the design of program checkers for group theoretic computations. Two structural theorems are proven here. One is a characterization of problems that can be checked. The other theorem establishes equivalence classes of problems such that whenever one problem in a class is checkable, all problems in the class are checkable.
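For Sorting, one of the problems the abstract mentions, the checking idea can be sketched very simply (a simplified illustration of the general concept, not the paper's construction): treat the sorter as a black box and certify its output independently.

```python
from collections import Counter

def check_sort(program, xs):
    """Sketch of a result checker for a sorting program: certify that
    program(xs) is (a) in nondecreasing order and (b) a permutation of
    the input, without trusting the program's internals."""
    ys = program(xs)
    is_sorted = all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    same_multiset = Counter(ys) == Counter(xs)
    return is_sorted and same_multiset

# A buggy "sorter" that silently drops duplicates is caught by the
# multiset test, even though its output is sorted:
buggy = lambda xs: sorted(set(xs))
assert check_sort(sorted, [3, 1, 2, 1])      # correct program passes
assert not check_sort(buggy, [3, 1, 2, 1])   # buggy program fails
```

The checker runs in time comparable to the computation itself, which is the efficiency requirement the paper places on checkers.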
Winnowing: Local Algorithms for Document Fingerprinting
 Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data 2003
, 2003
"... Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents. We introduce the class of local document fingerprinting algor ..."
Abstract

Cited by 180 (3 self)
Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents. We introduce the class of local document fingerprinting algorithms, which seems to capture an essential property of any fingerprinting technique guaranteed to detect copies. We prove a novel lower bound on the performance of any local algorithm. We also develop winnowing, an efficient local fingerprinting algorithm, and show that winnowing's performance is within 33% of the lower bound. Finally, we also give experimental results on Web data, and report experience with MOSS, a widely-used plagiarism detection service.
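The core selection rule of winnowing can be sketched in a few lines (a simplified illustration; parameters k and w are chosen here arbitrarily, and Python's built-in string hash stands in for a proper rolling hash):

```python
def winnow(text, k=5, w=4):
    """Sketch of winnowing's fingerprint selection: hash every k-gram,
    then in each window of w consecutive hashes keep the rightmost
    minimal hash, recording it with its position.
    Note: Python's str hash is salted per process (PYTHONHASHSEED),
    so fingerprints are only comparable within one run."""
    hashes = [hash(text[i:i + k]) for i in range(len(text) - k + 1)]
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        # rightmost occurrence of the minimal hash in this window
        j = max(p for p in range(w) if window[p] == min(window))
        fingerprints.add((i + j, window[j]))
    return fingerprints
```

This selection gives the guarantee the abstract alludes to: any match at least w + k - 1 characters long shares at least one fingerprint between the two documents, because some window of hashes lies entirely inside the match and both documents select the same minimum from it.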
Practical Verifiable Encryption and Decryption of Discrete Logarithms
, 2003
"... Abstract. This paper addresses the problem of designing practical protocols for proving properties about encrypted data. To this end, it presents a variant of the new public key encryption of Cramer and Shoup based on Paillier’s decision composite residuosity assumption, along with efficient protoco ..."
Abstract

Cited by 135 (20 self)
This paper addresses the problem of designing practical protocols for proving properties about encrypted data. To this end, it presents a variant of the new public key encryption of Cramer and Shoup based on Paillier's decision composite residuosity assumption, along with efficient protocols for verifiable encryption and decryption of discrete logarithms (and more generally, of representations with respect to multiple bases). This is the first verifiable encryption system that provides chosen ciphertext security and avoids inefficient cut-and-choose proofs. The presented protocols have numerous applications, including key escrow, optimistic fair exchange, publicly verifiable secret and signature sharing, universally composable commitments, group signatures, and confirmer signatures.
Symmetry Breaking In Distributed Networks
 Information and Computation
, 1981
"... Given a ring of n processors it is required to design the processors such that they will be able to choose a leader (a uniquely designated processor) by sending messages along the ring. If the processors are indistinguishable then there exists no deterministic algorithm to solve the problem. To over ..."
Abstract

Cited by 70 (0 self)
Given a ring of n processors it is required to design the processors such that they will be able to choose a leader (a uniquely designated processor) by sending messages along the ring. If the processors are indistinguishable then there exists no deterministic algorithm to solve the problem. To overcome this difficulty, probabilistic algorithms are proposed. The algorithms may run forever but they terminate within finite time on the average. For the synchronous case several algorithms are presented: The simplest requires, on the average, the transmission of no more than 2.442n bits and O(n) time. More sophisticated algorithms trade time for communication complexity. If the processors work asynchronously then on the average O(n log n) bits are transmitted. In the above cases the size of the ring was assumed to be known to all the processors. If the size is not known then finding it may be done only with high probability: any algorithm may yield incorrect results (with nonzero probabilit...
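The essential idea — identical processors break symmetry by flipping coins, and the protocol may in principle run forever but terminates in finite expected time — can be simulated in a few lines (an illustrative Las Vegas scheme, not the paper's bit-optimal protocol):

```python
import random

def elect_leader(n, id_bits=8, rng=random):
    """Toy simulation of probabilistic symmetry breaking among n
    indistinguishable processors: each repeatedly draws a random ID;
    a round succeeds when the maximum ID is unique, and that
    processor becomes the leader. Termination is guaranteed only
    in expectation, matching the abstract's Las Vegas behavior."""
    while True:
        ids = [rng.getrandbits(id_bits) for _ in range(n)]
        top = max(ids)
        if ids.count(top) == 1:    # symmetry broken: a unique maximum
            return ids.index(top)  # the leader's position on the ring

leader = elect_leader(5)
assert 0 <= leader < 5
```

With ties all round, the draw simply repeats; the expected number of rounds is constant for reasonable id_bits, which is why the expected running time is finite even though no deterministic bound exists.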
Backwards Analysis of Randomized Geometric Algorithms
 Trends in Discrete and Computational Geometry, volume 10 of Algorithms and Combinatorics
, 1992
"... The theme of this paper is a rather simple method that has proved very potent in the analysis of the expected performance of various randomized algorithms and data structures in computational geometry. The method can be described as "analyze a randomized algorithm as if it were running backwards in ..."
Abstract

Cited by 60 (0 self)
The theme of this paper is a rather simple method that has proved very potent in the analysis of the expected performance of various randomized algorithms and data structures in computational geometry. The method can be described as "analyze a randomized algorithm as if it were running backwards in time, from output to input." We apply this type of analysis to a variety of algorithms, old and new, and obtain solutions with optimal or near optimal expected performance for a plethora of problems in computational geometry, such as computing Delaunay triangulations of convex polygons, computing convex hulls of point sets in the plane or in higher dimensions, sorting, intersecting line segments, linear programming with a fixed number of variables, and others.
1 Introduction
The curious phenomenon that randomness can be used profitably in the solution of computational tasks has attracted a lot of attention from researchers in recent years. The approach has proved useful in such diverse area...
Geometric Applications of a Randomized Optimization Technique
 Discrete Comput. Geom
, 1999
"... We propose a simple, general, randomized technique to reduce certain geometric optimization problems to their corresponding decision problems. These reductions increase the expected time complexity by only a constant factor and eliminate extra logarithmic factors in previous, often more complicated, ..."
Abstract

Cited by 49 (6 self)
We propose a simple, general, randomized technique to reduce certain geometric optimization problems to their corresponding decision problems. These reductions increase the expected time complexity by only a constant factor and eliminate extra logarithmic factors in previous, often more complicated, deterministic approaches (such as parametric searching). Faster algorithms are thus obtained for a variety of problems in computational geometry: finding minimal k-point subsets, matching point sets under translation, computing rectilinear p-centers and discrete 1-centers, and solving linear programs with k violations.
1 Introduction
Consider the classic randomized algorithm for finding the minimum of r numbers min{A[1], ..., A[r]}:
Algorithm randmin
1. randomly pick a permutation ⟨i_1, ..., i_r⟩ of ⟨1, ..., r⟩
2. t ← ∞
3. for k = 1, ..., r do
4.   if A[i_k] < t then
5.     t ← A[i_k]
6. return t
By a well-known fact [27, 44], the expected number of times that step 5 is execut...
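The randmin pseudocode in this snippet translates directly to Python (a straightforward rendering, with a counter added to track how often the minimum is updated):

```python
import random

def randmin(A):
    """Python rendering of the randmin pseudocode: scan the entries in
    random order, updating the running minimum t, and count the updates
    (executions of step 5 in the pseudocode)."""
    order = list(range(len(A)))
    random.shuffle(order)          # step 1: random permutation
    t = float("inf")               # step 2: t <- infinity
    updates = 0
    for i in order:                # steps 3-5
        if A[i] < t:
            t = A[i]
            updates += 1
    return t, updates

# The k-th entry examined is a new minimum with probability 1/k, so the
# expected number of updates is the harmonic number H_r ~ ln r - the
# "well-known fact" the snippet's truncated last sentence refers to.
m, _ = randmin([5, 2, 9, 1, 7])
assert m == 1
```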
On the Efficiency of Parallel Backtracking
, 1992
"... It is known that isolated executions of parallel backtrack search exhibit speedup anomalies. In this paper we present analytical models and experimental results on the average case behavior of parallel backtracking. We consider two types of backtrack search algorithms: (i) simple backtracking (wh ..."
Abstract

Cited by 47 (6 self)
It is known that isolated executions of parallel backtrack search exhibit speedup anomalies. In this paper we present analytical models and experimental results on the average case behavior of parallel backtracking. We consider two types of backtrack search algorithms: (i) simple backtracking (which does not use any heuristic information); (ii) heuristic backtracking (which uses heuristics to order and prune search). We present analytical models to compare the average number of nodes visited in sequential and parallel search for each case. For simple backtracking, we show that the average speedup obtained is (i) linear when distribution of solutions is uniform and (ii) superlinear when distribution of solutions is nonuniform. For heuristic backtracking, the average speedup obtained is at least linear (i.e., either linear or superlinear), and the speedup obtained on a subset of instances (called difficult instances) is superlinear. We also present experimental results over ...