Results 1–10 of 332
A Method for Obtaining Digital Signatures and Public-Key Cryptosystems
Communications of the ACM, 1978
Cited by 2980 (30 self)
An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: 1. Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key. 2. A message can be "signed" using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in "electronic mail" and "electronic funds transfer" systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two lar...
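The arithmetic in the final sentence can be sketched with toy numbers; the primes below are illustrative only, far too small for real use, where the factors of n are hundreds of digits long:

```python
# Toy RSA sketch of the scheme described above (hypothetical parameters).
p, q = 61, 53            # two small primes, kept secret
n = p * q                # public modulus, 3233
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public encryption exponent, coprime to phi
d = pow(e, -1, phi)      # private decryption exponent: d*e ≡ 1 (mod phi)

M = 42                   # a message, represented as a number < n
C = pow(M, e, n)         # encrypt: raise to e, take remainder mod n
assert pow(C, d, n) == M # only the holder of d can decrypt

S = pow(M, d, n)         # "sign" with the privately held exponent
assert pow(S, e, n) == M # anyone can verify with the public exponent
```

The same modular-exponentiation primitive serves both consequences listed in the abstract: encryption uses the public exponent, signing the private one.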
Random sampling with a reservoir
ACM Transactions on Mathematical Software, 1985
Cited by 263 (3 self)
We introduce fast algorithms for selecting a random sample of n records without replacement from a pool of N records, where the value of N is unknown beforehand. The main result of the paper is the design and analysis of Algorithm Z; it does the sampling in one pass using constant space and in O(n(1 + log(N/n))) expected time, which is optimum, up to a constant factor. Several optimizations are studied that collectively improve the speed of the naive version of the algorithm by an order of magnitude. We give an efficient Pascal-like implementation that incorporates these modifications and that is suitable for general use. Theoretical and empirical results indicate that Algorithm Z outperforms current methods by a significant margin.
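The one-pass invariant that Algorithm Z accelerates is easiest to see in the classic reservoir method (Algorithm R), sketched below; Vitter's contribution is computing how many records to skip, so that most records need no random number at all:

```python
import random

def reservoir_sample(stream, n):
    """One-pass sample of n records without replacement from a stream
    whose length N is unknown beforehand (classic Algorithm R; Vitter's
    Algorithm Z maintains the same invariant but skips records)."""
    reservoir = []
    for t, record in enumerate(stream):
        if t < n:
            reservoir.append(record)     # fill the reservoir first
        else:
            j = random.randrange(t + 1)  # uniform index in [0, t]
            if j < n:
                reservoir[j] = record    # keep record with prob. n/(t+1)
    return reservoir

sample = reservoir_sample(range(10_000), 5)  # 5 records, one pass, O(n) space
```

After processing t+1 records, each record seen so far occupies the reservoir with equal probability n/(t+1), which is the invariant both algorithms preserve.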
Automatic Program Parallelization
1993
Cited by 105 (8 self)
This paper presents an overview of automatic program parallelization techniques. It covers dependence analysis techniques, followed by a discussion of program transformations, including straight-line code parallelization, do-loop transformations, and parallelization of recursive routines. The last section of the paper surveys several experimental studies on the effectiveness of parallelizing compilers.
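The core transformation the survey covers can be shown in miniature: once dependence analysis proves that no iteration of a loop reads a value another iteration writes, the loop may run in parallel. A sketch (Python stands in here for the compiler's output; a real parallelizer emits threaded code from the source loop automatically):

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # No loop-carried dependence: iteration i reads and writes only
    # data indexed by i, so iterations may execute in any order.
    return i * i

# The original serial do-loop ...
serial = [body(i) for i in range(1000)]

# ... and the parallelized form a compiler can substitute once
# dependence analysis has proved the iterations independent.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(body, range(1000)))

assert serial == parallel  # same results in any execution order
```

A loop such as `a[i] = a[i-1] + 1`, by contrast, carries a dependence from each iteration to the next and would be rejected by the analysis.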
On the Rapid Computation of Various Polylogarithmic Constants
Manuscript, 1996
Cited by 102 (30 self)
We give algorithms for the computation of the dth digit of certain transcendental numbers in various bases. These algorithms can be easily implemented (multiple-precision arithmetic is not needed), require virtually no memory, and feature run times that scale nearly linearly with the order of the digit desired. They make it feasible to compute, for example, the billionth binary digit of log(2) or π on a modest workstation in a few hours' run time. We demonstrate this technique by computing the ten billionth hexadecimal digit of π, the billionth hexadecimal digits of π², log(2) and log²(2), and the ten billionth decimal digit of log(9/10). These calculations rest on the observation that very special types of identities exist for certain numbers like π, π², log(2) and log²(2). These are essentially polylogarithmic ladders in an integer base. A number of these identities that we derive in this work appear to be new, for example the critical identity for π:
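The digit-extraction idea can be sketched using the paper's base-16 identity for π: the dth hexadecimal digit is the leading digit of the fractional part of 16^(d-1)·π, and that fractional part is computable with modular exponentiation, constant memory, and ordinary floating-point arithmetic:

```python
def pi_hex_digit(d):
    """Hexadecimal digit of pi at position d after the point (d >= 1),
    via the identity
      pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).
    Each term needs only O(log d) arithmetic; memory use is O(1)."""
    def series(j):
        # fractional part of sum_k 16^(d-1-k) / (8k+j)
        s = 0.0
        for k in range(d):                 # terms where 16^(d-1-k) >= 1:
            s = (s + pow(16, d - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        t, k = 0.0, d                      # a few small tail terms
        while True:
            term = 16.0 ** (d - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return s + t
    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(16 * frac)
```

Since π = 3.243F6A88... in hexadecimal, `pi_hex_digit(1)` yields 2 and `pi_hex_digit(2)` yields 4; float round-off limits how deep this particular sketch can reach, which production implementations address with extra guard precision.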
More Flexible Exponentiation with Precomputation
Advances in Cryptology - CRYPTO '94, 1994
Cited by 84 (4 self)
A new precomputation method is presented for computing g^R for a fixed element g and a randomly chosen exponent R in a given group. Our method is more efficient and flexible than the previously proposed methods, especially in the case where the amount of storage available is very small or quite large. It is also very efficient in computing g^R y^E for a small size E and variable number y, which occurs in the verification of Schnorr's identification scheme or its variants. Finally it is shown that our method is well-suited for parallel processing as well.
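The trade-off being improved here, precomputation on a fixed base versus on-line work per exponent, is easiest to see in the simplest fixed-base scheme: a one-time table of repeated squarings of g. This is a baseline sketch, not the paper's (more storage-flexible) method:

```python
def precompute(g, p, bits):
    # One-time table for the fixed base g: table[i] = g^(2^i) mod p.
    table = [g % p]
    for _ in range(bits - 1):
        table.append(table[-1] * table[-1] % p)
    return table

def fixed_base_pow(table, R, p):
    # g^R as the product of table entries at R's set-bit positions;
    # at exponentiation time no squarings remain, only multiplications.
    acc, i = 1, 0
    while R:
        if R & 1:
            acc = acc * table[i] % p
        R >>= 1
        i += 1
    return acc

# Example with a hypothetical modulus; any modulus works for the identity.
p = 1_000_003
T = precompute(2, p, 20)                 # paid once for the fixed base g = 2
assert fixed_base_pow(T, 123456, p) == pow(2, 123456, p)
```

Schemes like the paper's refine this by splitting the exponent into blocks, which lets the table size scale up or down with the storage available.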
Iterated function systems and permutation representations of the Cuntz algebra
1996
Cited by 74 (19 self)
We study a class of representations of the Cuntz algebras O_N, N = 2, 3, ..., acting on L^2(T) where T = R/2πZ. The representations arise in wavelet theory, but are of independent interest. We find and describe the decomposition into irreducibles, and show how the O_N-irreducibles decompose when restricted to the subalgebra UHF_N ⊂ O_N of gauge-invariant elements; and we show that the whole structure is accounted for by arithmetic and combinatorial properties of the integers Z. We have general results on a class of representations of O_N on Hilbert space H such that the generators S_i as operators permute the elements in some orthonormal basis for H. We then use this to extend our results from L^2(T) to L^2(T^d), d > 1; even to L^2(T) where T is some fractal version of the torus which carries more of the algebraic
Random Testing
Encyclopedia of Software Engineering, 1994
Cited by 70 (7 self)
this technical sense; however, it is certainly not the most used method.) If the technical meaning contrasts "random" with "systematic," it is in the sense that fluctuations in physical measurements are random (unpredictable or chaotic) vs. systematic (causal or lawful). Why is it desirable to be "unsystematic" on purpose in selecting test data for a program? (1) Because there are efficient methods of selecting random points algorithmically, by computing pseudorandom numbers; thus a vast number of tests can be easily defined. (2) Because statistical independence among test points allows statistical prediction of significance in the observed results. In the sequel it will be seen that (1) may be compromised because the required result of an easily generated test is not so easy to generate. (2) is the more important quality of random testing, both in practice and for the theory of software testing. To make an analogy with the case of physical measurement, it is only random fluctuations that can be "averaged out" to yield an improved measurement over many trials; systematic fluctuations might in principle be eliminated, but if their cause (or even their existence) is unknown, they forever invalidate the measurement. The analogy is better than it seems: in program testing, with systematic methods we know what we are doing, but not what it means; only by giving up all systematization can the significance of testing be known. Random testing at its best can be illustrated by a simple example. Suppose that a subroutine is written to compute the (floating-point) cube root of an integer parameter. The method to be used has been shown to be accurate to within 2x10
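The cube-root scenario translates directly into code. The sketch below assumes a hypothetical `cube_root` routine standing in for the subroutine under test; point (1) above appears as the cheap pseudorandom generation of test inputs, and the "required result" problem is sidestepped by using back-substitution (cubing the output) as the oracle:

```python
import random

def cube_root(x):
    # Hypothetical routine under test; the oracle below does not
    # depend on how it is implemented.
    return x ** (1.0 / 3.0) if x >= 0 else -((-x) ** (1.0 / 3.0))

random.seed(1)                 # seeded, so the pseudorandom run is reproducible
for _ in range(10_000):        # a vast number of tests, trivially generated
    x = random.randint(-10**6, 10**6)
    r = cube_root(x)
    # Oracle: cubing the result must recover x to within tolerance.
    assert abs(r ** 3 - x) <= 1e-6 * max(1.0, abs(x)), (x, r)
```

The statistical independence of the 10,000 points is what makes a clean run meaningful as evidence, per point (2); the same count of hand-picked "systematic" inputs would support no such inference.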