Results 1–10 of 34
Discrete Logarithms in Finite Fields and Their Cryptographic Significance
, 1984
Abstract

Cited by 87 (6 self)
Given a primitive element g of a finite field GF(q), the discrete logarithm of a nonzero element u ∈ GF(q) is that integer k, 1 ≤ k ≤ q − 1, for which u = g^k. The well-known problem of computing discrete logarithms in finite fields has acquired additional importance in recent years due to its applicability in cryptography. Several cryptographic systems would become insecure if an efficient discrete logarithm algorithm were discovered. This paper surveys and analyzes known algorithms in this area, with special attention devoted to algorithms for the fields GF(2^n). It appears that in order to be safe from attacks using these algorithms, the value of n for which GF(2^n) is used in a cryptosystem has to be very large and carefully chosen. Due in large part to recent discoveries, discrete logarithms in fields GF(2^n) are much easier to compute than in fields GF(p) with p prime. Hence the fields GF(2^n) ought to be avoided in all cryptographic applications. On the other hand, ...
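The definition in the abstract can be made concrete with a brute-force search, feasible only for toy parameters; the function name and the tiny modulus below are illustrative, and the algorithms the paper actually surveys (e.g. index-calculus methods) are vastly faster than this sketch.

```python
def discrete_log(g, u, p):
    """Return k with 1 <= k <= p-1 and g**k == u (mod p), or None.

    Brute-force search over all exponents; only usable for tiny p.
    """
    x = 1
    for k in range(1, p):
        x = (x * g) % p  # x now equals g**k mod p
        if x == u:
            return k
    return None

# Example: 2 is a primitive element of GF(11), and 2^6 = 64 ≡ 9 (mod 11)
print(discrete_log(2, 9, 11))
```

Cryptographic parameters make this linear scan hopeless, which is exactly why the survey's subexponential algorithms for GF(2^n) matter.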
Multiplying matrices faster than Coppersmith-Winograd
 In Proc. 44th ACM Symposium on Theory of Computation
, 2012
Abstract

Cited by 39 (5 self)
We develop new tools for analyzing matrix multiplication constructions similar to the Coppersmith-Winograd construction, and obtain a new improved bound of ω < 2.3727.
Extra high speed matrix multiplication on the Cray-2
 SIAM J. Sci. Stat. Comput
, 1988
Abstract

Cited by 32 (2 self)
The Cray-2 is capable of performing matrix multiplication at very high rates. Using library routines provided by Cray Research, Inc., performance rates of 300 to 425 MFLOPS can be obtained on a single processor, depending on system load. Considerably higher rates can be achieved with all four processors running simultaneously. This article describes how matrix multiplication can be performed even faster, up to twice the above rates. This can be achieved by (1) employing Strassen's matrix multiplication algorithm to reduce the number of floating-point operations performed and (2) utilizing local memory on the Cray-2 to avoid performance losses due to memory bank contention. The numerical stability and potential for parallel application of this procedure are also discussed.
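Strassen's scheme trades the classical 8 block multiplications for 7, at the cost of extra additions, which is the operation-count saving the paper exploits. A minimal sketch on 2×2 matrices of scalars (nested tuples; the function name is hypothetical, and a real implementation applies these formulas recursively to matrix blocks):

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices (nested tuples) with Strassen's 7 products."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of the result
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

print(strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))))
```

Applied recursively to n×n blocks, the 7-instead-of-8 recurrence gives the O(n^2.81) operation count; the extra additions are why the paper's speedup also depends on managing memory bank contention.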
Efficient Parallel Evaluation of Straight-line Code and Arithmetic Circuits
 SIAM J. Comput
, 1988
Abstract

Cited by 31 (5 self)
A new parallel algorithm is given to evaluate a straight-line program. The algorithm evaluates a program over a commutative semiring R of degree d and size n in time O(log n (log nd)) using M(n) processors, where M(n) is the number of processors required for multiplying n × n matrices over the semiring R in O(log n) time. Appears in SIAM J. Comput., 17/4, pp. 687–695 (1988). Preliminary version of this paper appeared in [6]. In this paper we consider the problem of dynamic evaluation of a straight-line program in parallel. This is a generalization of the result of Valiant et al [10]. They consider the problem of ta...
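A straight-line program over a semiring is just a list of two-operand instructions referring to earlier results; evaluating it sequentially is trivial, and the paper's contribution is doing so in polylogarithmic parallel time. A minimal sequential sketch, using the (max, +) semiring as an example (the instruction encoding and names here are assumptions, not the paper's notation):

```python
import operator

def evaluate(program, inputs, add=max, mul=operator.add):
    """Evaluate a straight-line program over a semiring (add, mul).

    program: list of (op, i, j) instructions, op in {"add", "mul"},
    where i and j index earlier values (inputs occupy slots 0..len-1).
    Returns the value of the last instruction.
    """
    values = list(inputs)          # slot table: inputs first, then results
    for op, i, j in program:
        if op == "add":
            values.append(add(values[i], values[j]))
        else:
            values.append(mul(values[i], values[j]))
    return values[-1]

# Over (max, +): slot 2 = max(3, 5) = 5... then 5 "times" 3 means 5 + 3
print(evaluate([("add", 0, 1), ("mul", 2, 0)], [3, 5]))
```

The sequential loop takes time linear in program size; the paper shows how to collapse this chain of data dependences to O(log n (log nd)) parallel time using fast matrix multiplication over R.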
Parsing Incomplete Sentences
, 1988
Abstract

Cited by 29 (2 self)
An efficient context-free parsing algorithm is presented that can parse sentences with unknown parts of unknown length. It produces in finite form all possible parses (often infinite in number) that could account for the missing parts. The algorithm is a variation on the construction due to Earley. However, its presentation is such that it can readily be adapted to any chart parsing schema (top-down, bottom-up, etc...).
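The core idea of tolerating unknown material can be illustrated with a simple CYK chart parser rather than the paper's Earley variant: an unknown token is allowed to match any lexical category, so every completion consistent with the grammar survives in the chart. This sketch handles only single unknown words of known position (unlike the paper, which handles unknown parts of unknown length); the grammar and names are illustrative:

```python
UNKNOWN = None

# Toy grammar in Chomsky normal form (illustrative, not from the paper)
lexical = {"john": {"NP"}, "mary": {"NP"}, "sees": {"V"}}
binary = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}

def cyk(words):
    """CYK chart: table[i][l] = nonterminals deriving words[i : i+l+1]."""
    n = len(words)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        if w is UNKNOWN:
            # An unknown word may stand for any word the lexicon knows
            for cats in lexical.values():
                table[i][0] |= cats
        else:
            table[i][0] = set(lexical.get(w, set()))
    for length in range(1, n):
        for i in range(n - length):
            for split in range(length):
                for l in table[i][split]:
                    for r in table[i + split + 1][length - split - 1]:
                        table[i][length] |= binary.get((l, r), set())
    return table

# "john ? mary" still parses as a sentence: the gap can be a verb
chart = cyk(["john", UNKNOWN, "mary"])
print("S" in chart[0][2])
```

An Earley-style formulation, as in the paper, additionally lets a single unknown marker span arbitrarily many words, producing a finite shared representation of the (possibly infinite) parse set.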
Solving Sparse, Symmetric, Diagonally-Dominant Linear Systems in Time O(m^1.31)
 In FOCS ’03: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science
, 2003
"... ..."
An overview of computational complexity
 Communications of the ACM
, 1983
Abstract

Cited by 17 (0 self)
foremost recognition of technical contributions to the computing community. The citation of Cook's achievements noted that "Dr. Cook has advanced our understanding of the complexity of computation in a significant and profound way. His seminal paper, The Complexity of Theorem Proving Procedures, presented at the 1971 ACM SIGACT Symposium on the Theory of Computing, laid the foundations for the theory of NP-completeness. The ensuing exploration of the boundaries and nature of the NP-complete class of problems has been one of the most active and important research activities in computer science for the last decade. Cook is well known for his influential results in fundamental areas of computer science. He has made significant contributions to complexity theory, to time-space tradeoffs in computation, and to logics for programming languages. His work is characterized by elegance and insights and has illuminated the very nature of computation." During 1970–1979, Cook did extensive work under grants from the ...
Communication-optimal parallel algorithm for Strassen’s matrix multiplication
 In Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA ’12
, 2012
Abstract

Cited by 15 (13 self)
Parallel matrix multiplication is one of the most studied fundamental problems in distributed and high performance computing. We obtain a new parallel algorithm that is based on Strassen’s fast matrix multiplication and minimizes communication. The algorithm outperforms all known parallel matrix multiplication algorithms, classical and Strassen-based, both asymptotically and in practice. A critical bottleneck in parallelizing Strassen’s algorithm is the communication between the processors. Ballard, Demmel, Holtz, and Schwartz (SPAA’11) prove lower bounds on these communication costs, using expansion properties of the underlying computation graph. Our algorithm matches these lower bounds, and so is communication-optimal. It exhibits perfect strong scaling within the maximum possible range.