Results 1–10 of 58
Rover: A Toolkit for Mobile Information Access
, 1995
Cited by 207 (5 self)
Abstract
The Rover toolkit combines relocatable dynamic objects and queued remote procedure calls to provide unique services for "roving" mobile applications. A relocatable dynamic object is an object with a well-defined interface that can be dynamically loaded into a client computer from a server computer (or vice versa) to reduce client-server communication requirements. Queued remote procedure call is a communication system that permits applications to continue to make non-blocking remote procedure call requests even when a host is disconnected, with requests and responses being exchanged upon network reconnection. The challenges of mobile environments include intermittent connectivity, limited bandwidth, and channel-use optimization. Experimental results from a Rover-based mail reader, calendar program, and two non-blocking versions of World-Wide Web browsers show that Rover's services are a good match to these challenges. The Rover toolkit also offers advantages for workstation applications by providing a uniform distributed object architecture for code shipping, object caching, and asynchronous object invocation.
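The queued-RPC idea described above can be sketched in a few lines. This is a minimal illustration, not Rover's actual API: the class name, `transport` callable, and `add` operation are all hypothetical. Calls made while disconnected are logged locally and replayed when the link returns.

```python
from collections import deque

class QueuedRPC:
    """Sketch of queued remote procedure call (hypothetical API):
    non-blocking calls are appended to a local log while the host
    is disconnected and flushed to the server on reconnection."""
    def __init__(self, transport):
        self.transport = transport   # callable (op, args) -> result; stands in for the network
        self.connected = False
        self.pending = deque()       # stable log of outstanding requests
        self.responses = {}
        self.next_id = 0

    def call(self, op, *args):
        """Non-blocking: always queue; return a request id immediately."""
        rid = self.next_id
        self.next_id += 1
        self.pending.append((rid, op, args))
        if self.connected:
            self.flush()
        return rid

    def reconnect(self):
        self.connected = True
        self.flush()                 # requests/responses exchanged on reconnection

    def flush(self):
        while self.pending:
            rid, op, args = self.pending.popleft()
            self.responses[rid] = self.transport(op, args)

# hypothetical server side, for illustration only
def transport(op, args):
    ops = {"add": lambda a, b: a + b}
    return ops[op](*args)

rpc = QueuedRPC(transport)
rid = rpc.call("add", 2, 3)   # queued while disconnected; returns at once
rpc.reconnect()               # drained on reconnect; rpc.responses[rid] == 5
```

The caller never blocks on the network: it holds only a request id, and picks up the response after the exchange completes.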
Elliptic Curves And Primality Proving
 Math. Comp
, 1993
Cited by 201 (22 self)
Abstract
The aim of this paper is to describe the theory and implementation of the Elliptic Curve Primality Proving algorithm.
Mobile computing with the rover toolkit
 IEEE Transactions on Computers
, 1997
Solving Large Sparse Linear Systems Over Finite Fields
, 1991
Cited by 89 (3 self)
Abstract
Many of the fast methods for factoring integers and computing discrete logarithms require the solution of large sparse linear systems of equations over finite fields. This paper presents the results of implementations of several linear algebra algorithms. It shows that very large sparse systems can be solved efficiently by using combinations of structured Gaussian elimination and the conjugate gradient, Lanczos, and Wiedemann methods.

1. Introduction
Factoring integers and computing discrete logarithms often requires solving large systems of linear equations over finite fields. General surveys of these areas are presented in [14, 17, 19]. So far there have been few implementations of discrete logarithm algorithms, but many implementations of integer factoring methods. Some of the published results have involved solving systems of over 6 × 10^4 equations in more than 6 × 10^4 variables [12]. In factoring, equations have had to be solved over the field GF(2). In that situation, ordinary...
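To make the GF(2) setting concrete, here is a toy dense solver, a stand-in for the structured elimination / Lanczos / Wiedemann combinations the paper benchmarks on huge sparse systems. Each matrix row is packed into a Python int, so one row operation is a single XOR, the trick that makes GF(2) elimination cheap in practice.

```python
def gf2_solve(A, b, n):
    """Gaussian elimination over GF(2). A is a list of n ints
    (bit j of A[i] is the coefficient of x_j in equation i);
    b is a list of n bits. Returns one solution, with any free
    variables set to 0."""
    rows = [A[i] | (b[i] << n) for i in range(n)]   # augment each row with b
    pivot_row = [-1] * n       # pivot_row[col] = row holding that pivot, or -1
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, n) if (rows[i] >> col) & 1), None)
        if piv is None:
            continue           # no pivot: column is free
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(n):
            if i != r and (rows[i] >> col) & 1:
                rows[i] ^= rows[r]     # clear col everywhere else: one XOR per row
        pivot_row[col] = r
        r += 1
    # read the solution off the reduced rows
    return [(rows[pivot_row[c]] >> n) & 1 if pivot_row[c] != -1 else 0
            for c in range(n)]

# x0+x1 = 1, x1 = 1, x1+x2 = 0 over GF(2)  ->  x = [0, 1, 1]
x = gf2_solve([0b011, 0b010, 0b110], [1, 1, 0], 3)
```

Dense elimination like this is cubic in n; the point of the paper's sparse methods is to avoid exactly that cost when most coefficients are zero.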
Adaptive Parallelism and Piranha
, 1995
Cited by 82 (0 self)
Abstract
Under "adaptive parallelism," the set of processors executing a parallel program may grow or shrink as the program runs. Potential gains include the capacity to run a parallel program on the idle workstations in a conventional LAN (processors join the computation when they become idle, and withdraw when their owners need them) and to manage the nodes of a dedicated multiprocessor efficiently. Experience to date with our Piranha system for adaptive parallelism suggests that these possibilities can be achieved in practice on real applications at comparatively modest costs.

Keywords: Parallelism, networks, multiprocessors, adaptive parallelism, programming techniques, Linda, Piranha.

1 Introduction
Most work on parallelism is "static": it assumes that programs are distributed over processor sets that remain fixed throughout the computation. If a program starts out on 64 processors, it runs on exactly 64 until completion, and specifically on the same 64. "Adaptive parallelism" (AP) abo...
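The grow-or-shrink model described above can be approximated with a shared bag of tasks; this is a hypothetical sketch (threads as "workstations"), not Piranha's actual Linda-based interface. Because tasks live in the bag rather than being pre-assigned, the worker set can change mid-computation without losing work.

```python
import threading
import queue

class PiranhaBag:
    """Sketch of adaptive parallelism (hypothetical API): worker
    threads join when 'idle' and withdraw when 'reclaimed'; the
    shared task bag decouples the work from the worker set."""
    def __init__(self, tasks, work_fn):
        self.bag = queue.Queue()
        for t in tasks:
            self.bag.put(t)
        self.work_fn = work_fn
        self.results = queue.Queue()
        self.workers = {}

    def join_node(self, name):
        """An idle node joins and starts consuming tasks."""
        stop = threading.Event()
        def piranha():
            while not stop.is_set():
                try:
                    t = self.bag.get(timeout=0.05)
                except queue.Empty:
                    return                    # bag drained: nothing left to do
                self.results.put((t, self.work_fn(t)))
        th = threading.Thread(target=piranha)
        self.workers[name] = (th, stop)
        th.start()

    def withdraw(self, name):
        """The owner reclaims the node; it stops taking new tasks."""
        th, stop = self.workers.pop(name)
        stop.set()
        th.join()

    def wait(self):
        for name in list(self.workers):
            th, _ = self.workers.pop(name)
            th.join()

pool = PiranhaBag(range(1, 6), lambda t: t * t)
pool.join_node("ws1")
pool.join_node("ws2")   # a second workstation goes idle and joins mid-run
pool.wait()
done = sorted(pool.results.get() for _ in range(5))
```

A real system must also handle a node withdrawing mid-task (returning the unfinished task to the bag), which this sketch omits for brevity.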
On Diffie-Hellman Key Agreement with Short Exponents
 Proc. Eurocrypt '96, LNCS 1070
, 1996
Cited by 68 (0 self)
Abstract
The difficulty of computing discrete logarithms known to be "short" is examined, motivated by recent practical interest in using Diffie-Hellman key agreement with short exponents (e.g. over Zp with 160-bit exponents and 1024-bit primes p). A new divide-and-conquer algorithm for discrete logarithms is presented, combining Pollard's lambda method with a partial Pohlig-Hellman decomposition. For random Diffie-Hellman primes p, examination reveals this partial decomposition itself allows recovery of short exponents in many cases, while the new technique dramatically extends the range. Use of subgroups of large prime order precludes the attack at essentially no cost, and is the recommended solution.
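The reason short exponents are fragile can be seen with baby-step giant-step restricted to a bound; this is an illustrative substitute, not the paper's attack (which combines Pollard's lambda method with a partial Pohlig-Hellman decomposition), but it exploits the same gap: recovering x ≤ bound costs O(sqrt(bound)) group operations rather than O(sqrt(p)).

```python
from math import isqrt

def short_dlog(g, h, p, bound):
    """Baby-step giant-step searching only exponents x <= bound:
    find x with g^x = h (mod p), or None if no such short x."""
    m = isqrt(bound) + 1
    baby = {}
    e = 1
    for j in range(m):                 # baby steps: store g^j -> j
        baby.setdefault(e, j)
        e = e * g % p
    giant = pow(g, -m, p)              # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m + 1):             # giant steps: check h * g^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % p
    return None

# toy parameters: a 6-bit "short" exponent in the full group mod 101
x = short_dlog(2, pow(2, 27, 101), 101, 64)   # -> 27
```

With a 160-bit exponent the attacker's sqrt(bound) is 2^80, which is exactly why the abstract's recommendation, working in a subgroup of large prime order, matters: it blocks the partial-decomposition shortcut without shrinking that 2^80 work factor.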
Parallel Algorithms for Integer Factorisation
Cited by 44 (17 self)
Abstract
The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal-digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several algorithms, including the elliptic curve method (ECM) and the multiple-polynomial quadratic sieve (MPQS) algorithm, and discuss their parallel implementation. It turns out that some of the algorithms are very well suited to parallel implementation. Doubling the degree of parallelism (i.e. the amount of hardware devoted to the problem) roughly increases the size of a number which can be factored in a fixed time by 3 decimal digits. Some recent computational results are mentioned – for example, the complete factorisation of the 617-decimal-digit Fermat number F11 = 2^(2^11) + 1, which was accomplished using ECM.
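For a self-contained taste of randomized factoring, here is Pollard's rho method, a deliberately small stand-in for the ECM and MPQS algorithms the abstract discusses; all three reduce factoring to producing a nontrivial gcd with the composite n, and rho's independent random starting points parallelise in the same embarrassingly parallel way as ECM curves.

```python
from math import gcd
import random

def pollard_rho(n):
    """Pollard's rho: iterate x -> x^2 + c (mod n) with Floyd
    cycle detection; a collision mod an unknown prime factor p
    of n shows up as gcd(|x - y|, n) > 1. Expects composite n."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)     # fresh random iteration constant
        x = y = 2
        d = 1
        while d == 1:
            x = (x * x + c) % n        # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n        # hare: two steps
            d = gcd(abs(x - y), n)
        if d != n:                     # cycle exposed a proper factor
            return d                   # (d == n means bad luck: retry with new c)

f = pollard_rho(101 * 103)   # returns 101 or 103
```

Rho runs in roughly n^(1/4) steps, so it only scratches the sizes quoted above; ECM's advantage is that its running time depends on the size of the smallest prime factor rather than of n itself.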
Time-optimal message-efficient work performance in the presence of faults
 In Proceedings of the 13th ACM Symposium on Principles of Distributed Computing (PODC)
, 1994
Adaptive Parallelism with Piranha
Cited by 29 (0 self)
Abstract
"Adaptive parallelism" refers to parallel computations on a dynamically changing set of processors: processors may join or withdraw from the computation as it proceeds. Networks of fast workstations are the most important setting for adaptive parallelism at present. Workstations at most sites are typically idle for significant fractions of the day, and those idle cycles may constitute in the aggregate a powerful computing resource. For this reason and others, we believe that adaptive parallelism is assured of playing an increasingly prominent role in parallel applications development over the next decade. The "Piranha" system now up and running on a heterogeneous network at Yale is a general-purpose adaptive parallelism environment. It has been used to run a variety of production applications, including applications in graphics, theoretical physics, electrical engineering and computational fluid dynamics. In this paper we describe the Piranha model and several archetypal Piranha algori...