Results 1 - 10 of 41
Rover: A Toolkit for Mobile Information Access
, 1995
Abstract

Cited by 188 (4 self)
The Rover toolkit combines relocatable dynamic objects and queued remote procedure calls to provide unique services for "roving" mobile applications. A relocatable dynamic object is an object with a well-defined interface that can be dynamically loaded into a client computer from a server computer (or vice versa) to reduce client-server communication requirements. Queued remote procedure call is a communication system that permits applications to continue to make non-blocking remote procedure call requests even when a host is disconnected, with requests and responses being exchanged upon network reconnection. The challenges of mobile environments include intermittent connectivity, limited bandwidth, and channel-use optimization. Experimental results from a Rover-based mail reader, calendar program, and two non-blocking versions of World Wide Web browsers show that Rover's services are a good match to these challenges. The Rover toolkit also offers advantages for workstation applications by providing a uniform distributed object architecture for code shipping, object caching, and asynchronous object invocation.
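The queued RPC idea described in this abstract can be illustrated with a minimal sketch; the class and method names below are hypothetical stand-ins for illustration, not Rover's actual API:

```python
import collections

class QueuedRPC:
    """Sketch of a queued remote procedure call layer: calls made while
    disconnected are logged locally and replayed on reconnection."""

    def __init__(self):
        self.connected = False
        self.pending = collections.deque()  # queued (procedure, args) pairs
        self.responses = []

    def call(self, procedure, *args):
        """Non-blocking call: enqueue the request; it is delivered
        immediately only if the host is currently connected."""
        self.pending.append((procedure, args))
        if self.connected:
            self._drain()

    def reconnect(self):
        """On network reconnection, exchange queued requests and responses."""
        self.connected = True
        self._drain()

    def _drain(self):
        while self.pending:
            procedure, args = self.pending.popleft()
            self.responses.append(procedure(*args))

rpc = QueuedRPC()
rpc.call(lambda x: x + 1, 41)   # queued while disconnected
rpc.call(lambda x: x * 2, 21)
rpc.reconnect()                 # both requests replayed now
print(rpc.responses)            # [42, 42]
```

The key property is that `call` never blocks: the application keeps running while disconnected, and the exchange happens wholesale at reconnection time.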
Elliptic Curves And Primality Proving
 Math. Comp
, 1993
Abstract

Cited by 162 (22 self)
The aim of this paper is to describe the theory and implementation of the Elliptic Curve Primality Proving algorithm.
Mobile computing with the Rover toolkit
 IEEE Transactions on Computers
, 1997
Abstract

Cited by 156 (2 self)
Rover is a software toolkit that supports the construction of both mobile-transparent and mobile-aware applications. The objective of the mobile-transparent approach is to develop proxies for system services that hide the mobile characteristics of the environment from applications. Since applications can be run without alteration, the mobile-transparent approach is appealing. However, to excel, applications operating in the harsh conditions of a mobile environment must often be aware of and take an active part in mitigating those conditions. The Rover toolkit supports a set of programming and communication abstractions that enable the construction of both mobile-transparent and mobile-aware applications. Using the Rover abstractions, applications obtain increased availability, concurrency, resource allocation efficiency, fault tolerance, consistency, and adaptation. Experimental evaluation of a suite of mobile applications built with the toolkit demonstrates that such application-level control can be obtained with relatively little programming overhead and allows correct operation, increases interactive performance, and dramatically reduces network utilization under intermittently connected conditions.
Adaptive Parallelism and Piranha
, 1995
Abstract

Cited by 77 (0 self)
Under "adaptive parallelism," the set of processors executing a parallel program may grow or shrink as the program runs. Potential gains include the capacity to run a parallel program on the idle workstations in a conventional LAN (processors join the computation when they become idle, and withdraw when their owners need them) and to manage the nodes of a dedicated multiprocessor efficiently. Experience to date with our Piranha system for adaptive parallelism suggests that these possibilities can be achieved in practice on real applications at comparatively modest costs.

Keywords: Parallelism, networks, multiprocessors, adaptive parallelism, programming techniques, Linda, Piranha.

1 Introduction. Most work on parallelism is "static": it assumes that programs are distributed over processor sets that remain fixed throughout the computation. If a program starts out on 64 processors, it runs on exactly 64 until completion, and specifically on the same 64. "Adaptive parallelism" (AP) abo...
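The core of adaptive parallelism (a shared task bag served by a worker set that can change mid-run) can be sketched as follows; the names are illustrative, and this is not the Linda/Piranha API:

```python
import queue
import threading

# Illustrative sketch: workers serving a shared task bag can join while
# the computation runs, and all tasks complete regardless of pool size.
tasks = queue.Queue()
results = []
results_lock = threading.Lock()
stop = threading.Event()

def worker():
    """A 'piranha': repeatedly grabs tasks until told to withdraw."""
    while not stop.is_set():
        try:
            n = tasks.get(timeout=0.1)
        except queue.Empty:
            continue
        with results_lock:
            results.append(n * n)   # the 'work': square the input
        tasks.task_done()

for n in range(100):
    tasks.put(n)

# Start with two workers; a third "idle workstation" joins mid-run.
workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()
late = threading.Thread(target=worker)
late.start()

tasks.join()          # all 100 tasks complete whatever the pool size was
stop.set()
for w in workers + [late]:
    w.join()
print(len(results))   # 100
```

Withdrawal works symmetrically: a worker that stops polling the bag simply leaves its unclaimed tasks for the survivors, which is why the task count, not the worker count, determines completion.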
Solving Large Sparse Linear Systems Over Finite Fields
, 1991
Abstract

Cited by 72 (2 self)
Many of the fast methods for factoring integers and computing discrete logarithms require the solution of large sparse linear systems of equations over finite fields. This paper presents the results of implementations of several linear algebra algorithms. It shows that very large sparse systems can be solved efficiently by using combinations of structured Gaussian elimination and the conjugate gradient, Lanczos, and Wiedemann methods.

1. Introduction. Factoring integers and computing discrete logarithms often requires solving large systems of linear equations over finite fields. General surveys of these areas are presented in [14, 17, 19]. So far there have been few implementations of discrete logarithm algorithms, but many of integer factoring methods. Some of the published results have involved solving systems of over 6 × 10^4 equations in more than 6 × 10^4 variables [12]. In factoring, equations have had to be solved over the field GF(2). In that situation, ordinary...
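Dense Gaussian elimination over GF(2), the phase that structured elimination ultimately reduces a sparse system to, can be sketched with each row packed into a Python integer; this is an illustrative sketch, not the paper's implementation:

```python
def solve_gf2(rows, n):
    """Gaussian elimination over GF(2). Each row is a Python int:
    bit i of a row is the coefficient of variable i; bit n is the RHS.
    Returns one solution as a list of bits, or None if inconsistent."""
    rows = list(rows)
    pivots = {}                      # pivot column -> row index
    r = 0
    for col in range(n):
        # find a row with a 1 in this column at or below position r
        for i in range(r, len(rows)):
            if (rows[i] >> col) & 1:
                rows[r], rows[i] = rows[i], rows[r]
                break
        else:
            continue                 # no pivot in this column (free variable)
        for i in range(len(rows)):   # XOR the pivot row into all others
            if i != r and (rows[i] >> col) & 1:
                rows[i] ^= rows[r]
        pivots[col] = r
        r += 1
    for i in range(r, len(rows)):
        if rows[i]:                  # leftover row says 0 = 1: inconsistent
            return None
    x = [0] * n
    for col, i in pivots.items():
        x[col] = (rows[i] >> n) & 1  # pivot variable takes the reduced RHS
    return x

# x0 + x1 = 1, x1 = 1, x0 + x2 = 0 over GF(2)
system = [0b1011, 0b1010, 0b0101]
print(solve_gf2(system, 3))          # [0, 1, 0]
```

Packing rows into machine words (here, arbitrary-precision ints) is what makes GF(2) elimination practical at scale: XOR processes a whole row per operation.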
On Diffie-Hellman Key Agreement with Short Exponents
 Proc. Eurocrypt '96, LNCS 1070
, 1996
Abstract

Cited by 59 (0 self)
The difficulty of computing discrete logarithms known to be "short" is examined, motivated by recent practical interest in using Diffie-Hellman key agreement with short exponents (e.g. over Z_p with 160-bit exponents and 1024-bit primes p). A new divide-and-conquer algorithm for discrete logarithms is presented, combining Pollard's lambda method with a partial Pohlig-Hellman decomposition. For random Diffie-Hellman primes p, examination reveals this partial decomposition itself allows recovery of short exponents in many cases, while the new technique dramatically extends the range. Use of subgroups of large prime order precludes the attack at essentially no cost, and is the recommended solution.
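Why short exponents fall to square-root-time attacks can be illustrated with baby-step giant-step, which has the same O(sqrt(bound)) cost as Pollard's lambda method used in the paper but is simpler to state; the modulus and exponent below are toy values:

```python
import math

def short_dlog(g, h, p, bound):
    """Baby-step giant-step for g^x = h (mod p), assuming 0 <= x < bound.
    Runs in O(sqrt(bound)) time and memory; Pollard's lambda method
    achieves the same time with negligible memory."""
    m = math.isqrt(bound) + 1
    baby = {}
    e = 1
    for j in range(m):               # baby steps: store g^j -> j
        baby.setdefault(e, j)
        e = e * g % p
    giant = pow(g, -m, p)            # g^{-m} mod p
    gamma = h
    for i in range(m):               # giant steps: h * g^{-i*m}
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % p
    return None

p = 1000003                          # a small prime (illustrative size)
g = 2
x = 40321                            # a "short" exponent, under 2^16
h = pow(g, x, p)
print(short_dlog(g, h, p, 1 << 16))  # 40321
```

Scaled up, a 160-bit exponent costs about 2^80 group operations by this route, which is why the paper's recommended defense is to work in a subgroup of large prime order rather than to rely on exponent length alone.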
Parallel Algorithms for Integer Factorisation
Abstract

Cited by 41 (17 self)
The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal-digit number, and possible to factor numbers larger than 120 decimal digits, given the availability of enough computing power. We describe several algorithms, including the elliptic curve method (ECM) and the multiple-polynomial quadratic sieve (MPQS) algorithm, and discuss their parallel implementation. It turns out that some of the algorithms are very well suited to parallel implementation. Doubling the degree of parallelism (i.e. the amount of hardware devoted to the problem) roughly increases the size of a number which can be factored in a fixed time by 3 decimal digits. Some recent computational results are mentioned – for example, the complete factorisation of the 617-decimal-digit Fermat number F11 = 2^(2^11) + 1, which was accomplished using ECM.
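The mechanism behind ECM — a failed modular inversion during elliptic-curve arithmetic mod n exposing a factor of n — can be sketched at toy scale. The structure and parameters below are illustrative only, far from a production implementation:

```python
import math
import random

def inv_mod(d, n):
    """Modular inverse that surfaces gcd(d, n) when inversion fails;
    in ECM, that failure is the whole point."""
    g = math.gcd(d, n)
    if g != 1:
        raise ZeroDivisionError(g)
    return pow(d, -1, n)

def ec_add(P, Q, a, n):
    """Add points on y^2 = x^3 + a*x + b, working modulo composite n."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None                          # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, n) % n
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

def ec_mul(k, P, a, n):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def ecm_stage1(n, bound=200, tries=20):
    """Try random curves; a factor appears when some multiple of P hits
    infinity modulo one prime factor of n but not the other."""
    for _ in range(tries):
        a = random.randrange(n)
        P = (random.randrange(n), random.randrange(n))  # b is implicit
        try:
            for k in range(2, bound):
                P = ec_mul(k, P, a, n)       # cumulatively P <- k! * P
        except ZeroDivisionError as err:
            g = err.args[0]
            if 1 < g < n:
                return g
    return None

print(ecm_stage1(10403))                     # 101 or 103 (10403 = 101 * 103)
```

The parallelism the abstract mentions is visible here: each curve trial is independent, so trials distribute across processors with no communication until one finds a factor.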
TimeOptimal MessageEfficient Work Performance in the Presence of Faults
, 1994
Abstract

Cited by 36 (5 self)
Performing work in parallel by a multitude of processes in a distributed environment is currently a fast-growing area of computer applications (due to its cost effectiveness). Adaptation of such applications to changes in the system's parallelism (i.e., the availability of processes) is essential for improved performance and reliability. In this work we consider one aspect of coping with dynamic process failures in such a setting, namely the following scenario formulated by Dwork, Halpern and Waarts [DHW92]: a system of n synchronous processes that communicate only by sending messages to one another. These processes must perform m independent units of work. Processes may fail by crashing, and wait-freeness is required, i.e. whenever at least one process survives, all m units of work must be performed. We consider the notion of fast algorithms in this setting, yet we are not willing to trade improved time for a high cost in communication. Thus, we require message efficiency as well. ...
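The baseline wait-free solution to this "do-all" problem can be sketched as a sequential simulation: every surviving process sweeps all m idempotent units from a staggered starting offset, so the work completes whenever at least one process survives. The paper's contribution is achieving this with near-optimal time and few messages, both of which this sketch ignores; all names are illustrative:

```python
def do_all(n, m, crashed, perform):
    """Each live process i sweeps all m units from its stagger offset;
    completed units are skipped (work is idempotent)."""
    done = set()
    for i in range(n):
        if i in crashed:
            continue                     # process i crashed before working
        for step in range(m):
            u = (i * m // n + step) % m  # staggered starts reduce overlap
            if u not in done:
                perform(u)
                done.add(u)
    return done

log = []
completed = do_all(n=4, m=10, crashed={0, 2, 3}, perform=log.append)
print(sorted(completed))                 # all 10 units, done by the lone survivor
```

Note what the naive sweep costs: without coordination messages, live processes may redo each other's work, which is exactly the time/message trade-off the paper attacks.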
Distributed MatrixFree Solution of Large Sparse Linear Systems over Finite Fields
 Algorithmica
, 1996
Abstract

Cited by 27 (6 self)
We describe a coarse-grain parallel software system for the homogeneous solution of linear systems. Our solutions are symbolic, i.e., exact rather than numerical approximations. Our implementation can be run on a network cluster of SPARC20 computers and on an SP2 multiprocessor. Detailed timings are presented for experiments with systems that arise in RSA challenge integer factoring efforts. For example, we can solve a 252,222 × 252,222 system with about 11.04 million nonzero entries over the Galois field with 2 elements using 4 processors of an SP2 multiprocessor, in about 26.5 hours CPU time.

1 Introduction. The problem of solving large, unstructured, sparse linear systems using exact arithmetic arises in symbolic linear algebra and computational number theory. For example the sieve-based factoring of large integers can lead to systems containing over 569,000 equations and variables and over 26.5 million nonzero entries, that need to be solved over the Galois field of two...
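The matrix-free ("black box") idea can be sketched as follows: Krylov-type solvers such as Wiedemann's need only the ability to apply A to a vector, never A in dense form, so an enormous sparse GF(2) system costs memory proportional to its nonzeros only. Vectors pack into single Python integers; names are illustrative:

```python
def apply_gf2(rows, v):
    """y = A*v over GF(2). rows[i] lists the nonzero columns of row i;
    vectors are bit-packed into Python ints (bit j = component j)."""
    y = 0
    for i, cols in enumerate(rows):
        bit = 0
        for c in cols:                  # dot product of row i with v, mod 2
            bit ^= (v >> c) & 1
        y |= bit << i
    return y

def krylov_sequence(rows, v, steps):
    """The sequence v, Av, A^2 v, ... that Wiedemann's method feeds to
    Berlekamp-Massey to recover the matrix's minimal polynomial (the
    Berlekamp-Massey stage is omitted from this sketch)."""
    seq = [v]
    for _ in range(steps):
        v = apply_gf2(rows, v)
        seq.append(v)
    return seq

# 3x3 example: A = [[1,1,0],[0,1,1],[1,0,1]] over GF(2), v = (1,0,0)
rows = [[0, 1], [1, 2], [0, 2]]
print(krylov_sequence(rows, 0b001, 3))  # [1, 5, 3, 6]
```

The black-box interface is also what makes the distributed version natural: each processor can hold a slice of the rows and contribute its part of each matrix-vector product.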