## Distributed Symbolic Computation with DTS (1995)

### Download Links

- www-sr.informatik.uni-tuebingen.de
- www-ti.informatik.uni-tuebingen.de
- DBLP

### Other Repositories/Bibliography

Venue: Proceedings of Parallel Algorithms for Irregularly Structured Problems, LNCS 980

Citations: 17 (6 self)

### BibTeX

    @INPROCEEDINGS{Bubeck95distributedsymbolic,
      author    = {T. Bubeck and M. Hiller and W. K{\"u}chlin and W. Rosenstiel},
      title     = {Distributed Symbolic Computation with DTS},
      booktitle = {Proceedings of Parallel Algorithms for Irregularly Structured Problems, LNCS 980},
      year      = {1995},
      pages     = {231--248},
      publisher = {Springer}
    }

### Abstract

We describe the design and implementation of the Distributed Threads System (DTS), a programming environment for the parallelization of irregular and highly data-dependent algorithms. DTS extends the support for fork/join parallel programming from shared memory threads to a distributed memory environment. It is currently implemented on top of PVM, adding an asynchronous RPC abstraction and turning the net into a pool of anonymous compute servers. Each node of DTS is multithreaded using the C threads interface and is thus ready to run on a multiprocessor workstation. We give performance results for a parallel implementation of the RSA cryptosystem, parallel long integer multiplication, and parallel multi-variate polynomial resultant computation.
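DTS itself is a C library layered on PVM, but the core idea of the abstract, extending fork/join from shared-memory threads to an asynchronous RPC over a pool of anonymous compute servers, can be modelled compactly. The sketch below is a simplified local stand-in, not the real DTS API: a Python thread pool plays the server pool, `submit` plays the fork (asynchronous RPC), and `result` plays the join.

```python
# A rough model of DTS-style fork/join: "fork" hands a side-effect-free
# task to a pool of anonymous workers and returns a handle; "join" blocks
# on the handle until the result comes back. DTS does this over PVM; here
# a local thread pool stands in for the network of compute servers.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # stands in for a heavy, side-effect-free symbolic task
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    handles = [pool.submit(square, i) for i in range(8)]   # fork phase
    results = [h.result() for h in handles]                # join phase

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the handles are joined in submission order, the results come back in input order even though the tasks run concurrently, mirroring the deterministic fork/join structure the paper relies on.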

### Citations

3188 | A method for obtaining digital signatures and public-key cryptosystems
- Rivest, Shamir, et al.
- 1977
Context: ...l reported run-times are the average of 5 single runs. The measurements were done at night, on an almost empty network. 3.1 The RSA Cryptosystem First we want to show the behavior of DTS with the RSA [26] cryptosystem. This problem is irregular in the sense that the input data is treated in blocks of a certain size depending on the key length. Additionally, the size of the input data determines the nu...
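The block structure described in this context is what makes RSA a natural fit for fork/join parallelism: each block is an independent modular exponentiation, so blocks can be forked to different servers. A self-contained toy illustration, with a deliberately tiny textbook key that is in no way secure:

```python
# Why RSA parallelizes per block: every block is encrypted and decrypted
# independently by a modular exponentiation. Toy key n = 61*53 = 3233,
# e*d = 1 (mod phi(n)); illustration only, far too small for real use.
n, e, d = 3233, 17, 2753

def encrypt_block(m):
    return pow(m, e, n)           # one independent task per block

def decrypt_block(c):
    return pow(c, d, n)

blocks = [65, 123, 1000]          # message split into numeric blocks < n
cipher = [encrypt_block(m) for m in blocks]
plain  = [decrypt_block(c) for c in cipher]
print(plain == blocks)  # True
```

In DTS each `encrypt_block` call would be a forked task; the number of such tasks grows with the input size and key length, which is the irregularity the context points out.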

947 | Distributed Operating Systems
- Tanenbaum, van Renesse
- 1985

Context: ...the load manager to determine whether the node is alive. The load manager is currently realized by a single process executing on some node (in addition to the server process). Techniques described in [33] can be used to decentralize load managing if it ever becomes a bottleneck. 1.5 Symbolic Algebra Symbolic algebraic algorithms find increasing application in scientific computation [12]. An important ...

744 | PVM: A framework for parallel distributed computing. Concurrency: Practice and Experience 2(4):315–339
- Sunderam
- 1990

Context: ...specially the case in day-to-day software development on workstation networks. A partial solution has been provided by portable parallelization environments such as the Parallel Virtual Machine (PVM) [32]. PVM is rapidly emerging as one de facto standard for distributed computation and thus becomes important for scientific applications. PVM supports the distributed creation of computation tasks (UNIX ...

270 | A taxonomy of scheduling in general-purpose distributed computing systems
- Casavant, Kuhl
- 1988

Context: ...ing In order to achieve good performance, parallel systems must balance the jobs on the processors. DTS uses the LBP to decide which job should be executed on which machine. Following the taxonomy of [8] our load balancing scheme could be characterized as adaptive physically non-distributed dynamic global scheduling with one-time assignment. Each node periodically sends its load (in general the numbe...
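The scheduling policy in this context (a centralized manager that collects periodic load reports and assigns each job exactly once to the least-loaded node) can be sketched in a few lines. The data structure and function names below are illustrative only, not the DTS LBP interface:

```python
# Sketch of adaptive, centralized, dynamic global scheduling with
# one-time assignment: nodes report load periodically; the manager
# places each new job on the currently least-loaded node and never
# migrates it afterwards. Names are hypothetical, not the DTS API.
def place_job(loads):
    node = min(loads, key=loads.get)   # pick the least-loaded node
    loads[node] += 1                   # one-time assignment: no migration
    return node

reported = {"node-a": 2, "node-b": 0, "node-c": 5}  # last load reports
assignments = [place_job(reported) for _ in range(4)]
print(assignments)
```

The "adaptive" part in the taxonomy comes from the fact that `reported` is refreshed by the periodic load messages, so placement decisions track the actual state of the network rather than a static schedule.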

259 | Algorithms for computer algebra
- Geddes, Czapor, et al.
- 1992

Context: ...l factorisation, polynomial resultant and g.c.d. computation, and fast integer arithmetic are important methods for this purpose. (For an overview and detailed information of algebraic algorithms see [15].) If these tasks can be speeded up through parallelization, bigger systems of equations can be solved. While many methods employed in symbolic algebra, such as the computation with multiple homomorph...

47 | The Nexus task-parallel runtime system
- Foster, Kesselman, et al.
- 1994

Context: ...ide that the most effective way to execute the fork is for the client to do it himself. Carrying input parameters together with the function name is similar to active messages as implemented by Nexus [13]. In contrast to DTS, Nexus is designed as a compiler target and is therefore more flexible but also harder to use. In addition to [13], we also balance the jobs and recover them in case of crashes. I...

41 | The calculation of multivariate polynomial resultants
- Collins
- 1971

Context: ...s, we get a method for solving systems of equations. For an introduction to resultants and their use in equation solving, see [15]; the modular resultant algorithm parallelized here is due to Collins [9]. The modular method recursively maps the multivariate resultant computation to multiple resultant computations of homomorphic images, and, using the Chinese Remainder Algorithm, lifts the image resul...
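The lifting step this context mentions, recombining results computed modulo several primes via the Chinese Remainder Algorithm, is the structural heart of the modular method. A minimal sketch of the integer case (the resultant images themselves are omitted and stand in as plain residues):

```python
# Chinese Remainder lifting: recover an integer from its images modulo
# pairwise coprime primes. In the modular resultant method, the residues
# would be resultants of homomorphic images of the input polynomials.
from math import prod

def crt(residues, primes):
    """Return x with x = r_i (mod p_i); valid while x < prod(primes)."""
    M = prod(primes)
    x = 0
    for r, p in zip(residues, primes):
        Mi = M // p
        x += r * Mi * pow(Mi, -1, p)   # pow(.., -1, p): modular inverse
    return x % M

primes = [5, 7, 11]
secret = 123                            # stands in for an integer resultant
residues = [secret % p for p in primes]
print(crt(residues, primes))  # 123
```

Each residue is computed independently, which is exactly why the image computations distribute so naturally in DTS: only the cheap lifting step at the end is sequential.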

20 | PARSAC-2: A parallel SAC-2 based on threads
- Küchlin
- 1991

Context: ...tivations, a concrete one and a more abstract one. The concrete motivation is to parallelize SAC-2 [10], a large library of symbolic algebra (Computer Algebra) codes. 3 The parallel library, PARSAC-2 [19, 23], also covers symbolic algorithms in automated theorem proving, specifically variants of Knuth-Bendix completion of term-rewriting systems [5, 6]. Both Symbolic and Algebraic algorithms are typically ...

15 | Parallel computation of modular multivariate polynomial resultants on a shared memory machine
- Hong, Loidl
- 1994

Context: ...her parallel Computer Algebra systems are also under construction. The PACLIB system [29] for shared memory machines has been influenced by Parsac; it has been used for parallel resultant computation [18]. DSC [11] for distributed symbolic computation is built directly upon TCP/IP; it has been used for very coarse-grained parallelization. PAC [27] was built on Transputers and used static assignment of...

14 | Experiments with virtual C Threads
- Küchlin, Ward
- 1992

Context: ...llelism, while the small-grained low-level tasks are executed otherwise. In the shared memory part of PARSAC-2, lazy task creation is already supported by the virtual threads enhancement of S-threads [24]. Practical experience with this scheme is excellent. In the course of a few minutes, hundreds of thousands of virtual tasks can be created and mapped onto the few available processors [5]. Virtual ta...
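Lazy task creation, as this context describes it, is what lets hundreds of thousands of virtual tasks map onto a handful of processors: a fork becomes a real task only while a worker is idle, and otherwise the child runs inline in its parent. The following is a deliberately simplified Python model of that policy (the `LazyPool` class is hypothetical, not the S-threads interface):

```python
# Model of lazy task creation: fork spawns a real thread only while an
# idle "worker" slot exists; otherwise the child executes inline and
# join becomes a no-op. Class and method names are illustrative only.
import threading

class LazyPool:
    def __init__(self, workers):
        self.slots = threading.Semaphore(workers)

    def fork(self, fn, *args):
        if self.slots.acquire(blocking=False):   # an idle worker exists
            box = {}
            def run():
                try:
                    box["v"] = fn(*args)
                finally:
                    self.slots.release()         # worker becomes idle again
            t = threading.Thread(target=run)
            t.start()
            def join():
                t.join()
                return box["v"]
            return join
        value = fn(*args)                        # no idle worker: run inline
        return lambda: value                     # join then costs nothing

def fib(pool, n):
    # many virtual forks, only a handful of real threads
    if n < 2:
        return n
    join = pool.fork(fib, pool, n - 1)
    b = fib(pool, n - 2)
    return join() + b

print(fib(LazyPool(2), 15))  # 610
```

Most of the recursive calls take the inline branch, so the overhead per virtual task is roughly one semaphore probe, which is why the scheme scales to the task counts the context reports.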

12 | DSC: A system for distributed symbolic computation
- Diaz, Kaltofen, et al.
- 1991

Context: ...el Computer Algebra systems are also under construction. The PACLIB system [29] for shared memory machines has been influenced by Parsac; it has been used for parallel resultant computation [18]. DSC [11] for distributed symbolic computation is built directly upon TCP/IP; it has been used for very coarse-grained parallelization. PAC [27] was built on Transputers and used static assignment of tasks to ...

10 | The S-threads environment for parallel symbolic computation
- Küchlin
- 1990

Context: ...gorithms are typically extremely data-dependent; in this paper, we only consider the algebraic part. PARSAC-2 was developed first for shared memory machines, using multithreaded C code with S-threads [22], a custom threads enhancement for symbolic computation. The original purpose of developing DTS was to logically extend the fork/join functionality of threads to the network as far as possible. If PAR...

10 | The design of the PACLIB kernel for parallel algebraic computation
- Schreiner, Hong
- 1993
Context: ...d the S-threads environment goes back to [19]; the work on DTS and its application goes back to [1, 16]. Several other parallel Computer Algebra systems are also under construction. The PACLIB system [29] for shared memory machines has been influenced by Parsac; it has been used for parallel resultant computation [18]. DSC [11] for distributed symbolic computation is built directly upon TCP/IP; it has...

9 | A Fine-Grained Parallel Completion Procedure
- Bündgen, Göbel, et al.
- 1994
Context: ...ter Algebra) codes. 3 The parallel library, PARSAC-2 [19, 23], also covers symbolic algorithms in automated theorem proving, specifically variants of Knuth-Bendix completion of term-rewriting systems [5, 6]. Both Symbolic and Algebraic algorithms are typically extremely data-dependent; in this paper, we only consider the algebraic part. PARSAC-2 was developed first for shared memory machines, using mult...

8 | Eine Systemumgebung zum verteilten funktionalen Rechnen [A system environment for distributed functional computing]
- Bubeck
- 1993

Context: ...-sizes of the heavy tasks that can move across the net. Our use of divide-and-conquer parallelization and the S-threads environment goes back to [19]; the work on DTS and its application goes back to [1, 16]. Several other parallel Computer Algebra systems are also under construction. The PACLIB system [29] for shared memory machines has been influenced by Parsac; it has been used for parallel resultant ...

8 | Multithreaded AC term rewriting
- Bündgen, Göbel, et al.
- 1994

Context: ...ter Algebra) codes. 3 The parallel library, PARSAC-2 [19, 23], also covers symbolic algorithms in automated theorem proving, specifically variants of Knuth-Bendix completion of term-rewriting systems [5, 6]. Both Symbolic and Algebraic algorithms are typically extremely data-dependent; in this paper, we only consider the algebraic part. PARSAC-2 was developed first for shared memory machines, using mult...

7 | PAC++ system and parallel algebraic numbers computation
- Gautier, Roch
- 1994

Context: ...ymbolic computation is built directly upon TCP/IP; it has been used for very coarse-grained parallelization. PAC [27] was built on Transputers and used static assignment of tasks to processors. PAC++ [14] is more similar to our work, using a divide-and-conquer parallelization style and a kernel structure. Its task placement is aided by cost prediction functions [28]. For a more complete overview of re...

7 | PARSAC-2: Parallel computer algebra on the desk-top
- Küchlin
- 1995
Context: ...tivations, a concrete one and a more abstract one. The concrete motivation is to parallelize SAC-2 [10], a large library of symbolic algebra (Computer Algebra) codes. 3 The parallel library, PARSAC-2 [19, 23], also covers symbolic algorithms in automated theorem proving, specifically variants of Knuth-Bendix completion of term-rewriting systems [5, 6]. Both Symbolic and Algebraic algorithms are typically ...

7 | Algebraic Computing on a Local Net
- Seitz

Context: ...egular regarding its computation weight. This might be not true due to side effects of the underlying Computer Algebra system. This algorithm was already distributed on a net of workstations by Seitz [30, 31]. His system, called DSAC-2, uses hand-crafted low-level communication in the context of the Computer Algebra system SAC-2 and its implementation language ALDES [25]. Our approach was to rely on stand...

6 | Parallel ReDuX → PaReDuX
- Bündgen, Göbel, et al.
- 1995

Context: ...quential code and new parallel algorithms. Figure 1 illustrates the system architecture. The application level consists of both sequential and parallel code; the term-rewriting part is called PaReDuX [7]. The clear separation into system environment and application provides for modularity and portability. Both parts have been developed simultaneously, S-threads responding to the needs of the applicati...

5 | Specification and Index of SAC-2 Algorithms
- Collins, Loos
- 1990

Context: ...ulti-variate polynomial resultant computation. 1 Introduction 1.1 Motivation Our work has two main motivations, a concrete one and a more abstract one. The concrete motivation is to parallelize SAC-2 [10], a large library of symbolic algebra (Computer Algebra) codes. 3 The parallel library, PARSAC-2 [19, 23], also covers symbolic algorithms in automated theorem proving, specifically variants of Knuth-...

5 | On the Multi-Threaded Computation of Integral Polynomial Greatest Common Divisors
- Küchlin
- 1991

Context: ...polynomial remainder sequences. This modular method of computing resultants is structurally very similar to the modular method of computing polynomial g.c.d.'s which was also parallelized in PARSAC-2 [20, 21]. 3.3 IPRES IPRES is irregular in the sense that the number of distributed MPRES calls and the primes used for transformation depend on the input polynomials. It should theoretically be regular regard...

5 | The algorithm description language
- Loos
- 1976

Context: ...a net of workstations by Seitz [30, 31]. His system, called DSAC-2, uses hand-crafted low-level communication in the context of the Computer Algebra system SAC-2 and its implementation language ALDES [25]. Our approach was to rely on standard communication systems, like PVM, and to build a distribution system also applicable to other languages and applications. We tested IPRES with 7 cases, varying th...

4 | SACLIB user's guide
- Buchberger, Encarnación, et al.
- 1993

Context: ... network of multi-processor workstations, we want to be able to * Research supported by DFG through SFB 382/C6 ** Research supported by DFG under grant Ku 966/2 3 We use SAC-2 in its C variant SACLIB [3]. execute a suitable subclass of threads transparently across the network instead of on the local node. Of course, these network-threads must be heavy enough, side-effect free, and able to copy their ...

4 | On the Multi-Threaded Computation of Modular Polynomial Greatest Common Divisors
- Küchlin
- 1991

Context: ...polynomial remainder sequences. This modular method of computing resultants is structurally very similar to the modular method of computing polynomial g.c.d.'s which was also parallelized in PARSAC-2 [20, 21]. 3.3 IPRES IPRES is irregular in the sense that the number of distributed MPRES calls and the primes used for transformation depend on the input polynomials. It should theoretically be regular regard...

4 | A new load-prediction scheme based on algorithmic cost functions
- Roch, Vermeerbergen, et al.
- 1994

Context: ...rocessors or the amount of parallelism generated elsewhere in the system. Precomputation of parallelization and task and data placement schedules are largely out of the question. It has been observed [28] that task placement may be aided by cost prediction functions based on the asymptotic analysis available for most algebraic algorithms, usually dependent on the degree and coefficient lengths of poly...

3 | Verteiltes symbolisches Rechnen mit PARSAC-2 [Distributed symbolic computing with PARSAC-2]. Master's thesis, W.-Schickard-Institut für Informatik, Universität Tübingen
- Hiller
- 1993

Context: ...-sizes of the heavy tasks that can move across the net. Our use of divide-and-conquer parallelization and the S-threads environment goes back to [19]; the work on DTS and its application goes back to [1, 16]. Several other parallel Computer Algebra systems are also under construction. The PACLIB system [29] for shared memory machines has been influenced by Parsac; it has been used for parallel resultant ...

3 | PAC: Towards a parallel computer algebra co-processor. In: Computer algebra and parallelism
- Roch
- 1989

Context: ...c; it has been used for parallel resultant computation [18]. DSC [11] for distributed symbolic computation is built directly upon TCP/IP; it has been used for very coarse-grained parallelization. PAC [27] was built on Transputers and used static assignment of tasks to processors. PAC++ [14] is more similar to our work, using a divide-and-conquer parallelization style and a kernel structure. Its task p...

1 | Symmetric distributed computing with dynamic load balancing and fault tolerance
- Bubeck, Küchlin, et al.
- 1995

Context: ...lacement is aided by cost prediction functions [28]. For a more complete overview of related work see [23]. 2 Design and Implementation of DTS In this section, we describe the basic properties of DTS [2] which provides a framework to parallelize a program on a number of workstations. They are connected in a way, that the programmer sees one single computer consisting of multiple processors. DTS consi...

1 | Verteiltes Rechnen in SAC-2 [Distributed computing in SAC-2]
- Seitz
- 1990

Context: ...egular regarding its computation weight. This might be not true due to side effects of the underlying Computer Algebra system. This algorithm was already distributed on a net of workstations by Seitz [30, 31]. His system, called DSAC-2, uses hand-crafted low-level communication in the context of the Computer Algebra system SAC-2 and its implementation language ALDES [25]. Our approach was to rely on stand...

1 | Computer Algebra and Parallelism, volume 584 of LNCS
- Zippel (editor)
- 1992