Results 1 - 9 of 9
A.: An object-oriented platform for distributed high-performance Symbolic Computation
Mathematics and Computers in Simulation 49, 1999
Abstract - Cited by 17 (11 self)
We describe the Distributed Object-Oriented Threads System (DOTS), a programming environment designed to support object-oriented fork/join parallel programming in a heterogeneous distributed environment. A mixed network of Windows NT PCs and UNIX workstations is transformed by DOTS into a homogeneous pool of anonymous compute servers that together form a multicomputer. DOTS is a complete redesign of the Distributed Threads System (DTS), using the object-oriented paradigm both in its internal implementation and in the programming model it supports. It has been used to parallelize applications in the fields of computer algebra and computer graphics. We also give a brief account of applications in the domain of symbolic computation that were developed using DTS. Key words: distributed threads system, heterogeneous networks, Windows NT
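The fork/join style that DOTS supports can be illustrated with a minimal sketch. This is not the DOTS C++ API; it uses Python's concurrent.futures purely to show the pattern of forking independent subcomputations to a pool of workers and joining their results (the polynomial-evaluation workload is an arbitrary stand-in).

```python
from concurrent.futures import ThreadPoolExecutor

def poly_eval(coeffs, x):
    """Horner evaluation of a polynomial given low-to-high coefficients."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# Fork: evaluate the same polynomial at several points concurrently;
# join: collect the results in submission order.
coeffs = [1, 0, 2]            # 1 + 2*x**2
points = [0, 1, 2, 3]
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(poly_eval, coeffs, x) for x in points]  # fork
    results = [f.result() for f in futures]                        # join
print(results)  # [1, 3, 9, 19]
```

In DOTS itself, the pool would be the heterogeneous network of anonymous compute servers rather than local threads, but the fork/join control structure is the same.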
Distributed Symbolic Computation with DTS
In Proceedings of Parallel Algorithms for Irregularly Structured Problems, LNCS 980, 1995
Abstract - Cited by 16 (6 self)
We describe the design and implementation of the Distributed Threads System (DTS), a programming environment for the parallelization of irregular and highly data-dependent algorithms. DTS extends the support for fork/join parallel programming from shared-memory threads to a distributed-memory environment. It is currently implemented on top of PVM, adding an asynchronous RPC abstraction and turning the net into a pool of anonymous compute servers. Each node of DTS is multithreaded using the C threads interface and is thus ready to run on a multiprocessor workstation. We give performance results for a parallel implementation of the RSA cryptosystem, parallel long integer multiplication, and parallel multivariate polynomial resultant computation.
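Parallel long integer multiplication, one of the benchmarks above, decomposes naturally into independent subproducts that fit the fork/join asynchronous-RPC model of DTS. A sequential Karatsuba sketch in plain Python (not DTS code; the cutoff value is an arbitrary assumption) shows the three independent recursive calls that could each become a remote call:

```python
def karatsuba(x, y, cutoff=1 << 32):
    # Below the cutoff, native multiplication beats splitting.
    if x < cutoff or y < cutoff:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)
    yh, yl = y >> n, y & ((1 << n) - 1)
    # The three subproducts are independent, so in a DTS-style system
    # each could be shipped to a compute server as an asynchronous RPC.
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl)
    # Recombine: x*y = a*2^(2n) + (c - a - b)*2^n + b.
    return (a << (2 * n)) + ((c - a - b) << n) + b

x = 3141592653589793238462643383279502884197
y = 2718281828459045235360287471352662497757
print(karatsuba(x, y) == x * y)  # True
```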
The Design of the PACLIB Kernel for Parallel Algebraic Computation
In ACPC-2, LNCS vol. 734, 1993
Abstract - Cited by 10 (4 self)
This paper describes the runtime kernel of PACLIB, a new system for parallel algebraic computation on shared-memory computers. PACLIB has been developed as a professional tool for the simple design and efficient implementation of parallel algorithms in computer algebra and related areas. It provides concurrency, shared-memory communication, non-determinism, speculative parallelism, streams, and parallelized garbage collection. We explain the main design decisions as motivated by the special demands of algebraic computation and give several benchmarks that demonstrate the performance of the system. PACLIB has been implemented on a Sequent Symmetry multiprocessor and is portable to other shared-memory machines and workstations.

1 Introduction
Computer algebra is the branch of computer science that aims to provide exact solutions of scientific problems. Research results of this area are e.g. algorithms for symbolic integration, polynomial factorization, or the exact solution of algeb...
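Speculative parallelism, one of the features listed above, means starting alternative computations for the same result and committing the first one to finish. A minimal sketch, assuming nothing about the PACLIB API and using Python threads purely for illustration:

```python
import time
import concurrent.futures as cf

def slow_method(x):
    time.sleep(0.2)       # stands in for an expensive algorithm
    return x * x

def fast_method(x):
    return x * x          # an alternative route to the same answer

# Speculatively run both; commit the first result that arrives and
# cancel the loser (best effort: a running thread is not interrupted).
with cf.ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(slow_method, 7), pool.submit(fast_method, 7)]
    done, pending = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
    result = next(iter(done)).result()
    for f in pending:
        f.cancel()
print(result)  # 49
```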
A fine-grained parallel completion procedure
In ISSAC '94: Proceedings of the International Symposium on Symbolic and Algebraic Computation, 1994
PARSAC-2: Parallel Computer Algebra on the Desk-Top
1995
Abstract - Cited by 7 (6 self)
We give an introduction to programming methods, software systems, and algorithms suitable for parallelizing computer algebra on modern multiprocessor workstations. As concrete examples, we present multithreaded programming and its use in the PARSAC-2 system for parallel symbolic computation, and we present some examples of parallel algorithms useful for solving systems of polynomial equations.
Virtual Tasks for the PACLIB Kernel
In Parallel Processing: CONPAR 94 - VAPP VI, International Conference on Parallel and Vector Processing, 1994
Abstract - Cited by 6 (5 self)
We have extended the task management scheme of the parallel computer algebra package PACLIB. This extension supports "virtual tasks" (tasks that are not yet executable), which are created more efficiently than "real tasks" (tasks that are immediately scheduled for execution). Virtual tasks become real only when the system is idling or when existing real tasks can be recycled. Consequently, the overhead for task creation and synchronization, as well as the memory requirements of a parallel program, may be reduced. We analyze the system theoretically and experimentally and compare it with another virtual task package.

1 Introduction
The purpose of this paper is twofold: first, it reports the extension of the task management scheme for a parallel programming package developed at our institute. Second, it carefully investigates the semantic and performance consequences of this modification and compares them with the results reported for a system that was developed elsewhere with similar objectives...
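The virtual-task idea can be sketched independently of PACLIB: a virtual task is just a stored closure, and a joining parent that finds it still unexecuted simply runs it inline on its own stack instead of paying for a scheduled real task. The names below (VirtualTask, fork, join) are illustrative, not the PACLIB interface:

```python
import threading
from collections import deque

class VirtualTask:
    """A task that is only a stored closure until someone demands it.
    Creating it costs one object allocation, not a thread or a stack."""
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args
        self.started = False
        self.result = None
        self.lock = threading.Lock()

    def force(self):
        # Whoever arrives first runs the body; a joining parent thereby
        # recycles its own thread rather than waiting on a real task.
        with self.lock:
            if not self.started:
                self.started = True
                self.result = self.fn(*self.args)
        return self.result

work_pool = deque()     # virtual tasks an idle worker could promote

def fork(fn, *args):
    t = VirtualTask(fn, *args)
    work_pool.append(t)  # stays virtual: nothing is scheduled yet
    return t

def join(t):
    return t.force()     # run inline if nobody promoted it to real

tasks = [fork(lambda n=n: n * n) for n in range(5)]
print([join(t) for t in tasks])  # [0, 1, 4, 9, 16]
```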
Parallel Computer Algebra on the Desk-Top
1995
Abstract
We report on the development of PARSAC-2, a library of parallel algebraic algorithms designed specifically for networks of multiprocessor workstations. PARSAC-2 is built upon the S-threads system environment for multithreaded symbolic computation. S-threads provides virtual parallelism by mapping thousands of very lightweight processes onto the processors of a workstation. It is currently being extended with network functionality, so that heavyweight processes can be mapped across the network while preserving the S-threads interface. The current goal of algorithm development in PARSAC is the construction of a parallel polynomial equation solver using Groebner bases. We report on the design of a strategy-compliant parallel Groebner basis computation with factorization.

Introduction
Symbolic computation is a high-level computational task, which makes it comparatively complex and slow. However, it is increasingly applied in science and engineering [FGHK94], and any significant increase i...
Parallel Buchberger Algorithms on Virtual Shared Memory KSR1
1994
Abstract
We develop parallel versions of Buchberger's Gröbner basis algorithm for a virtual shared memory KSR1 computer. A coarse-grain version does S-polynomial reduction concurrently and respects the same critical pair selection strategy as the sequential algorithm. A fine-grain version parallelizes polynomial reduction in a pipeline and can be combined with the parallel S-polynomial reduction. The algorithms are designed for a virtual shared memory architecture and a dynamic memory management with concurrent garbage collection implemented in the MAS computer algebra system. We discuss the achieved speedup figures for up to 24 processors on some standard examples.
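The coarse-grain scheme can be sketched on a univariate toy, where a Gröbner basis computation degenerates to a gcd: each round reduces a whole batch of S-polynomials concurrently, then extends the basis sequentially so the pair selection order is preserved. This is a Python illustration under those simplifying assumptions, not the MAS implementation, and it omits the paper's selection strategy and pipelining refinements:

```python
from fractions import Fraction
from concurrent.futures import ThreadPoolExecutor

# A polynomial is a low-to-high list of Fraction coefficients.

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def is_zero(p):
    return all(c == 0 for c in p)

def shift_scale(p, k, c):
    # c * x**k * p: prepending k zeros multiplies by x**k.
    return [Fraction(0)] * k + [c * a for a in p]

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return trim([a - b for a, b in zip(p, q)])

def spoly(f, g):
    # Cancel the leading terms of f and g at their common degree d.
    d = max(len(f), len(g)) - 1
    return sub(shift_scale(f, d - len(f) + 1, 1 / f[-1]),
               shift_scale(g, d - len(g) + 1, 1 / g[-1]))

def reduce_poly(p, basis):
    done = False
    while not done and not is_zero(p):
        done = True
        for g in basis:
            if len(g) <= len(p):  # g's leading term divides p's
                p = sub(p, shift_scale(g, len(p) - len(g), p[-1] / g[-1]))
                done = False
                break
    return p

def buchberger(polys, workers=4):
    basis = [trim([Fraction(c) for c in p]) for p in polys]
    pairs = [(i, j) for i in range(len(basis)) for j in range(i)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while pairs:
            batch, pairs = pairs, []
            # Coarse grain: reduce every S-polynomial of the batch in
            # parallel; the basis is read-only while the batch runs.
            rems = list(pool.map(
                lambda ij: reduce_poly(spoly(basis[ij[0]], basis[ij[1]]),
                                       basis), batch))
            # Sequential update preserves the pair selection order.
            for r in rems:
                if not is_zero(r):
                    pairs += [(len(basis), j) for j in range(len(basis))]
                    basis.append(r)
    return basis

# gcd(x**2 - 1, x**2 - 3x + 2) = x - 1, so x - 1 reduces to zero.
basis = buchberger([[-1, 0, 1], [2, -3, 1]])
print(is_zero(reduce_poly([Fraction(-1), Fraction(1)], basis)))  # True
```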