Results 1–9 of 9
Distributed Symbolic Computation with DTS
Proceedings of Parallel Algorithms for Irregularly Structured Problems, LNCS 980, 1995
"... We describe the design and implementation of the Distributed Threads System (DTS), a programming environment for the parallelization of irregular and highly datadependent algorithms. DTS extends the support for fork/join parallel programming from shared memory threads to a distributed memory enviro ..."
Abstract

Cited by 16 (6 self)
We describe the design and implementation of the Distributed Threads System (DTS), a programming environment for the parallelization of irregular and highly data-dependent algorithms. DTS extends the support for fork/join parallel programming from shared-memory threads to a distributed-memory environment. It is currently implemented on top of PVM, adding an asynchronous RPC abstraction and turning the network into a pool of anonymous compute servers. Each node of DTS is multithreaded using the C threads interface and is thus ready to run on a multiprocessor workstation. We give performance results for a parallel implementation of the RSA cryptosystem, parallel long integer multiplication, and parallel multivariate polynomial resultant computation.
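The fork/join pattern DTS provides can be illustrated with a minimal Python sketch (a hypothetical illustration, not the DTS or PVM API): a Karatsuba-style long integer multiplication, one of the benchmarks above, where the three subproducts of the top recursion level are forked as tasks and joined.

```python
from concurrent.futures import ThreadPoolExecutor

def karatsuba(x, y, pool=None):
    """Karatsuba multiplication of non-negative integers; if a pool is
    given, the three subproducts at this level are forked and joined."""
    if x < 2**64 or y < 2**64:           # base case: machine-size product
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> m, x & ((1 << m) - 1)  # split x = x1*2^m + x0
    y1, y0 = y >> m, y & ((1 << m) - 1)
    if pool is not None:                 # fork: submit subproducts as tasks
        f0 = pool.submit(karatsuba, x0, y0)
        f2 = pool.submit(karatsuba, x1, y1)
        f1 = pool.submit(karatsuba, x0 + x1, y0 + y1)
        z0, z2, zm = f0.result(), f2.result(), f1.result()  # join
    else:                                # sequential below the fork level
        z0, z2, zm = (karatsuba(x0, y0), karatsuba(x1, y1),
                      karatsuba(x0 + x1, y0 + y1))
    z1 = zm - z0 - z2
    return (z2 << (2 * m)) + (z1 << m) + z0

with ThreadPoolExecutor(max_workers=4) as pool:
    a, b = 3**500, 7**450
    assert karatsuba(a, b, pool) == a * b
```

Note that the recursive calls submitted to the pool do not pass the pool along, so only the top level forks; this avoids exhausting the worker pool with blocked joins, a concern DTS addresses with its pool of anonymous compute servers.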
Strategy-Accurate Parallel Buchberger Algorithms
, 1996
"... this paper we describe two parallel formulations of Buchberger algorithm, one for y ..."
Abstract

Cited by 13 (0 self)
In this paper we describe two parallel formulations of the Buchberger algorithm, one for ...
PARSAC-2: Parallel Computer Algebra on the Desktop
, 1995
"... We give an introduction to programming methods, software systems, and algorithms, suitable for parallelizing Computer Algebra on modern multiprocessor workstations. As concrete examples we present multithreaded programming and its use in the PARSAC2 system for parallel symbolic computation, and we ..."
Abstract

Cited by 7 (6 self)
We give an introduction to programming methods, software systems, and algorithms suitable for parallelizing computer algebra on modern multiprocessor workstations. As concrete examples we present multithreaded programming and its use in the PARSAC-2 system for parallel symbolic computation, and we present some examples of parallel algorithms useful for solving systems of polynomial equations.
Component-level Parallelization of Triangular Decompositions
, 2007
"... We discuss the parallelization of algorithms for solving polynomial systems symbolically by way of triangular decompositions. We introduce a componentlevel parallelism for which the number of processors in use depends on the geometry of the solution set of the input system. Our long term goal is t ..."
Abstract

Cited by 3 (1 self)
We discuss the parallelization of algorithms for solving polynomial systems symbolically by way of triangular decompositions. We introduce a component-level parallelism for which the number of processors in use depends on the geometry of the solution set of the input system. Our long-term goal is to achieve an efficient multilevel parallelism: a coarse-grained (component) level for tasks computing geometric objects in the solution sets, and a medium/fine-grained level for polynomial arithmetic such as GCD/resultant computation within each task.
Fast Algorithms, Modular Methods, Parallel Approaches and Software Engineering for Solving Polynomial Systems Symbolically
, 2007
"... Symbolic methods are powerful tools in scientific computing. The implementation of symbolic solvers is, however, a highly difficult task due to the extremely high time and space complexity of the problem. In this thesis, we study and apply fast algorithms, modular methods, parallel approaches and so ..."
Abstract
Symbolic methods are powerful tools in scientific computing. The implementation of symbolic solvers is, however, a highly difficult task due to the extremely high time and space complexity of the problem. In this thesis, we study and apply fast algorithms, modular methods, parallel approaches, and software engineering techniques to improve the efficiency of symbolic solvers for computing triangular decompositions, one of the most promising methods for solving nonlinear systems of equations symbolically. We first adapt nearly optimal algorithms for polynomial arithmetic over fields to direct products of fields for polynomial multiplication, inversion, and GCD computations. Then, by introducing the notion of equiprojectable decomposition, a sharp modular method for triangular decompositions based on Hensel lifting techniques is obtained. Its implementation also brings to the Maple computer algebra system a unique capacity for automatic case discussion and recombination. A high-level categorical parallel framework, written in the Aldor language, is developed to support high-performance computer algebra on symmetric multi ...
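The Hensel lifting at the core of such modular methods can be sketched in its simplest one-variable form (a toy illustration, not the triangular-decomposition lifting of the thesis; the name `hensel_lift_root` is invented here): a root of f modulo p with invertible derivative is lifted to a root modulo p² by one Newton step.

```python
def hensel_lift_root(f_coeffs, r, p):
    """One Hensel/Newton step: given f(r) ≡ 0 (mod p) with f'(r) a unit
    mod p, return r' with f(r') ≡ 0 (mod p**2) and r' ≡ r (mod p).
    f_coeffs lists coefficients from highest degree down."""
    def ev(cs, x, m):                     # evaluate polynomial mod m (Horner)
        acc = 0
        for c in cs:
            acc = (acc * x + c) % m
        return acc
    deriv = [c * (len(f_coeffs) - 1 - i)  # coefficients of f'
             for i, c in enumerate(f_coeffs[:-1])]
    m = p * p
    fr = ev(f_coeffs, r, m)
    inv = pow(ev(deriv, r, m), -1, m)     # f'(r)^{-1} mod p^2: a unit, since it is one mod p
    return (r - fr * inv) % m

# lift a square root of 7: 1^2 ≡ 7 (mod 3), lifted to mod 9 and then mod 81
r = hensel_lift_root([1, 0, -7], 1, 3)    # → root mod 9
assert r == 4 and (r * r - 7) % 9 == 0
r = hensel_lift_root([1, 0, -7], r, 9)    # → root mod 81
assert r == 13 and (r * r - 7) % 81 == 0
```

Iterating the step doubles the precision each time, which is what makes modular methods with lifting competitive against computing directly over the rationals.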
Optimizing mkbTT (System Description)
"... Abstract. We describe performance enhancements that have been added to mkbTT, a modern completion tool combining multicompletion with the use of termination tools. 1. ..."
Abstract
We describe performance enhancements that have been added to mkbTT, a modern completion tool combining multi-completion with the use of termination tools.
Parallel Buchberger Algorithms on Virtual Shared Memory KSR1
, 1994
"... We develop parallel versions of Buchbergers Gröbner Basis algorithm for a virtual shared memory KSR1 computer. A coarse grain version does Spolynomial reduction concurrently and respects the same critical pair selection strategy as the sequential algorithm. A fine grain version parallelizes polynom ..."
Abstract
We develop parallel versions of Buchberger's Gröbner basis algorithm for a virtual shared memory KSR1 computer. A coarse-grain version does S-polynomial reduction concurrently and respects the same critical-pair selection strategy as the sequential algorithm. A fine-grain version parallelizes polynomial reduction in a pipeline and can be combined with the parallel S-polynomial reduction. The algorithms are designed for a virtual shared memory architecture and a dynamic memory management with concurrent garbage collection implemented in the MAS computer algebra system. We discuss the achieved speedup figures for up to 24 processors on some standard examples.
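The coarse-grain scheme can be sketched in Python (a toy model, not the KSR1/MAS implementation): polynomials as dicts from exponent tuples to rational coefficients under lex order, with each round's pending S-polynomials reduced concurrently by a thread pool. The batch pair handling here is an illustrative simplification and does not reproduce the paper's strategy-accurate sequential pair selection.

```python
from fractions import Fraction
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

# A polynomial is a dict {exponent tuple: Fraction}; tuple comparison gives lex order.

def lm(p):                      # leading monomial: lex-largest exponent tuple
    return max(p)

def divides(a, b):              # does monomial a divide monomial b?
    return all(x <= y for x, y in zip(a, b))

def add(p, q):                  # polynomial sum, dropping zero coefficients
    r = dict(p)
    for e, c in q.items():
        c2 = r.get(e, Fraction(0)) + c
        if c2:
            r[e] = c2
        else:
            r.pop(e, None)
    return r

def term_mul(p, e, c):          # multiply p by the term c * x^e
    return {tuple(x + y for x, y in zip(e, m)): c * cm for m, cm in p.items()}

def reduce_poly(p, G):          # full reduction of p modulo the set G
    p, r = dict(p), {}
    while p:
        e = lm(p)
        for g in G:
            if divides(lm(g), e):
                q = tuple(x - y for x, y in zip(e, lm(g)))
                p = add(p, term_mul(g, q, -p[e] / g[lm(g)]))  # cancel lead term
                break
        else:                   # no divisor: move leading term to remainder
            r[e] = p.pop(e)
    return r

def s_poly(f, g):               # S-polynomial of f and g
    L = tuple(max(x, y) for x, y in zip(lm(f), lm(g)))
    tf = term_mul(f, tuple(a - b for a, b in zip(L, lm(f))), 1 / f[lm(f)])
    tg = term_mul(g, tuple(a - b for a, b in zip(L, lm(g))), 1 / g[lm(g)])
    return add(tf, {e: -c for e, c in tg.items()})

def buchberger(F, workers=4):
    G = [dict(f) for f in F]
    pairs = list(combinations(range(len(G)), 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while pairs:
            # coarse grain: reduce this round's S-polynomials concurrently
            rems = list(pool.map(
                lambda ij: reduce_poly(s_poly(G[ij[0]], G[ij[1]]), G), pairs))
            pairs = []
            for r in rems:
                if r:           # nonzero remainder: extend basis, queue new pairs
                    pairs += [(i, len(G)) for i in range(len(G))]
                    G.append(r)
    return G

# x^2 + y^2 - 1 and x - y in Q[x, y], lex x > y: the basis acquires 2y^2 - 1
F = [{(2, 0): Fraction(1), (0, 2): Fraction(1), (0, 0): Fraction(-1)},
     {(1, 0): Fraction(1), (0, 1): Fraction(-1)}]
assert any(lm(g) == (0, 2) for g in buchberger(F))
```

Since the remainders of one round are all reduced against the same intermediate basis, they are independent tasks; the paper's fine-grain pipeline would instead parallelize inside each `reduce_poly` call.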