Results 1–10 of 10
A case study of multithreaded Gröbner basis completion
 Proc. of ISSAC’96
, 1996
"... We investigate sources of parallelism in the Gröbner Basis algorithm for their practical use on the desktop. Our execution environment is a standard multiprocessor workstation, and our parallel programming environment is PARSAC2 on top of a multithreaded operating system. We investigate the perf ..."
Cited by 15 (3 self)
Abstract:
We investigate sources of parallelism in the Gröbner Basis algorithm for their practical use on the desktop. Our execution environment is a standard multiprocessor workstation, and our parallel programming environment is PARSAC-2 on top of a multithreaded operating system. We investigate the performance of two main variants of our master parallel algorithm on a standard set of examples. The first version exploits only work parallelism in a strategy-compliant way. The second version investigates search parallelism in addition, where large superlinear speedups can be obtained. These speedups are due to improved S-polynomial selection behavior and therefore carry over to single-processor machines. Since we obtain our parallel variants by a controlled variation of only a few parameters in the master algorithm, we obtain new insights into the way in which different sources of parallelism interact in Gröbner Basis completion.
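The work-parallel variant described above can be pictured as a Buchberger loop whose pending S-polynomials are reduced concurrently. The sketch below is our own toy illustration under stated assumptions (polynomials as exponent-dict maps over Q, lex order, top-reduction only, a Python thread pool), not the PARSAC-2 implementation; all function names are ours.

```python
from fractions import Fraction
from itertools import combinations
from concurrent.futures import ThreadPoolExecutor

# Polynomials in Q[x, y] as {exponent-tuple: Fraction}; Python's tuple
# comparison gives the lex order, so the leading term is just max(p).

def lt(p):
    m = max(p)
    return m, p[m]

def sub(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, Fraction(0)) - c
        if r[m] == 0:
            del r[m]
    return r

def mul_term(p, m, c):
    return {tuple(a + b for a, b in zip(e, m)): cc * c for e, cc in p.items()}

def divides(m, n):
    return all(a <= b for a, b in zip(m, n))

def top_reduce(p, G):
    # Repeatedly rewrite the leading term of p by members of G.
    p = dict(p)
    while p:
        mp, cp = lt(p)
        g = next((g for g in G if divides(lt(g)[0], mp)), None)
        if g is None:
            break
        mg, cg = lt(g)
        p = sub(p, mul_term(g, tuple(a - b for a, b in zip(mp, mg)), cp / cg))
    return p

def spoly(f, g):
    mf, cf = lt(f)
    mg, cg = lt(g)
    l = tuple(max(a, b) for a, b in zip(mf, mg))
    return sub(mul_term(f, tuple(a - b for a, b in zip(l, mf)), 1 / cf),
               mul_term(g, tuple(a - b for a, b in zip(l, mg)), 1 / cg))

def buchberger(F, workers=4):
    G = [dict(f) for f in F]
    pairs = list(combinations(range(len(G)), 2))
    with ThreadPoolExecutor(workers) as ex:
        while pairs:
            batch, pairs = pairs, []
            # Work parallelism: reduce the whole batch of S-polynomials
            # concurrently against the current basis.
            rems = list(ex.map(lambda ij: top_reduce(spoly(G[ij[0]], G[ij[1]]), G), batch))
            for r in rems:
                r = top_reduce(r, G)   # re-reduce: G may have grown
                if r:
                    G.append(r)
                    pairs += [(i, len(G) - 1) for i in range(len(G) - 1)]
    return G
```

For instance, starting from {x² + y² − 1, x − y} under lex x > y, the loop derives the extra generator 2y² − 1, after which ideal-membership queries reduce to zero against the completed basis. A strategy-compliant selection order is deliberately not modeled here.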
PARSAC-2: Parallel Computer Algebra on the Desktop
, 1995
"... We give an introduction to programming methods, software systems, and algorithms, suitable for parallelizing Computer Algebra on modern multiprocessor workstations. As concrete examples we present multithreaded programming and its use in the PARSAC2 system for parallel symbolic computation, and we ..."
Cited by 7 (6 self)
Abstract:
We give an introduction to programming methods, software systems, and algorithms suitable for parallelizing Computer Algebra on modern multiprocessor workstations. As concrete examples we present multithreaded programming and its use in the PARSAC-2 system for parallel symbolic computation, and we present some examples of parallel algorithms useful for solving systems of polynomial equations.
Component-level Parallelization of Triangular Decompositions
, 2007
"... We discuss the parallelization of algorithms for solving polynomial systems symbolically by way of triangular decompositions. We introduce a componentlevel parallelism for which the number of processors in use depends on the geometry of the solution set of the input system. Our long term goal is t ..."
Cited by 3 (1 self)
Abstract:
We discuss the parallelization of algorithms for solving polynomial systems symbolically by way of triangular decompositions. We introduce a component-level parallelism for which the number of processors in use depends on the geometry of the solution set of the input system. Our long-term goal is to achieve an efficient multilevel parallelism: a coarse-grained (component) level for tasks computing geometric objects in the solution sets, and a medium/fine-grained level for polynomial arithmetic such as GCD/resultant computation within each task.
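As a toy picture of this coarse-grained pattern (our own illustration, not the authors' implementation, which operates on symbolic triangular decompositions): a system whose first equation factors splits into one geometric component per factor, and each component is refined by an independent task, so the degree of parallelism follows the geometry of the solution set.

```python
from concurrent.futures import ThreadPoolExecutor
import math

# Toy system:  (x - 1)(x - 4) = 0,   y^2 - x = 0.
# The factored first equation splits the solution set into one component
# per root; each component is refined by its own task.

def solve_component(x_root):
    # "Triangular set" for this component: { x - x_root, y^2 - x_root }.
    y = math.sqrt(x_root)
    return [(x_root, y), (x_root, -y)]

def solve_by_components(x_roots):
    # Component-level parallelism: the worker count tracks the number
    # of components, mirroring the geometry-dependent parallelism above.
    with ThreadPoolExecutor(max_workers=len(x_roots)) as ex:
        per_component = list(ex.map(solve_component, x_roots))
    return [pt for comp in per_component for pt in comp]

points = solve_by_components([1.0, 4.0])
```

The medium/fine-grained level (GCD and resultant arithmetic inside each task) is not modeled here.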
Numerical primary decomposition
 in PHCpack, Proceedings of ICMS 2006 (Nobuki
, 2003
"... Abstract. Consider an ideal I ⊂ R = C[x1,..., xn] defining a complex affine variety X ⊂ C n. We describe the components associated to I by means of numerical primary decomposition (NPD). The method is based on the construction of deflation ideal I (d) that defines the deflated variety X (d) in a com ..."
Cited by 3 (0 self)
Abstract:
Consider an ideal I ⊂ R = C[x_1, ..., x_n] defining a complex affine variety X ⊂ C^n. We describe the components associated to I by means of numerical primary decomposition (NPD). The method is based on the construction of the deflation ideal I^(d) that defines the deflated variety X^(d) in a complex space of higher dimension. For every embedded component Y there exists a d and an isolated component Y^(d) of I^(d) projecting onto Y. In turn, Y^(d) can be discovered by existing methods for prime decomposition, in particular the numerical irreducible decomposition, applied to X^(d). The concept of NPD gives a full description of the scheme Spec(R/I) by representing each component with a witness set. We propose an algorithm to produce a collection of witness sets that contains a NPD and that can be used to solve the ideal membership problem for I.
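A standard textbook example (our illustration, not taken from the paper) of the embedded components this method targets: in C[x, y],

```latex
I \;=\; (x^2,\, xy) \;=\; (x) \,\cap\, (x^2,\, y) \;\subset\; \mathbb{C}[x,y].
```

Set-theoretically V(I) is just the line x = 0, but the primary component (x², y) is supported at the origin: an embedded point with associated prime (x, y), invisible to a set-level irreducible decomposition. Detecting exactly such components is what passing to the deflation ideal I^(d) makes possible numerically.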
On the Representation of Parallel Search in Theorem Proving
 Johannes Kepler Universität
, 1997
"... This extended abstract summarizes two contributions from ongoing work on parallel search in theorem proving. First, we give a framework of definitions for parallel theorem proving, including inference system, communication operators, parallel search plan, subdivision function, parallel strategy, ..."
Cited by 1 (1 self)
Abstract:
This extended abstract summarizes two contributions from ongoing work on parallel search in theorem proving. First, we give a framework of definitions for parallel theorem proving, including inference system, communication operators, parallel search plan, subdivision function, parallel strategy, parallel derivation, fairness, and propagation of redundancy for parallel derivations. We also give a notion of a parallel strategy being a parallelization of a sequential strategy, and a theorem establishing a general relation between sequential fairness and parallel fairness. Second, we extend our approach to the modeling of search to parallel search, covering inferences (expansion and contraction), behavior of the search plan, subdivision of the search space, and communication among the processes. This model allows us to study the behavior of many search processes on a single marked search graph. In the full paper, we plan to extend our methodology for the measure of the complex...
Efficient Resource Scheduling in Multiprocessors
 UNIVERSITY OF CALIFORNIA, BERKELEY
, 1996
"... As multiprocessing becomes increasingly successful in scientific and commercial computing, parallel systems will be subjected to increasingly complex and challenging workloads. To ensure good job response and high resource utilization, algorithms are needed to allocate resources to jobs and to sch ..."
Cited by 1 (0 self)
Abstract:
As multiprocessing becomes increasingly successful in scientific and commercial computing, parallel systems will be subjected to increasingly complex and challenging workloads. To ensure good job response and high resource utilization, algorithms are needed to allocate resources to jobs and to schedule the jobs. This problem is of central importance and pervades systems research in areas as diverse as compilers, runtimes, applications, and operating systems. Despite the attention this area has received, scheduling problems in practical parallel computing still lack satisfactory solutions. The focus of system builders is to provide functionality and features; the resulting systems become so complex that many models and theoretical results lack applicability. The focus of this thesis is in ...
Fast Algorithms, Modular Methods, Parallel Approaches and Software Engineering for Solving Polynomial Systems Symbolically
, 2007
"... Symbolic methods are powerful tools in scientific computing. The implementation of symbolic solvers is, however, a highly difficult task due to the extremely high time and space complexity of the problem. In this thesis, we study and apply fast algorithms, modular methods, parallel approaches and so ..."
Abstract:
Symbolic methods are powerful tools in scientific computing. The implementation of symbolic solvers is, however, a highly difficult task due to the extremely high time and space complexity of the problem. In this thesis, we study and apply fast algorithms, modular methods, parallel approaches and software engineering techniques to improve the efficiency of symbolic solvers for computing triangular decomposition, one of the most promising methods for solving nonlinear systems of equations symbolically. We first adapt nearly optimal algorithms for polynomial arithmetic over fields to direct products of fields for polynomial multiplication, inversion and GCD computations. Then, by introducing the notion of equiprojectable decomposition, a sharp modular method for triangular decompositions based on Hensel lifting techniques is obtained. Its implementation also brings to the Maple computer algebra system a unique capacity for automatic case discussion and recombination. A high-level categorical parallel framework is developed, written in the Aldor language, to support high-performance computer algebra on symmetric multi...
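The Hensel-lifting idea behind such modular methods can be shown in its simplest univariate form: a root of f modulo p is lifted to a root modulo p² by one Newton step, a' = a − f(a)·f'(a)⁻¹ (mod p²). This is only our toy illustration of the mechanism; the thesis lifts entire triangular sets, not single roots.

```python
# One step of Hensel lifting for an integer polynomial f: given a with
# f(a) ≡ 0 (mod p) and f'(a) invertible mod p, produce a' with
# f(a') ≡ 0 (mod p^2).  Uses Python 3.8+ modular inverse via pow.

def hensel_step(f, df, a, p):
    m = p * p
    inv = pow(df(a) % m, -1, m)      # f'(a)^(-1) mod p^2
    return (a - f(a) * inv) % m

f = lambda x: x * x - 7
df = lambda x: 2 * x

a1 = 1                        # 1^2 ≡ 7 (mod 3)
a2 = hensel_step(f, df, a1, 3)
# a2 == 4, and indeed 4^2 = 16 ≡ 7 (mod 9)
```

Iterating the step doubles the precision (mod p², p⁴, ...), which is what makes modular-then-lift strategies fast in practice.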
Algorithms in computational algebraic analysis
, 2003
"... This thesis studies algorithms for symbolic computation of systems of linear partial differential equations using the corresponding ring of linear differential operators with polynomial coefficients, which is called the Weyl algebra An. BernsteinSato polynomials, one of the central notions in the ..."
Abstract:
This thesis studies algorithms for symbolic computation of systems of linear partial differential equations using the corresponding ring of linear differential operators with polynomial coefficients, which is called the Weyl algebra A_n. Bernstein-Sato polynomials, one of the central notions in the algebraic analysis of D-modules, are the topic of the first part of this work. We consider the question of constructibility of the stratum of polynomials of bounded number of variables and degree that produce a fixed Bernstein-Sato polynomial. Not only do we give a positive answer, but we construct an algorithm for computing these strata. Another theme of this thesis is two theorems of Stafford that say that every (left) ideal of A_n can be generated by two elements, and every holonomic A_n-module is cyclic, i.e. generated by one element. We reprove these results in an effective way that leads to algorithms for computation of these generators. The main engine of all our algorithms is Gröbner basis computation in the Weyl algebra. In order to speed these computations up we developed a parallel version of the Buchberger algorithm, which has been implemented and tested on supercomputers and has delivered impressive speedups on several important examples.
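What makes Gröbner basis computation in the Weyl algebra harder than the commutative case is the defining relation ∂·x − x·∂ = 1 in A_1. A quick sanity check of that relation, using our own toy representation of polynomials as coefficient lists (unrelated to the thesis' implementation):

```python
# Represent a polynomial c0 + c1*x + c2*x^2 + ... as [c0, c1, c2, ...]
# and the Weyl algebra generators as operators on such lists.

def D(p):
    # d/dx: shift coefficients down, scaled by the old exponent
    return [i * c for i, c in enumerate(p)][1:] or [0]

def X(p):
    # multiplication by x: shift coefficients up
    return [0] + p

def commutator(p):
    # (D∘X - X∘D) applied to p; by the Weyl relation this is p itself
    a, b = D(X(p)), X(D(p))
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return [u - v for u, v in zip(a, b)]
```

For any input list, commutator returns the input unchanged, confirming D∘X − X∘D = id; it is this noncommutative normalization that the parallel Buchberger runs mentioned above must perform at every reduction step.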