Results 1–10 of 10
Parallel matrix multiplication on a linear array with a reconfigurable pipelined bus system
 Proceedings of IPPS/SPDP ’99 (2nd Merged Symposium of the 13th International Parallel Processing Symposium and the 10th Symposium on Parallel and Distributed Processing)
, 1999
"... The known fast sequential algorithms for multiplying two N N matrices (over an arbitrary ring) have time complexity O(N), where 2 < < 3. The current best value of is less than 2.3755. We show that for all 1 p N,multiplying two N N matrices can be performed on a pprocessor linear array with a ..."
Abstract

Cited by 22 (6 self)
The known fast sequential algorithms for multiplying two N × N matrices (over an arbitrary ring) have time complexity O(N^ω), where 2 < ω < 3. The current best value of ω is less than 2.3755. We show that for all 1 ≤ p ≤ N, multiplying two N × N matrices can be performed on a p-processor linear array with a reconfigurable pipelined bus system (LARPBS) in O(N…
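The sub-cubic exponent ω this abstract refers to goes back to Strassen's recursive scheme (ω = log₂ 7 ≈ 2.807, above the 2.3755 quoted but already below 3). The LARPBS algorithm itself is not reproduced here; the following sequential Python sketch (illustrative only, power-of-two sizes, no cutoff to the naive method) shows where a sub-cubic exponent comes from — seven recursive products instead of eight:

```python
# Minimal Strassen multiplication sketch for n x n matrices, n a power of two.
# Illustrative only: real implementations switch to the naive method below a
# size cutoff and pad arbitrary shapes up to a power of two.

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split both operands into quadrants.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # Seven recursive products instead of the naive eight.
    M1 = strassen(mat_add(A11, A22), mat_add(B11, B22))
    M2 = strassen(mat_add(A21, A22), B11)
    M3 = strassen(A11, mat_sub(B12, B22))
    M4 = strassen(A22, mat_sub(B21, B11))
    M5 = strassen(mat_add(A11, A12), B22)
    M6 = strassen(mat_sub(A21, A11), mat_add(B11, B12))
    M7 = strassen(mat_sub(A12, A22), mat_add(B21, B22))
    C11 = mat_add(mat_sub(mat_add(M1, M4), M5), M7)
    C12 = mat_add(M3, M5)
    C21 = mat_add(M2, M4)
    C22 = mat_add(mat_sub(mat_add(M1, M3), M2), M6)
    # Reassemble the result from its quadrants.
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

The recurrence T(n) = 7 T(n/2) + O(n²) solves to O(n^{log₂ 7}), which is the prototype for all the fast algorithms the abstract alludes to.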
Algorithmic Aspects of Symbolic Switch Network Analysis
 IEEE Trans. CAD/IC
, 1987
"... A network of switches controlled by Boolean variables can be represented as a system of Boolean equations. The solution of this system gives a symbolic description of the conducting paths in the network. Gaussian elimination provides an efficient technique for solving sparse systems of Boolean eq ..."
Abstract

Cited by 22 (5 self)
A network of switches controlled by Boolean variables can be represented as a system of Boolean equations. The solution of this system gives a symbolic description of the conducting paths in the network. Gaussian elimination provides an efficient technique for solving sparse systems of Boolean equations. For the class of networks that arise when analyzing digital metal-oxide semiconductor (MOS) circuits, a simple pivot selection rule guarantees that most networks with s switches encountered in practice can be solved with O(s) operations. When represented by a directed acyclic graph, the set of Boolean formulas generated by the analysis has total size bounded by the number of operations required by the Gaussian elimination. This paper presents the mathematical basis for systems of Boolean equations, their solution by Gaussian elimination, and data structures and algorithms for representing and manipulating Boolean formulas.
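The elimination idea in this abstract can be pictured with a toy node-elimination procedure (a hand-rolled sketch, not the paper's algorithm or data structures — the paper uses Gaussian elimination with DAG-shared formulas, not strings): removing an internal node of a switch network merges, for every pair of its neighbours, the series composition of the two incident switch formulas into the direct edge between them.

```python
# Toy symbolic switch-network reduction: edges of an undirected graph carry
# Boolean formulas (as Python expression strings over switch variables);
# eliminating an internal node reroutes conduction through it by series
# (and) / parallel (or) composition.

def eliminate(edges, node):
    """Remove `node`, rerouting conducting paths that pass through it."""
    incident = {e: f for e, f in edges.items() if node in e}
    for e in incident:
        del edges[e]
    nbrs = sorted({v for e in incident for v in e if v != node})
    for i, u in enumerate(nbrs):
        for w in nbrs[i + 1:]:
            series = "(%s and %s)" % (incident[frozenset((u, node))],
                                      incident[frozenset((node, w))])
            key = frozenset((u, w))
            # Parallel composition with any existing direct edge.
            edges[key] = "(%s or %s)" % (edges[key], series) if key in edges else series
    return edges

# Network: a -x- b -y- c, plus a direct switch a -z- c.
edges = {frozenset("ab"): "x", frozenset("bc"): "y", frozenset("ac"): "z"}
eliminate(edges, "b")
path_ac = edges[frozenset("ac")]   # "(z or (x and y))"
```

Evaluating `path_ac` under an assignment of switch states tells whether a and c conduct; the paper's contribution is doing this elimination with a pivot order that keeps the shared-formula representation linear in practice.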
Symbolic and numeric methods for exploiting structure in constructing resultant matrices
 J. SYMB. COMP
, 2001
"... Resultants characterize the existence of roots of systems of multivariate nonlinear polynomial equations, while their matrices reduce the computation of all common zeros to a problem in linear algebra. Sparse elimination theory has introduced the sparse resultant, which takes into account the sparse ..."
Abstract

Cited by 20 (12 self)
Resultants characterize the existence of roots of systems of multivariate nonlinear polynomial equations, while their matrices reduce the computation of all common zeros to a problem in linear algebra. Sparse elimination theory has introduced the sparse resultant, which takes into account the sparse structure of the polynomials. The construction of sparse resultant, or Newton, matrices is the critical step in the computation of the multivariate resultant and the solution of a nonlinear system. We reveal and exploit the quasi-Toeplitz structure of the Newton matrix, thus decreasing the time complexity of constructing such matrices by roughly one order of magnitude to achieve quasi-quadratic complexity in the matrix dimension. The space complexity is decreased analogously. These results imply similar improvements in the complexity of computing the resultant polynomial itself and of solving zero-dimensional systems. Our approach relies on fast vector-by-matrix multiplication and uses the following two methods as building blocks. First, a fast and numerically stable method for determining the rank of rectangular matrices, which works exclusively over floating point arithmetic. Second, exact polynomial arithmetic algorithms that improve upon the complexity of polynomial multiplication under our model of sparseness, offering bounds linear in the number of variables and the number of nonzero terms.
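The exact sparse polynomial arithmetic used as a building block can be pictured with a minimal dictionary-based multiplication (an illustrative sketch, not the paper's algorithm, whose improved bounds depend on its specific sparseness model):

```python
# Sparse multivariate polynomial multiplication over the integers.
# A polynomial is a dict mapping exponent vectors (tuples) to coefficients;
# only nonzero terms are stored, so the work scales with the number of terms
# actually present rather than with the dense degree bound.

def sparse_mul(p, q):
    out = {}
    for ea, ca in p.items():
        for eb, cb in q.items():
            e = tuple(i + j for i, j in zip(ea, eb))  # exponents add
            c = out.get(e, 0) + ca * cb
            if c:
                out[e] = c
            else:
                out.pop(e, None)                      # drop cancelled terms
    return out

# (x*y + 1) * (x*y - 1) == x^2*y^2 - 1; the middle terms cancel.
p = {(1, 1): 1, (0, 0): 1}
q = {(1, 1): 1, (0, 0): -1}
```

The dense alternative would allocate storage for every monomial up to the product degree; the dict representation is what makes "linear in the number of nonzero terms" a meaningful target.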
Algebraic algorithms
"... This article, along with [Elkadi and Mourrain 1996], explain the correlation between residue theory and the Dixon matrix, which yields an alternative method for studying and approximating all common solutions. In 1916, Macaulay [1916] constructed a matrix whose determinant is a multiple of the class ..."
Abstract

Cited by 4 (0 self)
This article, along with [Elkadi and Mourrain 1996], explains the correlation between residue theory and the Dixon matrix, which yields an alternative method for studying and approximating all common solutions. In 1916, Macaulay [1916] constructed a matrix whose determinant is a multiple of the classical resultant for n homogeneous polynomials in n variables. The Macaulay matrix simultaneously generalizes the Sylvester matrix and the coefficient matrix of a system of linear equations [Kapur and Lakshman Y. N. 1992]. As with the Dixon formulation, the Macaulay determinant is a multiple of the resultant. Macaulay, however, proved that a certain minor of his matrix divides the matrix determinant so as to yield the exact resultant in the case of generic homogeneous polynomials. Canny [1990] invented a general method that perturbs any polynomial system and extracts a nontrivial projection operator. Using recent results pertaining to sparse polynomial systems [Gelfand et al. 1994, Sturmfels 1991], a matrix formula for computing the sparse resultant of n + 1 polynomials in n variables was given by Canny and Emiris [1993] and subsequently improved in [Canny and Pedersen 1993, Emiris and Canny 1995]. The determinant of the sparse resultant matrix, like those of the Macaulay and Dixon matrices, only yields a projection operator, not the exact resultant. Here, sparsity means that only certain monomials in each of the n + 1 polynomials have nonzero coefficients. Sparsity is measured in geometric terms, namely, by the Newton polytope…
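For the univariate case underlying these constructions, the Sylvester matrix mentioned above is easy to build directly. A small sketch (illustrative only: exact integer arithmetic and an exponential cofactor-expansion determinant, where real systems use fraction-free elimination or subresultants) computes the resultant of two polynomials, which vanishes exactly when they share a root:

```python
# Resultant of two univariate polynomials via the Sylvester matrix.
# Coefficients are listed from highest to lowest degree.

def sylvester(f, g):
    m, n = len(f) - 1, len(g) - 1            # degrees of f and g
    size = m + n
    rows = []
    for i in range(n):                       # n shifted copies of f
        rows.append([0] * i + f + [0] * (size - m - 1 - i))
    for i in range(m):                       # m shifted copies of g
        rows.append([0] * i + g + [0] * (size - n - 1 - i))
    return rows

def det(M):
    # Cofactor expansion along the first row (fine for tiny matrices).
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        if a:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * a * det(minor)
    return total

def resultant(f, g):
    return det(sylvester(f, g))

# res(x^2 - 1, x - 2): the roots of x^2 - 1 are +/-1, and (1 - 2)(-1 - 2) = 3.
```

The Macaulay matrix described in the abstract generalizes exactly this construction from two univariate polynomials to n homogeneous polynomials in n variables.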
Parallel Output Sensitive Algorithms for Combinatorial and Linear Algebra Problems
, 2000
"... This paper gives output sensitive parallel algorithms whose performance ..."
Abstract

Cited by 2 (2 self)
This paper gives output-sensitive parallel algorithms whose performance…
On the Power of Discontinuous Approximate Computations
, 1992
"... The set of operations S 1 = f+; \Gamma; ; =; ?g is used in algebraic computations to avoid degeneracies (e.g., division by zero), but is also used in numerical computations to avoid huge roundoff errors (e.g., division by a small quantity). On the other hand, the classes of algorithms using operatio ..."
Abstract
The set of operations S1 = {+, −, ×, /, >} is used in algebraic computations to avoid degeneracies (e.g., division by zero), but is also used in numerical computations to avoid huge roundoff errors (e.g., division by a small quantity). On the other hand, the classes of algorithms using operations from the set S2 = {+, −, ×, /} or from the set S3 = {+, −, ×} are the most studied in complexity theory, and are used, e.g., to obtain fast parallel algorithms for numerical problems. In this paper, we study, by using a simulation argument, the relative power of the sets S1, S2, and S3 for computing with approximations. We prove that S2 does very efficiently simulate S1, while S3 does not; this…
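How a single comparison averts huge roundoff errors can be seen in the classic partial-pivoting example (an illustrative sketch unrelated to the paper's simulation argument; the 2×2 system is made up): without the comparison, a tiny pivot produces a huge multiplier that wipes out the answer.

```python
# Solving [[eps, 1], [1, 1]] x = [1, 2] in double precision.
# Exact solution: x1 = 1/(1 - eps) ~ 1, x2 = (1 - 2*eps)/(1 - eps) ~ 1.
# Without a comparison (no pivoting) the tiny pivot eps yields the
# multiplier 1/eps = 1e20 and catastrophic cancellation; one comparison
# (partial pivoting) fixes it.

def solve2(a, b, pivot):
    """Gaussian elimination on a 2x2 system, optionally with partial pivoting."""
    (a11, a12), (a21, a22) = a
    b1, b2 = b
    if pivot and abs(a21) > abs(a11):        # the comparison operation
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    m = a21 / a11
    a22 -= m * a12
    b2 -= m * b1
    x2 = b2 / a22
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

eps = 1e-20
A, b = [[eps, 1.0], [1.0, 1.0]], [1.0, 2.0]
x_no = solve2(A, b, pivot=False)   # x1 comes out 0.0: completely wrong
x_pv = solve2(A, b, pivot=True)    # x1 comes out 1.0: correct
```

The division-free and comparison-free classes S3 and S2 cannot make this data-dependent choice, which is exactly the kind of gap the paper's simulation argument quantifies.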
Generalized Scans and Tri-Diagonal Systems
"... Motivated by the analysis of known parallel techniques for the solution of linear tridiagonal system, weintroduce generalized scans, a class of recursively de#ned lengthpreserving, sequencetosequence transformations that generalize the wellknown pre#x computations #scans#. Generalized scan functi ..."
Abstract
Motivated by the analysis of known parallel techniques for the solution of linear tridiagonal systems, we introduce generalized scans, a class of recursively defined, length-preserving, sequence-to-sequence transformations that generalize the well-known prefix computations (scans). Generalized scan functions are described in terms of three algorithmic phases: a reduction phase, which saves data for the third, or expansion, phase and prepares data for the second phase, which is a recursive invocation of the same function on one fewer variable. Both the reduction and expansion phases operate on a bounded number of variables, a key feature for their parallelization. Generalized scans enjoy a property, called here proto-associativity, that gives rise to ordinary associativity when generalized scans are specialized to ordinary scans. We show that the solution of positive definite block tridiagonal linear systems can be cast as a generalized scan, thereby shedding light on the underlying structure enabling k…
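In the special case of ordinary scans, the reduction/recursion/expansion structure described here matches the classic work-efficient prefix-sum algorithm. A sequential simulation in Python (illustrative only, for '+' on a power-of-two length; the paper's generalized scans allow a richer, proto-associative phase structure) shows the reduction (up-sweep) and expansion (down-sweep) phases:

```python
# Work-efficient exclusive prefix sum (up-sweep / down-sweep), simulated
# sequentially. Phase 1 (reduction) builds partial sums up a binary tree;
# phase 2 (expansion) pushes prefixes back down. Each inner loop is fully
# parallel, which is the point of the bounded-variable phase structure.

def exclusive_scan(xs):
    a = list(xs)
    n = len(a)                      # assumed a power of two
    d = 1
    while d < n:                    # up-sweep: reduce pairs at stride 2*d
        for i in range(0, n, 2 * d):
            a[i + 2 * d - 1] += a[i + d - 1]
        d *= 2
    a[n - 1] = 0                    # identity of '+' at the root
    d = n // 2
    while d >= 1:                   # down-sweep: expand prefixes
        for i in range(0, n, 2 * d):
            t = a[i + d - 1]
            a[i + d - 1] = a[i + 2 * d - 1]
            a[i + 2 * d - 1] += t
        d //= 2
    return a

exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3])   # -> [0, 3, 4, 11, 11, 15, 16, 22]
```

Casting block tridiagonal elimination as a generalized scan, as the abstract proposes, reuses exactly this two-phase tree schedule with a non-associative (but proto-associative) combining step.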