Results 11 – 20 of 85
Towards a tighter coupling of bottom-up and top-down sparse matrix ordering methods
 BIT, 2001
Abstract

Cited by 29 (0 self)
Most state-of-the-art ordering schemes for sparse matrices are a hybrid of a bottom-up method such as minimum degree and a top-down scheme such as George's nested dissection. In this paper we present an ordering algorithm that achieves a tighter coupling of bottom-up and top-down methods. In our methodology vertex separators are interpreted as the boundaries of the remaining elements in an unfinished bottom-up ordering. As a consequence, we are using bottom-up techniques such as quotient graphs and special node selection strategies for the construction of vertex separators. Once all separators have been found, we are using them as a skeleton for the computation of several bottom-up orderings. Experimental results show that the orderings obtained by our scheme are in general better than those obtained by other popular ordering codes.
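The minimum-degree heuristic mentioned in the abstract can be sketched in a few lines; the graph, function name, and tie-breaking below are illustrative assumptions, not taken from the paper. Eliminating a vertex turns its remaining neighborhood into a clique, mirroring the fill produced by Gaussian elimination:

```python
def minimum_degree_order(adj):
    """Return an elimination order, picking a minimum-degree vertex each step."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # vertex of least current degree
        nbrs = adj.pop(v)
        for u in nbrs:                            # eliminate v: clique its neighborhood
            adj[u].discard(v)
            adj[u] |= nbrs - {u}
        order.append(v)
    return order

# Usage: a 4-cycle; every vertex starts with degree 2.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(minimum_degree_order(g))
```

Production codes (e.g. approximate minimum degree) use quotient graphs precisely to avoid forming these cliques explicitly, which is the bottom-up machinery the paper reuses for separator construction.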
An Efficient Algorithm to Compute Row and Column Counts for Sparse Cholesky Factorization
 SIAM J. Matrix Anal. Appl., 1994
Abstract

Cited by 27 (6 self)
Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is...
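A brute-force quadratic-time reference for the two counts (the paper's contribution is computing them in almost linear time); the parent-array tree encoding, the names, and the convention that a vertex counts as its own descendant are assumptions made here for illustration:

```python
def counts(parent, edges):
    """Brute-force versions of the two counts; parent[root] is None."""
    n = len(parent)
    children = {u: [] for u in range(n)}
    for u, p in enumerate(parent):
        if p is not None:
            children[p].append(u)

    def desc(v):                      # descendants of v, including v itself
        out, stack = set(), [v]
        while stack:
            u = stack.pop()
            out.add(u)
            stack.extend(children[u])
        return out

    adj = {u: set() for u in range(n)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    D = {v: desc(v) for v in range(n)}
    # Problem 1: descendants w of v with some descendant of w adjacent to v.
    first = [sum(1 for w in D[v] if D[w] & adj[v]) for v in range(n)]
    # Problem 2: ancestors a of v adjacent to at least one descendant of v.
    second = [sum(1 for a in range(n)
                  if a != v and v in D[a] and adj[a] & D[v]) for v in range(n)]
    return first, second

# Usage: path tree 0 <- 1 <- 2 with a single graph edge (0, 2).
print(counts([None, 0, 1], [(0, 2)]))
```

The fast algorithms in the paper replace the explicit descendant sets with disjoint-set union over the tree, which is where the inverse-Ackermann factor comes from.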
The Hierarchical Basis Multigrid Method And Incomplete LU Decomposition
 In Seventh International Symposium on Domain Decomposition Methods for Partial Differential Equations, 1994
Abstract

Cited by 26 (7 self)
A new multigrid or incomplete LU technique is developed in this paper for solving large sparse algebraic systems from discretizing partial differential equations. By exploring some deep connection between the hierarchical basis method and incomplete LU decomposition, the resulting algorithm can be effectively applied to problems discretized on completely unstructured grids. Numerical experiments demonstrating the efficiency of the method are also reported. Key words. Finite element, hierarchical basis, multigrid, incomplete LU. AMS(MOS) subject classifications. 65F10, 65N20 1. Introduction. In this work, we explore the connection between the methods of sparse Gaussian elimination [8][13], incomplete LU (ILU) decomposition [9][10] and the hierarchical basis multigrid (HBMG) [16][4]. Hierarchical basis methods have proved to be one of the more robust classes of methods for solving broad classes of elliptic partial differential equations, especially the large systems arising in conju...
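A minimal ILU(0) sketch in pure Python, illustrating the incomplete-factorization side of the discussion; it is not the paper's HBMG algorithm, and the dense storage and names are illustrative assumptions. Only entries inside the nonzero pattern of A are updated, so no fill is created:

```python
def ilu0(A):
    """ILU(0): LU factors restricted to the nonzero pattern of A.
    L (unit diagonal, not stored) and U share the returned matrix."""
    n = len(A)
    LU = [row[:] for row in A]
    pat = {(i, j) for i in range(n) for j in range(n) if A[i][j] != 0}
    for i in range(1, n):
        for k in range(i):
            if (i, k) not in pat:
                continue
            LU[i][k] /= LU[k][k]               # multiplier l_ik
            for j in range(k + 1, n):
                if (i, j) in pat:              # drop any update outside the pattern
                    LU[i][j] -= LU[i][k] * LU[k][j]
    return LU

# The zero entries at (1, 2) and (2, 1) stay zero: that fill is dropped.
A = [[4.0, 1.0, 1.0],
     [1.0, 4.0, 0.0],
     [1.0, 0.0, 4.0]]
F = ilu0(A)
```

The paper's observation is that with the right (hierarchical) ordering and a controlled amount of fill, such an incomplete factorization behaves like a multigrid cycle.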
A Wide-Range Algorithm for Minimal Triangulation from an Arbitrary Ordering
2003
Abstract

Cited by 26 (19 self)
We present a new algorithm, called LB-Triang, which computes minimal triangulations.
Minimal triangulations of graphs: A survey
 Discrete Mathematics
Abstract

Cited by 25 (3 self)
Any given graph can be embedded in a chordal graph by adding edges, and the resulting chordal graph is called a triangulation of the input graph. In this paper we study minimal triangulations, which are the result of adding an inclusion-minimal set of edges to produce a triangulation. This topic was first studied from the standpoint of sparse matrices and vertex elimination in graphs. Today we know that minimal triangulations are closely related to minimal separators of the input graph. Since the first papers presenting minimal triangulation algorithms appeared in 1976, several characterizations of minimal triangulations have been proved, and a variety of algorithms exist for computing minimal triangulations of both general and restricted graph classes. This survey presents and ties together these results in a unified modern notation, keeping an emphasis on the algorithms.
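The vertex-elimination view mentioned in the abstract is the classical "elimination game": eliminating vertices in some order and connecting each vertex's remaining neighbors always yields a triangulation of G, though a minimal one only for some orders. A small sketch with illustrative names:

```python
def elimination_fill(adj, order):
    """Return the fill edges added by eliminating vertices in `order`."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    fill = set()
    for v in order:
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        for u in nbrs:                            # clique the remaining neighborhood
            for w in nbrs:
                if u < w and w not in adj[u]:
                    adj[u].add(w); adj[w].add(u)
                    fill.add((u, w))
    return fill

# Usage: eliminating a 4-cycle in order 0,1,2,3 adds the single chord (1, 3).
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(elimination_fill(g, [0, 1, 2, 3]))
```

Adding the original edges to the fill set gives a chordal supergraph; the survey's subject is characterizing and computing the orders (and other mechanisms) for which this fill is inclusion-minimal.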
Autotuning Performance on Multicore Computers
2008
Abstract

Cited by 24 (8 self)
Data Structures and Programming Techniques for the Implementation of Karmarkar's Algorithm
1989
Abstract

Cited by 23 (5 self)
This paper describes data structures and programming techniques used in an implementation of Karmarkar's algorithm for linear programming. Most of our discussion focuses on applying Gaussian elimination toward the solution of a sequence of sparse symmetric positive definite systems of linear equations, the main requirement in Karmarkar's algorithm. Our approach relies on a direct factorization scheme, with an extensive symbolic factorization step performed in a preparatory stage of the linear programming algorithm. An interpretive version of Gaussian elimination makes use of the symbolic information to perform the actual numerical computations at each iteration of the algorithm. We also discuss ordering algorithms that attempt to reduce the amount of fill-in in the LU factors, a procedure to build the linear system solved at each iteration, the use of a dense window data structure in the Gaussian elimination method, a preprocessing procedure designed to increase the sparsity of the linear programming coefficient matrix, and the special treatment of dense columns in the coefficient matrix.
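The numeric step repeated at every Karmarkar iteration (factor a symmetric positive definite matrix whose structure is fixed while its values change, then solve) can be illustrated with a toy dense Cholesky; this is a sketch of the generic kernel only, not the paper's sparse, symbolically preprocessed implementation:

```python
import math

def cholesky(M):
    """Lower-triangular L with L L^T = M, for small dense SPD M."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = M[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

def solve_spd(M, b):
    """Solve M x = b via Cholesky: forward then backward substitution."""
    L = cholesky(M)
    n = len(b)
    y = [0.0] * n                      # forward solve: L y = b
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n                      # backward solve: L^T x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

In the interior-point setting the matrix has the form A D^2 A^T with only D changing per iteration, which is why the paper can pay for the symbolic (structural) analysis once and repeat only this numeric phase.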
A Computational Scheme For Reasoning In Dynamic Probabilistic Networks
1992
Abstract

Cited by 23 (0 self)
A computational scheme for reasoning about dynamic systems using (causal) probabilistic networks is presented. The scheme is based on the framework of Lauritzen & Spiegelhalter (1988), and may be viewed as a generalization of the inference methods of classical time-series analysis in the sense that it allows description of nonlinear, multivariate dynamic systems with complex conditional independence structures. Further, the scheme provides a method for efficient backward smoothing and possibilities for efficient, approximate forecasting methods. The scheme has been implemented on top of the HUGIN shell.
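The classical time-series special case that such a scheme generalizes is forward-backward smoothing on a hidden Markov model; the following toy implementation (illustrative names and numbers, no connection to the paper's HUGIN-based system) shows the filtering and backward-smoothing passes:

```python
def _norm(v):
    s = sum(v)
    return [x / s for x in v]

def smooth(prior, trans, emit, obs):
    """Smoothed marginals P(x_t | all observations) for a discrete HMM."""
    k, n = len(prior), len(obs)
    # forward (filtering) pass: alpha_t(s) ∝ P(x_t = s, obs_1..t)
    alpha = [_norm([prior[s] * emit[s][obs[0]] for s in range(k)])]
    for t in range(1, n):
        a = [sum(alpha[-1][r] * trans[r][s] for r in range(k)) * emit[s][obs[t]]
             for s in range(k)]
        alpha.append(_norm(a))
    # backward pass: beta_t(s) ∝ P(obs_{t+1}..n | x_t = s)
    beta = [[1.0] * k for _ in range(n)]
    for t in range(n - 2, -1, -1):
        beta[t] = _norm([sum(trans[s][r] * emit[r][obs[t + 1]] * beta[t + 1][r]
                             for r in range(k)) for s in range(k)])
    return [_norm([alpha[t][s] * beta[t][s] for s in range(k)]) for t in range(n)]
```

In the paper's junction-tree formulation the chain of time slices is replaced by a general dynamic network, but the two-pass filter/smoother structure is the same idea.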
A Practical Algorithm for Making Filled Graphs Minimal
 THEOR. COMP. SC, 2001
Abstract

Cited by 23 (13 self)
For an arbitrary filled graph G+ of a given original graph G, we consider the problem of removing fill edges from G+ in order to obtain a graph M that is both a minimal filled graph of G and a subgraph of G+. For G+ with f fill edges and e original edges, we give a simple O(f(e+f)) algorithm which solves the problem and computes a corresponding minimal elimination ordering of G. We report on experiments with an implementation of our algorithm, where we test graphs G corresponding to some real sparse matrix applications and apply well-known and widely used ordering heuristics to find G+. Our findings show the amount of fill that is commonly removed by a minimalization for each of these heuristics, and also indicate that the runtime of our algorithm on these practical graphs is better than the presented worst-case bound.
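A brute-force version of the minimalization task (far slower than the paper's O(f(e+f)) algorithm, and not its method): greedily drop fill edges as long as the graph stays chordal, testing chordality by repeatedly deleting simplicial vertices. All names are illustrative:

```python
def is_chordal(adj):
    """Chordality test: repeatedly delete a simplicial vertex (cubic-time toy)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    while adj:
        for v in adj:
            nbrs = adj[v]
            if all(w in adj[u] for u in nbrs for w in nbrs if u != w):
                break                       # v is simplicial: N(v) is a clique
        else:
            return False                    # no simplicial vertex: not chordal
        for u in adj.pop(v):
            adj[u].discard(v)
    return True

def minimalize(orig_edges, fill_edges):
    """Greedily remove fill edges whose removal keeps the graph chordal."""
    verts = {v for e in list(orig_edges) + list(fill_edges) for v in e}
    def graph(fset):
        adj = {v: set() for v in verts}
        for a, b in list(orig_edges) + list(fset):
            adj[a].add(b); adj[b].add(a)
        return adj
    fill = set(fill_edges)
    changed = True
    while changed:
        changed = False
        for e in list(fill):
            if is_chordal(graph(fill - {e})):
                fill.discard(e)
                changed = True
    return fill

# Usage: a 4-cycle triangulated with both diagonals needs only one of them.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(minimalize(square, [(0, 2), (1, 3)]))
```

When no single fill edge can be removed without destroying chordality, the resulting filled graph is minimal, which is the stopping criterion this sketch relies on.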
The Incomplete Factorization Multigraph Algorithm
 SIAM J. SCI. COMPUT
Abstract

Cited by 23 (4 self)
We present a new family of multigraph algorithms, ILU-MG, based upon an incomplete sparse matrix factorization using a particular ordering and allowing a limited amount of fill-in. While much of the motivation for multigraph comes from multigrid ideas, ILU-MG is distinctly different from algebraic multilevel methods. The graph of the sparse matrix A is recursively coarsened by eliminating vertices using a graph model similar to Gaussian elimination. Incomplete factorizations are obtained by allowing only the fill-in generated by the vertex parents associated with each vertex. Multigraph is numerically compared with algebraic multigrid on some examples arising from discretizations of partial differential equations on unstructured grids.