Results 1–10 of 32
Exploiting Sparsity in Semidefinite Programming via Matrix Completion I: General Framework
 SIAM JOURNAL ON OPTIMIZATION
, 1999
Abstract

Cited by 62 (27 self)
A critical disadvantage of primal-dual interior-point methods against dual interior-point methods for large-scale SDPs (semidefinite programs) has been that the primal positive semidefinite variable matrix becomes fully dense in general even when all data matrices are sparse. Based on some fundamental results about positive semidefinite matrix completion, this article proposes a general method of exploiting the aggregate sparsity pattern over all data matrices to overcome this disadvantage. Our method is used in two ways. One is a conversion of a sparse SDP having a large-scale positive semidefinite variable matrix into an SDP having multiple but smaller-size positive semidefinite variable matrices, to which we can effectively apply any interior-point method for SDPs employing a standard block-diagonal matrix data structure. The other way is an incorporation of our method into primal-dual interior-point methods which we can apply directly to a given SDP. In Part II of this article, we wi...
ILUM: A Multi-Elimination ILU Preconditioner For General Sparse Matrices
 SIAM J. Sci. Comput
, 1999
Abstract

Cited by 54 (11 self)
Standard preconditioning techniques based on incomplete LU (ILU) factorizations offer a limited degree of parallelism, in general. A few of the alternatives advocated so far consist of either using some form of polynomial preconditioning, or applying the usual ILU factorization to a matrix obtained from a multicolor ordering. In this paper we present an incomplete factorization technique based on independent set orderings and multicoloring. We note that in order to improve robustness, it is necessary to allow the preconditioner to have an arbitrarily high accuracy, as is done with ILUs based on threshold techniques. The ILUM factorization described in this paper is in this category. It can be viewed as a multifrontal version of a Gaussian elimination procedure with threshold dropping which has a high degree of potential parallelism. The emphasis is on methods that deal specifically with general unstructured sparse matrices such as those arising from finite element methods on un...
Highly Parallel Sparse Cholesky Factorization
 SIAM Journal on Scientific and Statistical Computing
, 1992
Abstract

Cited by 45 (1 self)
We develop and compare several fine-grained parallel algorithms to compute the Cholesky factorization of a sparse matrix. Our experimental implementations are on the Connection Machine, a distributed-memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special-purpose algorithms in which the matrix structure conforms to the connection structure of the machine, our focus is on matrices with arbitrary sparsity structure.
Graph Partitioning Algorithms With Applications To Scientific Computing
 Parallel Numerical Algorithms
, 1997
Abstract

Cited by 41 (0 self)
Identifying the parallelism in a problem by partitioning its data and tasks among the processors of a parallel computer is a fundamental issue in parallel computing. This problem can be modeled as a graph partitioning problem in which the vertices of a graph are divided into a specified number of subsets such that few edges join two vertices in different subsets. Several new graph partitioning algorithms have been developed in the past few years, and we survey some of this activity. We describe the terminology associated with graph partitioning, the complexity of computing good separators, and graphs that have good separators. We then discuss early algorithms for graph partitioning, followed by three new algorithms based on geometric, algebraic, and multilevel ideas. The algebraic algorithm relies on an eigenvector of a Laplacian matrix associated with the graph to compute the partition. The algebraic algorithm is justified by formulating graph partitioning as a quadratic assignment p...
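The algebraic approach mentioned in this abstract can be sketched in a few lines: partition the vertices by the sign of the Fiedler vector, i.e., the eigenvector for the second-smallest eigenvalue of the graph Laplacian. A minimal illustration, assuming a small undirected graph given as an edge list (the function name and the dense eigensolver are illustrative only; a practical partitioner would use sparse eigensolvers and balancing heuristics):

```python
import numpy as np

def spectral_bisection(edges, n):
    """Partition vertices 0..n-1 by the sign of the Fiedler vector
    (eigenvector of the graph Laplacian's second-smallest eigenvalue)."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    # eigh returns eigenvalues in ascending order for symmetric matrices,
    # so column 1 is the Fiedler vector (column 0 is the constant vector).
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    neg = [v for v in range(n) if fiedler[v] < 0]
    pos = [v for v in range(n) if fiedler[v] >= 0]
    return neg, pos

# Two triangles joined by a single bridge edge (2, 3): the sign split
# separates the two triangles, cutting only the bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
a, b = spectral_bisection(edges, 6)
```

The eigenvector's overall sign is arbitrary, so either triangle may land on either side; only the induced cut is meaningful.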
Spectral Nested Dissection
, 1992
Abstract

Cited by 30 (5 self)
We describe a spectral nested dissection algorithm for computing orderings appropriate for parallel factorization of sparse, symmetric matrices. The algorithm makes use of spectral properties of the Laplacian matrix associated with the given matrix to compute separators. We evaluate the quality of the spectral orderings with respect to several measures: fill, elimination tree height, height and weight balances of elimination trees, and clique tree heights. Spectral orderings compare quite favorably with commonly used orderings, outperforming them by a wide margin for some of these measures. These results are confirmed by computing a multifrontal numerical factorization with the different orderings on a Cray Y-MP with eight processors. Keywords: graph partitioning, graph spectra, Laplacian matrix, ordering algorithms, parallel orderings, parallel sparse Cholesky factorization, sparse matrix, vertex separator. AMS(MOS) subject classifications: 65F50, 65F05, 65F15, 68R10.
Fully dynamic algorithms for chordal graphs
 In Proceedings of the 10th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'99)
, 1999
Abstract

Cited by 30 (1 self)
We present the first dynamic algorithm that maintains a clique tree representation of a chordal graph and supports the following operations: (1) query whether deleting or inserting an arbitrary edge preserves chordality, (2) delete or insert an arbitrary edge, provided it preserves chordality. We give two implementations. In the first, each operation runs in O(n) time, where n is the number of vertices. In the second, an insertion query runs in O(log^2 n) time, an insertion in O(n) time, a deletion query in O(n) time, and a deletion in O(n log n) time. We also present a data structure that allows a deletion query to run in O(√m) time in either implementation, where m is the current number of edges. Updating this data structure after a deletion or insertion requires O(m) time. We also present a very simple dynamic algorithm that supports each of the following operations in O(1) time on a general graph: (1) query whether the graph is split, (2) delete or insert an arbitrary edge.
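The chordality queries above rest on standard structure theory: a graph is chordal iff it admits a perfect elimination ordering (PEO), and maximum cardinality search (MCS) produces one exactly when the graph is chordal. A static, textbook sketch of that test, assuming adjacency sets (this is not the paper's dynamic algorithm, just the classical check one would run from scratch):

```python
def is_chordal(adj):
    """Chordality test: run maximum cardinality search (MCS) to get an
    ordering, then verify it is a perfect elimination ordering.
    adj maps each vertex to its set of neighbours."""
    n = len(adj)
    weight = {v: 0 for v in adj}
    order, numbered = [], set()
    for _ in range(n):
        # pick an unnumbered vertex with the most numbered neighbours
        v = max((u for u in adj if u not in numbered), key=lambda u: weight[u])
        order.append(v)
        numbered.add(v)
        for w in adj[v]:
            if w not in numbered:
                weight[w] += 1
    order.reverse()  # reverse of the MCS visit order is the PEO candidate
    pos = {v: i for i, v in enumerate(order)}
    for i, v in enumerate(order):
        later = [w for w in adj[v] if pos[w] > i]
        if later:
            u = min(later, key=lambda w: pos[w])
            # all later neighbours of v must also be adjacent to u (or be u)
            if any(w != u and w not in adj[u] for w in later):
                return False
    return True
```

For example, a 4-cycle is not chordal, but adding one chord makes it chordal.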
Exploiting sparsity in semidefinite programming via matrix completion II: implementation and numerical results
Abstract

Cited by 28 (14 self)
In Part I of this series of articles, we introduced a general framework of exploiting the aggregate sparsity pattern over all data matrices of large-scale and sparse semidefinite programs (SDPs) when solving them by primal-dual interior-point methods. This framework is based on some results about positive semidefinite matrix completion, and it can be embodied in two different ways. One is by a conversion of a given sparse SDP having a large-scale positive semidefinite matrix variable into an SDP having multiple but smaller positive semidefinite matrix variables. The other is by incorporating a positive definite matrix completion itself in a primal-dual interior-point method. The current article presents the details of their implementations. We introduce new techniques to deal with the sparsity through a clique tree in the former method and through new computational formulae in the latter one. Numerical results over different classes of SDPs show that these methods can be very efficient for some problems. Keywords: Semidefinite programming; Primal-dual interior-point method; Matrix completion problem; Clique tree; Numerical results.
An Efficient Algorithm to Compute Row and Column Counts for Sparse Cholesky Factorization
 SIAM J. Matrix Anal. Appl
, 1994
Abstract

Cited by 27 (6 self)
Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is...
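The column counts in question are the nonzero counts of the Cholesky factor, which a naive symbolic elimination recovers directly from the sparsity structure, at quadratic cost rather than the paper's almost-linear time. A reference sketch, assuming the graph of the symmetric matrix is given as adjacency sets and vertices are eliminated in natural order (function name illustrative):

```python
def factor_counts(adj, n):
    """Naive symbolic Gaussian elimination on the adjacency structure of a
    symmetric matrix (vertices 0..n-1, eliminated in natural order).
    Returns per-column nonzero counts of the Cholesky factor, diagonal
    included. A quadratic-time reference, not the fast algorithm."""
    nbrs = {v: set(adj.get(v, ())) for v in range(n)}
    col_counts = []
    for v in range(n):
        higher = {w for w in nbrs[v] if w > v}
        col_counts.append(len(higher) + 1)  # +1 for the diagonal entry
        # eliminating v makes its remaining neighbours pairwise adjacent (fill)
        for a in higher:
            nbrs[a] |= higher - {a}
    return col_counts
```

For an "arrow" matrix with the dense row/column eliminated first, the factor fills in completely (counts 4, 3, 2, 1 for n = 4), while eliminating it last leaves no fill (counts 2, 2, 2, 1), which is why such count predictions matter for storage allocation.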
Orderings for factorized sparse approximate inverse preconditioners
 SIAM J. SCI. COMPUT
, 2000
Abstract

Cited by 25 (10 self)
The influence of reorderings on the performance of factorized sparse approximate inverse preconditioners is considered. Some theoretical results on the effect of orderings on the fill-in and decay behavior of the inverse factors of a sparse matrix are presented. It is shown experimentally that certain reorderings, like minimum degree and nested dissection, can be very beneficial. The benefit consists of a reduction in the storage and time required for constructing the preconditioner, and of faster convergence of the preconditioned iteration in many cases of practical interest.
A Practical Algorithm for Making Filled Graphs Minimal
 THEOR. COMP. SC
, 2001
Abstract

Cited by 23 (13 self)
For an arbitrary filled graph G+ of a given original graph G, we consider the problem of removing fill edges from G+ in order to obtain a graph M that is both a minimal filled graph of G and a subgraph of G+. For G+ with f fill edges and e original edges, we give a simple O(f(e+f)) algorithm which solves the problem and computes a corresponding minimal elimination ordering of G. We report on experiments with an implementation of our algorithm, where we test graphs G corresponding to some real sparse matrix applications and apply well-known and widely used ordering heuristics to find G+. Our findings show the amount of fill that is commonly removed by a minimalization for each of these heuristics, and also indicate that the runtime of our algorithm on these practical graphs is better than the presented worst-case bound.