Results 1–10 of 18
Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems (Extended Abstract)
 STOC'04
, 2004
Abstract

Cited by 133 (7 self)
We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy ɛ in time linear in their number of nonzeros and log(κf(A)/ɛ), where κf(A) is the condition number of the matrix defining the linear system. Our algorithm applies the preconditioned Chebyshev iteration with preconditioners designed using nearly-linear time algorithms for graph sparsification and graph partitioning.
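To make the idea of a preconditioned iteration on a symmetric, diagonally-dominant (SDD) system concrete, here is a minimal sketch of preconditioned conjugate gradient. This is a stand-in, not the abstract's algorithm: the paper uses preconditioned Chebyshev with graph-sparsifier preconditioners, while this sketch uses a plain Jacobi (diagonal) preconditioner on a small SDD matrix; the function names are hypothetical.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient; M_inv applies the preconditioner's inverse."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    z = M_inv(r)           # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# A small SDD system: path-graph Laplacian plus the identity (diagonal 3, off-diagonal -1).
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.eye(n)
b = np.ones(n)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))  # Jacobi preconditioner
```

A better preconditioner (such as the sparsifier-based ones the abstract describes) reduces the effective condition number and hence the iteration count; the iteration itself is unchanged.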
Combinatorial preconditioners for sparse, symmetric, diagonally dominant linear systems
, 1996
Support Theory For Preconditioning
 SIAM Journal on Matrix Analysis and Applications
, 2001
Abstract

Cited by 38 (5 self)
We present support theory, a set of techniques for bounding extreme eigenvalues and condition numbers for matrix pencils. Our intended application of support theory is to enable proving condition number bounds for preconditioners for symmetric, positive definite systems. One key feature sets our approach apart from most other works: We use support numbers instead of generalized eigenvalues. Although closely related, we believe support numbers are more convenient to work with algebraically. This paper provides
Support-Graph Preconditioners
 SIAM Journal on Matrix Analysis and Applications
Abstract

Cited by 30 (12 self)
We present a little-known preconditioning technique, called support-graph preconditioning, and use it to analyze two classes of preconditioners. The technique was first described in a talk by Pravin Vaidya, who did not formally publish his results. Vaidya used the technique to devise and analyze a class of novel preconditioners. The technique was later extended by Gremban and Miller, who used it in the development and analysis of yet another class of new preconditioners. This paper extends the technique further and uses it to analyze two classes of existing preconditioners: modified incomplete-Cholesky and multilevel diagonal scaling. The paper also contains a presentation of Vaidya's preconditioners, which was previously missing from the literature. 1. Introduction. This paper presents new applications of a little-known technique for constructing and analyzing preconditioners called support-graph preconditioning. The technique was first proposed and used by Pravin Vaidya [11], who ...
Solving Sparse, Symmetric, Diagonally-Dominant Linear Systems in Time O(m^1.31)
 in FOCS ’03: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science
, 2003
Online Prediction on Large Diameter Graphs
Abstract

Cited by 17 (2 self)
We continue our study of online prediction of the labelling of a graph. We show a fundamental limitation of Laplacian-based algorithms: if the graph has a large diameter then the number of mistakes made by such algorithms may be proportional to the square root of the number of vertices, even when tackling simple problems. We overcome this drawback by means of an efficient algorithm which achieves a logarithmic mistake bound. It is based on the notion of a spine, a path graph which provides a linear embedding of the original graph. In practice, graphs may exhibit cluster structure; thus in the last part, we present a modified algorithm which achieves the “best of both worlds”: it performs well locally in the presence of cluster structure, and globally on large diameter graphs.
Efficient Approximate Solution of Sparse Linear Systems
, 1998
Abstract

Cited by 15 (0 self)
We consider the problem of approximate solution of a linear system Ax = b over the reals, such that ‖Ax̃ − b‖ ≤ ɛ‖b‖, for a given ɛ, 0 < ɛ < 1. This is one of the most fundamental of all computational problems. Let κ(A) = ‖A‖‖A⁻¹‖ be the condition number of the n × n input matrix A. Sparse, Diagonally Dominant (DD) linear systems appear very frequently in the solution of linear systems associated with PDEs and stochastic systems, and generally have polynomial condition number. While there is a vast literature on methods for approximate solution of sparse DD linear systems, most of the results are empirical, and to date there are no known proven linear bounds on the complexity of this problem. Using iterative algorithms, and building on the work of Vaidya [1] and Gremban et al. [24], we provide the best known sequential work bounds for the solution of a number of major classes of DD sparse linear systems. Let r = log(κ(A)/ɛ). The sparsity graph of A is a graph whose nodes are the indices and whose edges represent pairs of indices of A with nonzero entries. The following results hold for a DD matrix A with nonzero off-diagonal entries of bounded magnitude: (1) if A has a sparsity graph which is a regular d-dimensional grid for constant d, then our work is O(nr²), (2) if A is a stochastic matrix with fixed s(n)-separable graph as its sparsity graph, then our work is O((n + s(n)²)r). The following results hold for a DD matrix A with entries of unbounded magnitude: (3) if A is sparse (i.e., O(n) nonzeros), our work is less than O(n(r + log n))^1.5, (4) if A has a sparsity graph in a family of graphs with constant-size forbidden graph minors (e.g., planar graphs), then our work is bounded by O(n(r + log n))^(1+o(1)) in the case log n = o(log ... and O(n(r + log n))^(1+o(1)) in the case log...
Finding effective support-tree preconditioners
 in Proceedings of the 17th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)
, 2005
Abstract

Cited by 11 (1 self)
In 1995, Gremban, Miller, and Zagha introduced support-tree preconditioners and a parallel algorithm called support-tree conjugate gradient (STCG) for solving linear systems of the form Ax = b, where A is an n × n Laplacian matrix. A Laplacian is a symmetric matrix in which the off-diagonal entries are nonpositive, and the row and column sums are zero. A Laplacian A with 2m nonzeros can be interpreted as an undirected positively-weighted graph G with n vertices and m edges, where there is an edge between two nodes i and j with weight c((i, j)) = −Ai,j = −Aj,i if Ai,j = Aj,i < 0. Gremban et al. showed experimentally that STCG performs well on several classes of graphs commonly used in scientific computations. In his thesis, Gremban also proved upper bounds on the number of iterations required for STCG to converge for certain classes of graphs. In this paper, we present an algorithm for finding a preconditioner for an arbitrary graph G = (V, E) with n nodes, m edges, and a weight function c > 0 on the edges, where w.l.o.g., min_{e∈E} c(e) = 1. Equipped with this preconditioner, STCG requires O(log⁴ n · √(∆/α)) iterations, where α = min_{U⊂V, |U|≤|V|/2} c(U, V∖U)/|U| is the minimum edge expansion of the graph, and ∆ = max_{v∈V} c(v) is the maximum incident weight on any vertex. Each iteration requires O(m) work and can be implemented in O(log n) steps in parallel, using only O(m) space. Our results generalize to matrices that are symmetric and diagonally-dominant (SDD).
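The Laplacian–graph correspondence described above (off-diagonal entry −c(i, j) per edge, rows summing to zero) can be sketched directly. This is a minimal illustration of the definition only, not the paper's preconditioner construction; the `laplacian` helper is hypothetical.

```python
import numpy as np

def laplacian(n, edges):
    """Laplacian of an undirected weighted graph on n vertices.

    edges is a list of (i, j, w) with w > 0; the resulting matrix has
    L[i][j] = -w off the diagonal and zero row/column sums.
    """
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, j] -= w   # off-diagonal entries are nonpositive
        L[j, i] -= w   # symmetry
        L[i, i] += w   # diagonal accumulates incident weight
        L[j, j] += w
    return L

# Triangle graph with edge weights c(0,1)=2, c(1,2)=1, c(0,2)=3.
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0)]
L = laplacian(3, edges)
```

Note that the diagonal entry L[i][i] equals the total weight incident on vertex i, which is the quantity ∆ bounds in the iteration count above.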
An iterative method for solving complex-symmetric systems arising in electrical power modeling
 SIAM J. Matrix Analysis Appl.
, 2000
Abstract

Cited by 8 (1 self)
We propose an iterative method for solving a complex-symmetric linear system arising in electric power networks. Our method extends Gremban, Miller, and Zagha’s [in Proceedings of the International Parallel Processing Symposium, IEEE Computer Society, Los Alamitos, CA, 1995] support-tree preconditioner to handle complex weights and vastly different admittances. Our underlying iteration is a modification to transpose-free QMR [6] to enhance accuracy. Computational results are described.