Results 1–9 of 9
Combinatorial preconditioners for sparse, symmetric, diagonally dominant linear systems
, 1996
Mesh Generation
 Handbook of Computational Geometry. Elsevier Science
, 2000
Cited by 55 (8 self)
In this article, we emphasize practical issues; an earlier survey by Bern and Eppstein [24] emphasized theoretical results. Although there is inevitably some overlap between these two surveys, we intend them to be complementary.
Performance Evaluation of a New Parallel Preconditioner
 In Proceedings of the Ninth International Parallel Processing Symposium
, 1995
Cited by 21 (2 self)
The linear systems associated with large, sparse, symmetric, positive definite matrices are often solved iteratively using the preconditioned conjugate gradient method. We have developed a new class of preconditioners, support tree preconditioners, that are based on the connectivity of the graphs corresponding to the matrices and are well-structured for parallel implementation. In this paper, we evaluate the performance of support tree preconditioners by comparing them against two common types of preconditioners: diagonal scaling and incomplete Cholesky. Support tree preconditioners require less overall storage and less work per iteration than incomplete Cholesky preconditioners. In terms of total execution time, support tree preconditioners outperform both diagonal scaling and incomplete Cholesky preconditioners.
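As context for the comparison above, diagonal scaling (Jacobi) is the lightest of the three preconditioners: the preconditioner solve is a single pointwise division by the diagonal of A. A minimal NumPy sketch of preconditioned conjugate gradients with diagonal scaling (illustrative only; the function names and test matrix are ours, not the paper's):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients; the preconditioner solve
    z = M^{-1} r is just a pointwise scaling by 1/diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Sparse-style SPD test case: the 1-D Laplacian (tridiagonal, diagonally dominant).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b, 1.0 / np.diag(A))
```

Support tree preconditioners replace the diagonal solve above with a solve against a tree-structured matrix, trading a slightly more expensive application for far fewer iterations.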
Optimal Control of Two- and Three-Dimensional Incompressible Navier–Stokes Flows
, 1997
Cited by 14 (3 self)
The focus of this work is on the development of large-scale numerical optimization methods for optimal control of steady incompressible Navier–Stokes flows. The control is effected by the suction or injection of fluid on portions of the boundary, and the objective function represents the rate at which energy is dissipated in the fluid. We develop reduced Hessian sequential quadratic programming methods that avoid converging the flow equations at each iteration. Both quasi-Newton and Newton variants are developed, and compared to the approach of eliminating the flow equations and variables, which is effectively the generalized reduced gradient method. Optimal control problems are solved for two-dimensional flow around a cylinder and three-dimensional flow around a sphere. The examples demonstrate at least an order-of-magnitude reduction in time taken, allowing the optimal solution of flow control problems in as little as half an hour on a desktop workstation. Key words. optimal contr...
Direction-preserving and Schur-monotonic semiseparable approximations of symmetric positive definite matrices
 SIAM J. Matrix Anal. Appl
Cited by 2 (1 self)
Abstract. For a given symmetric positive definite matrix A ∈ R^{N×N}, we develop a fast and backward stable algorithm to approximate A by a symmetric positive definite semiseparable matrix, accurate to a constant multiple of any prescribed tolerance. In addition, this algorithm preserves the product AZ for a given matrix Z ∈ R^{N×d}, where d ≪ N. Our algorithm guarantees the positive definiteness of the semiseparable matrix by embedding an approximation strategy inside a Cholesky factorization procedure to ensure that the Schur complements during the Cholesky factorization all remain positive definite after approximation. It uses a robust direction-preserving approximation scheme to ensure the preservation of AZ. We present numerical experiments and discuss the potential implications of our work.
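The positive-definiteness guarantee described above rests on a classical fact: every Schur complement arising during Cholesky factorization of an SPD matrix is itself SPD, so the approximation step only has to avoid destroying that property. A small NumPy check of the underlying fact, on a random SPD matrix of our own construction (not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 8, 3  # k = size of the leading block eliminated by k Cholesky steps

# Random SPD matrix: B B^T is positive semidefinite; the shift makes it SPD.
B = rng.standard_normal((N, N))
A = B @ B.T + N * np.eye(N)

A11, A12 = A[:k, :k], A[:k, k:]
A21, A22 = A[k:, :k], A[k:, k:]

# Schur complement of the leading block, as produced by k steps of Cholesky.
S = A22 - A21 @ np.linalg.solve(A11, A12)
eigs = np.linalg.eigvalsh(S)  # all positive, so S is again SPD
```

The algorithm in the paper interleaves its semiseparable truncation with these elimination steps so that each approximated Schur complement stays in the SPD cone.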
A parallel direct solver for the self-adaptive hp
 J. Parallel Distrib. Comput.
, 2010
Communication Avoiding ILU0 Preconditioner
Abstract. In this paper we present a communication avoiding ILU0 preconditioner for solving large linear systems of equations by using iterative Krylov subspace methods. Recent research has focused on communication avoiding Krylov subspace methods based on so-called s-step methods. However, there are not many communication avoiding preconditioners yet, and this represents a serious limitation of these methods. Our preconditioner allows us to perform s iterations of the iterative method with no communication, through ghosting some of the input data and performing redundant computation. To avoid communication, an alternating reordering algorithm is introduced for structured and well partitioned unstructured matrices, which requires the input matrix to be ordered by using a graph partitioning technique such as k-way or nested dissection. We show that the reordering does not affect the convergence rate of the ILU0 preconditioned system as compared to k-way or nested dissection ordering, while it reduces data movement and is expected to reduce the time needed to solve a linear system. In addition to communication avoiding Krylov subspace methods, our preconditioner can be used with classical methods such as GMRES to reduce communication.
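For readers unfamiliar with ILU0: it performs Gaussian elimination but discards any fill-in outside the nonzero pattern of A. A dense-storage sketch in NumPy (illustrative only; real implementations operate on compressed sparse rows, and this omits the reordering the paper introduces):

```python
import numpy as np

def ilu0(A):
    """ILU0: LU factorization with fill allowed only where A is nonzero.
    Returns one matrix holding L (unit lower, strictly below the diagonal)
    and U (upper triangle, diagonal included)."""
    n = A.shape[0]
    F = A.astype(float).copy()
    pattern = A != 0  # updates are applied only at original nonzeros
    for i in range(1, n):
        for k in range(i):
            if not pattern[i, k]:
                continue
            F[i, k] /= F[k, k]                  # multiplier l_ik
            for j in range(k + 1, n):
                if pattern[i, j]:
                    F[i, j] -= F[i, k] * F[k, j]
    return F

# Tridiagonal test matrix: no fill-in arises, so ILU0 equals the exact LU.
n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
F = ilu0(A)
Lf = np.tril(F, -1) + np.eye(n)
Uf = np.triu(F)
```

For general sparse matrices the product Lf @ Uf only approximates A (the dropped fill is the approximation error), which is what makes ILU0 cheap enough to use as a preconditioner.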
Dedicated to My Parents and Teachers Acknowledgements
I would like to take this opportunity to express my deep sense of gratitude to the person who has taught me what dedication is, my thesis supervisor Dr. Phalguni Gupta. His benevolent guidance, apt suggestions, unstinted help, and constructive criticism have inspired me in the successful completion of the present work. I also extend my sincere thanks to all the faculty members of the Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, for the invaluable knowledge they have imparted to me and for teaching the principles in the most exciting and enjoyable way. My stay at Indian Institute of Technology Kanpur has been exciting and enlightening. The time I spent with my friends Gaurav, Mohit, Rahul, Ashish, Ashvin, and Saeed is unforgettable. I am grateful for their continuous attachment, which strengthened me at difficult moments. I take this opportunity to thank my parents for all that they have done for me. Without their love, support, and encouragement, I would never have reached this stage in my life. Peeyush Jain
Cluster solution of block tridiagonal systems
In order to exploit the capacities of cluster computing in relatively small numerical problems, we compare the performance of parallel algorithms for the solution of block tridiagonal linear systems, one based on cyclic reduction and the other on the divide and conquer paradigm.
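For reference, the sequential baseline that both parallel approaches are usually measured against is the block Thomas algorithm (block LU elimination followed by back substitution). A minimal NumPy sketch under our own naming conventions (not code from the paper):

```python
import numpy as np

def block_thomas(D, U, L, b):
    """Solve a block tridiagonal system with diagonal blocks D[i],
    superdiagonal blocks U[i], and subdiagonal blocks L[i]; block row i reads
    L[i-1] x[i-1] + D[i] x[i] + U[i] x[i+1] = b[i]."""
    n = len(D)
    Dh = [d.copy() for d in D]
    bh = [v.copy() for v in b]
    for i in range(1, n):                       # forward elimination
        m = L[i - 1] @ np.linalg.inv(Dh[i - 1])
        Dh[i] = Dh[i] - m @ U[i - 1]
        bh[i] = bh[i] - m @ bh[i - 1]
    x = [None] * n                              # back substitution
    x[-1] = np.linalg.solve(Dh[-1], bh[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(Dh[i], bh[i] - U[i] @ x[i + 1])
    return x

# Small test system: 4 diagonal blocks of size 2, diagonally dominant.
m, n = 2, 4
D = [4.0 * np.eye(m) for _ in range(n)]
U = [-np.eye(m) for _ in range(n - 1)]
L = [-np.eye(m) for _ in range(n - 1)]
b = [(i + 1.0) * np.ones(m) for i in range(n)]
x = block_thomas(D, U, L, b)

# Block-row residuals should all vanish.
res = [b[0] - D[0] @ x[0] - U[0] @ x[1]]
for i in range(1, n - 1):
    res.append(b[i] - L[i - 1] @ x[i - 1] - D[i] @ x[i] - U[i] @ x[i + 1])
res.append(b[n - 1] - L[n - 2] @ x[n - 2] - D[n - 1] @ x[n - 1])
```

The sequential dependence between block rows in the elimination loop is exactly what cyclic reduction and divide-and-conquer schemes restructure to expose parallelism across cluster nodes.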