Results 1–10 of 11
Numerical solution of saddle point problems
ACTA NUMERICA, 2005
Abstract

Cited by 178 (27 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
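The indefiniteness this abstract mentions can be seen directly from the 2x2 block structure of a saddle point matrix. The sketch below (random stand-in matrices, not from the paper) builds K = [[A, B^T], [B, 0]] with A symmetric positive definite and B of full row rank, and checks that K has eigenvalues of both signs:

```python
import numpy as np

# Hypothetical stand-in blocks: A (n x n) symmetric positive definite,
# B (m x n) with full row rank, as in a typical saddle point system.
rng = np.random.default_rng(0)
n, m = 6, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # SPD (1,1) block
B = rng.standard_normal((m, n))      # full-row-rank constraint block

# The composite saddle point matrix is symmetric but indefinite.
K = np.block([[A, B.T],
              [B, np.zeros((m, m))]])

eigs = np.linalg.eigvalsh(K)
# By inertia, K has exactly m negative and n positive eigenvalues.
print(eigs.min() < 0 < eigs.max())   # True
```

Both positive and negative eigenvalues rule out plain conjugate gradients on K, which is why the survey's iterative methods (Uzawa-type, Krylov with block preconditioning, etc.) are needed.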
On the solution of equality constrained quadratic programming problems arising . . .
1998
Incomplete Factorization Preconditioning For Linear Least Squares Problems
1994
Abstract

Cited by 17 (4 self)
this paper is the modified version of Gram–Schmidt orthogonalization with a rejection test applied right after the formation of the off-diagonal elements of the factor R. For a given rejection parameter τ with 0 ≤ τ ≤ 1, the rejection test is: if |r_ij| ≤ τ‖a ...
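As a rough illustration of the idea, the sketch below implements modified Gram–Schmidt with a drop test applied to each off-diagonal entry of R as it is formed; the exact threshold in the paper is garbled in this snippet, so the test |r_ij| < tau * ||a_j|| used here is an assumption:

```python
import numpy as np

def incomplete_mgs(A, tau):
    """Modified Gram-Schmidt QR with a drop (rejection) test: each
    off-diagonal entry r_ij of R is discarded right after it is formed
    when |r_ij| < tau * ||a_j|| (this threshold is an assumption, since
    the paper's exact test is garbled in the snippet). tau = 0 recovers
    the exact factorization; larger tau drops more entries, so Q is then
    only approximately orthogonal -- the 'incomplete' in the name."""
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    col_norms = np.linalg.norm(A, axis=0)
    for j in range(n):
        for i in range(j):
            r = Q[:, i] @ Q[:, j]
            if abs(r) >= tau * col_norms[j]:   # rejection test: keep only large entries
                R[i, j] = r
                Q[:, j] -= r * Q[:, i]
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
    return Q, R
```

With tau = 0 this reproduces exact MGS (so Q @ R == A up to rounding); the incomplete factor R, used as a triangular preconditioner, is the point of the construction.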
CIMGS: An incomplete orthogonal factorization preconditioner
SIAM J. Sci. Comput., 1997
Abstract

Cited by 12 (0 self)
Abstract. A new preconditioner for symmetric positive definite systems is proposed, analyzed, and tested. The preconditioner, compressed incomplete modified Gram–Schmidt (CIMGS), is based on an incomplete orthogonal factorization. CIMGS is robust both theoretically and empirically, existing (in exact arithmetic) for any full rank matrix. Numerically it is more robust than an incomplete Cholesky factorization preconditioner (IC) and a complete Cholesky factorization of the normal equations. Theoretical results show that the CIMGS factorization has better backward error properties than complete Cholesky factorization. For symmetric positive definite M-matrices, CIMGS induces a regular splitting and better estimates the complete Cholesky factor as the set of dropped positions gets smaller. CIMGS lies between complete Cholesky factorization and incomplete Cholesky factorization in its approximation properties. These theoretical properties usually hold numerically, even when the matrix is not an M-matrix. When the drop set satisfies a mild and easily verified (or enforced) property, the upper triangular factor CIMGS generates is the same as that generated by incomplete Cholesky factorization. This allows the existence of the IC factorization to be guaranteed, based solely on the target sparsity pattern.
Parallel Inner-Product-Free Algorithm for Least Squares Problems
University of Denmark, 1996
Abstract

Cited by 4 (4 self)
The performance of CGLS, a basic iterative method for least squares problems that organizes the computation of the conjugate gradient method applied to the normal equations, is always limited on modern architectures by the global communication required for inner products. Inner products therefore often present a bottleneck, and it is desirable to reduce or even eliminate them. Following a note of B. Fischer and R. Freund [11], an inner-product-free conjugate-gradient-like algorithm is presented that simulates the standard conjugate gradient method by approximating the conjugate gradient orthogonal polynomial by a suitably chosen orthogonal polynomial from the Bernstein–Szegő class. We also apply this kind of algorithm to the normal equations, as in CGLS, to solve least squares problems and compare the performance with the standard and modified approaches. 1 Introduction Many scientific and engineering applications such as linear programming [4], augmented La...
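A minimal plain-Python CGLS, marking the two inner products per iteration that the abstract identifies as the communication bottleneck (the inner-product-free variant replaces exactly these with coefficients derived from a fixed orthogonal polynomial):

```python
import numpy as np

def cgls(A, b, iters=100, tol=1e-12):
    """Standard CGLS: conjugate gradients applied implicitly to the
    normal equations A^T A x = A^T b, without forming A^T A. The two
    inner products per iteration (marked below) each require a global
    reduction on a distributed-memory machine -- the bottleneck an
    inner-product-free variant removes."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s                       # inner product -> global reduction
    gamma0 = gamma
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)         # inner product -> global reduction
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new <= tol * tol * gamma0:
            break                       # normal-equations residual small enough
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

On a small well-conditioned problem this matches `np.linalg.lstsq` to high accuracy; the matrix–vector products parallelize with only local communication, which is why the inner products dominate at scale.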
Separators and Structure Prediction in Sparse Orthogonal Factorization
1993
Abstract

Cited by 4 (0 self)
In the factorization A = QR of a matrix A, the orthogonal matrix Q can be represented either explicitly (as a matrix) or implicitly (as a matrix H of Householder vectors). We derive both upper and lower bounds on the number of nonzeros in H and the number of nonzeros in Q, in the case where the graph of A^T A has "good" separators and A need not be square. We also derive an upper bound on the number of nonzeros in the null-basis part of Q in the case where A is the edge-vertex incidence matrix of a planar graph. The significance of these results is that they both illuminate and amplify a folk theorem of sparse QR factorization, which holds that the matrix H of Householder vectors represents the orthogonal factor of A much more compactly than Q itself. To facilitate discussion of this and related issues, we review several related results which have appeared previously. Keywords: Sparse matrix algorithms, QR factorization, separators, column intersection graph, strong Hall...
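The folk theorem is easy to demonstrate in code: Q can be kept implicitly as the list of Householder vectors and applied without ever being formed. A minimal dense sketch (the paper's concern is the sparse case, where the storage gap between H and the explicit Q is dramatic):

```python
import numpy as np

def householder_qr(A):
    """QR factorization keeping Q only implicitly, as the list of
    Householder vectors (the matrix H of the abstract). Assumes each
    pivot column is nonzero (e.g. A has full column rank)."""
    R = A.astype(float).copy()
    m, n = A.shape
    vs = []
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])   # stable sign choice
        v /= np.linalg.norm(v)
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])  # apply reflection
        vs.append(v)
    return vs, np.triu(R[:n, :n])

def apply_Q(vs, y):
    """Compute Q @ y without forming Q: apply the stored reflections
    in reverse order."""
    y = y.astype(float).copy()
    for k in reversed(range(len(vs))):
        v = vs[k]
        y[k:] -= 2.0 * v * (v @ y[k:])
    return y
```

Applying `apply_Q` to the (zero-padded) columns of R reconstructs A, confirming that the vector list represents Q exactly.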
Combinatorial Algorithms for Computing Column Space Bases That Have Sparse Inverses
 ETNA
Abstract

Cited by 3 (0 self)
Abstract. This paper presents a new combinatorial approach towards constructing a sparse, implicit basis for the null space of a sparse, underdetermined matrix. Our approach is to compute a column space basis of the matrix that has a sparse inverse, which could be used to represent a null space basis in implicit form. We investigate three different algorithms for computing column space bases: two greedy algorithms implemented using graph matchings, and a third, which employs a divide-and-conquer strategy implemented with hypergraph partitioning followed by a matching. Our results show that for many matrices from linear programming, structural analysis, and circuit simulation, it is possible to compute column space bases having sparse inverses, contrary to conventional wisdom. The hypergraph partitioning method yields sparser basis inverses and has low computational time requirements, relative to the greedy approaches. We also discuss the complexity of selecting a column space basis when it is known that such a basis exists in block diagonal form with a given small block size. Key words. sparse column space basis, sparse null space basis, block angular matrix, block diagonal matrix, matching, hypergraph partitioning, inverse of a basis AMS subject classifications. 65F50, 68R10, 90C20 1. Introduction. Many
Dual variable methods for mixed-hybrid finite element approximation of the potential fluid flow problem in porous media
2001
Abstract

Cited by 3 (0 self)
Abstract. Mixed-hybrid finite element discretization of Darcy’s law and the continuity equation that describe the potential fluid flow problem in porous media leads to symmetric indefinite saddle-point problems. In this paper we consider solution techniques based on the computation of a null-space basis of the whole or of a part of the left lower off-diagonal block in the system matrix and on the subsequent iterative solution of a projected system. This approach is mainly motivated by the need to solve a sequence of such systems with the same mesh but different material properties. A fundamental cycle null-space basis of the whole off-diagonal block is constructed using the spanning tree of an associated graph. It is shown that such a basis may be theoretically rather ill-conditioned. Alternatively, the orthogonal null-space basis of the sub-block used to enforce continuity over faces can be easily constructed. In the former case, the resulting projected system is symmetric positive definite and so the conjugate gradient method can be applied. The projected system in the latter case remains indefinite and the preconditioned minimal residual method (or the smoothed conjugate gradient method) should be used. The theoretical rate of convergence for both algorithms is discussed and their efficiency is compared in numerical experiments. Key words. Saddle-point problem, preconditioned iterative methods, sparse matrices, finite element method AMS subject classifications. 65F05, 65F50 1. Introduction. Let
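The null-space approach the abstract describes can be sketched generically: split the constraint block B = [B1 | B2] with B1 nonsingular (in the paper, B1 corresponds to spanning-tree edges of the associated graph), form Z with B Z = 0, and solve the projected SPD system. The matrices below are random stand-ins, not the mixed-hybrid discretization itself:

```python
import numpy as np

# Generic null-space method sketch with random stand-in matrices
# (n unknowns, m < n constraints).
rng = np.random.default_rng(1)
n, m = 7, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # SPD (1,1) block
B = rng.standard_normal((m, n))             # full-row-rank constraint block
f = rng.standard_normal(n)

# Fundamental (variable-reduction) null-space basis: B = [B1 | B2],
# Z = [-B1^{-1} B2 ; I]  =>  B Z = 0.
B1, B2 = B[:, :m], B[:, m:]
Z = np.vstack([-np.linalg.solve(B1, B2), np.eye(n - m)])
assert np.allclose(B @ Z, 0)

# The projected system Z^T A Z y = Z^T f is symmetric positive definite,
# so (preconditioned) CG applies; here it is solved directly for brevity.
y = np.linalg.solve(Z.T @ A @ Z, Z.T @ f)
x = Z @ y                                   # minimizer over {x : B x = 0}
print(np.allclose(B @ x, 0))                # True: x satisfies the constraints
```

The conditioning caveat in the abstract shows up here too: if B1 is nearly singular (a poor spanning tree), B1^{-1} B2 and hence Z become ill-conditioned, degrading CG on the projected system.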
Isoefficiency Analysis of CGLS Algorithm for Parallel Least Squares Problems
The International Conference on High Performance Computing and Networking (HPCN97), 1997
Abstract

Cited by 1 (1 self)
In this paper we study the parallelization of CGLS, a basic iterative method for large and sparse least squares problems whose main idea is to organize the computation of the conjugate gradient method applied to the normal equations. A performance model called the isoefficiency concept is used to analyze the behavior of this method implemented on massively parallel distributed memory computers with a two-dimensional mesh communication scheme. Two different mappings of data to processors, namely simple stripe and cyclic stripe partitionings, are compared by putting their communication times into the isoefficiency concept, which models scalability aspects. Theoretically, the cyclic stripe partitioning is shown to be asymptotically more scalable. 1 Introduction Many scientific and engineering applications such as linear programming [4], augmented Lagrangian method for CFD [10], and the natural factor method in structure engineering [1, 13] give rise to the least squares problems min_x ‖Ax −...
Computing sparse orthogonal factors in MATLAB
1998
Abstract

Cited by 1 (0 self)
In this report a new version of the multifrontal sparse QR factorization routine sqr, originally by Matstoms, for general sparse matrices is described and evaluated. In the previous version the orthogonal factor Q is discarded due to storage considerations. The new version provides Q and uses the multifrontal structure to store this orthogonal factor in a compact way. A new data class with overloaded operators is implemented in Matlab to provide easy usage of the compact orthogonal factors. This implicit way of storing the orthogonal factor also results in faster computation and application of Q and Q^T. Examples are given where the new version is up to four times faster when computing only R, and up to 1000 times faster when computing both Q and R, than the built-in function qr in Matlab. The sqr package is available at URL: http://www.mai.liu.se/~milun/sls/. Key words: QR factorization, sparse problems, multifrontal method, orthogonal factorization. 1 Introduction. Let A ∈ IR...