Results 1–7 of 7
On the implementation of an algorithm for large-scale equality constrained optimization
 SIAM Journal on Optimization, 1998
"... Abstract. This paper describes a software implementation of Byrd and Omojokun’s trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques ..."
Abstract

Cited by 46 (12 self)
Abstract. This paper describes a software implementation of Byrd and Omojokun’s trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm. Second derivative information can be used, but when it is not available, limited memory quasi-Newton approximations are made. The performance of the code is studied using a set of difficult test problems from the CUTE collection.
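The paper's own software is not shown here, but SciPy's `trust-constr` optimizer implements a Byrd–Omojokun-style trust-region method for constrained problems, so a small run illustrates the class of solver described. The objective and constraint below are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# minimize f(x) = (x0 - 1)^2 + (x1 - 2.5)^2  subject to  x0^2 + x1^2 = 4
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
# equal lower and upper bounds make this an equality constraint
con = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, 4.0, 4.0)

res = minimize(f, x0=np.array([1.0, 1.0]), method="trust-constr",
               constraints=[con])
print(res.x)  # closest point on the circle of radius 2 to (1, 2.5)
```

The analytic solution is the radial projection of (1, 2.5) onto the circle, roughly (0.743, 1.857), which the solver should recover to high accuracy.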
Finding Good Column Orderings for Sparse QR Factorization
 In Second SIAM Conference on Sparse Matrices, 1996
"... For sparse QR factorization, finding a good column ordering of the matrix to be factorized, is essential. Both the amount of fill in the resulting factors, and the number of floatingpoint operations required by the factorization, are highly dependent on this ordering. A suitable column ordering of ..."
Abstract

Cited by 18 (0 self)
For sparse QR factorization, finding a good column ordering of the matrix to be factorized is essential. Both the amount of fill in the resulting factors and the number of floating-point operations required by the factorization are highly dependent on this ordering. A suitable column ordering of the matrix A is usually obtained by minimum degree analysis on AᵀA. The objective of this analysis is to produce low fill in the resulting triangular factor R. We observe that the efficiency of sparse QR factorization also depends on other criteria, such as the size and structure of intermediate fill and, for the multifrontal method, the size and structure of the frontal matrices, in addition to the amount of fill in R. An important part of this information is lost when AᵀA is formed. However, the structural information from A is important to consider in order to find good column orderings. We show how a suitable equivalent reordering of an initial fill-reducing ordering can...
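As a small illustration of how strongly the column ordering affects fill in R, the NumPy sketch below builds a matrix with one dense column (whose placement is the illustrative assumption) and counts the nonzeros of R under two orderings. Dense QR is used only to expose the nonzero pattern; the paper itself concerns sparse multifrontal QR:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5                                  # number of disjoint "local" columns
A = np.zeros((2 * m, m + 1))
A[:, 0] = rng.standard_normal(2 * m)   # one dense column overlapping every row
for j in range(m):                     # columns with disjoint 2-row supports
    A[2 * j:2 * j + 2, j + 1] = rng.standard_normal(2)

def nnz_R(B, tol=1e-12):
    """Count nonzeros of the triangular factor R of a QR factorization."""
    R = np.linalg.qr(B, mode="r")
    return int(np.count_nonzero(np.abs(R) > tol))

bad = nnz_R(A)                          # dense column eliminated first
good = nnz_R(A[:, [1, 2, 3, 4, 5, 0]])  # dense column eliminated last
print(bad, good)
```

Eliminating the dense column first makes R completely fill in, while pushing it last confines the fill to the final column, so `good` is far smaller than `bad`.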
Incomplete Factorization Preconditioning For Linear Least Squares Problems
1994
"... this paper is the modified version of GramSchmidt orthogonalization with a rejection test applied right after the formation of the offdiagonal elements of the factor R. For a given rejection parameter 0 / 1, the rejection test is: if r ij ! /= k a ..."
Abstract

Cited by 18 (4 self)
this paper is the modified version of Gram-Schmidt orthogonalization with a rejection test applied right after the formation of the off-diagonal elements of the factor R. For a given rejection parameter ψ, 0 ≤ ψ ≤ 1, the rejection test is: if |r_ij| < ψ‖a...
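A minimal sketch of the idea, not the paper's exact algorithm: the rejection test below (the parameter name `psi` and the comparison against the original column norms are assumptions, since the snippet above is truncated) drops small off-diagonal entries of R during modified Gram-Schmidt:

```python
import numpy as np

def incomplete_mgs(A, psi=0.1):
    """Modified Gram-Schmidt with a rejection test: an off-diagonal r_ij is
    kept only if it is large relative to the original j-th column norm.
    This is a sketch of the dropping idea; the paper's exact test may differ."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    col_norms = np.linalg.norm(A, axis=0)     # norms of the original columns
    Q = A.copy()
    R = np.zeros((n, n))
    for i in range(n):
        R[i, i] = np.linalg.norm(Q[:, i])     # diagonal entries always kept
        Q[:, i] /= R[i, i]
        for j in range(i + 1, n):
            r = Q[:, i] @ Q[:, j]
            if abs(r) >= psi * col_norms[j]:  # rejection test for r_ij
                R[i, j] = r
                Q[:, j] -= r * Q[:, i]        # update skipped when r is dropped
    return Q, R

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
Q, R = incomplete_mgs(A, psi=0.0)   # psi = 0 keeps everything: ordinary MGS
print(np.linalg.norm(A - Q @ R))    # near machine precision in this case
```

With `psi > 0`, R becomes an incomplete (sparser) factor suitable as a preconditioner rather than an exact one.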
The Solution of Augmented Systems
1993
"... We examine the solution of sets of linear equations for which the coefficient matrix has the form / H A A T 0 ! where the matrix H is symmetric. We are interested in the case when the matrices H and A are sparse. These augmented systems occur in many application areas, for example in the solu ..."
Abstract

Cited by 14 (3 self)
We examine the solution of sets of linear equations for which the coefficient matrix has the form

    [ H   A ]
    [ Aᵀ  0 ]

where the matrix H is symmetric. We are interested in the case when the matrices H and A are sparse. These augmented systems occur in many application areas, for example in the solution of linear programming problems, structural analysis, magnetostatics, differential algebraic systems, constrained optimization, electrical networks, and computational fluid dynamics. We discuss in some detail how they arise in the last three of these applications and consider particular characteristics and methods of solution. We then concentrate on direct methods of solution. We examine issues related to conditioning and scaling, and discuss the design and performance of a code for solving these systems.
Keywords: augmented systems, constrained optimization, Stokes problem, indefinite sparse matrices, KKT systems, systems matrix, equilibrium problems, electrical networks, interior poi...
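To make the block structure concrete, here is a small dense NumPy example. It is a sketch only: the paper's focus is sparse direct solvers, and the SPD choice of H, the random A, and the right-hand sides are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2
H = rng.standard_normal((n, n))
H = H @ H.T + n * np.eye(n)            # symmetric (here SPD) block
A = rng.standard_normal((n, m))        # full column rank almost surely
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# assemble the augmented matrix  [ H   A ]
#                                [ Aᵀ  0 ]
K = np.block([[H, A], [A.T, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([f, g]))
x, y = sol[:n], sol[n:]

# the two block rows encode  H x + A y = f  and the constraint  Aᵀ x = g
print(np.linalg.norm(H @ x + A @ y - f), np.linalg.norm(A.T @ x - g))
```

The same block system is the KKT system of the equality-constrained quadratic program min ½xᵀHx − fᵀx subject to Aᵀx = g, which is one way these systems arise in constrained optimization.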
Multisplitting for Regularized Least Squares with Krylov Subspace Recycling
 Numerical Linear Algebra with Applications, 2009
"... The method of multisplitting, implemented as a restricted additive Schwarz type algorithm, is extended for the solution of regularized least squares problems. The presented nonstationary version of the algorithm uses dynamic updating of the weights applied to the subdomains in reconstituting the gl ..."
Abstract
The method of multisplitting, implemented as a restricted additive Schwarz-type algorithm, is extended for the solution of regularized least squares problems. The presented nonstationary version of the algorithm uses dynamic updating of the weights applied to the subdomains in reconstituting the global solution. Standard convergence results follow from extensive prior literature on linear multisplitting schemes. Additional convergence results on nonstationary iterations yield convergence conditions for the presented nonstationary multisplitting algorithm. The global iteration uses repeated solves of local problems with changing right-hand sides but a fixed system matrix. These problems are solved...
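The "fixed system matrix, changing right-hand sides" pattern mentioned at the end can be exploited by factoring once and reusing the factorization for every solve. A small SciPy sketch for Tikhonov-regularized solves (the dimensions and the regularization parameter `lam` are illustrative assumptions, and this is not the multisplitting algorithm itself):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(3)
m, n, lam = 40, 10, 1e-2
A = rng.standard_normal((m, n))

# fixed regularized normal-equations matrix: Cholesky-factor it once
M = A.T @ A + lam * np.eye(n)
factor = cho_factor(M)

xs = []
for _ in range(3):                      # changing right-hand sides
    b = rng.standard_normal(m)
    xs.append(cho_solve(factor, A.T @ b))

# each x solves the Tikhonov system (AᵀA + lam I) x = Aᵀ b
print(np.linalg.norm(M @ xs[-1] - A.T @ b))
```

Each additional right-hand side costs only triangular solves, which is why reusing the factorization (or, as in the paper, recycled Krylov information) pays off across the global iteration.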
Parallel Execution Time Analysis for Least Squares Problems on Distributed Memory Architectures
"... In this paper we study the parallelization of PCGLS, a basic iterative method which main idea is to organize the computation of conjugate gradient method with preconditioner applied to normal equations. Two important schemes are discussed. What is the best possible data distribution and which commun ..."
Abstract
In this paper we study the parallelization of PCGLS, a basic iterative method whose main idea is to organize the computation of the conjugate gradient method, with a preconditioner, applied to the normal equations. Two important questions are discussed: what is the best possible data distribution, and which communication network topology is most suitable for solving least squares problems on massively parallel distributed-memory computers. A theoretical model of the data distribution and communication phases is presented which allows us to give a detailed execution-time complexity analysis and to investigate its usefulness. It is shown that an implementation of PCGLS with a row-block decomposition of the coefficient matrix on a ring communication structure is the most efficient choice. Performance tests of the developed parallel PCGLS algorithm have been carried out on the massively parallel distributed-memory system Parsytec, and the experimental timing results are compared with the theoretical execution-time complexity analysis.
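A serial, unpreconditioned sketch of CGLS, the method that PCGLS distributes: conjugate gradients on the normal equations AᵀAx = Aᵀb using only products with A and Aᵀ. The problem sizes and stopping rule are illustrative assumptions, and no preconditioner is applied here:

```python
import numpy as np

def cgls(A, b, iters=50, tol=1e-10):
    """CGLS: CG on the normal equations without forming AᵀA explicitly.
    Only matrix-vector products with A and Aᵀ are needed, which is what
    makes row-block parallel distributions of A natural."""
    x = np.zeros(A.shape[1])
    r = b - A @ x            # residual of the least squares system
    s = A.T @ r              # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol * tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
x = cgls(A, b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x - x_ref))   # should be tiny for this small problem
```

In the parallel setting the products `A @ p` and `A.T @ r` are the steps whose data distribution and communication pattern the paper analyzes.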