Results 1–10 of 15
Computational Issues for a New Class of Preconditioners
, 1999
Abstract

Cited by 11 (9 self)
In this paper we consider solving a sequence of weighted linear least squares problems where the only changes from one problem to the next are the weights and the right-hand side (or data). We alternate between iterative and direct methods to solve the normal equations for the least squares problems. The direct method is the Cholesky factorization. For the iterative method we discuss a class of preconditioners based on a low-rank correction of a Cholesky factorization of a weighted normal equation coefficient matrix. Different ways to compute the preconditioner are given. Further, a sparse algorithm for modifying the Cholesky factors by a low-rank matrix is derived.
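The alternating scheme this abstract describes can be sketched in a few lines. The dimensions, weight ranges, and data below are invented for illustration; this is not the authors' implementation, only a minimal demonstration of factoring with Cholesky on even problems and running conjugate gradients preconditioned by the most recent factor on odd ones:

```python
import numpy as np

# Toy data; the alternating scheme, not the specific problems, is the point.
rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.standard_normal((m, n))

def cholesky_solve(L, rhs):
    # Solve (L L^T) u = rhs by forward and back substitution.
    return np.linalg.solve(L.T, np.linalg.solve(L, rhs))

def pcg(M, rhs, L, tol=1e-10, maxit=50):
    # Conjugate gradients on M u = rhs, preconditioned by L L^T.
    x = np.zeros_like(rhs)
    r = rhs.copy()
    z = cholesky_solve(L, r)
    p = z.copy()
    for _ in range(maxit):
        Mp = M @ p
        alpha = (r @ z) / (p @ Mp)
        x += alpha * p
        r_new = r - alpha * Mp
        if np.linalg.norm(r_new) < tol:
            break
        z_new = cholesky_solve(L, r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

L = None
for k in range(4):                      # a short sequence of problems
    w = rng.uniform(0.5, 2.0, m)        # only weights and data change
    b = rng.standard_normal(m)
    M = A.T @ (w[:, None] * A)          # weighted normal matrix A^T W A
    rhs = A.T @ (w * b)
    if k % 2 == 0:                      # direct step: fresh Cholesky factor
        L = np.linalg.cholesky(M)
        x = cholesky_solve(L, rhs)
    else:                               # iterative step: reuse old factor
        x = pcg(M, rhs, L)
```

Because consecutive problems differ only in the weights, the old factor stays a good preconditioner; the low-rank correction of the abstract would sharpen it further.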
On the Convergence of an Inexact Primal-Dual Interior Point Method for Linear Programming
, 2000
Abstract

Cited by 11 (1 self)
The inexact primal-dual interior point method discussed in this paper chooses a new iterate along an approximation to the Newton direction. The method is the Kojima, Megiddo, and Mizuno globally convergent infeasible interior point algorithm. The inexact variation takes distinct step lengths in the primal and dual spaces and is globally convergent. Key Words. Linear programming, inexact primal-dual interior point algorithm, inexact search direction, short step lengths, termination criteria, global convergence. 1 Introduction. Consider the primal linear programming problem: minimize c^T x subject to Ax = b, x >= 0, (1a) where A is an m-by-n matrix of full rank m, b an m-vector, and c an n-vector; and its dual problem: maximize b^T y subject to A^T y + z = c, z >= 0. (1b) (Technical report number 188, Department of Informatics, University of Bergen.) The optimality conditions for the linear program pair (1a) and (1b) are the Karush-Kuhn-Tucker (KKT) conditions: F(x;...
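To make the KKT conditions concrete, here is a small sketch (toy data, not from the paper) that evaluates the residual F(x, y, z) = (Ax - b, A^T y + z - c, XZe) whose root an infeasible interior point method seeks:

```python
# Toy evaluation of the KKT residual for the pair (1a)-(1b); an infeasible
# interior point method drives this residual to zero while keeping x, z > 0.
# The data and the optimal point below are made up for illustration.
def kkt_residual(A, b, c, x, y, z):
    m, n = len(A), len(A[0])
    primal = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
    dual = [sum(A[i][j] * y[i] for i in range(m)) + z[j] - c[j]
            for j in range(n)]
    comp = [x[j] * z[j] for j in range(n)]   # complementarity: X Z e
    return primal + dual + comp

# Toy LP: minimize x1 + x2 subject to x1 + x2 = 1, x >= 0.
A, b, c = [[1.0, 1.0]], [1.0], [1.0, 1.0]
F = kkt_residual(A, b, c, x=[0.5, 0.5], y=[1.0], z=[0.0, 0.0])
```

At the chosen optimal point all three blocks of F vanish; an inexact method only requires the Newton step for F to be computed approximately.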
On the Properties of Preconditioners for Robust Linear Regression
, 2000
Abstract

Cited by 4 (3 self)
In this paper, we consider solving the robust linear regression problem y = Ax + ε by Newton's method and the iteratively reweighted least squares method. We show that each of these methods can be combined with a preconditioned conjugate gradient least squares algorithm to solve large, sparse, rectangular systems of linear algebraic equations efficiently. We consider the constant preconditioner A^T A and preconditioners based on low-rank updates and/or downdates of existing matrix factorizations. Numerical results are given to demonstrate the effectiveness of these preconditioners. Keywords: Robust regression, Iteratively reweighted least squares, Newton's method, Conjugate gradient least squares method, Preconditioner. 1 Introduction. Consider the standard linear regression model y = Ax + ε, (1) where y ∈ R^m is a vector of observations, A ∈ R^(m×n) (m > n) is the data or design matrix of rank n, x ∈ R^n is the vector of unknown parameters, and Technical Report No. 184, D...
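A bare-bones illustration of the IRLS idea, reduced to a one-parameter model so that the weighted least squares step is a scalar division (the Huber-type weighting threshold, data, and iteration count below are illustrative, not the paper's):

```python
# One-parameter IRLS sketch for y = a*x + eps with Huber-type weights.
def irls_slope(xs, ys, delta=1.0, iters=30):
    # Plain least squares start.
    a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    for _ in range(iters):
        r = [y - a * x for x, y in zip(xs, ys)]
        # Huber weights: 1 inside the threshold, delta/|r| outside.
        w = [1.0 if abs(ri) <= delta else delta / abs(ri) for ri in r]
        a = (sum(wi * x * y for wi, x, y in zip(w, xs, ys))
             / sum(wi * x * x for wi, x in zip(w, xs)))
    return a

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 100.0]   # last observation is a gross outlier
slope = irls_slope(xs, ys)    # stays close to 2; plain LS gives ~14.3
```

Each pass solves a weighted least squares problem with fresh weights; in the large sparse setting each such solve is where the preconditioned conjugate gradient least squares algorithm of the abstract comes in.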
A New Function for Robust Linear Regression: An Iterative Approach
 16th IMACS WORLD CONGRESS 2000 on Scientific Computation, Applied Mathematics and Simulation
, 2000
Abstract

Cited by 3 (2 self)
In this paper, we consider solving the robust linear regression problem. We show that IRLS and Newton's method can each be combined with a preconditioned conjugate gradient least squares method to solve large, sparse, rectangular systems of linear algebraic equations efficiently. We define a new function that leads to a cheap preconditioner. Further, for this function, we show that the upper bound on the condition number of the preconditioned matrix is independent of the conditioning of the data matrix (it is determined by a predefined constant). We give numerical results that demonstrate the effectiveness of preconditioners based on this function. Key words: Robust regression, Iteratively reweighted least squares, Newton's method, New weighting function, Conjugate gradient least squares method, Preconditioner. AMS subject classifications: 62J05, 65D10, 65F10, 65F20. 1 Introduction. Consider the standard linear regression model y = Ax + ε, (1) where y ∈ R^m is a vector of observations,...
Preconditioning Indefinite Systems in Interior Point Methods for Large Scale Linear Optimization
, 2007
Abstract

Cited by 2 (1 self)
We discuss the use of the preconditioned conjugate gradient method for solving the reduced KKT systems arising in interior point algorithms for linear programming. The (indefinite) augmented system form of this linear system has a number of advantages, notably a higher degree of sparsity than the (positive definite) normal equations form. Therefore we use the conjugate gradient method to solve the augmented system and look for a suitable preconditioner. An explicit null space representation of linear constraints is constructed by using a nonsingular basis matrix identified from an estimate of the optimal partition in the linear program. This is achieved by means of recently developed efficient basis matrix factorisation techniques which exploit hypersparsity and are used in implementations of the revised simplex method. The approach has been implemented within the HOPDM interior point solver and applied to medium- and large-scale problems from public domain test collections. Computational experience is encouraging.
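A small numerical sketch of the relation between the two forms mentioned in this abstract: the indefinite augmented system and the positive definite normal equations yield the same step. The notation is an assumption on my part (Theta stands for the usual positive diagonal scaling of interior point methods), and all data is synthetic:

```python
import numpy as np

# Toy comparison: augmented system vs. normal equations forms of one step.
rng = np.random.default_rng(1)
m, n = 3, 6
A = rng.standard_normal((m, n))
theta = rng.uniform(0.1, 10.0, n)   # positive diagonal scaling
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Augmented (indefinite) form: [[-Theta^{-1}, A^T], [A, 0]] [dx; dy] = [f; g].
K = np.block([[-np.diag(1.0 / theta), A.T],
              [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([f, g]))
dx, dy = sol[:n], sol[n:]

# Normal equations form: (A Theta A^T) dy = g + A Theta f, then recover dx.
dy2 = np.linalg.solve(A @ np.diag(theta) @ A.T, g + A @ (theta * f))
dx2 = theta * (A.T @ dy2 - f)
```

The augmented matrix K keeps A unsquared (hence sparser), at the price of indefiniteness, which is why the choice of preconditioner is the crux of the paper.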
Properties and Computational Issues of a Preconditioner for Interior Point Methods
, 1999
Abstract

Cited by 1 (1 self)
This is a collection of four conference proceedings on scientific computation. In the proceedings, we discuss solving a sequence of linear systems arising from the application of an interior point method to a linear programming problem. The sequence of linear systems is solved by alternating between a direct and an iterative method. The preconditioner is based on low-rank modifications of the coefficient matrix where a direct solution technique has been used. We compare two different techniques of forming the low-rank modification matrix; namely one by Wang and O'Leary [11] and the other by Baryamureeba, Steihaug and Zhang [3]. The theory and numerical testing strongly support the latter. We derive a sparse algorithm for modifying the Cholesky factors by a low-rank matrix, discuss the computational issues of this preconditioner, and finally give numerical results that show the approach of alternating between a direct and an iterative method to be promising. Key Words. Linear Programmi...
The Impact of Equal-Weighting of Both Low-Confidence and High-Confidence Observations on Robust Linear Regression Computation
, 2000
Abstract
Equal weighting of low-confidence observations and high-confidence observations occurs for the Huber, Talwar, and Barya weighting functions when Newton's method is used to solve robust linear regression problems. This leads to easy updates and downdates of existing matrix factorizations, or easy computation of coefficient matrices in linear systems from previous ones. Thus these functions have proven to be computationally cheap (the Huber [4] function is regarded by many as the most used function) when the linear system is solved by direct methods. For the case of iterative methods, this kind of weighting of observations leads to very efficient preconditioners for the Barya function. The Talwar function, unlike the Huber function, has also been shown to work well with iterative methods. We will give numerical results to validate our claims. Key words: Robust linear regression, Iteratively reweighted least squares method, Newton's method, New weighting function, Conjugate gradient least squares ...
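The computational point can be sketched in a few lines: when only a handful of low-confidence observations receive a weight different from one, the weighted normal matrix is the unweighted one plus a correction whose rank equals the number of reweighted rows, which is what makes updates and downdates cheap. The data and the single reduced weight below are invented for illustration:

```python
import numpy as np

# Synthetic demo: with two-valued weights, only the reweighted rows matter.
rng = np.random.default_rng(2)
m, n = 12, 4
A = rng.standard_normal((m, n))
w = np.ones(m)
low = [3, 7]              # indices of the low-confidence observations
w[low] = 0.25             # a single reduced weight, Talwar-style

full = A.T @ (w[:, None] * A)                 # A^T W A formed directly
correction = sum((w[i] - 1.0) * np.outer(A[i], A[i]) for i in low)
cheap = A.T @ A + correction                  # rank-2 update of A^T A
```

A direct method can propagate this rank-2 change into an existing Cholesky factor instead of refactoring, and an iterative method can use the unmodified factor as a preconditioner.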
Preconditioning for Iterative Methods in Robust Linear Regression
, 2000
Abstract
In this paper, we consider solving the robust linear regression problem with an inexact Newton method combined with a preconditioned conjugate gradient least squares algorithm. The efficiency of this approach for solving large and sparse problems depends on the preconditioner. Preconditioners based on low-rank updates or downdates of an existing matrix factorization are presented. Numerical results are given to demonstrate the effectiveness of these preconditioners. Key words: Robust regression, Iteratively reweighted least squares, Newton's method, Conjugate gradient least squares method, Preconditioner. AMS subject classifications: 65F10, 90C30, 65F22, 62J05. 1 Introduction. Consider the standard linear regression model y = Ax + ε, (1) where y ∈ R^m is a vector of observations, A ∈ R^(m×n) (m > n) is the data or design matrix of rank n, x ∈ R^n is the vector of parameters to be determined, and ε ∈ R^m is the unknown vector of measurement errors. The residual vector r is given by r(x)...
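For reference, here is an unpreconditioned CGLS sketch: conjugate gradients applied to the normal equations A^T A x = A^T y without ever forming A^T A. The data is synthetic, and the preconditioners this abstract studies would be folded into the iteration rather than applied as written here:

```python
import numpy as np

# Minimal CGLS: solve min ||Ax - y|| via CG on the normal equations,
# using only products with A and A^T (A^T A is never formed).
def cgls(A, y, tol=1e-12, maxit=100):
    x = np.zeros(A.shape[1])
    r = y.copy()                 # residual y - A x in observation space
    s = A.T @ r                  # normal-equations residual A^T r
    p = s.copy()
    gamma = s @ s
    for _ in range(maxit):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((15, 4))
y = rng.standard_normal(15)
x = cgls(A, y)
```

Working with A and A^T directly keeps the iteration sparse-friendly and avoids the squared conditioning of an explicitly formed A^T A.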
Application of a New Class of Preconditioners to LargeScale Linear Programming Problems
Abstract
In every primal-dual interior point method, a sequence of weighted linear least squares problems is solved, where the only changes from one problem to the next are the weights and the right-hand side (or data). We propose a mixed primal-dual method where we solve the weighted least squares problems with a direct method for every even interior point iteration and an iterative method for every odd iteration. A class of preconditioners based on a low-rank correction of a Cholesky factorization of a weighted normal equation coefficient matrix is introduced. Key Words. Weighted linear least squares, Preconditioners, Preconditioned conjugate gradient for least squares, Linear programming, Primal-dual infeasible interior point algorithms. 1 Introduction. An interior point algorithm solves the linear programming problem by generating a sequence of interior points from an initial interior point. At every interior point iteration weighted linear least squares problems are solved. The new class...
Application of a Class of Preconditioners to Large Scale Linear Programming Problems
Abstract
In most interior point methods for linear programming, a sequence of weighted linear least squares problems is solved, where the only changes from one iteration to the next are the weights and the right-hand side. The weighted least squares problems are usually solved as weighted normal equations by the direct method of Cholesky factorization. In this paper, we consider solving the weighted normal equations by a preconditioned conjugate gradient method at every other iteration. We use a class of preconditioners based on a low-rank correction to a Cholesky factorization obtained from the previous iteration. Numerical results show that when properly implemented, the approach of combining direct and iterative methods is promising. Key Words. Weighted linear least squares, Parallel processing, Preconditioners, Linear programming, Primal-dual infeasible interior point algorithms. 1 Introduction. The class of preconditioners we will consider is a low-rank correction of a Cholesky factoriz...