Results 1–10 of 16
Computational Issues for a New Class of Preconditioners
 Large-Scale Scientific Computations of Engineering and Environmental Problems II, Series Notes on Numerical Fluid Mechanics, VIEWEG 73
On the Convergence of an Inexact Primal-Dual Interior Point Method for Linear Programming
, 2000
Abstract

Cited by 10 (1 self)
The inexact primal-dual interior point method which is discussed in this paper chooses a new iterate along an approximation to the Newton direction. The method is the Kojima, Megiddo, and Mizuno globally convergent infeasible interior point algorithm. The inexact variation takes distinct step lengths in the primal and dual spaces and is globally convergent. Key Words. Linear programming, inexact primal-dual interior point algorithm, inexact search direction, short step lengths, termination criteria, global convergence. 1 Introduction Consider the primal linear programming problem: minimize c^T x subject to Ax = b, x ≥ 0, (1a) where A is an m-by-n matrix of full rank m, b an m-vector, and c an n-vector; and its dual problem: maximize b^T y subject to A^T y + z = c, z ≥ 0. (1b) (Technical report number 188, Department of Informatics, University of Bergen.) The optimality conditions for the linear program pair (1a) and (1b) are the Karush-Kuhn-Tucker (KKT) conditions: F(x;...
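The KKT system F(x, y, z) = 0, together with x ≥ 0 and z ≥ 0, can be written as a residual function of the primal and dual variables. A minimal NumPy sketch; the matrices and the optimal point below are made-up toy data, not from the paper:

```python
import numpy as np

def kkt_residual(A, b, c, x, y, z):
    """Residual F(x, y, z) of the KKT conditions for the LP pair (1a)-(1b).

    F = 0 together with x >= 0, z >= 0 characterizes optimality:
      primal feasibility   Ax - b = 0,
      dual feasibility     A^T y + z - c = 0,
      complementarity      x_i * z_i = 0 for all i.
    """
    r_primal = A @ x - b
    r_dual = A.T @ y + z - c
    r_comp = x * z                      # componentwise products
    return np.concatenate([r_primal, r_dual, r_comp])

# Toy example: at a primal-dual optimal point the residual is zero.
A = np.array([[1.0, 1.0]])              # 1-by-2, full row rank
b = np.array([1.0])
c = np.array([1.0, 2.0])
x = np.array([1.0, 0.0])                # optimal primal point
y = np.array([1.0])
z = c - A.T @ y                         # z = (0, 1) >= 0
print(np.linalg.norm(kkt_residual(A, b, c, x, y, z)))  # 0.0
```

An inexact method accepts a step for which this residual is only approximately reduced, which is what permits an iterative inner solver.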
Preconditioning Indefinite Systems in Interior Point Methods for Large Scale Linear Optimization
, 2007
Abstract

Cited by 4 (1 self)
We discuss the use of the preconditioned conjugate gradient method for solving the reduced KKT systems arising in interior point algorithms for linear programming. The (indefinite) augmented system form of this linear system has a number of advantages, notably a higher degree of sparsity than the (positive definite) normal equations form. Therefore we use the conjugate gradient method to solve the augmented system and look for a suitable preconditioner. An explicit null space representation of linear constraints is constructed by using a nonsingular basis matrix identified from an estimate of the optimal partition in the linear program. This is achieved by means of recently developed efficient basis matrix factorisation techniques which exploit hyper-sparsity and are used in implementations of the revised simplex method. The approach has been implemented within the HOPDM interior point solver and applied to medium- and large-scale problems from public domain test collections. Computational experience is encouraging.
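The two linear-system forms can be contrasted numerically. A small sketch, assuming the usual interior-point scaling matrix Θ = XZ^{-1}; the random data below is illustrative only, not HOPDM's implementation:

```python
import numpy as np

# Two equivalent forms of the per-iteration linear system, for a
# diagonal scaling Theta > 0.
m, n = 2, 4
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))         # full row rank almost surely
theta = rng.uniform(0.5, 2.0, n)        # diagonal of Theta

# Augmented (indefinite) system matrix: [[-Theta^{-1}, A^T], [A, 0]]
K = np.block([[-np.diag(1.0 / theta), A.T],
              [A, np.zeros((m, m))]])

# Normal equations (positive definite) matrix: A Theta A^T
M = A @ np.diag(theta) @ A.T

# K is symmetric indefinite with inertia (m, n, 0); M is SPD.
eigs_K = np.linalg.eigvalsh(K)
eigs_M = np.linalg.eigvalsh(M)
print((eigs_K < 0).sum(), (eigs_K > 0).sum())   # 4 2  (n negative, m positive)
print(bool((eigs_M > 0).all()))                 # True
```

Because K is indefinite, standard CG theory does not apply directly, which is why the choice of preconditioner for the augmented system is the crux of the paper.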
A New Function for Robust Linear Regression: An Iterative Approach
 16th IMACS WORLD CONGRESS 2000 on Scientific Computation, Applied Mathematics and Simulation
, 2000
Abstract

Cited by 3 (2 self)
In this paper, we consider solving the robust linear regression problem. We show that IRLS and Newton's method can each be combined with the preconditioned conjugate gradient least squares method to solve large, sparse, rectangular systems of linear, algebraic equations efficiently. We define a new function that leads to a cheap preconditioner. Further, for this function, we show that the upper bound on the condition number of the preconditioned matrix is independent of the conditioning of the data matrix (it is determined by a predefined constant). We give numerical results that demonstrate the effectiveness of preconditioners based on this function. Key words: Robust regression, Iteratively reweighted least squares, Newton's method, New weighting function, Conjugate gradient least squares method, Preconditioner. AMS subject classifications: 62J05, 65D10, 65F10, 65F20. 1 Introduction Consider the standard linear regression model y = Ax + ε, (1) where y ∈ R^m is a vector of observations,...
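A minimal dense sketch of the IRLS outer loop, using classical Huber weights as a stand-in (the paper's new weighting function is not reproduced here) and a dense least-squares solver where preconditioned CGLS would be used at scale:

```python
import numpy as np

def irls(A, y, delta=1.0, iters=20):
    """Iteratively reweighted least squares for robust regression (a sketch).

    Each outer iteration solves a weighted least-squares subproblem; for
    large sparse A this inner solve is where (preconditioned) CGLS is used.
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # ordinary LS start
    for _ in range(iters):
        r = y - A @ x
        # Huber weights: full weight inside the cutoff, damped outside.
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return x

# Line fit y = 1 + 2t contaminated by one gross outlier: the robust
# estimate stays close to (1, 2), where ordinary LS would not.
t = np.linspace(0, 1, 50)
A = np.column_stack([np.ones_like(t), t])
y = 1.0 + 2.0 * t
y[0] = 50.0                                       # gross outlier
x_robust = irls(A, y, delta=0.5)
print(np.round(x_robust, 2))
```

The preconditioner question the paper addresses arises because the weight matrix changes at every outer iteration, so the inner CGLS coefficient matrix changes too.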
Properties and Computational Issues of a Preconditioner for Interior Point Methods
, 1999
Abstract

Cited by 1 (1 self)
This is a collection of four conference proceedings on scientific computation. In the proceedings, we discuss solving a sequence of linear systems arising from the application of an interior point method to a linear programming problem. The sequence of linear systems is solved by alternating between a direct and an iterative method. The preconditioner is based on low-rank modifications of the coefficient matrix where a direct solution technique has been used. We compare two different techniques of forming the low-rank modification matrix; namely one by Wang and O'Leary [11] and the other by Baryamureeba, Steihaug and Zhang [3]. The theory and numerical testing strongly support the latter. We derive a sparse algorithm for modifying the Cholesky factors by a low-rank matrix, discuss the computational issues of this preconditioner, and finally give numerical results that show the approach of alternating between a direct and an iterative method to be promising. Key Words. Linear Programmi...
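The building block behind such preconditioners is a rank-one modification of a Cholesky factor. A dense sketch of the classical rank-one update (the paper's sparse algorithm is not reproduced; `cholupdate` is a hypothetical name):

```python
import numpy as np

def cholupdate(L, v):
    """Rank-one Cholesky update: returns lower-triangular L1 with
    L1 @ L1.T == L @ L.T + np.outer(v, v), using Givens-style rotations.
    Dense illustrative version; a sparse variant would exploit the
    sparsity pattern of v, as the paper's algorithm does."""
    L = L.copy()
    v = v.copy()
    n = len(v)
    for k in range(n):
        r = np.hypot(L[k, k], v[k])     # rotated diagonal entry
        c = r / L[k, k]
        s = v[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * v[k+1:]) / c
            v[k+1:] = c * v[k+1:] - s * L[k+1:, k]
    return L

# Verify on a small SPD matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
M = B @ B.T + 4.0 * np.eye(4)           # a small SPD matrix
L = np.linalg.cholesky(M)
v = rng.standard_normal(4)
L1 = cholupdate(L, v)
print(bool(np.allclose(L1 @ L1.T, M + np.outer(v, v))))  # True
```

A rank-k modification is obtained by applying k such updates in sequence, which is far cheaper than refactorizing when k is small.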
SOLVING SCALARIZED MULTIOBJECTIVE NETWORK FLOW PROBLEMS WITH AN INTERIOR POINT METHOD
Abstract
Abstract. In this paper we present a primal-dual interior-point algorithm to solve a class of multiobjective network flow problems. More precisely, our algorithm is an extension of the single-objective primal infeasible dual feasible inexact interior point method to multiobjective linear network flow problems. Our algorithm is contrasted with standard interior point methods and experimental results on bi-objective instances are reported. The multiobjective instances are converted into single-objective problems with the aid of an achievement function, which is particularly adequate for interactive decision-making methods.
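An achievement scalarizing function in the Wierzbicki sense is one common way to perform such a conversion; the sketch below uses the augmented Chebyshev form, which may differ from the paper's exact choice:

```python
import numpy as np

def achievement(f, z_ref, w, rho=1e-4):
    """Augmented Chebyshev achievement function (one common choice,
    not necessarily the paper's): for objective values f, reference
    point z_ref and positive weights w, score the outcome by
    max_i w_i (f_i - z_ref_i) + rho * sum_i w_i (f_i - z_ref_i).
    Lower is better; minimizing it over the feasible set yields a
    (weakly) Pareto-optimal point closest to the reference point."""
    d = w * (np.asarray(f) - np.asarray(z_ref))
    return d.max() + rho * d.sum()

# Comparing two bi-objective outcomes against the reference point (0, 0):
z_ref = [0.0, 0.0]
w = np.array([1.0, 1.0])
print(achievement([2.0, 3.0], z_ref, w) < achievement([4.0, 4.5], z_ref, w))  # True
```

In an interactive method the decision maker moves z_ref between solves, and each solve is a single-objective problem the interior point algorithm can handle.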
Properties of Preconditioners for Robust Linear Regression
Abstract
In this paper, we consider solving the robust linear regression problem by an inexact Newton method and an iteratively reweighted least squares method. We show that each of these methods can be combined with the preconditioned conjugate gradient least squares algorithm to solve large, sparse systems of linear equations efficiently. We consider the constant preconditioner and preconditioners based on low-rank updates and downdates of existing matrix factorizations. Numerical results are given to demonstrate the effectiveness of these preconditioners.
Convergence Analysis of a Long-Step Primal-Dual Infeasible Interior-Point LP Algorithm Based on Iterative Linear Solvers
, 2003
Abstract
In this paper, we consider a modified version of a well-known long-step primal-dual infeasible IP algorithm for solving the linear program min{c^T x : Ax = b, x ≥ 0}, A ∈ R^{m×n}, where the search directions are computed by means of an iterative linear solver applied to a preconditioned normal system of equations. We show that the number of (inner) iterations of the iterative linear solver at each (outer) iteration of the algorithm is bounded by a polynomial in m, n and a certain condition number associated with A, while the number of outer iterations is bounded by O(n^2 log ε^{-1}), where ε is a given relative accuracy level. As a special case, it follows that the total number of inner iterations is polynomial in m and n for the minimum cost network flow problem.
The Impact of Equal Weighting of Both Low-Confidence and High-Confidence Observations on Robust Linear Regression Computation
, 2000
Abstract
Equal weighting of low-confidence observations and high-confidence observations occurs for the Huber, Talwar, and Barya weighting functions when Newton's method is used to solve robust linear regression problems. This leads to easy updates and downdates of existing matrix factorizations or easy computation of coefficient matrices in linear systems from previous ones. Thus these functions have proven to be computationally cheap (the Huber [4] function is regarded by many as the most used function) when the linear system is solved by direct methods. For the case of iterative methods, this kind of weighting of observations leads to very efficient preconditioners for the Barya function. The Talwar function, unlike the Huber function, has also been shown to work well with iterative methods. We will give numerical results to validate our claims. key words: Robust linear regression, Iteratively reweighted least squares method, Newton's method, New weighting function, Conjugate gradient least squares ...
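The equal-weighting behaviour is easy to see from the weight functions themselves. A sketch of the Huber and Talwar weights (the cutoff values below are conventional tuning constants, not taken from the paper):

```python
import numpy as np

def huber_weight(r, delta=1.345):
    """Huber weights: every low-residual (high-confidence) observation
    gets the same weight 1; beyond the cutoff the weight is delta/|r|."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= delta,
                    1.0,
                    delta / np.maximum(np.abs(r), np.finfo(float).tiny))

def talwar_weight(r, delta=2.795):
    """Talwar weights: 1 inside the cutoff, 0 outside. Observations are
    either kept at full weight or dropped entirely, which is what makes
    factorization updates/downdates between Newton iterations cheap."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= delta, 1.0, 0.0)

r = np.array([0.5, -1.0, 3.0, 10.0])
print(huber_weight(r))    # [1., 1., 0.448..., 0.1345]
print(talwar_weight(r))   # [1., 1., 0., 0.]
```

Between Newton iterations only observations whose residuals cross a cutoff change weight, so the coefficient matrix changes by a low-rank term, one rank per flipped observation.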