Results 1–10 of 23
LOQO: An interior point code for quadratic programming
1994
"... ABSTRACT. This paper describes a software package, called LOQO, which implements a primaldual interiorpoint method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming with only brief mention of the extensions to convex ..."
Abstract

Cited by 156 (9 self)
 Add to MetaCart
This paper describes a software package, called LOQO, which implements a primal-dual interior-point method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming, with only brief mention of the extensions to convex and general nonlinear programming, since a detailed paper describing these extensions was published recently elsewhere. In particular, we emphasize the importance of establishing and maintaining symmetric quasidefiniteness of the reduced KKT system. We show that problems given in the industry-standard MPS format can be formulated in such a way as to provide quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems.
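The symmetric quasidefinite structure the abstract emphasizes can be illustrated on a tiny convex QP. The sketch below is a hypothetical illustration (not LOQO's code): it assembles a reduced KKT matrix whose (1,1) block is negative definite and whose (2,2) block is positive definite, then checks that the inertia matches the block sizes, which is the property that guarantees an LDL^T factorization exists under any symmetric permutation.

```python
import numpy as np

# Hypothetical illustration: at an interior point of the convex QP
#   min (1/2) x^T Q x + c^T x  s.t.  A x = b, x >= 0,
# a reduced KKT system has the symmetric quasidefinite form
#   [ -(Q + X^{-1} Z)   A^T ]
#   [  A                D   ]
# with a negative definite (1,1) block and positive definite (2,2) block
# (a small regularization D keeps the (2,2) block positive definite).

def reduced_kkt(Q, A, x, z, reg=1e-8):
    n, m = Q.shape[0], A.shape[0]
    H = Q + np.diag(z / x)          # Q + X^{-1} Z, positive definite
    K = np.zeros((n + m, n + m))
    K[:n, :n] = -H
    K[:n, n:] = A.T
    K[n:, :n] = A
    K[n:, n:] = reg * np.eye(m)     # keeps quasidefiniteness
    return K

# tiny example: two variables, one equality constraint
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
A = np.array([[1.0, 1.0]])
x = np.array([0.5, 0.5]); z = np.array([1.0, 1.0])
K = reduced_kkt(Q, A, x, z)

# quasidefinite inertia: n negative and m positive eigenvalues
eigs = np.linalg.eigvalsh(K)
print(sum(e < 0 for e in eigs), "negative,", sum(e > 0 for e in eigs), "positive")
# 2 negative, 1 positive
```

The inertia matching the block dimensions is exactly what makes the Cholesky-like factorization stable regardless of pivot order, which is why the abstract stresses maintaining quasidefiniteness.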
A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
1995
"... We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worstcase analysis shows that the number of iterations ..."
Abstract

Cited by 87 (21 self)
 Add to MetaCart
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
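The truncated conjugate-gradient idea the abstract exploits can be sketched generically. The function below is a plain CG solver for symmetric positive definite systems (assumed names, not the authors' code); capping `max_iter` produces exactly the kind of approximate search direction whose use, per the abstract, does not break the polynomial iteration bound.

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=None):
    """Plain conjugate gradient for A x = b, with A symmetric positive
    definite.  Stopping after max_iter iterations returns an approximate
    solution, mirroring the truncated-CG idea (sketch only)."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b.copy()          # residual b - A x (x starts at 0)
    p = r.copy()          # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol ** 2:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(A, b)
print(np.allclose(A @ x, b))  # True
```

In exact arithmetic CG solves an n-by-n system in at most n iterations; the savings come from stopping much earlier when only a good-enough direction is needed.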
Infeasible-Interior-Point Primal-Dual Potential-Reduction Algorithms for Linear Programming
SIAM Journal on Optimization, 1995
"... . In this paper, we propose primaldual potentialreduction algorithms which can start from an infeasible interior point. We first describe two such algorithms and show that both are polynomialtime bounded. One of the algorithms decreases the TanabeToddYe primaldual potential function by a const ..."
Abstract

Cited by 20 (4 self)
 Add to MetaCart
In this paper, we propose primal-dual potential-reduction algorithms which can start from an infeasible interior point. We first describe two such algorithms and show that both are polynomial-time bounded. One of the algorithms decreases the Tanabe-Todd-Ye primal-dual potential function by a constant at each iteration, under the condition that the duality gap decreases by at most the same ratio as the infeasibility. The other reduces a new potential function, which has one more term than the Tanabe-Todd-Ye potential function, by a fixed constant at each iteration, without any other conditions on the step size. Finally, we describe modifications of these methods (incorporating centering steps) which dramatically decrease their computational complexity. Our algorithms also extend to the case of monotone linear complementarity problems. Key words: polynomial time, linear programming, primal-dual, infeasible-interior-point algorithm, potential function. AMS subject classifications: 90C05, ...
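As a rough illustration of the kind of potential function involved, here is the Tanabe-Todd-Ye potential in its common textbook form (an assumed standard form, not the paper's modified variants, which add terms and side conditions as described above). The barrier part penalizes departure from the central path, so among points with equal duality gap the centered one has the smaller potential.

```python
import numpy as np

def tty_potential(x, s, nu):
    """Tanabe-Todd-Ye primal-dual potential (common textbook form, assumed):
       phi(x, s) = (n + nu) ln(x^T s) - sum_i ln(x_i s_i),
    where nu > 0 weights duality-gap reduction against centrality."""
    n = len(x)
    return (n + nu) * np.log(x @ s) - np.sum(np.log(x * s))

nu = np.sqrt(2)
# two points with the same duality gap x^T s = 4; the centered one
# (equal products x_i * s_i) attains the smaller potential value
centered = tty_potential(np.array([1.0, 1.0]), np.array([2.0, 2.0]), nu)
off_center = tty_potential(np.array([1.0, 1.0]), np.array([1.0, 3.0]), nu)
print(centered < off_center)  # True
```

Decreasing this function by a fixed constant per iteration is what yields a polynomial bound, since the gap term dominates once the potential is low enough.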
Approximate Farkas Lemmas and Stopping Rules for Iterative InfeasiblePoint Algorithms for Linear Programming
Mathematical Programming, 1994
"... In exact arithmetic, the simplex method applied to a particular linear programming problem instance either shows that it is infeasible, shows that its dual is infeasible, or generates optimal solutions to both problems. Interiorpoint methods do not provide such clearcut information. We provide gene ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
In exact arithmetic, the simplex method applied to a particular linear programming problem instance either shows that it is infeasible, shows that its dual is infeasible, or generates optimal solutions to both problems. Interior-point methods do not provide such clear-cut information. We provide general tools (extensions of the Farkas Lemma) for concluding that a problem or its dual is likely (in a certain well-defined sense) to be infeasible, and apply them to develop stopping rules for a generic infeasible-interior-point method and for the homogeneous self-dual algorithm for linear programming. These rules allow precise conclusions to be drawn about the linear programming problem and its dual: either near-optimal solutions are produced, or we obtain "certificates" that all optimal solutions, or all feasible solutions to the primal or dual, must have large norm. Our rules thus allow more definitive interpretation of the output of such an algorithm than previous termination criteria. We...
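The flavor of an approximate Farkas certificate can be sketched as follows. Classically, a vector y with A^T y ≤ 0 and b^T y > 0 proves {x : Ax = b, x ≥ 0} empty. A relaxed version (this sketch's own inference, not the paper's exact statements): if b^T y = 1 and A^T y ≤ ε componentwise with ε > 0, then 1 = x^T(A^T y) ≤ ε‖x‖₁ for any feasible x, so every feasible point must have 1-norm at least 1/ε.

```python
import numpy as np

def norm_lower_bound(A, y):
    """Given y normalized so that b^T y = 1, return 1/eps, a lower bound on
    the 1-norm of any x >= 0 with A x = b (infinite when eps <= 0, i.e. an
    exact Farkas infeasibility certificate).  Illustrative sketch."""
    eps = np.max(A.T @ y)
    return np.inf if eps <= 0 else 1.0 / eps

A = np.array([[1.0, -1.0]])
b = np.array([1.0])
y = np.array([1.0])            # b^T y = 1, A^T y = (1, -1), so eps = 1
bound = norm_lower_bound(A, y)
# consistency check: every feasible x = (1 + t, t), t >= 0, has
# ||x||_1 = 1 + 2t >= 1, matching the bound
print(bound)  # 1.0
```

A stopping rule of this kind lets an iterative method report "any solution is huge" instead of looping toward an unattainable exact certificate.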
An Interior Point Potential Reduction Method for Constrained Equations
1995
"... We study the problem of solving a constrained system of nonlinear equations by a combination of the classical damped Newton method for (unconstrained) smooth equations and the recent interior point potential reduction methods for linear programs, linear and nonlinear complementarity problems. In gen ..."
Abstract

Cited by 11 (3 self)
 Add to MetaCart
We study the problem of solving a constrained system of nonlinear equations by a combination of the classical damped Newton method for (unconstrained) smooth equations and the recent interior point potential reduction methods for linear programs, linear and nonlinear complementarity problems. In general, constrained equations provide a unified formulation for many mathematical programming problems, including complementarity problems of various kinds and the Karush-Kuhn-Tucker systems of variational inequalities and nonlinear programs. Combining ideas from the damped Newton and interior point methods, we present an iterative algorithm for solving a constrained system of equations and investigate its convergence properties. Specialization of the algorithm and its convergence analysis to complementarity problems of various kinds and the Karush-Kuhn-Tucker systems of variational inequalities are discussed in detail. We also report the computational results of the implementation of the algo...
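The two ingredients the abstract combines, Newton steps and a damping rule that keeps iterates inside the constraint set, can be sketched generically. The toy system, the positivity constraint, and the residual-decrease damping rule below are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def damped_newton_positive(H, J, x0, iters=50):
    """Solve H(x) = 0 subject to x > 0: take Newton steps, halving the step
    length until the next iterate stays strictly positive and the residual
    norm decreases (generic damping sketch)."""
    x = x0.astype(float)
    for _ in range(iters):
        h = H(x)
        if np.linalg.norm(h) < 1e-12:
            break
        d = np.linalg.solve(J(x), -h)   # Newton direction
        t = 1.0
        while np.any(x + t * d <= 0) or \
              np.linalg.norm(H(x + t * d)) >= np.linalg.norm(h):
            t *= 0.5                    # damp until interior + descent
            if t < 1e-14:
                return x
        x = x + t * d
    return x

# toy constrained system: x1*x2 = 2 and x1 = x2, solution (sqrt 2, sqrt 2)
H = lambda x: np.array([x[0] * x[1] - 2.0, x[0] - x[1]])
J = lambda x: np.array([[x[1], x[0]], [1.0, -1.0]])
x = damped_newton_positive(H, J, np.array([3.0, 0.5]))
print(np.round(x, 6))
```

Near the solution the full step is always accepted, recovering the fast local convergence of undamped Newton.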
On the Convergence of an Inexact PrimalDual Interior Point Method for Linear Programming
2000
"... The inexact primaldual interior point method which is discussed in this paper chooses a new iterate along an approximation to the Newton direction. The method is the Kojima, Megiddo, and Mizuno globally convergent infeasible interior point algorithm. The inexact variation takes distinct step length ..."
Abstract

Cited by 11 (1 self)
 Add to MetaCart
The inexact primal-dual interior point method discussed in this paper chooses a new iterate along an approximation to the Newton direction. The method is the Kojima, Megiddo, and Mizuno globally convergent infeasible interior point algorithm. The inexact variation takes distinct step lengths in the primal and dual spaces and is globally convergent. Key words: linear programming, inexact primal-dual interior point algorithm, inexact search direction, short step lengths, termination criteria, global convergence. 1 Introduction. Consider the primal linear programming problem

minimize $c^T x$ subject to $Ax = b,\ x \ge 0$, (1a)

where $A$ is an $m$-by-$n$ matrix of full rank $m$, $b$ an $m$-vector, and $c$ an $n$-vector; and its dual problem

maximize $b^T y$ subject to $A^T y + z = c,\ z \ge 0$. (1b)

(Technical report number 188, Department of Informatics, University of Bergen.) The optimality conditions for the linear program pair (1a) and (1b) are the Karush-Kuhn-Tucker (KKT) conditions: F(x, ...
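The KKT conditions for the pair (1a)/(1b) can be written as a residual map, which is the F whose zeros an (inexact) Newton method targets. A minimal sketch of the standard conditions with assumed toy data:

```python
import numpy as np

def kkt_residual(A, b, c, x, y, z):
    """Residual of the standard KKT map for the LP pair (1a)/(1b):
    primal feasibility, dual feasibility, and complementarity x_i z_i = 0.
    An inexact method accepts an approximate Newton direction for F = 0."""
    return np.concatenate([
        A @ x - b,            # primal feasibility: A x = b
        A.T @ y + z - c,      # dual feasibility:   A^T y + z = c
        x * z,                # complementarity:    x_i z_i = 0
    ])

# toy LP: min x1 + 2 x2  s.t.  x1 + x2 = 1, x >= 0
A = np.array([[1.0, 1.0]])
b = np.array([1.0]); c = np.array([1.0, 2.0])
# optimal primal-dual point: x = (1, 0), y = 1, z = c - A^T y = (0, 1)
r = kkt_residual(A, b, c, np.array([1.0, 0.0]), np.array([1.0]),
                 np.array([0.0, 1.0]))
print(np.allclose(r, 0))  # True
```

A zero residual certifies optimality of both (1a) and (1b) simultaneously; the method above drives this residual to zero along approximate Newton directions.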
Linear Algebra for Semidefinite Programming
1995
"... Let M n (IK) denote the set of all n 2 n matrices with elements in IK, where IK represents the field IR of real numbers, the field 0 C of complex numbers or the (noncommutative) field IH of quaternion numbers. We call a subset T of M n (IK) a *subalgebra of M n (IK) over the field IR (or simply a ..."
Abstract

Cited by 8 (4 self)
 Add to MetaCart
Let $M_n(\mathbb{K})$ denote the set of all $n \times n$ matrices with elements in $\mathbb{K}$, where $\mathbb{K}$ represents the field $\mathbb{R}$ of real numbers, the field $\mathbb{C}$ of complex numbers, or the (noncommutative) field $\mathbb{H}$ of quaternion numbers. We call a subset $T$ of $M_n(\mathbb{K})$ a $*$-subalgebra of $M_n(\mathbb{K})$ over the field $\mathbb{R}$ (or simply a $*$-subalgebra) if (i) $T$ forms a subring of $M_n(\mathbb{K})$ with the usual addition $A + B$ and multiplication $AB$ of matrices $A, B \in M_n(\mathbb{K})$; specifically, the zero matrix $O$ and the identity matrix $I$ belong to $T$; (ii) $T$ is an $\mathbb{R}$-module, i.e., a vector space over the field $\mathbb{R}$: $\alpha A + \beta B \in T$ for every $\alpha, \beta \in \mathbb{R}$ and $A, B \in T$; (iii) $A^* \in T$ if $A \in T$, where $A^*$ denotes the conjugate transpose of $A \in M_n(\mathbb{K})$. The introduction of $*$-subalgebras $T$ provides us with a unified and compact way of handling LPs (linear programs) in $\mathbb{R}^n$, SDPs (semidefinite programs) in $M_n(\mathbb{R})$, $M_n(\mathbb{C})$ and $M_n(\mathbb{H})$, and monotone SDLCPs (semidefinite linear complementarity problems) in those spaces. We can extend t...
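A concrete instance of these axioms (an illustrative check of this reviewer's own, not from the paper): the standard embedding a + bi ↦ [[a, −b], [b, a]] realizes the complex numbers inside $M_2(\mathbb{R})$, and its image is closed under addition, multiplication, real scaling, and conjugate transpose, i.e. it is a $*$-subalgebra in the sense above.

```python
import numpy as np

def embed(a, b):
    """Standard embedding of the complex number a + bi into M_2(R)."""
    return np.array([[a, -b], [b, a]])

def in_image(M, tol=1e-12):
    """Membership test for the embedded copy of C inside M_2(R)."""
    return (abs(M[0, 0] - M[1, 1]) < tol) and (abs(M[0, 1] + M[1, 0]) < tol)

A, B = embed(1.0, 2.0), embed(3.0, -1.0)
checks = [
    in_image(A + B),      # (i)  closed under addition
    in_image(A @ B),      # (i)  closed under multiplication
    in_image(2.5 * A),    # (ii) closed under real scaling (R-module)
    in_image(A.T),        # (iii) closed under conjugate transpose
]
print(all(checks))  # True
```

The same pattern (with 4-by-4 real blocks) embeds the quaternions, which is what lets one treatment cover SDPs over all three fields.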
Monotone Semidefinite Complementarity Problems
1996
"... . In this paper, we study some basic properties of the monotone semidefinite nonlinear complementarity problem (SDCP). We show that the trajectory continuously accumulates into the solution set of the SDCP passing through the set of the infeasible but positive definite matrices under certain conditi ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
In this paper, we study some basic properties of the monotone semidefinite nonlinear complementarity problem (SDCP). We show that, under certain conditions, the trajectory accumulates in the solution set of the SDCP while passing through the set of infeasible but positive definite matrices. In particular, for the monotone semidefinite linear complementarity problem, the trajectory converges to an analytic center of the solution set, provided that there exists a strictly complementary solution. Finally, we propose a globally convergent infeasible-interior-point algorithm for the SDCP. Key words: monotone semidefinite complementarity problem, trajectory, interior point algorithm. Research Report B-312 on Mathematical and Computing Sciences, Department of Mathematical and Computing Sciences, Tokyo Institute of Technology. 1. Introduction. Let $M(n)$ and $S(n)$ denote the class of $n \times n$ real matrices and the class of $n \times n$ symmetric real matrices, respectively. Assume that $A, B \in M(n)$....
Horizontal and Vertical Decomposition in Interior Point Methods for Linear Programs
1993
"... . Corresponding to the linear program: Maximize c T x subject to Ax = a; Bx = b; x 0; we introduce two functions in the penalty parameter t ? 0 and the Lagrange relaxation parameter vector w, ~ f p (t; w) = maxfc T x \Gamma w T (Ax \Gamma a) + t n X j=1 ln x j : Bx = b; x ? 0g (for hor ..."
Abstract

Cited by 7 (2 self)
 Add to MetaCart
Corresponding to the linear program: Maximize $c^T x$ subject to $Ax = a,\ Bx = b,\ x \ge 0$, we introduce two functions in the penalty parameter $t > 0$ and the Lagrange relaxation parameter vector $w$:

$\tilde f_p(t, w) = \max\{\, c^T x - w^T (Ax - a) + t \sum_{j=1}^{n} \ln x_j : Bx = b,\ x > 0 \,\}$ (for horizontal decomposition),

$\tilde f_d(t, w) = \min\{\, a^T w + b^T y - t \sum_{j=1}^{n} \ln z_j : B^T y - z = c - A^T w,\ z > 0 \,\}$ (for vertical decomposition).

For each $t > 0$, $\tilde f_p(t, \cdot)$ and $\tilde f_d(t, \cdot)$ are strictly convex $C^1$ functions with a common minimizer $w(t)$, which converges to an optimal Lagrange multiplier vector $w$ associated with the constraint $Ax = a$ as $t \to 0$, and they enjoy the strong self-concordance property given by Nesterov and Nemirovsky. Based on these facts, we present conceptual algorithms that use Newton's method for tracing the trajectory $\{w(t) : t > 0\}$, and analyze their computational complexity. 1. Introduction. This paper presents...
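In the hypothetical special case with no vertical constraints (B and b empty), the inner maximization defining $\tilde f_p$ separates coordinate-wise and has a closed form, which makes the convexity of $\tilde f_p(t,\cdot)$ in $w$ easy to observe numerically. This is a sketch under that simplifying assumption, not the paper's algorithm.

```python
import numpy as np

def f_p(t, w, A, a, c):
    """Evaluate f~_p(t, w) = max_{x>0} c^T x - w^T(A x - a) + t * sum ln x_j
    when there are no B-constraints: the problem separates, and the
    maximizer is x_j = t / (A^T w - c)_j, valid when A^T w > c."""
    d = A.T @ w - c
    assert np.all(d > 0), "max is finite only when A^T w > c componentwise"
    x = t / d
    return c @ x - w @ (A @ x - a) + t * np.sum(np.log(x))

# toy data: c = 0, single relaxed constraint x1 + x2 = 1; here the
# minimizer of f~_p(t, .) works out to w(t) = 2t
A = np.array([[1.0, 1.0]]); a = np.array([1.0]); c = np.array([0.0, 0.0])
t = 0.1
ws = (0.15, 0.2, 0.3)
vals = [f_p(t, np.array([w]), A, a, c) for w in ws]
# convexity in w: the middle sample point w = 2t = 0.2 attains the minimum
print(vals[1] < vals[0] and vals[1] < vals[2])  # True
```

Tracing the minimizer $w(t)$ as $t \to 0$ is precisely the trajectory-following idea described above, here visible even in one dimension.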
A full-Newton step O(n) infeasible interior-point algorithm for linear optimization
2005
"... We present a primaldual infeasible interiorpoint algorithm. As usual, the algorithm decreases the duality gap and the feasibility residuals at the same rate. Assuming that an optimal solution exists it is shown that at most O(n) iterations suffice to reduce the duality gap and the residuals by the ..."
Abstract

Cited by 7 (4 self)
 Add to MetaCart
We present a primal-dual infeasible interior-point algorithm. As usual, the algorithm decreases the duality gap and the feasibility residuals at the same rate. Assuming that an optimal solution exists, it is shown that at most O(n) iterations suffice to reduce the duality gap and the residuals by the factor 1/e. This implies an O(n log(n/ε)) iteration bound for getting an ε-solution of the problem at hand, which coincides with the best known bound for infeasible interior-point algorithms. The algorithm constructs strictly feasible iterates for a sequence of perturbations of the given problem and its dual problem. A special feature of the algorithm is that it uses only full-Newton steps. Two types of full-Newton steps are used: so-called feasibility steps and usual (centering) steps. Starting at strictly feasible iterates of a perturbed pair, (very) close to its central path, feasibility steps serve to generate strictly feasible iterates for the next perturbed pair. By accomplishing a few centering steps for the new perturbed pair, we obtain strictly feasible iterates close enough to the central path of the new perturbed pair. The algorithm finds an optimal solution or detects infeasibility or unboundedness of the given problem.
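A single full-Newton centering step, the building block alternated with feasibility steps above, can be sketched on toy LP data. The proximity measure below, δ = ½‖v − v⁻¹‖ with v = √(xz/μ), is a common choice in this literature, assumed here rather than quoted from the paper; near the μ-center such a step contracts δ quadratically.

```python
import numpy as np

def proximity(x, z, mu):
    """Proximity to the mu-center: delta = 0.5 * ||v - 1/v||, v = sqrt(xz/mu)."""
    v = np.sqrt(x * z / mu)
    return 0.5 * np.linalg.norm(v - 1.0 / v)

def centering_step(A, x, y, z, mu):
    """Full Newton step (step length 1) toward the mu-center, solving
       A dx = 0,  A^T dy + dz = 0,  Z dx + X dz = mu*e - X Z e."""
    n, m = len(x), A.shape[0]
    K = np.zeros((2 * n + m, 2 * n + m))
    K[:m, :n] = A                       # primal feasibility rows
    K[m:m + n, n:n + m] = A.T           # dual feasibility rows ...
    K[m:m + n, n + m:] = np.eye(n)      # ... A^T dy + dz = 0
    K[m + n:, :n] = np.diag(z)          # linearized complementarity:
    K[m + n:, n + m:] = np.diag(x)      # Z dx + X dz = mu*e - X Z e
    rhs = np.concatenate([np.zeros(m), np.zeros(n), mu - x * z])
    d = np.linalg.solve(K, rhs)
    return x + d[:n], y + d[n:n + m], z + d[n + m:]

# strictly feasible toy pair for A x = 1, x >= 0 and A^T y + z = c
A = np.array([[1.0, 1.0]])
x = np.array([0.5, 0.5]); y = np.array([0.0]); z = np.array([1.0, 2.0])
mu = (x @ z) / len(x)
d0 = proximity(x, z, mu)
x, y, z = centering_step(A, x, y, z, mu)
d1 = proximity(x, z, mu)
print(d1 <= d0 ** 2)  # quadratic contraction toward the mu-center: True
```

The analysis in papers of this type shows that when δ is small enough, the full step (no line search) keeps the iterate strictly feasible, which is what lets the algorithm dispense with damping entirely.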