Results 1-10 of 16
Continuation and Path Following
 Preprint, Colorado State University
, 1992
Abstract

Cited by 70 (6 self)
CONTENTS: 1 Introduction; 2 The Basics of Predictor-Corrector Path Following; 3 Aspects of Implementations; 4 Applications; 5 Piecewise-Linear Methods; 6 Complexity; 7 Available Software; References. 1. Introduction. Continuation, embedding or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated works as those of Poincaré (1881-1886), Klein (1882-1883) and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy invariance of degree. The use of deformations to solve nonlinear systems of equations may be traced back at least to Lahaye (1934). The classical embedding methods were the
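The predictor-corrector scheme this survey covers can be sketched on a scalar convex homotopy; the function f, the starting point, and the step counts below are illustrative choices, not taken from the survey.

```python
# Minimal predictor-corrector path following for the convex homotopy
# H(x, t) = (1 - t) * (x - x0) + t * f(x), traced from t = 0 to t = 1.
# f, x0, and the fixed step counts are illustrative choices.

def trace_homotopy(f, df, x0, steps=50, newton_iters=5):
    x, dt = x0, 1.0 / steps
    for k in range(steps):
        t = (k + 1) * dt
        # Euler predictor: dx/dt = -H_t / H_x along the curve H(x(t), t) = 0.
        h_x = (1 - t) + t * df(x)
        h_t = f(x) - (x - x0)
        x += dt * (-h_t / h_x)
        # Newton corrector: pull the predicted point back onto H(., t) = 0.
        for _ in range(newton_iters):
            h = (1 - t) * (x - x0) + t * f(x)
            x -= h / ((1 - t) + t * df(x))
    return x  # at t = 1, H(x, 1) = f(x), so x approximates a root of f

root = trace_homotopy(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

At t = 1 the homotopy reduces to f itself, so the last corrector phase is plain Newton iteration on f, started from a point the path has already carried close to the root.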
An Implementation Of Karmarkar's Algorithm For Linear Programming
 Mathematical Programming
, 1986
Abstract

Cited by 57 (4 self)
. This paper describes the implementation of power series dual affine scaling variants of Karmarkar's algorithm for linear programming. Based on a continuous version of Karmarkar's algorithm, two variants resulting from first and second order approximations of the continuous trajectory are implemented and tested. Linear programs are expressed in an inequality form, which allows for the inexact computation of the algorithm's direction of improvement, resulting in a significant computational advantage. Implementation issues particular to this family of algorithms, such as treatment of dense columns, are discussed. The code is tested on several standard linear programming problems and compares favorably with the simplex code MINOS 4.0. 1. INTRODUCTION We describe in this paper a family of interior point power series affine scaling algorithms based on the linear programming algorithm presented by Karmarkar (1984). Two algorithms from this family, corresponding to first and second order pow...
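As a rough illustration of the first-order (plain affine scaling) member of this family on an inequality-form problem max b·y s.t. aᵢy ≤ cᵢ, the toy one-variable instance below is invented for illustration and is not the paper's implementation.

```python
# First-order dual affine scaling sketch for max b*y s.t. a_i * y <= c_i.
# Tiny invented instance: two constraints 0 <= y <= 1, so the optimum is y = 1.
# The step rule (0.9 of the ratio-test limit) is also an illustrative choice.

def dual_affine_scaling(a, c, b, y, iters=15, gamma=0.9):
    for _ in range(iters):
        s = [ci - ai * y for ai, ci in zip(a, c)]            # positive slacks
        # Slack-scaled direction: dy = b / sum_i (a_i / s_i)^2.
        dy = b / sum((ai / si) ** 2 for ai, si in zip(a, s))
        # Ratio test: step only a fraction of the way to the nearest constraint.
        limits = [si / (ai * dy) for ai, si in zip(a, s) if ai * dy > 0]
        y += gamma * min(limits) * dy
    return y

y = dual_affine_scaling(a=[1.0, -1.0], c=[1.0, 0.0], b=1.0, y=0.5)
```

Each iteration cuts the active slack by a constant factor, so the iterate approaches the optimum y = 1 geometrically while staying strictly interior.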
On the curvature of the central path of linear programming theory
 Foundations of Computational Mathematics
, 2003
Abstract

Cited by 13 (2 self)
Abstract. We prove a linear bound on the average total curvature of the central path of linear programming theory in terms of the number of variables. 1 Introduction. In this paper we study the curvature of the central path of linear programming theory. We establish that for a linear programming problem defined on a compact polytope contained in R^n, the total curvature of the central path is less than or
Degeneracy in Interior Point Methods for Linear Programming
, 1991
Abstract

Cited by 11 (1 self)
... In this paper, we survey the various theoretical and practical issues related to degeneracy in IPM's for linear programming. We survey results which for the most part have already appeared in the literature. Roughly speaking, we shall deal with four topics: the effect of degeneracy on the convergence of IPM's, on the trajectories followed by the algorithms, on numerical performance, and on finding basic solutions.
Further Development on the Interior Algorithm for Convex Quadratic Programming
 Dept. of EngineeringEconomic Systems, Stanford University
, 1987
Abstract

Cited by 8 (1 self)
The interior trust region algorithm for convex quadratic programming is further developed. This development is motivated by the barrier function and the "center" path-following methods, which create a sequence of primal and dual interior feasible points converging to the optimal solution. At each iteration, the gap between the primal and dual objective values (or the complementary slackness value) is reduced at a global convergence ratio (1 - 1/(4√n)), where n is the number of variables in the convex QP problem. A safeguard line search technique is also developed to relax the small-step-size restriction in the original path-following algorithm. Key words: Convex Quadratic Programming, Primal and Dual, Complementary Slackness, Polynomial Interior Algorithm. Abbreviated title: Interior Algorithm for Convex Quadratic Programming. Since Karmarkar proposed the new polynomial algorithm (Karmarkar [19]), several developments have been made to the growing literature on interior a...
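A per-iteration gap ratio of (1 - 1/(4√n)) implies an O(√n log(1/ε)) iteration bound; a back-of-the-envelope check, with illustrative numbers for n, the initial gap, and the target accuracy:

```python
import math

# Iterations needed to shrink the duality gap below eps when each iteration
# multiplies the gap by (1 - 1/(4*sqrt(n))).  All numbers are illustrative.
def iterations_for_gap(n, gap0, eps):
    ratio = 1.0 - 1.0 / (4.0 * math.sqrt(n))
    return math.ceil(math.log(eps / gap0) / math.log(ratio))

# n = 100 variables, initial gap 1, target 1e-6: a few hundred iterations,
# growing only like sqrt(n) as the problem size increases.
k = iterations_for_gap(n=100, gap0=1.0, eps=1e-6)
```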
How good are interior point methods? Klee-Minty cubes tighten iteration-complexity bounds
, 2004
Trust Region Affine Scaling Algorithms for Linearly Constrained Convex and Concave Programs
 Mathematical Programming
, 1996
Abstract

Cited by 7 (1 self)
We study a trust region affine scaling algorithm for solving the linearly constrained convex or concave programming problem. Under a primal nondegeneracy assumption, we prove that every accumulation point of the sequence generated by the algorithm satisfies the first order necessary condition for optimality of the problem. For a special class of convex or concave functions satisfying a certain invariance condition on their Hessians, it is shown that the sequences of iterates and objective function values generated by the algorithm converge R-linearly and Q-linearly, respectively. Moreover, under primal nondegeneracy and for this class of objective functions, it is shown that the limit point of the sequence of iterates satisfies the first and second order necessary conditions for optimality of the problem. Key words: Linearly constrained problem, affine scaling algorithm, trust region method, interior point method. AMS 1991 subject classification: 49M37, 49M45, 65K05, 90C25, 90C26, 90C...
A Polynomial Method of Weighted Centers for Convex Quadratic Programming
 Journal of Information & Optimization Sciences
, 1991
Abstract

Cited by 3 (2 self)
A generalization of the weighted central path-following method for convex quadratic programming is presented. This is done by uniting and modifying the main ideas of the weighted central path-following method for linear programming and the interior point methods for convex quadratic programming. By means of the linear approximation of the weighted logarithmic barrier function and weighted inscribed ellipsoids, 'weighted' trajectories are defined. Each strictly feasible primal-dual point pair defines such a weighted trajectory. The algorithm can start at any strictly feasible primal-dual point pair that defines a weighted trajectory, which the algorithm then follows. This algorithm has the nice feature that it is not necessary to start close to the central path, so additional transformations are not needed. In return, the theoretical complexity of our algorithm depends on the position of the starting point. Polynomiality is proved under the usual mild cond...
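To make "weighted trajectories" concrete: on the segment x1 + x2 = 1, x > 0 with linear cost c·x, the weighted center for barrier parameter μ minimizes c·x - μ Σᵢ wᵢ log xᵢ; different positive weights w give different trajectories, all approaching the optimum as μ → 0. The instance below is an invented illustration, not from the paper.

```python
# Weighted logarithmic-barrier center on {x1 + x2 = 1, x > 0} for the cost
# c1*x1 + c2*x2: parametrize x1 = s and find the stationary point of
#   phi(s) = c1*s + c2*(1-s) - mu*(w1*log(s) + w2*log(1-s))
# by bisection on phi'(s), which is strictly increasing in s.
# The instance, weights, and barrier parameter are illustrative.

def weighted_center(c1, c2, w1, w2, mu, tol=1e-12):
    def dphi(s):
        return (c1 - c2) - mu * (w1 / s - w2 / (1.0 - s))
    lo, hi = tol, 1.0 - tol          # dphi -> -inf at 0+, +inf at 1-
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# As mu -> 0 the weighted center approaches the optimum x = (1, 0) of
# min x1 + 2*x2 on the segment, whatever positive weights are chosen.
s = weighted_center(c1=1.0, c2=2.0, w1=2.0, w2=1.0, mu=1e-6)
```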
Convergence of the Dual Variables for the Primal Affine Scaling Method With Unit Steps in the Homogeneous Case
 J. Optim. Theory Appl
, 1994
Abstract

Cited by 1 (0 self)
In this paper we investigate the behavior of the primal affine scaling method with unit steps when applied to the case that b = 0 and c > 0. We prove that the method is globally convergent and that the dual iterates converge to the analytic center of the dual feasible region. Key words: primal affine scaling method, Karmarkar potential function, analytic center. 1 Introduction. In this paper we deal with the primal affine scaling method for solving the linear programming problem in standard format, given by (P) min{c^T x : Ax = b, x ≥ 0}, where A is a matrix with m rows and n columns, with rank m, and c ∈ R^n, b ∈ R^m. If x is any vector such that Ax = b, then we can reformulate this problem as min{c^T Δx : AΔx = 0, x + Δx ≥ 0}, because if Δx solves this problem, then x + Δx solves the original problem. The difficulty in solving the last problem is caused by the inequality constraint x + Δx ≥ 0. Assuming that x is positive, Dikin [3] propos...