Results 1–10 of 15
On the Classical Logarithmic Barrier Function Method for a Class of Smooth Convex Programming Problems
, 1990
Cited by 14 (4 self)
In this paper we propose a large-step analytic center method for smooth convex programming. The method is a natural implementation of the classical method of centers. It is assumed that the objective and constraint functions fulfil the so-called Relative Lipschitz Condition, with Lipschitz constant M ≥ 0. A great advantage of the method over the existing path-following methods is that the steps can be made long by performing line searches. In our method we do line searches along the Newton direction with respect to a strictly convex potential function if we are far away from the central path. If we are sufficiently close to this path we update a lower bound for the optimal value. We prove that the number of iterations required by the algorithm to converge to an ε-optimal solution is O((1 + M²)√n |ln ε|) or O((1 + M²)n |ln ε|), depending on the updating scheme for the lower bound.
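The mechanics sketched in this abstract, recentering with damped Newton steps on a logarithmic barrier and using a line search to keep iterates strictly feasible, can be illustrated on a toy problem. Everything below (minimizing x1 + x2 over the unit disk, the particular t-schedule, and all tolerances) is an illustrative assumption, not the authors' method:

```python
import math

def barrier_newton(t, x, iters=50):
    # Minimize t*(x1 + x2) - ln(1 - x1^2 - x2^2) by damped Newton steps.
    for _ in range(iters):
        x1, x2 = x
        s = 1.0 - x1*x1 - x2*x2              # slack of the disk constraint
        g = [t + 2*x1/s, t + 2*x2/s]         # gradient of the barrier
        h11 = 2/s + 4*x1*x1/s**2             # Hessian entries (2x2, by hand)
        h22 = 2/s + 4*x2*x2/s**2
        h12 = 4*x1*x2/s**2
        det = h11*h22 - h12*h12
        # Newton direction d = -H^{-1} g via the 2x2 inverse
        d = [-( h22*g[0] - h12*g[1])/det,
             -(-h12*g[0] + h11*g[1])/det]
        # backtracking line search: stay strictly feasible and decrease the barrier
        a = 1.0
        while True:
            y = [x1 + a*d[0], x2 + a*d[1]]
            if y[0]**2 + y[1]**2 < 1.0:
                phi_y = t*(y[0]+y[1]) - math.log(1 - y[0]**2 - y[1]**2)
                phi_x = t*(x1+x2) - math.log(s)
                if phi_y < phi_x + 1e-12:
                    break
            a *= 0.5
            if a < 1e-12:
                return x
        x = y
    return x

# Classical barrier scheme: increase t, re-center with Newton each time.
x = [0.0, 0.0]
for t in [1, 10, 100, 1000]:
    x = barrier_newton(t, x)
print(x)  # approaches (-1/sqrt(2), -1/sqrt(2)), the minimizer of x1+x2 on the disk
```

The line search is what makes the large steps possible: a pure short-step method would instead restrict t-updates so that a single full Newton step stays in a small neighborhood of the central path.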
Polynomiality of Primal-Dual Affine Scaling Algorithms for Nonlinear Complementarity Problems
, 1995
Cited by 12 (4 self)
This paper provides an analysis of the polynomiality of primal-dual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappings that are not monotone. We show that a family of primal-dual affine scaling algorithms generates an approximate solution (given a precision ε) of the nonlinear complementarity problem in a finite number of iterations whose order is a polynomial of n, ln(1/ε) and a condition number. If the mapping is linear, then the results in this paper coincide with the ones in [13].
Logarithmic Barrier Decomposition Methods for Semi-Infinite Programming
, 1996
Cited by 10 (1 self)
A computational study of some logarithmic barrier decomposition algorithms for semi-infinite programming is presented in this paper. The conceptual algorithm is a straightforward adaptation of the logarithmic barrier cutting plane algorithm, presented recently by den Hertog et al., to solve semi-infinite programming problems. Usually decomposition (cutting plane) methods use cutting planes to improve the localization of the given problem. In this paper we propose an extension which uses linear cuts to solve large-scale, difficult real-world problems. This algorithm uses both static and (doubly) dynamic enumeration of the parameter space and allows multiple cuts to be added simultaneously for larger/difficult problems. The algorithm is implemented on both sequential and parallel computers. Implementation issues and parallelization strategies are discussed and encouraging computational results are presented. Keywords: column generation, convex programming, cutting plane methods ...
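The cutting-plane idea for semi-infinite programming can be sketched on a one-variable toy instance: keep a finite working set of parameter values, solve the relaxed problem over that set, then ask a separation oracle for the most violated parameter. The instance below (min x subject to x ≥ sin(t) for all t in [0, 2π]) and the grid-based oracle are illustrative assumptions, not the algorithm of den Hertog et al.:

```python
import math

# Semi-infinite toy problem: minimize x subject to x >= sin(t) for all t in [0, 2*pi].
def solve_sip(tol=1e-6, max_iters=50):
    working = [0.0]                                   # finite working subset of T
    grid = [2*math.pi*k/1000 for k in range(1001)]    # fine enumeration of T
    x = 0.0
    for _ in range(max_iters):
        # Relaxed problem over the working set is trivial for this instance:
        # the smallest feasible x is the largest active right-hand side.
        x = max(math.sin(t) for t in working)
        # Separation oracle: most violated parameter value over the grid.
        t_star = max(grid, key=lambda t: math.sin(t) - x)
        if math.sin(t_star) - x <= tol:
            return x
        working.append(t_star)                        # add the cut and re-solve
    return x

print(solve_sip())  # ~1.0, the supremum of sin on [0, 2*pi]
```

The "multiple cuts" variant in the abstract would append several of the most violated grid points per round instead of one, which typically trades more work per relaxed problem for fewer outer iterations.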
A Long Step Barrier Method for Convex Quadratic Programming
 ALGORITHMICA
, 1990
Cited by 10 (2 self)
In this paper we propose a long-step logarithmic barrier function method for convex quadratic programming with linear equality constraints. After a reduction of the barrier parameter, a series of long steps along projected Newton directions are taken until the iterate is in the vicinity of the center associated with the current value of the barrier parameter. We prove that the total number of iterations is O(√n L) or O(nL), depending on how the barrier parameter is updated.
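A minimal sketch of this barrier scheme, assuming a two-variable toy QP (min ½‖x‖² s.t. x1 + x2 = 2, x > 0) in which projecting the Newton direction onto the null space of the equality constraint reduces the centering problem to one dimension; the μ-schedule and tolerances are illustrative, not the paper's:

```python
def center_qp(mu, a, iters=50):
    # x(a) = (1.5 + a, 0.5 - a) parameterizes the feasible line {x : x1 + x2 = 2};
    # minimize 0.5*(x1^2 + x2^2) - mu*(ln x1 + ln x2) along it by damped Newton.
    for _ in range(iters):
        x1, x2 = 1.5 + a, 0.5 - a
        g = x1 - x2 - mu/x1 + mu/x2          # derivative along the null space of A
        h = 2 + mu/x1**2 + mu/x2**2          # second derivative (always positive)
        d = -g/h                             # projected Newton direction
        step = 1.0
        while not (-1.5 < a + step*d < 0.5): # halve until both coordinates stay > 0
            step *= 0.5
        a += step*d
        if abs(g) < 1e-10:                   # close enough to the mu-center
            break
    return a

a = 0.0
for mu in [1.0, 0.1, 0.01, 0.001]:           # reductions of the barrier parameter
    a = center_qp(mu, a)
x = (1.5 + a, 0.5 - a)
print(x)  # approaches (1, 1), the solution of min 0.5*||x||^2 s.t. x1 + x2 = 2
```

The "long step" aspect is that after each μ-reduction several damped Newton steps are taken until the iterate is again near the μ-center, rather than restricting μ-updates so a single step suffices.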
Convergence property of the Iri-Imai algorithm for some smooth convex programming problems
 Journal of Optimization Theory and Applications
, 1994
Cited by 7 (5 self)
In this paper, the Iri-Imai algorithm for solving linear and convex quadratic programming is extended to solve some other smooth convex programming problems. The globally linear convergence rate of this extended algorithm is proved, under the condition that the objective and constraint functions satisfy a certain type of convexity (called harmonic convexity in this paper). A characterization of this convexity condition is given. In Ref. 14, the same convexity condition is used to prove the convergence of a path-following algorithm. The Iri-Imai algorithm is a natural generalization of the original Newton algorithm to constrained convex programming. Other known convergent interior point algorithms for smooth convex programming are mainly based on the path-following approach.
A Unifying Investigation of Interior-Point Methods for Convex Programming
 FACULTY OF MATHEMATICS AND INFORMATICS, TU DELFT, NL-2628 BL
, 1992
Cited by 5 (4 self)
In the recent past a number of papers were written that present low-complexity interior-point methods for different classes of convex programs. The goal of this article is to show that the logarithmic barrier function associated with these programs is self-concordant, and that the analyses of interior-point methods for these programs can thus be reduced to the analysis of interior-point methods with self-concordant barrier functions.
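The key property invoked here can be checked numerically in one dimension: the logarithmic barrier f(x) = -ln x satisfies the defining self-concordance inequality |f'''(x)| ≤ 2 f''(x)^{3/2}, with equality for every x > 0. The script below is a sanity check of this identity, not part of the paper:

```python
# Self-concordance of the 1-D log barrier: f(x) = -ln x has
# f''(x) = 1/x^2 and f'''(x) = -2/x^3, so |f'''(x)| = 2 * f''(x)**1.5 exactly.
def self_concordance_ratio(x):
    f2 = 1.0 / x**2           # second derivative of -ln x
    f3 = -2.0 / x**3          # third derivative of -ln x
    return abs(f3) / f2**1.5  # should equal 2 for every x > 0

for x in [0.1, 1.0, 7.5]:
    print(x, self_concordance_ratio(x))  # ratio is 2 (up to rounding) at each x
```

Because the inequality is scale-free in x, it holds uniformly on (0, ∞), which is exactly what makes Newton's method on such barriers analyzable independently of where the iterate sits.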
A Short Survey on Ten Years Interior Point Methods
, 1995
Cited by 4 (0 self)
The introduction of Karmarkar's polynomial algorithm for linear programming (LP) in 1984 has influenced wide areas in the field of optimization. While in the 1980s the emphasis was on developing and implementing efficient variants of interior point methods for LP, the nineties have shown applicability to certain structured nonlinear programming and combinatorial problems. We will give a historical account of the developments and outline the major contributions to the field in the last decade. An important class of problems to which interior point methods are applicable is semidefinite optimization, which has recently gained much attention. It has many applications in various fields (such as control and system theory, combinatorial optimization, algebra, statistics, and structural design) and can be efficiently solved with interior point methods.
Improving Complexity of Structured Convex Optimization Problems Using Self-Concordant Barriers
, 2001
Cited by 3 (0 self)
The purpose of this paper is to provide improved complexity results for several classes of structured convex optimization problems using the theory of self-concordant functions developed in [11]. We describe the classical short-step interior-point method and optimize its parameters in order to provide the best possible iteration bound. We also discuss the necessity of introducing two parameters in the definition of self-concordancy, and which one is best to fix. A lemma from [3] is improved, which allows us to review several classes of structured convex optimization problems and improve the corresponding complexity results.
A potential reduction variant of Renegar's short-step path-following method for linear programming
, 1990
Cited by 3 (2 self)
In this paper we propose a new polynomial potential reduction method for linear programming, which can also be seen as a large-step path-following method. In our method we do an (approximate) line search along the Newton direction with respect to Renegar's strictly convex potential function if the iterate is far away from the central trajectory. If the iterate lies close to the trajectory we update the lower bound for the optimal value. Depending on this updating scheme, the iteration bound can be proved to be O(√n L) or O(nL). Our method differs from the recently published potential reduction methods in the choice of the potential function and the search direction.
A Potential Reduction Method for a Class of Smooth Convex Programming Problems
, 1990
Cited by 2 (1 self)
In this paper we propose a potential reduction method for smooth convex programming. It is assumed that the objective and constraint functions fulfil the so-called Relative Lipschitz Condition, with Lipschitz constant M ≥ 0. The great advantage of this method over the existing path-following methods is that it allows line searches. In our method we do line searches along the Newton direction with respect to a strictly convex potential function if we are far away from the central path. If we are sufficiently close to this path we update a lower bound for the optimal value. We prove that the number of iterations required by the algorithm to converge to an ε-optimal solution is O((1 + M²)√n |ln ε|) or O((1 + M²)n |ln ε|), depending on the updating scheme for the lower bound.