Results 1–10 of 11
On the Classical Logarithmic Barrier Function Method for a Class of Smooth Convex Programming Problems
, 1990
Abstract

Cited by 11 (4 self)
In this paper we propose a large-step analytic center method for smooth convex programming. The method is a natural implementation of the classical method of centers. It is assumed that the objective and constraint functions fulfil the so-called Relative Lipschitz Condition, with Lipschitz constant M ≥ 0. A great advantage of the method over the existing path-following methods is that the steps can be made long by performing line searches. In our method we do line searches along the Newton direction with respect to a strictly convex potential function if we are far away from the central path. If we are sufficiently close to this path we update a lower bound for the optimal value. We prove that the number of iterations required by the algorithm to converge to an ε-optimal solution is O((1 + M²)√n |ln ε|) or O((1 + M²)n |ln ε|), depending on the updating scheme for the lower bound.
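The line-search idea described in this abstract can be sketched generically: take the Newton direction of a strictly convex potential and damp it with a backtracking search that preserves feasibility. The snippet below is a minimal one-dimensional illustration on the toy barrier-type potential φ(x) = x − ln x (an illustrative stand-in, not the authors' potential function; all names are hypothetical):

```python
import math

def newton_linesearch(phi, dphi, d2phi, x, tol=1e-10, max_iter=50):
    """Minimize a strictly convex 1-D potential by Newton steps,
    each damped by a simple backtracking line search."""
    for _ in range(max_iter):
        g = dphi(x)
        if abs(g) < tol:          # first-order stationarity reached
            break
        step = -g / d2phi(x)      # Newton direction
        t = 1.0
        # backtrack until the trial point is feasible (x > 0) and decreases phi
        while x + t * step <= 0 or phi(x + t * step) > phi(x):
            t *= 0.5
        x += t * step
    return x

# phi(x) = x - ln x is a toy barrier-type potential, minimized at x = 1
phi   = lambda x: x - math.log(x)
dphi  = lambda x: 1 - 1 / x
d2phi = lambda x: 1 / x ** 2
x_min = newton_linesearch(phi, dphi, d2phi, 5.0)  # converges to 1.0
```

Far from the minimizer the full Newton step would leave the feasible region, so the backtracking halves the step; close to the minimizer the full step is accepted and convergence is quadratic.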
Polynomiality of Primal-Dual Affine Scaling Algorithms for Nonlinear Complementarity Problems
, 1995
Abstract

Cited by 10 (4 self)
This paper provides an analysis of the polynomiality of primal-dual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappings that are not monotone. We show that a family of primal-dual affine scaling algorithms generates an approximate solution (given a precision ε) of the nonlinear complementarity problem in a finite number of iterations whose order is a polynomial in n, ln(1/ε) and a condition number. If the mapping is linear, the results in this paper coincide with the ones in [13].
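The affine scaling direction underlying such algorithms can be illustrated in the linear case (which, as the abstract notes, recovers the setting of [13]). The sketch below solves the Newton system for the linear complementarity problem s = Mx + q, x ≥ 0, s ≥ 0, xᵀs = 0, with no centering term; the function name and the tiny example are illustrative, not the paper's method:

```python
import numpy as np

def affine_scaling_direction(M, x, s):
    """Primal-dual affine scaling direction for the linear complementarity
    problem s = Mx + q: linearize the target x_i * s_i -> 0 around (x, s).
    Newton system:  -M dx + ds = 0,   S dx + X ds = -X S e."""
    n = len(x)
    K = np.block([[-M, np.eye(n)],
                  [np.diag(s), np.diag(x)]])
    rhs = np.concatenate([np.zeros(n), -x * s])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]                    # (dx, ds)

# strictly feasible interior point for M = I, q = (-1, -1): s = Mx + q > 0
M = np.eye(2)
x = np.array([2.0, 2.0])
s = M @ x + np.array([-1.0, -1.0])             # s = (1, 1)
dx, ds = affine_scaling_direction(M, x, s)
```

A damped step x + αdx, s + αds (with α chosen to keep both iterates strictly positive) reduces the complementarity products; the wide-neighborhood analysis in the paper governs how large α may be taken.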
Logarithmic Barrier Decomposition Methods for Semi-Infinite Programming
, 1996
Abstract

Cited by 9 (1 self)
A computational study of some logarithmic barrier decomposition algorithms for semi-infinite programming is presented in this paper. The conceptual algorithm is a straightforward adaptation of the logarithmic barrier cutting plane algorithm presented recently by den Hertog et al. to solve semi-infinite programming problems. Decomposition (cutting plane) methods usually use cutting planes to improve the localization of the given problem. In this paper we propose an extension which uses linear cuts to solve large-scale, difficult real-world problems. This algorithm uses both static and (doubly) dynamic enumeration of the parameter space and allows multiple cuts to be added simultaneously for larger/more difficult problems. The algorithm is implemented on both sequential and parallel computers. Implementation issues and parallelization strategies are discussed and encouraging computational results are presented. Keywords: column generation, convex programming, cutting plane met…
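The cut-management loop common to such decomposition methods can be sketched on a deliberately tiny semi-infinite problem, minimize x subject to x ≥ f(s) for every s in a parameter set: the master problem over a finite cut set is trivial here, and the separation step scans a static enumeration of the parameter space for the most violated cut. This illustrates the generic exchange scheme only, not the paper's barrier decomposition algorithm:

```python
import math

def exchange_cutting_planes(f, grid, tol=1e-8):
    """Toy exchange (cutting plane) method for
        minimize x  subject to  x >= f(s) for all s in the parameter set.
    The master problem over the finite cut set is solved exactly (a max);
    separation scans a static enumeration `grid` of the parameter space."""
    cuts = [grid[0]]                        # start from a single cut
    while True:
        x = max(f(s) for s in cuts)         # solve the master problem
        s_worst = max(grid, key=f)          # separation: most violated s
        if f(s_worst) <= x + tol:           # no violated cut remains
            return x, cuts
        cuts.append(s_worst)                # add the violated cut

# illustrative constraint function over s in [0, 1]
f = lambda s: math.sin(3 * s) * math.exp(-s)
grid = [i / 1000 for i in range(1001)]
x_star, cuts = exchange_cutting_planes(f, grid)
```

In a realistic semi-infinite program the master problem is itself an optimization solved by the barrier method, and (doubly) dynamic enumeration refines the grid adaptively; the control flow, however, is the same add-cut/resolve loop.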
A Long Step Barrier Method for Convex Quadratic Programming
 Algorithmica
, 1990
Abstract

Cited by 8 (2 self)
In this paper we propose a long-step logarithmic barrier function method for convex quadratic programming with linear equality constraints. After a reduction of the barrier parameter, a series of long steps along projected Newton directions is taken until the iterate is in the vicinity of the center associated with the current value of the barrier parameter. We prove that the total number of iterations is O(√n L) or O(nL), depending on how the barrier parameter is updated. Key Words: convex quadratic programming, interior point method, logarithmic barrier function, polynomial algorithm. 1 Introduction. Karmarkar's [14] invention of the projective method for linear programming has given rise to active research in interior point algorithms. At this moment, the variants can roughly be categorized into four classes: projective, affine scaling, path-following and potential reduction methods. Researchers have also extended interior point methods to other problems, including convex qu…
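The projected Newton direction used by such barrier methods amounts to a KKT solve. The code below forms the Newton system for the barrier subproblem minimize ½xᵀQx + cᵀx − μ Σᵢ ln xᵢ subject to Ax = b; the equality constraints enter through multipliers, forcing the step into the null space of A. This is a generic sketch under those standing assumptions, not the paper's exact implementation:

```python
import numpy as np

def projected_newton_direction(Q, c, A, x, mu):
    """Newton direction for the barrier subproblem
        minimize 0.5 x'Qx + c'x - mu * sum(log x)   s.t.  A x = b.
    Solving the KKT system keeps the direction in the null space of A,
    so equality feasibility is preserved along the step."""
    n = len(x)
    g = Q @ x + c - mu / x                    # gradient of barrier objective
    H = Q + mu * np.diag(1.0 / x ** 2)        # Hessian of barrier objective
    m = A.shape[0]
    K = np.block([[H, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                            # projected Newton direction

# tiny example: minimize 0.5||x||^2 s.t. x1 + x2 + x3 = 3, barrier on x > 0
Q = np.eye(3)
c = np.zeros(3)
A = np.ones((1, 3))
x = np.array([0.5, 1.0, 1.5])
d = projected_newton_direction(Q, c, A, x, mu=0.1)
```

A long-step method takes repeated damped steps along such directions for a fixed μ until the iterate is near the corresponding center, then reduces μ.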
A Unifying Investigation of Interior-Point Methods for Convex Programming
 Faculty of Mathematics and Informatics, TU Delft, NL-2628 BL
, 1992
Abstract

Cited by 5 (4 self)
In the recent past a number of papers have been written that present low-complexity interior-point methods for different classes of convex programs. The goal of this article is to show that the logarithmic barrier function associated with these programs is self-concordant, and that the analyses of interior-point methods for these programs can thus be reduced to the analysis of interior-point methods with self-concordant barrier functions. Key words: interior-point method, barrier function, dual geometric programming, (extended) entropy programming, primal and dual ℓ_p programming, relative Lipschitz condition, scaled Lipschitz condition, self-concordance. 1 Introduction. The efficiency of a barrier method for solving convex programs strongly depends on the properties of the barrier function used. A key property that is sufficient to prove fast convergence for barrier methods is self-concordance, introduced in [17]. This condition not only allows a proof of polynomial convergen…
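Self-concordance can be checked directly from derivatives: a thrice-differentiable convex function f of one variable is self-concordant (with the standard constant) when |f'''(x)| ≤ 2 f''(x)^{3/2}. The logarithmic barrier −ln x attains this bound with equality, as the numeric check below (an illustration, not taken from the paper) confirms:

```python
def self_concordance_ratio(f2, f3, x):
    """Ratio |f'''(x)| / (2 * f''(x)**1.5); self-concordance with the
    standard constant means this ratio never exceeds 1."""
    return abs(f3(x)) / (2 * f2(x) ** 1.5)

# derivatives of the logarithmic barrier f(x) = -ln x
f2 = lambda x: 1 / x ** 2       # f''(x)
f3 = lambda x: -2 / x ** 3      # f'''(x)

# -ln x attains the self-concordance bound with equality at every x > 0
for x in (0.1, 1.0, 7.5):
    assert abs(self_concordance_ratio(f2, f3, x) - 1.0) < 1e-9
```

This scale-invariant bound on the third derivative is what lets Newton's method be analyzed uniformly across the program classes listed above, independent of problem data.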
A Short Survey on Ten Years Interior Point Methods
, 1995
Abstract

Cited by 3 (0 self)
The introduction of Karmarkar's polynomial algorithm for linear programming (LP) in 1984 has influenced wide areas in the field of optimization. While in the 1980s the emphasis was on developing and implementing efficient variants of interior point methods for LP, the 1990s have shown their applicability to certain structured nonlinear programming and combinatorial problems. We give a historical account of the developments and outline the major contributions to the field in the last decade. An important class of problems to which interior point methods are applicable is semidefinite optimization, which has recently gained much attention. It has many applications in various fields (such as control and system theory, combinatorial optimization, algebra, statistics, and structural design) and can be efficiently solved with interior point methods.
A Potential Reduction Method for a Class of Smooth Convex Programming Problems
, 1990
Abstract

Cited by 2 (1 self)
In this paper we propose a potential reduction method for smooth convex programming. It is assumed that the objective and constraint functions fulfil the so-called Relative Lipschitz Condition, with Lipschitz constant M ≥ 0. The great advantage of this method over the existing path-following methods is that it allows line searches. In our method we do line searches along the Newton direction with respect to a strictly convex potential function if we are far away from the central path. If we are sufficiently close to this path we update a lower bound for the optimal value. We prove that the number of iterations required by the algorithm to converge to an ε-optimal solution is O((1 + M²)√n |ln ε|) or O((1 + M²)n |ln ε|), depending on the updating scheme for the lower bound.
Improving Complexity of Structured Convex Optimization Problems Using Self-Concordant Barriers
, 2001
Abstract

Cited by 2 (0 self)
The purpose of this paper is to provide improved complexity results for several classes of structured convex optimization problems using the theory of self-concordant functions developed in [11]. We describe the classical short-step interior-point method and optimize its parameters in order to provide the best possible iteration bound. We also discuss the necessity of introducing two parameters in the definition of self-concordance, and which one is best to fix. A lemma from [3] is improved, which allows us to review several classes of structured convex optimization problems and improve the corresponding complexity results.
On the Complexity of the Translational-Cuts Algorithm of Burke, Goldstein, Tseng and Ye for Convex Minimax Problems
Abstract
Burke, Goldstein, Tseng and Ye (Ref. 1) have presented an interesting interior point algorithm for a class of smooth convex minimax problems. They have also presented a complexity analysis leading to a worst-case bound on the total work necessary to obtain a solution within a prescribed tolerance. In this paper we present refinements to the analysis of Burke et al. which show that the resulting complexity bound can be worse than those for other algorithms available at the time Ref. 1 was published. Key Words: complexity, minimax optimization, global Newton method, interior point methods, analytic center. Introduction. Let f_i, i = 1, …, n, be real-valued, convex, thrice-differentiable functions defined on the m-dimensional Euclidean space R^m. Define F as the function denoting the pointwise maximum of the f_i, i = 1, …, n: F(x) := max_{i=1,…,n} f_i(x). In Ref. 1, Burke, Goldstein, Tseng and Ye present an interior point algorithm for finding a minimizer x and the min…
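The minimax objective F(x) := max_i f_i(x) is typically handled through its epigraph form, minimize t subject to f_i(x) ≤ t for all i, to which a logarithmic barrier can be attached. The sketch below evaluates F and this standard epigraph barrier on toy quadratics; it illustrates the reformulation only, not Ref. 1's algorithm:

```python
import math

def minimax_barrier(fs, x, t):
    """Logarithmic barrier for the epigraph form of min max_i f_i(x):
        minimize t   s.t.   f_i(x) <= t  for all i.
    B(x, t) = -sum(log(t - f_i(x))); finite only strictly above F(x)."""
    gaps = [t - f(x) for f in fs]
    if min(gaps) <= 0:
        return math.inf                       # (x, t) outside the domain
    return -sum(math.log(g) for g in gaps)

# toy smooth convex pieces; their pointwise maximum is the minimax objective
fs = [lambda x: (x - 1) ** 2,
      lambda x: (x + 1) ** 2,
      lambda x: 0.5 * x ** 2 + 0.5]
F = lambda x: max(f(x) for f in fs)
```

An interior point method drives (x, t) toward the minimax solution by minimizing weighted combinations of t and this barrier; the barrier blows up as t approaches F(x), keeping iterates strictly inside the epigraph.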
Self-Concordant Functions in Structured Convex Optimization
, 2000
Abstract
This paper provides a self-contained introduction to the theory of self-concordant functions [8] and applies it to several classes of structured convex optimization problems. We describe the classical short-step interior-point method and optimize its parameters to provide its best possible iteration bound. We also discuss the necessity of introducing two parameters in the definition of self-concordance, how they react to addition and scaling, and which one is best to fix. A lemma from [2] is improved and allows us to review several classes of structured convex optimization problems and evaluate their algorithmic complexity, using the self-concordance of the associated logarithmic barriers.