Results 1-10 of 39
Semidefinite optimization. Acta Numerica, 2001.
Abstract (Cited by 121, 3 self)
Optimization problems in which the variable is not a vector but a symmetric matrix which is required to be positive semidefinite have been intensely studied in the last ten years. Part of the reason for the interest stems from the applicability of such problems to such diverse areas as designing the strongest column, checking the stability of a differential inclusion, and obtaining tight bounds for hard combinatorial optimization problems. Part also derives from great advances in our ability to solve such problems efficiently in theory and in practice (perhaps “or ” would be more appropriate: the most effective computational methods are not always provably efficient in theory, and vice versa). Here we describe this class of optimization problems, give a number of examples demonstrating its significance, outline its duality theory, and discuss algorithms for solving such problems.
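For reference, the primal-dual pair studied in semidefinite optimization is conventionally written as follows (a standard formulation, assumed here rather than quoted from the survey itself):

```latex
\begin{aligned}
\text{(P)}\qquad & \min_{X \succeq 0} \; \langle C, X \rangle
    && \text{s.t. } \langle A_i, X \rangle = b_i, \quad i = 1, \dots, m, \\
\text{(D)}\qquad & \max_{y,\, S \succeq 0} \; b^{\mathsf{T}} y
    && \text{s.t. } \textstyle\sum_{i=1}^{m} y_i A_i + S = C,
\end{aligned}
```

where $\langle C, X \rangle = \operatorname{tr}(CX)$ and $X \succeq 0$ means $X$ is symmetric positive semidefinite.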
On implementing a primal-dual interior-point method for conic quadratic optimization. Mathematical Programming Ser. B, 2000.
Abstract (Cited by 43, 5 self)
Conic quadratic optimization is the problem of minimizing a linear function subject to the intersection of an affine set and the product of quadratic cones. The problem is a convex optimization problem and has numerous applications in engineering, economics, and other areas of science. Indeed, linear and convex quadratic optimization are special cases. Conic quadratic optimization problems can in theory be solved efficiently using interior-point methods. In particular, it has been shown by Nesterov and Todd that primal-dual interior-point methods developed for linear optimization can be generalized to the conic quadratic case while maintaining their efficiency. Therefore, based on the work of Nesterov and Todd, we discuss an implementation of a primal-dual interior-point method for the solution of large-scale sparse conic quadratic optimization problems. The main features of the implementation are that it is based on a homogeneous and self-dual model, handles the rotated quadratic cone directly, and employs a Mehrotra-type predictor-corrector ...
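The two cones mentioned in the abstract admit simple membership tests. A minimal sketch (the function names and tolerance handling are illustrative, not from the paper):

```python
import math

def in_quadratic_cone(x, tol=1e-12):
    """Membership test for the quadratic (second-order) cone:
    x[0] >= ||x[1:]||_2."""
    return x[0] >= math.sqrt(sum(v * v for v in x[1:])) - tol

def in_rotated_quadratic_cone(x, tol=1e-12):
    """Membership test for the rotated quadratic cone:
    2*x[0]*x[1] >= ||x[2:]||_2^2 with x[0], x[1] >= 0."""
    return (x[0] >= -tol and x[1] >= -tol
            and 2.0 * x[0] * x[1] >= sum(v * v for v in x[2:]) - tol)

# (5, 3, 4) lies on the boundary of the quadratic cone: 5 = sqrt(9 + 16).
print(in_quadratic_cone([5.0, 3.0, 4.0]))          # True
print(in_quadratic_cone([1.0, 3.0, 4.0]))          # False
print(in_rotated_quadratic_cone([1.0, 2.0, 2.0]))  # 2*1*2 = 4 >= 2^2 -> True
```

Handling the rotated cone directly, as the implementation described above does, avoids the linear transformation that would otherwise be needed to map it onto the standard quadratic cone.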
Infeasible-Start Primal-Dual Methods and Infeasibility Detectors for Nonlinear Programming Problems. Mathematical Programming, 1996.
Abstract (Cited by 34, 5 self)
In this paper we present several "infeasible-start" path-following and potential-reduction primal-dual interior-point methods for nonlinear conic problems. These methods try to find a recession direction of the feasible set of a self-dual homogeneous primal-dual problem. The methods under consideration generate an ε-solution for an ε-perturbation of an initial strictly (primal and dual) feasible problem in O(√ν ln(ν/(ε ρ_f))) iterations, where ν is the parameter of a self-concordant barrier for the cone, ε is a relative accuracy, and ρ_f is a feasibility measure. We also discuss the behavior of path-following methods as applied to infeasible problems. We prove that strict infeasibility (primal or dual) can be detected in O(√ν ln(ν/ρ_Δ)) iterations, where ρ_Δ is a primal or dual infeasibility measure.
Conic convex programming and self-dual embedding. Optimization Methods and Software.
Advances in convex optimization: Conic programming. In Proceedings of the International Congress of Mathematicians, 2007.
Abstract (Cited by 14, 0 self)
During the last two decades, major developments in convex optimization have focused on conic programming, primarily on linear, conic quadratic, and semidefinite optimization. Conic programming makes it possible to reveal the rich structure usually possessed by a convex program and to exploit this structure in order to process the program efficiently. In the paper, we overview the major components of the resulting theory (conic duality and primal-dual interior-point polynomial-time algorithms), outline the extremely rich "expressive abilities" of conic quadratic and semidefinite programming, and discuss a number of instructive applications.
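The generic conic program referred to in this overview, together with its dual, takes the standard form (conventional notation, assumed rather than quoted):

```latex
\begin{aligned}
\text{(P)}\qquad & \min_{x} \; \langle c, x \rangle
    && \text{s.t. } Ax = b, \; x \in K, \\
\text{(D)}\qquad & \max_{y,\, s} \; \langle b, y \rangle
    && \text{s.t. } A^{*} y + s = c, \; s \in K^{*},
\end{aligned}
```

where $K$ is a closed convex cone (the nonnegative orthant, quadratic cone, and semidefinite cone give linear, conic quadratic, and semidefinite optimization respectively) and $K^{*} = \{\, s : \langle s, x \rangle \ge 0 \ \text{for all } x \in K \,\}$ is its dual cone.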
A Computational Study of the Homogeneous Algorithm for Large-Scale Convex Optimization, 1997.
Abstract (Cited by 14, 1 self)
Recently the authors have proposed a homogeneous and self-dual algorithm for solving the monotone complementarity problem (MCP) [5]. The algorithm is a single-phase interior-point-type method; nevertheless, it yields either an approximate optimal solution or detects a possible infeasibility of the problem. In this paper we specialize the algorithm to the solution of general smooth convex optimization problems that also possess nonlinear inequality constraints and free variables. We discuss an implementation of the algorithm for large-scale sparse convex optimization. Moreover, we present computational results for solving quadratically constrained quadratic programming and geometric programming problems, where some of the problems contain more than 100,000 constraints and variables. The results indicate that the proposed algorithm is also practically efficient.
Polynomiality of Primal-Dual Affine Scaling Algorithms for Nonlinear Complementarity Problems, 1995.
Abstract (Cited by 11, 4 self)
This paper provides an analysis of the polynomiality of primal-dual interior-point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappings that are not monotone. We show that a family of primal-dual affine scaling algorithms generates an approximate solution (given a precision ε) of the nonlinear complementarity problem in a finite number of iterations whose order is a polynomial of n, ln(1/ε), and a condition number. If the mapping is linear, then the results in this paper coincide with the ones in [13].
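The nonlinear complementarity problem (NCP) analyzed in the abstract has the standard form (a conventional statement, assumed here):

```latex
\text{find } x \in \mathbb{R}^{n} \text{ such that}\qquad
x \ge 0, \qquad F(x) \ge 0, \qquad x^{\mathsf{T}} F(x) = 0,
```

with an ε-approximate solution typically defined by relaxing the complementarity condition to $x^{\mathsf{T}} F(x) \le \varepsilon$, which is the precision parameter appearing in the iteration bound above.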
A New Self-Dual Embedding Method for Convex Programming. Journal of Global Optimization, 2001.
Abstract (Cited by 9, 2 self)
In this paper we introduce a conic optimization formulation for inequality-constrained convex programming, and propose a self-dual embedding model for solving the resulting conic optimization problem. The primal and dual cones in this formulation are characterized by the original constraint functions and their corresponding conjugate functions respectively. Hence they are completely symmetric. This allows for a standard primal-dual path-following approach for solving the embedded problem. Moreover, there are two immediate logarithmic barrier functions for the primal and dual cones. We show that these two logarithmic barrier functions are conjugate to each other. The explicit form of the conjugate functions is in fact not required to be known by the algorithm. An advantage of the new approach is that there is no need to assume an initial feasible solution to start with. To guarantee the polynomiality of the path-following procedure, we may apply the self-concordant barrier theory of Nesterov and Nemirovski. For this purpose, as one application, we prove that the barrier functions constructed this way are indeed self-concordant when the original constraint functions are convex and quadratic. Keywords: Convex Programming, Convex Cones, Self-Dual Embedding, Self-Concordant Barrier Functions.
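The convex conjugate central to this construction is f*(s) = sup_x (s·x − f(x)). A small numerical sketch (the grid-based approximation is illustrative, not from the paper) using the self-conjugate function f(x) = x²/2:

```python
def conjugate(f, s, xs):
    """Numerically approximate the convex conjugate
    f*(s) = sup_x (s*x - f(x)) by maximizing over sample points xs."""
    return max(s * x - f(x) for x in xs)

f = lambda x: 0.5 * x * x                   # f(x) = x^2/2 is self-conjugate
xs = [i / 100.0 for i in range(-500, 501)]  # grid on [-5, 5]

for s in (-2.0, 0.0, 1.0, 3.0):
    approx = conjugate(f, s, xs)
    exact = 0.5 * s * s                     # known closed form f*(s) = s^2/2
    print(f"s={s:+.1f}  f*(s)~{approx:.4f}  exact={exact:.4f}")
```

The match between the numeric and closed-form values illustrates the symmetry the abstract exploits: f** = f for closed convex f, so the primal and dual barriers determine each other even when no explicit formula for the conjugate is available.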
Solving Semidefinite Programs using Preconditioned Conjugate Gradients. Optimization Methods and Software, 2003.
Abstract (Cited by 8, 3 self)
The contribution of this paper is to describe a general technique to solve some classes of large but sparse semidefinite problems via a robust primal-dual interior-point technique which uses an inexact Gauss-Newton approach with a matrix-free preconditioned conjugate gradient method. This approach avoids the ill-conditioning pitfalls that result from symmetrization and from forming the so-called normal equations, while maintaining the primal-dual framework.
Error bounds and limiting behavior of weighted paths associated with the SDP map X^{1/2} S X^{1/2}. SIAM Journal on Optimization, 2005.
Abstract (Cited by 7, 1 self)
This paper studies the limiting behavior of weighted infeasible central paths for semidefinite programming (SDP) obtained from centrality equations of the form X^{1/2} S X^{1/2} = νW, where W is a fixed positive definite matrix and ν > 0 is a parameter, under the assumption that the problem has a strictly complementary primal-dual optimal solution. It is shown that a weighted central path as a function of ν can be extended analytically beyond 0 and hence that the path converges as ν ↓ 0. Characterizations of the limit points of the path and its normalized first-order derivatives are also provided. As a consequence, it is shown that a weighted central path can have two types of behavior: it converges either as Θ(ν) or as Θ(√ν), depending on whether the matrix W on a certain scaled space is block-diagonal or not, respectively. We also derive an error bound on the distance between a point lying in a certain neighborhood of the central path and the set of primal-dual optimal solutions. Finally, in light of the results of this paper, we give a characterization of a sufficient condition proposed by Potra and Sheng which guarantees the superlinear convergence of a class of primal-dual interior-point SDP algorithms.