Results 1–7 of 7
Copositive Relaxation for General Quadratic Programming
 Optim. Methods Softw., 1998
Abstract
Cited by 15 (2 self)
We consider general, typically nonconvex, Quadratic Programming Problems. The semidefinite relaxation proposed by Shor provides bounds on the optimal solution, but it does not always provide sufficiently strong bounds if linear constraints are also involved. To get rid of the linear side constraints, another, stronger convex relaxation is derived. This relaxation uses copositive matrices. Special cases are discussed for which both relaxations are equal. At the end of the paper, the complexity and solvability of the relaxations are discussed.
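For concreteness, Shor's relaxation for a linearly constrained quadratic program can be sketched as follows (the standard form and notation here are illustrative, not necessarily the paper's):

```latex
% Nonconvex QP (illustrative standard form):
%   min  x^T Q x + 2 c^T x   s.t.  A x = b,  x >= 0.
% Shor's semidefinite relaxation replaces the rank-one matrix x x^T
% by a matrix variable X constrained only to satisfy X >= x x^T:
\min_{x,\,X} \;\; \langle Q, X\rangle + 2\, c^\top x
\quad\text{s.t.}\quad
A x = b,\;\; x \ge 0,\;\;
\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0 .
% The stronger relaxation in the paper involves copositive matrices:
% M is copositive iff y^T M y >= 0 for every y >= 0. Every PSD matrix
% is copositive, so admitting copositive matrices in the dual enlarges
% the dual feasible set and can only improve the resulting bound.
```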
A feasible BFGS interior point algorithm for solving strongly convex minimization problems
 SIAM J. Optim., 2000
Abstract
Cited by 13 (1 self)
We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists of computing approximate solutions of the optimality conditions perturbed by a sequence of positive parameters µ converging to zero. We prove that it converges q-superlinearly for each fixed µ. We also show that it is globally convergent to the analytic center of the primal-dual optimal set when µ tends to 0 and strict complementarity holds.
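As a sketch of what "optimality conditions perturbed by µ" typically means in this setting, consider an inequality-constrained problem min f(x) s.t. c(x) ≥ 0 (my notation, not the paper's exact formulation):

```latex
% Perturbed KKT system for  min f(x)  s.t.  c(x) >= 0,
% with multipliers lambda and barrier parameter mu > 0:
\nabla f(x) - \nabla c(x)\,\lambda = 0, \qquad
\lambda_i\, c_i(x) = \mu \quad (i = 1,\dots,m), \qquad
c(x) > 0, \;\; \lambda > 0 .
% Each outer iteration solves this system approximately (here via
% BFGS quasi-Newton steps); letting mu -> 0 drives the complementarity
% products lambda_i c_i(x) to zero, recovering the true KKT conditions.
```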
Self-regular proximities and new search directions for linear and semidefinite optimization
 Mathematical Programming, 2000
Abstract
Cited by 8 (5 self)
In this paper, we first introduce the notion of self-regular functions. Various appealing properties of self-regular functions are explored and we also discuss the relation between self-regular functions and the well-known self-concordant functions. Then we use such functions to define self-regular proximity measures for path-following interior point methods for solving linear optimization (LO) problems. Any self-regular proximity measure naturally defines a primal-dual search direction. In this way a new class of primal-dual search directions for solving LO problems is obtained. Using the appealing properties of self-regular functions, we prove that these new large-update path-following methods for LO enjoy a polynomial, O(n^{(q+1)/(2q)} log(n/ε)), iteration bound, where q ≥ 1 is the so-called barrier degree of the self-regular proximity measure underlying the algorithm. When q increases, this bound approaches the best known complexity bound for interior point methods, namely O(√n log(n/ε)). Our unified analysis also provides the O(√n log(n/ε)) best known iteration bound for small-update IPMs. At each iteration, we need only to solve one linear system. As a byproduct of our results, we remove some limitations of the algorithms presented in [24] and improve their complexity as well. An extension of these results to semidefinite optimization (SDO) is also discussed.
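A sketch of how a proximity measure induces a primal-dual search direction in the standard kernel-function framework (my notation; the paper's definitions are more general):

```latex
% LO central path in scaled variables: v_i = sqrt(x_i s_i / mu).
% Given a proximity measure Psi(v) = sum_i psi(v_i) built from a
% kernel function psi, the scaled primal-dual direction (d_x, d_s)
% is defined through one linear system per iteration:
\bar{A}\, d_x = 0, \qquad
d_s = \bar{A}^\top \Delta y, \qquad
d_x + d_s = -\nabla \Psi(v) .
% The classical logarithmic-barrier direction corresponds to the
% kernel psi(t) = (t^2 - 1)/2 - ln t, since then
% -psi'(v_i) = 1/v_i - v_i.
```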
A Short Survey on Semidefinite Programming
 In Ten Years LNMB, Ph.D. Research and Graduate Courses of the Dutch Network of Operations Research, 1997
Abstract
Cited by 1 (0 self)
Semidefinite programming (SDP) is one of the fastest developing branches of mathematical programming. The reason is twofold: efficient solution algorithms for SDP have come to light in the past few years, and SDP finds applications in combinatorial optimization and engineering. In this short survey we show how SDP duality theory can be used to prove classical results, and review the development of interior point algorithms for SDP.

Key words: interior-point method, semidefinite programming

1 Introduction

One could easily be led to believe that the field of semidefinite programming (SDP) originated in this decade. A glance at a bibliography of SDP papers indeed indicates an explosion of research effort, starting around 1991. A closer look reveals that interest in this class of problems is somewhat older, and dates back to the 1960s (see e.g. [6]). A paper on SDP from 1981 is descriptively named Linear Programming with Matrix Variables [11], and this apt title may be the best way to i...
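The SDP primal-dual pair underlying the duality theory surveyed here can be stated in standard form:

```latex
% Primal, over symmetric matrices X:
(P)\quad \min_X \; \langle C, X\rangle
\quad\text{s.t.}\quad \langle A_i, X\rangle = b_i \;\;(i = 1,\dots,m),
\quad X \succeq 0,
% and its Lagrangian dual:
(D)\quad \max_{y,\,S} \; b^\top y
\quad\text{s.t.}\quad \sum_{i=1}^m y_i A_i + S = C, \quad S \succeq 0 .
% Weak duality: for any feasible pair,
%   <C,X> - b^T y = <X,S> >= 0,
% since the inner product of two PSD matrices is nonnegative.
```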
An Information Geometric Approach to Polynomial-time Interior-point Algorithms — Complexity Bound via Curvature Integral —
 2007
Abstract
Cited by 1 (0 self)
In this paper, we study polynomial-time interior-point algorithms in view of information geometry. Information geometry is a differential geometric framework which has been successfully applied to statistics, learning theory, signal processing, etc. We consider the information geometric structure for conic linear programs introduced by self-concordant barrier functions, and develop a precise iteration-complexity estimate of the polynomial-time interior-point algorithm based on an integral of the (embedding) curvature of the central trajectory in a rigorous differential geometrical sense. We further study implications of the theory applied to classical linear programming, and establish a surprising link to the strong "primal-dual curvature" integral bound established by Monteiro and Tsuchiya, which is based on the work of Vavasis and Ye on the layered-step interior-point algorithm. By using these results, we can show that the total embedding curvature of the central trajectory, i.e., the aforementioned integral over the whole central trajectory, is bounded by O(n^{3.5} log(χ̄*_A + n)), where χ̄*_A is a condition number of the coefficient matrix A and n is the number of nonnegative variables. In particular, the integral is bounded by O(n^{4.5} m) for combinatorial linear programs including network flow problems, where m is the number of constraints. We also provide a complete differential-geometric characterization of the primal-dual curvature in the primal-dual algorithm. Finally, in view of this integral bound, we observe that the primal (or dual) interior-point algorithm requires fewer iterations than the primal-dual interior-point algorithm, at least in the case of linear programming.
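Schematically, the object being measured is the central trajectory of the conic program and an integral of its curvature (my schematic notation; the paper defines curvature in a precise information-geometric sense):

```latex
% Central trajectory of  min c^T x  over an affine slice of a cone K,
% defined via a self-concordant barrier F for K:
x(t) \;=\; \operatorname*{arg\,min}_{x}\; \bigl( t\, c^\top x + F(x) \bigr),
\qquad t > 0 .
% The iteration-complexity estimate is governed by an integral of the
% (embedding) curvature of this curve along the path, of the form
\int_{t_0}^{t_1} \bigl\| \gamma''(t) \bigr\|\, \mathrm{d}t ,
% where gamma parametrizes the trajectory and the norm and second
% derivative are taken with respect to the paper's geometric structure.
```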