Results 1–10 of 25
Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones
, 1998
Abstract
Cited by 736 (3 self)
SeDuMi is an add-on for MATLAB that lets you solve optimization problems with linear, quadratic and semidefiniteness constraints. It is possible to have complex-valued data and variables in SeDuMi. Moreover, large-scale optimization problems are solved efficiently by exploiting sparsity. This paper describes how to work with this toolbox.
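The problem class SeDuMi targets is the conic standard form min c'x s.t. Ax = b, x in a symmetric cone. As a minimal numpy sketch (not SeDuMi itself, which is a MATLAB toolbox), the following solves one small SDP from that class whose optimum is known in closed form: minimizing <C, X> over PSD matrices with unit trace yields the smallest eigenvalue of C.

```python
import numpy as np

# Hedged sketch: one SDP from the conic class SeDuMi handles, namely
#   min <C, X>  s.t.  tr(X) = 1,  X PSD,
# whose optimum is lambda_min(C), attained at the rank-one matrix v v'
# built from the corresponding unit eigenvector v.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
C = (A + A.T) / 2                       # random symmetric cost matrix

vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
v = vecs[:, 0]                          # unit eigenvector for lambda_min
X = np.outer(v, v)                      # rank-one optimal solution

assert np.isclose(np.trace(X), 1.0)               # feasibility: tr(X) = 1
assert np.min(np.linalg.eigvalsh(X)) >= -1e-12    # feasibility: X PSD
opt = np.trace(C @ X)
assert np.isclose(opt, vals[0])                   # value = lambda_min(C)
```

SeDuMi would accept the same data in its own sparse conic format; this sketch only illustrates the problem class, not the toolbox's interface.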
Solving Euclidean Distance Matrix Completion Problems Via Semidefinite Programming
, 1997
Abstract
Cited by 69 (14 self)
Given a partial symmetric matrix A with only certain elements specified, the Euclidean distance matrix completion problem (EDMCP) is to find the unspecified elements of A that make A a Euclidean distance matrix (EDM). In this paper, we follow the successful approach in [20] and solve the EDMCP by generalizing the completion problem to allow for approximate completions. In particular, we introduce a primal-dual interior-point algorithm that solves an equivalent (quadratic objective function) semidefinite programming problem (SDP). Numerical results are included which illustrate the efficiency and robustness of our approach. Our randomly generated problems consistently resulted in low-dimensional solutions when no completion existed.
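The semidefinite connection the paper exploits rests on the classical Schoenberg characterization: D is an EDM exactly when B = -J D J / 2 is positive semidefinite, where J centers the configuration. A small numpy check (illustrative only; the paper's interior-point completion algorithm is not reproduced here):

```python
import numpy as np

# Schoenberg test: D is a Euclidean distance matrix iff B = -J D J / 2
# is PSD, where J = I - (1/n) 11' is the centering projector.  The rank
# of B recovers the embedding dimension of the points.
rng = np.random.default_rng(1)
n, d = 6, 2
P = rng.standard_normal((n, d))                 # points in R^2
G = P @ P.T                                     # Gram matrix
sq = np.diag(G)
D = sq[:, None] + sq[None, :] - 2 * G           # D_ij = ||p_i - p_j||^2

J = np.eye(n) - np.ones((n, n)) / n
B = -J @ D @ J / 2
eigs = np.linalg.eigvalsh(B)

assert eigs.min() > -1e-9                       # B is PSD: D is an EDM
assert np.sum(eigs > 1e-9) == d                 # embedding dimension = 2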
Conic Convex Programming and Self-Dual Embedding
 Optim. Methods Softw
, 1998
Abstract
Cited by 18 (2 self)
How to initialize an algorithm to solve an optimization problem is of great theoretical and practical importance. In the simplex method for linear programming this issue is resolved by either the two-phase approach or using the so-called big-M technique. In the interior-point method, there is a more elegant way to deal with the initialization problem, viz. the self-dual embedding technique proposed by Ye, Todd and Mizuno [30]. For linear programming this technique makes it possible to identify an optimal solution or conclude the problem to be infeasible/unbounded by solving its embedded self-dual problem. The embedded self-dual problem has a trivial initial solution and has the same structure as the original problem. Hence, it eliminates the need to consider the initialization problem at all. In this paper, we extend this approach to solve general conic convex programming, including semidefinite programming. Since a nonlinear conic convex programming problem may lack the so-called stri...
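The self-duality rests on a skew-symmetric block structure. A hedged numpy sketch of the homogeneous (Goldman-Tucker style) system underlying such embeddings for an LP pair min c'x, Ax = b, x >= 0; the full Ye-Todd-Mizuno embedding adds one more row and column to manufacture a strictly feasible starting point, which is omitted here:

```python
import numpy as np

# The block matrix below is skew-symmetric, which is exactly what makes
# the embedded problem its own dual.  (Sketch of the basic homogeneous
# system only; the extra embedding row/column giving a trivial interior
# starting point is not shown.)
rng = np.random.default_rng(2)
m, n = 3, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

Z = np.zeros
M = np.block([
    [Z((m, m)),   A,            -b[:, None]],
    [-A.T,        Z((n, n)),     c[:, None]],
    [b[None, :],  -c[None, :],   Z((1, 1))],
])

assert np.allclose(M, -M.T)    # skew-symmetry => the problem is self-dual
assert M.shape == (m + n + 1, m + n + 1)
```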
Pattern Separation Via Ellipsoids and Conic Programming
, 1998
Abstract
Cited by 9 (0 self)
this document. The first chapter is about mathematical programming. We will start by describing how and why researchers were led to study special types of mathematical programs, namely convex programs and conic programs. We will also provide a detailed discussion about conic duality and give a classification of conic programs. We will then describe what self-scaled cones are and why they are so useful in conic programming. Finally, we will give an overview of what can be modelled using an SQL conic program, keeping in mind our pattern separation problem. Since most of the material in the chapter is standard, many of the proofs are omitted. The second chapter will concentrate on pattern separation. After a short description of the problem, we will successively describe four different separation methods using SQL conic programming. For each method, various properties are investigated. Each algorithm has in fact been successively designed with the objective of eliminating the drawbacks of the previous one, while keeping its good properties. We conclude this chapter with a small section describing the state of the art in pattern separation with ellipsoids. The third chapter reports some computational experiments with our four methods, and provides a comparison with other separation procedures. Finally, we conclude this work by providing a short summary, highlighting the author's personal contribution and giving some interesting perspectives for further research.
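For flavor of what ellipsoidal pattern separation means, here is a deliberately simple moment-based separator (mean plus covariance, i.e. a Mahalanobis-distance test); this is NOT one of the four conic-programming methods developed in the thesis, only a naive baseline of the same geometric idea:

```python
import numpy as np

# Enclose class A in the ellipsoid {x : (x-mu)' S^{-1} (x-mu) <= r2}
# fitted from its sample mean and covariance; a test point is accepted
# iff its squared Mahalanobis distance is within r2.  (Naive baseline,
# not the thesis's SQL-conic separation methods.)
rng = np.random.default_rng(3)
cls_a = rng.standard_normal((50, 2)) * 0.5            # tight cluster at 0
cls_b = rng.standard_normal((50, 2)) * 0.5 + 5.0      # cluster at (5, 5)

mu = cls_a.mean(axis=0)
S_inv = np.linalg.inv(np.cov(cls_a.T))

def maha2(x):
    d = x - mu
    return d @ S_inv @ d

r2 = max(maha2(p) for p in cls_a)         # smallest radius covering A

assert all(maha2(p) <= r2 + 1e-12 for p in cls_a)   # A inside ellipsoid
assert all(maha2(p) > r2 for p in cls_b)            # B stays outside
```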
On sensitivity of central solutions in semidefinite programming
 Math. Program.
, 1998
Abstract
Cited by 8 (2 self)
In this paper we study the properties of the analytic central path of a semidefinite programming problem under perturbation of a set of input parameters. Specifically, we analyze the behavior of solutions on the central path with respect to changes on the right-hand side of the constraints, including the limiting behavior when the central optimal solution is approached. Our results are of interest for the sake of numerical analysis, sensitivity analysis and parametric programming. Under the primal-dual Slater condition and the strict complementarity condition we show that the derivatives of central solutions with respect to the right-hand side parameters converge as the path tends to the central optimal solution. Moreover, the derivatives are bounded, i.e. a Lipschitz constant exists. This Lipschitz constant can be thought of as a condition number for the semidefinite programming problem. It is a generalization of the familiar condition number for linear equation systems and linear programming problems. However, the generalized condition number depends on the right-hand side parameters as well, whereas it is well-known that in the linear programming case the condition number depends only on the constraint matrix. We demonstrate that the existence of strictly complementary solutions is important for the Lipschitz constant to exist. Moreover, we give an example in which the set of right-hand side parameters for which the strict complementarity condition holds is neither open nor closed. This is remarkable since a similar set for which the primal-dual Slater condition holds is always open. Key words: analytic central path, semidefinite programming, sensitivity, condition number.
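The quantity this paper generalizes is the classical condition number kappa(A) = ||A|| ||A^{-1}|| of a linear system, which bounds how strongly right-hand-side perturbations move the solution. A quick numpy reminder of that classical quantity (not the SDP Lipschitz constant itself):

```python
import numpy as np

# kappa(A) = ||A|| * ||A^{-1}||: a well-conditioned system barely
# amplifies right-hand-side perturbations; a nearly singular one
# amplifies them enormously.
A_good = np.eye(3)
A_bad = np.array([[1.0, 0.0],
                  [0.0, 1e-8]])

assert np.isclose(np.linalg.cond(A_good), 1.0)   # perfectly conditioned
assert np.linalg.cond(A_bad) > 1e7               # tiny rhs changes can
                                                 # move the solution by
                                                 # a factor ~1e8
```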
A Long-Step Primal-Dual Algorithm for the Symmetric Programming Problem
 SYSTEMS & CONTROL LETTERS
, 1999
Abstract
Cited by 6 (3 self)
Based on the techniques of Euclidean Jordan algebras, we prove complexity estimates for a long-step primal-dual interior-point algorithm for the optimization problem of the minimization of a linear function on a feasible set obtained as the intersection of an affine subspace and a symmetric cone. This result provides a meaningful illustration of the power of the technique of Euclidean Jordan algebras applied to problems under consideration.
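The algebraic machinery involved can be made concrete on the cone of PSD matrices: the symmetric matrices form a Euclidean Jordan algebra under the product X∘Y = (XY + YX)/2, which is commutative but not associative, satisfying instead the Jordan identity (X∘Y)∘X² = X∘(Y∘X²). A numerical check of both properties:

```python
import numpy as np

# Jordan product on symmetric matrices: X o Y = (XY + YX) / 2.
def jordan(X, Y):
    return (X @ Y + Y @ X) / 2

rng = np.random.default_rng(4)
def sym():
    A = rng.standard_normal((3, 3))
    return (A + A.T) / 2

X, Y = sym(), sym()
X2 = X @ X                    # X o X coincides with the ordinary square

assert np.allclose(jordan(X, Y), jordan(Y, X))           # commutative
assert np.allclose(jordan(jordan(X, Y), X2),
                   jordan(X, jordan(Y, X2)))             # Jordan identity
```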
Optimization Over Symmetric Cones
, 1999
Abstract
Cited by 5 (0 self)
We consider the problem of optimizing a linear function over the intersection of an affine space and a special class of closed, convex cones, namely the symmetric cones over the reals. This problem subsumes linear programming, convex quadratically constrained quadratic programming, and semidefinite programming as special cases. First, we derive some perturbation results for this problem class. Then, we discuss two solution methods: an interior point method capable of delivering highly accurate solutions to problems of modest size, and a first order bundle method which provides solutions of low accuracy, but can handle much larger problems. Finally, we describe an application of semidefinite programming in electronic structure calculations, and give some numerical results on sample problems.
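One concrete instance of the "subsumes" claim: second-order cone constraints embed into semidefinite ones via the arrow matrix, since (t, u) satisfies ||u|| <= t exactly when [[t, u'], [u, tI]] is PSD (its eigenvalues are t - ||u||, t + ||u||, and t). A numpy check:

```python
import numpy as np

# Arrow-matrix embedding of the second-order cone: (t, u) with ||u|| <= t
# corresponds to the PSD matrix [[t, u'], [u, t I]].
def arrow(t, u):
    n = len(u)
    return np.block([[np.array([[t]]), u[None, :]],
                     [u[:, None],      t * np.eye(n)]])

u = np.array([1.0, 1.0])                       # ||u|| = sqrt(2) ~ 1.414

inside = np.linalg.eigvalsh(arrow(2.0, u))     # t = 2 > ||u||: in the cone
outside = np.linalg.eigvalsh(arrow(1.0, u))    # t = 1 < ||u||: outside

assert inside.min() > 0        # PSD  <=>  second-order cone membership
assert outside.min() < 0       # not PSD: (1, u) violates ||u|| <= t
```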
The Gauss-Newton Direction in Semidefinite Programming
, 1998
Abstract
Cited by 5 (3 self)
Primal-dual interior-point methods have proven to be very successful for both linear programming (LP) and, more recently, for semidefinite programming (SDP) problems. Many of the techniques that have been so successful for LP have been extended to SDP. In fact, interior-point methods are currently the only successful techniques for SDP.
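For readers unfamiliar with the Gauss-Newton direction of the title, here it is in its generic nonlinear least-squares form (a small illustrative problem, not the SDP setting of the paper): each step solves the normal equations (J'J) dx = -J'r for the current residual r and Jacobian J.

```python
import numpy as np

# Gauss-Newton on a tiny square system: find the intersection of the
# unit circle with the line x0 = x1.  Because the residual vanishes at
# the solution, convergence is locally quadratic.
def residual(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0,   # unit circle
                     x[0] - x[1]])              # diagonal line

def jacobian(x):
    return np.array([[2 * x[0], 2 * x[1]],
                     [1.0,      -1.0]])

x = np.array([2.0, 0.5])                        # rough starting guess
for _ in range(20):
    r, J = residual(x), jacobian(x)
    x = x + np.linalg.solve(J.T @ J, -J.T @ r)  # Gauss-Newton step

# Converges to the intersection point (1/sqrt(2), 1/sqrt(2)).
assert np.allclose(x, [1 / np.sqrt(2)] * 2, atol=1e-8)
```

The paper studies the analogue of this direction for the optimality system of an SDP; this sketch only fixes the terminology.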
Proving Strong Duality for Geometric Optimization Using a Conic Formulation
, 1999
Abstract
Cited by 5 (1 self)
Geometric optimization is an important class of problems that has many applications, especially in engineering design. In this article, we provide new simplified proofs for the well-known associated duality theory, using conic optimization. After introducing suitable convex cones and studying their properties, we model geometric optimization problems with a conic formulation, which allows us to apply the powerful duality theory of conic optimization and derive the duality results valid for geometric optimization.
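The standard convexifying substitution behind geometric-programming duality is worth recalling: a posynomial g(x) = sum_k c_k * prod_i x_i^{a_ki} becomes, after the change of variables x = exp(y), the convex function log g = logsumexp of affine forms in y. A numerical midpoint check of that convexity (generic example data, not taken from the article):

```python
import numpy as np

# log g(exp(y)) = logsumexp(Aexp @ y + log c) is convex in y; verify the
# midpoint inequality f((y1+y2)/2) <= (f(y1) + f(y2))/2 on random pairs.
c = np.array([1.0, 2.0, 0.5])                  # positive coefficients
Aexp = np.array([[1.0, -2.0],                  # exponent matrix a_ki
                 [0.5,  1.0],
                 [-1.0, 3.0]])

def f(y):
    z = Aexp @ y + np.log(c)
    zmax = z.max()                             # stable logsumexp
    return zmax + np.log(np.exp(z - zmax).sum())

rng = np.random.default_rng(5)
for _ in range(100):
    y1, y2 = rng.standard_normal(2), rng.standard_normal(2)
    assert f((y1 + y2) / 2) <= (f(y1) + f(y2)) / 2 + 1e-12
```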
Error Bounds for Linear Matrix Inequalities
, 1998
Abstract
Cited by 4 (0 self)
For iterative sequences that converge to the solution set of a linear matrix inequality, we show that the distance of the iterates to the solution set is at most O(ε^(2^(−d))). The nonnegative integer d is the so-called degree of singularity of the linear matrix inequality, and ε denotes the amount of constraint violation in the iterate. For infeasible linear matrix inequalities, we show that the minimal norm of ε-approximate primal solutions is at least 1/O(ε^(1/(2^d − 1))), and the minimal norm of ε-approximate Farkas-type dual solutions is at most O(1/ε^(2^d − 1)). As an application of these error bounds, we show that for any bounded sequence of ε-approximate solutions to a semidefinite programming problem, the distance to the optimal solution set is at most O(ε^(2^(−k))), where k is the degree of singularity of the optimal solution set. Keywords: semidefinite programming, error bounds, linear matrix inequality, regularized duality. AMS s...
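A classical two-by-two example shows why such fractional exponents appear (an illustrative instance with degree of singularity d = 1, chosen here for exposition, not taken from the paper): the LMI {X PSD : X_11 = 0} forces the whole first row and column to zero, yet a PSD iterate violating the constraint by ε can carry an off-diagonal entry of size sqrt(ε), so its distance to the solution set is of order ε^(1/2), not ε.

```python
import numpy as np

# PSD iterate with X_11 = eps but X_12 = sqrt(eps): constraint violation
# eps, distance to the solution set {[[0,0],[0,a]] : a >= 0} of order
# sqrt(eps) — the d = 1 case of the paper's O(eps^(2^-d)) bound.
eps = 1e-8
s = np.sqrt(eps)
X = np.array([[eps, s],
              [s,   1.0]])                       # PSD: det = 0, trace > 0

assert np.linalg.eigvalsh(X).min() >= -1e-15     # feasible except X_11=eps
nearest = np.array([[0.0, 0.0],
                    [0.0, 1.0]])                 # projection onto solutions
dist = np.linalg.norm(X - nearest)

assert dist > 1e3 * eps                          # far larger than eps
assert abs(dist / np.sqrt(eps) - np.sqrt(2)) < 1e-3   # dist ~ sqrt(2*eps)
```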