Results 1–10 of 71
Detecting global optimality and extracting solutions in GloptiPoly
Chapter in D. Henrion, A. Garulli (Editors), Positive Polynomials in Control, Lecture Notes in Control and Information Sciences, 2005
Cited by 82 (12 self)
Abstract: GloptiPoly is a Matlab/SeDuMi add-on to build and solve convex linear matrix inequality (LMI) relaxations of nonconvex optimization problems with multivariate polynomial objective function and constraints, based on the theory of moments. In contrast with the dual sum-of-squares decompositions of positive polynomials, the theory of moments allows one to detect global optimality of an LMI relaxation and to extract globally optimal solutions. In this report, we describe and illustrate the numerical linear algebra algorithm implemented in GloptiPoly for detecting global optimality and extracting solutions. We also mention some related heuristics that could be useful for reducing the number of variables in the LMI relaxations.
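The extraction step described above can be illustrated with a minimal numpy sketch. This is not GloptiPoly's actual code, and the moment matrix below is fabricated for illustration: when a relaxation is exact and the optimal moment matrix has rank one, the minimizer can be read off the first-order moments.

```python
import numpy as np

# Hypothetical illustration: for the monomial basis (1, x1, x2), the moment
# matrix of a point mass at x* is the rank-one matrix v v' with v = (1, x*).
x_star = np.array([2.0, -1.0])
v = np.concatenate(([1.0], x_star))
M = np.outer(v, v)                       # rank-one moment matrix

# Global optimality certificate (in this idealized case): numerical rank one.
rank = np.linalg.matrix_rank(M, tol=1e-8)

# Extraction: normalize by the zeroth moment, then read the first-order
# moments as the minimizer's coordinates.
x_extracted = M[0, 1:] / M[0, 0]
```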
New Results on Quadratic Minimization
2001
Cited by 64 (8 self)
Abstract: In this paper we present several new results on minimizing an indefinite quadratic function under quadratic/linear constraints. The emphasis is placed on the case where the constraints are two quadratic inequalities. This formulation is known as the extended trust region subproblem, and the computational complexity of this problem is still unknown. We consider several interesting cases related to this problem and show that for those cases the corresponding SDP relaxation admits no gap with the true optimal value; consequently, we obtain polynomial-time procedures for solving those special cases of quadratic optimization. For the extended trust region subproblem itself, we introduce a parameterized problem and prove the existence of a trajectory which will lead to an optimal solution. Combining this with a result obtained in the first part of the paper, we propose a polynomial-time solution procedure for the extended trust region subproblem arising from solving nonlinear programs with a single equality constraint.
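For context on the problem family above, here is a sketch of the classical single-constraint trust region subproblem (the one-ball special case, not the paper's two-constraint extension), solved by bisection on the secular equation ||(A + λI)^{-1} b|| = r. The matrices below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def trs(A, b, r, iters=100):
    """min 0.5*x'Ax + b'x  s.t. ||x|| <= r, via bisection on the multiplier."""
    lam_min = np.linalg.eigvalsh(A)[0]
    lo = max(0.0, -lam_min) + 1e-12
    if lam_min > 0:                       # convex case: try the interior minimizer
        x = np.linalg.solve(A, -b)
        if np.linalg.norm(x) <= r:
            return x
    hi = lo + 1.0
    while np.linalg.norm(np.linalg.solve(A + hi * np.eye(len(b)), -b)) > r:
        hi *= 2.0                         # grow until the step fits in the ball
    for _ in range(iters):                # ||x(lam)|| is decreasing in lam
        mid = 0.5 * (lo + hi)
        x = np.linalg.solve(A + mid * np.eye(len(b)), -b)
        if np.linalg.norm(x) > r:
            lo = mid
        else:
            hi = mid
    return np.linalg.solve(A + hi * np.eye(len(b)), -b)

A = np.array([[-1.0, 0.0], [0.0, 2.0]])   # indefinite, so the boundary binds
b = np.array([1.0, 1.0])
x = trs(A, b, 1.0)
```

(The sketch skips the "hard case" where b is orthogonal to the bottom eigenspace.)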
Complex matrix decomposition and quadratic programming
Mathematics of Operations Research, 2007
Cited by 60 (17 self)
Abstract: This paper studies the possibilities of the Linear Matrix Inequality (LMI) characterization of the matrix cones formed by nonnegative complex Hermitian quadratic functions over specific domains in the complex space. In its real-case analog, such studies were conducted in Sturm and Zhang [11]. In this paper it is shown that stronger results can be obtained for the complex Hermitian case. In particular, we show that the matrix rank-one decomposition result of Sturm and Zhang [11] can be strengthened for complex Hermitian matrices. As a consequence, it is possible to characterize several new matrix copositive cones (over specific domains) by means of LMI. As examples of the potential application of the new rank-one decomposition result, we present an upper bound on the lowest rank among all the optimal solutions for a standard complex SDP problem, and offer alternative proofs for a result of Hausdorff [5] and a result of Brickman [3] on the joint numerical range.
A survey of the S-lemma
SIAM Review
Cited by 59 (1 self)
Abstract: In this survey we review the many faces of the S-lemma, a result about the correctness of the S-procedure. The basic idea of this widely used method came from control theory, but it has important consequences in quadratic and semidefinite optimization, convex geometry, and linear algebra as well. These were all active research areas, but as there was little interaction between researchers in these different areas, their results remained mainly isolated. Here we give a unified analysis of the theory by providing three different proofs for the S-lemma and revealing hidden connections with various areas of mathematics. We prove some new duality results and present applications from control theory, error estimation, and computational geometry.
Key words: S-lemma, S-procedure, control theory, nonconvex theorem of alternatives, numerical range, relaxation theory, semidefinite optimization, generalized convexities
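The homogeneous S-lemma the survey treats can be checked numerically. Below is a sketch with a grid search over the multiplier standing in for a proper SDP solve; the matrices A and B are illustrative assumptions, not taken from the survey. The statement: given some x̄ with x̄'Bx̄ > 0, x'Ax ≥ 0 holds whenever x'Bx ≥ 0 if and only if A − λB is positive semidefinite for some λ ≥ 0.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, -1.0]])   # indefinite on its own
B = np.array([[1.0, 0.0], [0.0, -2.0]])   # x'Bx >= 0 is a cone around e1

def s_procedure_multiplier(A, B, lams=np.linspace(0.0, 10.0, 2001)):
    """Return some lam >= 0 certifying A - lam*B >= 0 (PSD), or None."""
    for lam in lams:
        if np.linalg.eigvalsh(A - lam * B)[0] >= -1e-9:
            return lam
    return None

lam = s_procedure_multiplier(A, B)
```

Here λ = 1/2 works: A − B/2 = diag(1/2, 0) is PSD, certifying that x1² − x2² ≥ 0 on the cone x1² ≥ 2x2².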
Strong Duality in Nonconvex Quadratic Optimization with Two Quadratic Constraints
SIAM Journal on Optimization, 2006
Cited by 42 (9 self)
Abstract: We consider the problem of minimizing an indefinite quadratic function subject to two quadratic inequality constraints. When the problem is defined over the complex plane we show that strong duality holds and obtain necessary and sufficient optimality conditions. We then develop a connection between the images of the real and complex spaces under a quadratic mapping, which, together with the results in the complex case, leads to a condition that ensures strong duality in the real setting. Preliminary numerical simulations suggest that for random instances of the extended trust region subproblem, the sufficient condition is satisfied with high probability. Furthermore, we show that the sufficient condition is always satisfied in two classes of nonconvex quadratic problems. Finally, we discuss an application of our results to robust least squares problems.
Bi-Quadratic Optimization over Unit Spheres and Semidefinite Programming Relaxations
2008
Cited by 34 (17 self)
Abstract: This paper studies the so-called bi-quadratic optimization problem over unit spheres: min_{x ∈ R^n, y ∈ R^m} Σ_{i,j,k,l} b_{ijkl} x_i y_j x_k y_l subject to ‖x‖ = ‖y‖ = 1.
New Results on Hermitian Matrix Rank-One Decomposition
2009
Cited by 17 (3 self)
Abstract: In this paper, we present several new rank-one decomposition theorems for Hermitian positive semidefinite matrices, which generalize our previous results in [18, 2]. The new matrix rank-one decomposition theorems appear to have wide applications in theory as well as in practice. On the theoretical side, for example, we show how to further extend some of the classical results, including a lemma due to Yuan [27], the classical results on the convexity of the joint numerical ranges [23, 4], and the so-called Finsler's lemma [9, 4]. On the practical side, we show that the new results can be applied to solve two typical problems in signal processing and communication: one for radar code optimization and the other for robust beamforming. The new matrix decomposition theorems are proven by construction in this paper, and we demonstrate that the constructive procedures can be implemented efficiently, stably, and accurately. The URL of our Matlab programs is given in this paper. We strongly believe that the new decomposition procedures, as a means to solve nonconvex quadratic optimization with a few quadratic constraints, are useful for many other potential engineering applications.
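As background, the plain rank-one decomposition X = Σᵢ pᵢ pᵢ* of a Hermitian PSD matrix can be obtained from its eigendecomposition. This is only a baseline sketch: the theorems in the paper are stronger, choosing the pᵢ so that each rank-one term additionally satisfies prescribed quadratic equations, which this sketch does not attempt.

```python
import numpy as np

# Build a random Hermitian PSD matrix X = Z Z* (illustrative data).
rng = np.random.default_rng(0)
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
X = Z @ Z.conj().T

# eigh handles Hermitian matrices; keep only numerically nonzero eigenvalues.
w, V = np.linalg.eigh(X)
terms = [w[i] * np.outer(V[:, i], V[:, i].conj())
         for i in range(len(w)) if w[i] > 1e-12]
X_rebuilt = sum(terms)                   # recovers X as a sum of rank-one terms
```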
Accelerated Training for Matrix-Norm Regularization: A Boosting Approach
Cited by 16 (4 self)
Abstract: Sparse learning models typically combine a smooth loss with a nonsmooth penalty, such as the trace norm. Although recent developments in sparse approximation have offered promising solution methods, current approaches either apply only to matrix-norm constrained problems or provide suboptimal convergence rates. In this paper, we propose a boosting method for regularized learning that guarantees ɛ accuracy within O(1/ɛ) iterations. Performance is further accelerated by interlacing boosting with fixed-rank local optimization, exploiting a simpler local objective than previous work. The proposed method yields state-of-the-art performance on large-scale problems. We also demonstrate an application to latent multi-view learning, for which we provide the first efficient weak oracle.
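For context on the trace-norm penalty mentioned above (this is not the paper's boosting method): the trace norm's proximal operator is singular-value soft-thresholding, the basic step many trace-norm regularized solvers build on. A minimal sketch with illustrative data:

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: prox of tau * trace norm at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

M = np.outer([1.0, 2.0], [3.0, 4.0])     # rank one; singular value ||u||*||v||
M_shrunk = svt(M, 1.0)                   # shrinks the singular value by 1
```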
On the minimum volume covering ellipsoid of ellipsoids
SIAM Journal on Optimization, 2006
Cited by 16 (2 self)
Abstract: We study the problem of computing a (1+ɛ)-approximation to the minimum volume covering ellipsoid of a given set S, the convex hull of m full-dimensional ellipsoids in R^n. We extend the first-order algorithm of Kumar and Yıldırım that computes an approximation to the minimum volume covering ellipsoid of a finite set of points in R^n, which, in turn, is a modification of Khachiyan's algorithm. For fixed ɛ > 0, we establish a polynomial-time complexity, which is linear in the number of ellipsoids m. In particular, the iteration complexity of our algorithm is identical to that for a set of m points. The main ingredient in our analysis is the extension of polynomial-time complexity of certain subroutines in the algorithm from a set of points to a set of ellipsoids. As a by-product, our algorithm returns a finite "core" set X ⊆ S with the property that the minimum volume covering ellipsoid of X provides a good approximation to that of S. Furthermore, the size of X depends only on the dimension n and ɛ, but not on the number of ellipsoids m. We also discuss the extent to which our algorithm can be used to compute the minimum volume covering ellipsoid of the convex hull of other sets in R^n. We adopt the real number model of computation in our analysis.
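The point-set algorithm the abstract builds on can be sketched compactly. Below is Khachiyan-style coordinate ascent for the minimum volume covering ellipsoid of a finite point set only; the paper's extension to unions of ellipsoids is not attempted here.

```python
import numpy as np

def mvee(P, tol=1e-6, max_iter=10000):
    """Approximate MVEE {x : (x-c)'A(x-c) <= 1} of the rows of P."""
    m, n = P.shape
    Q = np.hstack([P, np.ones((m, 1))]).T        # lift to homogeneous coords
    u = np.full(m, 1.0 / m)                      # weights on the points
    for _ in range(max_iter):
        X = Q @ np.diag(u) @ Q.T
        g = np.einsum('ij,jk,ki->i', Q.T, np.linalg.inv(X), Q)  # leverages
        j = np.argmax(g)
        step = (g[j] - n - 1) / ((n + 1) * (g[j] - 1))
        if step < tol:                           # near-optimal weights
            break
        u = (1 - step) * u                       # shift mass to the worst point
        u[j] += step
    c = P.T @ u                                  # center
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / n
    return A, c

# The MVEE of the unit square's corners is the circle of radius sqrt(2).
P = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
A_e, c_e = mvee(P)
```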
Maximum block improvement and polynomial optimization
 SIAM Journal on Optimization
Cited by 16 (5 self)
Abstract: In this paper we propose an efficient method for solving the spherically constrained homogeneous polynomial optimization problem. The new approach has the following three main ingredients. First, we establish a block coordinate descent-type search method for nonlinear optimization, with the novelty being that we only accept a block update that achieves the maximum improvement; hence the name of our new search method: Maximum Block Improvement (MBI). Convergence of the sequence produced by the MBI method to a stationary point is proven. Second, we establish that maximizing a homogeneous polynomial over a sphere is equivalent to its tensor relaxation problem; thus we can maximize a homogeneous polynomial function over a sphere via its tensor relaxation using the MBI approach. Third, we propose a scheme to reach a KKT point of the polynomial optimization, provided that a stationary solution for the relaxed tensor problem is available. Numerical experiments have shown that our new method works very efficiently: for a majority of the test instances that we have experimented with, the method finds the global optimal solution at a low computational cost.
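A toy analogue of the block updates above: for the bilinear form x'My over unit spheres, alternately optimizing each block exactly reduces to power iteration on M, converging to the top singular pair. (The MBI method in the paper is more general, handling multilinear tensor forms and picking the block with maximum improvement; M here is random illustrative data.)

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 3))

x = np.ones(4) / 2.0                      # unit starting points
y = np.ones(3) / np.sqrt(3.0)
for _ in range(200):
    x = M @ y;  x /= np.linalg.norm(x)    # best unit x for fixed y
    y = M.T @ x; y /= np.linalg.norm(y)   # best unit y for fixed x
value = x @ M @ y                         # converges to the top singular value
```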