A semidefinite framework for trust region subproblems with applications to large scale minimization
Math. Programming, 1997
Cited by 59 (8 self)
This is an abbreviated revision of the University of Waterloo research report CORR 94-32.
A survey of the S-lemma
 SIAM Review
Cited by 26 (0 self)
Abstract. In this survey we review the many faces of the S-lemma, a result about the correctness of the S-procedure. The basic idea of this widely used method came from control theory, but it has important consequences in quadratic and semidefinite optimization, convex geometry, and linear algebra as well. These were all active research areas, but as there was little interaction between researchers in these different areas, their results remained mainly isolated. Here we give a unified analysis of the theory by providing three different proofs for the S-lemma and revealing hidden connections with various areas of mathematics. We prove some new duality results and present applications from control theory, error estimation, and computational geometry.

Key words. S-lemma, S-procedure, control theory, nonconvex theorem of alternatives, numerical range, relaxation theory, semidefinite optimization, generalized convexities
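For context, the classical form of the result this survey treats can be stated as follows (a standard textbook statement of Yakubovich's S-lemma, reproduced for reference, not quoted from the paper):

```latex
\textbf{S-lemma (Yakubovich).} Let $f,g:\mathbb{R}^n\to\mathbb{R}$ be quadratic
functions, and suppose $g(\bar x)>0$ for some $\bar x$ (Slater condition). Then
\[
  f(x)\ge 0 \quad \text{whenever } g(x)\ge 0
\]
holds if and only if
\[
  \exists\,\lambda\ge 0:\quad f(x)-\lambda g(x)\ge 0 \quad \text{for all } x\in\mathbb{R}^n.
\]
```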
Quadratic matrix programming
 SIAM J. Optim
Cited by 18 (2 self)
We introduce and study a special class of nonconvex quadratic problems in which the objective and constraint functions have the form f(X) = Tr(X^T A X) + 2 Tr(B^T X) + c, where X ∈ R^{n×r}. The latter formulation is termed quadratic matrix programming (QMP) of order r. We construct a specially devised semidefinite relaxation (SDR) and dual for the QMP problem and show that under some mild conditions strong duality holds for QMP problems with at most r constraints. Using a result on the equivalence of two characterizations of the nonnegativity property of quadratic functions of the above form, we are able to compare the constructed SDR and dual problems to other known SDR and dual formulations of the problem. An application to robust least squares problems is discussed.
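The quadratic matrix function above is easy to evaluate directly; a minimal pure-Python sketch (the matrices and the example instance are our own illustration, not from the paper):

```python
# Evaluating the quadratic matrix (QM) function
# f(X) = Tr(X^T A X) + 2 Tr(B^T X) + c  for small dense matrices.
# A is n x n, B and X are n x r, c is a scalar; matrices are lists of rows.

def trace(M):
    """Trace of a square matrix."""
    return sum(M[i][i] for i in range(len(M)))

def matmul(P, Q):
    """Product of a p x q and a q x s matrix."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def qm_value(A, B, c, X):
    """f(X) = Tr(X^T A X) + 2 Tr(B^T X) + c."""
    Xt = transpose(X)
    return trace(matmul(matmul(Xt, A), X)) + 2 * trace(matmul(Xt, B)) + c

# Example: A = I_2, B = 0, c = 0, X = [[1], [2]] gives Tr(X^T X) = 5.
A = [[1, 0], [0, 1]]
B = [[0], [0]]
X = [[1], [2]]
print(qm_value(A, B, 0.0, X))  # 5.0
```

For r = 1 this reduces to an ordinary quadratic function x^T A x + 2 b^T x + c, which is the sense in which QMP generalizes classical quadratic programming.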
Strong Duality in Nonconvex Quadratic Optimization with Two Quadratic Constraints
 SIAM Journal on Optimization
Cited by 18 (10 self)
Abstract. We consider the problem of minimizing an indefinite quadratic function subject to two quadratic inequality constraints. When the problem is defined over the complex plane, we show that strong duality holds and obtain necessary and sufficient optimality conditions. We then develop a connection between the images of the real and complex spaces under a quadratic mapping, which, together with the results in the complex case, leads to a condition that ensures strong duality in the real setting. Preliminary numerical simulations suggest that for random instances of the extended trust region subproblem, the sufficient condition is satisfied with high probability. Furthermore, we show that the sufficient condition is always satisfied in two classes of nonconvex quadratic problems. Finally, we discuss an application of our results to robust least squares problems.
Finding a global optimal solution for a quadratically constrained fractional quadratic problem with applications to the regularized total least squares
 SIAM J. Matrix Anal. Appl
Cited by 12 (5 self)
Abstract. We consider the problem of minimizing a fractional quadratic problem involving the ratio of two indefinite quadratic functions, subject to a two-sided quadratic form constraint. This formulation is motivated by the so-called regularized total least squares (RTLS) problem. A key difficulty with this problem is its nonconvexity, and all currently known methods to solve it are guaranteed only to converge to a point satisfying first-order necessary optimality conditions. We prove that a global optimal solution to this problem can be found by solving a sequence of very simple convex minimization problems parameterized by a single parameter. As a result, we derive an efficient algorithm that produces an ε-global optimal solution in a computational effort of O(n^3 log ε^{-1}). The algorithm is tested on problems arising from the inverse Laplace transform and image deblurring. Comparison to other well-known RTLS solvers illustrates the attractiveness of our new method.

Key words. regularized total least squares, fractional programming, nonconvex quadratic optimization, convex programming
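The idea of reducing a fractional problem to a one-parameter family of simpler subproblems can be illustrated on a toy scalar instance. This is a generic Dinkelbach-style sketch under our own choice of f and g, not the authors' algorithm: the optimal ratio is the root of φ(λ) = min_x [f(x) − λ g(x)], located by bisection.

```python
import math

# Toy illustration: solve min_x f(x)/g(x) with
#   f(x) = (x - 1)^2 + 1,  g(x) = x^2 + 1,  x in R,
# by bisecting on lambda in phi(lam) = min_x [f(x) - lam * g(x)].
# For lam < 1 the inner problem is the unconstrained convex quadratic
#   (1 - lam) x^2 - 2 x + (2 - lam).

def phi(lam):
    """min over x of (1 - lam) x^2 - 2 x + (2 - lam), valid for lam < 1."""
    a, b, c = 1.0 - lam, -2.0, 2.0 - lam
    return c - b * b / (4.0 * a)   # vertex value of a convex parabola

def ratio_min(lo=0.0, hi=0.999, tol=1e-12):
    """phi is strictly decreasing in lam; its root is the optimal ratio."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0:
            lo = mid      # the optimal ratio is still above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Analytic optimum: lam* solves (2 - lam)(1 - lam) = 1, i.e. lam* = (3 - sqrt(5))/2.
print(abs(ratio_min() - (3 - math.sqrt(5)) / 2) < 1e-9)  # True
```

In the paper's setting each inner problem is a simple convex minimization rather than a scalar parabola, but the single-parameter search structure is the same.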
On the solution of the Tikhonov regularization of the total least squares problem
 SIAM J. Optim
Cited by 11 (6 self)
Abstract. Total least squares (TLS) is a method for treating an overdetermined system of linear equations Ax ≈ b, where both the matrix A and the vector b are contaminated by noise. Tikhonov regularization of the TLS (TRTLS) leads to an optimization problem of minimizing the sum of a fractional quadratic function and a quadratic function. As such, the problem is nonconvex. We show how to reduce the problem to a single-variable minimization of a function G over a closed interval. Computing a value and a derivative of G consists of solving a single trust region subproblem. For the special case of regularization with a squared Euclidean norm, we show that G is unimodal and provide an alternative algorithm, which requires only one spectral decomposition. A numerical example is given to illustrate the effectiveness of our method.
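A unimodal single-variable function over a closed interval can be minimized without derivatives by golden-section search. The sketch below uses a toy stand-in for G (the actual TRTLS function would require a trust region solve per evaluation, which is not reproduced here):

```python
import math

# Golden-section search: derivative-free minimization of a unimodal
# function G over a closed interval [a, b].

def golden_section_min(G, a, b, tol=1e-8):
    """Return an approximate minimizer of a unimodal G on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1 / golden ratio
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if G(c) < G(d):                       # minimizer lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimizer lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

# Toy unimodal G with known minimizer x* = 2.
x_star = golden_section_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(abs(x_star - 2.0) < 1e-6)  # True
```

Unimodality is exactly what this method needs: each comparison discards a fixed fraction of the interval without risk of discarding the minimizer.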
A convex optimization approach for minimizing the ratio of indefinite quadratic functions over an ellipsoid
Cited by 7 (3 self)
Abstract. We consider the nonconvex problem (RQ) of minimizing the ratio of two nonconvex quadratic functions over a possibly degenerate ellipsoid. This formulation is motivated by the so-called regularized total least squares problem (RTLS), which is a special case of the class of problems we study. We prove that under a certain mild assumption on the problem's data, problem (RQ) admits an exact semidefinite programming relaxation. We then study a simple iterative procedure which is proven to converge superlinearly to a global solution of (RQ) and show that the dependency of the number of iterations on the optimality tolerance ε grows as O(√(ln ε^{-1})).

Keywords: ratio of quadratic minimization · nonconvex quadratic minimization · semidefinite programming · strong duality · regularized total least squares · fixed point algorithms · convergence analysis
LMI Approximations for the Radius of the Intersection of Ellipsoids
Journal of Optimization Theory and Applications, 1998
Cited by 7 (4 self)
This paper addresses the problem of evaluating the maximum norm vector within the intersection of several ellipsoids. This difficult nonconvex optimization problem frequently arises in robust control synthesis. Linear matrix inequality relaxations of the problem are enumerated. Two randomized algorithms and several ellipsoidal approximations are described. Guaranteed approximation bounds are derived in order to evaluate the quality of these relaxations.

1 Introduction

1.1 Problem Statement

In this paper we consider the optimization problem

p_opt = max_x x^T x  s.t.  x ∈ F,   (1)

where x is a vector in R^n and the set F is the intersection of m ellipsoids

F = E_1 ∩ E_2 ∩ ⋯ ∩ E_m,   (2)

defined as

E_i = {x : x^T P_i x ≤ 1}   (3)

for P_i a given symmetric positive definite matrix in R^{n×n}. The feasible set F is the intersection of m centered ellipsoids in R^n, hence F is convex and centered about the origin. It i...
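A simple randomized lower bound on p_opt in (1)-(3) comes from sampling directions and scaling each to the boundary of the intersection; this is a generic sketch of the sampling idea on a 2-D instance of our own choosing, not a reproduction of the paper's two algorithms:

```python
import random

# Randomized lower bound on p_opt = max { x^T x : x^T P_i x <= 1 for all i }:
# sample a direction d, scale it to the boundary of the intersection via
# x = t d with t^2 = 1 / max_i d^T P_i d.  Every such x is feasible,
# so the best x^T x found is a valid lower bound on p_opt.
# Toy instance: P1 = I, P2 = diag(4, 1); the optimum is x = (0, 1), p_opt = 1.

def quad_form(P, d):
    n = len(d)
    return sum(P[i][j] * d[i] * d[j] for i in range(n) for j in range(n))

def random_lower_bound(Ps, n, samples=2000, seed=0):
    rng = random.Random(seed)
    best = 0.0
    for _ in range(samples):
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        t2 = 1.0 / max(quad_form(P, d) for P in Ps)  # t^2 making t*d feasible
        best = max(best, t2 * sum(di * di for di in d))
    return best

Ps = [[[1.0, 0.0], [0.0, 1.0]], [[4.0, 0.0], [0.0, 1.0]]]
lb = random_lower_bound(Ps, 2)
print(0.9 <= lb <= 1.0)  # True: close to, and never above, p_opt = 1
```

By construction the bound never exceeds p_opt; the LMI relaxations discussed in the paper bound p_opt from the other side.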
Convexity Properties Associated with Nonconvex Quadratic Matrix Functions and Applications to Quadratic Programming
, 2008
Cited by 1 (0 self)
We establish several convexity results which are concerned with nonconvex quadratic matrix (QM) functions: strong duality of quadratic matrix programming problems, convexity of the image of mappings comprised of several QM functions, and the existence of a corresponding S-lemma. As a consequence of our results, we prove that a class of quadratic problems involving several functions with similar matrix terms has a zero duality gap. We present applications to robust optimization, the solution of linear systems immune to implementation errors, and the problem of computing the Chebyshev center of an intersection of balls.
The Regularized Total Least Squares Problem: Theoretical Properties and Three Globally Convergent Algorithms
Total Least Squares (TLS) is a method for treating an overdetermined system of linear equations Ax ≈ b, where both the matrix A and the vector b are contaminated by noise. In practical situations, the linear system is often ill-conditioned. For example, this happens when the system is obtained via discretization of ill-posed problems such as integral equations of the first kind (see e.g., [7] and references therein). In these cases the TLS solution can be physically meaningless, and thus regularization is essential for stabilizing the solution. Regularization of the TLS solution was addressed by several approaches such as truncation methods [6, 8] and Tikhonov regularization [1]. In this talk we will consider a third approach in which a quadratic constraint is introduced. It is well known [7, 11] that the quadratically constrained total least squares problem can be formulated as a problem of minimizing a ratio of two quadratic functions subject to a quadratic constraint:

(RTLS)  min_{x ∈ R^n} { ‖Ax − b‖^2 / (‖x‖^2 + 1) : ‖Lx‖^2 ≤ ρ },

where A ∈ R^{m×n}, b ∈ R^m, ρ > 0, and L ∈ R^{k×n} (k ≤ n) is a matrix that defines a (semi)norm on the
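The RTLS objective and constraint above are straightforward to evaluate; a minimal sketch on a tiny made-up instance (the numbers are illustrative only, not from any experiment in the listing):

```python
# Evaluating the RTLS objective ||Ax - b||^2 / (||x||^2 + 1) and checking
# feasibility of the quadratic constraint ||Lx||^2 <= rho.
# Matrices are lists of rows; vectors are flat lists.

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def sq_norm(v):
    return sum(vi * vi for vi in v)

def rtls_objective(A, b, x):
    r = [ai - bi for ai, bi in zip(matvec(A, x), b)]
    return sq_norm(r) / (sq_norm(x) + 1.0)

def rtls_feasible(L, x, rho):
    return sq_norm(matvec(L, x)) <= rho

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 x 2 overdetermined system
b = [1.0, 2.0, 3.0]
L = [[1.0, 0.0], [0.0, 1.0]]               # here L = I, a plain norm
x = [1.0, 2.0]

print(rtls_objective(A, b, x))   # residual is 0, so the objective is 0.0
print(rtls_feasible(L, x, 5.0))  # ||Lx||^2 = 5 <= rho = 5 -> True
```

The denominator ‖x‖^2 + 1 is what makes the objective fractional rather than a plain least squares residual, and it is the source of the nonconvexity the papers above address.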