Results 11 - 20 of 645
Monotonicity of primal-dual interior-point algorithms for semidefinite programming problems
1998
"... We present primaldual interiorpoint algorithms with polynomial iteration bounds to find approximate solutions of semidefinite programming problems. Our algorithms achieve the current best iteration bounds and, in every iteration of our algorithms, primal and dual objective values are strictly imp ..."
Abstract

Cited by 181 (34 self)
 Add to MetaCart
We present primal-dual interior-point algorithms with polynomial iteration bounds to find approximate solutions of semidefinite programming problems. Our algorithms achieve the current best iteration bounds and, in every iteration of our algorithms, primal and dual objective values are strictly improved.
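For reference, the standard primal-dual SDP pair that such algorithms operate on (generic notation, not necessarily this paper's) is:

```latex
% Standard-form semidefinite program and its dual (generic notation).
\begin{aligned}
\text{(P)}\quad & \min_{X}\ \langle C, X\rangle
  && \text{s.t. } \langle A_i, X\rangle = b_i,\ i=1,\dots,m,\quad X \succeq 0,\\
\text{(D)}\quad & \max_{y,\,S}\ b^{\mathsf T} y
  && \text{s.t. } \textstyle\sum_{i=1}^{m} y_i A_i + S = C,\quad S \succeq 0.
\end{aligned}
```

For feasible pairs the duality gap is \(\langle C, X\rangle - b^{\mathsf T} y = \langle X, S\rangle \ge 0\), which is the quantity a primal-dual method drives to zero while improving both objectives.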
A direct formulation for sparse PCA using semidefinite programming
In NIPS 17, 2004
"... Abstract. Given a covariance matrix, we consider the problem of maximizing the variance explained by a particular linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This problem arises in the decomposition of a covariance matrix into ..."
Abstract

Cited by 167 (29 self)
 Add to MetaCart
Given a covariance matrix, we consider the problem of maximizing the variance explained by a particular linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This problem arises in the decomposition of a covariance matrix into sparse factors, or sparse principal component analysis (PCA), and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a semidefinite-programming-based relaxation for our problem. We also discuss Nesterov's smooth minimization technique applied to the semidefinite program arising in the semidefinite relaxation of the sparse PCA problem. The method has complexity O(n^4 √(log n)/ε), where n is the size of the underlying covariance matrix and ε is the desired absolute accuracy on the optimal value of the problem.
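A minimal sketch of this type of semidefinite relaxation, written with the cvxpy modeling package (an assumption, not the authors' code); `Sigma` is the covariance matrix and `k` the cardinality target:

```python
import cvxpy as cp

def sparse_pca_sdp(Sigma, k):
    """Sketch of a sparse-PCA SDP relaxation: maximize explained variance
    Tr(Sigma X) over PSD X with unit trace, with an l1 bound standing in
    for the cardinality constraint. Illustrative only."""
    n = Sigma.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0,                     # positive semidefinite
                   cp.trace(X) == 1,           # relaxes X = x x^T with ||x|| = 1
                   cp.sum(cp.abs(X)) <= k]     # convex surrogate for cardinality
    prob = cp.Problem(cp.Maximize(cp.trace(Sigma @ X)), constraints)
    prob.solve()
    return X.value
```

A sparse leading component can then be recovered, for example, from the dominant eigenvector of the optimal `X`.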
Unsupervised Learning of Image Manifolds by Semidefinite Programming
2004
"... Can we detect low dimensional structure in high dimensional data sets of images and video? The problem of dimensionality reduction arises often in computer vision and pattern recognition. In this paper, we propose a new solution to this problem based on semidefinite programming. Our algorithm can be ..."
Abstract

Cited by 162 (9 self)
 Add to MetaCart
Can we detect low dimensional structure in high dimensional data sets of images and video? The problem of dimensionality reduction arises often in computer vision and pattern recognition. In this paper, we propose a new solution to this problem based on semidefinite programming. Our algorithm can be used to analyze high dimensional data that lies on or near a low dimensional manifold. It overcomes certain limitations of previous work in manifold learning, such as Isomap and locally linear embedding. We illustrate the algorithm on easily visualized examples of curves and surfaces, as well as on actual images of faces, handwritten digits, and solid objects.
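The semidefinite program at the heart of this approach (often called maximum variance unfolding) can be stated, in generic notation rather than the paper's, as:

```latex
% Learn a centered PSD Gram matrix K that preserves local distances
% and has maximal trace (variance); generic statement.
\begin{aligned}
\max_{K}\quad & \operatorname{tr}(K)\\
\text{s.t.}\quad & K \succeq 0,\qquad \textstyle\sum_{ij} K_{ij} = 0,\\
& K_{ii} - 2K_{ij} + K_{jj} = \lVert x_i - x_j\rVert^2
  \quad \text{for all neighboring pairs } (i,j).
\end{aligned}
```

Low-dimensional coordinates are then read off from the top eigenvectors of the optimal K.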
A rank minimization heuristic with application to minimum order system approximation
In Proceedings of the 2001 American Control Conference, 2001
"... Several problems arising in control system analysis and design, such as reduced order controller synthesis, involve minimizing the rank of a matrix variable subject to linear matrix inequality (LMI) constraints. Except in some special cases, solving this rank minimization problem (globally) is very ..."
Abstract

Cited by 147 (9 self)
 Add to MetaCart
Several problems arising in control system analysis and design, such as reduced-order controller synthesis, involve minimizing the rank of a matrix variable subject to linear matrix inequality (LMI) constraints. Except in some special cases, solving this rank minimization problem (globally) is very difficult. One simple and surprisingly effective heuristic, applicable when the matrix variable is symmetric and positive semidefinite, is to minimize its trace in place of its rank. This results in a semidefinite program (SDP) which can be efficiently solved. In this paper we describe a generalization of the trace heuristic that applies to general nonsymmetric, even nonsquare, matrices, and reduces to the trace heuristic when the matrix is positive semidefinite. The heuristic is to replace the (nonconvex) rank objective with the sum of the singular values of the matrix, which is the dual of the spectral norm. We show that this problem can be reduced to an SDP, hence efficiently solved. To motivate the heuristic, we show that the dual spectral norm is the convex envelope of the rank on the set of matrices with norm less than one. We demonstrate the method on the problem of minimum order system approximation.
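A small sketch of the heuristic in a modern convex-modeling language (cvxpy, assumed available; the function name and constraint interface are hypothetical):

```python
import cvxpy as cp

def rank_min_heuristic(A_list, b, shape):
    """Replace the nonconvex rank objective by the sum of singular values
    (the nuclear norm), as the abstract describes, subject to affine
    constraints <A_i, X> = b_i. Illustrative sketch only."""
    X = cp.Variable(shape)
    constraints = [cp.trace(A.T @ X) == bi for A, bi in zip(A_list, b)]
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
    prob.solve()
    return X.value
```

For symmetric positive semidefinite `X`, the nuclear norm equals the trace, so this reduces to the trace heuristic the abstract mentions.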
Second-Order Cone Programming
Mathematical Programming, 2001
"... In this paper we survey the second order cone programming problem (SOCP). First we present several applications of the problem in various areas of engineering and robust optimization problems. We also give examples of optimization problems that can be cast as SOCPs. Next we review an algebraic struc ..."
Abstract

Cited by 143 (8 self)
 Add to MetaCart
In this paper we survey the second-order cone programming problem (SOCP). First we present several applications of the problem in various areas of engineering and robust optimization. We also give examples of optimization problems that can be cast as SOCPs. Next we review an algebraic structure that is connected to SOCP; this algebra is a special case of a Euclidean Jordan algebra. After presenting duality theory, complementary slackness conditions, and definitions and algebraic characterizations of primal and dual nondegeneracy and strict complementarity, we review the logarithmic barrier function for the SOCP problem and survey the path-following interior-point algorithms for it. Next we examine numerically stable methods for implementing the interior-point methods and study ways in which sparsity in the input data can be exploited. Finally, we give some current and future research directions in SOCP.
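For orientation, a standard form of an SOCP (generic notation, not necessarily the survey's) is:

```latex
% A second-order cone program in standard inequality form.
\begin{aligned}
\min_{x}\quad & f^{\mathsf T} x\\
\text{s.t.}\quad & \lVert A_i x + b_i \rVert_2 \le c_i^{\mathsf T} x + d_i,
  \qquad i = 1,\dots,m.
\end{aligned}
```

Each constraint requires the vector \((A_i x + b_i,\ c_i^{\mathsf T} x + d_i)\) to lie in a second-order (Lorentz) cone; linear programs and convex quadratically constrained problems arise as special cases.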
Solving semidefinite-quadratic-linear programs using SDPT3
Mathematical Programming, 2003
"... This paper discusses computational experiments with linear optimization problems involving semidefinite, quadratic, and linear cone constraints (SQLPs). Many test problems of this type are solved using a new release of SDPT3, a Matlab implementation of infeasible primaldual pathfollowing algorithm ..."
Abstract

Cited by 139 (18 self)
 Add to MetaCart
This paper discusses computational experiments with linear optimization problems involving semidefinite, quadratic, and linear cone constraints (SQLPs). Many test problems of this type are solved using a new release of SDPT3, a Matlab implementation of infeasible primal-dual path-following algorithms. The software developed by the authors uses Mehrotra-type predictor-corrector variants of interior-point methods and two types of search directions: the HKM and NT directions. A discussion of implementation details is provided and computational results on problems from the SDPLIB and DIMACS Challenge collections are reported.
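To illustrate the SQLP problem class itself (not SDPT3's Matlab interface), here is a toy problem mixing the three cone types, sketched with cvxpy; all data and dimensions are made up for illustration:

```python
import cvxpy as cp
import numpy as np

# Toy SQLP: one semidefinite block, one second-order cone constraint,
# and a nonnegative (linear cone) block, tied together by one equality.
rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3)); C = C + C.T
a, f = rng.standard_normal(3), rng.standard_normal(4)

X = cp.Variable((3, 3), symmetric=True)   # semidefinite block
x = cp.Variable(3)                        # second-order cone block
z = cp.Variable(4, nonneg=True)           # linear (nonnegative) block

constraints = [X >> 0,
               cp.norm(x, 2) <= 1,                      # second-order cone
               cp.trace(X) + a @ x + cp.sum(z) == 1]    # linking equality
prob = cp.Problem(cp.Minimize(cp.trace(C @ X) + a @ x + f @ z), constraints)
prob.solve()
```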
Solving Large-Scale Sparse Semidefinite Programs for Combinatorial Optimization
SIAM Journal on Optimization, 1998
"... We present a dualscaling interiorpoint algorithm and show how it exploits the structure and sparsity of some large scale problems. We solve the positive semidefinite relaxation of combinatorial and quadratic optimization problems subject to boolean constraints. We report the first computational re ..."
Abstract

Cited by 116 (11 self)
 Add to MetaCart
We present a dual-scaling interior-point algorithm and show how it exploits the structure and sparsity of some large-scale problems. We solve the positive semidefinite relaxation of combinatorial and quadratic optimization problems subject to Boolean constraints. We report the first computational results of interior-point algorithms for approximating maximum-cut semidefinite programs with dimension up to 3000.
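The maximum-cut relaxation mentioned here has a compact statement; a sketch in cvxpy (assumed available, using a generic solver rather than the paper's dual-scaling algorithm):

```python
import cvxpy as cp
import numpy as np

def maxcut_sdp(W):
    """SDP relaxation of max cut for a weighted adjacency matrix W:
    maximize (1/4) <L, X> over PSD X with unit diagonal. Sketch only."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                     # graph Laplacian
    X = cp.Variable((n, n), symmetric=True)
    prob = cp.Problem(cp.Maximize(0.25 * cp.trace(L @ X)),
                      [X >> 0, cp.diag(X) == 1])
    prob.solve()
    return prob.value, X.value
```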
Learning a kernel matrix for nonlinear dimensionality reduction
In Proceedings of the Twenty-First International Conference on Machine Learning (ICML-04), 2004
"... We investigate how to learn a kernel matrix for high dimensional data that lies on or near a low dimensional manifold. Noting that the kernel matrix implicitly maps the data into a nonlinear feature space, we show how to discover a mapping that “unfolds ” the underlying manifold from which the data ..."
Abstract

Cited by 112 (7 self)
 Add to MetaCart
We investigate how to learn a kernel matrix for high dimensional data that lies on or near a low dimensional manifold. Noting that the kernel matrix implicitly maps the data into a nonlinear feature space, we show how to discover a mapping that “unfolds” the underlying manifold from which the data was sampled. The kernel matrix is constructed by maximizing the variance in feature space subject to local constraints that preserve the angles and distances between nearest neighbors. The main optimization involves an instance of semidefinite programming, a fundamentally different computation than previous algorithms for manifold learning, such as Isomap and locally linear embedding. The optimized kernels perform better than polynomial and Gaussian kernels for problems in manifold learning, but worse for problems in large margin classification. We explain these results in terms of the geometric properties of different kernels and comment on various interpretations of other manifold learning algorithms as kernel methods.
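A minimal, simplified sketch of the kernel-learning SDP described here, again using cvxpy (an assumed tool; `neighbors` is a hypothetical list of index pairs from a nearest-neighbor graph, and only distance constraints are shown):

```python
import cvxpy as cp
import numpy as np

def learn_kernel(points, neighbors):
    """Maximize variance in feature space (trace of the Gram matrix K)
    subject to centering and to preserving distances between declared
    neighbors. Illustrative sketch, not the authors' implementation."""
    n = points.shape[0]
    K = cp.Variable((n, n), symmetric=True)
    constraints = [K >> 0, cp.sum(K) == 0]             # PSD, centered
    for i, j in neighbors:
        d2 = float(np.sum((points[i] - points[j]) ** 2))
        constraints.append(K[i, i] - 2 * K[i, j] + K[j, j] == d2)
    prob = cp.Problem(cp.Maximize(cp.trace(K)), constraints)
    prob.solve()
    return K.value
```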
On the Nesterov-Todd direction in semidefinite programming
SIAM Journal on Optimization, 1996
"... Nesterov and Todd discuss several pathfollowing and potentialreduction interiorpoint methods for certain convex programming problems. In the special case of semidefinite programming, we discuss how to compute the corresponding directions efficiently, how to view them as Newton directions, and how ..."
Abstract

Cited by 108 (22 self)
 Add to MetaCart
Nesterov and Todd discuss several path-following and potential-reduction interior-point methods for certain convex programming problems. In the special case of semidefinite programming, we discuss how to compute the corresponding directions efficiently, how to view them as Newton directions, and how to take Mehrotra predictor-corrector steps in this framework. We also provide some computational results suggesting that our algorithm is more robust than alternative methods.
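For reference, the Nesterov-Todd direction is defined through a scaling point W that maps the dual iterate to the primal iterate; for X, S ≻ 0 it satisfies the standard identity (stated here in generic notation, not necessarily the paper's):

```latex
% The NT scaling point W satisfies W S W = X and is given explicitly by
W \;=\; S^{-1/2}\bigl(S^{1/2} X S^{1/2}\bigr)^{1/2} S^{-1/2}
  \;=\; X^{1/2}\bigl(X^{1/2} S X^{1/2}\bigr)^{-1/2} X^{1/2}.
```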