Results 1–10 of 29
Solving Large-Scale Sparse Semidefinite Programs for Combinatorial Optimization
SIAM Journal on Optimization, 1998
Abstract

Cited by 116 (11 self)
We present a dual-scaling interior-point algorithm and show how it exploits the structure and sparsity of some large-scale problems. We solve the positive semidefinite relaxation of combinatorial and quadratic optimization problems subject to Boolean constraints. We report the first computational results of interior-point algorithms for approximating maximum-cut semidefinite programs with dimension up to 3,000.
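The randomized methods these max-cut SDPs feed into can be illustrated in a small, self-contained sketch. The SDP solve itself is omitted; the snippet below shows only the standard random-hyperplane rounding step (Goemans-Williamson) applied to idealized unit vectors for a hypothetical 4-cycle, not the paper's dual-scaling solver.

```python
import random

def hyperplane_cut(vectors, edges):
    """Round unit vectors from a max-cut SDP relaxation to a cut:
    pick a random hyperplane and split the vertices by which side
    their vector falls on (Goemans-Williamson rounding)."""
    dim = len(vectors[0])
    r = [random.gauss(0.0, 1.0) for _ in range(dim)]  # random normal vector
    side = [1 if sum(v[k] * r[k] for k in range(dim)) >= 0.0 else -1
            for v in vectors]
    # cut value = number of edges whose endpoints land on opposite sides
    return sum(1 for i, j in edges if side[i] != side[j])

# Toy 4-cycle with idealized alternating relaxation vectors: any random
# hyperplane separates the two groups, recovering the maximum cut of 4.
vectors = [(1.0, 0.0), (-1.0, 0.0), (1.0, 0.0), (-1.0, 0.0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
```

On real instances the relaxation vectors are not perfectly antipodal, and the rounding is repeated several times, keeping the best cut found.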
Semidefinite optimization
Acta Numerica, 2001
Abstract

Cited by 107 (2 self)
Optimization problems in which the variable is not a vector but a symmetric matrix which is required to be positive semidefinite have been intensely studied in the last ten years. Part of the reason for the interest stems from the applicability of such problems to such diverse areas as designing the strongest column, checking the stability of a differential inclusion, and obtaining tight bounds for hard combinatorial optimization problems. Part also derives from great advances in our ability to solve such problems efficiently in theory and in practice (perhaps “or” would be more appropriate: the most effective computational methods are not always provably efficient in theory, and vice versa). Here we describe this class of optimization problems, give a number of examples demonstrating its significance, outline its duality theory, and discuss algorithms for solving such problems.
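For reference, the duality theory outlined in this survey concerns the standard primal-dual SDP pair, where $\langle C, X\rangle = \operatorname{tr}(C X)$ is the trace inner product and $X \succeq 0$ denotes positive semidefiniteness:

```latex
\begin{aligned}
\text{(P)}\quad & \min_{X}\ \langle C, X\rangle
  && \text{s.t.}\ \langle A_i, X\rangle = b_i,\ i = 1,\dots,m,\quad X \succeq 0,\\
\text{(D)}\quad & \max_{y,\,S}\ b^{\mathsf T} y
  && \text{s.t.}\ \sum_{i=1}^{m} y_i A_i + S = C,\quad S \succeq 0.
\end{aligned}
```

Weak duality ($\langle C, X\rangle \ge b^{\mathsf T} y$ for any feasible pair) follows from $\langle X, S\rangle \ge 0$; strong duality requires a constraint qualification such as strict feasibility.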
Solving Euclidean Distance Matrix Completion Problems Via Semidefinite Programming
1997
Abstract

Cited by 69 (14 self)
Given a partial symmetric matrix A with only certain elements specified, the Euclidean distance matrix completion problem (EDMCP) is to find the unspecified elements of A that make A a Euclidean distance matrix (EDM). In this paper, we follow the successful approach in [20] and solve the EDMCP by generalizing the completion problem to allow for approximate completions. In particular, we introduce a primal-dual interior-point algorithm that solves an equivalent (quadratic objective function) semidefinite programming problem (SDP). Numerical results are included which illustrate the efficiency and robustness of our approach. Our randomly generated problems consistently resulted in low-dimensional solutions when no completion existed.
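To make the object being completed concrete: a Euclidean distance matrix holds the squared pairwise distances of some point set, so a valid completion must agree with such a matrix for *some* realizing points. A minimal sketch (illustrative only, not the paper's algorithm) that builds an EDM from given points:

```python
def squared_distance_matrix(points):
    """Build the matrix of squared pairwise Euclidean distances of a
    point set -- by definition a Euclidean distance matrix (EDM):
    symmetric, zero diagonal, nonnegative entries."""
    n = len(points)
    return [[sum((points[i][k] - points[j][k]) ** 2
                 for k in range(len(points[i])))
             for j in range(n)]
            for i in range(n)]

# A 3-4-5 right triangle in the plane: entries are 9, 16, and 25.
D = squared_distance_matrix([(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)])
```

The completion problem runs this construction in reverse: given only some entries of D, decide whether points realizing all of them exist, which is what the SDP formulation captures.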
Polynomial Convergence of a New Family of Primal-Dual Algorithms for Semidefinite Programming
1996
Abstract

Cited by 24 (8 self)
This paper establishes the polynomial convergence of a new class of (feasible) primal-dual interior-point path-following algorithms for semidefinite programming (SDP) whose search directions are obtained by applying Newton's method to the symmetric central-path equation $(P^{\mathsf T} X P)^{1/2}\,(P^{-1} S P^{-\mathsf T})\,(P^{\mathsf T} X P)^{1/2} - I = 0$, where $P$ is a nonsingular matrix. Specifically, we show that the short-step path-following algorithm based on the Frobenius-norm neighborhood and the semi-long-step path-following algorithm based on the operator 2-norm neighborhood have $O(\sqrt{n}\,L)$ and $O(nL)$ iteration-complexity bounds, respectively. When $P = I$, this yields the first polynomially convergent semi-long-step algorithm based on a pure Newton direction. Restricting the scaling matrix $P$ at each iteration to a certain subset of nonsingular matrices, we are able to establish an $O(n^{3/2} L)$ iteration complexity for the long-step path-following method. The resulting subclass of search direct...
Mixed Linear and Semidefinite Programming for Combinatorial and Quadratic Optimization
1999
Abstract

Cited by 19 (4 self)
We use the semidefinite relaxation to approximate combinatorial and quadratic optimization problems subject to linear, quadratic, and Boolean constraints. We present a dual potential-reduction algorithm and show how to exploit the sparse structure of various problems. Coupled with randomized and heuristic methods, we report computational results for approximating graph-partition and quadratic problems with dimensions 800 to 10,000. To the best of our knowledge, this is the first computational evidence of the effectiveness of these approximation algorithms for solving large-scale problems.
Solving Sparse Semidefinite Programs Using the Dual Scaling Algorithm with an Iterative Solver
2000
Abstract

Cited by 17 (0 self)
Recently, the dual-scaling interior-point algorithm has been used to solve large-scale semidefinite programs arising from discrete optimization, since it exploits the sparsity structure of these problems better than several other interior-point methods, while retaining the same polynomial time complexity. However, solving a linear system with a fully dense Gram matrix in each iteration of the algorithm becomes the time bottleneck of the computation. To overcome this difficulty, we have tested an iterative method, the conjugate gradient method with a simple preconditioner, to solve the linear system to a prescribed accuracy. In this report, we present computational results of solving semidefinite programs with dimension up to 20,000, which show that the iterative method can reduce computation time by up to a factor of 25 relative to a direct Cholesky factorization solver. Key words. Semidefinite program, dual-scaling algorithm, conjugate gradient method, preconditioner. This work is partially ...
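The iterative solver described above can be sketched in a few lines. The abstract says only "a simple preconditioner"; the version below assumes a Jacobi (diagonal) preconditioner for illustration, working on a small dense symmetric positive definite system rather than an SDP Gram matrix.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner,
    solving A x = b for a symmetric positive definite dense matrix A
    given as a list of rows."""
    n = len(b)
    d = [A[i][i] for i in range(n)]      # preconditioner M = diag(A)
    x = [0.0] * n
    r = list(b)                          # residual b - A x, with x = 0
    z = [r[i] / d[i] for i in range(n)]  # preconditioned residual M^-1 r
    p = list(z)                          # search direction
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:       # stop at prescribed accuracy
            break
        z = [r[i] / d[i] for i in range(n)]
        rz, rz_old = dot(r, z), rz
        p = [z[i] + (rz / rz_old) * p[i] for i in range(n)]
    return x
```

The appeal in the SDP setting is that each iteration needs only matrix-vector products with the Gram matrix, avoiding the dense Cholesky factorization, and the tolerance can be loosened in early interior-point iterations.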
Generalization of Primal-Dual Interior-Point Methods to Convex Optimization Problems in Conic Form
1999
Abstract

Cited by 15 (3 self)
We generalize primal-dual interior-point methods for linear programming problems to convex optimization problems in conic form. Previously, the most comprehensive theory of symmetric primal-dual interior-point algorithms was given by Nesterov and Todd [8, 9] for feasible regions expressed as the intersection of a symmetric cone with an affine subspace. In our setting, we allow an arbitrary convex cone in place of the symmetric cone. Even though some of the impressive properties attained by Nesterov-Todd algorithms are impossible in this general setting of convex optimization problems, we show that essentially all primal-dual interior-point algorithms for LP can be extended easily to the general setting. We provide three frameworks for primal-dual algorithms, each framework corresponding to a different level of sophistication in the algorithms. As the level of sophistication increases, we demand better formulations of the feasible solution sets, but our algorithms, in return, atta...
Parallel Computing on Semidefinite Programs
 In Proceedings of the 5th European Workshop on Natural Language Generation
, 2003
Abstract

Cited by 11 (3 self)
This paper demonstrates how interior-point methods can use multiple processors efficiently to solve large semidefinite programs that arise in VLSI design, control theory, and graph coloring. Previous implementations of these methods have been restricted to a single processor. By computing and solving the Schur complement matrix in parallel, multiple processors enable the faster solution of medium and large problems. The dual-scaling algorithm for semidefinite programming was adapted to a distributed-memory environment and used to solve medium and large problems faster than previously possible with interior-point algorithms. Three criteria that influence the parallel scalability of the solver are identified. Numerical results show that on problems of appropriate size and structure, the implementation of an interior-point method exhibits good scalability on parallel architectures.
Approximating Maximum Stable Set and Minimum Graph Coloring Problems with the Positive Semidefinite Relaxation
Abstract

Cited by 9 (1 self)
We compute approximate solutions to the maximum stable set problem and the minimum graph coloring problem using a positive semidefinite relaxation. The positive semidefinite programs are solved using an implementation of the dual-scaling algorithm that takes advantage of the sparsity inherent in most graphs and the structure inherent in the problem formulation. From the solution to the relaxation, we apply a randomized algorithm to find approximate maximum stable sets and a modification of a popular heuristic to find graph colorings. We obtained high-quality answers for graphs with over 1,000 vertices and almost 7,000 edges.
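The abstract's randomized algorithm works from the SDP solution, which is not reproduced here. As a simpler stand-in, the sketch below uses a plain randomized greedy heuristic (shuffle the vertices, then greedily grow a maximal stable set, keeping the best over several trials); this is not the paper's method, but it shows the shape of such a randomized search.

```python
import random

def random_greedy_stable_set(n, edges, trials=50):
    """Randomized greedy heuristic for maximum stable (independent) set:
    over several random vertex orders, add each vertex whose neighbors
    are all absent from the set so far, and keep the largest set found."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = set()
    for _ in range(trials):
        order = list(range(n))
        random.shuffle(order)
        s = set()
        for v in order:
            if adj[v].isdisjoint(s):  # no chosen neighbor -> v is safe
                s.add(v)
        if len(s) > len(best):
            best = s
    return best
```

SDP-guided rounding improves on this kind of blind shuffling by biasing the search with the relaxation's geometry, which is what gives the approach its quality guarantees on large graphs.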