## A Spectral Bundle Method for Semidefinite Programming (1997)

Venue: SIAM JOURNAL ON OPTIMIZATION

Citations: 138 (6 self)

### BibTeX

```bibtex
@ARTICLE{Helmberg97aspectral,
  author  = {C. Helmberg and F. Rendl},
  title   = {A Spectral Bundle Method for Semidefinite Programming},
  journal = {SIAM JOURNAL ON OPTIMIZATION},
  year    = {1997},
  volume  = {10},
  pages   = {673--696}
}
```

### Abstract

A central drawback of primal-dual interior point methods for semidefinite programs is their inability to exploit problem structure in the cost and coefficient matrices. This restricts their applicability to problems of small dimension. Typically, semidefinite relaxations arising in combinatorial applications have sparse and well-structured cost and coefficient matrices of huge order. We present a method that allows one to compute acceptable approximations to the optimal solution of large problems within reasonable time. Semidefinite programming problems with constant trace on the primal feasible set are equivalent to eigenvalue optimization problems. These are convex nonsmooth programming problems and can be solved by bundle methods. We propose replacing the traditional polyhedral cutting plane model constructed from subgradient information by a semidefinite model that is tailored for eigenvalue problems. Convergence follows from the traditional approach, but a proof is included for completeness.
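The constant-trace equivalence described in the abstract can be made concrete: when the primal feasible set satisfies tr X = a, the dual amounts to minimizing f(y) = a·λ_max(C − Σᵢ yᵢAᵢ) + bᵀy, a convex nonsmooth function whose subgradients come from eigenvectors of the maximal eigenvalue. The following is a minimal numerical sketch with random data (NumPy; not the authors' code — all names are illustrative) that evaluates f, builds a subgradient, and checks the subgradient inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, a = 6, 3, 1.0  # matrix order, number of constraints, trace constant

def sym(M):
    return (M + M.T) / 2

C = sym(rng.standard_normal((n, n)))
A = [sym(rng.standard_normal((n, n))) for _ in range(m)]
b = rng.standard_normal(m)

def f_and_subgrad(y):
    # f(y) = a * lambda_max(C - sum_i y_i A_i) + b^T y  (convex in y)
    M = C - sum(yi * Ai for yi, Ai in zip(y, A))
    w, V = np.linalg.eigh(M)          # ascending eigenvalues
    v = V[:, -1]                      # eigenvector of the maximal eigenvalue
    fval = a * w[-1] + b @ y
    # subgradient: g_i = b_i - a * <A_i, v v^T>
    g = b - a * np.array([v @ Ai @ v for Ai in A])
    return fval, g

y = rng.standard_normal(m)
fy, g = f_and_subgrad(y)
# convexity check: f(z) >= f(y) + g^T (z - y) for any z
for _ in range(100):
    z = rng.standard_normal(m)
    fz, _ = f_and_subgrad(z)
    assert fz >= fy + g @ (z - y) - 1e-9
```

The inequality holds exactly here because λ_max(M(z)) ≥ vᵀM(z)v for any unit vector v, which is what makes eigenvector-based subgradients valid globally.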

### Citations

3278 | Convex analysis
- Rockafellar
- 1970
Citation Context: ...(W; y) + (u/2)‖y − ŷ‖² = max_{W∈Ŵ} ⟨C − Aᵀŷ, W⟩ + bᵀŷ − (1/2u)⟨AW − b, AW − b⟩. The first equality follows from interchanging min and max (see Corollary 37.3.2 of [41]) and using first-order optimality for the inner minimization with respect to y, y = ŷ + (1/u)(AW − b). (3.3) The final problem is a semidefinite program with (concave) quadratic cost function. We...
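The first-order optimality condition quoted in this snippet, y = ŷ + (1/u)(AW − b), can be verified numerically: for fixed W the inner objective is a strictly convex quadratic in y, so its stationary point is the minimizer. A small sketch with random data (NumPy; the variable names are illustrative, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, u = 5, 3, 2.0
sym = lambda M: (M + M.T) / 2
C = sym(rng.standard_normal((n, n)))
A = [sym(rng.standard_normal((n, n))) for _ in range(m)]
b = rng.standard_normal(m)
W = rng.standard_normal((n, n))
W = W @ W.T                      # positive semidefinite stand-in for a feasible W
yhat = rng.standard_normal(m)

AW = np.array([np.trace(Ai @ W) for Ai in A])   # the operator A applied to W

def phi(y):
    # <C - sum_i y_i A_i, W> + b^T y + (u/2) ||y - yhat||^2, for fixed W
    return (np.trace((C - sum(yi * Ai for yi, Ai in zip(y, A))) @ W)
            + b @ y + u / 2 * np.sum((y - yhat) ** 2))

y_star = yhat + (AW - b) / u     # closed-form minimizer, cf. eq. (3.3)
# stationarity check: no nearby perturbation does better
for _ in range(50):
    assert phi(y_star) <= phi(y_star + 0.1 * rng.standard_normal(m)) + 1e-12
```

Setting the gradient −A(W) + b + u(y − ŷ) to zero gives exactly the update above, which is why the inner minimization can be eliminated in closed form.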

935 | Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming
- GOEMANS, WILLIAMSON
- 1995
Citation Context: ...ations could be of practical value. Within a short time several approximation algorithms relying on semidefinite programming were published, most of them based on the approach by Goemans and Williamson [8]. On the implementational side [14, 16, 20] cutting plane approaches for semidefinite relaxations of ...

473 | Primal-dual interior-point methods for semidefinite programming: convergence results, stability and numerical results
- Alizadeh, Haeberly, et al.
- 1998
Citation Context: ...roximal bundle method, large-scale problems. AMS subject classifications. 65F15, 90C25; Secondary 52A41, 90C06. 1. Introduction. The development of interior point methods for semidefinite programming [19, 31, 1, 46] has increased interest in semidefinite modeling techniques in several fields such as control theory, eigenvalue optimization, and combinatorial optimization. In fact, interior point methods proved to...

431 | Convex Analysis and Minimization Algorithms I
- Hiriart-Urruty, Lemaréchal
- 1993
Citation Context: ...n with an additional linear objective term. These functions are well known to be convex and non-smooth. A very general method for optimizing non-smooth convex functions is the bundle method, see e.g. [21, 42, 17, 18]. In each step the function value and a subgradient of the function are computed for some specific point. By means of the collected subgradients a cutting plane model of the function is formed. The min...
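The bundle mechanism this snippet describes — collect subgradients, build a polyhedral cutting plane model, minimize the model plus a proximal term, accept the candidate only on sufficient descent — can be sketched on a toy one-dimensional nonsmooth function. This is a generic proximal bundle sketch, not the paper's spectral variant; SciPy's scalar minimizer stands in for the quadratic subproblem solver:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda y: abs(y - 3.0)                  # toy convex nonsmooth objective
subgrad = lambda y: 1.0 if y >= 3.0 else -1.0
u = 1.0                                     # proximal weight
yhat = -5.0                                 # stability center
cuts = [(yhat, f(yhat), subgrad(yhat))]     # bundle of (point, value, subgradient)

def model(y):
    # polyhedral cutting plane model: max of collected linearizations
    return max(fv + g * (y - yj) for yj, fv, g in cuts)

for _ in range(30):
    # candidate minimizes model + proximal term around the stability center
    cand = minimize_scalar(lambda y: model(y) + u / 2 * (y - yhat) ** 2,
                           bounds=(-100, 100), method="bounded").x
    cuts.append((cand, f(cand), subgrad(cand)))
    # serious step only on sufficient actual descent relative to predicted descent
    if f(cand) < f(yhat) - 0.1 * (f(yhat) - model(cand)):
        yhat = cand
assert abs(yhat - 3.0) < 1e-2
```

The paper's contribution is precisely to replace the polyhedral `model` above by a semidefinite model tailored to the maximum eigenvalue function, while keeping this outer loop.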

340 | Interior-Point Polynomial Algorithms
- Nesterov, Nemirovskii
- 1994
Citation Context: ...roximal bundle method, large-scale problems. AMS subject classifications. 65F15, 90C25; Secondary 52A41, 90C06. 1. Introduction. The development of interior point methods for semidefinite programming [19, 31, 1, 46] has increased interest in semidefinite modeling techniques in several fields such as control theory, eigenvalue optimization, and combinatorial optimization. In fact, interior point methods proved to...

245 | Implicit application of polynomial filters in a k-step Arnoldi method
- Sorensen
- 1992
Citation Context: ...e [43]. Rather recently interest in the Lanczos method has risen again, see [25, 3, 5, 10, 30] and references therein. Most of these papers are based on the concept of an implicit restart proposed in [44], which is a polynomial acceleration approach that does not require additional matrix-vector multiplications. It will be interesting to test these new ideas within the bundle framework. We thank K.C. K...
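The implicit-restart idea referenced here ([44]) is what ARPACK implements, and it is readily usable through SciPy to compute the maximal eigenvalue of a large sparse symmetric matrix — the core operation the bundle method needs at every step. A small usage sketch on random sparse data (not tied to the paper's own solver):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 500
# random sparse symmetric test matrix
M = sp.random(n, n, density=0.01, random_state=42)
M = (M + M.T) * 0.5
# largest algebraic eigenvalue via implicitly restarted Lanczos (ARPACK)
vals, vecs = eigsh(M, k=1, which="LA")
# sanity check against a dense eigensolver
dense_max = np.linalg.eigvalsh(M.toarray())[-1]
assert abs(vals[0] - dense_max) < 1e-8
```

`eigsh` only needs matrix-vector products, so sparse or otherwise structured cost matrices of huge order can be handled without ever forming a dense matrix.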

232 | Fast approximation algorithms for fractional packing and covering problems
- Plotkin, Shmoys, et al.
- 1991
Citation Context: ...st function and have to cope with dense matrices X and Z^{-1}. An alternative approach based on a combination of the power method with a generic optimization scheme of Plotkin, Shmoys, and Tardos [37] was proposed in [22] but seems to be purely theoretical. In Table 7.1 we compare the proposed bundle method to our semidefinite primal-dual interior point code of [14] (called PDIP in the sequel) for ...

202 | An interior-point method for semidefinite programming
- Helmberg, Rendl, et al.
- 1996
Citation Context: ...if the problem is defined over large matrix variables or a huge number of constraints, interior point methods grow terribly slow and consume huge amounts of memory. The most efficient methods of today [15, 23, 2, 32, 45, 29] are primal-dual methods that require, in each iteration of the interior point method, the factorization of a dense matrix of order equal to the number of constraints and one to three factorizations o...

162 | Self-scaled barriers and interior-point methods in convex programming
- Nesterov, Todd
- 1997
Citation Context: ...if the problem is defined over large matrix variables or a huge number of constraints, interior point methods grow terribly slow and consume huge amounts of memory. The most efficient methods of today [15, 23, 2, 32, 45, 29] are primal-dual methods that require, in each iteration of the interior point method, the factorization of a dense matrix of order equal to the number of constraints and one to three factorizations o...

119 | A version of the bundle idea for minimizing a non-smooth function: conceptual idea, convergence analysis, numerical results
- Schramm, Zowe
- 1992
Citation Context: ...n with an additional linear objective term. These functions are well known to be convex and non-smooth. A very general method for optimizing non-smooth convex functions is the bundle method, see e.g. [21, 42, 17, 18]. In each step the function value and a subgradient of the function are computed for some specific point. By means of the collected subgradients a cutting plane model of the function is formed. The min...

113 | Solving Large–Scale Sparse Semidefinite Programs for Combinatorial Optimization
- Benson, Ye, et al.
Citation Context: ...e. It is important to realize that either the primal or the dual matrix is generically dense even if cost and coefficient matrices are very sparse. Very recently, a pure dual approach was proposed in [4] which offers some possibilities to exploit sparsity. It is too early to judge the potential of this method. In combinatorial optimization semidefinite relaxations were introduced in [27]. At that ti...

101 | Complementarity and nondegeneracy in semidefinite programming - Alizadeh, Haeberly, et al. - 1997

99 | Proximity control in bundle methods for convex nondifferentiable minimization
- Kiwiel
- 1990
Citation Context: ...n with an additional linear objective term. These functions are well known to be convex and non-smooth. A very general method for optimizing non-smooth convex functions is the bundle method, see e.g. [21, 42, 17, 18]. In each step the function value and a subgradient of the function are computed for some specific point. By means of the collected subgradients a cutting plane model of the function is formed. The min...

85 | A shifted block Lanczos algorithm for solving sparse symmetric generalized eigenproblems
- Grimes, Lewis, et al.
- 1994
Citation Context: ...A straightforward approach to achieve serious speed-ups is to implement the algorithm on parallel machines, see for instance [43]. Rather recently interest in the Lanczos method has risen again, see [25, 3, 5, 10, 30] and references therein. Most of these papers are based on the concept of an implicit restart proposed in [44] which is a polynomial acceleration approach that does not require additional matrix-vecto...

84 | A primal-dual potential reduction method for problems involving matrix inequalities
- Vandenberghe, Boyd
- 1995
Citation Context: ...roximal bundle method, large-scale problems. AMS subject classifications. 65F15, 90C25; Secondary 52A41, 90C06. 1. Introduction. The development of interior point methods for semidefinite programming [19, 31, 1, 46] has increased interest in semidefinite modeling techniques in several fields such as control theory, eigenvalue optimization, and combinatorial optimization. In fact, interior point methods proved to...

79 | Large-scale optimization of eigenvalues
- Overton
- 1992
Citation Context: ...nvalue, thus close to the optimal solution this number is at least as large as the multiplicity of the maximal eigenvalue in the optimal solution. In the quadratically convergent algorithm of Overton [35] each step is computed from a complete spectral decomposition of the matrix and a guess of the exact multiplicity of the maximal eigenvalue in the optimal solution. In recent work [33, 34] Oustry rein...

67 | On the rank of extreme matrices in semidefinite programs and the multiplicity of optimal eigenvalues
- Pataki
- 1998
Citation Context: ...m without aggregate subgradients, it suffices to store in P only the subspace spanning the eigenvectors corresponding to non-zero eigenvalues of an optimal solution W^{k+1} of (3.2). Using the bound of [36] it is not too difficult to show that in this case the maximal number of columns one has to provide is the largest r ∈ N satisfying (r+1 choose 2) ≤ m + 1, plus the number of eigenvectors to be added...

62 | Interior-point methods for the monotone linear complementarity problem in symmetric matrices
- Kojima, Shindoh, et al.
- 1997
Citation Context: ...if the problem is defined over large matrix variables or a huge number of constraints, interior point methods grow terribly slow and consume huge amounts of memory. The most efficient methods of today [15, 23, 2, 32, 45, 29] are primal-dual methods that require, in each iteration of the interior point method, the factorization of a dense matrix of order equal to the number of constraints and one to three factorizations o...

54 | An implicitly restarted Lanczos method for large symmetric eigenvalue problems
- Calvetti, Reichel, et al.
- 1994
Citation Context: ...A straightforward approach to achieve serious speed-ups is to implement the algorithm on parallel machines, see for instance [43]. Rather recently interest in the Lanczos method has risen again, see [25, 3, 5, 10, 30] and references therein. Most of these papers are based on the concept of an implicit restart proposed in [44] which is a polynomial acceleration approach that does not require additional matrix-vecto...

51 | Solving quadratic (0, 1)-problems by semidefinite programs and cutting planes
- Helmberg, Rendl
- 1998
Citation Context: ...Within a short time several approximation algorithms relying on semidefinite programming were published, most of them based on the approach by Goemans and Williamson [8]. On the implementational side [14, 16, 20] cutting plane approaches for semidefinite relaxations of ...

48 | Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions
- Monteiro, Tsuchiya

46 | An interior point method for minimizing the maximum eigenvalue of a linear combination of matrices
- Jarre
- 1993

45 | The minimization of certain nondifferentiable sums of eigenvalues of symmetric matrices
- Cullum, Donath, et al.
- 1975
Citation Context: ...vergence for restricted bundle sizes. In the extreme the bundle may consist of one new eigenvector to the maximal eigenvalue only. In contrast, the `classical' algorithms of Cullum, Donath, and Wolfe [6] and Polak and Wardi [38] require in each iteration the computation of all eigenvectors to eigenvalues within an ε-distance of the maximal eigenvalue, thus close to the optimal solution this number is...

30 | Iterative methods for the computation of a few eigenvalues of a large symmetric matrix
- Baglama, Calvetti, et al.
- 1996
Citation Context: ...A straightforward approach to achieve serious speed-ups is to implement the algorithm on parallel machines, see for instance [43]. Rather recently interest in the Lanczos method has risen again, see [25, 3, 5, 10, 30] and references therein. Most of these papers are based on the concept of an implicit restart proposed in [44] which is a polynomial acceleration approach that does not require additional matrix-vecto...

28 | Preconditioning the Lanczos algorithm for sparse symmetric eigenvalue problems
- Morgan, Scott
- 1993

28 | Nondifferentiable optimization algorithm for designing control systems having singular value inequalities
- POLAK, WARDI
- 1982
Citation Context: ...bundle sizes. In the extreme the bundle may consist of one new eigenvector to the maximal eigenvalue only. In contrast, the `classical' algorithms of Cullum, Donath, and Wolfe [6] and Polak and Wardi [38] require in each iteration the computation of all eigenvectors to eigenvalues within an ε-distance of the maximal eigenvalue, thus close to the optimal solution this number is at least as large as the...

25 | Solving graph bisection problems with semidefinite programming
- Karisch, Rendl, et al.
- 2000
Citation Context: ...Within a short time several approximation algorithms relying on semidefinite programming were published, most of them based on the approach by Goemans and Williamson [8]. On the implementational side [14, 16, 20] cutting plane approaches for semidefinite relaxations of ...

24 | Connections between semidefinite relaxations of the maxcut and stable set problems
- Laurent, Poljak, et al.
- 1997
Citation Context: ...) might look artificial, it does hold for SDP arising from quadratic 0-1 optimization. It also holds for many other SDP derived as relaxations of combinatorial optimization problems, see for instance [1, 12, 24]. 3. The bundle method. In this section we develop a new method for minimizing f̂. We use two classical ingredients, the proximal point idea, and the bundle concept. The new contribution lies in the w...

23 | Efficient approximation algorithms semidefinite programs arising from max-cut and coloring
- Klein, Lu
- 1996
Citation Context: ...to cope with dense matrices X and Z^{-1}. An alternative approach based on a combination of the power method with a generic optimization scheme of Plotkin, Shmoys, and Tardos [37] was proposed in [22] but seems to be purely theoretical. In Table 7.1 we compare the proposed bundle method to our semidefinite primal-dual interior point code of [14] (called PDIP in the sequel) for graphs on n = m = 800...

18 | Cones of matrices and set-functions and 0-1 optimization
- LOVÁSZ, SCHRIJVER
- 1991
Citation Context: ...the potential of this method. In combinatorial optimization semidefinite relaxations were introduced in [27]. At that time they were mainly considered a theoretical tool for obtaining strong bounds [11, 28, 40]. With the development of interior point methods hopes soared high that these relaxations could be of practical value. Within a short time several approximation algorithms relying on semidefinite progra...

15 | The U-Lagrangian of the maximum eigenvalue function
- Oustry
- 1999
Citation Context: ...ithm of Overton [35] each step is computed from a complete spectral decomposition of the matrix and a guess of the exact multiplicity of the maximal eigenvalue in the optimal solution. In recent work [33, 34] Oustry reinterprets the algorithm of Overton within the framework of the U-Lagrangian introduced in [26] and embeds it in a first order method to ensure global convergence. Again, for global converge...

10 | Quadratic knapsack relaxations using cutting planes and semidefinite programming
- Helmberg, Rendl, et al.
- 1996
Citation Context: ...Within a short time several approximation algorithms relying on semidefinite programming were published, most of them based on the approach by Goemans and Williamson [8]. On the implementational side [14, 16, 20] cutting plane approaches for semidefinite relaxations of ...

9 | On the Nesterov-Todd direction in semidefinite programming
- Todd, Toh, Tütüncü
- 1998

8 | Semi-definite programming: a path-following algorithm for a linear-quadratic functional (to appear in SIAM J. Optimization); see also: Application of a reduction method to the analysis of linear dynamical systems with phase constraints
- Faybusovich
- 1982
Citation Context: ...puted efficiently. We have already seen in §3 that this task is equivalent to solving the quadratic semidefinite program (3.4). Problems of this kind can be solved by interior point methods, see e.g. [7, 23]. Dropping the iteration index k and the constants in (3.4) we obtain for y = x^k: min (1/2u)⟨AW, AW⟩ − (1/u)⟨b, AW⟩ − ⟨C − Aᵀ(y), W⟩ s.t. W = αW̄ + PVPᵀ, α + tr V = 1, α ≥ 0, ...

7 | On the Shannon Capacity of a Graph
- Lovász
- 1979
Citation Context: ...proposed in [4] which offers some possibilities to exploit sparsity. It is too early to judge the potential of this method. In combinatorial optimization semidefinite relaxations were introduced in [27]. At that time they were mainly considered a theoretical tool for obtaining strong bounds [11, 28, 40]. With the development of interior point methods hopes soared high that these relaxations could be...

6 | Node and edge relaxations of the max-cut problem
- Poljak, Rendl
- 1994
Citation Context: ...ns and Williamson [8] says that there is always a cut within .878 of the optimal value of the relaxation. One of the first attempts to approximate (DMC) using eigenvalue optimization is contained in [39]. The authors use the bundle code of Schramm and Zowe [42] with a limited number of bundle iterations, and so do not solve (DMC) exactly. So far the only practical algorithms for computing the optimal...

5 | Implementation of an implicitly restarted block Arnoldi method
- Lehoucq, Maschhoff
- 1997

4 | Incorporating inequality constraints in the spectral bundle method
- Helmberg, Kiwiel, et al.
Citation Context: ...ht consider active set methods but these entail the danger of destroying convergence. Together with K.C. Kiwiel we are currently working on alternative methods for incorporating sign constraints on y [13]. The backbone of the method is an efficient routine for computing the maximal eigenvalue of huge structured symmetric matrices. Although our own implementation...

2 | Geometric Algorithms and Combinatorial Optimization
- Grötschel, Lovász, et al.
- 1988
Citation Context: ...the potential of this method. In combinatorial optimization semidefinite relaxations were introduced in [27]. At that time they were mainly considered a theoretical tool for obtaining strong bounds [11, 28, 40]. With the development of interior point methods hopes soared high that these relaxations could be of practical value. Within a short time several approximation algorithms relying on semidefinite progra...

2 | Implementing Lanczos-like algorithms on hypercube architectures
- Scott
- 1989
Citation Context: ...to work sufficiently stably there is certainly much room for improvement. A straightforward approach to achieve serious speed-ups is to implement the algorithm on parallel machines, see for instance [43]. Rather recently interest in the Lanczos method has risen again, see [25, 3, 5, 10, 30] and references therein. Most of these papers are based on the concept of an implicit restart proposed in [44] w...

1 | Fixing variables in semidefinite relaxations, in Algorithms
- Helmberg
- 1997
Citation Context: ...) might look artificial, it does hold for SDP arising from quadratic 0-1 optimization. It also holds for many other SDP derived as relaxations of combinatorial optimization problems, see for instance [1, 12, 24]. 3. The bundle method. In this section we develop a new method for minimizing f̂. We use two classical ingredients, the proximal point idea, and the bundle concept. The new contribution lies in the w...

1 | The U-Lagrangian of a convex function, technical report
- Lemaréchal, Oustry, Sagastizábal
- 1996
Citation Context: ...f the exact multiplicity of the maximal eigenvalue in the optimal solution. In recent work [33, 34] Oustry reinterprets the algorithm of Overton within the framework of the U-Lagrangian introduced in [26] and embeds it in a first order method to ensure global convergence. Again, for global convergence the approach relies on the spectrum of all eigenvalues within ε-distance of the maximal eigenvalue an...

1 | The block Chebyshev-Lanczos method for solving large symmetric eigenvalue problems
- 1989
Citation Context: ...be worthwhile to include several of these approximate eigenvectors as well. In our algorithm we use a block Lanczos code of our own that is based on a Fortran code of Hua (we guess that this is Hua Dai of [47]). It works with complete orthogonalization and employs Chebyshev iterations for acceleration. The choice of the blocksize is based on the approximate eigenvalues produced by previous evaluations but ...