Results 1-10 of 44
Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones
1998
Cited by 736 (3 self)
Abstract:
SeDuMi is an add-on for MATLAB that lets you solve optimization problems with linear, quadratic, and semidefiniteness constraints. Complex-valued data and variables are supported, and large-scale optimization problems are solved efficiently by exploiting sparsity. This paper describes how to work with this toolbox.
Solving semidefinite-quadratic-linear programs using SDPT3
 Mathematical Programming
2003
Cited by 139 (18 self)
Abstract:
This paper discusses computational experiments with linear optimization problems involving semidefinite, quadratic, and linear cone constraints (SQLPs). Many test problems of this type are solved using a new release of SDPT3, a MATLAB implementation of infeasible primal-dual path-following algorithms. The software developed by the authors uses Mehrotra-type predictor-corrector variants of interior-point methods and two types of search directions: the HKM and NT directions. A discussion of implementation details is provided and computational results on problems from the SDPLIB and DIMACS Challenge collections are reported.
A spectral technique for correspondence problems using pairwise constraints
 In ICCV
2005
Cited by 131 (9 self)
Abstract:
We present an efficient spectral method for finding consistent correspondences between two sets of features. We build the adjacency matrix M of a graph whose nodes represent the potential correspondences and whose link weights represent pairwise agreements between potential correspondences. Correct assignments are likely to establish links among each other and thus form a strongly connected cluster. Incorrect correspondences establish links with the other correspondences only accidentally, so they are unlikely to belong to strongly connected clusters. We recover the correct assignments based on how strongly they belong to the main cluster of M, by using the principal eigenvector of M and imposing the mapping constraints required by the overall correspondence mapping (one-to-one or one-to-many). The experimental evaluation shows that our method is robust to outliers and accurate in terms of matching rate, while being much faster than existing methods.
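The pipeline the abstract describes (affinity matrix, principal eigenvector, greedy enforcement of the mapping constraints) can be sketched in a few lines. The helper `spectral_match` and its greedy one-to-one discretization below are an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def spectral_match(M, candidates, n_keep):
    """Greedy spectral matching sketch.

    M          : symmetric affinity matrix over candidate assignments
    candidates : list of (i, j) feature pairs, one per row of M
    n_keep     : number of one-to-one assignments to extract
    """
    # Principal eigenvector of M (entries of M are non-negative, so the
    # leading eigenvector can be taken non-negative).
    w, V = np.linalg.eigh(M)
    v = np.abs(V[:, np.argmax(w)])

    # Greedy discretization enforcing the one-to-one mapping constraint:
    # repeatedly accept the strongest candidate, then skip any candidate
    # that reuses one of its features.
    order = np.argsort(-v)
    used_i, used_j, accepted = set(), set(), []
    for k in order:
        i, j = candidates[k]
        if i in used_i or j in used_j:
            continue
        accepted.append((i, j))
        used_i.add(i)
        used_j.add(j)
        if len(accepted) == n_keep:
            break
    return accepted
```

With a block-structured affinity matrix, the mutually consistent candidates dominate the principal eigenvector and are selected first.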
Modified Cholesky Factorizations in Interior-Point Algorithms for Linear Programming
 SIAM Journal on Optimization
Cited by 24 (2 self)
Abstract:
We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill-conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach. Key words: interior-point algorithms and software, Cholesky factorization, matrix p...
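The kind of pivot modification the abstract refers to can be illustrated on a small dense factorization. The sketch below replaces a dangerously small pivot by a huge number so the factorization can proceed; the function name and the thresholds `delta` and `big` are illustrative placeholders, not the values analyzed in the paper:

```python
import numpy as np

def modified_cholesky(A, delta=1e-30, big=1e128):
    """Dense sketch of a pivot-safeguarded Cholesky factorization.

    Whenever a pivot falls below `delta` (as happens when the interior-point
    coefficient matrix becomes ill-conditioned near a solution), the pivot is
    replaced by the huge value `big`; the corresponding component of any
    subsequent triangular solve is thereby effectively zeroed out instead of
    the factorization aborting.
    """
    A = A.copy().astype(float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        pivot = A[k, k]
        if pivot < delta:          # ill-conditioned pivot: apply safeguard
            pivot = big
        L[k, k] = np.sqrt(pivot)
        L[k + 1:, k] = A[k + 1:, k] / L[k, k]
        # Rank-one update of the trailing submatrix.
        A[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])
    return L
```

On a well-conditioned positive definite matrix the safeguard never fires and the routine reduces to ordinary Cholesky.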
On the Identification of Zero Variables in an Interior-Point Framework
1998
Cited by 6 (1 self)
Abstract:
We consider column sufficient linear complementarity problems and study the problem of identifying those variables that are zero at a solution. To this end we propose a new, computationally inexpensive technique that is based on growth functions. We analyze in detail the theoretical properties of the identification technique and test it numerically. The identification technique is particularly suited to interior-point methods but can be applied to a wider class of methods.
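The paper's growth functions are not reproduced in the abstract, but the general idea of predicting the zero set from iterate information can be illustrated with the classical Tapia-style ratio indicator, shown here only as a simple stand-in:

```python
import numpy as np

def tapia_indicator(x_prev, x_curr, tol=0.5):
    """Tapia-style zero-variable indicator (a stand-in, not the paper's
    growth functions). For interior-point iterates, the ratio
    x_i^{k+1} / x_i^k tends to 0 on components that are zero at the
    solution and to 1 on components that stay positive, so thresholding
    the ratio predicts the zero set.
    """
    ratio = np.asarray(x_curr, dtype=float) / np.asarray(x_prev, dtype=float)
    return ratio < tol   # True where the variable is predicted to be zero
```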
On a Primal-Dual Analytic Center Cutting Plane Method for Variational Inequalities
 Computational Optimization and Applications
1998
Cited by 5 (2 self)
Abstract:
We present an algorithm for variational inequalities VI(F, Y) that uses a primal-dual version of the Analytic Center Cutting Plane Method. The point-to-set mapping F is assumed to be monotone, or pseudo-monotone. Each computation of a new analytic center requires at most four Newton iterations in theory, and in practice one or sometimes two. Linear equalities that may be included in the definition of the set Y are taken explicitly into account. We report numerical experiments on several well-known variational inequality problems as well as on one where the functional results from the solution of large subproblems. The method is robust and competitive with algorithms which use the same information as this one. Keywords: variational inequalities; analytic center; cutting plane method; monotone mappings; interior-point methods; Newton's method; primal-dual. * Research supported in part by McGill University Fellowships. ** Research supported by NSERC grant OPG0004152 and by the FCAR...
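The basic subproblem of any analytic center cutting plane method is the computation of an analytic center by Newton's method. The sketch below handles a polyhedron {x : Ax <= b} with a damped Newton iteration; it omits the primal-dual machinery and the explicit treatment of equalities described above:

```python
import numpy as np

def analytic_center(A, b, x0, iters=20):
    """Damped-Newton computation of the analytic center of {x : Ax <= b},
    i.e. the minimizer of -sum_i log(b_i - a_i^T x).
    x0 must be strictly feasible.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        s = b - A @ x                       # slacks, must stay positive
        g = A.T @ (1.0 / s)                 # gradient of the log barrier
        H = A.T @ np.diag(1.0 / s**2) @ A   # Hessian of the log barrier
        dx = np.linalg.solve(H, -g)
        t = 1.0
        # Backtrack until the step keeps x strictly inside the polyhedron.
        while np.min(b - A @ (x + t * dx)) <= 0:
            t *= 0.5
        x = x + t * dx
    return x
```

For the interval -1 <= x <= 2, for instance, the barrier minimizer sits at the point where the two slack reciprocals balance, x = 0.5.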
On Mehrotra-type predictor-corrector algorithms
2005
Cited by 5 (1 self)
Abstract:
In this paper we discuss the polynomiality of a feasible version of Mehrotra's predictor-corrector algorithm, whose variants have been widely used in several IPM-based optimization packages. A numerical example is given that shows that the adaptive choice of centering parameter and correction terms in this algorithm may lead to small steps being taken in order to keep the iterates in a large neighborhood of the central path, which is important for proving polynomial complexity properties of this method. Motivated by this example, we introduce a safeguard in Mehrotra's algorithm that keeps the iterates in the prescribed neighborhood and allows us to obtain a positive lower bound on the step size. This safeguard strategy is also used when the affine scaling direction performs poorly. We prove that the safeguarded algorithm will terminate after at most O(n^2 log((x^0)^T s^0 / ε)) iterations. By modestly modifying the corrector direction, we reduce the iteration complexity to O(n log((x^0)^T s^0 / ε)). To ensure fast asymptotic convergence of the algorithm, we change Mehrotra's updating scheme of the centering parameter slightly while keeping the safeguard. The new algorithms have the same order of iteration complexity as the safeguarded algorithm, but enjoy superlinear convergence as well. Numerical results using the McIPM and LIPSOL software packages are reported.
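For context, the classical (unsafeguarded) Mehrotra predictor-corrector scheme that the paper starts from can be sketched for the LP min c^T x s.t. Ax = b, x >= 0. The dense normal-equations solve, the starting point, and the stopping rule below are illustrative choices, not those of McIPM or LIPSOL:

```python
import numpy as np

def mehrotra_lp(A, b, c, iters=30, tol=1e-8):
    """Bare-bones Mehrotra predictor-corrector for min c^T x, Ax = b, x >= 0.
    Dense, infeasible-start, and without any safeguard on the step size."""
    m, n = A.shape
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)

    def step(v, dv, frac=1.0):
        # Largest alpha with v + alpha*dv >= 0, scaled by frac, capped at 1.
        neg = dv < 0
        a = np.min(-v[neg] / dv[neg]) if neg.any() else np.inf
        return min(1.0, frac * a)

    for _ in range(iters):
        rb, rc = A @ x - b, A.T @ y + s - c
        mu = x @ s / n
        if mu < tol and np.linalg.norm(rb) < tol and np.linalg.norm(rc) < tol:
            break

        def newton(rxs):
            # Eliminate dx, ds and solve the normal equations A D A^T dy = rhs.
            d = x / s
            N = A @ (d[:, None] * A.T)
            dy = np.linalg.solve(N, -rb + A @ ((rxs - x * rc) / s))
            ds = -rc - A.T @ dy
            dx = -(rxs + x * ds) / s
            return dx, dy, ds

        # Predictor: pure affine-scaling direction.
        dxa, _, dsa = newton(x * s)
        mu_aff = ((x + step(x, dxa) * dxa) @ (s + step(s, dsa) * dsa)) / n
        sigma = (mu_aff / mu) ** 3        # adaptive centering parameter

        # Corrector: recentre and compensate for the second-order term.
        dx, dy, ds = newton(x * s + dxa * dsa - sigma * mu)
        ap, ad = step(x, dx, 0.99995), step(s, ds, 0.99995)
        x, y, s = x + ap * dx, y + ad * dy, s + ad * ds
    return x, y, s
```

The abstract's numerical example concerns exactly the adaptive quantities above: sigma and the corrector term dxa*dsa can force tiny steps, which motivates the paper's safeguard.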
Improved Smoothing-Type Methods for the Solution of Linear Programs
 Numerische Mathematik
Cited by 5 (2 self)
Abstract:
We consider a smoothing-type method for the solution of linear programs. Its main idea is to reformulate the primal-dual optimality conditions as a nonlinear and nonsmooth system of equations, and to apply a Newton-type method to a smooth approximation of this nonsmooth system. The method presented here is a predictor-corrector method, and is closely related to some methods recently proposed by Burke and Xu on the one hand, and by the authors on the other hand. However, here we state stronger global and/or local convergence properties. Moreover, we present quite promising numerical results for the whole netlib test problem collection. Key words: linear programs, smoothing, predictor-corrector method, Newton's method, global convergence, quadratic convergence. This research was supported by the DFG (Deutsche Forschungsgemeinschaft). 1 Introduction. In this paper we describe an algorithm for the solution of linear programs given either in primal form min c^T x s.t. Ax = b, x >= 0 ...
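A standard way to realize the smooth approximation mentioned above is through a smoothed complementarity function. The Chen-Harker-Kanzow-Smale (CHKS) function below is one common choice, shown for illustration; the paper's exact smoothing function may differ:

```python
import numpy as np

def phi(a, b, tau):
    """CHKS smoothing of the complementarity condition.

    For tau > 0 the function is smooth everywhere; as tau -> 0 it
    approaches 2*min(a, b), and phi(a, b, 0) = 0 holds exactly when
    a >= 0, b >= 0, and a*b = 0. Replacing each complementarity pair
    (x_i, s_i) in the optimality conditions by phi(x_i, s_i, tau) = 0
    yields a smooth system to which a Newton-type method can be applied
    while tau is driven to zero.
    """
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * tau ** 2)
```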