Results 1 - 7 of 7
Continuous Relaxations for Constrained Maximum-Entropy Sampling
 In Integer Programming and Combinatorial Optimization
, 1996
Abstract

Cited by 12 (8 self)
We consider a new nonlinear relaxation for the Constrained Maximum-Entropy Sampling Problem: the problem of choosing the s × s principal submatrix with maximal determinant from a given n × n positive definite matrix, subject to linear constraints. We implement a branch-and-bound algorithm for the problem, using the new relaxation. The performance on test problems is far superior to a previous implementation using an eigenvalue-based relaxation. 1 Introduction Let n be a positive integer. For N := {1, ..., n}, let Y_N := {Y_j | j ∈ N} be a set of n random variables, with joint density function g_N(·). Let s be an integer satisfying 0 < s ≤ n. For S ⊆ N, |S| = s, let Y_S := {Y_j | j ∈ S}, and denote the marginal joint density function of Y_S by g_S(·). The entropy of S is defined by h(S) := −E[ln g_S(Y_S)]. Let m be a nonnegative integer, and let M := {1, 2, ..., m}. The Constrained Maximum-Entropy Sampling Problem is then the problem of choosing a s...
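For jointly Gaussian variables the entropy above reduces to a log-determinant, h(S) = (s/2) ln(2πe) + (1/2) ln det C[S,S] for covariance matrix C, so the unconstrained problem amounts to picking the s × s principal submatrix with maximal determinant. A minimal brute-force sketch of that combinatorial core (illustrative only; the paper's point is to avoid such enumeration via a relaxation inside branch-and-bound, and the example matrix is hypothetical):

```python
from itertools import combinations

import numpy as np


def max_entropy_subset(C, s):
    """Return the size-s index set S maximizing ln det C[S, S]."""
    n = C.shape[0]
    best_S, best_val = None, -np.inf
    for S in combinations(range(n), s):
        # slogdet is numerically safer than det for log-determinants
        sign, logdet = np.linalg.slogdet(C[np.ix_(S, S)])
        if sign > 0 and logdet > best_val:
            best_S, best_val = S, logdet
    return best_S, best_val


# Small positive definite covariance matrix (hypothetical data).
C = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.0, 0.3],
              [0.1, 0.3, 1.5]])
S, val = max_entropy_subset(C, 2)
```

Enumeration over all C(n, s) subsets is of course exponential, which is why the paper's relaxation-based bounds matter.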
Using Continuous Nonlinear Relaxations to Solve Constrained Maximum-Entropy Sampling Problems
 Mathematical Programming, Series A
, 1998
Abstract

Cited by 11 (8 self)
We consider a new nonlinear relaxation for the Constrained Maximum-Entropy Sampling Problem: the problem of choosing the s × s principal submatrix with maximal determinant from a given n × n positive definite matrix, subject to linear constraints. We implement a branch-and-bound algorithm for the problem, using the new relaxation. The performance on test problems is far superior to a previous implementation using an eigenvalue-based relaxation. A parallel implementation of the algorithm exhibits approximately linear speedup for up to 8 processors, and has successfully solved problem instances that were heretofore intractable.
Logarithmic Barrier Decomposition Methods for Semi-Infinite Programming
, 1996
Abstract

Cited by 9 (1 self)
A computational study of some logarithmic barrier decomposition algorithms for semi-infinite programming is presented in this paper. The conceptual algorithm is a straightforward adaptation of the logarithmic barrier cutting plane algorithm, presented recently by den Hertog et al., to solve semi-infinite programming problems. Decomposition (cutting plane) methods usually use cutting planes to improve the localization of the given problem. In this paper we propose an extension which uses linear cuts to solve large-scale, difficult real-world problems. The algorithm uses both static and (doubly) dynamic enumeration of the parameter space and allows multiple cuts to be added simultaneously for larger/difficult problems. The algorithm is implemented on both sequential and parallel computers. Implementation issues and parallelization strategies are discussed and encouraging computational results are presented. Keywords: column generation, convex programming, cutting plane met...
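The interplay of static and dynamic enumeration of the parameter space can be illustrated on a toy semi-infinite program. The sketch below is a generic cutting-plane loop, not the paper's algorithm: it solves min x subject to x ≥ sin(t) for all t ∈ [0, 2π], starting from one statically enumerated cut and then repeatedly adding the most violated parameter value (found on a grid) as a new cut.

```python
import math


def solve_semi_infinite(g, t_grid, tol=1e-8, max_iter=100):
    """Toy cutting-plane loop for: min x  s.t.  x >= g(t) for all t in t_grid."""
    cuts = [t_grid[0]]                   # static initial enumeration: one cut
    x = g(cuts[0])
    for _ in range(max_iter):
        x = max(g(t) for t in cuts)      # solve the finite relaxation
        t_viol = max(t_grid, key=g)      # most violated parameter (dynamic search)
        if g(t_viol) - x <= tol:         # no significant violation: done
            return x, cuts
        cuts.append(t_viol)              # add the violated constraint as a cut
    return x, cuts


# Parameter space [0, 2*pi] discretized on a grid of 1001 points.
t_grid = [2 * math.pi * k / 1000 for k in range(1001)]
x_star, cuts = solve_semi_infinite(math.sin, t_grid)
```

Here the finite relaxation is trivial (a max over the current cuts); in a real semi-infinite program it would be an LP or convex subproblem, and, as the abstract notes, several violated cuts could be added per round instead of one.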
A Long-Step Barrier Method for Convex Quadratic Programming
 Algorithmica
, 1990
Abstract

Cited by 8 (2 self)
In this paper we propose a long-step logarithmic barrier function method for convex quadratic programming with linear equality constraints. After a reduction of the barrier parameter, a series of long steps along projected Newton directions is taken until the iterate is in the vicinity of the center associated with the current value of the barrier parameter. We prove that the total number of iterations is O(√n L) or O(nL), depending on how the barrier parameter is updated. Key Words: convex quadratic programming, interior point method, logarithmic barrier function, polynomial algorithm. 1 Introduction Karmarkar's [14] invention of the projective method for linear programming has given rise to active research in interior point algorithms. At this moment, the variants can roughly be categorized into four classes: projective, affine scaling, path-following and potential reduction methods. Researchers have also extended interior point methods to other problems, including convex qu...
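A minimal sketch of this method's shape, under illustrative parameter choices (the μ shrink factor, damping rule, and stopping tolerances below are assumptions for the example, not the values analyzed in the paper): minimize (1/2)x'Qx + c'x subject to Ax = b and x > 0 via the barrier (1/2)x'Qx + c'x − μ Σ ln x_i, recentering with projected Newton steps after each reduction of μ.

```python
import numpy as np


def barrier_qp(Q, c, A, b, x0, mu=1.0, shrink=0.2, tol=1e-8):
    """Toy long-step barrier method: min 1/2 x'Qx + c'x  s.t. Ax = b, x > 0.

    x0 must be strictly positive and satisfy A x0 = b.
    """
    x = x0.astype(float)
    m, n = A.shape
    while mu > tol:
        for _ in range(20):                          # recenter for current mu
            grad = Q @ x + c - mu / x                # gradient of barrier function
            H = Q + mu * np.diag(1.0 / x**2)         # Hessian of barrier function
            # KKT system projects the Newton step onto the null space of A
            K = np.block([[H, A.T], [A, np.zeros((m, m))]])
            rhs = np.concatenate([-grad, np.zeros(m)])
            dx = np.linalg.solve(K, rhs)[:n]
            # damped step keeping x strictly positive
            alpha = min(1.0, 0.9 / max(1e-12, np.max(-dx / x)))
            x = x + alpha * dx
            if np.linalg.norm(dx) < tol:             # centered for this mu
                break
        mu *= shrink                                 # long step: large mu reduction
    return x


# Example: min 1/2 (x1^2 + x2^2)  s.t.  x1 + x2 = 1, x > 0  (optimum at (0.5, 0.5)).
Q = np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = barrier_qp(Q, c, A, b, np.array([0.3, 0.7]))
```

The O(√n L) versus O(nL) bounds in the abstract correspond, roughly, to how aggressively μ is reduced between recentering phases.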
A Unifying Investigation of Interior-Point Methods for Convex Programming
 Faculty of Mathematics and Informatics, TU Delft, NL-2628 BL
, 1992
Abstract

Cited by 5 (4 self)
In the recent past a number of papers were written that present low-complexity interior-point methods for different classes of convex programs. The goal of this article is to show that the logarithmic barrier function associated with these programs is self-concordant, and that the analyses of interior-point methods for these programs can thus be reduced to the analysis of interior-point methods with self-concordant barrier functions. Key words: interior-point method, barrier function, dual geometric programming, (extended) entropy programming, primal and dual l_p programming, relative Lipschitz condition, scaled Lipschitz condition, self-concordance. 1 Introduction The efficiency of a barrier method for solving convex programs strongly depends on the properties of the barrier function used. A key property that is sufficient to prove fast convergence for barrier methods is the property of self-concordance introduced in [17]. This condition not only allows a proof of polynomial convergen...
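For reference, the self-concordance condition that this abstract (and the two results below) rely on can be stated as follows; this is a standard form with parameter 2, and the exact parametrization used in the cited papers may differ:

```latex
% A convex function f on an open convex domain is self-concordant if,
% for all x in the domain and all directions h,
\left| \nabla^3 f(x)[h,h,h] \right| \;\le\; 2 \left( \nabla^2 f(x)[h,h] \right)^{3/2}.
% The canonical example is the logarithmic barrier f(x) = -\ln x on x > 0:
%   f''(x) = 1/x^2, \qquad f'''(x) = -2/x^3,
% so |f'''(x)| = 2/x^3 = 2\,(f''(x))^{3/2}, with equality for every x.
```

Self-concordance bounds the rate at which the Hessian varies in its own local norm, which is exactly what makes Newton-based interior-point analyses go through uniformly across the problem classes listed above.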
Improving Complexity of Structured Convex Optimization Problems Using Self-Concordant Barriers
, 2001
Abstract

Cited by 2 (0 self)
The purpose of this paper is to provide improved complexity results for several classes of structured convex optimization problems using the theory of self-concordant functions developed in [11]. We describe the classical short-step interior-point method and optimize its parameters in order to provide the best possible iteration bound. We also discuss the necessity of introducing two parameters in the definition of self-concordancy, and which one is best to fix. A lemma from [3] is improved, which allows us to review several classes of structured convex optimization problems and improve the corresponding complexity results.
Self-Concordant Functions in Structured Convex Optimization
, 2000
Abstract
This paper provides a self-contained introduction to the theory of self-concordant functions [8] and applies it to several classes of structured convex optimization problems. We describe the classical short-step interior-point method and optimize its parameters to provide its best possible iteration bound. We also discuss the necessity of introducing two parameters in the definition of self-concordancy, how they react to addition and scaling, and which one is best to fix. A lemma from [2] is improved and allows us to review several classes of structured convex optimization problems and evaluate their algorithmic complexity, using the self-concordancy of the associated logarithmic barriers.