Results 1–10 of 11
Interior-point Methods, 2000
Cited by 463 (16 self)
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
Interior methods for nonlinear optimization
SIAM Review, 2002
Cited by 76 (4 self)
Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
The interior-point revolution in optimization: history, recent developments, and lasting consequences
Bull. Amer. Math. Soc. (N.S.), 2005
Cited by 17 (1 self)
Abstract. Interior methods are a pervasive feature of the optimization landscape today, but it was not always so. Although interior-point techniques, primarily in the form of barrier methods, were widely used during the 1960s for problems with nonlinear constraints, their use for the fundamental problem of linear programming was unthinkable because of the total dominance of the simplex method. During the 1970s, barrier methods were superseded, nearly to the point of oblivion, by newly emerging and seemingly more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost universally regarded as a closed chapter in the history of optimization. This picture changed dramatically in 1984, when Narendra Karmarkar announced a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have continued to transform both the theory and practice of constrained optimization. We present a condensed ...
Constrained Minimum-BER Multiuser Detection
IEEE Trans. Signal Processing, 2000
Cited by 15 (0 self)
Abstract—A new linear multiuser detector that directly minimizes the bit-error rate (BER) subject to a set of reasonable constraints is proposed. It is shown that the constrained BER cost function has a unique global minimum. This allows us to develop an efficient barrier Newton method for finding the coefficients of the proposed detector using information about timing, amplitudes, channels, and the signature signals of all users. Although the new detector cannot be shown to be optimal among linear multiuser detectors without the constraints imposed, extensive simulations demonstrate that it achieves the lowest BER. Furthermore, in some cases, the BER of the proposed detector can be significantly lower than that of the decorrelating and MMSE detectors. Index Terms—Bit-error rate minimization, interior-point numerical optimization, multiuser detection.
Determining Subspace Information from the Hessian of a Barrier Function
Manuscript, AT&T Bell Laboratories, 1992
Cited by 5 (3 self)
Karmarkar's 1984 paper has produced a resurgence of interest in logarithmic barrier methods for linear and nonlinear programming. Although barrier methods for nonlinear inequality-constrained optimization were widely applied throughout the 1960s, they were largely abandoned in favor of other approaches during the 1970s. This shift was partly caused by the inefficiency of "black-box" methods in solving the unconstrained subproblems. Some of these difficulties were explained by results of Murray and Lootsma showing that barrier Hessian matrices become increasingly ill-conditioned along the trajectory of approach to the solution. This paper discusses several aspects of the Hessian matrix of the logarithmic barrier function. We first show that, except in two special cases, the barrier Hessian is ill-conditioned in an entire region near the solution. At points in a more restricted region (including the barrier trajectory itself), this ill-conditioning displays a special structure connected ...
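The ill-conditioning of the barrier Hessian described above can be seen directly in a toy computation (a hedged sketch, not taken from the paper; the problem, the scaling of the iterates, and the function name are illustrative assumptions):

```python
import numpy as np

# Toy illustration (not from the paper): the log-barrier function for
#   minimize c^T x  subject to  x >= 0
# is B(x) = c^T x - mu * sum_i log(x_i), whose barrier term has Hessian
# mu * diag(1 / x_i^2).  Near the solution, components at active bounds
# scale like x_i ~ mu while inactive components stay O(1), so the
# condition number of the Hessian grows like 1 / mu^2.

def barrier_hessian_cond(mu, active_scale=1.0, inactive=1.0):
    # One component approaching its bound (x ~ mu), one staying interior.
    x = np.array([active_scale * mu, inactive])
    H = mu * np.diag(1.0 / x**2)   # Hessian of the barrier term
    return np.linalg.cond(H)

for mu in [1e-1, 1e-3, 1e-6]:
    print(f"mu = {mu:g}: cond(H) = {barrier_hessian_cond(mu):.1e}")
```

As mu shrinks toward zero, the printed condition numbers grow like 1/mu^2, which is the structured ill-conditioning the abstract refers to.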
A primal-dual augmented Lagrangian
Computational Optimization and Applications, 2010
Cited by 5 (1 self)
Abstract. Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1LCL) method. Key words. Nonlinear programming, nonlinear inequality constraints, augmented Lagrangian methods, bound-constrained Lagrangian methods, linearly constrained Lagrangian methods, primal-dual methods. AMS subject classifications. 49J20, 49J15, 49M37, 49D37, 65F05, 65K05, 90C30
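The idea of minimizing over primal and dual variables simultaneously can be sketched on a toy problem (a hedged illustration, not the authors' code; the merit function below is one common Hestenes-Powell-type primal-dual form, and the solver is plain gradient descent chosen for simplicity):

```python
import numpy as np

# Toy sketch: jointly minimize a primal-dual augmented Lagrangian in (x, y)
# for   minimize x1^2 + x2^2   subject to   c(x) = x1 + x2 - 1 = 0,
# whose solution is x = (0.5, 0.5) with multiplier y = 1.  The merit
# function assumed here (one common form) is
#   M(x, y) = f(x) - c(x) y_e + c(x)^2/(2 mu) + (c(x) + mu (y - y_e))^2/(2 mu),
# minimized with respect to x AND y simultaneously; y_e is the multiplier
# estimate held fixed during each subproblem.

def solve_pd_auglag(mu=0.1, outer=10, inner=2000, step=0.02):
    x = np.zeros(2)
    y = 0.0                                 # dual variable, optimized jointly
    y_e = 0.0                               # multiplier estimate
    grad_c = np.ones(2)                     # gradient of the linear constraint
    for _ in range(outer):
        for _ in range(inner):              # plain gradient descent on M
            c = x[0] + x[1] - 1.0
            gx = (2.0 * x - y_e * grad_c + (c / mu) * grad_c
                  + ((c + mu * (y - y_e)) / mu) * grad_c)
            gy = c + mu * (y - y_e)
            x -= step * gx
            y -= step * gy
        y_e = y                             # conventional multiplier update
    return x, y

x_opt, y_opt = solve_pd_auglag()
print(x_opt, y_opt)   # approaches x = (0.5, 0.5), y = 1
```

Note that the dual iterate y is available (and monitored) throughout each subproblem solve, which is the benefit the abstract highlights over purely primal augmented Lagrangian methods.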
Geometric Reasoning About Translational Motions, 2000
Cited by 2 (1 self)
This thesis addresses the problems of planning highly coordinated collision-free translational motions of multiple objects (general polygons or polyhedra) and finding sequences of translations that allow for the removal of one or more objects. These problems are known to be PSPACE-hard in general. Our main focus is therefore on the design and analysis of adaptive algorithms that can cope with practically relevant cases such as tightly interlaced placements. Exact and complete algorithms are important, for example in the field of computer-aided mechanical assembly planning: a valid plan for separating a given placement of rigid, non-deformable objects (parts) can be reversed to assemble the individual components. Our algorithms are exact and complete and can prove in practical cases that no separating motions exist.
Solving Matrix Inequalities whose Unknowns are Matrices
SIAM Journal of Optimization, to appear
Cited by 2 (1 self)
Abstract. This paper provides algorithms for the numerical solution of convex matrix inequalities in which the variables naturally appear as matrices. This includes, for instance, many systems and control problems. To use these algorithms, no knowledge of linear matrix inequalities (LMIs) is required. However, as tools, they preserve many advantages of the linear matrix inequality framework. Our method has two components: 1) a numerical algorithm that solves a large class of matrix optimization problems; 2) a symbolic “Convexity Checker” that automatically provides a region which, if convex, guarantees that the solution from (1) is a global optimum on that region. The algorithms are partly numerical and partly symbolic; since they aim at exploiting the matrix structure of the unknowns, the symbolic part requires the development of new computer techniques for treating noncommutative algebra.
A MAJORIZE-MINIMIZE LINE SEARCH ALGORITHM FOR BARRIER FUNCTION OPTIMIZATION
Cited by 1 (1 self)
Many signal and image estimation problems, such as maximum entropy reconstruction and positron emission tomography, require the minimization of a criterion containing a barrier function, i.e., a function that is unbounded at the boundary of the feasible solution domain. This function has to be carefully handled in the optimization algorithm. When an iterative descent method is used for the minimization, a search along the line supported by the descent direction is usually performed at each iteration. However, standard line search strategies tend to be inefficient in this context. In this paper, we propose an original line search algorithm based on the majorize-minimize principle. A tangent majorant function is built to approximate a scalar criterion containing a barrier function. This leads to a simple line search ensuring the convergence of several classical descent optimization strategies, including the most classical variants of nonlinear conjugate gradient. The practical efficiency of the proposed scheme is illustrated by means of two examples of signal and image reconstruction.
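The majorize-minimize principle for a 1-D criterion with a barrier can be sketched as follows (a toy illustration; the interval-restricted quadratic majorant used here is an illustrative assumption, not the paper's exact tangent majorant):

```python
# Toy majorize-minimize line search for the 1-D criterion
#   phi(alpha) = (alpha - 2)^2 - log(abar - alpha),   abar = 1,
# i.e. a smooth term plus a log barrier that blows up at the step limit abar.
# On the interval [0, alpha_k + theta * (abar - alpha_k)] the curvature
# phi'' = 2 + 1/(abar - alpha)^2 is bounded, so a quadratic with that
# curvature majorizes phi there; minimizing the majorant over the interval
# can never increase phi (the MM descent property).
import math

def mm_line_search(abar=1.0, theta=0.5, iters=60):
    dphi = lambda a: 2.0 * (a - 2.0) + 1.0 / (abar - a)
    a = 0.0
    for _ in range(iters):
        hi = a + theta * (abar - a)            # right end of the safe interval
        curv = 2.0 + 1.0 / (abar - hi) ** 2    # >= phi'' on [0, hi]
        a = min(max(a - dphi(a) / curv, 0.0), hi)  # minimize majorant on [0, hi]
    return a

alpha = mm_line_search()
print(alpha)   # stationary point of phi: 2(alpha - 2) + 1/(1 - alpha) = 0
```

Because each majorant dominates phi on the interval and touches it at the current iterate, the step is always feasible (strictly inside the barrier) and monotonically decreasing in phi, which is the property that lets such a line search support convergence proofs for descent methods.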
A PRIMAL-DUAL AUGMENTED LAGRANGIAN, 2008
Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1LCL) method. Key words. Nonlinear programming, nonlinear inequality constraints, augmented Lagrangian methods, bound-constrained Lagrangian methods, linearly constrained Lagrangian methods, primal-dual methods.