Results 1–10 of 17
Interior-point Methods
, 2000
Abstract

Cited by 566 (15 self)
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
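The barrier idea surveyed here can be illustrated with a minimal sketch (not taken from the paper; the function name and parameters are illustrative). It solves a linear program min cᵀx subject to Gx ≤ h by Newton centering on the log-barrier objective t·cᵀx − Σ log(hᵢ − gᵢᵀx), increasing t until the duality-gap bound m/t is small:

```python
import numpy as np

def lp_barrier(c, G, h, x0, t0=1.0, mu=10.0, tol=1e-8, newton_iters=50):
    """Log-barrier interior-point sketch for: minimize c@x subject to G@x <= h.
    x0 must be strictly feasible (G@x0 < h elementwise)."""
    x, t = np.array(x0, float), t0
    m = len(h)
    while m / t > tol:                       # m/t bounds the duality gap
        for _ in range(newton_iters):        # centering: Newton on t*c@x - sum(log(h - G@x))
            r = h - G @ x                    # slacks, strictly positive inside
            grad = t * c + G.T @ (1.0 / r)
            hess = G.T @ np.diag(1.0 / r**2) @ G
            dx = np.linalg.solve(hess, -grad)
            s = 1.0                          # backtrack to stay strictly feasible
            while np.any(h - G @ (x + s * dx) <= 0):
                s *= 0.5
            x = x + s * dx
            if np.linalg.norm(grad) < 1e-9:
                break
        t *= mu                              # tighten the barrier parameter
    return x
```

For example, minimizing x subject to 1 ≤ x ≤ 3 (so c = [1], G = [[-1], [1]], h = [-1, 3]) drives the iterates toward the constrained optimum x = 1 while keeping every iterate strictly inside the feasible interval.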
Interior methods for nonlinear optimization
 SIAM Review
, 2002
Abstract

Cited by 105 (5 self)
Abstract. Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
The interior-point revolution in optimization: history, recent developments, and lasting consequences
 Bull. Amer. Math. Soc. (N.S.)
, 2005
Abstract

Cited by 26 (1 self)
Abstract. Interior methods are a pervasive feature of the optimization landscape today, but it was not always so. Although interior-point techniques, primarily in the form of barrier methods, were widely used during the 1960s for problems with nonlinear constraints, their use for the fundamental problem of linear programming was unthinkable because of the total dominance of the simplex method. During the 1970s, barrier methods were superseded, nearly to the point of oblivion, by newly emerging and seemingly more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost universally regarded as a closed chapter in the history of optimization. This picture changed dramatically in 1984, when Narendra Karmarkar announced a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have continued to transform both the theory and practice of constrained optimization. We present a condensed, ...
Adaptive Use of Iterative Methods in Predictor-Corrector Interior Point Methods for Linear Programming
 NUMERICAL ALGORITHMS
, 1999
Constrained Minimum-BER Multiuser Detection
 IEEE Trans. Signal Processing
, 2000
Abstract

Cited by 16 (0 self)
Abstract—A new linear multiuser detector that directly minimizes the bit-error rate (BER) subject to a set of reasonable constraints is proposed. It is shown that the constrained BER cost function has a unique global minimum. This allows us to develop an efficient barrier Newton method for finding the coefficients of the proposed detector using information about timing, amplitudes, channels, and the signature signals of all users. Although the new detector cannot be shown to be optimal among linear multiuser detectors without the constraints imposed, extensive simulations demonstrate that it achieves the lowest BER. Furthermore, in some cases, the BER of the proposed detector can be significantly lower than that of the decorrelating and MMSE detectors. Index Terms—Bit-error rate minimization, interior-point numerical optimization, multiuser detection.
A PRIMAL-DUAL TRUST REGION ALGORITHM FOR NONLINEAR OPTIMIZATION
, 2003
Abstract

Cited by 14 (3 self)
This paper concerns general (nonconvex) nonlinear optimization when first and second derivatives of the objective and constraint functions are available. The proposed method is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved using a second-derivative Newton-type method that employs a combined trust region and line search strategy to ensure global convergence. It is shown that the trust-region step can be computed by factorizing a sequence of systems with diagonally-modified primal-dual structure, where the inertia of these systems can be determined without recourse to a special factorization method. This has the benefit that off-the-shelf linear system software can be used at all times, allowing the straightforward extension to large-scale problems. Numerical results are given for problems in the COPS test collection.
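The inertia-controlling idea in this abstract can be shown with a toy sketch (not the paper's method: eigenvalue counts stand in for the inertia a production code would read off an LDLᵀ factorization, and all names are illustrative). A KKT-type matrix [[H, Jᵀ], [J, 0]] with n primal and m dual variables yields a descent direction when its inertia is (n, m, 0); otherwise the Hessian block is modified by a growing diagonal shift:

```python
import numpy as np

def inertia(M):
    """(n_pos, n_neg, n_zero) eigenvalue counts of a symmetric matrix."""
    w = np.linalg.eigvalsh(M)
    return (int(np.sum(w > 1e-10)), int(np.sum(w < -1e-10)),
            int(np.sum(np.abs(w) <= 1e-10)))

def modified_kkt(H, J, delta=0.0, step=1.0):
    """Increase delta until [[H + delta*I, J.T], [J, 0]] has inertia (n, m, 0),
    the condition under which the primal-dual step is a descent direction."""
    n, m = H.shape[0], J.shape[0]
    while True:
        K = np.block([[H + delta * np.eye(n), J.T],
                      [J, np.zeros((m, m))]])
        if inertia(K) == (n, m, 0):
            return K, delta
        delta = step if delta == 0.0 else 2.0 * delta  # double the shift
```

With H = diag(1, −1) and J = [[1, 0]] the reduced Hessian on the constraint null space is negative, so the unmodified system has the wrong inertia and a shift is required before the factorization is accepted.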
A PRIMAL-DUAL AUGMENTED LAGRANGIAN
, 2008
Abstract

Cited by 10 (1 self)
Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1LCL) method.
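For background, the classical Hestenes-Powell scheme this paper generalizes can be sketched as follows (an illustrative toy, not the paper's primal-dual method: the subproblem is minimized over x only, with gradient descent standing in for a Newton-type inner solver, and the multipliers updated only between subproblems):

```python
import numpy as np

def hestenes_powell(grad_f, c, jac_c, x0, lam0, rho=10.0, outer=20, tol=1e-8):
    """Classical augmented Lagrangian method for: minimize f(x) s.t. c(x) = 0.
    Each subproblem minimizes f(x) - lam @ c(x) + (rho/2) * ||c(x)||^2 in x."""
    x, lam = np.array(x0, float), np.array(lam0, float)
    for _ in range(outer):
        for _ in range(2000):                  # crude inner minimization in x
            J = jac_c(x)
            g = grad_f(x) - J.T @ lam + rho * (J.T @ c(x))
            if np.linalg.norm(g) < tol:
                break
            x -= 0.01 * g
        lam = lam - rho * c(x)                 # first-order multiplier update
        if np.linalg.norm(c(x)) < tol:
            break
    return x, lam
```

On the toy problem min x² + y² subject to x + y = 1 this converges to x = y = 0.5 with multiplier λ = 1. The contrast with the paper is the placement of the dual update: here λ is frozen during each subproblem, whereas the primal-dual generalization minimizes over (x, λ) jointly, so the multiplier quality is monitored inside the subproblem itself.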
Determining subspace information from the Hessian of a barrier function
 Manuscript, AT&T Bell Laboratories
, 1992
Solving Matrix Inequalities whose Unknowns are Matrices
 SIAM Journal on Optimization, to appear
Abstract

Cited by 3 (1 self)
Abstract. This paper provides algorithms for numerical solution of convex matrix inequalities in which the variables naturally appear as matrices. This includes, for instance, many systems and control problems. To use these algorithms, no knowledge of linear matrix inequalities is required. However, as tools, they preserve many advantages of the linear matrix inequality framework. Our method has two components: (1) a numerical algorithm that solves a large class of matrix optimization problems and (2) a symbolic “convexity checker” that automatically provides a region which, if convex, guarantees that the solution from (1) is a global optimum on that region. The algorithms are partly numerical and partly symbolic and since they aim at exploiting the matrix structure of the unknowns, the symbolic part requires the development of new computer techniques for treating noncommutative algebra.
Geometric Reasoning About Translational Motions
, 2000
Abstract

Cited by 2 (1 self)
This thesis addresses the problems of planning highly coordinated collision-free translational motions of multiple objects (general polygons or polyhedra) and finding sequences of translations that allow for the removal of one or more objects. These problems are known to be PSPACE-hard in general. Our main focus is therefore on the design and analysis of adaptive algorithms that can cope with practically relevant cases such as tightly interlaced placements. Exact and complete algorithms are important, for example in the field of computer-aided mechanical assembly planning: a valid plan for separating a given placement of rigid, non-deformable objects (parts) can be reversed to assemble the individual components. Our algorithms are exact and complete and can prove in practical cases that no separating motions exist.