Results 1–10 of 69
Atomic decomposition by basis pursuit
 SIAM Journal on Scientific Computing
, 1998
Abstract

Cited by 1637 (43 self)
Abstract. The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries — stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching Pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an “optimal” superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
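The ℓ1 decomposition principle above reduces to the linear program the abstract mentions. A minimal sketch (my own illustration with synthetic data, using scipy's HiGHS solver rather than the authors' primal-dual barrier implementation): split the coefficients c = u − v with u, v ≥ 0, so minimizing the ℓ1 norm subject to Φc = s becomes an LP in the nonnegative variables (u, v).

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, s):
    """Minimize ||c||_1 subject to Phi @ c = s, via the split c = u - v, u, v >= 0."""
    n, m = Phi.shape
    cost = np.ones(2 * m)            # objective: sum(u) + sum(v) = ||c||_1
    A_eq = np.hstack([Phi, -Phi])    # equality constraint: Phi @ (u - v) = s
    res = linprog(cost, A_eq=A_eq, b_eq=s, bounds=(0, None), method="highs")
    u, v = res.x[:m], res.x[m:]
    return u - v

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 60))  # overcomplete dictionary: 60 atoms, length-20 signals
c_true = np.zeros(60)
c_true[[3, 17, 41]] = [1.5, -2.0, 0.7]  # a sparse decomposition
s = Phi @ c_true
c = basis_pursuit(Phi, s)
print(np.allclose(Phi @ c, s, atol=1e-6))  # the decomposition reproduces the signal
```

With a random Gaussian dictionary and a 3-sparse coefficient vector, the ℓ1 solution typically recovers the sparse decomposition exactly, which is the "better sparsity" the abstract claims over MOF and MP.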
Interior-point Methods
, 2000
Abstract

Cited by 463 (16 self)
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
A trust region method based on interior point techniques for nonlinear programming
 Mathematical Programming
, 1996
Abstract

Cited by 103 (18 self)
An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second-order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented. Key words: constrained optimization, interior-point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, SQP iteration, barrier method, trust-region method.
Continuation and Path Following
, 1992
Abstract

Cited by 70 (6 self)
CONTENTS: 1. Introduction; 2. The Basics of Predictor-Corrector Path Following; 3. Aspects of Implementations; 4. Applications; 5. Piecewise-Linear Methods; 6. Complexity; 7. Available Software; References. — Introduction. Continuation, embedding or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated works as those of Poincaré (1881–1886), Klein (1882–1883) and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy invariance of degree. The use of deformations to solve nonlinear systems of equations may be traced back at least to Lahaye (1934). The classical embedding methods were the
On the Riemannian geometry defined by self-concordant barriers and interior-point methods
 Found. Comput. Math
Abstract

Cited by 27 (0 self)
We consider the Riemannian geometry defined on a convex set by the Hessian of a self-concordant barrier function, and its associated geodesic curves. These provide guidance for the construction of efficient interior-point methods for optimizing a linear function over the intersection of the set with an affine manifold. We show that algorithms that follow the primal-dual central path are in some sense close to optimal. The same is true for methods that follow the shifted primal-dual central path among certain infeasible-interior-point methods. We also compute the geodesics in several simple sets. Copyright © Springer-Verlag. Foundations of Computational Mathematics 2 (2002), 333–361.
Why a Pure Primal Newton Barrier Step May Be Infeasible
 SIAM Journal on Optimization
, 1993
Abstract

Cited by 21 (3 self)
Modern barrier methods for constrained optimization are sometimes portrayed conceptually as a sequence of inexact minimizations, with only a very few Newton iterations (perhaps just one) for each value of the barrier parameter. Unfortunately, this rosy image does not accurately reflect reality when the barrier parameter is reduced at a reasonable rate. We present local analysis showing why a pure Newton step in a long-step barrier method for nonlinearly constrained optimization may be seriously infeasible, even when taken from an apparently favorable point. The features described are illustrated numerically and connected to known theoretical results for convex problems satisfying self-concordancy assumptions. We also indicate the contrasting nature of an approximate step to the desired minimizer of the barrier function.
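The infeasibility phenomenon this abstract describes can be reproduced in one dimension (a toy example of mine, not from the paper): for minimizing x subject to x ≥ 0, the log-barrier function is B(x; μ) = x − μ log x, with exact minimizer x = μ. A pure Newton step taken from the minimizer for the old barrier parameter, aimed at the new parameter, overshoots badly when μ is cut by a factor of 10.

```python
def newton_step_barrier(x, mu):
    """One pure Newton step on B(x) = x - mu*log(x), the barrier for: min x s.t. x >= 0."""
    grad = 1.0 - mu / x       # B'(x)
    hess = mu / (x * x)       # B''(x)
    return x - grad / hess

mu_old = 1.0
x = mu_old                    # exact minimizer of the barrier for mu_old (B'(mu_old) = 0)
mu_new = mu_old / 10.0        # long-step reduction of the barrier parameter
x_next = newton_step_barrier(x, mu_new)
print(x_next)                 # about -8: far outside the feasible region x >= 0
```

Even from an apparently favorable point (the previous barrier minimizer), the Newton model of the barrier is so poor after a long-step parameter reduction that the full step is seriously infeasible, exactly the behavior the local analysis explains.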
Two Numerical Methods for Optimizing Matrix Stability
 Linear Algebra Appl
, 2001
Abstract

Cited by 21 (8 self)
Consider the affine matrix family A(x) = A_0 + Σ_{k=1}^{m} x_k A_k, mapping a design vector x ∈ R^m into the space of n × n real matrices.
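As a small numpy sketch (my own instantiation with hypothetical data; the paper's two optimization methods are not reproduced here), one can evaluate A(x) and its spectral abscissa, the maximum real part of the eigenvalues, which is a standard stability measure for such families:

```python
import numpy as np

def affine_family(A0, As, x):
    """Evaluate A(x) = A_0 + sum_k x_k A_k for a design vector x."""
    A = A0.copy()
    for xk, Ak in zip(x, As):
        A = A + xk * Ak
    return A

def spectral_abscissa(A):
    """Max real part of the eigenvalues; negative means A is (Hurwitz) stable."""
    return float(np.max(np.linalg.eigvals(A).real))

# Hypothetical 2x2 family: a damped oscillator parameterized by stiffness and damping.
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])
As = [np.array([[0.0, 0.0], [-1.0, 0.0]]),   # x_1 scales stiffness
      np.array([[0.0, 0.0], [0.0, -1.0]])]   # x_2 scales damping
x = np.array([1.0, 1.0])                     # A(x) = [[0, 1], [-1, -1]]
alpha = spectral_abscissa(affine_family(A0, As, x))
print(alpha)                                 # ≈ -0.5, so this design is stable
```

Minimizing this abscissa over x is the nonsmooth, nonconvex problem the paper's two numerical methods address.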
The Interior-Point Revolution in Constrained Optimization
 of Appl. Optim
, 1998
Abstract

Cited by 20 (0 self)
Interior methods are a central, striking feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were widely used during the 1960s to solve nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. During the 1970s, barrier methods were superseded by newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost universally regarded as a closed chapter in the history of optimization. This picture changed dramatically in the mid-1980s. In 1984, Karmarkar announced a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, the new incarnations of interior methods ha...
Inexact-Restoration Algorithm for Constrained Optimization
 Journal of Optimization Theory and Applications
, 1999
Abstract

Cited by 19 (6 self)
We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved, and in the second phase the objective function value is reduced in an approximate feasible set. The point that results from the second phase is compared with the current point using a nonsmooth merit function that combines feasibility and optimality. This merit function includes a penalty parameter that changes between iterations. A suitable updating procedure for this penalty parameter is included, by means of which it can be increased or decreased along different iterations. The conditions for feasibility improvement in the first phase and for optimality improvement in the second phase are mild, and large-scale implementations of the resulting method are possible. We prove, under suitable conditions, that ...
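The two-phase iteration can be illustrated on a toy equality-constrained problem. This is a hedged sketch of mine, not the paper's algorithm: the Gauss-Newton restoration step, the tangential gradient step, the step length 0.1, and the fixed penalty parameter theta are all illustrative choices (the paper updates the penalty parameter adaptively and also handles inequalities without slacks).

```python
import numpy as np

# Toy problem: minimize f(x) = ||x||^2 subject to h(x) = x_1 + x_2 - 1 = 0.
def f(x):
    return float(x @ x)

def h(x):
    return float(x[0] + x[1] - 1.0)

grad_f = lambda x: 2.0 * x
grad_h = np.array([1.0, 1.0])          # constant gradient of the linear constraint

def merit(x, theta):
    """Nonsmooth merit combining optimality and feasibility, as in the two-phase scheme."""
    return theta * f(x) + (1.0 - theta) * abs(h(x))

x = np.array([2.0, -0.5])
theta = 0.5                            # penalty parameter held fixed in this sketch
for _ in range(50):
    # Phase 1 (restoration): Gauss-Newton step on h to improve feasibility.
    y = x - h(x) * grad_h / (grad_h @ grad_h)
    # Phase 2 (optimality): gradient step for f, projected onto the tangent space of h at y.
    g = grad_f(y)
    g_tan = g - ((g @ grad_h) / (grad_h @ grad_h)) * grad_h
    trial = y - 0.1 * g_tan
    # Accept the trial point only if the merit function decreases.
    x = trial if merit(trial, theta) < merit(x, theta) else y
print(np.round(x, 4))                  # converges to the solution (0.5, 0.5)
```

For this linear constraint the restoration phase is exact, so the iterates stay feasible while the second phase drives the objective down, the behavior the abstract describes for the approximate feasible set in general.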