Results 1–10 of 56
Basis Pursuit
, 1994
"... The TimeFrequency and TimeScale communities have recently developed an enormous number of overcomplete signal dictionaries  wavelets, wavelet packets, cosine packets, wilson bases, chirplets, warped bases, and hyperbolic cross bases being a few examples. Basis Pursuit is a technique for decompos ..."
Abstract

Cited by 119 (15 self)
The Time-Frequency and Time-Scale communities have recently developed an enormous number of overcomplete signal dictionaries: wavelets, wavelet packets, cosine packets, Wilson bases, chirplets, warped bases, and hyperbolic cross bases being a few examples. Basis Pursuit is a technique for decomposing a signal into an "optimal" superposition of dictionary elements. The optimization criterion is the l1 norm of the coefficients. The method has several advantages over Matching Pursuit and Best Ortho Basis, including super-resolution and stability. 1. Introduction: Over the last five years or so, there has been an explosion of awareness of alternatives to traditional signal representations. Instead of just representing objects as superpositions of sinusoids (the traditional Fourier representation), we now have available alternate dictionaries, i.e. signal-representation schemes, of which the Wavelets dictionary is only the most well-known. Wavelet dictionaries, Gabor dictionaries, Multiscale...
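Basis Pursuit's l1 criterion can be cast as an ordinary linear program via the standard split x = u - v with u, v >= 0, so that min ||x||_1 s.t. Ax = b becomes min 1'u + 1'v s.t. A(u - v) = b. A minimal sketch on a hypothetical random dictionary (all sizes and data here are illustrative, not from the paper):

```python
# Basis Pursuit sketch: recover a sparse coefficient vector x with A x = b
# by minimizing the l1 norm, recast as a linear program via x = u - v.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 10, 30                       # overcomplete: more atoms than samples
A = rng.standard_normal((m, n))     # illustrative random dictionary
x_true = np.zeros(n)
x_true[[3, 17]] = [2.0, -1.5]       # sparse ground truth
b = A @ x_true

# Variables z = [u; v], x = u - v; cost 1'u + 1'v equals ||x||_1.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]       # close to x_true for generic sparse data
```

With a generic Gaussian dictionary and a 2-sparse signal, the l1 minimizer coincides with the sparse ground truth, which is the "optimal superposition" behavior the abstract describes.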
A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
 in Protocol Testing and Its Complexity", Information Processing Letters Vol.40
, 1995
"... We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worstcase analysis shows that the number of iterations ..."
Abstract

Cited by 87 (21 self)
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
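The conjugate-gradient inner loop the abstract refers to can be sketched as the textbook method for a symmetric positive definite system, with an iteration cap standing in for "not run until completion" (the test matrix and tolerances are illustrative, not from the paper):

```python
# Conjugate gradient for A x = b with A symmetric positive definite.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Plain CG; stopping before n iterations gives the truncated
    variant whose use the abstract says preserves the iteration bound."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                   # residual
    p = r.copy()                    # search direction
    rr = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rr / (p @ Ap)       # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p   # A-conjugate update of the direction
        rr = rr_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)         # illustrative SPD test matrix
b = rng.standard_normal(8)
x = conjugate_gradient(A, b)
```

Passing a small `max_iter` trades per-iteration cost for a less exact search direction, which is the practical saving the abstract highlights.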
Interior Methods for Constrained Optimization
 Acta Numerica
, 1992
"... Interior methods for optimization were widely used in the 1960s, primarily in the form of barrier methods. However, they were not seriously applied to linear programming because of the dominance of the simplex method. Barrier methods fell from favour during the 1970s for a variety of reasons, includ ..."
Abstract

Cited by 83 (3 self)
Interior methods for optimization were widely used in the 1960s, primarily in the form of barrier methods. However, they were not seriously applied to linear programming because of the dominance of the simplex method. Barrier methods fell from favour during the 1970s for a variety of reasons, including their apparent inefficiency compared with the best available alternatives. In 1984, Karmarkar's announcement of a fast polynomial-time interior method for linear programming caused tremendous excitement in the field of optimization. A formal connection can be shown between his method and classical barrier methods, which have consequently undergone a renaissance in interest and popularity. Most papers published since 1984 have concentrated on issues of computational complexity in interior methods for linear programming. During the same period, implementations of interior methods have displayed great efficiency in solving many large linear programs of ever-increasing size. Interior methods...
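The classical logarithmic-barrier idea the survey revisits can be seen on a one-variable example: minimizing x subject to x >= 1 through the barrier function B(x) = x - mu*log(x - 1), whose exact minimizer 1 + mu approaches the constrained optimum as mu shrinks. A minimal sketch (the damping rule is an illustrative safeguard, not from the paper):

```python
# Log-barrier illustration for  min x  s.t.  x >= 1.
def barrier_min(mu):
    """Newton's method on B(x) = x - mu*log(x - 1); the exact minimizer
    is x = 1 + mu, so the iterates trace the central path toward x* = 1."""
    x = 2.0
    for _ in range(60):
        g = 1.0 - mu / (x - 1.0)        # B'(x)
        h = mu / (x - 1.0) ** 2         # B''(x) > 0
        step = g / h
        if x - step <= 1.0:
            x = (x + 1.0) / 2.0         # damp: stay strictly feasible
        else:
            x -= step
    return x

for mu in (1.0, 0.1, 0.01, 0.001):
    print(mu, barrier_min(mu))          # minimizers 1 + mu, approaching 1
```

Driving mu toward zero is exactly the barrier-method outer loop; the "formal connection" to Karmarkar's method mentioned above is that interior-point iterates follow this same central path.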
Implementation of Interior Point Methods for Large Scale Linear Programming
 in Interior Point Methods in Mathematical Programming
, 1996
"... In the past 10 years the interior point methods (IPM) for linear programming have gained extraordinary interest as an alternative to the sparse simplex based methods. This has initiated a fruitful competition between the two types of algorithms which has lead to very efficient implementations on bot ..."
Abstract

Cited by 70 (22 self)
In the past 10 years the interior point methods (IPM) for linear programming have gained extraordinary interest as an alternative to the sparse simplex based methods. This has initiated a fruitful competition between the two types of algorithms which has led to very efficient implementations on both sides. The significant difference between interior point and simplex based methods is reflected not only in the theoretical background but also in the practical implementation. In this paper we give an overview of the most important characteristics of advanced implementations of interior point methods. First, we present the infeasible primal-dual algorithm which is widely considered the most efficient general purpose IPM. Our discussion includes various algorithmic enhancements of the basic algorithm. The only shortcoming of the "traditional" infeasible primal-dual algorithm is its inability to detect a possible primal or dual infeasibility of the linear program. We discuss how this problem can be solved...
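The basic infeasible primal-dual iteration described above can be sketched in a few lines: form the primal/dual residuals, reduce the Newton system to normal equations, and damp the step to keep (x, s) strictly positive. This is a bare-bones illustration with a fixed centering parameter, not the enhanced implementations the paper surveys; the test problem is made up:

```python
# Minimal infeasible primal-dual path-following sketch for
#   min c'x  s.t.  A x = b, x >= 0.
import numpy as np

def ipm(A, b, c, iters=60, sigma=0.1):
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)   # infeasible start
    for _ in range(iters):
        rb = A @ x - b                  # primal residual
        rc = A.T @ y + s - c            # dual residual
        mu = x @ s / n                  # duality measure
        if max(np.linalg.norm(rb), np.linalg.norm(rc), mu) < 1e-10:
            break
        d = x / s
        r = sigma * mu / s - x
        # Normal equations:  (A D A') dy = -rb - A r - A D rc,  D = diag(x/s)
        M = A @ (d[:, None] * A.T)
        dy = np.linalg.solve(M, -rb - A @ r - A @ (d * rc))
        ds = -rc - A.T @ dy
        dx = r - d * ds
        # Damped steps keeping (x, s) strictly positive
        ap = min(1.0, 0.9 * min([-x[i] / dx[i] for i in range(n) if dx[i] < 0],
                                default=np.inf))
        ad = min(1.0, 0.9 * min([-s[i] / ds[i] for i in range(n) if ds[i] < 0],
                                default=np.inf))
        x += ap * dx; y += ad * dy; s += ad * ds
    return x, y, s

# Illustrative LP:  max x1 + 2 x2  s.t.  x1 + x2 <= 4,  x1 + 3 x2 <= 6
# (slacks x3, x4); optimum x = (3, 1), objective value 5.
A = np.array([[1., 1., 1., 0.], [1., 3., 0., 1.]])
b = np.array([4., 6.])
c = np.array([-1., -2., 0., 0.])
x, y, s = ipm(A, b, c)
```

The normal-equations solve per iteration is exactly where the implementation effort discussed in the paper concentrates.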
Continuation and Path Following
, 1992
"... CONTENTS 1 Introduction 1 2 The Basics of PredictorCorrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 PiecewiseLinear Methods 34 6 Complexity 41 7 Available Software 44 References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful ..."
Abstract

Cited by 70 (6 self)
CONTENTS: 1. Introduction; 2. The Basics of Predictor-Corrector Path Following; 3. Aspects of Implementations; 4. Applications; 5. Piecewise-Linear Methods; 6. Complexity; 7. Available Software; References. 1. Introduction: Continuation, embedding or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated works as those of Poincaré (1881–1886), Klein (1882–1883) and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy invariance of degree. The use of deformations to solve nonlinear systems of equations may be traced back at least to Lahaye (1934). The classical embedding methods were the...
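A minimal predictor-corrector continuation sketch in the spirit of the survey: deform a trivial system g(x) = x - x0 into the target f(x) through the homotopy H(x, t) = t f(x) + (1 - t) g(x) and track the root as t goes from 0 to 1. The scalar example is illustrative only:

```python
# Homotopy continuation for a root of f, deforming from g(x) = x - x0.
def f(x):  return x**3 - 2*x - 5        # illustrative target equation
def df(x): return 3*x**2 - 2

def continuation(x0, steps=20, newton_iters=5):
    x = x0                              # root of H(., 0) is x0 by construction
    for k in range(1, steps + 1):
        t = k / steps                   # predictor: reuse previous root
        for _ in range(newton_iters):   # corrector: Newton on H(., t)
            H  = t * f(x) + (1 - t) * (x - x0)
            dH = t * df(x) + (1 - t)
            x -= H / dH
    return x

root = continuation(1.0)                # root of f at t = 1
```

Here the predictor is the trivial one (keep the previous point); the tangent predictors and step-length control discussed in the survey refine exactly this loop.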
The Common Optimization INterface for Operations Research: Promoting open-source software in the operations research community
, 2003
"... ..."
An Implementation Of Karmarkar's Algorithm For Linear Programming
 Mathematical Programming
, 1986
"... . This paper describes the implementation of power series dual affine scaling variants of Karmarkar's algorithm for linear programming. Based on a continuous version of Karmarkar's algorithm, two variants resulting from first and second order approximations of the continuous trajectory are implement ..."
Abstract

Cited by 57 (4 self)
This paper describes the implementation of power series dual affine scaling variants of Karmarkar's algorithm for linear programming. Based on a continuous version of Karmarkar's algorithm, two variants resulting from first and second order approximations of the continuous trajectory are implemented and tested. Linear programs are expressed in an inequality form, which allows for the inexact computation of the algorithm's direction of improvement, resulting in a significant computational advantage. Implementation issues particular to this family of algorithms, such as treatment of dense columns, are discussed. The code is tested on several standard linear programming problems and compares favorably with the simplex code MINOS 4.0. 1. INTRODUCTION: We describe in this paper a family of interior point power series affine scaling algorithms based on the linear programming algorithm presented by Karmarkar (1984). Two algorithms from this family, corresponding to first and second order pow...
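A first-order (single-step, no power series) dual affine scaling iteration can be sketched as follows for max b'y s.t. A'y <= c: rescale by the current slacks, take the resulting ascent direction, and move a fraction gamma of the way to the boundary. The problem data are illustrative, and this omits the power-series trajectory and dense-column refinements the paper discusses:

```python
# First-order dual affine scaling sketch for  max b'y  s.t.  A'y <= c.
import numpy as np

def dual_affine_scaling(A, b, c, y, gamma=0.66, iters=200):
    s = c - A.T @ y                          # slacks, strictly positive at start
    for _ in range(iters):
        M = A @ ((1.0 / s**2)[:, None] * A.T)   # A S^-2 A'
        dy = np.linalg.solve(M, b)           # ascent direction for b'y
        ds = -A.T @ dy
        neg = ds < 0
        if not neg.any():
            break                            # problem is unbounded
        alpha = gamma * np.min(-s[neg] / ds[neg])  # stay interior
        y = y + alpha * dy
        s = s + alpha * ds
        if alpha * np.linalg.norm(dy) < 1e-10:
            break                            # steps have become negligible
    return y

# Illustrative LP:  max y1 + 2 y2  s.t.  y1 + y2 <= 4,  y1 + 3 y2 <= 6,  y >= 0;
# the four constraint columns include the sign constraints on y.
A = np.array([[1., 1., -1., 0.], [1., 3., 0., -1.]])
b = np.array([1., 2.])
c = np.array([4., 6., 0., 0.])
y = dual_affine_scaling(A, b, c, np.array([1., 1.]))
```

The second-order variant in the paper replaces the single direction dy with a truncated power-series expansion of the continuous trajectory.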
MaxSolver: An efficient exact algorithm for (weighted) maximum satisfiability
 Artificial Intelligence
, 2005
"... Artificial Intelligence, to appear Maximum Boolean satisfiability (maxSAT) is the optimization counterpart of Boolean satisfiability (SAT), in which a variable assignment is sought to satisfy the maximum number of clauses in a Boolean formula. A branch and bound algorithm based on the DavisPutnam ..."
Abstract

Cited by 32 (1 self)
Artificial Intelligence, to appear. Maximum Boolean satisfiability (maxSAT) is the optimization counterpart of Boolean satisfiability (SAT), in which a variable assignment is sought to satisfy the maximum number of clauses in a Boolean formula. A branch and bound algorithm based on the Davis-Putnam-Logemann-Loveland procedure (DPLL) is one of the most competitive exact algorithms for solving maxSAT. In this paper, we propose and investigate a number of strategies for maxSAT. The first strategy is a set of unit propagation or unit resolution rules for maxSAT. We summarize three existing unit propagation rules and propose a new one based on a nonlinear programming formulation of maxSAT. The second strategy is an effective lower bound based on linear programming (LP). We show that the LP lower bound can be made effective as the number of clauses increases. The third strategy consists of a binary-clause-first rule and a dynamic-weighting variable ordering rule, which are motivated by a thorough analysis of two existing well-known variable orderings. Based on the analysis of these strategies, we develop an exact solver for both maxSAT and weighted maxSAT. Our experimental results on random problem instances and many instances from the maxSAT libraries show that our new solver outperforms most of the existing exact maxSAT solvers, with orders of magnitude of improvement in many cases.
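A toy branch-and-bound maxSAT solver in the DPLL style the abstract builds on: branch on variables in index order and prune with the count of already-falsified clauses as a lower bound. This sketch deliberately omits the paper's unit-propagation rules, LP bound, and variable-ordering heuristics:

```python
# Branch-and-bound maxSAT: literal v > 0 means variable v is true,
# v < 0 means variable -v is false.
def max_sat(clauses, n_vars):
    """Return the maximum number of simultaneously satisfiable clauses."""
    best = [len(clauses)]                    # fewest falsified clauses found

    def falsified(assign):
        count = 0
        for clause in clauses:
            if all(abs(l) <= len(assign) and assign[abs(l) - 1] != (l > 0)
                   for l in clause):
                count += 1                   # every literal assigned and false
        return count

    def branch(assign):
        lb = falsified(assign)               # lower bound: can only increase
        if lb >= best[0]:
            return                           # prune this subtree
        if len(assign) == n_vars:
            best[0] = lb
            return
        for value in (True, False):
            branch(assign + [value])

    branch([])
    return len(clauses) - best[0]

# Every assignment of (x1 v x2)(~x1 v x2)(x1 v ~x2)(~x1 v ~x2) falsifies
# exactly one clause, so the optimum is 3 of 4.
satisfied = max_sat([[1, 2], [-1, 2], [1, -2], [-1, -2]], 2)
```

The strategies in the paper sharpen exactly the two weak spots here: the lower bound (via unit resolution and LP) and the fixed branching order.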
The Many Facets of Linear Programming
, 2000
"... . We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interiorpoint, and other methods. Key words. linear programming  history  simplex method  ellipsoid method  interiorpoint methods 1. Introduction A ..."
Abstract

Cited by 25 (1 self)
We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interior-point, and other methods. Key words: linear programming, history, simplex method, ellipsoid method, interior-point methods. 1. Introduction: At the last Mathematical Programming Symposium in Lausanne, we celebrated the 50th anniversary of the simplex method. Here, we are at or close to several other anniversaries relating to linear programming: the sixtieth of Kantorovich's 1939 paper on "Mathematical Methods in the Organization and Planning of Production" (and the fortieth of its appearance in the Western literature) [55]; the fiftieth of the historic 0th Mathematical Programming Symposium that took place in Chicago in 1949 on Activity Analysis of Production and Allocation [64]; the forty-fifth of Frisch's suggestion of the logarithmic barrier function for linear programming [37]; the twenty-fifth of the awarding of the 1975 Nobe...
Data Structures and Programming Techniques for the Implementation of Karmarkar's Algorithm
, 1989
"... This paper describes data structures and programming techniques used in an implementation of Karmarkar's algorithm for linear programming. Most of oar discussion focuses on applying Gaussian elimination toward the solution of a sequence of sparse symmetric positive dermite systems of linear equation ..."
Abstract

Cited by 23 (5 self)
This paper describes data structures and programming techniques used in an implementation of Karmarkar's algorithm for linear programming. Most of our discussion focuses on applying Gaussian elimination toward the solution of a sequence of sparse symmetric positive definite systems of linear equations, the main requirement in Karmarkar's algorithm. Our approach relies on a direct factorization scheme, with an extensive symbolic factorization step performed in a preparatory stage of the linear programming algorithm. An interpretive version of Gaussian elimination makes use of the symbolic information to perform the actual numerical computations at each iteration of the algorithm. We also discuss ordering algorithms that attempt to reduce the amount of fill-in in the LU factors, a procedure to build the linear system solved at each iteration, the use of a dense window data structure in the Gaussian elimination method, a preprocessing procedure designed to increase the sparsity of the linear programming coefficient matrix, and the special treatment of dense columns in the coefficient matrix.
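The core computation described above, repeatedly solving a sparse symmetric positive definite system built from the constraint matrix, can be sketched with off-the-shelf tools. Here reverse Cuthill-McKee stands in for the paper's fill-reducing ordering (computed once, symbolically) and a dense Cholesky factor replaces its sparse interpretive elimination; all data are illustrative:

```python
# Order-once, factor-per-iteration sketch for an SPD system  M x = r.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Illustrative sparse constraint matrix A and diagonal scaling Theta.
A = sp.random(6, 12, density=0.3, random_state=0, format="csr")
theta = np.ones(12)
M = (A @ sp.diags(theta) @ A.T).toarray() + np.eye(6)  # ridge keeps M SPD

perm = reverse_cuthill_mckee(sp.csr_matrix(M))  # ordering stage, done once
Mp = M[np.ix_(perm, perm)]                      # symmetrically permuted M
L = np.linalg.cholesky(Mp)                      # numeric factorization

r = np.ones(6)
z = np.linalg.solve(L, r[perm])                 # solve L z = P r
x = np.empty(6)
x[perm] = np.linalg.solve(L.T, z)               # solve L'(P x) = z, undo ordering
```

Separating the one-time ordering/symbolic stage from the per-iteration numeric factorization is exactly the division of labor the paper's data structures are built around.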