Results 1–10 of 21
LOQO: An interior point code for quadratic programming, 1994
Abstract

Cited by 153 (9 self)
This paper describes a software package, called LOQO, which implements a primal-dual interior-point method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming, with only brief mention of the extensions to convex and general nonlinear programming, since a detailed paper describing these extensions was published recently elsewhere. In particular, we emphasize the importance of establishing and maintaining symmetric quasidefiniteness of the reduced KKT system. We show that problems in the industry-standard MPS format can be formulated in such a way as to provide quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems.
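The quasidefiniteness emphasized above is what allows the reduced KKT matrix to be factored as LDLᵀ without pivoting, for any symmetric ordering. A minimal numpy sketch with invented toy data (the Hessian Q, constraint matrix A, and regularization blocks are illustrative, not from the paper):

```python
import numpy as np

def ldl_no_pivot(K):
    # Plain LDL^T without pivoting: this succeeds for every symmetric
    # ordering precisely when K is symmetric quasidefinite.
    n = K.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = K[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (K[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

# Toy reduced KKT matrix [[-(Q + D), A^T], [A, E]] with SPD blocks
Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # QP Hessian (invented)
A = np.array([[1.0, 1.0]])               # constraint matrix (invented)
D = 0.1 * np.eye(2)                      # primal barrier/regularization term
E = 0.1 * np.eye(1)                      # dual regularization term
K = np.block([[-(Q + D), A.T], [A, E]])

L, d = ldl_no_pivot(K)
```

The factorization succeeds with no row exchanges, and the pivot signs reflect the quasidefinite block structure: negative pivots for the primal block, positive for the dual block.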
An Interior-Point Algorithm for Nonconvex Nonlinear Programming
Computational Optimization and Applications, 1997
Abstract

Cited by 144 (13 self)
The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior-point methods for linear and quadratic programming. Major modifications include a merit function and an altered search direction to ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient, and has the promise of greatly reducing solution times on at least some classes of models.
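The role of the merit function can be sketched in a few lines: a candidate step is shrunk by backtracking until the merit value (objective plus weighted constraint violation) actually decreases, which guarantees progress even when the search direction comes from a nonconvex model. The toy objective, constraint, weight beta, and step below are all invented for illustration:

```python
import numpy as np

def merit(x, beta):
    # l2-type merit function: objective plus weighted constraint violation
    f = x @ x                      # toy objective  f(x) = ||x||^2
    h = x[0] + x[1] - 1.0          # toy equality constraint  h(x) = 0
    return f + beta * abs(h)

def backtrack(x, p, beta, t=1.0, shrink=0.5, c=1e-4):
    # Shrink the step length t until the merit function decreases
    # by at least a small fraction of the step size.
    m0 = merit(x, beta)
    while merit(x + t * p, beta) > m0 - c * t * np.dot(p, p):
        t *= shrink
        if t < 1e-12:              # give up: p is not a descent direction
            break
    return t

x = np.array([2.0, 2.0])
p = np.array([-1.5, -1.5])         # a descent direction toward (0.5, 0.5)
t = backtrack(x, p, beta=10.0)
```

Here the full step already reduces the merit value, so no shrinking is needed; a bad direction would instead drive t toward zero.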
Interior-point methods for nonconvex nonlinear programming: Filter methods and merit functions
Computational Optimization and Applications, 2002
Abstract

Cited by 84 (7 self)
In this paper, we present global and local convergence results for an interior-point method for nonlinear programming and analyze the computational performance of its implementation. The algorithm uses an ℓ1 penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb's strictly feasible interior-point method [12]. The global convergence of the algorithm is proved under mild assumptions, and local analysis shows that it converges Q-quadratically for a large class of problems. The proposed approach is the first to simultaneously have all of the following properties while solving a general nonconvex nonlinear programming problem: (1) the convergence analysis does not assume boundedness of the dual iterates; (2) local convergence does not require the Linear Independence Constraint Qualification; (3) the solution of the penalty problem is shown to locally converge to optima that may not satisfy the Karush-Kuhn-Tucker conditions; and (4) the algorithm is applicable to mathematical programs with equilibrium constraints. Numerical testing on a set of general nonlinear programming problems, including degenerate and infeasible problems, confirms the theoretical results. We also provide comparisons to a highly efficient nonlinear solver and thoroughly analyze the effects of enforcing theoretical convergence guarantees on the computational performance of the algorithm.
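The ℓ1 penalty relaxation has a useful exactness property that a one-dimensional toy problem makes concrete: below a finite threshold on the penalty weight ρ the penalty minimizer violates the constraint, while above it the penalty minimizer solves the constrained problem exactly. The problem instance below is invented for illustration, not from the paper:

```python
import numpy as np

def l1_penalty(x, rho):
    # Exact l1 penalty for:  minimize x^2  subject to  x >= 1,
    # written with the constraint as g(x) = 1 - x <= 0.
    return x**2 + rho * np.maximum(0.0, 1.0 - x)

xs = np.linspace(-2, 3, 100001)
# Below the exact-penalty threshold (here rho* = 2, the Lagrange
# multiplier at the solution x = 1), the minimizer is infeasible:
x_lo = xs[np.argmin(l1_penalty(xs, rho=1.0))]   # minimizer is rho/2 = 0.5
# Above the threshold, the penalty minimizer is the constrained optimum:
x_hi = xs[np.argmin(l1_penalty(xs, rho=4.0))]   # minimizer is x = 1
```

This is why, unlike a quadratic penalty, the ℓ1 penalty does not need its weight driven to infinity and keeps the multiplier estimates bounded.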
Handbook of semidefinite programming
Abstract

Cited by 58 (2 self)
Semidefinite programming (or SDP) has been one of the most exciting and active research areas in optimization during the 1990s. It has attracted researchers with very diverse backgrounds, including experts in convex programming, linear algebra, numerical optimization, combinatorial optimization, control theory, and statistics. This tremendous research activity was spurred by the discovery of important applications in combinatorial optimization and control theory, the development of efficient interior-point algorithms for solving SDP problems, and the depth and elegance of the underlying optimization theory. This book includes nineteen chapters on the theory, algorithms, and applications of semidefinite programming. Written by the leading experts on the subject, it offers an advanced and broad overview of the current state of the field. The coverage is somewhat less comprehensive, and the overall level more advanced, than we had planned at the start of the project. In order to finish the book in a timely fashion, we have had to abandon hopes for separate chapters on some important topics (such as a discussion of SDP algorithms in the ...
Protein Structure Prediction by Linear Programming, 2003
Abstract

Cited by 13 (0 self)
If the primary sequence of a protein is given, what is its three-dimensional structure? This is one of the most important and difficult problems in molecular biology and has tremendous implications for proteomics. Over the last three decades, this issue has been intensely researched. Protein threading represents one of the most promising techniques. So far, there are many protein structure prediction computer programs based on protein threading; however, almost none incorporates the pairwise contact (interaction) potential explicitly in its energy function, although scientists believe that pairwise interactions are important for fold-recognition targets. The underlying reason for ignoring the pairwise potential is that the protein threading problem is NP-hard (i.e., it is unlikely to admit a polynomial-time algorithm) if the pairwise interactions are treated rigorously.
Using LOQO to Solve Second-Order Cone Programming Problems
Princeton University, 1998
Abstract

Cited by 11 (0 self)
Many nonlinear optimization problems can be cast as second-order cone programming problems. In this paper, we discuss a broad spectrum of such applications. For each application, we consider various formulations, some convex and some not, and study which ones are amenable to solution using the general-purpose interior-point solver LOQO. We also compare with other commonly available nonlinear programming solvers and with special-purpose codes for second-order cone programming.
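A hedged sketch of the kind of cast the abstract refers to: the least-norm problem min ||Ax - b|| becomes the second-order cone program "min t subject to ||Ax - b|| <= t", an epigraph formulation a general-purpose NLP solver can handle. Here SciPy's SLSQP stands in for LOQO, and A, b are random toy data, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))          # invented overdetermined system
b = rng.standard_normal(5)

def obj(z):
    # z = (x1, x2, t); the SOCP objective is the epigraph variable t alone
    return z[2]

def cone(z):
    # second-order cone membership: t - ||Ax - b|| >= 0
    x, t = z[:2], z[2]
    return t - np.linalg.norm(A @ x - b)

z0 = np.array([0.0, 0.0, 10.0])          # strictly feasible start (t large)
res = minimize(obj, z0, constraints=[{"type": "ineq", "fun": cone}])

# The optimal t should match the least-squares residual norm.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For a residual that can reach zero the norm is nonsmooth at the optimum, which is exactly the kind of formulation issue the paper examines when deciding which casts suit a general-purpose solver.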
A full-Newton step O(n) infeasible interior-point algorithm for linear optimization, 2005
Abstract

Cited by 7 (4 self)
We present a primal-dual infeasible interior-point algorithm. As usual, the algorithm decreases the duality gap and the feasibility residuals at the same rate. Assuming that an optimal solution exists, it is shown that at most O(n) iterations suffice to reduce the duality gap and the residuals by the factor 1/e. This implies an O(n log(n/ε)) iteration bound for obtaining an ε-solution of the problem at hand, which coincides with the best known bound for infeasible interior-point algorithms. The algorithm constructs strictly feasible iterates for a sequence of perturbations of the given problem and its dual problem. A special feature of the algorithm is that it uses only full Newton steps. Two types of full Newton steps are used: so-called feasibility steps and usual (centering) steps. Starting at strictly feasible iterates of a perturbed pair, (very) close to its central path, feasibility steps serve to generate strictly feasible iterates for the next perturbed pair. By performing a few centering steps for the new perturbed pair, we obtain strictly feasible iterates close enough to the central path of the new perturbed pair. The algorithm finds an optimal solution or detects infeasibility or unboundedness of the given problem.
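The centering steps mentioned above can be made concrete for a linear program in standard form: from a strictly feasible primal-dual point (x, y, s), one full (undamped) Newton step for the central-path equations Ax = b, Aᵀy + s = c, xᵢsᵢ = μ. The tiny LP below is invented for illustration:

```python
import numpy as np

def centering_step(A, x, y, s, mu):
    # One full Newton step for the central-path system:
    #   A dx = 0,   A^T dy + ds = 0,   S dx + X ds = mu e - X S e
    m, n = A.shape
    K = np.zeros((2 * n + m, 2 * n + m))
    K[:m, :n] = A                        # primal feasibility rows
    K[m:m + n, n:n + m] = A.T            # dual feasibility rows
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(s)           # linearized complementarity rows
    K[m + n:, n + m:] = np.diag(x)
    rhs = np.concatenate([np.zeros(m), np.zeros(n), mu - x * s])
    d = np.linalg.solve(K, rhs)
    return x + d[:n], y + d[n:n + m], s + d[n + m:]

A = np.array([[1.0, 1.0]])               # toy LP: min c^T x, x1 + x2 = 2
x = np.array([1.0, 1.0])                 # strictly feasible primal point
y = np.array([0.0])
s = np.array([1.0, 2.0])                 # strictly feasible dual slack
mu = 1.5
x1, y1, s1 = centering_step(A, x, y, s, mu)
```

The step preserves primal and dual feasibility exactly (those equations are linear) and pulls the complementarity products xᵢsᵢ toward the common value μ.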
A New and Efficient Large-Update Interior-Point Method for Linear Optimization, 2001
Abstract

Cited by 6 (3 self)
Recently, in [10], the authors presented a new large-update primal-dual method for Linear Optimization, whose O(n^(2/3) log(n/ε)) iteration bound substantially improved the classical bound for such methods, which is O(n log(n/ε)). In this paper we present an improved analysis of the new method. The analysis uses some new mathematical tools, partially developed in [11], where we consider a whole family of interior-point methods which contains the method considered in this paper. The new analysis yields an O(√n (log n) log(n/ε)) iteration bound for large-update methods. Since we concentrate on one specific member of the family considered in [11], the analysis is significantly simpler than in [11]. The new bound further improves the iteration bound for large-update methods, and is quite close to the currently best iteration bound known for interior-point methods, namely O(√n log(n/ε)). Hence, the existing gap between the iteration bounds for small-update and large-update met...
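As a reading aid, the iteration bounds mentioned in this abstract, in improving order (with ε the target accuracy):

```latex
\begin{align*}
\text{classical large-update bound:} \quad & O\!\left(n \log \tfrac{n}{\varepsilon}\right) \\
\text{bound of [10]:} \quad & O\!\left(n^{2/3} \log \tfrac{n}{\varepsilon}\right) \\
\text{this paper:} \quad & O\!\left(\sqrt{n}\,(\log n) \log \tfrac{n}{\varepsilon}\right) \\
\text{best known (small-update) bound:} \quad & O\!\left(\sqrt{n} \log \tfrac{n}{\varepsilon}\right)
\end{align*}
```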
IntervalRank — Isotonic Regression with Listwise and Pairwise Constraints
Abstract

Cited by 3 (0 self)
Ranking a set of retrieved documents according to their relevance to a given query has become a popular problem at the intersection of web search, machine learning, and information retrieval. Recent work on ranking has focused on a number of different paradigms, namely pointwise, pairwise, and listwise approaches. Each of these paradigms focuses on a different aspect of the dataset while largely ignoring the others. The current paper shows how a combination of them can lead to improved ranking performance and, moreover, how it can be implemented in log-linear time. The basic idea of the algorithm is to use isotonic regression with adaptive bandwidth selection per relevance grade. This results in an implicitly defined loss function which can be minimized efficiently by a subgradient descent procedure.
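The isotonic regression at the core of the algorithm is classically computed by the Pool Adjacent Violators algorithm (PAVA), which runs in linear time after sorting. A minimal unit-weight sketch (the paper's adaptive bandwidth selection and listwise/pairwise constraints are not reproduced here):

```python
def pava(y):
    # Pool Adjacent Violators: returns the nondecreasing sequence
    # closest to y in least squares, by merging out-of-order blocks.
    blocks = []                       # stack of (block mean, block size)
    for v in y:
        mean, cnt = float(v), 1
        # merge backwards while monotonicity is violated
        while blocks and blocks[-1][0] > mean:
            m2, c2 = blocks.pop()
            mean = (mean * cnt + m2 * c2) / (cnt + c2)
            cnt += c2
        blocks.append((mean, cnt))
    out = []
    for mean, cnt in blocks:
        out.extend([mean] * cnt)
    return out

fit = pava([3.0, 1.0, 2.0, 5.0, 4.0])
```

Each input element is pushed once and popped at most once, so the pass itself is O(n).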
Symmetrization of Binary Random Variables
Bernoulli, 1999
Abstract

Cited by 2 (0 self)
A random variable Y is called an independent symmetrizer of a given random variable X if (a) it is independent of X and (b) the distribution of X + Y is symmetric about 0. In cases where the distribution of X is symmetric about its mean, it is easy to see that the constant random variable Y = -E[X] is a minimum-variance independent symmetrizer. Taking ...
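A quick numerical check of the constant-symmetrizer claim, using X ~ Bernoulli(1/2), which is symmetric about its mean 1/2 (toy simulation, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=100_000).astype(float)   # Bernoulli(1/2) sample
Z = X + (-0.5)                   # X + Y with the constant Y = -E[X] = -1/2

# Symmetry of Z about 0: the two tails should balance, and the mean
# should vanish (up to sampling noise).
tail_gap = abs((Z <= -0.5).mean() - (Z >= 0.5).mean())
```

The symmetrizer here has zero variance, which is what makes the constant choice minimum-variance among all independent symmetrizers in this symmetric case.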