Benchmarking Optimization Software with Performance Profiles
, 2001
Abstract

Cited by 386 (8 self)
We propose performance profiles, distribution functions for a performance metric, as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation. The benchmarking of optimization software has recently gained considerable visibility. Hans Mittelmann's [13] work on a variety of optimization software has frequently uncovered deficiencies in the software and has generally led to software improvements. Although Mittelmann's efforts have gained the most notice, other researchers have been concerned with the evaluation and performance of optimization codes. As recent examples, we cite [1, 2, 3, 4, 6, 12, 17]. The interpretation and analysis of the data generated by the benchmarking process are the main technical issues addressed in this paper. Most benchmarking efforts involve tables displaying the performance of each solver on each problem for a set of metrics such...
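The profile described here is just the empirical distribution function of each solver's performance ratios, so it is straightforward to compute. A minimal sketch in NumPy; the array layout, names, and failure convention are my own, not from the paper:

```python
import numpy as np

def performance_profile(T, taus):
    """Performance profiles as empirical distribution functions.

    T[p, s] holds the cost (e.g. CPU time) of solver s on problem p,
    with np.inf marking a failure. Returns rho[s, k], the fraction of
    problems on which solver s is within a factor taus[k] of the best
    solver for that problem.
    """
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)      # best cost on each problem
    ratios = T / best                        # performance ratios r[p, s]
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Two solvers on three problems; solver 0 fails on the last problem.
T = [[1.0, 2.0],
     [1.0, 4.0],
     [np.inf, 3.0]]
rho = performance_profile(T, taus=[1.0, 2.0, 4.0])
```

Plotting rho against tau gives the familiar profile curves: the value at tau = 1 is the fraction of problems a solver wins outright, and the limiting value as tau grows is the fraction of problems it solves at all.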
Domain adaptation for statistical classifiers
 J. Artif. Int. Res
, 2006
Trust region Newton method for large-scale logistic regression
 In Proceedings of the 24th International Conference on Machine Learning (ICML)
, 2007
Abstract

Cited by 98 (22 self)
Large-scale logistic regression arises in many applications such as document classification and natural language processing. In this paper, we apply a trust region Newton method to maximize the log-likelihood of the logistic regression model. The proposed method uses only approximate Newton steps in the beginning, but achieves fast convergence in the end. Experiments show that it is faster than the commonly used quasi-Newton approach for logistic regression. We also compare it with existing linear SVM implementations.
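As a hedged sketch of the approach (not the authors' implementation), one can feed the regularized negative log-likelihood, its gradient, and a Hessian-vector product to SciPy's generic trust-region Newton-CG solver. The synthetic data and names below are illustrative only:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def nll(w, X, y, C=1.0):
    # L2-regularized negative log-likelihood, labels y in {-1, +1}
    z = y * (X @ w)
    return 0.5 * (w @ w) + C * np.sum(np.logaddexp(0.0, -z))

def grad(w, X, y, C=1.0):
    z = y * (X @ w)
    return w - C * (X.T @ (expit(-z) * y))

def hessp(w, v, X, y, C=1.0):
    # Hessian-vector product (I + C X' D X) v without forming the Hessian
    s = expit(y * (X @ w))
    d = s * (1.0 - s)
    return v + C * (X.T @ (d * (X @ v)))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.where(X @ rng.standard_normal(5) > 0, 1.0, -1.0)

res = minimize(nll, np.zeros(5), args=(X, y), jac=grad, hessp=hessp,
               method='trust-ncg')
```

The Hessian is never formed: `hessp` evaluates the product with two matrix-vector multiplications, which is what makes trust-region Newton attractive at large scale.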
Trust-Region Interior-Point Algorithms For Minimization Problems With Simple Bounds
 SIAM J. CONTROL AND OPTIMIZATION
, 1995
Abstract

Cited by 55 (17 self)
Two trust-region interior-point algorithms for the solution of minimization problems with simple bounds are analyzed and tested. The algorithms scale the local model in a way similar to Coleman and Li [1]. The first algorithm is more usual in that the trust region and the local quadratic model are consistently scaled. The second algorithm proposed here uses an unscaled trust region. A global convergence result for these algorithms is given and dogleg and conjugate-gradient algorithms to compute trial steps are introduced. Some numerical examples that show the advantages of the second algorithm are presented.
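For context, the standard unscaled dogleg computation of a trial step can be sketched generically; this is the textbook method, not the paper's scaled variants:

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Dogleg trial step for  min g'p + 0.5 p'B p  s.t. ||p|| <= delta,
    assuming B is symmetric positive definite."""
    p_newton = -np.linalg.solve(B, g)        # unconstrained minimizer
    if np.linalg.norm(p_newton) <= delta:
        return p_newton
    p_cauchy = -((g @ g) / (g @ (B @ g))) * g  # minimizer along -g
    if np.linalg.norm(p_cauchy) >= delta:
        return -(delta / np.linalg.norm(g)) * g
    # intersect the Cauchy-to-Newton segment with the boundary
    d = p_newton - p_cauchy
    a, b = d @ d, 2.0 * (p_cauchy @ d)
    c = p_cauchy @ p_cauchy - delta ** 2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_cauchy + t * d

step = dogleg_step(np.array([1.0, 1.0]), np.diag([1.0, 10.0]), 0.5)
```

When the Newton step lies inside the trust region it is taken outright; otherwise the step is the point where the piecewise-linear dogleg path crosses the boundary, so its norm equals the radius.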
On the resolution of monotone complementarity problems
 Comput. Optim. Appl
, 1996
Abstract

Cited by 54 (10 self)
Abstract. A reformulation of the nonlinear complementarity problem (NCP) as an unconstrained minimization problem is considered. It is shown that any stationary point of the unconstrained objective function is already a solution of NCP if the mapping F involved in NCP is continuously differentiable and monotone. A descent algorithm is described which uses only function values of F. Some numerical results are given.
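The snippet does not name the merit function; one common choice for such reformulations (not necessarily the one used in this paper) is the Fischer-Burmeister function. A small illustrative sketch on a monotone linear complementarity problem, using a derivative-free method in the spirit of "only function values of F":

```python
import numpy as np
from scipy.optimize import minimize

def fischer_burmeister(a, b):
    # phi(a, b) = 0  iff  a >= 0, b >= 0 and a * b = 0
    return np.sqrt(a ** 2 + b ** 2) - a - b

def merit(x, F):
    # NCP recast as unconstrained minimization of 0.5 * ||phi||^2
    phi = fischer_burmeister(x, F(x))
    return 0.5 * (phi @ phi)

# Monotone NCP with affine F(x) = M x + q, M positive definite.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, 1.0])
F = lambda x: M @ x + q

res = minimize(merit, np.zeros(2), args=(F,), method='Nelder-Mead',
               options={'xatol': 1e-10, 'fatol': 1e-12})
x = res.x   # at a solution: x >= 0, F(x) >= 0, x'F(x) = 0
```

For this example the solution is x = (0.5, 0): the first complementarity pair is active in F(x) and the second in x, and the merit function vanishes there.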
Automatic preconditioning by limited memory quasi-Newton updating.
 SIAM Journal on Optimization,
, 2000
Abstract

Cited by 44 (2 self)
This paper deals with the preconditioning of truncated Newton methods for the solution of large-scale nonlinear unconstrained optimization problems. We focus on preconditioners which can be naturally embedded in the framework of truncated Newton methods, i.e. which can be built without storing the Hessian matrix of the function to be minimized, but only based upon information on the Hessian obtained by the product of the Hessian matrix times a vector. In particular we propose a diagonal preconditioning which enjoys this feature and which enables us to examine the effect of diagonal scaling on truncated Newton methods. In fact, this new preconditioner carries out a scaling strategy and it is based on the concept of equilibration of the data in linear systems of equations. Extensive numerical testing has been performed, showing that the proposed diagonal preconditioning strategy is very effective. In fact, on most problems considered, the resulting diagonally preconditioned truncated Newton method performs better than both the unpreconditioned method and the one using an automatic preconditioner based on limited-memory quasi-Newton updating (PREQN) recently proposed by Morales and Nocedal [Morales,
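The paper's equilibration-based diagonal is only summarized above, but the key constraint, access to the Hessian only through products with vectors, can be illustrated with a different, generic diagonal estimator (a Rademacher/Hutchinson-style sketch of my own, not the paper's scheme):

```python
import numpy as np

def estimate_diagonal(hessvec, n, n_samples=5000, seed=0):
    """Estimate diag(H) using only Hessian-vector products:
    E[v * (H v)] = diag(H) when v has i.i.d. +/-1 entries."""
    rng = np.random.default_rng(seed)
    d = np.zeros(n)
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=n)
        d += v * hessvec(v)
    return d / n_samples

# Toy check against a known Hessian (never formed inside the estimator).
H = np.diag([1.0, 4.0, 9.0]) + 0.1
d_est = estimate_diagonal(lambda v: H @ v, 3)
```

The resulting diagonal (safeguarded away from zero) can then precondition the inner conjugate-gradient iterations of a truncated Newton method.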
Limited-Memory Matrix Methods with Applications
, 1997
Abstract

Cited by 33 (6 self)
Abstract. The focus of this dissertation is on matrix decompositions that use a limited amount of computer memory, thereby allowing problems with a very large number of variables to be solved. Specifically, we will focus on two application areas: optimization and information retrieval. We introduce a general algebraic form for the matrix update in limited-memory quasi-Newton methods. Many well-known methods such as limited-memory Broyden Family methods satisfy the general form. We are able to prove several results about methods which satisfy the general form. In particular, we show that the only limited-memory Broyden Family method (using exact line searches) that is guaranteed to terminate within n iterations on an n-dimensional strictly convex quadratic is the limited-memory BFGS method. Furthermore, we are able to introduce several new variations on the limited-memory BFGS method that retain the quadratic termination property. We also have a new result that shows that full-memory Broyden Family methods (using exact line searches) that skip p updates to the quasi-Newton matrix will terminate in no more than n+p steps on an n-dimensional strictly convex quadratic. We propose several new variations on the limited-memory BFGS method
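The limited-memory BFGS method discussed above is usually applied through the standard two-loop recursion; a textbook sketch (not the dissertation's generalized update form):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: apply the L-BFGS inverse-Hessian
    approximation to gradient g and return the search direction.
    s_k = x_{k+1} - x_k,  y_k = g_{k+1} - g_k (most recent last)."""
    q = g.copy()
    stack = []
    for s, y in reversed(list(zip(s_list, y_list))):
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        stack.append((alpha, rho, s, y))
    if s_list:                               # standard initial scaling H0
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for alpha, rho, s, y in reversed(stack):
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return -q

# With curvature pairs taken from H = diag([2, 5]), the recursion
# reproduces the exact Newton direction -H^{-1} g.
d = lbfgs_direction(np.array([2.0, 5.0]),
                    [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
                    [np.array([2.0, 0.0]), np.array([0.0, 5.0])])
```

Only the stored pairs are touched, so each application costs O(mn) for memory size m; no n-by-n matrix is ever formed, which is the point of the limited-memory setting.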
Numerical methods for electronic structure calculations of materials
, 2006
Abstract

Cited by 25 (1 self)
The goal of this article is to give an overview of numerical problems encountered when determining the electronic structure of materials and the rich variety of techniques used to solve these problems. The paper is intended for a diverse scientific computing audience. For this reason, we assume the reader does not have an extensive background in the related physics. Our overview focuses on the nature of the numerical problems to be solved, their origin, and on the methods used to solve the resulting linear algebra or nonlinear optimization problems. It is common knowledge that the behavior of matter at the nanoscale is, in principle, entirely determined by the Schrödinger equation. In practice, this equation in its original form is not tractable. Successful, but approximate, versions of this equation, which allow one to study nontrivial systems, took about five or six decades to develop. In particular, the last two decades saw a flurry of activity in developing effective software. One of the main practical variants of the Schrödinger equation is based on what is referred to as Density Functional Theory (DFT). The combination of DFT with pseudopotentials allows one to obtain in an efficient way the ground state configuration for many materials. This article will emphasize pseudopotential-density
MODEL PROBLEMS FOR THE MULTIGRID OPTIMIZATION OF SYSTEMS GOVERNED BY DIFFERENTIAL EQUATIONS
, 2005
Abstract

Cited by 19 (1 self)
We discuss a multigrid approach to the optimization of systems governed by differential equations. Such optimization problems appear in many applications and are of a different nature than systems of equations. Our approach uses an optimization-based multigrid algorithm in which the multigrid algorithm relies explicitly on nonlinear optimization models as subproblems on coarser grids. Our goal is not to argue for a particular optimization-based multigrid algorithm, but instead to demonstrate how multigrid can be used to accelerate nonlinear programming algorithms. Furthermore, using several model problems we give evidence (both theoretical and numerical) that the optimization setting is well suited to multigrid algorithms. Some of the model problems show that the optimization problem may be more amenable to multigrid than the governing differential equation. In addition, we relate the multigrid approach to more traditional optimization methods as further justification for the use of an optimization-based multigrid algorithm.
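A toy two-grid illustration of the idea, minimizing the quadratic 0.5 x'Ax - b'x for a 1D Poisson model problem: steepest-descent steps act as the smoother, and the coarse subproblem is a Galerkin-restricted quadratic model. All concrete choices here are mine and far simpler than the algorithms in the paper:

```python
import numpy as np

def poisson_matrix(n):
    """1D Poisson model problem: tridiag(-1, 2, -1) / h^2."""
    h2 = float((n + 1) ** 2)
    return h2 * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def restrict(v):
    """Full-weighting restriction of interior values to the coarse grid."""
    return 0.25 * (v[0:-2:2] + 2.0 * v[1:-1:2] + v[2::2])

def prolong(c):
    """Linear interpolation from the coarse grid back to the fine grid."""
    f = np.zeros(2 * len(c) + 1)
    f[1::2] = c
    f[0], f[-1] = 0.5 * c[0], 0.5 * c[-1]
    f[2:-1:2] = 0.5 * (c[:-1] + c[1:])
    return f

def two_grid_cycle(A, A_c, b, x, nu=3):
    """Smooth, solve the restricted (coarse) model, smooth again."""
    for _ in range(nu):                      # pre-smoothing: steepest
        g = A @ x - b                        # descent with exact step
        if g @ g < 1e-30:
            return x
        x = x - ((g @ g) / (g @ (A @ g))) * g
    r_c = restrict(b - A @ x)                # coarse model of the residual
    x = x + prolong(np.linalg.solve(A_c, r_c))
    for _ in range(nu):                      # post-smoothing
        g = A @ x - b
        if g @ g < 1e-30:
            return x
        x = x - ((g @ g) / (g @ (A @ g))) * g
    return x

n = 15                                       # fine-grid interior points
A = poisson_matrix(n)
# Galerkin coarse operator built column by column from the transfers.
A_c = np.column_stack([restrict(A @ prolong(e)) for e in np.eye((n - 1) // 2)])
b = np.ones(n)
x = np.zeros(n)
for _ in range(50):
    x = two_grid_cycle(A, A_c, b, x)
```

Each cycle reduces the smooth error through the coarse model and the oscillatory error through the descent steps, which is the division of labor the paper exploits with genuinely nonlinear coarse models.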