Results 1–10 of 111
Interior methods for nonlinear optimization
SIAM Review, 2002
Cited by 105 (5 self)
Abstract: Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
Theory of Algorithms for Unconstrained Optimization
, 1992
Cited by 103 (1 self)
Abstract: In this article I will attempt to review the most recent advances in the theory of unconstrained optimization, and will also describe some important open questions. Before doing so, I should point out that the value of the theory of optimization is not limited to its capacity for explaining the behavior of the most widely used techniques. The question ...
DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis: Version 4.0 Reference Manual
, 2006
A new conjugate gradient method with guaranteed descent and an efficient line search
SIAM J. Optim., 2005
Cited by 53 (6 self)
Abstract: A new nonlinear conjugate gradient method and an associated implementation, based on an inexact line search, are proposed and analyzed. With exact line search, our method reduces to a nonlinear version of the Hestenes–Stiefel conjugate gradient scheme. For any (inexact) line search, our scheme satisfies the descent condition g_k^T d_k ≤ −(7/8)‖g_k‖². Moreover, a global convergence result is established when the line search fulfills the Wolfe conditions. A new line search scheme is developed that is efficient and highly accurate. Efficiency is achieved by exploiting properties of linear interpolants in a neighborhood of a local minimizer. High accuracy is achieved by using a convergence criterion, which we call the "approximate Wolfe" conditions, obtained by replacing the sufficient decrease criterion in the Wolfe conditions with an approximation that can be evaluated with greater precision in a neighborhood of a local minimum than the usual sufficient decrease criterion. Numerical comparisons are given with both L-BFGS and conjugate gradient methods using the unconstrained optimization problems in the CUTE library.
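The update described in this abstract (Hager and Zhang's CG_DESCENT) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses SciPy's generic Wolfe line search instead of the paper's approximate-Wolfe search, and checks the stated descent bound g_k^T d_k ≤ −(7/8)‖g_k‖² at every iteration.

```python
import numpy as np
from scipy.optimize import rosen, rosen_der, line_search

def hz_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    """Nonlinear CG with the Hager-Zhang beta; illustrative sketch only."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]  # standard Wolfe search (not approximate Wolfe)
        if alpha is None:                      # line search failed: restart along steepest descent
            d = -g
            alpha = line_search(f, grad, x, d)[0]
        x = x + alpha * d
        g_new = grad(x)
        y = g_new - g
        dy = d @ y
        # Hager-Zhang beta; yields g^T d <= -(7/8)||g||^2 whenever d^T y != 0
        beta = ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy
        d = -g_new + beta * d
        assert g_new @ d <= -(7.0 / 8.0) * (g_new @ g_new) + 1e-8
        g = g_new
    return x

x = hz_cg(rosen, rosen_der, [-1.2, 1.0])  # classic Rosenbrock test problem
```

The descent bound is an algebraic property of the beta formula itself, which is why the assertion holds regardless of how accurate the line search is.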
L-BFGS-B: Fortran Subroutines for Large-Scale Bound-Constrained Optimization
, 1994
Cited by 51 (2 self)
Abstract: L-BFGS-B is a limited-memory algorithm for solving large nonlinear optimization problems subject to simple bounds on the variables. It is intended for problems in which information on the Hessian matrix is difficult to obtain, or for large dense problems. L-BFGS-B can also be used for unconstrained problems, and in this case performs similarly to its predecessor, algorithm L-BFGS (Harwell routine VA15). The algorithm is implemented in Fortran 77.
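The original routines are Fortran 77, but the same algorithm is exposed through SciPy's `minimize` wrapper. A minimal usage sketch (the bound-constrained quadratic here is purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective: the unconstrained minimizer (3, -1) lies outside the
# box, so the bound-constrained solution is pushed onto the boundary at (2, 0).
def f(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)])

res = minimize(f, x0=[1.0, 1.0], jac=grad, method="L-BFGS-B",
               bounds=[(0.0, 2.0), (0.0, 5.0)])  # simple bounds on each variable
```

Supplying `jac` is optional; without it the wrapper falls back to finite-difference gradients, matching the abstract's point that Hessian information is never required.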
OPT++: An object-oriented class library for nonlinear optimization
Sandia Report SAND94-8225, Sandia National Laboratories, 1994
Cited by 43 (1 self)
Automatic preconditioning by limited-memory quasi-Newton updating
SIAM J. Optim., 1999
Cited by 39 (2 self)
Abstract: The paper proposes a preconditioner for the conjugate gradient method (CG) that is designed for solving systems of equations Ax = b_i with different right-hand-side vectors, or for solving a sequence of slowly varying systems A_k x = b_k. The preconditioner has the form of a limited-memory quasi-Newton matrix and is generated using information from the CG iteration. The automatic preconditioner does not require explicit knowledge of the coefficient matrix A and is therefore suitable for problems where only products of A times a vector can be computed. Numerical experiments indicate that the preconditioner has most to offer when these matrix-vector products are expensive to compute, and when low accuracy in the solution is required. The effectiveness of the preconditioner is tested within a Hessian-free Newton method for optimization, and by solving certain linear systems arising in finite element models.
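The idea can be sketched as follows: while running plain CG on the first system, collect (step, A·step) curvature pairs, then apply the L-BFGS two-loop recursion built from those pairs as the preconditioner for the next right-hand side. This is a simplified illustration of the approach (the paper's pair selection and scaling are more careful), not the authors' code; note that only products with A are ever needed.

```python
import numpy as np

def lbfgs_apply(q, S, Y):
    """Two-loop recursion: apply the inverse quasi-Newton approximation
    built from curvature pairs (s_i, y_i) to the vector q."""
    q = q.copy()
    alphas = []
    for s, y in reversed(list(zip(S, Y))):
        a = (s @ q) / (y @ s)
        q -= a * y
        alphas.append(a)
    q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])       # standard initial scaling
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def cg_collect(A, b, m=10, tol=1e-10, maxit=1000):
    """Plain CG on A x = b; stores the last m (step, A*step) pairs."""
    x = np.zeros_like(b); r = b.copy(); p = r.copy()
    S, Y = [], []
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        if len(S) == m:
            S.pop(0); Y.pop(0)
        S.append(alpha * p); Y.append(alpha * Ap)  # (s, y) with y = A s
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, S, Y

def pcg(A, b, S, Y, tol=1e-10, maxit=1000):
    """Preconditioned CG; z = H r is applied via the two-loop recursion."""
    x = np.zeros_like(b); r = b.copy()
    z = lbfgs_apply(r, S, Y); p = z.copy()
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        z_new = lbfgs_apply(r_new, S, Y)
        p = z_new + ((r_new @ z_new) / (r @ z)) * p
        r, z = r_new, z_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((60, 60))
A = M.T @ M + 60 * np.eye(60)          # SPD test matrix (synthetic)
b1, b2 = rng.standard_normal(60), rng.standard_normal(60)
x1, S, Y = cg_collect(A, b1)           # first solve builds the preconditioner
x2 = pcg(A, b2, S, Y)                  # second solve reuses it
```

Because every stored pair satisfies y = A s with s^T y > 0, the two-loop operator is symmetric positive definite, so preconditioned CG remains valid.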
On Centroidal Voronoi Tessellation: Energy Smoothness and Fast Computation
, 2008
Cited by 31 (14 self)
Abstract: Centroidal Voronoi tessellation (CVT) is a fundamental geometric structure that finds many applications in ...
All-at-once Optimization for Coupled Matrix and Tensor Factorizations
, 2011
Cited by 26 (1 self)
Abstract: Joint analysis of data from multiple sources has the potential to improve our understanding of the underlying structures in complex data sets. For instance, in restaurant recommendation systems, recommendations can be based on rating histories of customers. In addition to rating histories, customers' social networks (e.g., Facebook friendships) and restaurant category information (e.g., Thai or Italian) can also be used to make better recommendations. The task of fusing data, however, is challenging since data sets can be incomplete and heterogeneous, i.e., data consist of both matrices, e.g., the person-by-person social network matrix or the restaurant-by-category matrix, and higher-order tensors, e.g., the "ratings" tensor of the form restaurant by meal by person. In this paper, we are particularly interested in fusing data sets with the goal of capturing their underlying latent structures. We formulate this problem as a coupled matrix and tensor factorization (CMTF) problem where heterogeneous data sets are modeled by fitting outer-product models to higher-order tensors and matrices in a coupled manner. Unlike traditional approaches solving this problem using alternating algorithms, we propose an all-at-once optimization approach called CMTF-OPT (CMTF-OPTimization), which is a gradient-based optimization approach for joint analysis of matrices and higher-order tensors. We also extend the algorithm to handle coupled incomplete data sets. Using numerical experiments, we demonstrate that the proposed all-at-once approach is more accurate than the alternating least squares approach.
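The coupled objective can be made concrete. Below, a third-order tensor X is modeled by a rank-R outer-product (CP) factorization over factors A, B, C, while a coupled matrix Y shares the factor A (Y ≈ A Dᵀ); both residuals are minimized jointly over all factors at once. This sketches only the problem formulation, using SciPy's L-BFGS-B with finite-difference gradients rather than the paper's CMTF-OPT algorithm; the dimensions and data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
I, J, K, M, R = 4, 3, 3, 5, 2          # tensor X: I x J x K, matrix Y: I x M, rank R
A0, B0, C0, D0 = (rng.standard_normal(s) for s in [(I, R), (J, R), (K, R), (M, R)])
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)   # rank-R tensor: sum of R outer products
Y = A0 @ D0.T                                # coupled matrix sharing the factor A

sizes = [(I, R), (J, R), (K, R), (M, R)]

def unpack(v):
    """Split the stacked parameter vector back into the four factor matrices."""
    out, pos = [], 0
    for (n, r) in sizes:
        out.append(v[pos:pos + n * r].reshape(n, r))
        pos += n * r
    return out

def loss(v):
    A, B, C, D = unpack(v)
    resid_X = X - np.einsum('ir,jr,kr->ijk', A, B, C)
    resid_Y = Y - A @ D.T
    return 0.5 * np.sum(resid_X ** 2) + 0.5 * np.sum(resid_Y ** 2)

# All-at-once: one optimizer over the stacked factors, instead of
# alternating least-squares updates of one factor at a time.
v0 = np.concatenate([F.ravel() for F in (A0, B0, C0, D0)])
v0 = v0 + 0.1 * rng.standard_normal(v0.shape)   # start near (not at) the truth
res = minimize(loss, v0, method="L-BFGS-B")
```

The paper's contribution is the gradient-based all-at-once solver itself (including analytic gradients and handling of missing entries); here a stock optimizer stands in for it to show the shape of the coupled objective.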