Results 1–10 of 367
An introduction to the conjugate gradient method without the agonizing pain
, 1994
"... ..."
(Show Context)
GLOBAL CONVERGENCE PROPERTIES OF CONJUGATE GRADIENT METHODS FOR OPTIMIZATION
, 1992
"... This paper explores the convergence ofnonlinear conjugate gradient methods without restarts, and with practical line searches. The analysis covers two classes ofmethods that are globally convergent on smooth, nonconvex functions. Some properties of the FletcherReeves method play an important role ..."
Abstract

Cited by 119 (3 self)
This paper explores the convergence of nonlinear conjugate gradient methods without restarts, and with practical line searches. The analysis covers two classes of methods that are globally convergent on smooth, nonconvex functions. Some properties of the Fletcher-Reeves method play an important role in the first family, whereas the second family shares an important property with the Polak-Ribière method. Numerical experiments are presented.
LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION
, 1978
"... An algorithm for solving largescale nonlinear ' programs with linear constraints is presented. The method combines efficient sparsematrix techniques as in the revised simplex method with stable quasiNewton methods for handling the nonlinearities. A generalpurpose production code (MINOS) is ..."
Abstract

Cited by 108 (21 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
Optimization techniques on Riemannian manifolds
 Fields Institute Communications
, 1994
"... Abstract. The techniques and analysis presented in this paper provide new methods to solve optimization problems posed on Riemannian manifolds. A new point of view is offered for the solution of constrained optimization problems. Some classical optimization techniques on Euclidean space are general ..."
Abstract

Cited by 90 (1 self)
Abstract. The techniques and analysis presented in this paper provide new methods to solve optimization problems posed on Riemannian manifolds. A new point of view is offered for the solution of constrained optimization problems. Some classical optimization techniques on Euclidean space are generalized to Riemannian manifolds. Several algorithms are presented and their convergence properties are analyzed employing the Riemannian structure of the manifold. Specifically, two apparently new algorithms, which can be thought of as Newton’s method and the conjugate gradient method on Riemannian manifolds, are presented and shown to possess, respectively, quadratic and superlinear convergence. Examples of each method on certain Riemannian manifolds are given with the results of numerical experiments. Rayleigh’s quotient defined on the sphere is one example. It is shown that Newton’s method applied to this function converges cubically, and that the Rayleigh quotient iteration is an efficient approximation of Newton’s method. The Riemannian version of the conjugate gradient method applied to this function gives a new algorithm for finding the eigenvectors corresponding to the extreme eigenvalues of a symmetric matrix. Another example arises from extremizing the function tr(Θ^T QΘN) on the special orthogonal group. In a similar example, it is shown that Newton’s method applied to the sum of the squares of the off-diagonal entries of a symmetric matrix converges cubically.
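The claim that the Rayleigh quotient iteration is an efficient approximation of Newton's method on the sphere can be illustrated with a short sketch. This is a minimal NumPy implementation of the classical Rayleigh quotient iteration, not the paper's Riemannian Newton code; the test matrix and starting vector are made up for illustration:

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, iters=8):
    """Approximate an eigenpair of the symmetric matrix A.

    Each step solves a system shifted by the current Rayleigh quotient;
    near an eigenvector the convergence is cubic.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        mu = x @ A @ x                                 # Rayleigh quotient on the sphere
        try:
            y = np.linalg.solve(A - mu * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break                                      # mu is numerically an eigenvalue
        x = y / np.linalg.norm(y)                      # project back to the sphere
    return x @ A @ x, x

# Hypothetical symmetric test matrix and starting vector
A = np.array([[2.0, 1.0], [1.0, 3.0]])
mu, x = rayleigh_quotient_iteration(A, np.array([1.0, 0.0]))
```

In a handful of iterations the residual A x − μ x is at machine-precision level, which is the cubic behavior the abstract refers to.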
Probabilistic Reasoning in Terminological Logics
, 1994
"... In this paper a probabilistic extensions for terminological knowledge representation languages is defined. Two kinds of probabilistic statements are introduced: statements about conditional probabilities between concepts and statements expressing uncertain knowledge about a specific object. The usua ..."
Abstract

Cited by 89 (5 self)
In this paper a probabilistic extension for terminological knowledge representation languages is defined. Two kinds of probabilistic statements are introduced: statements about conditional probabilities between concepts, and statements expressing uncertain knowledge about a specific object. The usual model-theoretic semantics for terminological logics are extended to define interpretations for the resulting probabilistic language. Our main objective is to find an adequate modelling of the way the two kinds of probabilistic knowledge are combined in commonsense inferences of probabilistic statements. Cross-entropy minimization is a technique that turns out to be very well suited for achieving this end. 1 INTRODUCTION Terminological knowledge representation languages (concept languages, terminological logics) are used to describe hierarchies of concepts. While the expressive power of the various languages that have been defined (e.g. KL-ONE [BS85], ALC [SSS91]) varies greatly in that ...
The NEWUOA software for unconstrained optimization without derivatives
, 2004
"... Abstract: The NEWUOA software seeks the least value of a function F(x), x∈R n, when F(x) can be calculated for any vector of variables x. The algorithm is iterative, a quadratic model Q ≈ F being required at the beginning of each iteration, which is used in a trust region procedure for adjusting the ..."
Abstract

Cited by 85 (2 self)
Abstract: The NEWUOA software seeks the least value of a function F(x), x ∈ R^n, when F(x) can be calculated for any vector of variables x. The algorithm is iterative, a quadratic model Q ≈ F being required at the beginning of each iteration, which is used in a trust region procedure for adjusting the variables. When Q is revised, the new Q interpolates F at m points, the value m = 2n+1 being recommended. The remaining freedom in the new Q is taken up by minimizing the Frobenius norm of the change to ∇²Q. Only one interpolation point is altered on each iteration. Thus, except for occasional origin shifts, the amount of work per iteration is only of order (m+n)², which allows n to be quite large. Many questions were addressed during the development of NEWUOA, for the achievement of good accuracy and robustness. They include the choice of the initial quadratic model, the need to maintain enough linear independence in the interpolation conditions in the presence of computer rounding errors, and the stability of the updating of certain matrices that allow the fast revision of Q. Details are given of the techniques that answer all the questions that occurred. The software was tried on several test problems. Numerical results for nine of them are reported and discussed, in order to demonstrate the performance of the software for up to 160 variables.
A new conjugate gradient method with guaranteed descent and an efficient line search
 SIAM J. OPTIM
, 2005
"... A new nonlinear conjugate gradient method and an associated implementation, based on an inexact line search, are proposed and analyzed. With exact line search, our method reduces to a nonlinear version of the Hestenes–Stiefel conjugate gradient scheme. For any (inexact) line search, our scheme sat ..."
Abstract

Cited by 68 (6 self)
A new nonlinear conjugate gradient method and an associated implementation, based on an inexact line search, are proposed and analyzed. With exact line search, our method reduces to a nonlinear version of the Hestenes–Stiefel conjugate gradient scheme. For any (inexact) line search, our scheme satisfies the descent condition g_k^T d_k ≤ −(7/8)‖g_k‖². Moreover, a global convergence result is established when the line search fulfills the Wolfe conditions. A new line search scheme is developed that is efficient and highly accurate. Efficiency is achieved by exploiting properties of linear interpolants in a neighborhood of a local minimizer. High accuracy is achieved by using a convergence criterion, which we call the “approximate Wolfe” conditions, obtained by replacing the sufficient decrease criterion in the Wolfe conditions with an approximation that can be evaluated with greater precision in a neighborhood of a local minimum than the usual sufficient decrease criterion. Numerical comparisons are given with both L-BFGS and conjugate gradient methods using the unconstrained optimization problems in the CUTE library.
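The descent bound g_k^T d_k ≤ −(7/8)‖g_k‖² quoted in the abstract is a purely algebraic property of the Hager–Zhang direction: it holds for any vectors with d_k^T y_k ≠ 0, not just iterates of a run. A small sketch checking it on random data (the vectors here are arbitrary, chosen only to exercise the inequality):

```python
import numpy as np

rng = np.random.default_rng(0)
worst = -np.inf                                 # largest ratio (g^T d_new) / ||g||^2 seen
for _ in range(1000):
    g = rng.normal(size=5)                      # gradient at the new iterate
    d = rng.normal(size=5)                      # previous search direction
    y = rng.normal(size=5)                      # gradient difference y_k = g_{k+1} - g_k
    dty = d @ y
    if abs(dty) < 1e-8:
        continue                                # formula undefined when d^T y = 0
    # Hager-Zhang beta: (y - 2 d ||y||^2 / (d^T y))^T g / (d^T y)
    beta = (y - 2.0 * d * (y @ y) / dty) @ g / dty
    d_new = -g + beta * d                       # new search direction
    worst = max(worst, (g @ d_new) / (g @ g))   # should never exceed -7/8
```

Every trial keeps the ratio at or below −7/8, matching the theorem.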
A survey of nonlinear conjugate gradient methods
 Pacific Journal of Optimization
, 2006
"... ..."
(Show Context)
A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
 SIAM J. Optim
, 1999
"... . Conjugate gradient methods are widely used for unconstrained optimization, especially large scale problems. However, the strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method, w ..."
Abstract

Cited by 56 (9 self)
Conjugate gradient methods are widely used for unconstrained optimization, especially large-scale problems. However, the strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method, which converges globally provided the line search satisfies the standard Wolfe conditions. The conditions on the objective function are also weak, being similar to those required by the Zoutendijk condition. Key words. unconstrained optimization, new conjugate gradient method, Wolfe conditions, global convergence. AMS subject classifications. 65K, 90C 1. Introduction. Our problem is to minimize a function of n variables min f(x), (1.1) where f is smooth and its gradient g(x) is available. Conjugate gradient methods for solving (1.1) are iterative methods of the form x_{k+1} = x_k + α_k d_k, (1.2) where α_k > 0 is a steplength and d_k is a search direction. Normally the search direction at...
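The iteration (1.2), combined with the Dai–Yuan choice β_k = ‖g_{k+1}‖² / (d_k^T (g_{k+1} − g_k)) and a Wolfe line search, can be sketched in a few lines. This is a minimal illustration using SciPy's Wolfe line search, not the authors' implementation; the quadratic test function is made up, and a production code would add restarts and safeguards:

```python
import numpy as np
from scipy.optimize import line_search

def dai_yuan_cg(f, grad, x0, tol=1e-8, max_iter=200):
    """Nonlinear conjugate gradient with the Dai-Yuan beta formula."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                       # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]    # Wolfe-condition step length
        if alpha is None:                        # line search failed: restart
            d = -g
            alpha = line_search(f, grad, x, d)[0]
            if alpha is None:
                break
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (d @ (g_new - g))   # Dai-Yuan formula
        d = -g_new + beta * d
        g = g_new
    return x

# Hypothetical convex quadratic f(x) = 1/2 x^T A x - b^T x, minimizer A^{-1} b
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x_star = dai_yuan_cg(lambda x: 0.5 * x @ A @ x - b @ x,
                     lambda x: A @ x - b,
                     np.zeros(2))
```

On this quadratic the iterates reach the minimizer A⁻¹b; on general smooth functions the standard Wolfe conditions are what the paper's global convergence theorem requires.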
Descent property and global convergence of the Fletcher–Reeves method with inexact line search
 IMA Journal of Numerical Analysis
, 1985
"... If an inexact line search which satisfies certain standard conditions is used, then it is proved that the FletcherReeves method has a descent property and is globally convergent in a certain sense. THE FLETCHERREEVES (1964) method (with or without resetting) is known to have a descent property whe ..."
Abstract

Cited by 54 (0 self)
If an inexact line search which satisfies certain standard conditions is used, then it is proved that the Fletcher-Reeves method has a descent property and is globally convergent in a certain sense. The Fletcher-Reeves (1964) method (with or without resetting) is known to have a descent property when the step size is found by an exact line search. Powell (1984) has shown that global convergence for this method holds when an exact line search is used. In this note we show that both the descent property and the global convergence property of the Fletcher-Reeves method still hold for an inexact line search when the step size satisfies certain standard conditions. The Fletcher-Reeves method aims to solve the unconstrained optimization problem minimize f(x), x ∈ R^n (1) by a sequence of line searches x^(k+1) = x^(k) + α^(k) s^(k) (2) from a user-supplied estimate x^(1). If the line search is exact the step size α^(k) is defined by α^(k) = arg min_α f(x^(k) + α s^(k)). (3) In practice an exact line search is not usually possible (for a general problem) and any value of α^(k) is accepted which satisfies certain standard conditions. Fletcher (1980) suggests that α^(k) is such that x^(k+1) satisfies the condition