Results 11-20 of 29
A Unified Description of Iterative Algorithms for Traffic Equilibria
 European Journal of Operational Research
, 1992
Abstract

Cited by 11 (9 self)
The purpose of this paper is to provide a unified description of iterative algorithms for the solution of traffic equilibrium problems. We demonstrate that a large number of well-known solution techniques can be described in a unified manner through the concept of partial linearization, and establish close relationships with other algorithmic classes for nonlinear programming and variational inequalities. In the case of nonseparable travel costs, the class of partial linearization algorithms is shown to yield new results in the theory of finite-dimensional variational inequalities. The possibility of applying truncated algorithms within the framework is also discussed.
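The classical instance of linearization in traffic assignment is the Frank-Wolfe method, in which the cost function is fully linearized at each iterate. As a rough sketch (the toy problem and all names below are our own assumptions, not from the paper), linearizing a quadratic objective over the probability simplex gives:

```python
# Sketch: Frank-Wolfe (full linearization of the objective) on the simplex.
# Hypothetical toy problem: min 0.5*||x - c||^2 over the probability simplex,
# whose linearized subproblem is solved by the best vertex e_j.

def frank_wolfe(c, iters=2000):
    n = len(c)
    x = [1.0 / n] * n                          # start at the simplex barycentre
    for k in range(iters):
        g = [x[i] - c[i] for i in range(n)]    # gradient of 0.5*||x - c||^2
        j = min(range(n), key=lambda i: g[i])  # linearized subproblem: vertex e_j
        gamma = 2.0 / (k + 2)                  # standard diminishing stepsize
        x = [(1 - gamma) * xi for xi in x]
        x[j] += gamma
    return x

c = [0.2, 0.7, 0.1]        # c lies in the simplex, so the optimum is x = c
x = frank_wolfe(c)
print([round(v, 3) for v in x])
```

Partial linearization, as described in the abstract, generalizes this by linearizing only part of the cost while keeping the rest nonlinear in the subproblem.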
Automatic Differentiation And Spectral Projected Gradient Methods For Optimal Control Problems
, 1998
Abstract

Cited by 11 (5 self)
The purpose of this paper is to show the application of these canonical formulas to optimal control processes integrated by the Runge-Kutta family of numerical methods. There are many papers concerning numerical comparisons between automatic differentiation, finite differences and symbolic differentiation; see, for example, [1, 2, 6, 7, 21] among others. Another objective is to test the behavior of the spectral projected gradient methods introduced in [5]. These methods combine the classical projected gradient with two recently developed ingredients in optimization: (i) the nonmonotone line search schemes of Grippo, Lampariello and Lucidi ([24]), and (ii) the spectral steplength, introduced by Barzilai and Borwein ([3]) and analyzed by Raydan ([30, 31]). This choice of the steplength requires little computational work and greatly speeds up the convergence of gradient methods. The numerical experiments presented in [5], showing the high performance of these fast and easily implementable methods, motivate us to combine the spectral projected gradient methods with automatic differentiation. Both tools are used in this work for the development of codes for the numerical solution of optimal control problems.

In Section 2 of this paper, we apply the canonical formulas to the discrete version of the optimal control problem. In Section 3, we give a concise survey of spectral projected gradient algorithms. Section 4 presents some numerical experiments. Some final remarks are presented in Section 5.

2 CANONICAL FORMULAS

The basic optimal control problem can be described as follows. Let a process be governed by a system of ordinary differential equations

    dx(t)/dt = f(x(t), u(t), ...),   T_0 <= t <= T_f,   (1)

where x : [T_0, T_f] -> IR^{n_x}, u : [T_0, T_f] -> U, a compact subset of IR^{n_u}, and ... in V ...
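The spectral projected gradient idea the abstract describes can be sketched as follows (not the authors' code; the test problem and all names are our own assumptions): a Barzilai-Borwein steplength combined with a nonmonotone line search in the style of Grippo, Lampariello and Lucidi, on a toy box-constrained quadratic.

```python
# Sketch of a spectral projected gradient (SPG) iteration:
# BB steplength + nonmonotone backtracking, for box constraints.

def project(x, lo, hi):
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def spg(f, grad, x, lo, hi, iters=100, M=10):
    x = project(x, lo, hi)
    lam = 1.0                                  # spectral (BB) steplength
    history = [f(x)]                           # recent values for nonmonotone test
    for _ in range(iters):
        g = grad(x)
        trial = project([xi - lam * gi for xi, gi in zip(x, g)], lo, hi)
        d = [t - xi for t, xi in zip(trial, x)]            # projected direction
        fref = max(history[-M:])                           # nonmonotone reference
        gTd = sum(gi * di for gi, di in zip(g, d))
        alpha = 1.0
        while alpha > 1e-12 and \
                f([xi + alpha * di for xi, di in zip(x, d)]) > fref + 1e-4 * alpha * gTd:
            alpha *= 0.5                                   # backtracking
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(grad(x_new), g)]
        sTy = sum(a * b for a, b in zip(s, y))
        if sTy > 0:                                        # BB steplength s's / s'y
            lam = min(1e4, max(1e-4, sum(a * a for a in s) / sTy))
        x = x_new
        history.append(f(x))
    return x

# Toy problem: min 0.5*||x - c||^2 over [0,1]^2 with c outside the box,
# so the solution is the projection of c onto the box, namely (1, 0).
c = [2.0, -1.0]
f = lambda x: 0.5 * sum((xi - ci) ** 2 for xi, ci in zip(x, c))
grad = lambda x: [xi - ci for xi, ci in zip(x, c)]
x = spg(f, grad, [0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
print([round(v, 6) for v in x])
```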
Solution Of Optimal Control Problems By A Pointwise Projected Newton Method
 SIAM J. Control Optim
, 1995
Abstract

Cited by 7 (0 self)
In the context of optimal control of ordinary differential equations, we prove local superlinear convergence and constraint identification results for an extension of the projected Newton method of Bertsekas. The estimates are also valid for discretized versions of the method-problem pair.

Key words. projected Newton iteration, optimal control

AMS(MOS) subject classifications. 47H17, 49K15, 49M15, 65J15, 65K10

1. Introduction. In many areas of optimal control, problems are formulated with simple constraints on the control. For this type of problem, gradient projection algorithms have proven quite successful, because they are able to take into account the structure of the underlying optimization problem. Another interesting feature of these methods is that they can often be formulated in infinite-dimensional spaces, which is important for the application to optimal control problems. In general, let H denote a Hilbert space and, for some closed convex subset U of H, c...
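A much-simplified sketch of one flavour of projected Newton iteration for bounds x >= 0 (loosely after Bertsekas; the epsilon rule and Hessian handling are simplified, and the toy problem is our own assumption): freeze near-active variables whose gradient points outward, take a Newton step in the free variables, then project.

```python
# Sketch: projected Newton step for x >= 0 on a separable quadratic.

def projected_newton(grad, hess_diag, x, iters=20, eps=1e-8):
    for _ in range(iters):
        g = grad(x)
        # Active set: variables at (or near) the bound pushed outward by the gradient.
        active = [i for i in range(len(x)) if x[i] <= eps and g[i] > 0]
        step = [0.0 if i in active else g[i] / hess_diag[i] for i in range(len(x))]
        x = [max(0.0, x[i] - step[i]) for i in range(len(x))]  # Newton step + projection
    return x

# min 2*(x0 - 1)^2 + (x1 + 2)^2  s.t. x >= 0  -> solution (1, 0)
grad = lambda x: [4 * (x[0] - 1), 2 * (x[1] + 2)]
hess_diag = [4.0, 2.0]
x = projected_newton(grad, hess_diag, [5.0, 5.0])
print(x)   # reaches (1.0, 0.0)
```

On this quadratic the active constraint is identified after one step, illustrating the constraint identification property the abstract mentions.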
Cost approximation: A unified framework of descent algorithms for nonlinear programs
 SIAM Journal on Optimization
, 1994
Abstract

Cited by 7 (4 self)
The paper describes and analyzes the cost approximation algorithm. This class of iterative descent algorithms for nonlinear programs and variational inequalities places a large number of algorithms within a common framework and provides a means for analyzing relationships among seemingly unrelated methods. A common property of the methods included in the framework is that their subproblems may be characterized by monotone mappings, which replace an additive part of the original cost mapping in an iterative manner; a step is then taken in the direction obtained, in order to reduce the value of a merit function for the original problem. The generality of the framework is illustrated through examples, and the convergence characteristics of the algorithm are analyzed for applications to nondifferentiable optimization. The convergence results are applied to some example methods, demonstrating the strength of the analysis compared to existing results.

Key words. Nondifferentiable optimization ...
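One way to see how classical methods fall out of such a framework (a heavily simplified, unconstrained illustration under our own assumptions, not the paper's formulation): approximate the cost by a simple mapping Phi and solve the subproblem Phi'(y) = Phi'(x) - f'(x); choosing Phi(y) = ||y||^2 / (2*gamma) makes the subproblem solution an explicit gradient step.

```python
# Sketch: one cost-approximation iteration where the approximating mapping
# Phi'(y) = y/gamma reduces the subproblem to a plain gradient step.

def cost_approx_step(fprime, x, gamma):
    # Solve Phi'(y) = Phi'(x) - f'(x)  with  Phi'(y) = y/gamma:
    return [xi - gamma * gi for xi, gi in zip(x, fprime(x))]

# Toy cost: f(x) = 0.5*||x||^2, so f'(x) = x and the minimizer is the origin.
fprime = lambda x: list(x)
x = [4.0, -2.0]
for _ in range(50):
    x = cost_approx_step(fprime, x, gamma=0.5)
print([round(v, 8) for v in x])   # iterates contract toward the origin
```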
Convergence analysis of perturbed feasible descent methods
 JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS
, 1997
Abstract

Cited by 7 (2 self)
We develop a general approach to the convergence analysis of feasible descent methods in the presence of perturbations. The important novel feature of our analysis is that the perturbations need not tend to zero in the limit. In that case, standard convergence analysis techniques are not applicable, so a new approach is needed. We show that, in the presence of perturbations, a certain ε-approximate solution can be obtained, where ε depends linearly on the level of the perturbations. Applications to the gradient projection, proximal minimization, extragradient and incremental gradient algorithms are described.
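The flavour of the result can be seen numerically (our own toy setup, not the paper's): gradient projection on a box with a bounded, non-vanishing gradient perturbation does not converge to the exact solution, but settles in a neighbourhood whose radius scales roughly linearly with the perturbation level.

```python
# Sketch: perturbed gradient projection on a box; the perturbation never decays.

def perturbed_gp(grad, x, lo, hi, eps, iters=200, step=0.5):
    for k in range(iters):
        g = grad(x)
        pert = eps if k % 2 == 0 else -eps     # bounded, non-decaying perturbation
        x = [min(max(xi - step * (gi + pert), l), h)
             for xi, gi, l, h in zip(x, g, lo, hi)]
    return x

grad = lambda x: [xi - 0.5 for xi in x]        # f = 0.5*||x - 0.5*ones||^2
errors = []
for eps in (0.1, 0.01):
    x = perturbed_gp(grad, [1.0, 0.0], [0.0, 0.0], [1.0, 1.0], eps)
    err = max(abs(xi - 0.5) for xi in x)
    errors.append(err)
    print(eps, round(err, 4))                  # residual error shrinks with eps
```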
A Limited Memory Algorithm for Bound Constrained Optimization
 SIAM JOURNAL ON SCIENTIFIC COMPUTING
, 1994
Abstract

Cited by 5 (0 self)
An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited memory BFGS matrix to approximate the Hessian of the objective function. It is shown how to take advantage of the form of the limited memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
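This algorithm underlies the widely used L-BFGS-B code, which SciPy wraps. A minimal usage sketch on a toy bound-constrained problem (the problem data are made up for illustration):

```python
from scipy.optimize import minimize

# Toy objective whose unconstrained minimizer (2, -1) lies outside the box.
fun = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
jac = lambda x: [2 * (x[0] - 2.0), 2 * (x[1] + 1.0)]

res = minimize(fun, [0.0, 0.0], method="L-BFGS-B", jac=jac,
               bounds=[(0.0, 1.0), (0.0, 1.0)])
print(res.x)   # the bound-constrained solution is (1, 0)
```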
Interior-Point Gradient Methods with Diagonal Scalings for Simple-Bound Constrained Optimization
, 2004
Abstract

Cited by 4 (2 self)
In this paper, we study diagonally scaled gradient methods for simple-bound constrained optimization in a framework almost identical to that for unconstrained optimization, except that the iterates are kept within the interior of the feasible region. We establish a satisfactory global convergence theory for such interior-point gradient methods applied to Lipschitz continuously differentiable functions without any further assumption. Moreover, ...
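A sketch of the idea for x >= 0 (the scaling choice, damping rule and toy problem are our own assumptions): scale the gradient by D(x) = diag(x), then damp the step so the iterate stays strictly interior.

```python
# Sketch: interior-point gradient step with diagonal scaling for x >= 0.

def ip_gradient(grad, x, iters=500, step=0.25, shrink=0.99):
    for _ in range(iters):
        g = grad(x)
        d = [-xi * gi for xi, gi in zip(x, g)]   # diagonally scaled direction
        alpha = step
        for xi, di in zip(x, d):
            if di < 0:                           # stay strictly inside x > 0
                alpha = min(alpha, shrink * xi / -di)
        x = [xi + alpha * di for xi, di in zip(x, d)]
    return x

# min (x0 - 1)^2 + (x1 + 1)^2  s.t. x >= 0; optimum (1, 0) on the boundary.
grad = lambda x: [2 * (x[0] - 1.0), 2 * (x[1] + 1.0)]
x = ip_gradient(grad, [0.5, 0.5])
print([round(v, 4) for v in x])   # approaches (1, 0); x1 -> 0 only in the limit
```

Note that the iterates remain strictly positive; the bound is approached asymptotically rather than hit exactly, which is characteristic of interior-point iterations.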
Partial Spectral Projected Gradient Method with Active-Set Strategy for Linearly Constrained Optimization
, 2009
Abstract

Cited by 4 (0 self)
A method for linearly constrained optimization which modifies and generalizes recent box-constrained optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are included and discussed. Software supporting this paper is available through the Tango ...
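For the box-constrained special case the structure is easy to sketch (a heavily simplified illustration under our own assumptions, not the authors' algorithm): intercalate projected gradient steps, which may change the active face, with internal gradient steps restricted to the current face, i.e. with active bounds held fixed.

```python
# Sketch: projected steps intercalated with within-face iterations on a box.

def project(x, lo, hi):
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def face_method(grad, x, lo, hi, outer=30, inner=5, step=0.4):
    x = project(x, lo, hi)
    for _ in range(outer):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], lo, hi)  # may change face
        free = [i for i in range(len(x)) if lo[i] < x[i] < hi[i]]
        for _ in range(inner):                 # internal iterations in the current face
            g = grad(x)
            for i in free:
                x[i] = min(max(x[i] - step * g[i], lo[i]), hi[i])
    return x

grad = lambda x: [2 * (x[0] - 0.5), 2 * (x[1] + 1.0)]   # toy quadratic
x = face_method(grad, [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
print([round(v, 6) for v in x])   # box-constrained optimum (0.5, 0)
```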
Stability Analysis of Gradient-Based Neural Networks for Optimization Problems
 J. Global Optim
, 2000
Abstract

Cited by 3 (3 self)
The paper introduces a new approach to analyzing the stability of neural network models without using any Lyapunov function. With the new method, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion includes both isolated equilibrium points and connected equilibrium sets, which could be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability in gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of gradient-based neural networks will converge to an asymptotically stable equilibrium point of the neural networks. For a general nonlinear objective function, we propose a refined gradient-based ...
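In its simplest form (our simplification, not the paper's model), a gradient-based neural network is the gradient flow dx/dt = -grad f(x); integrating it, e.g. by forward Euler, drives the trajectory toward an equilibrium where the gradient vanishes.

```python
# Sketch: forward-Euler integration of the gradient flow dx/dt = -grad f(x).

def gradient_flow(grad, x, dt=0.01, steps=2000):
    for _ in range(steps):
        g = grad(x)
        x = [xi - dt * gi for xi, gi in zip(x, g)]
    return x

# Convex bowl f(x) = ||x - c||^2: the unique equilibrium is x = c.
c = [1.0, -2.0]
grad = lambda x: [2 * (xi - ci) for xi, ci in zip(x, c)]
x = gradient_flow(grad, [5.0, 5.0])
print([round(v, 4) for v in x])   # trajectory settles at the equilibrium c
```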
Nonmonotone And Perturbed Optimization
, 1995
Abstract

Cited by 3 (0 self)
The primary purpose of this research is the analysis of nonmonotone optimization algorithms to which standard convergence analysis techniques do not apply. We consider methods that are inherently nonmonotone, as well as nonmonotonicity induced by data perturbations or inexact subproblem solution. One of the principal applications of our results is the analysis of gradient-type methods that process the data incrementally. The computational significance of these algorithms is well documented in the neural networks literature. Such algorithms are known to be particularly well suited for large data sets, as well as for real-time applications. One of the most important methods of this type is the classical online backpropagation (BP) algorithm for training artificial neural networks. Neural networks constitute a large interdisciplinary area of research within the broader area of machine learning that has found applications in many branches of science and technology. However, much of the work ...
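An incremental gradient method of the kind analyzed here processes the sum f(w) = sum_i f_i(w) one term at a time, as online backpropagation does. A toy least-squares sketch (data and stepsize rule are our own assumptions); the per-sample updates are nonmonotone, but a diminishing stepsize still yields convergence:

```python
# Sketch: cyclic incremental gradient descent on a consistent least-squares fit.

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]    # points on the line y = 2x + 1

w, b = 0.0, 0.0
for epoch in range(2000):
    step = 0.1 / (1 + 0.01 * epoch)            # diminishing stepsize
    for x, y in data:                          # one incremental step per sample
        err = (w * x + b) - y
        w -= step * err * x
        b -= step * err
print(round(w, 3), round(b, 3))                # recovers w = 2, b = 1
```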