Results 1 - 7 of 7
Training Simultaneous Recurrent Neural Network with Resilient Propagation for Combinatorial Optimization
 Nos 3
, 2002
Abstract

Cited by 6 (3 self)
This paper proposes a non-recurrent training algorithm, resilient propagation, for the Simultaneous Recurrent Neural network operating in relaxation mode for computing high-quality solutions of static optimization problems. Implementation details related to adaptation of the recurrent neural network weights through the non-recurrent training algorithm, resilient backpropagation, are formulated through an algebraic approach. Performance of the proposed neuro-optimizer on a well-known static combinatorial optimization problem, the Traveling Salesman problem, is evaluated on the basis of computational complexity measures and subsequently compared to the performance of the Simultaneous Recurrent Neural network trained with standard backpropagation and with recurrent backpropagation on the same static optimization problem. Simulation results indicate that the Simultaneous Recurrent Neural network trained with the resilient backpropagation algorithm is able to locate superior-quality solutions with a comparable amount of computational effort for the Traveling Salesman problem.
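The resilient propagation rule referenced above adapts a per-weight step size from the sign of successive gradients and ignores the gradient magnitude. A minimal sketch of one RPROP-style update in NumPy (parameter names and constants are illustrative defaults, not taken from the paper):

```python
import numpy as np

def rprop_update(grad, prev_grad, step, weights,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One resilient-propagation (RPROP) update.

    Per-weight step sizes adapt from the sign of successive
    gradients; only the sign of the current gradient sets the
    update direction, never its magnitude.
    """
    sign_change = grad * prev_grad
    # Same sign: gradient direction is stable, grow the step.
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max), step)
    # Sign flip: we overshot a minimum, shrink the step.
    step = np.where(sign_change < 0,
                    np.maximum(step * eta_minus, step_min), step)
    # On a sign flip, suppress this weight's update for one
    # iteration (the gradient is also zeroed for the next call).
    effective_grad = np.where(sign_change < 0, 0.0, grad)
    weights = weights - np.sign(effective_grad) * step
    return weights, step, effective_grad
```

In a training loop, `effective_grad` is passed back in as `prev_grad` on the next iteration; the step sizes grow geometrically along stable descent directions and halve on oscillation, which is the property the abstract credits for the efficient relaxation-mode training.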
Simultaneous recurrent neural network trained with nonrecurrent Backpropagation algorithm for static optimisation
 Neural Computing and Applications
, 2003
Abstract

Cited by 3 (1 self)
Abstract – This paper explores the feasibility of employing the non-recurrent backpropagation training algorithm for a recurrent neural network, the Simultaneous Recurrent Neural network, for static optimization. A simplifying observation that maps the recurrent network dynamics, which is configured to operate in relaxation mode as a static optimizer, to feedforward network dynamics is leveraged to facilitate application of a non-recurrent training algorithm such as standard backpropagation and its variants. A simulation study that aims to assess the feasibility, optimizing potential, and computational efficiency of training the Simultaneous Recurrent Neural network with non-recurrent backpropagation is conducted. A comparative computational complexity analysis between the Simultaneous Recurrent Neural network trained with the non-recurrent backpropagation algorithm and the same network trained with the recurrent backpropagation algorithm is performed. Simulation results demonstrate that it is feasible to apply non-recurrent backpropagation to train the Simultaneous Recurrent Neural network. The optimality and computational complexity analysis fails to demonstrate any advantage of non-recurrent backpropagation over recurrent backpropagation for the optimization problem considered. However, considerable future potential remains to be explored, given that computationally efficient variants of the backpropagation training algorithm, namely quasi-Newton and conjugate gradient descent among others, are also applicable to the neural network proposed for static optimization in this paper.
Search for A Lyapunov Function through Empirical Approximation by Artificial Neural Nets: Theoretical Framework
Abstract

Cited by 2 (0 self)
An artificial neural network is proposed as a function approximator for empirical modeling of a Lyapunov function for a nonlinear dynamic system that exhibits stable behavior as potentially observable in its state space. The theoretical framework for the methodology of designing the so-called Lyapunov neural network, which empirically models a Lyapunov function, is described. Algorithms for training the Lyapunov neural network for a neurodynamic system are presented.
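The empirical criterion such a network would be trained toward can be stated as a check over sampled states: the candidate function is positive away from the origin and decreases along trajectories of the dynamics. A small sketch of that check (the quadratic candidate and the dynamics in the usage below are illustrative stand-ins, not the paper's constructions):

```python
import numpy as np

def lyapunov_conditions_hold(V, f, samples, dt=1e-3, eps=1e-8):
    """Empirically check Lyapunov conditions for a candidate V
    on sampled states of the dynamics x' = f(x):
      1. V(x) > 0 for states away from the origin,
      2. V decreases along trajectories (forward-difference
         approximation of one integration step).
    Returns True only if every sample satisfies both conditions.
    """
    for x in samples:
        if np.linalg.norm(x) < eps:
            continue  # V is only required to vanish at the origin
        if V(x) <= 0:
            return False  # not positive definite at this sample
        x_next = x + dt * f(x)  # one Euler step of the dynamics
        if V(x_next) >= V(x):
            return False  # V fails to decrease along the flow
    return True
```

A training procedure would turn these two conditions into loss terms over sampled states; the check above is the acceptance test such an empirically modeled Lyapunov function must pass.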
A heuristic and its mathematical analogue within artificial neural network adaptation context
 Neural Netw World 15:129–136
, 2005
Abstract

Cited by 1 (1 self)
Abstract – This paper presents an observation on the adaptation of Hopfield neural network dynamics configured as a relaxation-based search algorithm for static optimization. More specifically, two adaptation rules, one heuristically formulated and the second gradient-descent based, for updating the constraint weighting coefficients of Hopfield neural network dynamics are discussed. Application of the two adaptation rules for constraint weighting coefficients is shown to lead to an identical form for the update equations. This finding suggests that the heuristically formulated rule and the gradient-descent based rule are analogues of each other. Accordingly, in the current context, common-sense reasoning by a domain expert appears to possess a corresponding mathematical framework.
Computational Promise of Simultaneous Recurrent Network with A Stochastic Search Mechanism
Abstract
Abstract – This paper explores the computational promise of enhancing Simultaneous Recurrent Neural networks with a stochastic search mechanism as static optimizers. Successful application of Simultaneous Recurrent Neural networks to static optimization problems, where training was achieved through one of a number of deterministic gradient descent algorithms including recurrent backpropagation, backpropagation, and resilient propagation, was recently reported in the literature. Accordingly, it is now highly desirable to assess whether enhancing the neural optimization algorithm with a stochastic search mechanism would be of substantial utility and value, which is the focus of the study reported in this paper. Two techniques are employed to assess the added value of a potential enhancement through a stochastic search mechanism: the first entails comparison of Simultaneous Recurrent Neural network (SRN) performance with a stochastic search algorithm, the Genetic Algorithm, and the second leverages estimation of the quality of optimal solutions through Held-Karp bounds. The Traveling Salesman Problem is employed as the benchmark for the simulation study reported herein. Simulation results suggest that significant improvement is likely to be possible in the quality of solutions for the Traveling Salesman problem, and potentially other static optimization problems, if the Simultaneous Recurrent Neural network is augmented with a stochastic search mechanism.
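The Held-Karp bound mentioned above builds on the 1-tree relaxation: every tour is a 1-tree (a spanning tree over all cities but one, plus the two cheapest edges at the remaining city), so the minimum 1-tree cost never exceeds the optimal tour length. A sketch of the zero-penalty version of the bound (the full Held-Karp bound tightens this with subgradient-optimized node penalties, omitted here):

```python
def one_tree_bound(dist):
    """Lower bound on the optimal TSP tour length via a 1-tree.

    Builds a minimum spanning tree over nodes 1..n-1 with Prim's
    algorithm, then adds the two cheapest edges incident to node 0.
    """
    n = len(dist)
    in_tree = {1}
    # Cheapest known connection of each remaining node to the tree.
    best = {v: dist[1][v] for v in range(2, n)}
    mst_cost = 0.0
    while len(in_tree) < n - 1:
        v = min(best, key=best.get)   # closest node outside the tree
        mst_cost += best.pop(v)
        in_tree.add(v)
        for u in best:                # relax connections through v
            if dist[v][u] < best[u]:
                best[u] = dist[v][u]
    # The two cheapest edges at node 0 close the 1-tree.
    e = sorted(dist[0][v] for v in range(1, n))
    return mst_cost + e[0] + e[1]
```

Dividing a heuristic tour length by this bound gives the kind of quality ratio the abstract uses to score SRN and Genetic Algorithm solutions without knowing the true optimum.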
Enhancing Computational Promise of Neural Optimization for Graph-Theoretic Problems in Real-Time Environments
, 2007
Abstract
This paper demonstrates the enhanced utility of neural static optimization algorithms for graph-theoretic problems in real-time environments under the assumption that fast computation cycles for near-optimal solutions are desirable. It assumes that a hardware realization of the neural optimization algorithm, which is then likely to fully exploit the high degree of parallelism inherent to such optimization problems, is feasible. Accordingly, the paper discusses the application of an adaptive neural optimization scheme, based on a known model and training algorithm, to shortest path computation for digraphs with unit edge costs, which proved to be “difficult” for non-adaptive neural optimization algorithms, i.e. the Hopfield network and its stochastic derivatives. A simulation study demonstrates that the presented neural optimization scheme is able to compute near-optimal solutions for large instances of the problem, i.e. 1000-vertex graphs. The study concludes with the finding that a hardware realization of the presented neural optimization algorithm is poised to compute near-optimal solutions for a class of problems entailing graph search and its rich set of variants within a real-time environment.
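For unit edge costs, breadth-first search computes exact shortest paths in time linear in the graph size, which is the natural sequential reference against which near-optimal neural solutions on such digraphs would be scored. A minimal sketch (adjacency-list digraph; names are illustrative):

```python
from collections import deque

def unit_cost_shortest_path(adj, src, dst):
    """Shortest path in a digraph with unit edge costs via BFS.

    adj maps each vertex to an iterable of its successors.
    Returns the vertex sequence from src to dst, or None if
    dst is unreachable.
    """
    parent = {src: None}          # also serves as the visited set
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []             # reconstruct by walking parents
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in parent:
                parent[v] = u
                q.append(v)
    return None                   # dst unreachable from src
```

Because BFS expands vertices in order of hop count, the first time `dst` is dequeued its recorded path is guaranteed shortest, so the ratio of a neural optimizer's path length to this one directly measures its near-optimality.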
STABILITY OF SIMULTANEOUS RECURRENT NEURAL NETWORK DYNAMICS FOR ...
, 2002
Abstract
A new trainable and recurrent neural optimization algorithm, which has potentially superior capabilities compared to existing neural search algorithms for computing high-quality solutions of static optimization problems in a computationally efficient manner, is studied. Specifically, a local stability analysis of the dynamics of a relaxation-based recurrent neural network, the Simultaneous Recurrent Neural network, for static optimization problems is presented. The results of the theoretical analysis as well as the correlated simulation study lead to the conjecture that the Simultaneous Recurrent Neural network dynamics demonstrates desirable stability characteristics. The dynamics often converge to fixed points upon conclusion of a relaxation cycle, which facilitates adaptation of weights through one of many fixed-point training algorithms. The trainability of this neural algorithm enables relatively high-quality solutions to be computed for large-scale problem instances with computational efficiency, particularly when compared to solutions computed by the Hopfield network and its derivative algorithms, including those with stochastic search control mechanisms.
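The fixed-point behavior described above can be illustrated by iterating a sigmoidal recurrent layer until the state stops changing; when the map is contractive, the relaxation converges to a unique fixed point regardless of the initial state, which is what makes fixed-point training algorithms applicable. A toy sketch (the weights in the usage below are illustrative, not the network from the paper):

```python
import numpy as np

def relax(W, b, x0, max_iters=500, tol=1e-6):
    """Iterate x <- sigmoid(W x + b) until the state change falls
    below tol, i.e. until a fixed point is numerically reached.

    Returns the relaxed state and the number of iterations used.
    A contractive map (e.g. sufficiently small norm of W, given
    the sigmoid's slope bound of 1/4) guarantees convergence.
    """
    x = x0
    for i in range(max_iters):
        x_new = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # one relaxation step
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, i + 1
        x = x_new
    return x, max_iters
```

With contractive weights, relaxations started from different initial states settle on the same fixed point, and it is that converged state which the fixed-point training rules differentiate through to adapt the weights.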