Results 1–10 of 35
Meta-Learning Evolutionary Artificial Neural Networks
Journal, Elsevier Science, Netherlands, 2003
"... In this paper, we present MLEANN (MetaLearning Evolutionary Artificial Neural Network), an automatic computational framework for the adaptive optimization of artificial neural networks wherein the neural network architecture, activation function, connection weights; learning algorithm and its param ..."
Abstract

Cited by 36 (10 self)
 Add to MetaCart
Abstract: In this paper, we present MLEANN (Meta-Learning Evolutionary Artificial Neural Network), an automatic computational framework for the adaptive optimization of artificial neural networks in which the network architecture, activation function, connection weights, learning algorithm, and its parameters are adapted to the problem. We explored the performance of MLEANN and of conventionally designed artificial neural networks on function approximation problems, using three well-known chaotic time series for the comparative evaluation. We also review popular state-of-the-art neural network learning algorithms and report experimental results on convergence speed and generalization performance, covering the backpropagation, conjugate gradient, quasi-Newton, and Levenberg-Marquardt algorithms on the three chaotic time series. The performance of the different learning algorithms was evaluated as the activation functions and architecture were varied. We further present the theoretical background, algorithm, and design strategy, and demonstrate how effectively the proposed MLEANN framework designs a neural network that is smaller, faster, and has better generalization performance.
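The evolutionary weight adaptation described above can be sketched in miniature. This is purely illustrative and is not the MLEANN framework: the tiny network, the fitness function, and all mutation/selection settings below are invented for the example.

```python
import math
import random

# Toy (mu + lambda)-style evolution of the connection weights of a small
# fixed-architecture network fitting sin(x) on [-2, 2].  Illustrates the
# "evolve the weights" idea only; MLEANN also adapts architecture,
# activation functions, and the learning algorithm itself.

random.seed(0)

def net(weights, x):
    # 3 hidden tanh units; weights = [w1..w3, b1..b3, v1..v3, c]
    w, b, v, c = weights[0:3], weights[3:6], weights[6:9], weights[9]
    return sum(vi * math.tanh(wi * x + bi) for wi, bi, vi in zip(w, b, v)) + c

xs = [i / 10 for i in range(-20, 21)]
target = [math.sin(x) for x in xs]

def mse(weights):
    return sum((net(weights, x) - t) ** 2 for x, t in zip(xs, target)) / len(xs)

pop = [[random.gauss(0, 1) for _ in range(10)] for _ in range(20)]
for gen in range(200):
    pop.sort(key=mse)
    parents = pop[:5]                       # truncation selection
    children = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                for _ in range(15)]         # Gaussian mutation of all genes
    pop = parents + children

best = min(pop, key=mse)
print(round(mse(best), 4))
```

With these (arbitrary) settings the best individual's fit error drops well below that of the random initial population.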
A Survey of Continuous-Time Computation Theory
Advances in Algorithms, Languages, and Complexity, 1997
"... Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuoustime computation. However, while specialcase algorithms and devices are being developed, relatively little work exists o ..."
Abstract

Cited by 29 (6 self)
 Add to MetaCart
Abstract: Motivated partly by the resurgence of neural computation research and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area and point to some of the open research questions.

From the paper's introduction: After a long period of oblivion, interest in analog computation is again on the rise. The immediate cause for this new wave of activity is surely the success of the neural networks "revolution", which has provided hardware designers with several new numerically based, computationally interesting models that are structurally simple enough to be implemented directly in silicon. (For designs and actual implementations of neural models in VLSI, see e.g. [30, 45].) However, the more fundamental...
Incremental Gradient Algorithms with Stepsizes Bounded Away From Zero
Computational Opt. and Appl., 1998
"... Abstract. We consider the class of incremental gradient methods for minimizing a sum of continuously differentiable functions. An important novel feature of our analysis is that the stepsizes are kept bounded away from zero. We derive the first convergence results of any kind for this computationall ..."
Abstract

Cited by 25 (2 self)
 Add to MetaCart
Abstract: We consider the class of incremental gradient methods for minimizing a sum of continuously differentiable functions. An important novel feature of our analysis is that the stepsizes are kept bounded away from zero. We derive the first convergence results of any kind for this computationally important case. In particular, we show that a certain ε-approximate solution can be obtained and establish the linear dependence of ε on the stepsize limit. Incremental gradient methods are particularly well-suited for large neural network training problems, where obtaining an approximate solution is typically sufficient and often preferable to computing an exact solution. Thus, in the context of neural networks, the approach presented here is related to the principle of tolerant training. Our results justify numerous stepsize rules that were derived on the basis of extensive numerical experimentation but for which no theoretical analysis was previously available. In addition, convergence to (exact) stationary points is established when the gradient satisfies a certain growth property.
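The ε-neighborhood behavior described above is easy to see on a one-dimensional toy problem (a hedged sketch, not the paper's analysis): an incremental pass with a fixed stepsize settles near, but not exactly at, the minimizer, with the residual error shrinking roughly linearly in the stepsize.

```python
# Incremental gradient method with a constant (non-vanishing) stepsize on
# f(w) = sum_i (w - a_i)^2 / 2.  The exact minimizer is mean(a) = 4.0; with a
# fixed stepsize the iterates settle into an eps-neighborhood of it, with eps
# roughly proportional to the stepsize.

a = [1.0, 3.0, 5.0, 7.0]            # component data (arbitrary example values)

def run(step, sweeps=200):
    w = 0.0
    for _ in range(sweeps):
        for ai in a:                # one incremental pass: one component at a time
            w -= step * (w - ai)    # gradient of (w - ai)^2 / 2
    return w

for step in (0.1, 0.01):
    print(step, abs(run(step) - 4.0))
```

Shrinking the stepsize by 10x shrinks the residual error by roughly 10x, matching the linear dependence of ε on the stepsize limit.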
A real-time blind source separation scheme and its application to reverberant and noisy acoustic environments
2006
On Gradient Adaptation With Unit-Norm Constraints
1999
"... In this correspondence, we describe gradientbased adaptive algorithms within parameter spaces that are specified by jjwjj = 1, where jj \Delta jj is any vector norm. We provide several algorithm forms and relate them to true gradient procedures via their geometric structures. We also give algorithm ..."
Abstract

Cited by 15 (5 self)
 Add to MetaCart
Abstract: In this correspondence, we describe gradient-based adaptive algorithms within parameter spaces that are specified by ||w|| = 1, where || · || is any vector norm. We provide several algorithm forms and relate them to true gradient procedures via their geometric structures. We also give algorithms that mitigate an inherent numerical instability for L2 norm-constrained optimization tasks. Simulations showing the performance of the techniques for independent component analysis are provided.
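A minimal sketch of one member of this family (not the paper's exact algorithms): a projected-gradient update on the unit L2 sphere, where the raw gradient is projected onto the tangent plane at w, a step is taken, and w is renormalized; the renormalization guards against the numerical drift of ||w|| away from 1 that the abstract alludes to.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def unit_norm_step(w, grad, mu):
    # Project the gradient onto the tangent plane of the sphere at w,
    # take a step, then re-impose ||w|| = 1 exactly.
    inner = sum(wi * gi for wi, gi in zip(w, grad))
    tangent = [gi - inner * wi for gi, wi in zip(grad, w)]  # remove radial part
    w_new = [wi - mu * ti for wi, ti in zip(w, tangent)]
    n = norm(w_new)
    return [wi / n for wi in w_new]

# Example: minimize f(w) = w^T A w on the sphere with A = diag(3, 1);
# the minimizer is the eigenvector of the smallest eigenvalue, [0, 1].
A = [3.0, 1.0]                        # diagonal matrix, stored as its diagonal
w = [0.8, 0.6]
for _ in range(500):
    grad = [2 * Ai * wi for Ai, wi in zip(A, w)]
    w = unit_norm_step(w, grad, 0.05)
print([round(x, 3) for x in w])       # converges to [0.0, 1.0]
```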
The Simultaneous Recurrent Neural Network Addressing the Scaling Problem in Static Optimization
International Journal of Neural Systems, 2001
"... A trainable recurrent neural network, Simultaneous Recurrent Neural network, is proposed to address the scaling problem faced by neural network algorithms in static optimization. The proposed algorithm derives its computational power to address the scaling problem through its ability to "learn ..."
Abstract

Cited by 9 (6 self)
 Add to MetaCart
Abstract: A trainable recurrent neural network, the Simultaneous Recurrent Neural Network, is proposed to address the scaling problem faced by neural network algorithms in static optimization. The proposed algorithm derives its computational power to address the scaling problem from its ability to "learn", in contrast to existing recurrent neural algorithms, which are not trainable. The recurrent backpropagation algorithm is employed to train the recurrent, relaxation-based neural network so as to associate fixed points of the network dynamics with locally optimal solutions of the static optimization problems. Performance of the algorithm is tested on the NP-hard Traveling Salesman Problem in the range of 100 to 600 cities. Simulation results indicate that the proposed algorithm is able to consistently locate high-quality solutions for all problem sizes tested. In other words, the proposed algorithm scales demonstrably well with problem size with respect to solution quality, at the expense of increased computational cost for large problem sizes.
Iterative Equalization with Adaptive Soft Feedback
IEEE Trans. on Commun., 2000
"... In this letter, a novel equalization algorithm applying softdecision feedback and designed for binary transmission is introduced. In contrast to conventional decisionfeedback equalization (DFE), iterations are necessary, because a simple matched filter serves as feedforward filter, which collects ..."
Abstract

Cited by 7 (4 self)
 Add to MetaCart
Abstract: In this letter, a novel equalization algorithm applying soft-decision feedback and designed for binary transmission is introduced. In contrast to conventional decision-feedback equalization (DFE), iterations are necessary because a simple matched filter serves as the feedforward filter, which collects signal energy but creates noncausal intersymbol interference. The rule for generating soft decisions is adapted continuously to the current state of the algorithm. In most cases, standard DFE methods are clearly outperformed. For a class of certain channel impulse responses, the performance of maximum-likelihood sequence estimation is attained in principle. The high performance of the scheme is explained using results from neural network theory.
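The soft-feedback idea can be illustrated with a deliberately simplified toy (assumptions: BPSK symbols, a known causal 2-tap channel, no noise, a fixed tanh soft-decision rule; the letter's actual scheme uses a matched-filter front end with noncausal interference and an adaptive soft-decision rule):

```python
import math

# Iterative soft-feedback interference cancellation, toy version.  Each pass
# subtracts interference reconstructed from tanh soft decisions; the soft
# values harden toward +/-1 as the estimates improve across iterations.

h = [1.0, 0.5]                              # hypothetical channel taps
bits = [1, -1, 1, 1, -1, 1, -1, -1]         # BPSK symbols
rx = [bits[n] * h[0] + (bits[n - 1] * h[1] if n > 0 else 0.0)
      for n in range(len(bits))]            # received samples, noiseless

soft = [0.0] * len(bits)                    # soft symbol estimates
for _ in range(5):                          # a few cancellation passes
    for n in range(len(bits)):
        isi = soft[n - 1] * h[1] if n > 0 else 0.0
        soft[n] = math.tanh(2.0 * (rx[n] - isi))  # soft decision on cleaned sample

decisions = [1 if s > 0 else -1 for s in soft]
print(decisions == bits)                    # prints True
```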
Recurrent neural networks for computing pseudoinverses of rank-deficient matrices
1997
"... Abstract. Three recurrent neural networks are presented for computing the pseudoinverses of rankdeficient matrices. The first recurrent neural network has the dynamical equation similar to the one proposed earlier for matrix inversion and is capable of Moore–Penrose inversion under the condition of ..."
Abstract

Cited by 6 (2 self)
 Add to MetaCart
Abstract: Three recurrent neural networks are presented for computing the pseudoinverses of rank-deficient matrices. The first recurrent neural network has a dynamical equation similar to the one proposed earlier for matrix inversion and is capable of Moore–Penrose inversion under the condition of zero initial states. The second recurrent neural network consists of an array of neurons corresponding to a pseudoinverse matrix, with decaying self-connections and constant connections in each row or column. The third recurrent neural network consists of two layers of neuron arrays corresponding, respectively, to a pseudoinverse matrix and a Lagrangian matrix, with constant connections. All three recurrent neural networks are also composed of a number of independent subnetworks corresponding to the rows or columns of a pseudoinverse. The proposed recurrent neural networks are shown to be capable of computing the pseudoinverses of rank-deficient matrices.
Key words: neural networks, dynamical systems, generalized inverses
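A hedged sketch in the spirit of the "first network" above (the paper's actual network equations may differ): Euler-discretized gradient-flow dynamics dX/dt = -Aᵀ(AX - I) started from the zero state converge to the Moore–Penrose pseudoinverse even when A is rank-deficient, because the trajectory never leaves the row space of A.

```python
# Gradient flow on (1/2)||A X - I||_F^2, forward-Euler discretization.
# With X(0) = 0 the iterates stay in the row space of A, so the fixed point
# is the minimum-norm solution, i.e. the Moore-Penrose pseudoinverse A^+.

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(col) for col in zip(*P)]

A = [[1.0, 2.0], [2.0, 4.0]]       # rank-deficient: row 2 = 2 * row 1
I = [[1.0, 0.0], [0.0, 1.0]]
X = [[0.0, 0.0], [0.0, 0.0]]       # zero initial state is essential here
dt = 0.01
for _ in range(20000):
    R = matmul(A, X)               # A X
    E = [[R[i][j] - I[i][j] for j in range(2)] for i in range(2)]
    G = matmul(transpose(A), E)    # gradient: A^T (A X - I)
    X = [[X[i][j] - dt * G[i][j] for j in range(2)] for i in range(2)]

print([[round(x, 3) for x in row] for row in X])   # approx A^+ = A / 25
```

For this rank-one A the exact pseudoinverse is A/25, i.e. [[0.04, 0.08], [0.08, 0.16]], which the flow recovers.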
Determination of Weights for Relaxation Recurrent Neural Networks
Neurocomputing, 2000
"... A theorem which establishes the solutions of a given optimization problem as stable points in the state space of singlelayer relaxationtype recurrent neural networks is proposed. This theorem establishes the necessary conditions for the neural network to converge to a solution by proposing certai ..."
Abstract

Cited by 5 (3 self)
 Add to MetaCart
Abstract: A theorem is proposed which establishes the solutions of a given optimization problem as stable points in the state space of single-layer relaxation-type recurrent neural networks. The theorem establishes the necessary conditions for the neural network to converge to a solution by proposing certain values for the constraint weight parameters of the network. Convergence performance of the discrete Hopfield network with the proposed bounds on constraint weight parameters is tested on a set of constraint satisfaction and optimization problems, including the Traveling Salesman Problem, the Assignment Problem, the Weighted Matching Problem, the N-Queens Problem, and the Graph Path Search Problem. Simulation and stability analysis results indicate that, as a result of the suggested bounds, the set of solutions becomes a subset of the set of stable points in the state space. For the Traveling Salesman, Assignment, and Weighted Matching Problems, the two sets are equal, leading to convergence to a solution after each relaxation. Convergence to a solution after each relaxation is not guaranteed for the N-Queens and Graph Path Search Problems, since there the solution set is a proper subset of the stable-point set. Furthermore, the simulation results indicate that the discrete Hopfield network converged to mostly average-quality solutions, as expected from a gradient-descent search algorithm. In conclusion, the suggested bounds on weight parameters guarantee that the discrete Hopfield network will locate a solution after each relaxation for a class of optimization problems of any size, although the solutions will be of average quality rather than optimal.
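The relaxation mechanism itself can be shown on a tiny example (the weights and bias below are hypothetical illustration values, not the bounds derived in the paper): a discrete Hopfield network whose weights encode "exactly one unit active per pair", relaxed by asynchronous threshold updates until it settles at a stable point.

```python
import random

# Discrete Hopfield relaxation on 4 binary neurons encoding a 2x2 one-hot
# row constraint: strong mutual inhibition within each pair plus a positive
# bias.  Asynchronous updates never increase the quadratic energy, so the
# state settles at a stable point where each pair has exactly one active unit.

random.seed(1)

W = [[0, -2, 0, 0],
     [-2, 0, 0, 0],
     [0, 0, 0, -2],
     [0, 0, -2, 0]]
b = [1, 1, 1, 1]

def energy(s):
    return (-0.5 * sum(W[i][j] * s[i] * s[j] for i in range(4) for j in range(4))
            - sum(b[i] * s[i] for i in range(4)))

s = [random.randint(0, 1) for _ in range(4)]
for _ in range(40):                       # asynchronous relaxation
    i = random.randrange(4)
    net_in = sum(W[i][j] * s[j] for j in range(4)) + b[i]
    s[i] = 1 if net_in > 0 else 0

print(s, energy(s))
```

At any stable point, a pair with both units off would switch one on (net input +1) and a pair with both on would switch one off (net input -1), so the stable points are exactly the one-hot states, each with energy -2.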
Sliding modes in solving convex programming problems
SIAM J. Control Optim., 1998
"... Sliding modes are used to analyze a class of dynamical systems that solve convex programming problems. The analysis is carried out using concepts from the theory of differential equations with discontinuous righthand sides and Lyapunov stability theory. It is shown that the equilibrium points of ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
Abstract: Sliding modes are used to analyze a class of dynamical systems that solve convex programming problems. The analysis is carried out using concepts from the theory of differential equations with discontinuous right-hand sides and Lyapunov stability theory. It is shown that the equilibrium points of the system coincide with the minimizers of the convex programming problem, and that irrespective of the initial state of the system, the state trajectory converges to the solution set of the problem. The dynamic behavior of the systems is illustrated by two numerical examples.
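A toy sketch of the flavor of such dynamics (not the paper's system): the discontinuous flow dx/dt = -sign(x - 2) drives x to the minimizer of the convex function f(x) = |x - 2| from any initial state. A forward-Euler discretization chatters in a band of width about dt around the solution, whereas the continuous-time flow slides exactly on x = 2.

```python
# Sliding-mode-style dynamics for minimizing f(x) = |x - 2|.
# The right-hand side is discontinuous at the minimizer; the discrete
# iterates approach x = 2 at unit speed and then oscillate within ~dt of it.

def sign(v):
    return (v > 0) - (v < 0)

def simulate(x0, dt=0.001, steps=10000):
    x = x0
    for _ in range(steps):
        x -= dt * sign(x - 2.0)
    return x

for x0 in (-5.0, 0.0, 7.0):
    print(x0, simulate(x0))
```

Regardless of the initial state, the trajectory ends within the chattering band around the minimizer, mirroring the global convergence claim above.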