Results 1 - 10 of 554
Evolving Artificial Neural Networks, 1999
"... This paper: 1) reviews different combinations between ANN's and evolutionary algorithms (EA's), including using EA's to evolve ANN connection weights, architectures, learning rules, and input features; 2) discusses different search operators which have been used in various EA's; ..."
Cited by 574 (6 self)
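The weight-evolution combination this survey lists first is easy to make concrete. Below is a minimal sketch, not anything from the cited paper: a fixed two-layer network on the XOR task (the task, topology, population size, and mutation scale are all this sketch's assumptions), with truncation selection and Gaussian mutation standing in for the EA.

```python
# Illustrative sketch only (not from the cited survey): evolving the
# connection weights of a fixed-topology network with a simple
# mutation-plus-selection EA, one of the ANN/EA combinations reviewed.
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, x):
    """Tiny fixed topology: 2 inputs -> 4 hidden (tanh) -> 1 output."""
    W1, W2 = weights
    return np.tanh(x @ W1) @ W2

def fitness(weights, X, y):
    """Negative mean squared error: higher is better."""
    return -np.mean((forward(weights, X).ravel() - y) ** 2)

# Toy task: XOR, a classic non-linearly-separable benchmark.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

pop = [(rng.normal(size=(2, 4)), rng.normal(size=(4, 1))) for _ in range(50)]
for gen in range(200):
    scored = sorted(pop, key=lambda w: fitness(w, X, y), reverse=True)
    parents = scored[:10]                     # truncation selection
    pop = [(W1 + rng.normal(scale=0.1, size=W1.shape),  # Gaussian mutation
            W2 + rng.normal(scale=0.1, size=W2.shape))
           for W1, W2 in parents for _ in range(5)]

best = max(pop, key=lambda w: fitness(w, X, y))
print(forward(best, X).ravel())  # should approach [0, 1, 1, 0]
```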
Distortion invariant object recognition in the dynamic link architecture - IEEE TRANSACTIONS ON COMPUTERS, 1993
"... We present an object recognition system based on the Dynamic Link Architecture, which is an extension to classical Artificial Neural Networks. The Dynamic Link Architecture ex-ploits correlations in the fine-scale temporal structure of cellular signals in order to group neurons dynamically into hig ..."
Cited by 637 (80 self)
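The grouping principle the snippet names, correlations in fine-scale temporal structure binding neurons together, can be illustrated far more crudely than the cited architecture does it. A toy sketch only, assuming synthetic signals, a plain Pearson correlation, and a hand-picked 0.5 threshold:

```python
# Toy illustration (far simpler than the cited system) of grouping
# neurons by correlations in the fine temporal structure of their signals.
import numpy as np

rng = np.random.default_rng(1)

# Two hidden "sources"; each of 6 neurons follows one source plus noise,
# so correlation structure, not mean rate, carries the grouping.
t = np.linspace(0, 1, 500)
sources = [np.sin(2 * np.pi * 7 * t), np.sign(np.sin(2 * np.pi * 11 * t))]
membership = [0, 0, 0, 1, 1, 1]
signals = np.array([sources[m] + 0.3 * rng.normal(size=t.size)
                    for m in membership])

corr = np.corrcoef(signals)          # pairwise temporal correlations

# Greedy grouping: link any pair whose correlation exceeds a threshold.
groups, unassigned = [], set(range(len(signals)))
while unassigned:
    seed = unassigned.pop()
    group = {seed} | {j for j in unassigned if corr[seed, j] > 0.5}
    unassigned -= group
    groups.append(sorted(group))
print(groups)  # expected: [[0, 1, 2], [3, 4, 5]]
```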
A Model of Saliency-based Visual Attention for Rapid Scene Analysis, 1998
"... A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing salie ..."
Cited by 1748 (72 self)
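The pipeline in this abstract (multiscale features, one combined saliency map, selection in order of decreasing saliency) can be caricatured in a few lines. A sketch only, not the cited model: a box blur stands in for the Gaussian pyramid, absolute center-surround differences for the feature channels, and a plain argmax for the winner-take-all network.

```python
# Minimal sketch of the saliency idea in the snippet: center-surround
# contrast at several scales is summed into one map, and the maximum
# is attended first. All scales and sizes here are assumptions.
import numpy as np

def box_blur(img, k):
    """Crude separable box blur with window 2k+1 (stands in for a
    Gaussian pyramid level in the real model)."""
    pad = np.pad(img, ((k, k), (k, k)), mode="edge")
    h = sum(pad[:, i:i + img.shape[1]] for i in range(2 * k + 1)) / (2 * k + 1)
    return sum(h[i:i + img.shape[0], :] for i in range(2 * k + 1)) / (2 * k + 1)

def saliency(img, scales=(1, 3, 6)):
    """Center-surround contrast |fine - coarse| at each scale, summed."""
    sal = np.zeros_like(img)
    for k in scales:
        sal += np.abs(box_blur(img, k) - box_blur(img, 3 * k))
    return sal / len(scales)

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0                  # one bright, salient patch
s = saliency(img)
attended = np.unravel_index(np.argmax(s), s.shape)
print(attended)                           # lands on or near the bright patch
```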
A scaled conjugate gradient algorithm for fast supervised learning - NEURAL NETWORKS, 1993
"... A supervised learning algorithm (Scaled Conjugate Gradient, SCG) with superlinear convergence rate is introduced. The algorithm is based upon a class of optimization techniques well known in numerical analysis as the Conjugate Gradient Methods. SCG uses second order information from the neural netwo ..."
Cited by 451 (0 self)
and avoids a time-consuming line search, which CGB and BFGS use in each iteration in order to determine an appropriate step size.
Incorporating problem-dependent structural information in the architecture of a neural network often lowers the overall complexity. The smaller the complexity of the neural
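The two claims in these snippets, second-order information without a line search, are the heart of SCG. The sketch below shows that mechanism only, not Møller's full algorithm (the toy quadratic, restart policy, and constants are this sketch's assumptions): a finite-difference Hessian-vector product gives the curvature along the search direction, and a scaling term keeps it positive so the step size has a closed form.

```python
# Minimal sketch of the mechanism this abstract names (not Moller's
# full SCG): second-order information via a finite-difference
# Hessian-vector product, plus a scaling term lam that keeps the
# curvature estimate positive, so no line search is needed.
import numpy as np

def grad(w, A, b):
    """Gradient of the toy quadratic f(w) = 0.5 w.T A w - b.T w."""
    return A @ w - b

def scg_step(w, p, A, b, lam, sigma=1e-6):
    """One step along direction p with a closed-form step size."""
    g = grad(w, A, b)
    r = -g                                          # negative gradient
    s = (grad(w + sigma * p, A, b) - g) / sigma     # ~ Hessian times p
    delta = p @ s + lam * (p @ p)                   # scaled curvature
    if delta <= 0:                                  # indefinite: rescale
        lam = 2.0 * (lam - delta / (p @ p))
        delta = p @ s + lam * (p @ p)
    alpha = (p @ r) / delta                         # step size, no search
    return w + alpha * p, lam

A = np.array([[3.0, 0.5], [0.5, 1.0]])              # SPD toy problem
b = np.array([1.0, -2.0])
w, lam = np.zeros(2), 1e-4
for _ in range(20):
    p = -grad(w, A, b)                  # restarted (non-conjugate) directions,
    w, lam = scg_step(w, p, A, b, lam)  # enough to show the step rule
print(w, np.linalg.solve(A, b))         # both approach the true minimiser
```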
An Evolutionary Algorithm that Constructs Recurrent Neural Networks - IEEE TRANSACTIONS ON NEURAL NETWORKS
"... Standard methods for inducing both the structure and weight values of recurrent neural networks fit an assumed class of architectures to every task. This simplification is necessary because the interactions between network structure and function are not well understood. Evolutionary computation, whi ..."
Cited by 266 (14 self)
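Since the paper's point is inducing structure as well as weights, a sketch of a structural mutation step is the natural illustration. The representation and operators below are assumptions for illustration, not the cited algorithm's:

```python
# Hedged sketch: one structural-mutation step of the kind such
# constructive EAs use (add/remove nodes and connections); the cited
# paper's actual operators and representation differ in detail.
import random

random.seed(2)

# A recurrent net as a dict: nodes, plus weighted directed connections
# (recurrent links are simply edges that point "backwards").
net = {
    "nodes": {0, 1, 2},                       # 0,1 inputs; 2 output
    "conns": {(0, 2): 0.5, (1, 2): -0.3},     # (src, dst) -> weight
}

def mutate_structure(net):
    """Randomly add a node, add a connection, or delete a connection."""
    choice = random.choice(["add_node", "add_conn", "del_conn"])
    if choice == "add_node":
        new = max(net["nodes"]) + 1
        net["nodes"].add(new)
        src = random.choice(sorted(net["nodes"] - {new}))
        net["conns"][(src, new)] = random.gauss(0, 1)
        net["conns"][(new, 2)] = random.gauss(0, 1)
    elif choice == "add_conn":                # may create a recurrent loop
        src, dst = random.sample(sorted(net["nodes"]), 2)
        net["conns"].setdefault((src, dst), random.gauss(0, 1))
    elif net["conns"]:
        del net["conns"][random.choice(sorted(net["conns"]))]
    return net

for _ in range(5):
    mutate_structure(net)
print(len(net["nodes"]), "nodes,", len(net["conns"]), "connections")
```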
Context-Dependent Pre-trained Deep Neural Networks for Large Vocabulary Speech Recognition - IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, 2012
"... We propose a novel context-dependent (CD) model for large vocabulary speech recognition (LVSR) that leverages recent advances in using deep belief networks for phone recognition. We describe a pretrained deep neural network hidden Markov model (DNN-HMM) hybrid architecture that trains the DNN to pr ..."
Cited by 254 (50 self)
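The hybrid interface such DNN-HMM systems use is worth spelling out: the DNN is trained to output senone posteriors, but Viterbi decoding over the HMM wants likelihoods, so the posteriors are divided by the senone priors. A sketch with made-up toy numbers:

```python
# Sketch of the hybrid DNN-HMM interface the abstract describes: the
# DNN outputs senone posteriors p(s|x); the HMM decoder needs scaled
# likelihoods, obtained by dividing out the senone priors p(s).
import numpy as np

def posteriors_to_scaled_loglik(dnn_posteriors, senone_priors, eps=1e-10):
    """log p(x|s) + const = log p(s|x) - log p(s)."""
    return np.log(dnn_posteriors + eps) - np.log(senone_priors + eps)

# Toy numbers: 3 senones, one acoustic frame. A real system uses
# thousands of context-dependent senones and a softmax DNN output.
posteriors = np.array([0.7, 0.2, 0.1])   # DNN softmax for this frame
priors = np.array([0.5, 0.3, 0.2])       # relative senone frequencies
print(posteriors_to_scaled_loglik(posteriors, priors))
# The first senone is likelier than its prior suggests, so its scaled
# log-likelihood is largest and would win in Viterbi decoding.
```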
A Review of Evolutionary Artificial Neural Networks, 1993
"... Research on potential interactions between connectionist learning systems, i.e., artificial neural networks (ANNs), and evolutionary search procedures, like genetic algorithms (GAs), has attracted a lot of attention recently. Evolutionary ANNs (EANNs) can be considered as the combination of ANNs and ..."
Cited by 202 (23 self)
Learning Deep Architectures for AI
"... Theoretical results suggest that in order to learn the kind of complicated functions that can represent highlevel abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as i ..."
Cited by 183 (30 self)
such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable
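The "recently proposed" algorithms the snippet refers to train deep architectures greedily, one layer at a time. The sketch below keeps that loop but swaps the DBN's RBMs for tiny tied-weight linear autoencoders so it fits in a few lines; the data, layer sizes, and learning rate are all assumptions:

```python
# Sketch of greedy layer-wise pretraining (the DBN recipe the snippet
# alludes to, with linear autoencoders standing in for RBMs).
import numpy as np

rng = np.random.default_rng(3)

def train_autoencoder(X, hidden, lr=0.01, steps=500):
    """Fit W to minimise ||X - (X W) W^T||^2; returns the encoder W."""
    W = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    for _ in range(steps):
        E = (X @ W) @ W.T - X          # reconstruction error
        # gradient of the tied-weights reconstruction loss w.r.t. W
        gW = X.T @ (E @ W) + E.T @ (X @ W)
        W -= lr * gW / len(X)
    return W

X = rng.normal(size=(200, 16))   # unlabeled data
layers, inp = [], X
for hidden in (8, 4):            # train one layer at a time, bottom up
    W = train_autoencoder(inp, hidden)
    layers.append(W)
    inp = inp @ W                # next layer trains on this layer's codes
print([W.shape for W in layers])  # [(16, 8), (8, 4)]
```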
Evolutionary induction of sparse neural trees - Evolutionary Computation, 1997
"... This paper is concerned with the automatic induction of parsimonious neural networks. In contrast to other program induction situations, network induction entails parametric learning as well as structural adaptation. We present a novel representation scheme called neural trees that allows efficient ..."
Cited by 40 (15 self)
learning of both network architectures and parameters by genetic search. A hybrid evolutionary method is developed for neural tree induction that combines genetic programming and the breeder genetic algorithm under the unified framework of the minimum description length principle. The method
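The MDL framing the snippet describes scores a candidate tree by data fit plus a complexity term, so sparser trees win when they explain the data about as well. The sketch below shows only the shape of such a criterion; the actual coding terms derived in the paper differ:

```python
# Sketch of an MDL-style fitness: description length trades data fit
# against model complexity. The coding terms here are assumptions,
# not the cited paper's derivation.
import numpy as np

def mdl_fitness(errors, n_params, n_nodes, alpha=0.5):
    """Lower is better: data code length + alpha * model code length."""
    data_cost = len(errors) * np.log(np.mean(np.square(errors)) + 1e-12)
    model_cost = (n_params + n_nodes) * np.log(len(errors))
    return data_cost + alpha * model_cost

# A big tree that fits slightly better can still lose to a sparse one.
big = mdl_fitness(errors=np.full(100, 0.10), n_params=80, n_nodes=40)
small = mdl_fitness(errors=np.full(100, 0.12), n_params=12, n_nodes=6)
print(big, small, "-> prefer", "small" if small < big else "big")
```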
Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks, 1995
"... Bayesian probabilily theory provides a unifying framework for dara modelling. In this framework the overall aims are to find models that are well-matched to, the &a, and to use &se models to make optimal predictions. Neural network laming is interpreted as an inference of the most probable ..."
Cited by 178 (6 self)
parameters for the model, given the training data. The search in model space (i.e., the space of architectures, noise models, preprocessings, regularizers and weight decay constants) can then also be treated as an inference problem, in which we infer the relative probability of alternative models, given
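The inference the abstract describes has a closed form in the simplest case: with a Gaussian weight prior (decay constant alpha) and Gaussian noise (precision beta), the most probable weights of a linear model are the ridge-regression solution. A sketch under those textbook assumptions, not MacKay's full framework:

```python
# Sketch of the Bayesian view the abstract takes: the log posterior
# over weights is a regularised error, M(w) = beta*E_D + alpha*E_W,
# and the weight decay constants become inferable hyperparameters.
import numpy as np

def log_posterior_unnorm(w, X, y, alpha, beta):
    """log p(w|D) + const for a linear model y ~ N(Xw, 1/beta)."""
    E_D = 0.5 * np.sum((y - X @ w) ** 2)   # data misfit
    E_W = 0.5 * np.sum(w ** 2)             # weight "complexity"
    return -(beta * E_D + alpha * E_W)

# The most probable weights have a closed form for a linear model:
# w_MP = beta * (beta X^T X + alpha I)^(-1) X^T y  (ridge regression).
rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
alpha, beta = 1.0, 100.0
w_mp = beta * np.linalg.solve(beta * X.T @ X + alpha * np.eye(3), X.T @ y)
print(w_mp)  # close to the generating weights [1, -2, 0.5]
```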