CiteSeerX

Results 1 - 10 of 49,606

Evolving Artificial Neural Networks

by Xin Yao, 1999
Cited by 574 (6 self)
"This paper: 1) reviews different combinations between ANN's and evolutionary algorithms (EA's), including using EA's to evolve ANN connection weights, architectures, learning rules, and input features; 2) discusses different search operators which have been used in various EA's; and 3) points out possible future research directions. It is shown, through a considerably large literature review, that combinations between ANN's and EA's can lead to significantly better intelligent systems than relying on ANN's or EA's alone."
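The first combination surveyed, using an EA to evolve connection weights, is easy to illustrate. Below is a minimal sketch, assuming a tiny fixed-architecture 2-2-1 network on XOR with Gaussian mutation and truncation selection; the network shape, population size, and fitness function are illustrative choices, not Yao's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    # Unpack a flat weight vector into a 2-2-1 MLP (weights + biases).
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def fitness(w):
    # Negative mean squared error: higher is better.
    preds = np.array([forward(w, x) for x in X])
    return -np.mean((preds - y) ** 2)

pop = rng.normal(0.0, 1.0, size=(20, 9))             # 20 candidate weight vectors
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-5:]]           # keep the 5 fittest
    children = np.repeat(parents, 4, axis=0)
    children += rng.normal(0.0, 0.3, children.shape) # Gaussian mutation
    pop = children

best = max(pop, key=fitness)
print([round(float(forward(best, x)), 2) for x in X])  # should approach [0, 1, 1, 0]
```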

Neural Network-Based Face Detection

by Henry A. Rowley, Shumeet Baluja, Takeo Kanade - IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998
Cited by 1206 (22 self)
"We present a neural network-based upright frontal face detection system. A retinally connected neural network examines small windows of an image and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We present ..."
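The pipeline the abstract outlines (classify small fixed-size windows, then arbitrate between several networks) can be sketched as follows. The 20x20 window matches the paper's setup, but the detectors here are stand-in callables and the unanimous-vote arbitration is only one of the heuristics the paper studies; multi-scale scanning via image pyramids is omitted.

```python
import numpy as np

WINDOW = 20  # Rowley et al. classify 20x20-pixel windows

def scan(image, detectors, stride=2, threshold=0.5):
    """Slide a WINDOW x WINDOW box over the image; report locations where
    every detector agrees (a simple ANDing arbitration between networks)."""
    hits = []
    H, W = image.shape
    for top in range(0, H - WINDOW + 1, stride):
        for left in range(0, W - WINDOW + 1, stride):
            patch = image[top:top + WINDOW, left:left + WINDOW]
            votes = [d(patch) > threshold for d in detectors]
            if all(votes):                      # arbitration: unanimous vote
                hits.append((top, left))
    return hits

# Stand-in "networks": any callable mapping a 20x20 patch to a score in [0, 1].
fake_net_a = lambda p: float(p.mean())
fake_net_b = lambda p: float(p.max())

image = np.zeros((64, 64))
image[10:30, 10:30] = 1.0                       # bright square as a mock "face"
print(scan(image, [fake_net_a, fake_net_b])[:3])
```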

ImageNet Classification with Deep Convolutional Neural Networks

by Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton - Advances in Neural Information Processing Systems, 2012
Cited by 1010 (11 self)
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than ..."
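The top-1 and top-5 error rates quoted are the standard ImageNet metrics: a prediction is correct under top-k if the true label appears among the model's k highest-scoring classes. A small sketch of that computation, using random stand-in logits rather than the paper's network:

```python
import numpy as np

def topk_error(logits, labels, k):
    # Indices of the k highest-scoring classes per example.
    topk = np.argsort(logits, axis=1)[:, -k:]
    correct = (topk == labels[:, None]).any(axis=1)
    return 1.0 - correct.mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 1000))   # 1000 examples x 1000 classes
labels = rng.integers(0, 1000, size=1000)

print("top-1 error:", topk_error(logits, labels, 1))  # ~0.999 for random scores
print("top-5 error:", topk_error(logits, labels, 5))  # ~0.995 for random scores
```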

Evolving Neural Networks through Augmenting Topologies

by Kenneth O. Stanley, Risto Miikkulainen - Evolutionary Computation, 2002
Cited by 536 (112 self)
"An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task ..."
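NEAT's central bookkeeping device is the innovation number: a global historical marking attached to each new connection gene so that crossover can align genes from structurally different genomes. A heavily simplified sketch of a genome and an add-connection mutation, omitting speciation, crossover, and the add-node mutation; all names here are illustrative:

```python
import random
from dataclasses import dataclass

@dataclass
class ConnGene:
    src: int
    dst: int
    weight: float
    enabled: bool
    innovation: int   # global historical marking used to align genomes

_innovations = {}     # (src, dst) -> innovation number, shared by the population
def innovation_for(src, dst):
    return _innovations.setdefault((src, dst), len(_innovations))

def mutate_add_connection(genome, node_ids, rng):
    """Try to connect two previously unconnected nodes."""
    existing = {(g.src, g.dst) for g in genome}
    candidates = [(a, b) for a in node_ids for b in node_ids
                  if a != b and (a, b) not in existing]
    if not candidates:
        return
    src, dst = rng.choice(candidates)
    genome.append(ConnGene(src, dst, rng.gauss(0, 1), True,
                           innovation_for(src, dst)))

rng = random.Random(0)
genome = [ConnGene(0, 2, 0.5, True, innovation_for(0, 2))]
mutate_add_connection(genome, node_ids=[0, 1, 2], rng=rng)
print(genome)
```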

Neural network ensembles, cross validation, and active learning

by Anders Krogh, Jesper Vedelsby - Neural Information Processing Systems 7, 1995
Cited by 479 (6 self)
"Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so it ..."
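The decomposition behind this is exact for squared error: the ensemble's error equals the weighted average member error minus the weighted average ambiguity, E = Ē − Ā, so disagreement among members can only reduce ensemble error. A numeric check of the identity on synthetic data; the sine target and noisy members are illustrative stand-ins for trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
target = np.sin(2 * np.pi * x)

# Five stand-in "ensemble members": noisy approximations of the target.
members = np.stack([target + rng.normal(0, 0.3, x.size) for _ in range(5)])
w = np.full(5, 1 / 5)                         # uniform ensemble weights
ensemble = w @ members                        # weighted ensemble output

avg_member_err = w @ ((members - target) ** 2).mean(axis=1)   # E-bar
ambiguity      = w @ ((members - ensemble) ** 2).mean(axis=1) # A-bar
ensemble_err   = ((ensemble - target) ** 2).mean()            # E

print(ensemble_err, avg_member_err - ambiguity)  # identical up to rounding
```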

Learning and development in neural networks: The importance of starting small

by Jeffrey L. Elman - Cognition, 1993
Cited by 531 (17 self)
"It is a striking fact that in humans the greatest learning occurs precisely at that point in time, childhood, when the most dramatic maturational changes also occur. This report describes possible synergistic interactions between maturational change and the ability to learn a complex domain (language), as investigated in connectionist networks. The networks are trained to process complex sentences involving relative clauses, number agreement, and several types of verb argument structure. Training fails in the case of networks which are fully formed and 'adultlike' in their capacity ..."
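The regimen Elman describes is what is now often called incremental or curriculum training: expose the network to simple inputs first and admit harder ones in stages. A minimal scheduling sketch; the complexity measure (here just string length) and stage thresholds are illustrative assumptions, not Elman's simulation:

```python
def staged_batches(examples, complexity, stages):
    """Yield (stage, batch) pairs, admitting harder examples stage by stage.

    examples   -- training items (e.g. sentences)
    complexity -- function scoring an item's difficulty (e.g. clause depth)
    stages     -- ascending complexity thresholds, one per stage
    """
    ranked = sorted(examples, key=complexity)
    for stage, limit in enumerate(stages):
        batch = [e for e in ranked if complexity(e) <= limit]
        yield stage, batch   # train on this subset before moving on

sentences = ["boys run",
             "the boy who chases dogs runs",
             "boys who dogs chase run fast"]
for stage, batch in staged_batches(sentences, complexity=len, stages=[10, 40]):
    print(stage, batch)
```

Elman's alternative finding, that the same effect can come from growing the network's working memory rather than staging the data, would slot into the same loop as a schedule over the model instead of the batches.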

Regularization Theory and Neural Networks Architectures

by Federico Girosi, Michael Jones, Tomaso Poggio - Neural Computation, 1995
Cited by 395 (32 self)
"We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead ..."
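With a Gaussian smoothness functional, a regularization network reduces to radial basis function fitting: one basis function is centered on each data point and the coefficients come from a regularized linear system, (G + λI)c = y. A minimal sketch; the kernel width and regularizer are arbitrary choices:

```python
import numpy as np

def fit_rbf(x, y, sigma=0.2, lam=1e-3):
    # Gram matrix of Gaussian radial basis functions, one centered per sample.
    G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma**2))
    # Regularized least squares: (G + lam*I) c = y.
    c = np.linalg.solve(G + lam * np.eye(len(x)), y)
    return lambda t: np.exp(-((t[:, None] - x[None, :]) ** 2) / (2 * sigma**2)) @ c

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 30))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 30)

f = fit_rbf(x, y)
t = np.linspace(0, 1, 5)
print(np.round(f(t), 2))   # smoothed estimate of sin(2*pi*t)
```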

A Learning Algorithm for Continually Running Fully Recurrent Neural Networks

by Ronald J. Williams, David Zipser, 1989
Cited by 534 (4 self)
"The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have: (1) the advantage that they do not require a precisely ... the retention of information over time periods having either fixed or indefinite length. ... A major problem in connectionist theory is to develop learning algorithms that can tap the full computational power of neural networks. Much progress has been made with feedforward networks ..."
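The algorithm derived in the paper is what is now called real-time recurrent learning (RTRL): each unit carries a table of sensitivities ∂y_k/∂w_ij that is updated online as the network runs, so no fixed training interval is needed. A compact numpy sketch of that recursion; the two-unit network, echo task, and learning rate are illustrative assumptions:

```python
import numpy as np

def rtrl_step(W, y, x, d, P, lr=0.1):
    """One online step of RTRL for a fully recurrent network.

    W -- (n, m+n) weights from [inputs; unit outputs] into the n units
    y -- (n,) current unit outputs; x -- (m,) current external input
    d -- (n,) desired outputs (NaN where no target is given)
    P -- (n, n, m+n) sensitivities dy_k/dw_ij, carried across time
    """
    z = np.concatenate([x, y])            # combined input to every unit
    s = W @ z
    y_new = np.tanh(s)
    fprime = 1.0 - y_new**2               # tanh'(s)

    # p^k_ij(t+1) = f'(s_k) [ sum_l w_{k,m+l} p^l_ij(t) + delta_ik z_j ]
    n, m_plus_n = W.shape
    recur = np.tensordot(W[:, m_plus_n - n:], P, axes=(1, 0))
    direct = np.zeros_like(P)
    direct[np.arange(n), np.arange(n), :] = z
    P_new = fprime[:, None, None] * (recur + direct)

    e = np.where(np.isnan(d), 0.0, d - y_new)     # error only on target units
    W += lr * np.tensordot(e, P_new, axes=(0, 0)) # dE/dw_ij = sum_k e_k p^k_ij
    return W, y_new, P_new

# Toy run: 2 units, 1 input; unit 0 should reproduce the input signal.
rng = np.random.default_rng(0)
n, m = 2, 1
W = rng.normal(0, 0.5, (n, m + n))
y, P = np.zeros(n), np.zeros((n, n, m + n))
for t in range(2000):
    x = rng.choice([-0.5, 0.5], size=1)
    d = np.array([x[0], np.nan])          # target for unit 0 only
    W, y, P = rtrl_step(W, y, x, d, P)
print("final error:", abs(d[0] - y[0]))
```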

Neural networks

by Michael I. Jordan, Christopher M. Bishop, 1996
Cited by 53 (0 self)
"Neural networks have emerged as a field ..."

Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression

by John G. Daugman, 1988
Cited by 478 (8 self)
"A three-layered neural network is described for transforming two-dimensional discrete signals into generalized nonorthogonal 2-D “Gabor” representations for image analysis, segmentation, and compression. These transforms are conjoint spatial/spectral representations [10], [15], which provide a ..."
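A 2-D Gabor elementary function is a complex plane-wave carrier under a Gaussian envelope; a bank of them at several orientations and frequencies yields the conjoint spatial/spectral representation the abstract refers to. A minimal kernel generator, with illustrative parameter choices rather than those of the paper's network:

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, freq=0.25, theta=0.0):
    """Complex 2-D Gabor: Gaussian envelope times a plane-wave carrier."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier runs along direction theta.
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    envelope = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * freq * xr)
    return envelope * carrier

# A small bank: 4 orientations x 2 spatial frequencies.
bank = [gabor_kernel(theta=t, freq=f)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for f in (0.125, 0.25)]

image = np.random.default_rng(0).random((21, 21))
responses = [np.abs(np.sum(image * k)) for k in bank]  # one projection per filter
print(np.round(responses, 2))
```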