Results 11–20 of 86,635
On The Computational Power Of Neural Nets
Journal of Computer and System Sciences, 1995
Cited by 179 (23 self)
Abstract: "... This paper deals with finite-size networks which consist of interconnections of synchronously evolving processors. Each processor updates its state by applying a "sigmoidal" function to a linear combination of the previous states of all units. We prove that one may simulate all Turing Mach ..."
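The synchronous sigmoidal update described in the abstract can be sketched as follows. The network size, weights, and iteration count here are hypothetical placeholders for illustration, not the paper's Turing-machine construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(state, W, b):
    """One synchronous update: every unit reads the *previous* state vector."""
    return sigmoid(W @ state + b)

# Hypothetical network: 5 units, random weights (not the paper's construction).
rng = np.random.default_rng(0)
n = 5
W = rng.normal(size=(n, n))
b = rng.normal(size=n)

state = np.zeros(n)
for _ in range(10):
    state = step(state, W, b)   # all units update simultaneously
```

Because every unit's new state depends only on the previous global state, the whole network evolves as one discrete-time dynamical system.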
Arcing Classifiers
1998
Cited by 345 (6 self)
Abstract: "... Recent work has shown that combining multiple versions of unstable classifiers such as trees or neural nets results in reduced test-set error. One of the more effective methods is bagging (Breiman [1996a]). Here, modified training sets are formed by resampling from the original training set, classifiers con ..."
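The bagging procedure mentioned above (bootstrap resampling plus majority voting) can be sketched roughly as follows; the stump learner and toy data are illustrative assumptions, not Breiman's experiments.

```python
import numpy as np
from collections import Counter

def bag_predict(fit, X, y, X_test, n_estimators=11, seed=0):
    """Fit one classifier per bootstrap resample; predict by majority vote."""
    rng = np.random.default_rng(seed)
    n = len(X)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)   # bootstrap: sample rows with replacement
        models.append(fit(X[idx], y[idx]))
    votes = [[m(x) for m in models] for x in X_test]
    return [Counter(v).most_common(1)[0][0] for v in votes]

def fit_stump(X, y):
    """Toy 'unstable' learner: a decision stump thresholded at the feature mean."""
    thr = X.mean()
    pos = y[X[:, 0] > thr]
    label_hi = int(round(pos.mean())) if len(pos) else 0
    return lambda x: label_hi if x[0] > thr else 1 - label_hi

X = np.array([[0.0], [0.1], [0.2], [0.8], [0.9], [1.0]])
y = np.array([0, 0, 0, 1, 1, 1])
preds = bag_predict(fit_stump, X, y, X)
```

Because each bootstrap sample perturbs the training set, the individual stumps differ, and the vote smooths out their instability.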
Generalization in Reinforcement Learning: Safely Approximating the Value Function
Advances in Neural Information Processing Systems 7, 1995
Cited by 307 (4 self)
Abstract: "... A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust, and in even very benign cases, may produce ..."
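For contrast, the lookup-table baseline the abstract starts from (tabular value iteration) can be sketched on a toy chain MDP; the MDP below is an illustrative assumption, not the paper's benchmark. The paper's point is that swapping this exact table for a generalizing approximator can break the convergence shown here.

```python
import numpy as np

# Toy deterministic "move right" chain with an absorbing, rewarding last state
# (illustrative assumption). The value function is stored exactly, one entry
# per state -- the lookup table whose replacement the paper analyses.
n_states, gamma = 6, 0.9
reward = np.zeros(n_states)
reward[-1] = 1.0                               # reward only in the self-looping final state
nxt = np.minimum(np.arange(n_states) + 1, n_states - 1)

V = np.zeros(n_states)                         # the lookup table
for _ in range(500):
    V = reward + gamma * V[nxt]                # synchronous Bellman backup
# V[-1] converges to 1 / (1 - gamma) = 10, and V[s] = 10 * gamma^(5 - s) elsewhere.
```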
Nonlinear Neural Networks: Principles, Mechanisms, and Architectures
1988
Cited by 262 (21 self)
Abstract: "... An historical discussion is provided of the intellectual trends that caused nineteenth-century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentieth-century scientific movements. The nonlinear, nonstationary, and nonlocal nature of behavioral and brain data is emphasized. Three sources of contemporary neural network research (the binary, linear, and continuous-nonlinear models) are noted. The remainder of the article describes results about continuous-nonlinear models: many models of content ..."
Contour enhancement, short-term memory, and constancies in reverberating neural networks
Studies in Applied Mathematics, 1973
Cited by 245 (93 self)
Abstract: "... A model of the nonlinear dynamics of reverberating on-center off-surround networks of nerve cells, or of cell populations, is analysed. The on-center off-surround anatomy allows patterns to be processed across populations without saturating the populations' response to large inputs. The signals between populations are made sigmoid functions of population activity in order to quench network noise, and yet store sufficiently intense patterns in short-term memory (STM). There exists a quenching threshold: a population's activity will be quenched along with network noise if it falls ..."
Turing Computability With Neural Nets
Applied Mathematics Letters, 1991
Cited by 84 (15 self)
Abstract: "... This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of fewer than 10^5 synchronously evolving processors, interconnected linearly. High-order connections are not required. ..."
Training a 3-Node Neural Network is NP-Complete
1992
Cited by 231 (3 self)
Abstract: "... We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions of their inputs. We show that it is NP-complete to decide whether there exist weights and thresholds for this network so that it produces output consistent with a given set of training examples. ..."
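To make the architecture concrete, here is a hypothetical forward pass for such a network: two hidden linear-threshold units feeding one threshold output (3 nodes total). The particular weights shown, which happen to realize XOR on n = 2 inputs, are an illustrative choice, not from the paper; the NP-completeness result concerns deciding whether any such weights fit a given training set.

```python
import numpy as np

def threshold(z):
    # Linear threshold unit: fires iff its weighted input is >= 0.
    return (np.asarray(z) >= 0).astype(int)

def net(x, W1, b1, w2, b2):
    h = threshold(W1 @ x + b1)           # the 2 hidden nodes
    return int(threshold(w2 @ h + b2))   # the single output node

# Illustrative weights/thresholds realizing XOR on two inputs:
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])   # h1 = [x1 + x2 >= 0.5], h2 = [x1 + x2 >= 1.5]
w2 = np.array([1.0, -2.0])
b2 = -0.5

outputs = [net(np.array(x), W1, b1, w2, b2)
           for x in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs == [0, 1, 1, 0]
```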
Rich feature hierarchies for accurate object detection and semantic segmentation
Cited by 248 (23 self)
Abstract: "... Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012, achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals ..."
The Sample Complexity of Pattern Classification With Neural Networks: The Size of the Weights is More Important Than the Size of the Network
1997
Cited by 213 (15 self)
Abstract: "... Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the ne ..."
Chaotically Oscillating Sigmoid Function in Feedforward Neural Network
Cited by 1 (1 self)
Abstract: "... In this study, we propose a feedforward neural network with a chaotically oscillating sigmoid function. By computer simulations, we confirm that the proposed neural network can find good solutions early in the back-propagation learning process. ..."