
CiteSeerX
Results 1 - 10 of 51,226

Feedforward Multilayer Neural Networks

by Feedforward Multilayer Neural, J J(w
"... tput) is very commonly used to approximate unknown mappings. If the output layer is linear, such a network may have a structure similar to an RBF network. 5.1 Multilayer perceptrons Multilayer perceptrons are commonly used to approximate complex nonlinear mappings. In general, it is possible to sho ..."
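The excerpt above describes a multilayer perceptron with a linear output layer used to approximate nonlinear mappings. As an illustration only (not code from the paper), a one-hidden-layer forward pass with tanh hidden units and a linear output might be sketched as:

```python
import math

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer perceptron: tanh hidden
    units followed by a linear output layer, as in the excerpt.
    W1/W2 are lists of weight rows; b1/b2 are the matching biases."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]
```

With a zero-weight hidden layer the tanh units output 0, so the network returns just the output biases; this makes the linear output layer easy to check in isolation.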

multilayer neural networks

by M Ahr, M Biehl, E Schlösser
"... Abstract. We investigate layered neural networks with differentiable activation function and student vectors without normalization constraint by means of equilibrium statistical physics. We consider the learning of perfectly realizable rules and find that the length of student vectors becomes infini ..."

Multilayer Neural Networks

by Richard Ward Strom
"... I would first and foremost like to thank my advisor, Dr. Russ Abbott, for his guidance in the completion of this thesis. His input not only helped guide me to meet my goals, but encouraged me to look for a creative path towards those goals. Dr. Abbott’s unique teaching style and fondness for exploration and creativity have inspired me since I began this program, and have constantly reminded me of the reasons why I chose to study mathematics and computer science in the first place. I would also like to thank Dr. Valentino Crespi for reconnecting me with the joys of mathematics and proof, as well as complicated (but impressive looking) mathematical notation. And to Dr. Chengyu Sun for not only contributing to my education, but to my business as well. And finally I would like to thank my wife, Yelena, for her support, and for listening to me go on endlessly about things that surely aren’t as interesting to her as they are to me. ..."

Noise robustness in multilayer neural networks

by M. Copelli, R. Eichhorn, O. Kinouchi, M. Biehl, R. Simonetti, P. Riegler, N. Caticha - EUROPHYSICS LETTERS , 1997
"... The training of multilayered neural networks in the presence of different types of noise is studied. We consider the learning of realizable rules in nonoverlapping architectures. Achieving optimal generalization depends on knowledge of the noise level; however, its misestimation may lead to part ..."
Cited by 3 (1 self)

Multilayer Neural Networks and Polyhedral Dichotomies

by C. Kenyon, H. Paugam-Moisy , 1997
"... We study the number of hidden layers required by a multilayer neural network with threshold units to compute a dichotomy f from R^d to {0, 1}, defined by a finite set of hyperplanes. We show that this question is far more intricate than computing Boolean functions, although this well-known problem ..."
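The abstract above concerns threshold-unit networks computing dichotomies of R^d defined by finitely many hyperplanes. A minimal sketch (illustrative, not the paper's construction): a single hidden layer of threshold units can compute the dichotomy "inside the strip 0 <= x1 <= 1", which is defined by the two hyperplanes x1 = 0 and x1 = 1:

```python
def step(z):
    """Heaviside threshold: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def threshold_unit(weights, bias, x):
    """A single threshold (McCulloch-Pitts) unit."""
    return step(sum(w * xi for w, xi in zip(weights, x)) + bias)

def strip_dichotomy(x):
    """Dichotomy of R^2 defined by the hyperplanes x1 = 0 and x1 = 1:
    outputs 1 iff the point lies in the strip 0 <= x1 <= 1."""
    h1 = threshold_unit([1, 0], 0.0, x)    # half-space x1 >= 0
    h2 = threshold_unit([-1, 0], 1.0, x)   # half-space x1 <= 1
    return threshold_unit([1, 1], -2.0, [h1, h2])  # AND of both half-spaces
```

The output unit here realizes the AND of the hidden half-space indicators; dichotomies that are unions of such polyhedral cells are the harder cases the paper analyses.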

MAXIMIZING MARGINS OF MULTILAYER NEURAL NETWORKS

by Takahiro Nishikawa, Shigeo Abe
"... According to the CARVE algorithm, any pattern classification problem can be synthesized in three layers without misclassification. In this paper, we propose to train multilayer neural network classifiers based on the CARVE algorithm. In hidden layer training, we find a hyperplane that separates a ..."
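The CARVE-based training above repeatedly finds, in each hidden-layer step, a hyperplane separating part of one class from the rest. As a stand-in for that hyperplane-finding step (the actual CARVE procedure differs, and this margin-free perceptron search is only an assumption for illustration):

```python
def find_separating_hyperplane(positives, negatives, epochs=100, lr=0.1):
    """Perceptron-style search for a hyperplane w.x + b = 0 with all
    `positives` strictly on one side and all `negatives` on the other.
    Returns (w, b) on success, None if no separator is found in time."""
    dim = len(positives[0])
    w, b = [0.0] * dim, 0.0
    labeled = [(p, 1) for p in positives] + [(n, -1) for n in negatives]
    for _ in range(epochs):
        errors = 0
        for x, y in labeled:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                # misclassified (or on the hyperplane): nudge toward the point
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:
            return w, b
    return None
```

On linearly separable data the perceptron convergence theorem guarantees the loop terminates; CARVE-style training would then remove the separated points and repeat.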

Voice Disorders Identification Using Multilayer Neural Network

by Lotfi Salhi, Talbi Mourad, Adnene Cherif , 2008
"... Abstract: In this paper we present a new method for voice disorder classification based on a multilayer neural network. The processing algorithm is based on a hybrid technique which uses the wavelet energy coefficients as input to the multilayer neural network. The training step uses a speech databa ..."
Cited by 3 (0 self)

Optimal Learning in Multilayer Neural Networks

by O. Winther, B. Lautrup, J-b. Zhang
"... The generalization performance of two learning algorithms, the Bayes algorithm and the "optimal learning" algorithm, on two classification tasks is studied theoretically. In the first example the task is defined by a restricted two-layer network, a committee machine, and in the second the task ..."
Cited by 1 (0 self)

Encoding Strategies in Multilayer Neural Networks

by E. Elizalde, S. Gomez, A. Romeo - J. PHYS. A: MATH. GEN. 24 (1991) 5617--5638 , 1991
"... Neural networks capable of encoding sets of patterns are analysed. Solutions are found by theoretical treatment instead of by supervised learning. The behaviour for 2^R (R ∈ N) input units is studied and its characteristic features are discussed. The accessibilities for non-spurious patterns are c ..."
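The abstract above studies encoder networks with 2^R input units, the classic setting where 2^R one-hot patterns are compressed through R hidden units. The mapping such an encoder ideally realizes, one-hot pattern to R-bit code and back, can be written directly (illustrative only; the paper constructs network weights, not this lookup):

```python
def encode(one_hot):
    """Map a 2^R-dimensional one-hot pattern to its R-bit binary code,
    i.e. the code an ideal 2^R -> R encoder layer would produce."""
    index = one_hot.index(1)
    R = (len(one_hot) - 1).bit_length()
    return [(index >> k) & 1 for k in reversed(range(R))]

def decode(bits):
    """Inverse map: R bits back to the 2^R-dimensional one-hot pattern."""
    index = int("".join(map(str, bits)), 2)
    one_hot = [0] * (2 ** len(bits))
    one_hot[index] = 1
    return one_hot
```

Round-tripping every one-hot pattern through `encode` and `decode` recovers it exactly, which is the behaviour a perfect encoder/decoder pair must exhibit.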

Effect of batch learning in multilayer neural networks

by Kenji Fukumizu - In Proceedings of ICONIP'98 , 1998
"... We discuss the dynamics of batch learning of multilayer neural networks in the asymptotic limit, where the number of training data is much larger than the number of parameters, emphasizing the parameterization redundancy in overrealizable cases. In addition to showing experimental results on overt ..."
Cited by 4 (1 self)

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University