CiteSeerX: Results 1 - 10 of 71

Local and Global Convergence of On-Line Learning

by N. Barkai, H. S. Seung, H. Sompolinsky - Physical Review Letters, 1995
"... We study the performance of an on-line algorithm for learning dichotomies, with a dynamical error-dependent learning rate. The asymptotic scaling form of the solution to the associated Markov equations is derived, assuming certain smoothness conditions. We show that the system converges to the optim ..."
Abstract - Cited by 9 (0 self)
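The snippet above concerns on-line learning of dichotomies with a dynamical, error-dependent learning rate. A minimal sketch of that idea for a plain perceptron follows; the specific rule (rate proportional to a running error estimate) is an illustrative assumption, not the schedule analyzed in the paper:

```python
import numpy as np

def online_perceptron_dynamic_rate(X, y, eta0=1.0, tau=0.1):
    """On-line learning of a dichotomy (labels +/-1) with a learning
    rate tied to a running error estimate. The rule eta_t = eta0 * e_t
    (rate proportional to the running error) is an illustrative choice,
    not the error-dependent schedule from the paper."""
    n, d = X.shape
    w = np.zeros(d)
    err = 1.0  # running estimate of the error rate
    for t in range(n):
        pred = np.sign(X[t] @ w) or 1.0  # treat sign(0) as +1
        mistake = float(pred != y[t])
        # exponentially weighted running error estimate
        err = (1 - tau) * err + tau * mistake
        eta = eta0 * err  # error-dependent learning rate
        if mistake:
            w += eta * y[t] * X[t]
    return w, err
```

As the running error estimate shrinks, so does the step size, which is the qualitative mechanism the abstract studies.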

On-Line Adaptive Learning Rate BP Algorithm For MLP And Application To An Identification Problem

by Daohang Sha, Vladimir B. Bajic , 1999
"... An on-line algorithm that uses an adaptive learning rate is proposed. Its development is based on the analysis of the convergence of the conventional gradient descent method for threelayer BP neural networks. The effectiveness of the proposed algorithm applied to the identification and prediction ..."
Abstract - Cited by 3 (0 self)
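The snippet above proposes an on-line adaptive learning rate for gradient descent. One generic way to adapt the rate on-line is to grow it when successive gradients agree in direction and shrink it when they disagree; this is a standard heuristic in the spirit of the abstract, not the convergence-derived rule from the paper:

```python
import numpy as np

def adaptive_eta_step(grad, w, state, up=1.05, down=0.7,
                      eta_min=1e-5, eta_max=1.0):
    """One on-line update with an adaptive learning rate: grow eta when
    successive gradients point the same way, shrink it when they
    disagree. A generic sign-agreement heuristic, not the paper's rule."""
    g = grad(w)
    if state["g_prev"] is not None:
        agree = float(np.dot(g, state["g_prev"]))
        state["eta"] *= up if agree > 0 else down
        state["eta"] = float(np.clip(state["eta"], eta_min, eta_max))
    state["g_prev"] = g
    return w - state["eta"] * g
```

Usage: initialize `state = {"eta": 0.1, "g_prev": None}` and call the step once per incoming sample.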

Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization

by O. L. Mangasarian, M. V. Solodov , 1994
"... The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method . Under certain natural assumptions, such as the series of learning rates diverging while the series of their squares converging, it is established ..."
Abstract - Cited by 10 (5 self)
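The learning-rate assumption quoted above (the series of rates diverges while the series of their squares converges) is the classical stochastic-approximation condition; eta_t = eta0 / (t + 1) satisfies it. A minimal sketch on a toy stochastic problem, with all names and the toy objective being illustrative assumptions:

```python
import numpy as np

def sgd_diminishing_rate(grad_sample, w0, steps=5000, eta0=0.5):
    """SGD with eta_t = eta0 / (t + 1): the series sum(eta_t) diverges
    (harmonic series) while sum(eta_t**2) converges, matching the
    condition quoted in the abstract above."""
    w = np.array(w0, dtype=float)
    for t in range(steps):
        eta = eta0 / (t + 1)
        w -= eta * grad_sample(w)
    return w

# toy problem: minimize E[(w - x)^2 / 2] with x ~ N(1, 0.1^2);
# the minimizer is the mean, 1.0
rng = np.random.default_rng(42)
noisy_grad = lambda w: w - (1.0 + 0.1 * rng.standard_normal())
w_star = sgd_diminishing_rate(noisy_grad, w0=[5.0])
```

With this schedule the noise is averaged out while the steps remain large enough, in total, to reach the minimizer from any starting point.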

The Dynamics of On-line Learning in Radial Basis Function Networks

by Jason A. S. Freeman, David Saad , 1997
"... On-line learning is examined for the Radial Basis Function Network, an important and practical type of neural network. The evolution of generalization error is calculated within a framework which allows the phenomena of the learning process, such as the specialization of the hidden units, to be anal ..."
Abstract - Add to MetaCart
, to be analyzed. The distinct stages of training are elucidated, and the role of the learning rate described. The three most important stages of training, the symmetric phase, the symmetry-breaking phase and the convergence phase, are analyzed in detail; the convergence phase analysis allows derivation of maximal
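The learning rule studied in the snippet above is on-line gradient descent for an RBF network. A generic one-sample gradient step for a Gaussian RBF network, adapting both output weights and centers, can be sketched as follows; the parameterization is a common textbook form, not necessarily the paper's exact one:

```python
import numpy as np

def rbf_forward(x, centers, weights, width):
    # hidden activations: Gaussian basis functions around each center
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * width ** 2))
    return phi @ weights, phi

def online_rbf_step(x, y, centers, weights, width, eta):
    """One on-line gradient step on the squared error for a Gaussian RBF
    network, updating output weights and centers in place. A generic
    sketch of the rule studied above, not its exact parameterization."""
    out, phi = rbf_forward(x, centers, weights, width)
    delta = out - y
    # output-weight update: dE/dw_k = delta * phi_k
    weights -= eta * delta * phi
    # center update (chain rule through the Gaussian):
    # dE/dc_k = -delta * w_k * phi_k * (c_k - x) / width^2
    centers += eta * delta * (weights * phi)[:, None] * (centers - x) / width ** 2
    return 0.5 * delta ** 2  # instantaneous squared error
```

Tracking the centers during training is what exposes the hidden-unit specialization the abstract refers to.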

Convergence Analysis of Multi-innovation Learning Algorithm Based on PID Neural Network

by Gang Ren , 2013
"... Abstract: In order to improve the identification accuracy of dynamic system, multi-innovation learning algorithm based on PID neural networks is presented, which can improve the online identification performance of the networks. The multi-innovation gradient type algorithms use the current data and ..."
Abstract

On-Line Stochastic Functional Smoothing Optimization for Neural Network Training

by Chuan Wang, Jose C. Principe , 1997
"... : A set of new algorithms based on an on-line implementation of a well known global optimization strategy based on stochastic functional smoothing are proposed for training neural networks. These algorithms are different from other on-line global optimization approaches because they use not only fi ..."
Abstract - Cited by 1 (1 self)
"... momentum learning and conjugate gradients in order to claim their consistent and global convergence abilities; and are compared with a conventional stochastic global optimization scheme in order to claim their faster learning rate. Computer simulation results are presented to support the analysis. ..."
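Stochastic functional smoothing, as described in the snippet above, optimizes a smoothed version of the loss, its convolution with a Gaussian, whose gradient can be estimated by sampling perturbed weights. A minimal Monte-Carlo sketch of that estimator, with the two-sided (antithetic) form chosen here to reduce variance; the on-line variants in the paper differ in detail:

```python
import numpy as np

def smoothed_grad(loss, w, sigma, n_samples, rng):
    """Monte-Carlo gradient of the Gaussian-smoothed loss
    E[loss(w + sigma * u)], u ~ N(0, I). A generic form of stochastic
    functional smoothing; not the paper's on-line implementation."""
    g = np.zeros_like(w)
    for _ in range(n_samples):
        u = rng.standard_normal(w.shape)
        # antithetic (two-sided) estimator of the smoothed gradient
        g += (loss(w + sigma * u) - loss(w - sigma * u)) * u / (2 * sigma)
    return g / n_samples
```

Annealing `sigma` toward zero over training recovers the original loss while the early, heavily smoothed surface helps the iterate escape poor local minima, which is the global-convergence argument the abstract makes.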

Smooth Imitation Learning for Online Sequence Prediction

by Hoang M. Le, Andrew Kang, Yisong Yue, Peter Carr
"... Abstract We study the problem of smooth imitation learning for online sequence prediction, where the goal is to train a policy that can smoothly imitate demonstrated behavior in a dynamic and continuous environment in response to online, sequential context input. Since the mapping from context to b ..."
Abstract

An Online Bioinformatics Curriculum (Perspective)

by David B. Searls
"... Abstract: Online learning initia-tives over the past decade have become increasingly comprehen-sive in their selection of courses and sophisticated in their presen-tation, culminating in the recent announcement of a number of consortium and startup activities that promise to make a university educat ..."
Abstract

Online Learning in Radial Basis Function Networks (communicated by John Platt)

by Jason A. S. Freeman, David Saad
"... An analytic investigation of the average case learning and generalization properties of radial basis function (RBFs) networks is presented, utiliz-ing online gradient descent as the learning rule. The analytic method em-ployed allows both the calculation of generalization error and the exami-nation ..."
Abstract - Add to MetaCart
-nation of the internal dynamics of the network. The generalization error and internal dynamics are then used to examine the role of the learning rate and the specialization of the hidden units, which gives insight into decreasing the time required for training. The realizable and some over-realizable cases are studied

Improved Neural Network-based Interpretation of Colonoscopy Images Through On-Line Learning and Evolution

by George D. Magoulas, Vassilis P. Plagianakos, Michael N. Vrahatis - In EUNITE 2001 Conference, 2001
"... In this work we explore on-line training of neural networks for interpreting colonoscopy images through tracking the changing location of an approximate solution of a pattern-based, and, thus, dynamically changing, error function. We have developed a memory-based adaptation of the learning rate for ..."
Abstract - Cited by 3 (2 self)

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University