CiteSeerX
Results 1 - 10 of 1,871

Hidden-Unit Conditional Random Fields

by Laurens van der Maaten, Max Welling, Lawrence K. Saul
"... The paper explores a generalization of conditional random fields (CRFs) in which binary stochastic hidden units appear between the data and the labels. Hidden-unit CRFs are potentially more powerful than standard CRFs because they can represent nonlinear dependencies at each frame. The hidden units ..."
Cited by 10 (1 self)

Sequence learning with hidden units in spiking neural networks

by Johanni Brea, Walter Senn, Jean-Pascal Pfister

Network Inference with Hidden Units

by Joanna Tyrcha
Abstract not found

LOGICAL STRUCTURE OF HIDDEN UNIT SPACE

by Janet Wiles, Mark Ollila
"... Hidden units in multi-layer networks form a representation space in which each region can be identified with a class of equivalent outputs (Elman, 1989) or a logical state in a finite state machine (Cleeremans, Servan-Schreiber & McClelland, 1989; Giles, Sun, Chen, Lee, & Chen, 1990). We ex ..."

Q-Learning with Hidden-Unit Restarting

by Charles W. Anderson - Advances in Neural Information Processing Systems 5 , 1993
"... Platt's resource-allocation network (RAN) (Platt, 1991a, 1991b) is modified for a reinforcement-learning paradigm and to "restart" existing hidden units rather than adding new units. After restarting, units continue to learn via back-propagation. The resulting restart algorithm is tes ..."
Cited by 29 (7 self)
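The "restart" idea can be illustrated with a small sketch: rather than allocating a new RAN unit, pick an existing hidden unit judged least useful and reinitialize it. The selection rule below (smallest outgoing-weight magnitude) is a simplification for illustration, not Anderson's exact criterion:

```python
import numpy as np

def restart_unit(W_in, w_out, rng, scale=0.1):
    """Reinitialize ('restart') the hidden unit whose outgoing weight
    has the smallest magnitude. The restarted unit gets fresh random
    input weights and zero output weight, then continues to learn via
    back-propagation like any other unit."""
    j = int(np.argmin(np.abs(w_out)))                      # least influential unit
    W_in[j] = rng.normal(scale=scale, size=W_in.shape[1])  # fresh input weights
    w_out[j] = 0.0                                         # no initial output influence
    return j
```

Keeping the network size fixed this way avoids the unbounded growth that adding a new unit for every novel state would cause in a reinforcement-learning setting.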

The cascade-correlation learning architecture

by Scott E. Fahlman, Christian Lebiere - Advances in Neural Information Processing Systems 2 , 1990
"... Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creatin ..."
Cited by 801 (6 self)
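The quantity a Cascade-Correlation candidate unit is trained to maximize can be written compactly: the summed magnitude of the covariance between the candidate's activation and the network's residual output errors. A minimal sketch of that score in NumPy (the candidate training loop itself is omitted):

```python
import numpy as np

def candidate_score(v, E):
    """Candidate score: |covariance| between a candidate unit's
    activation v (one value per training case) and the residual
    errors E (cases x output units), summed over output units.
    The best-scoring candidate is frozen and installed as the
    next hidden unit in the cascade."""
    v_c = v - v.mean()            # center activations over cases
    E_c = E - E.mean(axis=0)      # center errors per output unit
    return float(np.abs(v_c @ E_c).sum())
```

A candidate whose activation tracks the remaining error scores high, which is exactly what makes it useful as the next feature in the cascade.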

A distributed, developmental model of word recognition and naming

by Mark S. Seidenberg, James L. McClelland - PSYCHOLOGICAL REVIEW , 1989
"... A parallel distributed processing model of visual word recognition and pronunciation is described. The model consists of sets of orthographic and phonological units and an interlevel of hidden units. Weights on connections between units were modified during a training phase using the back-propagatio ..."
Cited by 706 (49 self)

Learning symmetry groups with hidden units: Beyond the perceptron

by Terrence J. Sejnowski, Paul K. Kienker, Geoffrey E. Hinton - Physica , 1986
"... Learning to recognize mirror, rotational and translational symmetries is a difficult problem for massively-parallel network models. These symmetries cannot be learned by first-order perceptrons or Hopfield networks, which have no means for incorporating additional adaptive units that are hidden from ..."
Cited by 21 (5 self)

Neural Network 'Surgery': Transplantation of Hidden Units

by Axel J. Pinz, Horst Bischof - Proceedings ECAI'92 , 1992
"... We present a novel method to combine the knowledge of several neural networks by replacement of hidden units. Applying neural networks to digital image analysis, the underlying spatial structure of the image can be propagated into the network and used to visualize its weights (WV-diagrams). This vis ..."
Cited by 1 (1 self)

Forecasting: Neural Tree and Necessary Number of Hidden Units

by J. Kumaran Kumar
"... This paper introduces a flexible neural tree (FNT) with necessary number of hidden units and is generated initially as a flexible multi-layer feed-forward neural network evolved using an evolutionary procedure and also considers the approximation of sufficiently smooth multivariable functi ..."

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University