Results 1 - 10 of 1,871
Hidden-Unit Conditional Random Fields
"... The paper explores a generalization of conditional random fields (CRFs) in which binary stochastic hidden units appear between the data and the labels. Hidden-unit CRFs are potentially more powerful than standard CRFs because they can represent nonlinear dependencies at each frame. The hidden units ..."
Cited by 10 (1 self)
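A minimal sketch of the frame potential such a model implies, assuming binary hidden units that are conditionally independent given the frame and its label (so each unit can be summed out in closed form as log(1 + e^activation)). Parameter names (W, V, b, A) are illustrative, not the paper's notation:

```python
import numpy as np

def frame_potential(x_t, y_t, W, V, b):
    """Log-potential of frame x_t under label y_t, hidden units marginalized.

    W : (H, D) input-to-hidden weights
    V : (H, K) hidden-to-label weights
    b : (H,)   hidden biases
    """
    act = W @ x_t + V[:, y_t] + b          # per-hidden-unit activation
    return np.sum(np.logaddexp(0.0, act))  # sum_h log(1 + e^act): sum out h

def sequence_score(X, y, W, V, b, A):
    """Standard CRF chain score with the nonlinear frame term above."""
    score = sum(frame_potential(x, lab, W, V, b) for x, lab in zip(X, y))
    score += sum(A[y[t - 1], y[t]] for t in range(1, len(y)))  # transitions
    return score
```

The per-frame marginalization is what lets the model keep exact chain inference over labels while the hidden units supply the nonlinearity.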
Logical Structure of Hidden Unit Space
"... Hidden units in multi-layer networks form a representation space in which each region can be identified with a class of equivalent outputs (Elman, 1989) or a logical state in a finite state machine (Cleeremans, Servan-Schreiber & McClelland, 1989; Giles, Sun, Chen, Lee, & Chen, 1990). We ex ..."
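One way to make the region-as-state idea concrete: record a trained recurrent network's hidden vectors while it processes strings, cluster them, and read each cluster as one logical state. A hedged sketch, where the step function, the input sequences, and the cluster count are all assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_states(step_fn, sequences, h0, n_states=4):
    """step_fn(h, x) -> next hidden vector; returns a cluster id per step."""
    points = []
    for seq in sequences:
        h = h0
        for x in seq:
            h = step_fn(h, x)
            points.append(h)            # one point in hidden-unit space
    km = KMeans(n_clusters=n_states, n_init=10).fit(np.array(points))
    return km.labels_                   # same region -> same FSM state
```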
Q-Learning with Hidden-Unit Restarting
- Advances in Neural Information Processing Systems 5, 1993
"... Platt's resource-allocation network (RAN) (Platt, 1991a, 1991b) is modified for a reinforcement-learning paradigm and to "restart" existing hidden units rather than adding new units. After restarting, units continue to learn via back-propagation. The resulting restart algorithm is tes ..."
Cited by 29 (7 self)
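A rough sketch of the restart step, assuming RAN-style Gaussian units: when a state produces a large TD error and no existing unit responds strongly, the least useful unit is recentered on that state instead of allocating a new one. The utility measure and thresholds here are illustrative, not the paper's exact criteria:

```python
import numpy as np

def maybe_restart(centers, widths, utilities, x, td_error,
                  err_thresh=0.5, act_thresh=0.1):
    """centers: (U, D) unit centers; widths, utilities: (U,); x: (D,)."""
    act = np.exp(-np.sum((centers - x) ** 2, axis=1) / widths ** 2)
    if abs(td_error) > err_thresh and act.max() < act_thresh:
        j = np.argmin(utilities)    # least useful unit gets restarted
        centers[j] = x              # recenter on the surprising state
        widths[j] = 1.0             # reset width; the unit then keeps
        utilities[j] = 0.0          # learning via back-propagation
    return centers, widths, utilities
```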
The cascade-correlation learning architecture
- Advances in Neural Information Processing Systems 2, 1990
"... Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creatin ..."
Cited by 801 (6 self)
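The candidate-training step at the heart of the algorithm can be sketched as gradient ascent on the magnitude of the covariance between a candidate unit's output and the network's residual error; once trained, the candidate's input weights are frozen and the unit joins the network. A simplified single-candidate, single-output sketch (names illustrative):

```python
import numpy as np

def train_candidate(inputs, residuals, n_steps=200, lr=0.1):
    """inputs: (N, D) values feeding the candidate; residuals: (N,) errors."""
    w = np.random.randn(inputs.shape[1]) * 0.1
    for _ in range(n_steps):
        v = np.tanh(inputs @ w)                    # candidate activations
        s = np.sum((v - v.mean()) * (residuals - residuals.mean()))
        sign = np.sign(s)                          # maximize |covariance|
        dv = sign * (residuals - residuals.mean()) * (1 - v ** 2)
        w += lr * inputs.T @ dv / len(v)           # ascend the objective
    return w   # frozen; the unit is then wired into the growing network
```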
A distributed, developmental model of word recognition and naming
- Psychological Review, 1989
"... A parallel distributed processing model of visual word recognition and pronunciation is described. The model consists of sets of orthographic and phonological units and an interlevel of hidden units. Weights on connections between units were modified during a training phase using the back-propagatio ..."
Cited by 706 (49 self)
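A compressed sketch of the architecture described: orthographic input units feed a hidden interlevel, which feeds phonological output units, with all weights adjusted by back-propagation. Layer sizes and the learning rate below are placeholders, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_orth, n_hidden, n_phon = 400, 200, 460      # illustrative layer sizes
W1 = rng.normal(0, 0.1, (n_hidden, n_orth))
W2 = rng.normal(0, 0.1, (n_phon, n_hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(orth):
    """Map an orthographic pattern to a phonological pattern."""
    hidden = sigmoid(W1 @ orth)               # interlevel of hidden units
    return sigmoid(W2 @ hidden), hidden

def backprop_step(orth, phon_target, lr=0.1):
    global W1, W2
    phon, hidden = forward(orth)
    d_out = (phon - phon_target) * phon * (1 - phon)
    d_hid = (W2.T @ d_out) * hidden * (1 - hidden)
    W2 -= lr * np.outer(d_out, hidden)
    W1 -= lr * np.outer(d_hid, orth)
```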
Learning symmetry groups with hidden units: Beyond the perceptron
- Physica, 1986
"... Learning to recognize mirror, rotational and translational symmetries is a difficult problem for massively-parallel network models. These symmetries cannot be learned by first-order perceptrons or Hopfield networks, which have no means for incorporating additional adaptive units that are hidden from ..."
Cited by 21 (5 self)
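The point is easy to reproduce in miniature: mirror symmetry of a bit string is not linearly separable, so no first-order perceptron can compute it, but a small network with hidden units learns it with plain back-propagation. An illustrative reconstruction, not the paper's experiments (a given random seed may need more epochs or a restart):

```python
import numpy as np

rng = np.random.default_rng(1)

# All 6-bit strings; the target is 1 iff the string is a mirror palindrome.
X = np.array([[int(b) for b in f"{i:06b}"] for i in range(64)], float)
y = np.array([float(all(r[j] == r[5 - j] for j in range(3))) for r in X])

W1 = rng.normal(0, 0.5, (4, 6)); b1 = np.zeros(4)   # hidden layer (two units
W2 = rng.normal(0, 0.5, 4);      b2 = 0.0           # suffice in principle)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                              # plain batch back-prop
    H = sigmoid(X @ W1.T + b1)                      # (64, 4)
    out = sigmoid(H @ W2 + b2)                      # (64,)
    d_out = (out - y) * out * (1 - out)
    d_hid = np.outer(d_out, W2) * H * (1 - H)
    W2 -= 0.5 * H.T @ d_out;  b2 -= 0.5 * d_out.sum()
    W1 -= 0.5 * d_hid.T @ X;  b1 -= 0.5 * d_hid.sum(axis=0)

print(np.mean((out > 0.5) == y.astype(bool)))       # approaches 1.0
```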
Neural Network 'Surgery': Transplantation of Hidden Units
- Proceedings ECAI'92, 1992
"... We present a novel method to combine the knowledge of several neural networks by replacement of hidden units. Applying neural networks to digital image analysis, the underlying spatial structure of the image can be propagated into the network and used to visualize its weights (WV-diagrams). This vis ..."
Cited by 1 (1 self)
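The core operation reduces to copying one hidden unit's incoming and outgoing weights from a donor network into a recipient. Choosing which units to swap is where the weight visualization comes in; in this sketch the indices are simply given, and the array layout and names are assumptions:

```python
import numpy as np

def transplant(recipient, donor, r_idx, d_idx):
    """Networks as dicts with 'W1' (hidden x input) and 'W2' (output x hidden).

    Overwrites hidden unit r_idx of the recipient with unit d_idx of the donor.
    """
    recipient["W1"][r_idx, :] = donor["W1"][d_idx, :]   # incoming weights
    recipient["W2"][:, r_idx] = donor["W2"][:, d_idx]   # outgoing weights
    return recipient
```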
Forecasting: Neural Tree and Necessary Number of Hidden Units
"... Abstract- This paper introduces a flexible neural tree (FNT) with necessary number of hidden units and is generated initially as a flexible multi-layer feed-forward neural network evolved using an evolutionary procedure and also considers the approximation of sufficiently smooth multivariable functi ..."