Results 1-10 of 21,398
Localist Attractor Networks
"... Attractor networks, which map an input space to a discrete output space, are useful for pattern completion: cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU-intensive and often produce spuri ..."
Cited by 6 (1 self)
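The pattern-completion behavior this abstract describes can be illustrated with a minimal Hopfield-style attractor network: store patterns with a Hebbian rule, then let recurrent dynamics clean up a corrupted input. This is a generic sketch, not the localist construction the paper proposes, and all sizes and parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns with the Hebbian outer-product rule.
n_units, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)  # no self-connections

def settle(state, sweeps=10):
    """Asynchronous sign updates until no unit changes in a full sweep."""
    state = state.copy()
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(n_units):
            s = 1 if W[i] @ state >= 0 else -1
            if s != state[i]:
                state[i] = s
                changed = True
        if not changed:
            break
    return state

# Flip 10 of 64 bits of the first pattern, then let the net clean it up.
noisy = patterns[0].copy()
flipped = rng.choice(n_units, size=10, replace=False)
noisy[flipped] *= -1
recovered = settle(noisy)
print((recovered == patterns[0]).mean())  # fraction of bits recovered
```

Asynchronous updates are used because they monotonically decrease the network's energy, so the dynamics cannot cycle; with only 3 patterns in 64 units the load is far below capacity and the noisy input falls back into the stored attractor.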
Advanced Review: Attractor networks
"... An attractor network is a network of neurons with excitatory interconnections that can settle into a stable pattern of firing. This article shows how attractor networks in the cerebral cortex are important for long-term memory, short-term memory, attention, and decision making. The article then show ..."
Optimal computation with attractor networks
, 2003
"... We investigate the ability of multidimensional attractor networks to perform reliable computations with noisy population codes. We show that such networks can perform computations as reliably as possible, meaning they can reach the Cramér-Rao bound, so long as the noise is small enough. 'Small en ..."
Cited by 9 (1 self)
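The Cramér-Rao bound mentioned here is easy to compute for a toy population code. The sketch below assumes independent Poisson neurons with Gaussian tuning curves (illustrative parameters, not taken from the paper); for such a population the Fisher information is I(s) = Σᵢ fᵢ'(s)² / fᵢ(s), and the variance of any unbiased estimator of the stimulus is bounded below by 1 / I(s):

```python
import numpy as np

# Population of independent Poisson neurons with Gaussian tuning curves
# (all parameters are illustrative assumptions).
centers = np.linspace(-10, 10, 41)   # preferred stimuli
sigma, rate_max = 2.0, 20.0          # tuning width and peak firing rate

def rates(s):
    return rate_max * np.exp(-(s - centers) ** 2 / (2 * sigma ** 2))

def rate_derivs(s):
    # d/ds of the Gaussian tuning curve.
    return rates(s) * (centers - s) / sigma ** 2

def cramer_rao_bound(s):
    # For independent Poisson neurons, Fisher information is
    # I(s) = sum_i f_i'(s)^2 / f_i(s); the CR bound is 1 / I(s).
    f, df = rates(s), rate_derivs(s)
    fisher = np.sum(df ** 2 / f)
    return 1.0 / fisher

print(cramer_rao_bound(0.0))  # minimum achievable estimator variance at s = 0
```

Widening the tuning curves or lowering the peak rate reduces the Fisher information and raises the bound, which is one way to see why the paper's "small enough noise" condition matters.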
Attractor Networks for Shape Recognition
 Neural Computation
, 1999
"... We describe a system of thousands of binary perceptrons with coarse oriented edges as input which is able to successfully recognize shapes, even in a context with hundreds of classes. The perceptrons have randomized feedforward connections from the input layer and form a recurrent network among ..."
Cited by 12 (1 self)
Progressive Attractor Selection in Latent Attractor Networks
 Proc. IJCNN'2001
, 2001
"... Latent attractor networks are recurrent neural networks with weak embedded attractors. The attractors bias the network's response to external inputs without becoming fully manifested themselves. Latent attractor networks have been used to model context-dependent spatial representations in the h ..."
Cited by 4 (4 self)
Learning in sparse attractor networks with inhibition
 In International Conference on Cognitive Neurodynamics (ICCN'07)
, 2007
"... Attractor networks are important models for brain functions on a behavioral and physiological level, but learning on sparse patterns has not been fully explained. Here we show that the inclusion of the activity-dependent effect of an inhibitory pool in Hebbian learning can accomplish learn ..."
Cited by 2 (1 self)
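One common way to realize an activity-dependent inhibitory effect in Hebbian learning on sparse patterns is to subtract the mean activity level from both pre- and postsynaptic terms, giving a covariance-style rule in the spirit of Tsodyks and Feigel'man. The sketch below is a generic illustration under that assumption, not necessarily this paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse 0/1 patterns: each unit is active with probability a.
n_units, n_patterns, a = 200, 5, 0.1
patterns = (rng.random((n_patterns, n_units)) < a).astype(float)

# Hebbian learning with the mean activity a subtracted: the (x - a)
# terms play the role of a global inhibitory pool, cancelling the
# positive bias that plain Hebbian learning accumulates on sparse
# patterns (where most unit pairs are jointly inactive).
centered = patterns - a
W = centered.T @ centered
np.fill_diagonal(W, 0.0)

def settle(state, theta=0.5, sweeps=10):
    """Synchronous threshold updates until the state is stable."""
    state = state.copy()
    for _ in range(sweeps):
        new = (W @ state > theta).astype(float)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Cue with half of the first pattern's active units deleted.
cue = patterns[0].copy()
active = np.flatnonzero(cue)
cue[active[: len(active) // 2]] = 0.0
recovered = settle(cue)
print((recovered == patterns[0]).mean())  # fraction of units recovered
```

With an uncentered rule (`W = patterns.T @ patterns`) the same cue tends to activate far too many units, which is the sparse-pattern failure mode the abstract alludes to.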
Semantic and Associative Priming in a Distributed Attractor Network
, 1995
"... A distributed attractor network is trained on an abstract version of the task of deriving the meanings of written words. When processing a word, the network starts from the final activity pattern of the previous word. Two words are semantically related if they overlap in their semantic features, ..."
Cited by 81 (7 self)