Results 1 - 10 of 6,387
A Learning Algorithm for Continually Running Fully Recurrent Neural Networks
, 1989
"... The exact form of a gradientfollowing learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have: (1) the advantage that they do not require a precis ..."
Cited by 534 (4 self)
... the retention of information over time periods having either fixed or indefinite length. [1 Introduction] A major problem in connectionist theory is to develop learning algorithms that can tap the full computational power of neural networks. Much progress has been made with feedforward networks, and attention ...
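The snippet above describes the real-time recurrent learning (RTRL) family of algorithms. As a rough illustration, here is a minimal sketch of the RTRL sensitivity recursion for a small fully recurrent network; the tanh units, the scalar sine teacher signal, and the learning rate are our illustrative assumptions, not details taken from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    n, eta = 5, 0.01
    W = rng.normal(scale=0.1, size=(n, n))     # fully recurrent weights
    y = np.zeros(n)                            # unit activations
    p = np.zeros((n, n, n))                    # sensitivities p[k,i,j] = dy_k/dW_ij
    for t in range(200):
        s = W @ y
        y_new = np.tanh(s)
        fp = 1.0 - y_new**2                    # tanh'(s)
        # RTRL recursion: p[k,i,j] <- f'(s_k) * (sum_l W[k,l] p[l,i,j] + delta_ki * y_j)
        kron = np.zeros((n, n, n))
        kron[np.arange(n), np.arange(n), :] = y
        p = fp[:, None, None] * (np.einsum('kl,lij->kij', W, p) + kron)
        y = y_new
        err = np.sin(0.1 * t) - y[0]           # toy teacher signal on unit 0
        W += eta * err * p[0]                  # gradient-following weight update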
A Model of Saliency-Based Visual Attention for Rapid Scene Analysis
, 1998
"... A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing salie ..."
Cited by 1748 (72 self)
A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing ...
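A minimal sketch of the mechanism the abstract describes: center-surround contrast at a few scales is summed into one saliency map, and locations are read out in order of decreasing saliency. The single intensity channel, Gaussian scales, and inhibition-of-return radius are our simplifications; the paper's model uses color, intensity, and orientation channels with map normalization:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saliency(img, scales=((1, 4), (2, 8))):
        s = np.zeros_like(img, dtype=float)
        for center, surround in scales:        # center-surround differences
            s += np.abs(gaussian_filter(img, center) - gaussian_filter(img, surround))
        return s

    def attend(sal, n_fix=3, radius=10):
        sal = sal.copy()
        for _ in range(n_fix):                 # crude winner-take-all with
            y, x = np.unravel_index(sal.argmax(), sal.shape)
            yield y, x                         # inhibition of return
            yy, xx = np.ogrid[:sal.shape[0], :sal.shape[1]]
            sal[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 0

    img = np.random.default_rng(1).random((64, 64))
    for fixation in attend(saliency(img)):
        print(fixation)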
Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
"... A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrateandfire neurons in realtime. We propose a new computational model for realtime computing on timevar ..."
Cited by 469 (38 self)
A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. We propose a new computational model for real-time computing on time ...
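The paper's model (the "liquid state machine") uses spiking integrate-and-fire circuits; the following rate-based, echo-state-style sketch only illustrates the central idea of a fixed recurrent circuit whose perturbations are decoded by a trained linear readout. All sizes, gains, and the delayed-input task are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 200, 500
    W = rng.normal(size=(N, N)) / np.sqrt(N) * 0.9   # fixed recurrent "liquid"
    w_in = rng.normal(size=N)
    u = np.sin(np.linspace(0, 20, T))                # input stream
    X = np.zeros((T, N))
    x = np.zeros(N)
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])             # transient state, never trained
        X[t] = x
    target = np.roll(u, 3)                           # task: 3-step-delayed input
    w_out = np.linalg.lstsq(X, target, rcond=None)[0]  # train linear readout only
    print(np.mean((X @ w_out - target) ** 2))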
Finding structure in time
 COGNITIVE SCIENCE
, 1990
"... Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a pro ..."
Cited by 2071 (23 self)
... proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context ...
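A minimal sketch of the simple recurrent ("Elman") network the snippet describes, where the hidden pattern is copied into context units and fed back on the next step; the layer sizes and toy one-hot input sequence are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 4, 8, 4
    W_ih = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
    W_ch = rng.normal(scale=0.1, size=(n_hid, n_hid))  # context -> hidden
    W_ho = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output
    context = np.zeros(n_hid)
    for x in np.eye(n_in):                             # toy one-hot sequence
        h = np.tanh(W_ih @ x + W_ch @ context)         # hidden state sees context
        y = W_ho @ h
        context = h                                    # hidden pattern fed back
        print(y)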
Probabilistic Inference Using Markov Chain Monte Carlo Methods
, 1993
"... Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over highdimensional spaces. R ..."
Cited by 736 (24 self)
... Related problems in other fields have been tackled using Monte Carlo methods based on sampling using Markov chains, providing a rich array of techniques that can be applied to problems in artificial intelligence. The "Metropolis algorithm" has been used to solve difficult problems in statistical ...
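For reference, a minimal sketch of the Metropolis algorithm mentioned in the snippet, here with a symmetric Gaussian random-walk proposal and an unnormalized standard normal target; both are illustrative choices, not from the review:

    import numpy as np

    def metropolis(log_p, x0, steps=10_000, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x, samples = x0, []
        for _ in range(steps):
            prop = x + rng.normal(scale=scale)         # symmetric proposal
            if np.log(rng.random()) < log_p(prop) - log_p(x):
                x = prop                               # accept
            samples.append(x)                          # else keep current state
        return np.array(samples)

    draws = metropolis(lambda x: -0.5 * x ** 2, x0=0.0)  # standard normal target
    print(draws.mean(), draws.var())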
The Computational Brain.
, 1994
"... Keywords: reductionism, neural networks, distributed coding, Karl Pribram, computational neuroscience, receptive field 1.1 The broad goal of this book, expressed at the start, is ``to understand how neurons give rise to a mental life.'' A mental reductionism is assumed in this seductively ..."
Cited by 450 (7 self)
... PDP (parallel distributed processing, or artificial neural network) models were too biologically unrealistic to provide viable interpretations of the single-cell data. Churchland and Sejnowski show how distributed models can now attack this problem, providing significant insights into brain function ...
On Positive Harris Recurrence of Multiclass Queueing Networks: A Unified Approach Via Fluid Limit Models
 Annals of Applied Probability
, 1995
"... It is now known that the usual traffic condition (the nominal load being less than one at each station) is not sufficient for stability for a multiclass open queueing network. Although there has been some progress in establishing the stability conditions for a multiclass network, there is no unified ..."
Cited by 357 (27 self)
... there is no unified approach to this problem. In this paper, we prove that a queueing network is positive Harris recurrent if the corresponding fluid limit model eventually reaches zero and stays there regardless of the initial system configuration. As an application of the result, we prove that single-class networks ...
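Stated in symbols (the notation \bar{Q} for the fluid queue-length process is ours, not the paper's), the quoted criterion reads:

    \exists\, T > 0:\quad \bar{Q}(t) = 0 \ \ \text{for all } t \ge T
    \ \text{and every fluid limit with } \lvert \bar{Q}(0) \rvert = 1
    \;\Longrightarrow\; \text{the queueing network is positive Harris recurrent.}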
Greedy layer-wise training of deep networks
, 2006
"... Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multilayer neural networks have many levels of nonlinearities allow ..."
Cited by 394 (48 self)
... introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success ...
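A minimal sketch of the greedy layer-wise idea, here using tied-weight autoencoders as the per-layer learner (one of the variants the paper explores; the DBN case trains restricted Boltzmann machines instead). The sizes, learning rate, and epoch count are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def train_autoencoder(X, n_hidden, epochs=50, lr=0.1):
        # One tied-weight layer: encode H = tanh(XW), decode R = H W^T.
        W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
        for _ in range(epochs):
            H = np.tanh(X @ W)
            R = H @ W.T
            E = R - X                              # reconstruction error
            G = X.T @ ((E @ W) * (1 - H**2)) + E.T @ H
            W -= lr * G / len(X)
        return W

    X = rng.random((100, 20))                      # toy data
    layers, inp = [], X
    for n_hidden in (16, 8):                       # greedy: one layer at a time
        W = train_autoencoder(inp, n_hidden)
        layers.append(W)
        inp = np.tanh(inp @ W)                     # frozen layer feeds the next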
Learning Long-Term Dependencies with Gradient Descent is Difficult
 TO APPEAR IN THE SPECIAL ISSUE ON RECURRENT NETWORKS OF THE IEEE TRANSACTIONS ON NEURAL NETWORKS
"... Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in th ..."
Cited by 389 (37 self)
Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present ...
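The difficulty the abstract refers to is often summarized as follows (the notation is from later expositions, not this paper): the error signal propagated back over t - k steps is a product of Jacobians,

    \frac{\partial E_t}{\partial h_k}
      = \frac{\partial E_t}{\partial h_t}
        \prod_{i=k+1}^{t} \frac{\partial h_i}{\partial h_{i-1}},

so if the Jacobian norms are bounded by some \lambda < 1, the product shrinks like \lambda^{t-k} and long-range temporal contingencies receive exponentially small gradient.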
Dynamic binding in a neural network for shape recognition
 Psychological Review
, 1992
"... Given a single view of an object, humans can readily recognize that object from other views that preserve the parts in the original view. Empirical evidence suggests that this capacity reflects the activation of a viewpointinvariant structural description specifying the object's parts and the ..."
Cited by 329 (33 self)
... and the relations among them. This article presents a neural network that generates such a description. Structural description is made possible through a solution to the dynamic binding problem: Temporary conjunctions of attributes (parts and relations) are represented by synchronized oscillatory activity among ...
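A toy sketch of binding by synchrony as the snippet describes it: each attribute unit carries an oscillation phase, and attributes bound into the same conjunction are those firing in phase. The two-object scene, the phase values, and the clustering tolerance are illustrative, not the paper's network:

    import numpy as np

    attributes = ["circle", "above", "square", "left-of"]
    phase = np.array([0.0, 0.0, np.pi, np.pi])   # radians; shared phase = bound

    def bound_groups(names, phase, tol=0.1):
        groups = {}
        for name, p in zip(names, phase):
            key = round(p / tol)                 # cluster units firing in phase
            groups.setdefault(key, []).append(name)
        return list(groups.values())

    print(bound_groups(attributes, phase))       # [['circle', 'above'], ['square', 'left-of']]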