Results 1–10 of 17
Finding structure in time
 Cognitive Science
, 1990
Abstract

Cited by 1573 (22 self)
Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986), which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported, ranging from relatively simple problems (a temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction.
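The recurrent architecture the abstract describes — hidden unit patterns fed back as context for the next step — can be sketched in a few lines. This is a minimal, untrained forward pass with made-up layer sizes and random weights, not the paper's implementation; the temporal-XOR input is just illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions, not taken from the paper.
n_in, n_hid, n_out = 1, 4, 1
W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context (previous hidden state) -> hidden
W_hy = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elman_forward(xs):
    """One forward pass over a sequence: the hidden activations are
    fed back to themselves, so each internal state reflects the input
    in the context of prior internal states."""
    h = np.zeros(n_hid)
    outputs = []
    for x in xs:
        h = sigmoid(W_xh @ np.atleast_1d(x) + W_hh @ h)
        outputs.append(sigmoid(W_hy @ h))
    return np.array(outputs)

# A temporal-XOR-style input stream: the task (after training, which
# is omitted here) would be to predict each bit from the two before it.
seq = np.array([1, 0, 1, 1, 1, 0, 0, 0])
ys = elman_forward(seq)
```

Training such a network (e.g. by backpropagation on the prediction error) is what shapes the hidden states into the task-plus-memory representations the abstract discusses.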
Non-Linear Dimensionality Reduction
 Advances in Neural Information Processing Systems 5
, 1993
Abstract

Cited by 108 (1 self)
A method for creating a non-linear encoder-decoder for multidimensional data with compact representations is presented. The commonly used technique of autoassociation is extended to allow non-linear representations, and an objective function which penalizes activations of individual hidden units is shown to result in minimum-dimensional encodings with respect to allowable error in reconstruction.
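The key idea — autoassociation (reconstructing the input from itself) plus a penalty on hidden-unit activations that prunes unneeded dimensions — can be written down as an objective. The sketch below shows only that objective on toy data with random weights; layer sizes, the penalty form, and the weight `lam` are assumptions for illustration, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))          # toy multidimensional data

d_in, d_hid = 5, 4                     # illustrative sizes
W1 = rng.normal(scale=0.1, size=(d_hid, d_in))
W2 = rng.normal(scale=0.1, size=(d_in, d_hid))

def encode(x):
    return np.tanh(W1 @ x)             # non-linear hidden code

def decode(h):
    return W2 @ h                      # reconstruction from the code

def objective(X, lam=0.01):
    """Autoassociation error plus a penalty on hidden-unit activations.
    Minimizing this pushes unneeded hidden units toward zero activity,
    yielding an encoding of near-minimal dimension for the allowed
    reconstruction error."""
    total = 0.0
    for x in X:
        h = encode(x)
        r = decode(h)
        total += np.sum((x - r) ** 2) + lam * np.sum(h ** 2)
    return total / len(X)

loss = objective(X)
```

A gradient-based optimizer applied to this objective would then trade reconstruction accuracy against the number of active hidden units.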
Learning in Linear Neural Networks: A Survey
 IEEE Transactions on Neural Networks
, 1995
Abstract

Cited by 56 (4 self)
Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and self-organisation can sometimes be answered analytically. We survey most of the known results on linear networks, including: (1) backpropagation learning and the structure of the error function landscape; (2) the temporal evolution of generalization; (3) unsupervised learning algorithms and their properties. The connections to classical statistical ideas, such as principal component analysis (PCA), are emphasized, as well as several simple but challenging open questions. A few new results are also spread across the paper, including an analysis of the effect of noise on backpropagation networks and a unified view of all unsupervised algorithms.

Keywords: linear networks, supervised and unsupervised learning, Hebbian learning, principal components, generalization, local minima, self-organisation

I. Introduction

This paper addresses the problems of supervise...
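A central analytical result in this literature (due to Baldi & Hornik, 1989) is that the error landscape of a linear autoencoder has its global minima on the top principal subspace of the data: the best rank-k linear reconstruction is the PCA projection. The sketch below checks this numerically on random data; the data and the competing random projector are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
X -= X.mean(axis=0)                    # centre the data

k = 2                                  # size of the linear bottleneck

# Optimal rank-k linear reconstruction: project onto the top-k
# principal subspace, obtained here from the SVD of the data matrix.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:k].T @ Vt[:k]                  # orthogonal projector onto top-k subspace
pca_err = np.sum((X - X @ P) ** 2)

# Any other rank-k orthogonal projection does at least as badly,
# e.g. one built from a random k-dimensional subspace:
Q = np.linalg.qr(rng.normal(size=(6, k)))[0]
rand_err = np.sum((X - X @ Q @ Q.T) ** 2)
```

This is why, for linear networks, questions about local minima and generalization can be answered in closed form through the eigenstructure of the data covariance.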
A review of dimension reduction techniques
, 1997
Abstract

Cited by 32 (4 self)
The problem of dimension reduction is introduced as a way to overcome the curse of dimensionality when dealing with vector data in high-dimensional spaces and as a modelling tool for such data. It is defined as the search for a low-dimensional manifold that embeds the high-dimensional data. A classification of dimension reduction problems is proposed. A survey of several techniques for dimension reduction is given, including principal component analysis, projection pursuit and projection pursuit regression, principal curves, and methods based on topologically continuous maps, such as Kohonen’s maps or the generalised topographic mapping. Neural network implementations for several of these techniques are also reviewed, such as the projection pursuit learning network and the BCM neuron with an objective function. Several appendices complement the mathematical treatment of the main text.
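The "search for a low-dimensional manifold that embeds the high-dimensional data" is easiest to see in the linear case covered by PCA. The sketch below builds toy data lying near a 2-dimensional plane inside a 10-dimensional space and checks how much variance two components capture; all sizes and the noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data near a 2-dimensional manifold (here, a plane) embedded in
# a 10-dimensional space, with a little isotropic noise added.
latent = rng.normal(size=(300, 2))
A = rng.normal(size=(2, 10))
X = latent @ A + 0.05 * rng.normal(size=(300, 10))
X -= X.mean(axis=0)

# PCA: the simplest (linear) instance of dimension reduction.
# The singular values tell us how much variance each component carries.
_, S, _ = np.linalg.svd(X, full_matrices=False)
var = S ** 2 / np.sum(S ** 2)
explained_2d = var[:2].sum()   # fraction of variance in the top 2 components
```

Nonlinear methods in the survey (principal curves, Kohonen maps, the generative topographic mapping) generalize the same idea to curved manifolds.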
Unsupervised Neural Network Learning Procedures . . .
, 1996
Abstract

Cited by 25 (1 self)
In this article, we review unsupervised neural network learning procedures which can be applied to the task of preprocessing raw data to extract useful features for subsequent classification. The learning algorithms reviewed here are grouped into three sections: information-preserving methods, density estimation methods, and feature extraction methods. Each of these major sections concludes with a discussion of successful applications of the methods to real-world problems.
Computation in a Single Neuron: Hodgkin and Huxley Revisited
, 2003
Abstract

Cited by 6 (2 self)
A spiking neuron “computes” by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the reverse correlation technique with white noise input provide a numerical strategy for extracting the relevant low-dimensional features from experimental data, and information theory can be used to evaluate the quality of the low-dimensional approximation. We apply these methods to analyze the simplest biophysically realistic model neuron, the Hodgkin–Huxley (HH) model, using this system to illustrate the general methodological issues. We focus on the features in the stimulus that trigger a spike, explicitly eliminating the effects of interactions between spikes. One can approximate this triggering “feature space” as a two-dimensional linear subspace in the high-dimensional space of input histories, capturing in this way a substantial fraction of the mutual information between inputs and spike time. We find that an even better approximation, however, is to describe the relevant subspace as two-dimensional but curved; in this way, we can capture 90% of the mutual information even at high time resolution. Our analysis provides a new understanding of the computational properties of the HH model. While it is common to approximate neural behavior as “integrate and fire,” the HH model is neither an integrator nor well described by a single threshold.
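The simplest version of the reverse-correlation idea is the spike-triggered average: drive the cell with white noise and average the input histories that precede spikes. The sketch below uses a toy threshold-crossing "neuron" in place of the HH model, with an assumed exponential filter, and checks that the recovered feature matches it; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
T, L = 20000, 20                       # timesteps; length of the input-history window
stim = rng.normal(size=T)              # white-noise input

# Toy stand-in for the neuron: it spikes when the recent input,
# filtered by `true_filter`, exceeds a threshold. This filter is the
# low-dimensional feature the analysis should recover.
true_filter = np.exp(-np.arange(L)[::-1] / 5.0)   # weights recent input most
true_filter /= np.linalg.norm(true_filter)

drive = np.array([stim[t - L:t] @ true_filter for t in range(L, T)])
spike_times = np.where(drive > 1.5)[0] + L        # indices into stim

# Spike-triggered average: mean input history preceding a spike.
sta = np.mean([stim[t - L:t] for t in spike_times], axis=0)
sta /= np.linalg.norm(sta)

overlap = abs(sta @ true_filter)       # near 1 if the feature is recovered
```

The paper's analysis goes further: spike-triggered covariance generalizes this to multiple features, and the two-dimensional curved subspace is evaluated by the mutual information it captures.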
Reduced Memory Representations For Music
, 1995
Abstract

Cited by 5 (0 self)
We address the problem of musical variation (identification of different musical sequences as variations) and its implications for mental representations of music. According to reductionist theories, listeners judge the structural importance of musical events while forming mental representations. These judgments may result from the production of reduced memory representations that retain only the musical gist. In a study of improvised music performance, pianists produced variations on melodies. Analyses of the musical events retained across variations provided support for the reductionist account of structural importance. A neural network trained to produce reduced memory representations for the same melodies represented structurally important events more efficiently than others. Agreement among the musicians' improvisations, the network model, and music-theoretic predictions suggests that perceived constancy across musical variation is a natural result of a reductionist mechanism for p...
The Representation of Structure in Sequence Prediction Tasks
 In C. Umiltà and M. Moscovitch (Eds.), Attention and Performance XV: Conscious ...
, 1994
Abstract

Cited by 3 (2 self)
Is knowledge acquired implicitly abstract or based on memory for exemplars? This question is at the heart of a current, but long-standing, controversy in the field of implicit learning (see Reber, 1989, for a review). For some authors, implicit knowledge is best characterized as rule-like. For others, however, knowledge acquired implicitly is little more than knowledge about memorized exemplars, or at best, knowledge about elementary features of the material, such as the frequency of particular events. In this paper, I argue that the debate may be ill-posed, and that the two positions are not necessarily incompatible. Using simulation studies, I show that abstract knowledge about the stimulus material may emerge through the operation of elementary, associationist learning mechanisms of the kind that operate in connectionist networks. I focus on a sequence learning task first proposed by Kushner, Cleeremans & Reber (1991), during which subjects are exposed to random fixed-length sequence...
Autoassociative Neural Network Models for Speaker Verification
, 1999
Abstract

Cited by 2 (0 self)
Keywords: speaker verification; autoassociative neural network; distribution estimation; matching technique; dimensionality reduction.