Results 1–10 of 15
Finding structure in time
Cognitive Science, 1990
Abstract

Cited by 1533 (21 self)
Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction.
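The recurrent-link idea described in this abstract can be sketched in a few lines: the previous hidden pattern is fed back into the hidden layer at each time step. The layer sizes, random (untrained) weights, and input sequence below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Minimal sketch of an Elman-style simple recurrent network (forward pass only).
# Sizes and weights are illustrative assumptions; no training is performed.
rng = np.random.default_rng(0)

n_in, n_hid, n_out = 1, 4, 1
W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # hidden -> hidden (recurrent link)
W_hy = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output

def step(x, h_prev):
    """One time step: the prior hidden pattern is fed back to the hidden layer."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)          # context from prior internal state
    y = 1.0 / (1.0 + np.exp(-(W_hy @ h)))          # logistic output unit
    return h, y

# Run the network over a short bit sequence; the memory lives in h.
h = np.zeros(n_hid)
outputs = []
for x_t in [0.0, 1.0, 1.0, 0.0]:
    h, y = step(np.array([x_t]), h)
    outputs.append(float(y[0]))

print(outputs)
```

Training such a network on the temporal XOR task would then shape W_hh so that the hidden state carries exactly the history the task demands.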
Non-Linear Dimensionality Reduction
Advances in Neural Information Processing Systems 5, 1993
Abstract

Cited by 106 (1 self)
A method for creating a non-linear encoder-decoder for multidimensional data with compact representations is presented. The commonly used technique of autoassociation is extended to allow non-linear representations, and an objective function which penalizes activations of individual hidden units is shown to result in minimum-dimensional encodings with respect to allowable error in reconstruction.
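The objective described here, reconstruction error plus a penalty on individual hidden-unit activations, can be sketched as follows. The architecture, penalty weight `lam`, and random (untrained) weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of a non-linear autoassociator objective whose loss penalizes
# hidden-unit activations, encouraging minimum-dimensional encodings.
# Layer sizes, weights, and `lam` are assumptions for illustration only.
rng = np.random.default_rng(1)

n_in, n_hid = 3, 5
W_enc = rng.normal(scale=0.3, size=(n_hid, n_in))
W_dec = rng.normal(scale=0.3, size=(n_in, n_hid))

def objective(X, lam=0.1):
    """Reconstruction error plus an activation penalty on the hidden code."""
    H = np.tanh(X @ W_enc.T)            # non-linear hidden representation
    X_hat = H @ W_dec.T                 # decoder
    recon = np.mean((X - X_hat) ** 2)   # autoassociation: target equals input
    penalty = np.mean(H ** 2)           # drives unneeded hidden units toward zero
    return recon + lam * penalty

X = rng.normal(size=(10, n_in))
print(objective(X))
```

Minimizing this by gradient descent trades reconstruction accuracy against the number of hidden units that stay active, which is what yields the compact encodings.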
Learning in Linear Neural Networks: a Survey
IEEE Transactions on Neural Networks, 1995
Abstract

Cited by 56 (4 self)
Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and self-organisation can sometimes be answered analytically. We survey most of the known results on linear networks, including: (1) backpropagation learning and the structure of the error function landscape; (2) the temporal evolution of generalization; (3) unsupervised learning algorithms and their properties. The connections to classical statistical ideas, such as principal component analysis (PCA), are emphasized, as well as several simple but challenging open questions. A few new results are also spread across the paper, including an analysis of the effect of noise on backpropagation networks and a unified view of all unsupervised algorithms.
Keywords: linear networks, supervised and unsupervised learning, Hebbian learning, principal components, generalization, local minima, self-organisation
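The PCA connection emphasized in this survey is the classical result that the optimal rank-k linear encoder/decoder projects onto the top principal components. A minimal numerical illustration, with arbitrary synthetic data dimensions assumed:

```python
import numpy as np

# Sketch: the best rank-k linear autoassociation is projection onto the
# top-k principal directions (Eckart-Young). Data sizes are assumptions.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
X = X - X.mean(axis=0)                      # centre the data

k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:k]                                  # top-k principal directions

X_hat = (X @ P.T) @ P                       # encode then decode (projection)
err_pca = np.sum((X - X_hat) ** 2)

# Any other rank-k orthogonal projection reconstructs no better:
Q, _ = np.linalg.qr(rng.normal(size=(6, k)))
err_rand = np.sum((X - (X @ Q) @ Q.T) ** 2)
print(err_pca <= err_rand)
```

This is why a linear autoassociator trained by backpropagation ends up spanning the principal subspace: no linear code of the same width can achieve lower reconstruction error.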
A review of dimension reduction techniques
1997
Abstract

Cited by 30 (4 self)
The problem of dimension reduction is introduced as a way to overcome the curse of dimensionality when dealing with vector data in high-dimensional spaces and as a modelling tool for such data. It is defined as the search for a low-dimensional manifold that embeds the high-dimensional data. A classification of dimension reduction problems is proposed. A survey of several techniques for dimension reduction is given, including principal component analysis, projection pursuit and projection pursuit regression, principal curves, and methods based on topologically continuous maps, such as Kohonen's maps or the generative topographic mapping. Neural network implementations for several of these techniques are also reviewed, such as the projection pursuit learning network and the BCM neuron with an objective function. Several appendices complement the mathematical treatment of the main text.
Unsupervised Neural Network Learning Procedures . . .
1996
Abstract

Cited by 23 (1 self)
In this article, we review unsupervised neural network learning procedures which can be applied to the task of preprocessing raw data to extract useful features for subsequent classification. The learning algorithms reviewed here are grouped into three sections: information-preserving methods, density estimation methods, and feature extraction methods. Each of these major sections concludes with a discussion of successful applications of the methods to real-world problems.
Computation in a Single Neuron: Hodgkin and Huxley Revisited
2003
Abstract

Cited by 6 (2 self)
A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the reverse correlation technique with white-noise input provide a numerical strategy for extracting the relevant low-dimensional features from experimental data, and information theory can be used to evaluate the quality of the low-dimensional approximation. We apply these methods to analyze the simplest biophysically realistic model neuron, the Hodgkin–Huxley (HH) model, using this system to illustrate the general methodological issues. We focus on the features in the stimulus that trigger a spike, explicitly eliminating the effects of interactions between spikes. One can approximate this triggering "feature space" as a two-dimensional linear subspace in the high-dimensional space of input histories, capturing in this way a substantial fraction of the mutual information between inputs and spike time. We find that an even better approximation, however, is to describe the relevant subspace as two-dimensional but curved; in this way, we can capture 90% of the mutual information even at high time resolution. Our analysis provides a new understanding of the computational properties of the HH model. While it is common to approximate neural behavior as "integrate and fire," the HH model is neither an integrator nor well described by a single threshold.
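The reverse-correlation strategy mentioned here can be sketched with a spike-triggered average (STA): average the white-noise stimulus history preceding each spike to recover the triggering feature. A simple filter-and-threshold neuron stands in for the HH model below, and the filter shape, threshold, and sequence lengths are illustrative assumptions.

```python
import numpy as np

# Sketch of reverse correlation with white-noise input: recover the linear
# feature that triggers spikes via the spike-triggered average (STA).
# A filter-and-threshold toy neuron substitutes for the HH model here.
rng = np.random.default_rng(3)

T, L = 20000, 30
stimulus = rng.normal(size=T)                    # white-noise input
t = np.arange(L)
kernel = np.exp(-t / 5.0) * np.sin(t / 3.0)      # assumed "relevant feature"
kernel /= np.linalg.norm(kernel)

drive = np.correlate(stimulus, kernel, mode="valid")  # filtered input
spike_idx = np.where(drive > 2.5 * drive.std())[0]    # threshold crossings -> "spikes"

# STA: average the L-sample stimulus window that produced each spike.
sta = np.mean([stimulus[j : j + L] for j in spike_idx], axis=0)

# The recovered feature should align with the true kernel.
corr = np.corrcoef(sta, kernel)[0, 1]
print(len(spike_idx), corr)
```

For the real HH model the same recipe applies, with covariance-based generalizations of the STA extracting the second relevant dimension that the abstract describes.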
Reduced Memory Representations For Music
1951
Abstract

Cited by 5 (0 self)
We address the problem of musical variation (identification of different musical sequences as variations) and its implications for mental representations of music. According to reductionist theories, listeners judge the structural importance of musical events while forming mental representations. These judgments may result from the production of reduced memory representations that retain only the musical gist. In a study of improvised music performance, pianists produced variations on melodies. Analyses of the musical events retained across variations provided support for the reductionist account of structural importance. A neural network trained to produce reduced memory representations for the same melodies represented structurally important events more efficiently than others. Agreement among the musicians' improvisations, the network model, and music-theoretic predictions suggests that perceived constancy across musical variation is a natural result of a reductionist mechanism for p...
Autoassociative Neural Network Models for Speaker Verification
1999
Abstract

Cited by 2 (0 self)
Keywords: speaker verification; autoassociative neural network; distribution estimation; matching technique; dimensionality reduction.
SHOSLIF-M: SHOSLIF for Motion Understanding (Phase I for Hand Sign Recognition)
1994
Abstract

Cited by 2 (2 self)
In this paper, we propose a new general framework for learning and recognizing spatiotemporal events (or patterns) from intensity image sequences. This scheme is general in that it does not impose any motion model on the input. A multiclass, multivariate discriminant analysis technique has been used to automatically select the most discriminating features (MDF), which are shown to be better suited for classification due to their capability to automatically discount factors that are irrelevant to classification. The space partition tree introduced here achieves a logarithmic time complexity for a database of n items. A general interpolation scheme is employed for inference and generalization in the MDF space based on a small number of training samples. The system is tested to recognize 28 different hand signs. The experimental results show that the learned system can achieve a 98% recognition rate for test sequences that have not been used in the training phase.
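The multiclass, multivariate discriminant analysis behind the MDFs follows Fisher's criterion: choose projections maximizing between-class relative to within-class scatter. A minimal sketch on synthetic data, where the class layout, dimensions, and the nearest-centroid check are assumptions for illustration:

```python
import numpy as np

# Sketch of multiclass discriminant analysis: the most discriminating
# features are the top eigenvectors of inv(Sw) @ Sb. Synthetic 5-D data
# with three classes is an assumption, not the paper's image features.
rng = np.random.default_rng(4)

means = np.array([[0, 0, 0, 0, 0], [3, 0, 0, 0, 0], [0, 3, 0, 0, 0]], dtype=float)
X = np.vstack([rng.normal(loc=m, size=(50, 5)) for m in means])
y = np.repeat([0, 1, 2], 50)

mu = X.mean(axis=0)
Sw = np.zeros((5, 5))                     # within-class scatter
Sb = np.zeros((5, 5))                     # between-class scatter
for c in range(3):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    d = (mc - mu)[:, None]
    Sb += len(Xc) * (d @ d.T)

evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(evals.real)[::-1]
W = evecs.real[:, order[:2]]              # top 2 discriminating directions
Z = X @ W                                 # project into the MDF space

# Classes should separate well: nearest-centroid accuracy in MDF space.
centroids = np.array([Z[y == c].mean(axis=0) for c in range(3)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
print(acc)
```

In the paper the same projection is combined with a space partition tree for logarithmic-time retrieval; the sketch covers only the feature-selection step.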