Results 1–10 of 88
Resonance and the Perception of Musical Meter
 CONNECTION SCIENCE
, 1994
"... Many connectionist approaches to musical expectancy and music composition let the question of "What next?" overshadow the equally important question of "When next?". One cannot escape the latter question, one of temporal structure, when considering the perception of musical meter. We view the percep ..."
Abstract

Cited by 93 (3 self)
 Add to MetaCart
Many connectionist approaches to musical expectancy and music composition let the question of "What next?" overshadow the equally important question of "When next?". One cannot escape the latter question, one of temporal structure, when considering the perception of musical meter. We view the perception of metrical structure as a dynamic process where the temporal organization of external musical events synchronizes, or entrains, a listener's internal processing mechanisms. This article introduces a novel connectionist unit, based upon a mathematical model of entrainment, capable of phase- and frequency-locking to periodic components of incoming rhythmic patterns. Networks of these units can self-organize temporally structured responses to rhythmic patterns. The resulting network behavior embodies the perception of metrical structure. The article concludes with a discussion of the implications of our approach for theories of metrical structure and musical expectancy.
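The entrainment idea can be sketched with a toy adaptive oscillator: a unit whose phase advances at its intrinsic frequency, and whose phase and period are nudged toward alignment by each incoming onset. This is a generic illustration under assumed update rules and gains, not the article's actual unit model.

```python
class EntrainingUnit:
    """Toy adaptive oscillator that phase- and frequency-locks to onsets.

    A simplified sketch of an entraining unit (hypothetical update rules
    and learning rates, not the article's model): phase advances at the
    current frequency, and each input onset nudges phase and period
    toward alignment with the stimulus.
    """

    def __init__(self, period, eta_phase=0.2, eta_period=0.1):
        self.period = period        # current intrinsic period (seconds)
        self.phase = 0.0            # phase in [0, 1): 0 = expected beat
        self.eta_phase = eta_phase
        self.eta_period = eta_period

    def step(self, dt, onset):
        """Advance by dt seconds; onset=True when a rhythmic event occurs."""
        self.phase = (self.phase + dt / self.period) % 1.0
        if onset:
            # Signed phase error in (-0.5, 0.5]: how far the onset falls
            # from the unit's expected beat.
            err = self.phase if self.phase <= 0.5 else self.phase - 1.0
            self.phase = (self.phase - self.eta_phase * err) % 1.0
            self.period *= 1.0 + self.eta_period * err
        return self.phase


# Usage: a unit starting off-period (0.6 s) entrains to a 0.5 s pulse train.
unit = EntrainingUnit(period=0.6)
dt = 0.01
for i in range(4000):
    onset = (i % 50 == 0)           # stimulus pulse every 0.5 s
    unit.step(dt, onset)
# unit.period has pulled in toward the 0.5 s stimulus period
```

A bank of such units with different intrinsic periods would respond selectively to different periodic components of a rhythm, which is the flavor of metrical response the abstract describes.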
DRAMA, a Connectionist Architecture for Control and Learning in Autonomous Robots
 Adaptive Behavior
, 1999
"... this paper gives ..."
Learning to Forget: Continual Prediction with LSTM
 NEURAL COMPUTATION
, 1999
"... Long ShortTerm Memory (LSTM, Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences w ..."
Abstract

Cited by 51 (25 self)
 Add to MetaCart
Long Short-Term Memory (LSTM, Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them in an elegant way.
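The forget-gate mechanism can be sketched in a single LSTM cell step. This is a generic formulation with assumed weight shapes and names, not the paper's exact equations: the old cell state is multiplied by the forget gate's output, so a gate value near zero resets the state.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    """One step of an LSTM cell with a forget gate (generic sketch).
    W holds weight matrices and biases for the input (i), forget (f),
    output (o) gates and the candidate activation (g)."""
    z = np.concatenate([x, h])
    i = sigmoid(W['i'] @ z + W['bi'])
    f = sigmoid(W['f'] @ z + W['bf'])   # f near 0 => reset the cell state
    o = sigmoid(W['o'] @ z + W['bo'])
    g = np.tanh(W['g'] @ z + W['bg'])
    c_new = f * c + i * g               # old state scaled by f each step
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Usage: with the forget gate saturated near zero, a cell state that has
# drifted to a large value during a long stream is flushed in one step.
rng = np.random.default_rng(0)
nx, nh = 3, 4
W = {k: 0.1 * rng.standard_normal((nh, nx + nh)) for k in 'ifog'}
W.update({'b' + k: np.zeros(nh) for k in 'ifog'})
h, c = np.zeros(nh), np.full(nh, 10.0)  # state grown large over a stream
W['bf'] = np.full(nh, -10.0)            # drive the forget gate toward 0
h, c = lstm_step(rng.standard_normal(nx), h, c, W)
```

In the paper's setting the gate learns when to apply such resets from data; here the bias is forced by hand only to make the effect visible.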
Hybrid Neural Systems
, 2000
"... This chapter provides an introduction to the field of hybrid neural systems. Hybrid neural systems are computational systems which are based mainly on artificial neural networks but also allow a symbolic interpretation, or interaction with symbolic components. In this overview, we will describe rece ..."
Abstract

Cited by 43 (10 self)
 Add to MetaCart
This chapter provides an introduction to the field of hybrid neural systems. Hybrid neural systems are computational systems which are based mainly on artificial neural networks but also allow a symbolic interpretation, or interaction with symbolic components. In this overview, we will describe recent results of hybrid neural systems. We will give a brief overview of the main methods used, outline the work that is presented here, and provide additional references. We will also highlight some important general issues and trends.
On the Analysis of Pattern Sequences by Self-Organizing Maps
, 1994
"... This thesis is organized in three parts. In the first part, the SelfOrganizing Map algorithm is introduced. The discussion focuses on the analysis of the SelfOrganizing Map algorithm. It is shown that the nonlinear nature of the algorithm makes it difficult to analyze the algorithm except in some ..."
Abstract

Cited by 29 (0 self)
 Add to MetaCart
This thesis is organized in three parts. In the first part, the Self-Organizing Map algorithm is introduced. The discussion focuses on the analysis of the Self-Organizing Map algorithm. It is shown that the nonlinear nature of the algorithm makes it difficult to analyze the algorithm except in some trivial cases. In the second part the Self-Organizing Map algorithm is applied to several pattern sequence analysis tasks. The first application is a voice quality analysis system. It is shown that the Self-Organizing Map algorithm can be applied to voice analysis by providing the visualization of certain deviations. The key point in the applicability of the Self-Organizing Map algorithm is the topological nature of the mapping; similar voice samples are mapped to nearby locations in the map. The second application is a speech recognition system. Through several experiments it is demonstrated that by collecting some time-dependent features and using them in conjunction with the basic Self-Organ...
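The topological mapping the abstract relies on can be seen in a minimal implementation of the standard algorithm (a generic sketch with assumed grid size, learning-rate and neighborhood schedules, not the thesis's code): each iteration finds the best-matching unit and pulls it and its grid neighbours toward the sample.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, seed=0):
    """Minimal Self-Organizing Map (generic sketch of the standard
    algorithm): pick a sample, find the best-matching unit (BMU), and
    pull the BMU and its grid neighbours toward the sample."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))        # codebook vectors
    coords = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols),
                                   indexing='ij'))     # unit grid positions
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # BMU: the unit whose codebook vector is closest to x.
        d = np.linalg.norm(w - x, axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)
        # Learning rate and neighbourhood radius both decay over time.
        lr = 0.5 * (1 - t / iters)
        sigma = max(1.0, (rows / 2) * (1 - t / iters))
        g = np.exp(-np.linalg.norm(coords - np.array(bmu), axis=2) ** 2
                   / (2 * sigma ** 2))
        w += lr * g[..., None] * (x - w)
    return w

# Usage: map 2-D points; after training, nearby inputs win nearby map
# units -- the topology preservation the voice-analysis application uses.
data = np.random.default_rng(1).random((500, 2))
w = train_som(data)
```

The neighbourhood function `g` is what produces the topological ordering: without it, the update reduces to plain vector quantization and the "nearby samples, nearby units" property is lost.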
Dynamical Recurrent Neural Networks - Towards Environmental Time Series Prediction
, 1995
"... Dynamical Recurrent Neural Networks (DRNN) (Aussem 1994) are a class of fully recurrent networks obtained by modeling synapses as autoregressive filters. By virtue of their internal dynamic, these networks approximate the underlying law governing the time series by a system of nonlinear difference e ..."
Abstract

Cited by 26 (8 self)
 Add to MetaCart
Dynamical Recurrent Neural Networks (DRNN) (Aussem 1994) are a class of fully recurrent networks obtained by modeling synapses as autoregressive filters. By virtue of their internal dynamics, these networks approximate the underlying law governing the time series by a system of nonlinear difference equations of internal variables. They therefore provide history-sensitive forecasts without having to be explicitly fed with external memory. The model is trained by a local and recursive error propagation algorithm called temporal recurrent backpropagation. The efficiency of the procedure benefits from the exponential decay of the gradient terms backpropagated through the adjoint network. We assess the predictive ability of the DRNN model with meteorological and astronomical time series recorded around the candidate observation sites for the future VLT telescope. The hope is that reliable environmental forecasts provided with the model will allow the modern telescopes to be preset, a few hou...
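The core idea of a synapse as an autoregressive filter can be sketched in isolation (hypothetical coefficients, and the training procedure is not shown): the connection's output depends on the current input and on its own past outputs, giving it an internal memory.

```python
import numpy as np

class ARSynapse:
    """A synapse modelled as a small autoregressive filter -- a sketch of
    the idea behind DRNN synapses, with hypothetical coefficients:

        y[t] = w * x[t] + sum_k a_k * y[t-k]

    Its output depends on its own past outputs, so the connection itself
    carries history rather than relying on external memory."""

    def __init__(self, w, a):
        self.w = w                    # feedforward weight
        self.a = np.asarray(a)        # autoregressive coefficients a_1..a_K
        self.y_hist = np.zeros(len(self.a))

    def __call__(self, x):
        y = self.w * x + self.a @ self.y_hist
        self.y_hist = np.roll(self.y_hist, 1)  # shift history one step
        self.y_hist[0] = y
        return y

# Usage: a single impulse at t=0 keeps influencing the synapse output
# afterwards -- the "history-sensitive" behaviour the abstract describes.
syn = ARSynapse(w=1.0, a=[0.5, 0.2])
outputs = [syn(x) for x in [1.0, 0.0, 0.0, 0.0]]
```

In a full DRNN, many such filtered connections feed nonlinear units, and the filter coefficients themselves are adapted during training.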
Recurrent SOM with Local Linear Models in Time Series Prediction
 In 6th European Symposium on Artificial Neural Networks
, 1998
"... Recurrent SelfOrganizing Map (RSOM) is studied in three different time series prediction cases. RSOM is used to cluster the series into local data sets, for which corresponding local linear models are estimated. RSOM includes recurrent difference vector in each unit which allows storing context fro ..."
Abstract

Cited by 23 (0 self)
 Add to MetaCart
Recurrent Self-Organizing Map (RSOM) is studied in three different time series prediction cases. RSOM is used to cluster the series into local data sets, for which corresponding local linear models are estimated. RSOM includes a recurrent difference vector in each unit, which allows storing context from past input vectors. Multilayer perceptron (MLP) network and autoregressive (AR) model are used to compare the prediction results. In the studied cases, RSOM shows promising results.
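The recurrent difference vector can be sketched as a leaky sum of each unit's past input differences (the exact update form and the `alpha` value are assumptions, and the local linear models fitted per cluster are omitted): the best-matching unit then depends on the recent input context, not just the current vector.

```python
import numpy as np

def rsom_responses(X, w, alpha=0.5):
    """Recurrent SOM unit responses -- a sketch of the recurrent
    difference vector idea (assumed leaky update, hypothetical alpha):

        y_i(t) = (1 - alpha) * y_i(t-1) + alpha * (x(t) - w_i)

    The winner is the unit with the smallest accumulated difference."""
    y = np.zeros_like(w)              # one difference vector per unit
    bmus = []
    for x in X:
        y = (1 - alpha) * y + alpha * (x - w)
        bmus.append(int(np.linalg.norm(y, axis=1).argmin()))
    return bmus

# Usage: with alpha < 1 the winner reflects a trailing window of inputs;
# alpha = 1 recovers the ordinary (memoryless) SOM match.
w = np.array([[0.0], [1.0]])          # two 1-D units
X = np.array([[1.0], [1.0], [0.0]])
print(rsom_responses(X, w, alpha=0.5))  # → [1, 1, 0]
```

In the paper's setup, the series is partitioned by which unit wins, and a separate local linear predictor is then fitted to each partition.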
Unsupervised Neural Network Learning Procedures . . .
, 1996
"... In this article, we review unsupervised neural network learning procedures which can be applied to the task of preprocessing raw data to extract useful features for subsequent classification. The learning algorithms reviewed here are grouped into three sections: informationpreserving methods, densi ..."
Abstract

Cited by 23 (1 self)
 Add to MetaCart
In this article, we review unsupervised neural network learning procedures which can be applied to the task of preprocessing raw data to extract useful features for subsequent classification. The learning algorithms reviewed here are grouped into three sections: information-preserving methods, density estimation methods, and feature extraction methods. Each of these major sections concludes with a discussion of successful applications of the methods to real-world problems.
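As one concrete example from the feature-extraction family (chosen here as a generic illustration; the review's exact coverage is not reproduced), Oja's rule turns a single linear unit into a principal-component extractor: a Hebbian update with a built-in decay term that keeps the weight vector bounded.

```python
import numpy as np

def oja_first_pc(X, eta=0.005, epochs=100, seed=0):
    """Oja's rule, a classic unsupervised feature-extraction procedure:
    a single linear unit whose Hebbian update with a decay term
    converges to the first principal component of the data.

        y = w . x,    w <- w + eta * y * (x - y * w)
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)   # Hebbian term minus decay
    return w / np.linalg.norm(w)

# Usage: data with most variance along the (1, 1) direction; the learned
# feature aligns with that direction (up to sign).
rng = np.random.default_rng(1)
X = (rng.standard_normal((200, 2)) * [3.0, 0.3]) @ (
    np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2))
w = oja_first_pc(X)
```

The `- y * w` decay is what distinguishes this from plain Hebbian learning, which would let the weight norm grow without bound.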
A Taxonomy for Spatiotemporal Connectionist Networks Revisited: The Unsupervised Case
 Neural Computation
, 2003
"... Spatiotemporal connectionist networks (STCN's) comprise an important class of neural models that can deal with patterns distributed both in time and space. In this paper, we widen the application domain of the taxonomy for supervised STCN's recently proposed by Kremer (2001) to the unsupervised case ..."
Abstract

Cited by 21 (1 self)
 Add to MetaCart
Spatiotemporal connectionist networks (STCNs) comprise an important class of neural models that can deal with patterns distributed both in time and space. In this paper, we widen the application domain of the taxonomy for supervised STCNs recently proposed by Kremer (2001) to the unsupervised case. This is possible through a reinterpretation of the state vector as a vector of latent (hidden) variables, as proposed by Meinicke (2000). The goal of this generalized taxonomy is then to provide a nonlinear generative framework for describing unsupervised spatiotemporal networks, making it easier to compare and contrast their representational and operational characteristics. Computational properties, representational issues, and learning are also discussed, and a number of references to the relevant source publications are provided. It is argued that the proposed approach is simple and more powerful than previous attempts, from a descriptive and predictive viewpoint. We also discuss the relation of this taxonomy to automata theory and state space modeling, and suggest directions for further work.
Handling Time-Warped Sequences with Neural Networks
 Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior
, 1996
"... Being able to deal with timewarped sequences is crucial for a large number of tasks autonomous agents can be faced with in realworld environments, where robustness concerning natural temporal variability is required, and similar sequences of events should automatically be treated in a similar way. ..."
Abstract

Cited by 18 (0 self)
 Add to MetaCart
Being able to deal with time-warped sequences is crucial for a large number of tasks autonomous agents can be faced with in real-world environments, where robustness concerning natural temporal variability is required, and similar sequences of events should automatically be treated in a similar way. Such tasks can easily be dealt with by natural animals, but equipping an animat with this capability is rather difficult. The presented experiments show how this problem can be solved with a neural network by ensuring slow state changes. An animat equipped with such a network not only adapts to the environment by learning from a number of examples, but also generalizes to yet unseen time-warped sequences. 1 Introduction For numerous tasks, autonomous agents have to be able to deal with time-warped sequences of events. Sequential patterns of variable length are common in real-world environments, and the number of available training examples is usually relatively small. Animats should not on...
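The slow-state-change idea can be illustrated with a leaky-integrator state that each event nudges by only a fraction per step (a generic sketch with a hypothetical time constant `tau`, not the paper's network): because the state moves slowly, stretching the event timing changes the trajectory only gradually, so warped versions of a sequence stay close in state space.

```python
def leaky_final(events, tau=4.0):
    """Leaky-integrator state: each event pulls the state toward it by
    only a fraction 1/tau -- a generic sketch of the slow-state-change
    idea with a hypothetical time constant."""
    s = 0.0
    for e in events:
        s += (e - s) / tau        # slow, bounded change per step
    return s

# Usage: a sequence, a 2x time-warped version (every event held twice as
# long), and a genuinely different sequence. The slow state keeps the
# warped version closer than the different one.
a = leaky_final([1, 1, 1, 0, 0, 0])
b = leaky_final([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # time-warped copy
c = leaky_final([0, 0, 0, 1, 1, 1])                    # different order
print(abs(a - b) < abs(a - c))  # → True
```

A network whose hidden units behave this way would treat a sequence and its moderately warped versions similarly, which is the generalization property the abstract claims.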