Results 1 - 5 of 5
Growing Cell Structures - A Self-Organizing Network for Unsupervised and Supervised Learning
Neural Networks, 1993
"... We present a new selforganizing neural network model having two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches, e.g., the Kohonen feature map, is the ability of the m ..."
Abstract

Cited by 249 (11 self)
We present a new self-organizing neural network model having two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches, e.g., the Kohonen feature map, is the ability of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process which also includes occasional removal of units. The second variant of the model is a supervised learning method which results from the combination of the above-mentioned self-organizing network with the radial basis function (RBF) approach. In this model it is possible, in contrast to earlier approaches, to perform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks which generalize very well. Results on the t...
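The controlled growth process described in this abstract can be sketched as follows. This is a simplified toy version, not the paper's algorithm: it omits the cell-simplex topology and neighbor adaptation, and the names (`Unit`, `present`, `grow`) and constants are illustrative only.

```python
import random

random.seed(0)

class Unit:
    def __init__(self, w):
        self.w = list(w)   # reference vector in input space
        self.error = 0.0   # accumulated squared quantization error

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def present(units, x, eps=0.1):
    # Find the best-matching unit, accumulate its error, move it toward x.
    bmu = min(units, key=lambda u: dist2(u.w, x))
    bmu.error += dist2(bmu.w, x)
    bmu.w = [wi + eps * (xi - wi) for wi, xi in zip(bmu.w, x)]

def grow(units):
    # Insert a new unit between the highest-error unit and its nearest
    # other unit, then decay accumulated errors. (The paper inserts into
    # the topological neighborhood; nearest-unit is a simplification.)
    worst = max(units, key=lambda u: u.error)
    near = min((u for u in units if u is not worst),
               key=lambda u: dist2(u.w, worst.w))
    units.append(Unit([(a + b) / 2 for a, b in zip(worst.w, near.w)]))
    for u in units:
        u.error *= 0.5

units = [Unit([0.0, 0.0]), Unit([1.0, 1.0])]
for _ in range(200):
    present(units, [random.random(), random.random()])
grow(units)
print(len(units))  # the network has grown by one unit
```

Because insertion is driven by accumulated error, units tend to appear where the current representation is worst, which is what lets the model find its own structure and size.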
Lyapunov Stability Analysis of Quantization Error for DCS Neural Networks
International Joint Conference on Neural Networks, 2003
"... Abstract — In this paper we show that the quantization error for Dynamic Cell Structures (DCS) Neural Networks (NN) as defined by Bruske and Sommer provides a measure of the Lyapunov stability of the weight centers of the neural net. We also show, however, that this error is insufficient in itself t ..."
Abstract

Cited by 5 (3 self)
Abstract: In this paper we show that the quantization error for Dynamic Cell Structures (DCS) Neural Networks (NN) as defined by Bruske and Sommer provides a measure of the Lyapunov stability of the weight centers of the neural net. We also show, however, that this error is insufficient in itself to verify that DCS neural networks provide stable topological representation of a given fixed input feature manifold. While it is true that DCS generates a topology preserving feature map, it is unclear when and under what circumstances DCS will have achieved an accurate representation. This is especially important in safety critical systems where it is necessary to understand when the topological representation is complete and accurate. The stability analysis here shows that there exists a Lyapunov function for the weight adaptation of the DCS NN system applied to a fixed feature manifold. The Lyapunov function works in parallel during DCS learning, and is able to provide a measure of the effective placement of neural units during the NN's approximation. It does not, however, guarantee the formation of an accurate representation of the feature manifold. Simulation studies from a selected CMU benchmark involving the use of the constructed Lyapunov function indicate the existence of a Globally Asymptotically Stable (GAS) state for the placement of neural units, but an example is given where the topology of the constructed network fails to mirror that of the input manifold even though the quantization error continues to decrease monotonically.
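The quantization error at the heart of this abstract is straightforward to compute for a toy winner-take-all network. The sketch below is an illustration under simplifying assumptions (no DCS topology or lateral connections; `adapt_step` is a hypothetical simplified update, not Bruske and Sommer's rule): the error settles toward a low value as the weight centers stabilize, while by itself saying nothing about whether the learned topology mirrors the input manifold, which is the paper's caveat.

```python
def quantization_error(weights, data):
    # Sum of squared distances from each input to its nearest weight center.
    return sum(min(sum((x - w) ** 2 for x, w in zip(xv, wv))
                   for wv in weights)
               for xv in data)

def adapt_step(weights, x, eps=0.05):
    # Move the best-matching center toward x (simplified winner-take-all).
    bmu = min(range(len(weights)),
              key=lambda i: sum((a - b) ** 2 for a, b in zip(weights[i], x)))
    weights[bmu] = [w + eps * (xi - w) for w, xi in zip(weights[bmu], x)]

data = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
weights = [[0.1, 0.2], [0.9, 0.8]]

errors = []
for _ in range(50):
    for x in data:
        adapt_step(weights, x)
    errors.append(quantization_error(weights, data))

# The error behaves like a Lyapunov candidate for the weight centers:
# it decreases toward a fixed level, but a low value alone does not
# certify a topology-preserving representation.
print(errors[0], errors[-1])
```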
Perceiving without Learning: From Spirals to Inside/Outside Relations
In Advances in Neural Information Processing Systems, 1997
"... As a benchmark task, the spiral problem is well known in neural networks. Unlike previous work that emphasizes learning, we approach the problem from a generic perspective that does not involve learning. We point out that the spiral problem is intrinsically connected to the inside /outside problem. ..."
Abstract

Cited by 2 (0 self)
As a benchmark task, the spiral problem is well known in neural networks. Unlike previous work that emphasizes learning, we approach the problem from a generic perspective that does not involve learning. We point out that the spiral problem is intrinsically connected to the inside/outside problem. A generic solution to both problems is proposed based on oscillatory correlation using a time delay network. Our simulation results are qualitatively consistent with human performance, and we interpret human limitations in terms of synchrony and time delays, both biologically plausible. As a special case, our network without time delays can always distinguish these figures regardless of shape, position, size, and orientation. We conjecture that visual perception will be effortful if local activation cannot be rapidly propagated, as synchrony would not be established in the presence of time delays.
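The oscillatory-correlation idea behind this abstract can be illustrated with a toy phase-oscillator simulation (not the authors' network, which uses relaxation oscillators and time delays): units coupled only to neighbors within the same connected figure synchronize, while disconnected figures retain distinct phases, so synchrony labels connected regions.

```python
import math

def step(phases, edges, k=0.2):
    # One symmetric coupling update: each edge pulls its two endpoint
    # phases toward each other (Kuramoto-style sine coupling).
    new = list(phases)
    for i, j in edges:
        d = math.sin(phases[j] - phases[i])
        new[i] += k * d
        new[j] -= k * d
    return new

# Two disconnected "figures": nodes 0-2 form one chain, nodes 3-5 another.
edges = [(0, 1), (1, 2), (3, 4), (4, 5)]
phases = [0.1, 0.5, 0.9, 3.0, 3.4, 3.8]

for _ in range(200):
    phases = step(phases, edges)

# Within each connected component the phases converge to a common value;
# across components they remain far apart.
print(phases)
```

The same mechanism suggests why perception becomes effortful for long spirals: with time delays, synchrony must propagate link by link along the figure.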
Perceiving Geometric Patterns: From Spirals to Inside-Outside Relations
IEEE Transactions on Neural Networks, 2001
"... Since first proposed by Minsky and Papert, the spiral problem is well known in neural networks. It receives much attention as a benchmark for various learning algorithms. Unlike previous work that emphasizes learning, we approach the problem from a different perspective. We point out that the spiral ..."
Abstract

Cited by 2 (0 self)
Since first proposed by Minsky and Papert, the spiral problem is well known in neural networks. It receives much attention as a benchmark for various learning algorithms. Unlike previous work that emphasizes learning, we approach the problem from a different perspective. We point out that the spiral problem is intrinsically connected to the inside-outside problem proposed by Ullman. We propose a solution to both problems based on oscillatory correlation using a time-delay network. Our simulation results are qualitatively consistent with human performance, and we interpret human limitations in terms of synchrony and time delays. As a special case, our network without time delays can always distinguish these figures regardless of shape, position, size, and orientation.

Index Terms: Desynchronization, geometric patterns, inside-outside relations, LEGION, oscillatory correlation, spiral problem, synchronization, time delays, visual perception.
WSOM: Building Adaptive Wavelets with Self-Organizing Maps
In Proc. of 1998 IEEE International Joint Conference on Neural Networks, 1998
"... The WSOM (Wavelet SelfOrganizing Map) model, a neural network for the creation of wavelet bases adapted to the distribution of input data, is introduced. The model provides an efficient online way to construct highdimensional wavelet bases. Simulations of a 1D function approximation problem illu ..."
Abstract

Cited by 1 (0 self)
The WSOM (Wavelet Self-Organizing Map) model, a neural network for the creation of wavelet bases adapted to the distribution of input data, is introduced. The model provides an efficient online way to construct high-dimensional wavelet bases. Simulations of a 1-D function approximation problem illustrate how WSOM adapts to non-uniformly distributed input data, outperforming the discrete wavelet transform. A speaker-independent vowel recognition benchmark task demonstrates how the model constructs high-dimensional bases using low-dimensional wavelets.

I. INTRODUCTION Wavelets offer an economical framework for the representation of signals, images, and functions [1], [2], [3]. Interest in wavelet theory and applications has recently accelerated with the introduction of efficient algorithms for analyzing, approximating, estimating, and compressing functions and signals. The most popular of these algorithms is the discrete wavelet transform (DWT) [4], which uses general-purpose bases that...
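For contrast with WSOM's data-adapted bases, the fixed-basis DWT that the abstract benchmarks against can be illustrated with a minimal one-level Haar transform (a standard construction, not code from the paper):

```python
import math

# One-level Haar discrete wavelet transform: split a signal of even
# length into approximation (scaled pairwise sums) and detail
# (scaled pairwise differences) coefficients.

def haar_forward(signal):
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    # Invert the transform: each (approx, detail) pair reconstructs
    # two consecutive samples exactly.
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

x = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0]
approx, detail = haar_forward(x)
y = haar_inverse(approx, detail)
print([round(v, 10) for v in y])  # reconstructs x exactly
```

The Haar basis here is fixed regardless of the input distribution; WSOM's contribution is to let a self-organizing map place the basis functions where the data actually lies.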