Results 1–10 of 66
Self-Organizing Maps: Ordering, Convergence Properties and Energy Functions
Biological Cybernetics, 1992
Abstract

Cited by 100 (2 self)
We investigate the convergence properties of the self-organizing feature map algorithm for a simple, but very instructive case: the formation of a topographic representation of the unit interval [0, 1] by a linear chain of neurons. We extend the proofs of convergence of Kohonen and of Cottrell and Fort to hold in any case where the neighborhood function, which is used to scale the change in the weight values at each neuron, is a monotonically decreasing function of distance from the winner neuron. We prove that the learning dynamics cannot be described by a gradient descent on a single energy function, but may be described using a set of potential functions, one for each neuron, which are independently minimized following a stochastic gradient descent. We derive the correct potential functions for the one- and multi-dimensional case, and show that the energy functions given by Tolat (1990) are an approximation which is no longer valid in the case of highly disordered maps or steep neig...
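The update rule this abstract analyzes can be sketched in a few lines; the Gaussian neighborhood (a monotonically decreasing function of distance from the winner, as the convergence result requires) and all parameter values below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def som_1d_step(weights, x, eta, sigma):
    """One stochastic update of a linear Kohonen chain on [0, 1].

    The Gaussian neighborhood is monotonically decreasing in the chain
    distance from the winner, which is the condition under which the
    extended convergence proof applies (illustrative parameters).
    """
    winner = np.argmin(np.abs(weights - x))          # best-matching neuron
    dist = np.abs(np.arange(len(weights)) - winner)  # chain distance to winner
    h = np.exp(-dist**2 / (2.0 * sigma**2))          # neighborhood scaling
    return weights + eta * h * (x - weights)

rng = np.random.default_rng(0)
w = rng.random(10)                                   # random initial chain
for _ in range(20000):
    w = som_1d_step(w, rng.random(), eta=0.1, sigma=1.5)
# after enough samples the chain becomes topographically ordered
```

Each per-neuron move here is a stochastic-gradient step on that neuron's own potential, in line with the abstract's point that the dynamics are not descent on one global energy.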
Energy Functions for Self-Organizing Maps
1999
Abstract

Cited by 41 (1 self)
This paper is about the last issue. After people started to realize that there is no energy function for the Kohonen learning rule (in the continuous case), many attempts have been made to change the algorithm such that an energy can be defined, without drastically changing its properties. Here we will review a simple suggestion, which has been proposed and generalized in several different contexts. The advantage over some other attempts is its simplicity: we only need to redefine the determination of the winning ("best matching") unit. The energy function and corresponding learning algorithm are introduced in Section 2. We give two proofs that there is indeed a proper energy function. The first one, in Section 3, is based on explicit computation of derivatives. The second one, in Section 4, follows from a limiting case of a more general (free) energy function derived in a probabilistic setting. The energy formalism allows for a direct interpretation of disordered configurations in terms of local minima, two examples of which are treated in Section 5.
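The redefined winner the paper reviews — the unit minimizing a neighborhood-weighted sum of quantization errors rather than the plain nearest unit — can be sketched as follows; the array layout is an assumption.

```python
import numpy as np

def energy_winner(x, W, H):
    """Winner as argmin over r of sum_s H[r, s] * ||x - w_s||^2.

    Redefining the "best matching" unit this way (instead of the plain
    nearest-neighbour rule) is the simple change that gives the SOM
    update a proper energy function to descend.
    """
    sq_err = np.sum((W - x)**2, axis=1)  # ||x - w_s||^2 for every unit s
    return int(np.argmin(H @ sq_err))    # neighborhood-weighted local errors
```

With H equal to the identity matrix the rule reduces to the ordinary nearest-neighbour winner; with a broad neighborhood matrix the two rules can disagree, which is exactly where the energy-based definition matters.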
Controlling the Magnification Factor of Self-Organizing Feature Maps
1995
Abstract

Cited by 40 (7 self)
The magnification exponents μ occurring in adaptive map-formation algorithms like Kohonen's self-organizing feature map deviate from the information-theoretically optimal value μ = 1 as well as from the values which optimize, e.g., the mean square distortion error (μ = 1/3 for one-dimensional maps). At the same time, models for categorical perception such as the "perceptual magnet" effect, which are based on topographic maps, require negative magnification exponents μ < 0. We present an extension of the self-organizing feature map algorithm which utilizes adaptive local learning step sizes to actually control the magnification properties of the map. By changing a single parameter, maps with optimal information transfer, with various minimal reconstruction errors, or with an inverted magnification can be generated. Analytic results on this new algorithm are complemented by numerical simulations.
1. Introduction
The representation of information in topographic maps is a common property of...
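One way to picture "adaptive local learning step sizes" is to scale the step by a power of an on-line estimate of the local input density, with the exponent as the single control parameter. The win-frequency estimator and the exact functional form below are illustrative assumptions, not the paper's formula.

```python
import numpy as np

def magnification_step(W, x, p_hat, eta0=0.1, m=1.0, sigma=1.0, tau=0.01):
    """One SOM step with a density-dependent local learning rate (sketch).

    p_hat is an on-line win-frequency estimate used as a stand-in for the
    local input density; the exponent m is the single parameter that would
    tilt the map's magnification behavior (illustrative form only).
    """
    winner = int(np.argmin(np.sum((W - x)**2, axis=1)))
    p_hat *= (1.0 - tau)               # decay all win frequencies
    p_hat[winner] += tau               # reinforce the winner's frequency
    eta = eta0 * p_hat[winner]**m      # local, density-dependent step size
    dist = np.abs(np.arange(len(W)) - winner)
    h = np.exp(-dist**2 / (2.0 * sigma**2))
    return W + eta * h[:, None] * (x - W), p_hat
```

Setting m = 0 recovers a plain SOM step; varying m away from zero is where such a scheme would reweight densely and sparsely sampled regions differently.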
Intrinsic Dimensionality Estimation with Optimally Topology Preserving Maps
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
Abstract

Cited by 38 (3 self)
A new method for analyzing the intrinsic dimensionality (ID) of low-dimensional manifolds in high-dimensional feature spaces is presented. The basic idea is to first extract a low-dimensional representation that captures the intrinsic topological structure of the input data and then to analyze this representation, i.e., estimate the intrinsic dimensionality. More specifically, the representation we extract is an optimally topology preserving feature map (OTPM), which is an undirected parametrized graph with a pointer in the input space associated with each node. Estimation of the intrinsic dimensionality is based on local PCA of the pointers of the nodes in the OTPM and their direct neighbors. The method has a number of important advantages compared with previous approaches: First, it can be shown to have only linear time complexity w.r.t. the dimensionality of the input space, in contrast to conventional PCA-based approaches, which have cubic complexity and hence become computationally imp...
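The local-PCA step can be sketched as follows: gather a node's pointer together with its direct neighbors' pointers, diagonalize the local covariance, and count the components needed to explain most of the variance. The variance threshold and the data layout are assumptions; the paper's exact eigenvalue criterion may differ.

```python
import numpy as np

def local_id_estimate(pointers, neighbors, node, var_threshold=0.95):
    """Local PCA estimate of intrinsic dimensionality at one graph node.

    pointers: node index -> point in input space (the OTPM pointers);
    neighbors: node index -> list of directly connected nodes.
    Returns the number of principal components needed to explain
    var_threshold of the local variance (thresholding is an assumption).
    """
    pts = np.array([pointers[node]] + [pointers[j] for j in neighbors[node]],
                   dtype=float)
    centered = pts - pts.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)  # local spectrum
    var = s**2 / np.sum(s**2)                      # normalized variances
    return int(np.searchsorted(np.cumsum(var), var_threshold) + 1)
```

Only a node and its direct neighbors enter each local fit (an SVD of a small-by-D matrix), which is consistent with the abstract's claim of linear rather than cubic cost in the input dimensionality.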
Theoretical aspects of the SOM algorithm
Neurocomputing, 1998
Abstract

Cited by 34 (6 self)
The SOM algorithm is quite astonishing. On the one hand, it is very simple to write down and to simulate, and its practical properties are clear and easy to observe. But, on the other hand, its theoretical properties still remain without proof in the general case, despite the great efforts of several authors. In this paper, we review the latest results and offer some conjectures for future work. Keywords: Self-organization, Kohonen algorithm, Convergence of stochastic processes, Vector quantization.
On-Line Learning Processes in Artificial Neural Networks
1993
Abstract

Cited by 31 (4 self)
We study on-line learning processes in artificial neural networks from a general point of view. On-line learning means that a learning step takes place at each presentation of a randomly drawn training pattern. It can be viewed as a stochastic process governed by a continuous-time master equation. On-line learning is necessary if not all training patterns are available all the time. This occurs in many applications when the training patterns are drawn from a time-dependent environmental distribution. Studying learning in a changing environment, we encounter a conflict between the adaptability and the confidence of the network's representation. Minimization of a criterion incorporating both effects yields an algorithm for on-line adaptation of the learning parameter. The inherent noise of on-line learning makes it possible to escape from undesired local minima of the error potential on which the learning rule performs (stochastic) gradient descent. We try to quantify these often made cl...
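The adaptability/confidence conflict is easy to reproduce in a toy on-line setting: a single parameter tracking a drifting mean from one randomly drawn pattern per step. The drift model and the fixed learning rate below are illustrative assumptions; the paper's actual learning-parameter adaptation rule is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# On-line estimation of a slowly drifting mean: one randomly drawn
# sample per learning step (a toy version of learning in a changing
# environment; drift and noise parameters are illustrative).
w, eta = 0.0, 0.05
for t in range(5000):
    target = 0.001 * t                 # time-dependent environment
    x = target + rng.normal(0.0, 0.1)  # one random training pattern
    w += eta * (x - w)                 # stochastic gradient step on (x - w)^2 / 2
# a small eta gives a confident but lagging estimate; a large eta
# tracks the drift quickly at the price of noisier weights
```

Shrinking eta reduces the noise in w but increases the lag behind the drifting target, and vice versa; adapting eta on-line amounts to balancing these two error terms.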
Two or three things that we know about the Kohonen algorithm
1995
Abstract

Cited by 29 (3 self)
Many theoretical papers have been published about the Kohonen algorithm. It is not easy to understand what exactly has been proved, because of the great variety of mathematical methods used. Despite all these efforts, many problems remain unsolved. In this short review paper, we sum up the situation.
On the Analysis of Pattern Sequences by Self-Organizing Maps
1994
Abstract

Cited by 29 (0 self)
This thesis is organized in three parts. In the first part, the Self-Organizing Map algorithm is introduced. The discussion focuses on the analysis of the Self-Organizing Map algorithm. It is shown that the nonlinear nature of the algorithm makes it difficult to analyze except in some trivial cases. In the second part the Self-Organizing Map algorithm is applied to several pattern-sequence analysis tasks. The first application is a voice quality analysis system. It is shown that the Self-Organizing Map algorithm can be applied to voice analysis by providing the visualization of certain deviations. The key point in the applicability of the Self-Organizing Map algorithm is the topological nature of the mapping; similar voice samples are mapped to nearby locations in the map. The second application is a speech recognition system. Through several experiments it is demonstrated that by collecting some time-dependent features and using them in conjunction with the basic Self-Organ...
Predicting the Future of Discrete Sequences From Fractal Representations of the Past
2001
Abstract

Cited by 29 (10 self)
We propose a novel approach for building finite-memory predictive models similar in spirit to variable memory length Markov models (VLMMs). The models are constructed by first transforming the n-block structure of the training sequence into a geometric structure of points in a unit hypercube, such that the longer the common suffix shared by any two n-blocks, the closer their point representations lie.
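The geometric construction can be sketched as a chaos-game-style iterated map: push a point toward the corner assigned to each incoming symbol, so histories sharing a long common suffix end up close together. The contraction ratio k = 1/2 and the binary corner assignment are illustrative assumptions.

```python
import numpy as np

def fractal_encode(seq, symbols, k=0.5):
    """Map every prefix of a symbol sequence to a point in a unit hypercube.

    Each step contracts the current point toward the corner assigned to
    the new symbol, so the longer the common suffix of two histories,
    the closer their images lie (k and the corner map are assumptions).
    """
    d = int(np.ceil(np.log2(max(len(symbols), 2))))   # hypercube dimension
    corners = {s: np.array([(i >> b) & 1 for b in range(d)], dtype=float)
               for i, s in enumerate(symbols)}
    x = np.full(d, 0.5)                               # start at the cube center
    points = []
    for s in seq:
        x = k * x + (1.0 - k) * corners[s]            # contract toward corner
        points.append(x.copy())
    return np.array(points)
```

For example, with symbols "ab", the histories "abb" and "bbb" (common suffix "bb") map to nearer points than "aab" and "bbb" (common suffix "b" only).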
Self-Organizing Maps: Stationary States, Metastability and Convergence Rate
Biological Cybernetics, 1992
Abstract

Cited by 28 (2 self)
We investigate the effect of various types of neighborhood functions on the convergence rates and the presence or absence of metastable stationary states of Kohonen's self-organizing feature map algorithm in one dimension. We demonstrate that the time necessary to form a topographic representation of the unit interval [0, 1] may vary over several orders of magnitude depending on the range and also the shape of the neighborhood function by which the weight changes of the neurons in the neighborhood of the winning neuron are scaled. We will prove that for neighborhood functions which are convex on an interval given by the length of the Kohonen chain there exist no metastable states. For all other neighborhood functions, metastable states are present and may trap the algorithm during the learning process. For the widely used Gaussian function there exists a threshold for the width above which metastable states cannot exist. Due to the presence or absence of metastable states, convergence ...