Results 1–10 of 17
Controlling the Magnification Factor of Self-Organizing Feature Maps
, 1995
Abstract
Cited by 52 (7 self)
The magnification exponents α occurring in adaptive map formation algorithms like Kohonen's self-organizing feature map deviate from the information-theoretically optimal value α = 1 as well as from the values which optimize, e.g., the mean square distortion error (α = 1/3 for one-dimensional maps). At the same time, models for categorical perception, such as the "perceptual magnet" effect, which are based on topographic maps require negative magnification exponents α < 0. We present an extension of the self-organizing feature map algorithm which utilizes adaptive local learning step sizes to actually control the magnification properties of the map. By changing a single parameter, maps with optimal information transfer, with various minimal reconstruction errors, or with an inverted magnification can be generated. Analytic results on this new algorithm are complemented by numerical simulations.
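The adaptive local learning step sizes described in this abstract can be sketched roughly as follows. This is a hypothetical simplification, not the authors' exact rule: the exponent `m`, the neighbour-spacing density estimate, and all function names here are assumptions; the paper derives the precise scaling of the winner's step size from the local input density.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som_local_lr(data, n_units=50, n_steps=5000,
                       eps0=0.1, sigma=2.0, m=0.0):
    """1-D SOM with locally adaptive learning step sizes (sketch).

    m = 0 recovers the standard SOM; m != 0 rescales the step around
    each winner by a crude local density estimate, which is the kind
    of mechanism that shifts the magnification exponent."""
    w = np.sort(rng.uniform(data.min(), data.max(), n_units))
    units = np.arange(n_units)
    for _ in range(n_steps):
        x = data[rng.integers(len(data))]
        s = np.argmin(np.abs(w - x))            # winner unit
        # crude local density estimate from neighbour spacing
        lo, hi = max(s - 1, 0), min(s + 1, n_units - 1)
        p_est = 1.0 / (abs(w[hi] - w[lo]) + 1e-9)
        eps = eps0 * p_est ** m                 # local step size
        h = np.exp(-0.5 * ((units - s) / sigma) ** 2)
        w += eps * h * (x - w)
    return np.sort(w)
```

With `m = 0` this is just the usual online SOM update; the point of the local factor is that the stationary prototype density then follows a power of the input density with a controllable exponent.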
Theoretic aspects of the SOM algorithm
 in: Proceedings of the Workshop on Self-Organising Maps (WSOM’97)
, 1997
Kohonen Maps Versus Vector Quantization for Data Analysis
, 1997
Abstract
Cited by 16 (6 self)
Besides their topological properties, Kohonen maps are often used for vector quantization only. These self-organised networks are often compared to other standard and/or adaptive vector quantization methods and, according to the large literature on the subject, show either better or worse properties in terms of quantization, speed of convergence, approximation of probability densities, clustering, etc. The purpose of this paper is to define more precisely some commonly encountered problems, and to try to give some answers through well-known theoretical arguments or simulations on simple examples.
Explicit magnification control of self-organizing maps for ‘forbidden’ data
Abstract
Cited by 14 (7 self)
We examine the scope of validity of the explicit SOM magnification control scheme of Bauer, Der, and Herrmann [1] on data for which the theory does not guarantee success, namely data that are n-dimensional, n ≥ 2, and whose components in the different dimensions are not statistically independent. The Bauer et al. algorithm is very attractive for the possibility of faithful representation of the pdf of a data manifold, or for discovery of rare events, among other properties. Since theoretically unsupported data of higher dimensionality and higher complexity would benefit most from the power of explicit magnification control, we conduct systematic simulations on "forbidden" data. For the unsupported n = 2 cases that we investigate, the simulations show that even though the magnification exponent α_achieved obtained by magnification control is not the same as the desired α_desired, α_achieved systematically follows α_desired with a slowly increasing positive offset. We show that for simple synthetic higher-dimensional data, information-theoretically optimal pdf matching (α_achieved = 1) can be achieved, and that negative magnification has the desired effect of improving the detectability of rare classes. In addition, we further study theoretically unsupported cases with real data.
Magnification control in self-organizing maps and neural gas
 NEURAL COMPUTATION
, 2006
Abstract
Cited by 11 (5 self)
We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches of magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. Thereby, the approach of concave-convex learning in SOM is extended to a more general description, whereas the concave-convex learning for NG is new. In general, the control mechanisms generate only slightly different behavior when comparing the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case.
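For reference, the baseline neural gas update that the control schemes above modify can be sketched as below. This is a minimal sketch of standard NG, not code from the paper; the comment marks where a localized-learning factor would enter (an assumption about where such a scheme hooks in).

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_gas_step(w, x, eps=0.05, lam=2.0):
    """One online neural gas (NG) update: every prototype moves toward
    the input, weighted by a decaying function of its distance rank."""
    dists = np.linalg.norm(w - x, axis=1)
    ranks = np.argsort(np.argsort(dists))      # 0 = closest prototype
    h = np.exp(-ranks / lam)
    # localized learning would additionally scale eps per prototype
    # here, as a function of the local input density (assumption)
    return w + eps * h[:, None] * (x - w)

w = rng.uniform(0.0, 1.0, size=(10, 2))
for _ in range(5000):
    w = neural_gas_step(w, rng.uniform(0.0, 1.0, size=2))
```

The double `argsort` converts distances into ranks; unlike the SOM, the neighbourhood is defined in input space via these ranks rather than on a fixed lattice, which is why the NG results quoted above hold in any data dimension.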
The Self-Organizing Maps: Background, Theories, Extensions and Applications
Abstract
Cited by 7 (0 self)
For many years, artificial neural networks (ANNs) have been studied and used to model information processing systems based on or inspired by biological neural structures. They not only can provide solutions with improved performance when compared with traditional problem-solving methods, but ...
Mathematical Aspects of Neural Networks
 European Symposium on Artificial Neural Networks (ESANN) 2003
, 2003
Abstract
Cited by 6 (4 self)
In this tutorial paper about mathematical aspects of neural networks, we focus on two directions: on the one hand, we motivate standard mathematical questions and the well-studied theory of classical neural models used in machine learning. On the other hand, we collect some recent theoretical results (as of the beginning of 2003) in the respective areas. Thereby, we follow the dichotomy offered by the overall network structure and restrict ourselves to feedforward networks, recurrent networks, and self-organizing neural systems, respectively.
Winner-relaxing and winner-enhancing Kohonen maps: Maximal mutual information from enhancing the winner
 Complexity
, 2003
Abstract
Cited by 5 (3 self)
The magnification behaviour of a generalized family of self-organizing feature maps, the Winner-Relaxing and Winner-Enhancing Kohonen algorithms, is analyzed via the magnification law in the one-dimensional case, which can be obtained analytically. The Winner-Enhancing case allows one to achieve a magnification exponent of one and therefore provides optimal mapping in the sense of information theory. A numerical verification of the magnification law is included, and the ordering behaviour is analyzed. Compared to the original Self-Organizing Map and some other approaches, the generalized Winner-Enhancing algorithm requires minimal extra computations per learning step and is conveniently easy to implement.
Villmann, T.: Magnification control for batch neural gas
 in: ESANN (2006)
Abstract
Cited by 3 (0 self)
Neural gas (NG) constitutes a very robust clustering algorithm which can be derived as stochastic gradient descent from a cost function closely connected to the quantization error. In the limit, an NG network samples the underlying data distribution. Thereby, the connection is not linear; rather, it follows a power law with a magnification exponent different from the information-theoretically optimal one in adaptive map formation. There exist a couple of schemes to explicitly control the exponent, such as local learning, which leads to a small change of the learning algorithm of NG. Batch NG constitutes a fast alternative optimization scheme for NG vector quantizers which has been derived from the same cost function and which constitutes a fast Newton optimization scheme. It possesses the same magnification factor (different from 1) as standard online NG. In this paper, we propose a method to integrate magnification control by local learning into batch NG. Thereby, the key observation is a link of local learning to an underlying cost function which opens the way towards alternative, e.g. batch, optimization schemes. We validate the learning rule derived from this altered cost function in an artificial experimental setting, and we demonstrate the benefit of magnification control for sampling rare events in a real data set.
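The batch NG scheme mentioned above can be sketched as follows: each epoch, ranks are computed for all data at once and each prototype is set to a rank-weighted mean. This is a minimal sketch of plain batch NG, not the paper's magnification-controlled rule; the comment marks where the local-learning reweighting from the altered cost function would enter (an assumption).

```python
import numpy as np

def batch_ng(data, n_units=10, n_epochs=30, lam=2.0):
    """Batch neural gas (sketch): a fixed-point iteration in which
    every prototype becomes the rank-weighted mean of all data."""
    rng = np.random.default_rng(0)
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    for _ in range(n_epochs):
        # pairwise distances: (n_data, n_units)
        d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
        ranks = np.argsort(np.argsort(d, axis=1), axis=1)
        h = np.exp(-ranks / lam)
        # magnification control by local learning would reweight h
        # per data point here, by a power of the local density (assumption)
        w = (h.T @ data) / h.sum(axis=0)[:, None]
    return w
```

Because each epoch solves for the prototypes in closed form rather than taking small stochastic steps, batch NG typically converges in far fewer passes than online NG, which is the speed advantage the abstract refers to.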
Winner-Relaxing Self-Organizing Maps
 Neural Computation
, 2005
Abstract
Cited by 2 (1 self)
A new family of self-organizing maps, the Winner-Relaxing Kohonen Algorithm, is introduced as a generalization of a variant given by Kohonen in 1991. The magnification behaviour is calculated analytically. For the original variant a magnification exponent of 4/7 is derived; the generalized version allows one to steer the magnification in the wide range from exponent 1/2 to 1 in the one-dimensional case, and thus provides optimal mapping in the sense of information theory. The Winner-Relaxing Algorithm requires minimal extra computations per learning step and is conveniently easy to implement.
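The "minimal extra computations per learning step" can be seen in a sketch of one winner-relaxing update on a 1-D chain. The sign convention and the placement of the relaxing parameter `lam` here are assumptions for illustration; the paper gives the exact rule (with `lam` = 1/2 corresponding to Kohonen's 1991 variant).

```python
import numpy as np

def wr_som_step(w, x, eps=0.1, sigma=1.5, lam=0.5):
    """One winner-relaxing SOM step on a 1-D chain (sketch).

    lam = 0 recovers the plain SOM; lam != 0 adds a single extra term
    to the winner, built from the pulls already computed for its
    neighbours, hence only O(n) extra work per step."""
    n = len(w)
    units = np.arange(n)
    s = np.argmin(np.abs(w - x))                    # winner
    h = np.exp(-0.5 * ((units - s) / sigma) ** 2)   # neighbourhood
    dw = eps * h * (x - w)                          # standard SOM part
    # extra winner term: the winner is shifted against the summed
    # neighbourhood pull on the other units (sign is an assumption)
    mask = units != s
    dw[s] -= eps * lam * np.sum(h[mask] * (x - w[mask]))
    return w + dw
```

Since the neighbourhood pulls `h * (x - w)` are needed for the standard update anyway, the relaxing term reuses them, which is why the overhead per learning step is minimal.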