Results 1–10 of 11
Controlling the Magnification Factor of Self-Organizing Feature Maps
, 1995
Abstract

Cited by 41 (7 self)
The magnification exponents α occurring in adaptive map formation algorithms like Kohonen's self-organizing feature map deviate from the information-theoretically optimal value α = 1 as well as from the values which optimize, e.g., the mean square distortion error (α = 1/3 for one-dimensional maps). At the same time, models for categorical perception such as the "perceptual magnet" effect, which are based on topographic maps, require negative magnification exponents α < 0. We present an extension of the self-organizing feature map algorithm which utilizes adaptive local learning step sizes to actually control the magnification properties of the map. By changing a single parameter, maps with optimal information transfer, with various minimal reconstruction errors, or with an inverted magnification can be generated. Analytic results on this new algorithm are complemented by numerical simulations.
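The core idea — a Kohonen update whose step size is scaled by a power of the local input density at the winner — can be sketched in a few lines. The sketch below assumes a 1-D map and uses a crude winner-count estimate of the local density; both the density estimator and the normalization of the step-size rule are illustrative choices, not the authors' exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, n_steps = 50, 20000
eps0, sigma = 0.1, 2.0   # base step size and neighborhood width
m = 0.5                  # exponent of the local step-size rule (the control parameter)

w = np.sort(rng.random(n_units))   # ordered 1-D codebook in [0, 1]
wins = np.ones(n_units)            # winner counts: a crude local density estimate
pos = np.arange(n_units)

for _ in range(n_steps):
    x = rng.random() ** 2                      # non-uniform input density on [0, 1]
    s = np.argmin(np.abs(w - x))               # best-matching unit (winner)
    wins[s] += 1
    p_hat = n_units * wins[s] / wins.sum()     # ~1 under a uniform win distribution
    eps_s = eps0 * p_hat ** m                  # adaptive local learning step size
    h = np.exp(-0.5 * ((pos - s) / sigma) ** 2)
    w += eps_s * h * (x - w)                   # Kohonen update with local step size
```

Varying the single exponent `m` changes how strongly the map over- or under-represents dense regions, which is the sense in which one parameter controls the magnification.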
Theoretical aspects of the SOM algorithm
 Neurocomputing
, 1998
Abstract

Cited by 34 (6 self)
The SOM algorithm is very astonishing. On the one hand, it is very simple to write down and to simulate, and its practical properties are clear and easy to observe. On the other hand, its theoretical properties still remain without proof in the general case, despite the great efforts of several authors. In this paper, we review the latest results and provide some conjectures for future work. Keywords: Self-organization, Kohonen algorithm, Convergence of stochastic processes, Vector quantization.
Kohonen Maps Versus Vector Quantization for Data Analysis
, 1997
Abstract

Cited by 14 (5 self)
Besides their topological properties, Kohonen maps are often used for vector quantization only. These self-organised networks are often compared to other standard and/or adaptive vector quantization methods, and, according to the large literature on the subject, show either better or worse properties in terms of quantization, speed of convergence, approximation of probability densities, clustering, etc. The purpose of this paper is to define more precisely some commonly encountered problems, and to try to give some answers through well-known theoretical arguments or simulations on simple examples.
Magnification control in self-organizing maps and neural gas, Neural Computation 18
, 2006
Abstract

Cited by 8 (5 self)
We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches of magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. Thereby, the approach of concave-convex learning in SOM is extended to a more general description, whereas the concave-convex learning for NG is new. In general, the control mechanisms generate only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case.
Explicit magnification control of self-organizing maps for ‘forbidden’ data
 IEEE Trans. Neural Net
, 2007
Abstract

Cited by 7 (5 self)
Abstract—In this paper, we examine the scope of validity of the explicit self-organizing map (SOM) magnification control scheme of Bauer et al. (1996) on data for which the theory does not guarantee success, namely data that are n-dimensional, n ≥ 2, and whose components in the different dimensions are not statistically independent. The Bauer et al. algorithm is very attractive for the possibility of faithful representation of the probability density function (pdf) of a data manifold, or for discovery of rare events, among other properties. Since theoretically unsupported data of higher dimensionality and higher complexity would benefit most from the power of explicit magnification control, we conduct systematic simulations on "forbidden" data. For the unsupported n = 2 cases that we investigate, the simulations show that even though the magnification exponent α_achieved obtained by magnification control is not the same as the desired α_desired, α_achieved systematically follows α_desired with a slowly increasing positive offset. We show that for simple synthetic higher-dimensional data, theoretically optimum pdf matching (α_achieved = 1) can be achieved, and that negative magnification has the desired effect of improving the detectability of rare classes. In addition, we further study theoretically unsupported cases with real data. Index Terms—Data mining, high-dimensional data, map magnification, self-organizing maps (SOMs).
Mathematical Aspects of Neural Networks
 European Symposium of Artificial Neural Networks 2003
, 2003
Abstract

Cited by 6 (4 self)
In this tutorial paper about mathematical aspects of neural networks, we will focus on two directions: on the one hand, we will motivate standard mathematical questions and the well-studied theory of classical neural models used in machine learning. On the other hand, we collect some recent theoretical results (as of the beginning of 2003) in the respective areas. Thereby, we follow the dichotomy offered by the overall network structure and restrict ourselves to feedforward networks, recurrent networks, and self-organizing neural systems, respectively.
Winner-relaxing and winner-enhancing Kohonen maps: Maximal mutual information from enhancing the winner
 Complexity
, 2003
Abstract

Cited by 5 (3 self)
The magnification behaviour of a generalized family of self-organizing feature maps, the Winner-Relaxing and Winner-Enhancing Kohonen algorithms, is analyzed via the magnification law in the one-dimensional case, which can be obtained analytically. The Winner-Enhancing case allows one to achieve a magnification exponent of one and therefore provides optimal mapping in the sense of information theory. A numerical verification of the magnification law is included, and the ordering behaviour is analyzed. Compared to the original Self-Organizing Map and some other approaches, the generalized Winner-Enhancing algorithm requires minimal extra computation per learning step and is conveniently easy to implement.
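The family can be sketched as a standard 1-D Kohonen update plus one extra term applied to the winner only, with a single parameter λ selecting relaxing (λ > 0), plain SOM (λ = 0), or enhancing (λ < 0) behaviour. The form below is an illustrative reading of such a winner-only correction term, not a verbatim transcription of the paper's rule:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_steps = 50, 20000
eps, sigma = 0.1, 2.0
lam = -0.5   # lam < 0: winner-enhancing; lam = 0: plain SOM; lam > 0: winner-relaxing

w = np.sort(rng.random(n_units))   # ordered 1-D codebook in [0, 1]
pos = np.arange(n_units)

for _ in range(n_steps):
    x = rng.random()
    s = np.argmin(np.abs(w - x))                  # winner
    h = np.exp(-0.5 * ((pos - s) / sigma) ** 2)   # neighborhood function
    step = eps * h * (x - w)                      # ordinary Kohonen step
    # extra winner-only term: a lam-weighted sum of the neighbors' pull
    step[s] -= eps * lam * np.sum(h[pos != s] * (x - w[pos != s]))
    w += step
```

Note the extra cost per step is a single weighted sum, consistent with the abstract's claim of minimal extra computation.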
Monitoring the Formation of Kernel-Based Topographic Maps with Application to Hierarchical Clustering of Music Signals
Abstract

Cited by 2 (2 self)
When using topographic maps for clustering purposes, which is now being considered in the data mining community, it is crucial that the maps are free of topological defects.
Magnification control for batch neural gas
Abstract

Cited by 1 (0 self)
Abstract. It is well known that online neural gas (NG) possesses a magnification exponent different from the information-theoretically optimal one in adaptive map formation. The exponent can be explicitly controlled by a small change of the learning algorithm. Batch NG constitutes a fast alternative optimization scheme for NG vector quantizers which possesses the same magnification factor as standard online NG. In this paper, we propose a method to integrate magnification control by local learning into batch NG by linking magnification control to an underlying cost function. We validate the learning rule in an experimental setting.
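For reference, plain batch NG — the baseline this paper modifies — computes, per epoch, a rank-based neighborhood weight for every data/prototype pair and then sets each prototype to the weighted mean of all data; magnification control would enter by reweighting this average with local factors. A minimal sketch (the annealing schedule and toy data are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((500, 2))      # toy 2-D data set on the unit square
n_units, n_epochs = 20, 40
w = X[rng.choice(len(X), n_units, replace=False)].copy()

for epoch in range(n_epochs):
    # anneal the neighborhood range from 10 down to 0.1
    lam = 10.0 * (0.1 / 10.0) ** (epoch / (n_epochs - 1))
    d = np.linalg.norm(X[:, None, :] - w[None, :, :], axis=2)   # (500, 20) distances
    ranks = np.argsort(np.argsort(d, axis=1), axis=1)           # rank k_ij of unit j for sample i
    h = np.exp(-ranks / lam)                                    # neighborhood weights h_lambda(k)
    # batch update: each prototype becomes the h-weighted mean of all data
    # (denominator floored to guard against underflow for never-winning units)
    w = (h.T @ X) / np.maximum(h.sum(axis=0), 1e-12)[:, None]
```

Because each epoch is one closed-form reassignment over the whole data set rather than many stochastic steps, this is the "fast alternative optimization scheme" the abstract refers to.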
The Self-Organizing Maps: Background, Theories, Extensions and Applications
Abstract

Cited by 1 (0 self)
For many years, artificial neural networks (ANNs) have been studied and used to model information processing systems based on or inspired by biological neural structures. They not only can provide solutions with improved performance when compared with traditional problem-solving methods, but ...