Results 1 - 3 of 3
Controlling the Magnification Factor of Self-Organizing Feature Maps
1995
Cited by 41 (7 self)

Abstract:
The magnification exponents α occurring in adaptive map-formation algorithms like Kohonen's self-organizing feature map deviate from the information-theoretically optimal value α = 1 as well as from the values which optimize, e.g., the mean square distortion error (α = 1/3 for one-dimensional maps). At the same time, models for categorical perception such as the "perceptual magnet" effect, which are based on topographic maps, require negative magnification exponents α < 0. We present an extension of the self-organizing feature map algorithm which utilizes adaptive local learning step sizes to actually control the magnification properties of the map. By changing a single parameter, maps with optimal information transfer, with various minimal reconstruction errors, or with an inverted magnification can be generated. Analytic results on this new algorithm are complemented by numerical simulations.

1. Introduction
The representation of information in topographic maps is a common property of...
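The abstract does not spell out the adaptive local step-size rule, so the following is only a minimal sketch of the general idea on a 1-D Kohonen map: each unit keeps a crude density estimate (its running winner count) and its learning step is scaled by that estimate raised to a control exponent `m`. The names `eps0`, `sigma`, `m` and the adaptation rule itself are illustrative assumptions, not the paper's exact algorithm; for `m = 0` the loop reduces to the standard SOFM update.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50                          # units on a 1-D chain
w = np.sort(rng.random(N))      # codebook vectors in the unit interval
wins = np.ones(N)               # running winner counts: crude density estimate
eps0, sigma, m = 0.1, 2.0, 0.5  # base step, neighborhood width, control exponent
r = np.arange(N)

for t in range(20000):
    x = rng.random() ** 2                         # non-uniform stimulus density on [0, 1]
    s = int(np.argmin(np.abs(w - x)))             # best-matching unit
    wins[s] += 1.0
    h = np.exp(-((r - s) ** 2) / (2 * sigma**2))  # neighborhood function
    eps = eps0 * (N * wins / wins.sum()) ** m     # local step sizes (illustrative rule)
    w += eps * h * (x - w)                        # Kohonen update with local rates
```

Because the stimulus density here is concentrated near 0, the trained codebook places more units in that region, which is the magnification effect the exponent is meant to control.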
Self-Organizing Feature Maps with Self-Organizing Neighborhood Widths
In Proc. 1995 IEEE Intern. Conf. on Neural Networks
1995
Cited by 5 (4 self)

Abstract:
Self-organizing feature maps with self-determined local neighborhood widths are applied to construct principal manifolds of data distributions. This task exemplifies the problem of learning the learning parameters in neural networks. The proposed algorithm is based upon analytical results on phase transitions in self-organizing feature maps available for idealized situations. Illustrative simulations demonstrate that deviations from the theoretically studied situation are compensated adaptively, and that the capability of topology preservation is crucial for avoiding overfitting effects. Further, the relevance of the parameter-learning scheme for hierarchical feature maps is discussed.

1 Introduction
Many learning algorithms are influenced by the choice and by the course of online modification of particular parameters. Whereas the learning algorithm itself represents a formalized principle, such as the minimization of an error measure, principles to govern parameter settings ...
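The abstract gives neither the width-adaptation rule nor the data model, so the sketch below only illustrates the setup: a 1-D chain of units fitted to a noisy 1-D manifold in 2-D, where each unit carries its own neighborhood width that is nudged toward its recent quantization error. The toy data, the adaptation rule, and the constants `eps_w`, `eps_s` are assumptions for the sketch, not the analytically derived scheme from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 30
w = 0.45 + 0.1 * rng.random((N, 2))  # 2-D codebook for a 1-D chain of units
sigma = np.full(N, 3.0)              # one neighborhood width per unit
eps_w, eps_s = 0.1, 0.02             # weight and width learning rates (assumed)
r = np.arange(N)

for t in range(10000):
    u = rng.random()                                   # noisy 1-D manifold in 2-D
    x = np.array([u, 0.5 + 0.05 * rng.standard_normal()])
    s = int(np.argmin(np.linalg.norm(w - x, axis=1)))  # best-matching unit
    h = np.exp(-((r - s) ** 2) / (2 * sigma[s] ** 2))
    w += eps_w * h[:, None] * (x - w)
    # illustrative width adaptation: relax the winner's width toward a multiple
    # of its quantization error, so widths shrink as the map comes to fit the data
    err = np.linalg.norm(x - w[s])
    sigma[s] += eps_s * (N * err - sigma[s])
    sigma[s] = max(sigma[s], 0.3)                      # keep widths positive
```

The point of the local widths is the one made in the abstract: a width that stays too large over-smooths the manifold, while one that shrinks too early lets the chain overfit the noise.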
Optimal Magnification Factors in Self-Organizing Feature Maps
In Proc. ICANN'95
1995
Cited by 2 (1 self)

Abstract:
Introduction
Kohonen's self-organizing feature maps (SOFMs) [8] usually exhibit a selective magnification of often-stimulated regions of their input space. This amounts to a larger transmission of information about the stimulus ensemble than in maps with a constant resolution. Such a selective magnification is not only observed in biological maps, but is also often regarded as a desirable design objective in technical contexts. For at least three reasons, the magnification properties of SOFMs deserve further investigation:

1. An analysis by Ritter and Schulten [10] demonstrated that the SOFM algorithm does not yield a maximum-entropy map (i.e. does not transmit the maximum amount of information).

2. As a related argument, we observe that it depends on the error criterion one applies which magnification properties are to be regarded as optimal. For example, a minimal worst-case error is achieved by maps with all receptive fields (or Voronoi polygons) being of equal extension, i.e. ...
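The magnification exponent discussed in these abstracts relates local codebook density M(x) to stimulus density P(x) via M ∝ P^α, with α = 1 the information-theoretically optimal value and α = 1/3 the minimum-distortion value for one-dimensional maps. A small numeric check can recover the exponent by log-log regression; to keep it self-contained, the units here are placed analytically at the quantiles of an idealized codebook density rather than taken from a trained map.

```python
import numpy as np

# input density P(x) = 2x on [0, 1]; place units so that the codebook
# density obeys M(x) ∝ P(x)**alpha with a known exponent (idealized map)
alpha_true = 1.0 / 3.0  # the 1-D minimum-distortion exponent
N = 200
q = (np.arange(1, N + 1) - 0.5) / N   # evenly spaced quantile levels
w = q ** (1.0 / (alpha_true + 1.0))   # CDF of M ∝ x**alpha is x**(alpha + 1)

# recover the exponent: local codebook density ≈ 1 / (N * unit spacing)
spacing = np.gradient(w)
M = 1.0 / (N * spacing)
P = 2.0 * w
alpha_est = np.polyfit(np.log(P), np.log(M), 1)[0]  # slope of log M vs log P
```

The fitted slope comes out close to 1/3, which is how one would verify empirically which magnification exponent a given map algorithm actually realizes.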