Results 1 – 2 of 2
Controlling the Magnification Factor of Self-Organizing Feature Maps, 1995
Abstract
Cited by 40 (7 self)
The magnification exponents α occurring in adaptive map formation algorithms like Kohonen's self-organizing feature map deviate from the information-theoretically optimal value α = 1 as well as from the values which optimize, e.g., the mean square distortion error (α = 1/3 for one-dimensional maps). At the same time, models for categorical perception such as the "perceptual magnet" effect, which are based on topographic maps, require negative magnification exponents α < 0. We present an extension of the self-organizing feature map algorithm which utilizes adaptive local learning step sizes to actually control the magnification properties of the map. By changing a single parameter, maps with optimal information transfer, with various minimal reconstruction errors, or with an inverted magnification can be generated. Analytic results on this new algorithm are complemented by numerical simulations. 1. Introduction The representation of information in topographic maps is a common property of...
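The abstract's idea of steering the magnification exponent through adaptive local learning step sizes can be illustrated with a minimal 1-D Kohonen map. The sketch below is an assumption-laden toy version, not the authors' exact rule: the control parameter `m` and the crude inverse-spacing density estimate are hypothetical stand-ins, with `m = 0` recovering the standard SOM update.

```python
import numpy as np

def train_som_1d(samples, n_neurons=50, epochs=10,
                 eps0=0.1, sigma=2.0, m=0.0, rng=None):
    """Train a 1-D Kohonen map on scalar inputs.

    m is a hypothetical control parameter: the learning step is scaled
    by a local stimulus-density estimate raised to the power m, so
    m = 0 gives the plain SOM update (illustrative sketch only).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    w = np.sort(rng.uniform(samples.min(), samples.max(), n_neurons))
    idx = np.arange(n_neurons)
    for _ in range(epochs):
        for x in rng.permutation(samples):
            s = np.argmin(np.abs(w - x))                 # winner neuron
            # crude local density estimate: inverse neighbour spacing
            lo, hi = max(s - 1, 0), min(s + 1, n_neurons - 1)
            density = 1.0 / (abs(w[hi] - w[lo]) / (hi - lo) + 1e-9)
            eps = min(eps0 * density ** m, 1.0)          # adaptive local step
            h = np.exp(-0.5 * ((idx - s) / sigma) ** 2)  # neighbourhood kernel
            w += eps * h * (x - w)                       # Kohonen update
    return w
```

With `m = 0` every update uses the global step `eps0`; a nonzero `m` makes the step depend on the local codebook density, which is the kind of per-neuron modulation the abstract says can shift the map's magnification behaviour.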
Analyzing phase transitions in high-dimensional self-organizing maps
BIOL. CYB., 1996
Abstract
Cited by 7 (4 self)
The Self-Organizing Map (SOM), a widely used algorithm for the unsupervised learning of neural maps, can be formulated in a low-dimensional "feature map" variant, which requires pre-specified parameters ("features") for the description of receptive fields, or in a more general high-dimensional variant, which allows the structure of individual receptive fields, as well as their arrangement in a map, to self-organize. We present here a new analytical method to derive conditions for the emergence of structure in SOMs, which is particularly suited for the as yet inaccessible high-dimensional SOM variant. Our approach is based on an evaluation of a map distortion function. It involves only an ansatz for the way stimuli are distributed among map neurons; the receptive fields of the map need not be known explicitly. Using this method we first calculate regions of stability for four possible states of SOMs projecting from a rectangular input space to a ring of neurons. We then analyze the transition from non-oriented to oriented receptive fields in a SOM-based model for the development of orientation maps. In both cases, the analytical results are well corroborated by the results of computer simulations.
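The map distortion function the abstract evaluates can be written, in its generic SOM form, as a neighbourhood-weighted quantization error. The sketch below computes such a distortion for neurons arranged on a ring, matching the paper's ring-of-neurons setup; the Gaussian neighbourhood and the specific form of the sum are generic assumptions, not the paper's exact ansatz.

```python
import numpy as np

def som_distortion(stimuli, weights, sigma=1.0):
    """Neighbourhood-weighted distortion of a SOM on a ring of neurons.

    E = sum_x sum_r h(r, s(x)) * ||x - w_r||^2, where s(x) is the
    winner for stimulus x and h is a Gaussian kernel in the ring
    metric (a generic sketch, not the paper's exact ansatz).
    """
    n = len(weights)
    pos = np.arange(n)
    total = 0.0
    for x in stimuli:
        d2 = np.sum((weights - x) ** 2, axis=1)                  # squared distances
        s = np.argmin(d2)                                        # winner neuron
        ring = np.minimum(np.abs(pos - s), n - np.abs(pos - s))  # ring distance
        h = np.exp(-0.5 * (ring / sigma) ** 2)                   # neighbourhood
        total += np.sum(h * d2)
    return total
```

A distortion function of this kind depends on the weights only through winner assignments and distances, which is why an ansatz for how stimuli are distributed among neurons can substitute for explicit knowledge of the receptive fields.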