Results 1–10 of 33
Intrinsic Dimensionality Estimation with Optimally Topology Preserving Maps
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
Abstract
Cited by 39 (3 self)
A new method for analyzing the intrinsic dimensionality (ID) of low-dimensional manifolds in high-dimensional feature spaces is presented. The basic idea is to first extract a low-dimensional representation that captures the intrinsic topological structure of the input data and then to analyze this representation, i.e. estimate the intrinsic dimensionality. More specifically, the representation we extract is an optimally topology preserving feature map (OTPM), which is an undirected parametrized graph with a pointer in the input space associated with each node. Estimation of the intrinsic dimensionality is based on local PCA of the pointers of the nodes in the OTPM and their direct neighbors. The method has a number of important advantages compared with previous approaches: First, it can be shown to have only linear time complexity w.r.t. the dimensionality of the input space, in contrast to conventional PCA based approaches, which have cubic complexity and hence become computational imp...
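The local-PCA idea in this abstract can be illustrated with a small sketch. This is not the paper's OTPM construction: neighborhoods are taken as plain k-nearest neighbors rather than graph neighbors, and the function name, the 5% eigenvalue threshold, and the synthetic data are all assumptions made here for illustration.

```python
import numpy as np

def local_pca_id(points, k=10, thresh=0.05):
    """Estimate intrinsic dimensionality: for each point, run PCA on its
    k nearest neighbors and count the significant eigenvalues."""
    dims = []
    for i in range(len(points)):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[1:k + 1]]          # k nearest neighbors, excluding the point itself
        ev = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))[::-1]
        dims.append(int(np.sum(ev > thresh * ev[0])))  # eigenvalues above 5% of the largest
    return int(round(np.mean(dims)))                   # average the local estimates

# a 2-D plane embedded in a 5-D feature space
rng = np.random.default_rng(0)
data = np.zeros((200, 5))
data[:, :2] = rng.normal(size=(200, 2))
estimated_id = local_pca_id(data)
```

Unlike a single global PCA over all points, the per-neighborhood estimates stay local, which is what lets this family of methods follow a curved manifold.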
Effects of Categorization and Discrimination Training on Auditory Perceptual Space, 1999
Abstract
Cited by 26 (4 self)
Psychophysical phenomena such as categorical perception and the perceptual magnet effect indicate that our auditory perceptual spaces are warped for some stimuli. This paper investigates the effects of two different kinds of training on auditory perceptual space. It is first shown that categorization training using nonspeech stimuli, in which subjects learn to identify stimuli within a particular frequency range as members of the same category, can lead to a decrease in sensitivity to stimuli in that category. This phenomenon is an example of acquired similarity and apparently has not been previously demonstrated for a category-relevant dimension. Discrimination training with the same set of stimuli was shown to have the opposite effect: subjects became more sensitive to differences in the stimuli presented during training. Further experiments investigated some of the conditions that are necessary to generate the acquired similarity found in the first experiment. The results of these...
Neural Maps and Topographic Vector Quantization, 1999
Abstract
Cited by 19 (4 self)
Neural maps combine the representation of data by codebook vectors, like a vector quantizer, with the property of topography, like a continuous function. While the quantization error is simple to compute and to compare between different maps, the topography of a map is difficult to define and to quantify. Yet, topography of a neural map is an advantageous property, e.g. in the presence of noise in a transmission channel, in data visualization, and in numerous other applications. In this paper we review some conceptual aspects of definitions of topography, and some recently proposed measures to quantify it. We apply the measures first to neural maps trained on synthetic data sets, and check the measures for properties like reproducibility, scalability, and systematic dependence of the value of the measure on the topology of the map. We then test the measures on maps generated for four real-world data sets: a chaotic time series, speech data, and two sets of image data. The measures ...
Neural Maps in Remote Sensing Image Analysis
Neural Networks, 2003
Abstract
Cited by 15 (12 self)
We study the application of Self-Organizing Maps for the analysis of remote sensing spectral images. Advanced airborne and satellite-based imaging spectrometers produce very high-dimensional spectral signatures that provide key information to many scientific investigations about the surface and atmosphere of Earth and other planets. These new, sophisticated data demand new and advanced approaches to cluster detection, visualization, and supervised classification. In this article we concentrate on the issue of faithful topological mapping in order to avoid false interpretations of cluster maps created by an SOM. We describe several new extensions of the standard SOM, developed in the past few years: the Growing Self-Organizing Map, magnification control, and Generalized Relevance Learning Vector Quantization, and demonstrate their effect on both low-dimensional traditional multispectral imagery and 200-dimensional hyperspectral imagery.
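The standard SOM that these extensions build on fits in a few lines. The grid size, decay schedules, and uniform stand-in data below are arbitrary assumptions for the sketch, not settings from the article.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal on-line SOM: find the best-matching unit (BMU), then pull it
    and its grid neighbors (Gaussian neighborhood) toward the input."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows * cols, data.shape[1]))          # codebook vectors
    pos = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                     # decaying learning rate
        sigma = 0.5 + sigma0 * (1.0 - t / epochs)         # shrinking neighborhood radius
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            h = np.exp(-np.sum((pos - pos[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)                # neighborhood-weighted update
    return w

data = np.random.default_rng(1).random((300, 3))          # stand-in for spectral signatures
codebook = train_som(data)
qe = np.mean([np.min(np.linalg.norm(codebook - x, axis=1)) for x in data])
```

Faithful topological mapping, the article's concern, means neighboring positions in `pos` should end up with neighboring pointers in `w`; note that the quantization error `qe` alone cannot detect a violation of this.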
Representation of sound categories in auditory cortical maps
Journal of Speech, Language and Hearing Research, 2004
Abstract
Cited by 15 (1 self)
Functional magnetic resonance imaging (fMRI) was used to investigate the representation of sound categories in human auditory cortex. Experiment 1 investigated the representation of prototypical (good) and non-prototypical (bad) examples of a vowel sound. Listening to prototypical examples of a vowel resulted in less auditory cortical activation than listening to non-prototypical examples. Experiments 2 and 3 investigated the effects of categorization training and discrimination training with novel nonspeech sounds on auditory cortical representations. The two training tasks were shown to have opposite effects on the auditory cortical representation of sounds experienced during training: discrimination training led to an increase in the amount of activation caused by the training stimuli, whereas categorization training led to decreased activation. These results indicate that the brain efficiently shifts neural resources away from regions of acoustic space where discrimination between sounds is not behaviorally important (e.g., near the center of a sound category) and toward regions where accurate discrimination is needed. The results also provide a straightforward neural account of learned aspects of perceptual distortion near sound categories: sounds from the center of a ...
Kohonen Maps Versus Vector Quantization for Data Analysis, 1997
Abstract
Cited by 14 (5 self)
Besides their topological properties, Kohonen maps are often used for vector quantization only. These self-organized networks are often compared to other standard and/or adaptive vector quantization methods, and, according to the large literature on the subject, show either better or worse properties in terms of quantization, speed of convergence, approximation of probability densities, clustering, etc. The purpose of this paper is to define more precisely some commonly encountered problems, and to try to give some answers through well-known theoretical arguments or simulations on simple examples.
Forbidden magnification? I
European Symposium on Artificial Neural Networks, 2004
Abstract
Cited by 9 (6 self)
This paper presents some interesting results obtained by the algorithm of Bauer, Der and Hermann (BDH) [1] for magnification control in Self-Organizing Maps. Magnification control in SOMs refers to the modification of the relationship between the probability density functions of the input samples and their prototypes (SOM weights). The above-mentioned algorithm enables explicit control of the magnification properties of a SOM; however, the available theory restricts its validity to 1-D data, or to 2-D data when the stimulus density separates. This discourages the use of the BDH algorithm for practical applications. In this paper we present results of careful simulations that show the scope of this algorithm when applied to more general, "forbidden" data. We also demonstrate the application of negative magnification to magnify rare classes in the data to enhance their detectability.
Neural and Statistical Methods for the Visualization of Multidimensional Data
Dissertation, Katedra Metod Komputerowych, UMK, 2001
Abstract
Cited by 8 (2 self)
In many fields of engineering science we have to deal with multivariate numerical data. In order to choose the technique that is best suited to a given task, it is necessary to get an insight into the data and to "understand" them. Much of the information that allows the understanding of multivariate data, that is, the description of its global structure and the presence and shape of clusters or outliers, can be gained through data visualization. Multivariate data visualization can be realized through a reduction of the data dimensionality, which is often performed by well-known mathematical and statistical tools such as Principal Components Analysis or Multidimensional Scaling. Artificial neural networks have developed and found applications mainly in the last two decades, and they are now considered a mature field of research. This thesis investigates the use of existing algorithms as applied to multivariate data visualization. First an overview of existing neural and statistical techniques applied to data visualization is presented. Then a comparison is made between two chosen algorithms from the point of view of multivariate data visualization. The chosen neural network algorithm is Kohonen's Self-Organizing Map, and the statistical technique is Multidimensional Scaling. The advantages and drawbacks of both approaches, from the theoretical and practical viewpoints, are brought to light. The preservation of data topology by those two mapping techniques is discussed. The multidimensional scaling method was analyzed in detail, the importance of each parameter was determined, and the technique was implemented in metric and non-metric versions. Improvements to the algorithm were proposed in order to increase the performance of the mapping process. A graphic...
Magnification Control in Self-Organizing Maps and Neural Gas
Neural Computation 18, 2006
Abstract
Cited by 8 (5 self)
We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches of magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. Thereby, the approach of concave-convex learning in SOM is extended to a more general description, whereas the concave-convex learning for NG is new. In general, the control mechanisms generate only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case.
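For orientation, a bare neural gas loop, without any of the magnification-control mechanisms discussed in the abstract, looks like this. The rank-based neighborhood is the defining ingredient; the decay schedules and test data are assumptions made for the sketch.

```python
import numpy as np

def neural_gas(data, n_units=10, epochs=40, lr0=0.5, lam0=5.0, seed=0):
    """Minimal neural gas: every unit is updated with a weight that decays
    exponentially in its distance *rank* for the current sample, not in any
    fixed grid position (there is no grid)."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (0.01 / lr0) ** (t / epochs)        # exponentially decaying rate
        lam = lam0 * (0.1 / lam0) ** (t / epochs)      # shrinking rank neighborhood
        for x in rng.permutation(data):
            order = np.argsort(np.linalg.norm(w - x, axis=1))
            ranks = np.empty(n_units)
            ranks[order] = np.arange(n_units)          # rank 0 = closest unit
            w += lr * np.exp(-ranks / lam)[:, None] * (x - w)
    return w

data = np.random.default_rng(2).random((300, 2))
units = neural_gas(data)
qe = np.mean([np.min(np.linalg.norm(units - x, axis=1)) for x in data])
```

Because there is no fixed grid, NG has no output topology to violate, which is one intuition for why its theoretical results tend to hold in any data dimension while SOM results are often restricted to the one-dimensional case.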
Supervised Neural Gas for Learning Vector Quantization, 2002
Abstract
Cited by 7 (3 self)
In this contribution we combine the generalized learning vector quantization (GLVQ) approach with the neighborhood-oriented learning of the neural gas network (NG). In this way we obtain a supervised version of the NG, which we call the supervised NG (SNG). We show that the SNG is more robust than the GLVQ because the neighborhood learning avoids numerical instabilities that may occur in complicated classification tasks, such as in the case of multimodal data.
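The prototype dynamics behind LVQ-type schemes can be sketched with plain LVQ1, which is not the GLVQ/SNG cost-function variant of the paper; the data, rates, and names below are assumptions for illustration only.

```python
import numpy as np

def lvq1(data, labels, protos, proto_labels, epochs=20, lr=0.05, seed=0):
    """Plain LVQ1: attract the winning prototype if its label matches the
    sample, repel it otherwise. GLVQ replaces this heuristic with a
    differentiable cost, and SNG adds NG-style neighborhood cooperation."""
    rng = np.random.default_rng(seed)
    w = protos.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            x, y = data[i], labels[i]
            win = np.argmin(np.linalg.norm(w - x, axis=1))
            sign = 1.0 if proto_labels[win] == y else -1.0
            w[win] += sign * lr * (x - w[win])         # attract or repel the winner
    return w

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),          # class 0 cluster
               rng.normal(3.0, 0.3, (50, 2))])         # class 1 cluster
y = np.repeat([0, 1], 50)
w = lvq1(X, y, np.array([[0.5, 0.5], [2.5, 2.5]]), np.array([0, 1]))
pred = np.array([np.argmin(np.linalg.norm(w - x, axis=1)) for x in X])
accuracy = np.mean(pred == y)
```

With multimodal or overlapping classes, the hard winner-take-all repulsion above is exactly where instabilities can arise, which motivates the neighborhood cooperation the abstract describes.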