Results 1 - 3 of 3
GTM: The generative topographic mapping
Neural Computation, 1998
Abstract

Cited by 275 (5 self)
Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of nonlinear latent variable model called the Generative Topographic Mapping (GTM), for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multiphase oil pipeline. Copyright © MIT Press (1998).
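The abstract above positions GTM against Kohonen's Self-Organizing Map. As illustrative background only (this is a minimal sketch of a basic Kohonen SOM update loop, not the GTM algorithm; all function names, grid sizes, and schedule parameters are choices made for this sketch), the SOM training that GTM is compared against might look like:

```python
import numpy as np

def train_som(data, grid_w=4, grid_h=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 2-D Kohonen SOM sketch: illustrative only, not the GTM algorithm."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # One weight (reference) vector per grid node, initialized randomly.
    weights = rng.normal(size=(grid_h, grid_w, dim))
    # Grid coordinates, used only for the neighborhood function.
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([gy, gx], axis=-1).astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # decaying neighborhood width
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: the node whose weight is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood around the BMU, measured on the grid.
            g = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            # Pull the BMU and its grid neighbors toward x.
            weights += lr * g[..., None] * (x - weights)
    return weights
```

Trained on two well-separated clusters, the weight vectors settle near the data, illustrating the topographic ordering that GTM replaces with an explicit latent variable model fit by EM.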
Hyperparameter selection for self-organizing maps
Neural Computation, 1997
Abstract

Cited by 20 (2 self)
The self-organizing map (SOM) algorithm for finite data is derived as an approximate MAP estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior, which is equivalent to a generalized deformable model (GDM). For this model, objective criteria for selecting hyperparameters are obtained on the basis of empirical Bayesian estimation and cross-validation, which are representative model selection methods. The properties of these criteria are compared by simulation experiments. These experiments show that the cross-validation methods favor more complex structures than is supported by the expected log likelihood, which is a measure of compatibility between a model and the data distribution. On the other hand, the empirical Bayesian methods have the opposite bias.
Self-Organizing Map for Fast Cluster Detection in High-Dimensional Spaces
, 1997
Abstract
We study the problem of fast detection of dense, sufficiently separated clusters in (possibly) high-dimensional spaces using self-organizing maps. To this end, we formulate the self-organizing map as a soft vector quantization procedure. Topological structure among representative vectors arises due to transition channel noise. We prove the convergence of this scheme and show that the introduction of a topology into the set of representative vectors results in a shift from a Euclidean to a non-Euclidean dot product in the attractive/repulsive terms of the training equations. The non-Euclidean dot product is defined by the transition channel noise matrix and can be analyzed in terms of its eigenvector decomposition. In this context, we study the so-called star topology of representative vectors as an appropriate topology for cluster detection applications and give an illustrative example.