Results

**1 - 7** of **7**

### Continuous latent variable models for dimensionality reduction and sequential data reconstruction

2001

### The continuous latent variable modelling formalism

Abstract

This chapter gives the theoretical basis for continuous latent variable models. Section 2.1 defines intuitively the concept of latent variable models and gives a brief historical introduction to them. Section 2.2 uses a simple example, inspired by the mechanics of a mobile point, to justify and explain latent variables. Section 2.3 gives a more rigorous definition, which we will use throughout this thesis. Section 2.6 describes the most important specific continuous latent variable models and section 2.7 defines mixtures of continuous latent variable models. The chapter discusses other important topics, including parameter estimation, identifiability, interpretability and marginalisation in high dimensions. Section 2.9 on dimensionality reduction will be the basis for part II of the thesis. Section 2.10 very briefly mentions some applications of continuous latent variable models for dimensionality reduction. Section 2.11 shows a worked example of a simple continuous latent variable model. Section 2.12 gives some complementary mathematical results, in particular the derivation of a diagonal-noise GTM model and of its EM algorithm.

2.1 Introduction and historical overview of latent variable models. Latent variable models are probabilistic models that try to explain a (relatively) high-dimensional process in …
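The idea in the abstract's last sentence — explaining a high-dimensional process through a few latent variables — can be illustrated with a minimal linear-Gaussian latent variable model in the factor-analysis form. This is a generic sketch, not any specific model from the thesis; the dimensions, loadings and noise level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear-Gaussian latent variable model (factor-analysis form):
#   z ~ N(0, I_L),  x | z ~ N(W z + mu, sigma^2 I_D),  with L << D.
L, D, N = 2, 10, 500
W = rng.normal(size=(D, L))        # factor loadings (arbitrary for illustration)
mu = rng.normal(size=D)            # data mean
sigma = 0.1                        # isotropic observation noise

Z = rng.normal(size=(N, L))                            # low-dimensional latents
X = Z @ W.T + mu + sigma * rng.normal(size=(N, D))     # high-dimensional observations

# Marginally x ~ N(mu, W W^T + sigma^2 I): a low-rank-plus-noise covariance
# explains the observed high-dimensional spread.
model_cov = W @ W.T + sigma**2 * np.eye(D)
sample_cov = np.cov(X, rowvar=False)
rel_err = np.linalg.norm(sample_cov - model_cov) / np.linalg.norm(model_cov)
print(X.shape, rel_err)
```

With enough samples the empirical covariance of the simulated data approaches the model's low-rank-plus-noise covariance, which is the sense in which the two latent dimensions "explain" the ten observed ones.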


### Chapter 4 Dimensionality reduction

"... This chapter introduces and defines the problem of dimensionality reduction, discusses the topics of the curse of the dimensionality and the intrinsic dimensionality and then surveys non-probabilistic methods for dimensionality reduction, that is, methods that do not define a probabilistic model for ..."

Abstract
- Add to MetaCart

This chapter introduces and defines the problem of dimensionality reduction, discusses the topics of the curse of dimensionality and the intrinsic dimensionality, and then surveys non-probabilistic methods for dimensionality reduction, that is, methods that do not define a probabilistic model for the data. These include linear methods (PCA, projection pursuit), nonlinear autoassociators, kernel methods, local dimensionality reduction, principal curves, vector quantisation methods (elastic net, self-organising map) and multidimensional scaling methods. One of these methods (the elastic net) does define a probabilistic model but not a continuous dimensionality reduction mapping. If one is interested in stochastically modelling the dimensionality reduction mapping, then the natural choice is latent variable models, discussed in chapter 2. We close the chapter with a summary and with some thoughts on dimensionality reduction with discrete variables.

Consider an application in which a system processes data in the form of a collection of real-valued vectors: speech signals, images, etc. Suppose that the system is only effective if the dimension of each individual vector—the number of components of the vector—is not too high, where high depends on the particular application. The problem of dimensionality reduction appears when the data are in fact of a higher dimension …
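Of the linear methods the abstract lists, PCA is the easiest to sketch concretely. The following is a generic illustration, not any particular construction from the chapter: points in R^5 that lie near a 2-D subspace are centred, projected onto the top two principal directions, and reconstructed (the sizes and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in R^5 lying near a 2-D linear subspace.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 5))

# PCA: centre, eigendecompose the sample covariance, keep the top-2 directions.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order[:2]]                   # 5 x 2 orthonormal projection matrix

Z = Xc @ W                                  # reduced representation, 200 x 2
X_rec = Z @ W.T + X.mean(axis=0)            # reconstruction back in R^5

err = np.mean((X - X_rec) ** 2)
print(Z.shape, err)
```

Because the data were generated near a 2-D subspace, the two retained components capture almost all the variance and the mean squared reconstruction error is on the order of the injected noise.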

### A Random Field Model and its Application in Industrial Production.

"... Let X be an abstract set. We consider a prior random field Yx = U + VWx, where U is a real random variable following a uniform distribution on an interval [−m,m], where V is a real and positive random variable following a uniform distribution on an interval [, 1/] and where (Wx)x∈X is a centered nor ..."

Abstract
- Add to MetaCart

Let X be an abstract set. We consider a prior random field Yx = U + VWx, where U is a real random variable following a uniform distribution on an interval [ε, 1/ε] for V a real and positive random variable; more precisely, U follows a uniform distribution on an interval [−m, m], V follows a uniform distribution on an interval [ε, 1/ε], and (Wx)x∈X is a centered, normalized Gaussian field. Moreover, we suppose that U, V and (Wx)x∈X are independent. The parameter characterizing the Gaussian field (Wx)x∈X is the correlation function k (recall that the mean is zero and the variance is 1). We suppose here that n ≥ 3. Let x1, x2, ..., xn ∈ X and y := (y1, y2, ..., yn) ∈ Rn. Denote by Σ := (k(xi, xj))1≤i,j≤n the matrix of correlations and by k(x) := (k(x, xj))1≤j≤n the correlation vector. We suppose that we are in a generic position, so that the matrix Σ is invertible. Theorem 3. The conditional distribution of the random field (Yx)x∈X knowing that (Yxi = yi)1≤i≤n is given by explicit formulae for the densities of its finite-dimensional marginals. When the parameter m goes to infinity and ε goes to zero, for n > 2, this conditional distribution becomes a multivariate Student distribution. In particular, when m → ∞ and ε → 0, for n > 2, the univariate conditional distribution of the random variable Yx becomes a Student distribution with n − 2 degrees of freedom, with location parameter µ + k(x)Σ⁻¹(y − µ1)ᵀ, with µ := yΣ⁻¹1ᵀ …
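The location parameter of the limiting Student conditional can be sketched numerically. The sketch below makes two assumptions not taken from the abstract: the correlation function k is taken to be squared-exponential, and since the abstract is cut off after the numerator yΣ⁻¹1ᵀ, the mean is normalized in the generalized-least-squares way, µ = (yΣ⁻¹1ᵀ)/(1Σ⁻¹1ᵀ). The sites and observations are toy data:

```python
import numpy as np

def sq_exp(a, b, ell=0.5):
    """Squared-exponential correlation k(a, b) — one assumed choice of k."""
    return np.exp(-0.5 * (a - b) ** 2 / ell**2)

# Observation sites x1..xn and observed values y (a row vector, as in the abstract).
xs = np.array([0.0, 0.4, 1.0, 1.5])
y = np.sin(xs)                                   # toy observations

Sigma = sq_exp(xs[:, None], xs[None, :])         # correlation matrix (invertible here)
Sigma_inv = np.linalg.inv(Sigma)
one = np.ones_like(xs)

# Assumed normalization of the (truncated) mean: mu = (y Sigma^-1 1^T)/(1 Sigma^-1 1^T).
mu = (y @ Sigma_inv @ one) / (one @ Sigma_inv @ one)

def location(x):
    """Location parameter mu + k(x) Sigma^-1 (y - mu 1)^T at a new site x."""
    kx = sq_exp(x, xs)                           # correlation vector k(x)
    return mu + kx @ Sigma_inv @ (y - mu * one)

print(location(0.4), np.sin(0.4))
```

At an observed site xi the predictor interpolates exactly: k(xi)Σ⁻¹ reduces to the i-th unit vector, so the location parameter collapses to µ + (yi − µ) = yi.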