## Learning nonlinear image manifolds by global alignment of local linear models

### Download From

IEEE

### Download Links

- [www.science.uva.nl]
- [lear.inrialpes.fr]
- DBLP

### Other Repositories/Bibliography

Venue: IEEE Trans. Pattern Analysis and Machine Intelligence

Citations: 12 (0 self)

### BibTeX

@ARTICLE{Verbeek_learningnonlinear,
  author  = {Jakob Verbeek},
  title   = {Learning nonlinear image manifolds by global alignment of local linear models},
  journal = {IEEE Trans. Pattern Analysis and Machine Intelligence},
  year    = {2006},
  volume  = {28},
  number  = {8}
}


### Abstract

Appearance-based methods, based on statistical models of the pixel values in an image (region) rather than geometrical object models, are increasingly popular in computer vision. In many applications, the number of degrees of freedom (DOF) in the image generating process is much lower than the number of pixels in the image. If there is a smooth function that maps the DOF to the pixel values, then the images are confined to a low-dimensional manifold embedded in the image space. We propose a method based on probabilistic mixtures of factor analyzers to 1) model the density of images sampled from such manifolds and 2) recover global parameterizations of the manifold. A globally nonlinear probabilistic two-way mapping between coordinates on the manifold and images is obtained by combining several locally valid linear mappings. We propose a parameter estimation scheme that improves upon an existing scheme and experimentally compare the presented approach to self-organizing maps, generative topographic mapping, and mixtures of factor analyzers. In addition, we show that the approach also applies to finding mappings between different embeddings of the same manifold.

Index Terms: Feature extraction or construction, machine learning, statistical image representation.
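The two-way mapping described in the abstract rests on standard factor-analysis inference carried out within each mixture component. The following is a minimal NumPy sketch, not the paper's implementation: all parameter names and values are illustrative stand-ins, and it only shows how local latent coordinates and component responsibilities can be inferred in a generic mixture of factor analyzers (MFA).

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, C = 16, 2, 3  # pixel dim, latent dim, number of local linear models

# Illustrative MFA parameters (random stand-ins, not learned values):
# per component c, a mean mu[c], a D x d loading matrix Lam[c],
# shared diagonal noise variances psi, and mixing weights prior.
mu = rng.normal(size=(C, D))
Lam = rng.normal(size=(C, D, d))
psi = np.full(D, 0.1)
prior = np.full(C, 1.0 / C)

def infer_local_coords(x):
    """Gaussian posterior over local coordinates z for each component c:
    V_c = (I + Lam_c^T Psi^-1 Lam_c)^-1, m_c = V_c Lam_c^T Psi^-1 (x - mu_c),
    plus responsibilities p(c|x) from the FA marginal p(x|c)."""
    means, covs, log_joint = [], [], []
    for c in range(C):
        LtPi = Lam[c].T / psi                      # Lam_c^T Psi^{-1} (Psi diagonal)
        V = np.linalg.inv(np.eye(d) + LtPi @ Lam[c])
        m = V @ LtPi @ (x - mu[c])
        Sigma = Lam[c] @ Lam[c].T + np.diag(psi)   # marginal covariance of x given c
        diff = x - mu[c]
        _, logdet = np.linalg.slogdet(Sigma)
        ll = -0.5 * (D * np.log(2 * np.pi) + logdet
                     + diff @ np.linalg.solve(Sigma, diff))
        means.append(m)
        covs.append(V)
        log_joint.append(np.log(prior[c]) + ll)
    log_joint = np.array(log_joint)
    resp = np.exp(log_joint - log_joint.max())
    resp /= resp.sum()                             # p(c|x), sums to 1
    return means, covs, resp

x = rng.normal(size=D)              # a stand-in "image" vector
means, covs, resp = infer_local_coords(x)
print(resp.sum())                   # responsibilities form a distribution
```

The paper's contribution lies beyond this per-component inference: the coordinated model additionally aligns the local coordinate systems into one global parameterization of the manifold, which this sketch omits.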

### Citations

8089 | Maximum likelihood from incomplete data via the EM algorithm
- Dempster, Laird, et al.
- 1977
Citation Context: ... (7); V_c^{-1} = I + Λ_c^T Ψ_c^{-1} Λ_c (8); m_c = κ_c + V_c Λ_c^T Ψ_c^{-1} (x − μ_c) (9); which can be used to infer the coordinates z in the local subspace given a data point x and a mixture component c. The EM algorithm [34], [35] can be used to estimate the parameters of the mixture to find a (local) maximum of the data log-likelihood: L = Σ_{n=1}^{N} log p(x_n) (10). The inverse and determinant of the component covariance, required to evaluate p(x|...

3242 | The self-organizing map
- Kohonen
- 1990
Citation Context: ...ide range of techniques has been proposed for unsupervised learning of nonlinear manifolds, such as autoencoder neural networks [7], principal curves and surfaces [8], [9], [10], self-organizing maps [11], [12], and generative topographic mapping [13]. For more extensive overviews, see [14], [15], [16]. All these methods suffer from one or more of the following drawbacks: 1) parameter estimation can r...

2791 | Eigenfaces for recognition
- Turk, Pentland
- 1991
Citation Context: ...ce then gives an informative characterization of the actual image. Later, others proposed to use the projections on a low-dimensional linear subspace for tasks such as recognition and pose estimation [5], [6]. The idea that the images are confined to a low, say d, dimensional subspace of the image space (say of dimension D) is supported by the following reasoning. If there are only d ≪ D degrees of fre...

2014 | Principal Component Analysis
- Jolliffe
- 2002
Citation Context: ...et of images considered in a given application is confined to a low-dimensional subspace of the high-dimensional image space. This idea was introduced in [3], where principal component analysis (PCA) [4] was . The author is with GRAVIR-INRIA, 655 avenue de l'Europe, 38330 Montbonnot, France. E-mail: verbeek@inrialpes.fr. Manuscript received 24 Feb. 2005; revised 25 Nov. 2005; accepted 19 Dec. 2005; p...

1688 | A Global Geometric Framework for Nonlinear Dimensionality Reduction
- Tenenbaum, de Silva, et al.
Citation Context: ...tion with local minima that are not global minima and 2) there is no notion of the uncertainty in the estimated low-dimensional coordinates of the images. Recently, various techniques, such as Isomap [17], Locally Linear Embedding (LLE) [18], Kernel PCA [19], charting [20], Locality Preserving Projections [21], [22], Laplacian Eigenmaps [23], and a semidefinite programming approach [24], have been pro...

1614 | Nonlinear dimensionality reduction by locally linear embedding
- Roweis, Saul
- 2000
Citation Context: ...lobal minima and 2) there is no notion of the uncertainty in the estimated low-dimensional coordinates of the images. Recently, various techniques, such as Isomap [17], Locally Linear Embedding (LLE) [18], Kernel PCA [19], charting [20], Locality Preserving Projections [21], [22], Laplacian Eigenmaps [23], and a semidefinite programming approach [24], have been proposed that formalize manifold learnin...

1048 | Nonlinear component analysis as a kernel eigenvalue problem
- Schölkopf, Smola, et al.
- 1998
Citation Context: ...2) there is no notion of the uncertainty in the estimated low-dimensional coordinates of the images. Recently, various techniques, such as Isomap [17], Locally Linear Embedding (LLE) [18], Kernel PCA [19], charting [20], Locality Preserving Projections [21], [22], Laplacian Eigenmaps [23], and a semidefinite programming approach [24], have been proposed that formalize manifold learning as minimizing a...

958 | Visual Learning and Recognition of 3-D Objects from Appearance
- Murase, Nayar
- 1995
Citation Context: ...en gives an informative characterization of the actual image. Later, others proposed to use the projections on a low-dimensional linear subspace for tasks such as recognition and pose estimation [5], [6]. The idea that the images are confined to a low, say d, dimensional subspace of the image space (say of dimension D) is supported by the following reasoning. If there are only d ≪ D degrees of freedom ...

446 | Adaptive Control Processes: A Guided Tour
- Bellman
- 1961
Citation Context: ...g. Finding a global low-dimensional representation is also useful as a preprocessing stage for supervised learning problems since the low-dimensionality safeguards against the curse of dimensionality [30]. Thus, once a global low-dimensional representation is found based on many unsupervised examples, it is possible to learn functions on the manifold using relatively few supervised examples. 2. However...

432 | Low-dimensional procedure for the characterization of human faces - Sirovich, Kirby - 1987

424 | Laplacian eigenmaps and spectral techniques for embedding and clustering
- Belkin, Niyogi
Citation Context: ... of the images. Recently, various techniques, such as Isomap [17], Locally Linear Embedding (LLE) [18], Kernel PCA [19], charting [20], Locality Preserving Projections [21], [22], Laplacian Eigenmaps [23], and a semidefinite programming approach [24], have been proposed that formalize manifold learning as minimizing a convex error function that encodes how well certain interpoint relationships are pre...

397 | Mixtures of probabilistic principal component analysers
- Tipping, Bishop
- 1999
Citation Context: ... subspace. Thus, the data can be clustered in such a manner that the data within each cluster can be accurately reconstructed from a two-dimensional PCA projection. Several authors, e.g., [26], [27], [28], have reported that such a combination of clustering and PCA allows for significantly better reconstructions of images as compared to a reconstruction using one single linear PCA subspace. Others [29...

295 | Principal curves
- Hastie, Stuetzle
- 1989
Citation Context: ...he manifold are known in advance. A wide range of techniques has been proposed for unsupervised learning of nonlinear manifolds, such as autoencoder neural networks [7], principal curves and surfaces [8], [9], [10], self-organizing maps [11], [12], and generative topographic mapping [13]. For more extensive overviews, see [14], [15], [16]. All these methods suffer from one or more of the following dr...

280 | Statistical Pattern Recognition
- Webb
- 1999
Citation Context: ...veral numbers of neighbors or to use bootstrap-like procedures as in [31]. Using many mixture components, CFA shows overfitting on both data sets in terms of log-likelihood. This effect is well known [53] and is caused by the fact that using many components more parameters need to be estimated. However, since the amount of available data is limited, the parameters cannot be accurately estimated and, a...

268 | Unsupervised learning of finite mixture models
- Figueiredo, Jain
- 2001
Citation Context: ...echniques [43], [44], [45]. Given a latent dimensionality d, the number of mixture components C can be set by employing cross-validation or a penalized log-likelihood criterion, e.g., following [46], [47]. Alternatively, a variational Bayesian learning approach [48] can be taken to estimate both the latent dimensionality and the number of components. 3 MAPPING BETWEEN MANIFOLD EMBEDDINGS In the previo...

254 | Think globally, fit locally: unsupervised learning of low dimensional manifolds
- Saul, Roweis
Citation Context: ... basis of maximum likelihood) and, second, the local models are aligned on the basis of a second criterion. Such approaches thus do not allow modification of the local models to improve alignment. In [39], a mixture of linear models is proposed, but this assumes that the global low-dimensional coordinates are known. Thus, in the latter approach, the local models can be adapted, but the global low-dime...

225 | The EM algorithm for mixtures of factor analyzers
- Ghahramani, Hinton
- 1996
Citation Context: ... (7); V_c^{-1} = I + Λ_c^T Ψ_c^{-1} Λ_c (8); m_c = κ_c + V_c Λ_c^T Ψ_c^{-1} (x − μ_c) (9); which can be used to infer the coordinates z in the local subspace given a data point x and a mixture component c. The EM algorithm [34], [35] can be used to estimate the parameters of the mixture to find a (local) maximum of the data log-likelihood: L = Σ_{n=1}^{N} log p(x_n) (10). The inverse and determinant of the component covariance, required to evaluate p(x|c), ar...

186 | Face Recognition Using Laplacianfaces
- He, Yan, et al.
- 2005
Citation Context: ...ow-dimensional coordinates of the images. Recently, various techniques, such as Isomap [17], Locally Linear Embedding (LLE) [18], Kernel PCA [19], charting [20], Locality Preserving Projections [21], [22], Laplacian Eigenmaps [23], and a semidefinite programming approach [24], have been proposed that formalize manifold learning as minimizing a convex error function that encodes how well certain interp...

184 | Supervised learning from incomplete data via an EM approach
- Ghahramani, Jordan
- 1994
Citation Context: ...or points without a correspondence, the first Dx or the last Dy coordinates are not observed. The EM algorithm can be used to estimate the parameters of an MFA in the presence of missing values [35], [51]. The difference with the CFA approach is that the MFA neglects the global manifold structure of the data; it does not relate the low-dimensional coordinate systems. Note that, if a mixture component ...

168 | Unsupervised learning of image manifolds by semidefinite programming
- Weinberger, Saul
- 2004
Citation Context: ...such as Isomap [17], Locally Linear Embedding (LLE) [18], Kernel PCA [19], charting [20], Locality Preserving Projections [21], [22], Laplacian Eigenmaps [23], and a semidefinite programming approach [24], have been proposed that formalize manifold learning as minimizing a convex error function that encodes how well certain interpoint relationships are preserved. Due to the convexity of the error func...

161 | Charting a manifold
- Brand
- 2003
Citation Context: ...notion of the uncertainty in the estimated low-dimensional coordinates of the images. Recently, various techniques, such as Isomap [17], Locally Linear Embedding (LLE) [18], Kernel PCA [19], charting [20], Locality Preserving Projections [21], [22], Laplacian Eigenmaps [23], and a semidefinite programming approach [24], have been proposed that formalize manifold learning as minimizing a convex error f...

148 | Variational inference for Bayesian mixtures of factor analysers
- Ghahramani, Beal
Citation Context: ...he number of mixture components C can be set by employing cross-validation or a penalized log-likelihood criterion, e.g., following [46], [47]. Alternatively, a variational Bayesian learning approach [48] can be taken to estimate both the latent dimensionality and the number of components. 3 MAPPING BETWEEN MANIFOLD EMBEDDINGS In the previous section, we presented the CFA model for nonlinear dimension...

146 | Modeling the manifolds of images of handwritten digits
- Hinton, Dayan, et al.
- 1997
Citation Context: ...28], have reported that such a combination of clustering and PCA allows for significantly better reconstructions of images as compared to a reconstruction using one single linear PCA subspace. Others [29] successfully used such a "mixture of PCAs" (MPCA) to classify images of handwritten digits; an MPCA model was learned on images of each digit i = 0, ..., 9, yielding density models p_i, i = 0, ..., 9. Th...

97 | Maximum likelihood estimation of intrinsic dimension
- Levina, Bickel
Citation Context: ... mixture components C and the latent dimensionality d have to be set. If the number of degrees of freedom in the data is not known, 6 then d may be estimated using a variety of techniques [43], [44], [45]. Given a latent dimensionality d, the number of mixture components C can be set by employing cross-validation or a penalized log-likelihood criterion, e.g., following [46], [47]. Alternatively, a var...

88 | A survey of dimension reduction techniques
- Fodor
- 2002
Citation Context: ...uch as autoencoder neural networks [7], principal curves and surfaces [8], [9], [10], self-organizing maps [11], [12], and generative topographic mapping [13]. For more extensive overviews, see [14], [15], [16]. All these methods suffer from one or more of the following drawbacks: 1) parameter estimation can return a suboptimal solution since it is based on minimizing an error function with local minim...

77 | Out-of-Sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering
- Bengio, Paiement, et al.
- 2004
Citation Context: ...hat maps between the image space and the low-dimensional manifold. Nonparametric methods can be used to map between the spaces, but, in principle, they require storage and access to all training data [25], which is costly for large high-dimensional data sets. If the goal is to find the manifold structure in a given training set only, e.g., for visualization, then this is not a problem. However, if we ...

74 | Learning and Design of Principal Curves
- Kégl, Krzyzak, et al.
- 2000
Citation Context: ...nifold are known in advance. A wide range of techniques has been proposed for unsupervised learning of nonlinear manifolds, such as autoencoder neural networks [7], principal curves and surfaces [8], [9], [10], self-organizing maps [11], [12], and generative topographic mapping [13]. For more extensive overviews, see [14], [15], [16]. All these methods suffer from one or more of the following drawbac...

66 | Geodesic entropic graphs for dimension and entropy estimation in manifold learning
- Costa, Hero
Citation Context: ...ber of mixture components C and the latent dimensionality d have to be set. If the number of degrees of freedom in the data is not known, 6 then d may be estimated using a variety of techniques [43], [44], [45]. Given a latent dimensionality d, the number of mixture components C can be set by employing cross-validation or a penalized log-likelihood criterion, e.g., following [46], [47]. Alternatively,...

57 | Semisupervised alignment of manifolds
- Ham, Lee, et al.
- 2004
Citation Context: ... labeled "a"-"d." how it applies to the case where multiple embeddings of the same underlying low-dimensional manifold are observed and we want to learn a mapping between these embeddings [31], [37], [49]. So, rather than one set of high-dimensional data points, we are now given two sets of high-dimensional data points: {x ∈ ℝ^{Dx}} and {y ∈ ℝ^{Dy}}. The two sets are related through a set of correspondences...

53 | Intrinsic dimension estimation using packing numbers
- Kegl
- 2003
Citation Context: ...he number of mixture components C and the latent dimensionality d have to be set. If the number of degrees of freedom in the data is not known, 6 then d may be estimated using a variety of techniques [43], [44], [45]. Given a latent dimensionality d, the number of mixture components C can be set by employing cross-validation or a penalized log-likelihood criterion, e.g., following [46], [47]. Alternat...

52 | Automatic alignment of local representations
- Teh, Roweis
- 2003
Citation Context: ... measure of the alignment. Therefore, the estimates of the local models can be modified to accommodate a better alignment. In contrast, other approaches for the alignment of local models [36], [37], [38] proceed in two steps: First, a mixture of local linear models is estimated (typically, on the basis of maximum likelihood) and, second, the local models are aligned on the basis of a seco...

46 | The EM algorithm: an old folk-song sung to a fast new tune
- Meng, van Dyk
- 1997
Citation Context: ...fine a learning criterion that combines the data log-likelihood and a 3. If the data contains outliers, robust estimates of the local subspaces can be obtained using a t-distribution noise model. See [32] for a derivation of the EM algorithm for t-distributions and [33] for an application of mixtures of local linear models based on t-distributions to image manifold learning. ...

45 | Fast nonlinear dimension reduction
- Kambhatla, Leen
Citation Context: ...ional linear subspace. Thus, the data can be clustered in such a manner that the data within each cluster can be accurately reconstructed from a two-dimensional PCA projection. Several authors, e.g., [26], [27], [28], have reported that such a combination of clustering and PCA allows for significantly better reconstructions of images as compared to a reconstruction using one single linear PCA subspac...

42 | A unified model for probabilistic principal surfaces
- Chang, Ghosh
- 2001
Citation Context: ...d are known in advance. A wide range of techniques has been proposed for unsupervised learning of nonlinear manifolds, such as autoencoder neural networks [7], principal curves and surfaces [8], [9], [10], self-organizing maps [11], [12], and generative topographic mapping [13]. For more extensive overviews, see [14], [15], [16]. All these methods suffer from one or more of the following drawbacks: 1)...

41 | Unsupervised Learning Using MML
- Oliver, Baxter, et al.
- 1996
Citation Context: ...y of techniques [43], [44], [45]. Given a latent dimensionality d, the number of mixture components C can be set by employing cross-validation or a penalized log-likelihood criterion, e.g., following [46], [47]. Alternatively, a variational Bayesian learning approach [48] can be taken to estimate both the latent dimensionality and the number of components. 3 MAPPING BETWEEN MANIFOLD EMBEDDINGS In the ...

40 | Data compression, feature extraction, and autoassociation in feedforward neural networks
- Oja
- 1991
Citation Context: ...of the images, the coordinates on the manifold are known in advance. A wide range of techniques has been proposed for unsupervised learning of nonlinear manifolds, such as autoencoder neural networks [7], principal curves and surfaces [8], [9], [10], self-organizing maps [11], [12], and generative topographic mapping [13]. For more extensive overviews, see [14], [15], [16]. All these methods suffer f...

40 | Combining self-organizing maps
- Ritter
- 1989
Citation Context: ...e, which induces the conditional densities p(x|y) and p(y|x) which are used to predict correspondences. The method described in this section is related to the parameterized self-organizing map (PSOM) [50]. The PSOM also produces a mapping between two high-dimensional spaces through an underlying low-dimensional representation. The basic idea is to first find a low-dimensional representation 7 of the s...

31 | A review of dimension reduction techniques
- Carreira-Perpiñán
- 1997
Citation Context: ... autoencoder neural networks [7], principal curves and surfaces [8], [9], [10], self-organizing maps [11], [12], and generative topographic mapping [13]. For more extensive overviews, see [14], [15], [16]. All these methods suffer from one or more of the following drawbacks: 1) parameter estimation can return a suboptimal solution since it is based on minimizing an error function with local minima that...

24 | Non-linear CCA and PCA by alignment of local models
- Verbeek, Vlassis
- 2004
Citation Context: ... measure of the alignment. Therefore, the estimates of the local models can be modified to accommodate a better alignment. In contrast, other approaches for the alignment of local models [36], [37], [38] proceed in two steps: First, a mixture of local linear models is estimated (typically, on the basis of maximum likelihood) and, second, the local models are aligned on the basis of a second cri...

20 | Locality Preserving Projections
- He, Niyogi
- 2003
Citation Context: ...ated low-dimensional coordinates of the images. Recently, various techniques, such as Isomap [17], Locally Linear Embedding (LLE) [18], Kernel PCA [19], charting [20], Locality Preserving Projections [21], [22], Laplacian Eigenmaps [23], and a semidefinite programming approach [24], have been proposed that formalize manifold learning as minimizing a convex error function that encodes how well certain ...

17 | Mixture models for clustering and dimension reduction
- Verbeek
- 2004
Citation Context: ...lds, such as autoencoder neural networks [7], principal curves and surfaces [8], [9], [10], self-organizing maps [11], [12], and generative topographic mapping [13]. For more extensive overviews, see [14], [15], [16]. All these methods suffer from one or more of the following drawbacks: 1) parameter estimation can return a suboptimal solution since it is based on minimizing an error function with local...

16 | Learning high dimensional correspondences from low dimensional manifolds
- Ham, Lee, et al.
- 2003
Citation Context: ...r estimation technique, which improves upon earlier work by Roweis et al. [2]. Then, in Section 3, we show how this approach applies to correspondence learning, a recently introduced learning problem [31]. The learning data consists of two sets of high-dimensional points related by correspondences, i.e., some points in one set are known to have the same low-dimensional coordinate as a point in the oth...

15 | GTM: The Generative Topographic Mapping
- Bishop, Svensén, et al.
- 1998
Citation Context: ...nsupervised learning of nonlinear manifolds, such as autoencoder neural networks [7], principal curves and surfaces [8], [9], [10], self-organizing maps [11], [12], and generative topographic mapping [13]. For more extensive overviews, see [14], [15], [16]. All these methods suffer from one or more of the following drawbacks: 1) parameter estimation can return a suboptimal solution since it is based on...

15 | Coordinating principal component analyzers
- Verbeek, Vlassis, et al.
- 2002
Citation Context: ...ested using a fixed-point algorithm to find the stationary points of the objective with regard to the remaining parameters of the model. However, the parameters can be found in closed form as shown in [42] for a constrained version of the above model and in [14] for the unconstrained model. 5 In Section 4.1, we experimentally show that the closed-form algorithm is computationally more efficient than th...

13 | Kullback proximal algorithms for maximum likelihood estimation
- Chrétien, Hero
- 1999
Citation Context: ...ihoods can be combined to bound the complete objective function: 4. It is possible to weight the penalty term by a factor different from one, yielding an objective function similar to the one used in [40] for accelerated maximum likelihood estimation. It is straightforward to derive an optimization algorithm similar to the one presented here if the penalty term is multiplied by a factor in [0, 1]. ...

8 | How to measure the pose robustness of object views
- Peters, Zitová, von der Malsburg
- 2002
Citation Context: ...0 gray-scale images of 64 × 64 pixels of each of two toy puppets as viewed from different directions. In Fig. 9, some corresponding views of the two puppets are depicted. The images, originally used in [54], were provided by Peters et al., who recorded them at the Institute for Neural Computation of the Ruhr-Universität Bochum, Germany. Images of the objects were recorded while moving the camera over th...

6 | Kernel and subspace methods for computer vision
- Bischof, Leonardis
Citation Context: ...ge representation. 1 INTRODUCTION OVER the last two decades, appearance-based methods have become increasingly popular to solve computer vision problems such as object recognition and pose estimation [1]. These methods are based on statistical models of the pixel gray or color values rather than on geometrical models of the depicted objects. One can think of the images as vectors in a high-dimensiona...

4 | Robust subspace mixture models using t-distributions
- de Ridder, Franc
- 2003
Citation Context: ...d a 3. If the data contains outliers, robust estimates of the local subspaces can be obtained using a t-distribution noise model. See [32] for a derivation of the EM algorithm for t-distributions and [33] for an application of mixtures of local linear models based on t-distributions to image manifold learning. ...

3 | Self-organizing mixture models
- Verbeek, Vlassis, et al.
- 2005
Citation Context: ...nge of techniques has been proposed for unsupervised learning of nonlinear manifolds, such as autoencoder neural networks [7], principal curves and surfaces [8], [9], [10], self-organizing maps [11], [12], and generative topographic mapping [13]. For more extensive overviews, see [14], [15], [16]. All these methods suffer from one or more of the following drawbacks: 1) parameter estimation can return...

3 | Nonlinear Image Interpolation using Manifold Learning
- Bregler, Omohundro
- 1995
Citation Context: ...linear subspace. Thus, the data can be clustered in such a manner that the data within each cluster can be accurately reconstructed from a two-dimensional PCA projection. Several authors, e.g., [26], [27], [28], have reported that such a combination of clustering and PCA allows for significantly better reconstructions of images as compared to a reconstruction using one single linear PCA subspace. Othe...