Translated Poisson mixture model for stratification learning (2008)
Download Links
- [ima.umn.edu]
- [www.ima.umn.edu]
- [silver.ima.umn.edu]
Other Repositories/Bibliography
- DBLP
Venue: Int. J. Comput. Vision
Citations: 24 (2 self)
Citations
11956 | Maximum Likelihood from Incomplete Data via the EM Algorithm
- Dempster, Laird, et al.
- 1977
Citation Context: ...es, function of an expert (class), δ_t^j π^j, and the parameters of each expert, m_j and θ_j. Usually, problems involving a mixture of experts are solved by the Expectation Maximization (EM) algorithm [10] [21, Chap. 3]. The EM is based on the following decomposition of the log-likelihood (14):

L(Y|ψ, H) = Σ_{t=1}^{T} Σ_{j=1}^{J} h^j(y_t) log(p(y_t|ψ^j) π^j) − Σ_{t=1}^{T} Σ_{j=1}^{J} h^j(y_t) log(h^j(y_t)),   (16)

where...
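As a concrete, minimal illustration of the decomposition in (16), the sketch below alternates E- and M-steps for a one-dimensional Gaussian mixture. This is a stand-in model, not the paper's translated Poisson experts, and all function names are ours:

```python
import numpy as np
from scipy.stats import norm

def e_step(y, pi, mu, sigma):
    # h^j(y_t) proportional to pi^j * p(y_t | psi^j), normalized over j
    h = pi * norm.pdf(y[:, None], mu, sigma)
    return h / h.sum(axis=1, keepdims=True)

def m_step(y, h):
    # maximize the first sum in (16) with the memberships H held fixed
    pi = h.mean(axis=0)
    mu = (h * y[:, None]).sum(axis=0) / h.sum(axis=0)
    var = (h * (y[:, None] - mu) ** 2).sum(axis=0) / h.sum(axis=0)
    return pi, mu, np.sqrt(var)

def log_likelihood(y, h, pi, mu, sigma):
    # decomposition (16): expected complete log-likelihood plus entropy of H
    p = norm.pdf(y[:, None], mu, sigma)
    return (h * np.log(pi * p + 1e-300)).sum() - (h * np.log(h + 1e-300)).sum()

# toy run with two clusters
y = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(5, 1, 200)])
pi, mu, sigma = np.ones(2) / 2, np.array([0.0, 1.0]), np.ones(2)
for _ in range(50):
    h = e_step(y, pi, mu, sigma)
    pi, mu, sigma = m_step(y, h)
```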
2459 | A global geometric framework for nonlinear dimensionality reduction
- Tenenbaum, Silva, et al.
- 2000
Citation Context: ...analysis in particular. 1 Introduction. Recently, there has been significant interest in analyzing the intrinsic structure of high dimensional data; this is commonly known as manifold learning, e.g., [4, 6, 9, 20, 23, 27, 32]. Often, points that live in a high dimensional space can be parametrized by a number of parameters much smaller than the ambient dimension. A representation (embedding) of the data in a lower dimensi...
2413 | Nonlinear dimensionality reduction by locally linear embedding
- Roweis, Saul
- 2000
Citation Context: ...analysis in particular. 1 Introduction. Recently, there has been significant interest in analyzing the intrinsic structure of high dimensional data; this is commonly known as manifold learning, e.g., [4, 6, 9, 20, 23, 27, 32]. Often, points that live in a high dimensional space can be parametrized by a number of parameters much smaller than the ambient dimension. A representation (embedding) of the data in a lower dimensi...
665 | Laplacian eigenmaps and spectral techniques for embedding and clustering
- Belkin, Niyogi
- 2001
Citation Context: ...analysis in particular. 1 Introduction. Recently, there has been significant interest in analyzing the intrinsic structure of high dimensional data; this is commonly known as manifold learning, e.g., [4, 6, 9, 20, 23, 27, 32]. Often, points that live in a high dimensional space can be parametrized by a number of parameters much smaller than the ambient dimension. A representation (embedding) of the data in a lower dimensi...
540 | The information bottleneck method.
- Tishby, Pereira, et al.
- 1999
Citation Context: ...eloped), with a two step approach, where we first estimate the local dimensionality per point using the original Levina-Bickel approach, and then cluster following the information bottleneck approach [34]. This has been shown to be not only less elegant and less mathematically founded than the approach presented here, but also much less robust, even when compared to the non-regularized and noise-transparent form...
333 | Topological persistence and simplification.
- Edelsbrunner, Letscher, et al.
- 2002
Citation Context: ...though no explicit estimation of the clusters is performed and a single map into Euclidean space is computed for the whole data set. Recently, and following in part the theory of persistent topology [12], a framework for studying strata based on local homology has been introduced in [5]. These recent works have clearly shown the necessity to go beyond manifold learning, into "stratification learning...
286 | Diffusion maps
- COIFMAN, LAFON
- 2006
Citation Context: ...computational geometry perspective, a Voronoi-based technique to compute local dimensionality has been introduced in [11], and demonstrated for 3D point cloud data. The diffusion distance framework, [8, 22], can work with stratifications, though no explicit estimation of the clusters is performed and a single map into Euclidean space is computed for the whole data set. Recently, and following in part t...
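For orientation, here is a bare-bones sketch of a diffusion-map embedding in the spirit of [8, 22], assuming a Gaussian kernel with simple row normalization; the bandwidth eps, the number of coordinates, and the function name are our choices, and the anisotropic normalization variants of the actual framework are omitted:

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2, t=1):
    # Gaussian affinities from pairwise squared distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    # row-normalize into a Markov transition matrix
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # skip the trivial constant eigenvector; scale by eigenvalues^t
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1] ** t
```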
206 | Charting a manifold,
- Brand
- 2003
Citation Context: ...analysis in particular. 1 Introduction. Recently, there has been significant interest in analyzing the intrinsic structure of high dimensional data; this is commonly known as manifold learning, e.g., [4, 6, 9, 20, 23, 27, 32]. Often, points that live in a high dimensional space can be parametrized by a number of parameters much smaller than the ambient dimension. A representation (embedding) of the data in a lower dimensi...
206 | Generalized principal component analysis (GPCA)
- Vidal, Ma, et al.
- 2005
Citation Context: ...5 Experimental results. We now present experimental results with synthetic and real data for the proposed R-TPMM and its variants. We also compare some of the results with those obtained with GPCA [33] and the Souvenir and Pless [30] algorithms. We fixed α and σ experimentally. For α we usually use values in the interval [0, 3], except for the video experiment with temporal regularization, where we u...
142 | Maximum likelihood estimation of intrinsic dimension
- Levina, Bickel
- 2005
Citation Context: ...analysis in particular. 1 Introduction. Recently, there has been significant interest in analyzing the intrinsic structure of high dimensional data; this is commonly known as manifold learning, e.g., [4, 6, 9, 20, 23, 27, 32]. Often, points that live in a high dimensional space can be parametrized by a number of parameters much smaller than the ambient dimension. A representation (embedding) of the data in a lower dimensi...
106 | Segmentation of multivariate mixed data via lossy data coding and compression.
- Ma, Derksen, et al.
- 2007
Citation Context: ...ed Generalized PCA (GPCA), which also finds the number of linear subspaces and their intrinsic dimensions. An algorithm for clustering linear manifolds based on lossy coding was proposed by Ma et al. [25]. Goh and Vidal [14] extend [27] to cluster a union of J non-intersecting, k-connected nonlinear manifolds. This is done with the vectors spanning the null space of the LLE matrix [28], which are a line...
99 | Geodesic entropic graphs for dimension and entropy estimation in manifold learning
- Costa, Hero
- 2004
Citation Context: ...analysis in particular. 1 Introduction. Recently, there has been significant interest in analyzing the intrinsic structure of high dimensional data; this is commonly known as manifold learning, e.g., [4, 6, 9, 20, 23, 27, 32]. Often, points that live in a high dimensional space can be parametrized by a number of parameters much smaller than the ambient dimension. A representation (embedding) of the data in a lower dimensi...
93 | Random Point Processes
- Snyder
- 1975
Citation Context: ...ess N(R, x_t) is then given by

L(m(x_t), θ(x_t)) = ∫_0^R log λ(r, x_t) dN(r, x_t) − ∫_0^R λ(r, x_t) dr,

where θ(x_t) := log ρ(x_t) is the density parameter and the first integral is a Riemann-Stieltjes integral [28]. The maximum likelihood estimators lead to a computation of the local dimension at point x_t, m(x_t), depending on all the neighbors within a distance R from x_t [23]. In practice, it is more convenien...
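The truncated sentence refers to fixing the number of neighbors k instead of the radius R; under that convention the maximum likelihood estimator of [23] takes a well-known closed form. A minimal sketch of that fixed-k variant (the helper name is ours, and some formulations use k−2 rather than k−1 for unbiasedness):

```python
import numpy as np
from scipy.spatial import cKDTree

def levina_bickel_dimension(X, k):
    # distances to the k nearest neighbors of each point (self excluded)
    dists, _ = cKDTree(X).query(X, k=k + 1)
    T = dists[:, 1:]
    # m(x_t) = [ (1/(k-1)) * sum_{j=1}^{k-1} log(T_k / T_j) ]^{-1}
    return (k - 1) / np.log(T[:, -1:] / T[:, :-1]).sum(axis=1)
```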
77 | Another interpretation of the EM algorithm for mixture distributions. Stat. Prob. Letters 4, 53–56
- Hathaway
- 1986
Citation Context: ...m in (16) is the expectation of (15) with respect to Z. Also notice that the second term is the entropy of the membership functions. An interesting interpretation of the EM algorithm is introduced in [17], where the EM is seen as an alternate optimization algorithm of the log-likelihood (16). Then, the E-step is nothing else than the maximization of L(Y|ψ, H) with respect to H with the additional con...
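Worked out explicitly (this is standard EM algebra, not quoted from the paper), the constrained maximization of L(Y|ψ, H) over H under the constraint Σ_j h^j(y_t) = 1 yields the usual posterior memberships:

```latex
h^{j}(y_t) \;=\; \frac{\pi^{j}\, p(y_t \mid \psi^{j})}
                      {\sum_{l=1}^{J} \pi^{l}\, p(y_t \mid \psi^{l})} .
```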
70 | Intrinsic dimension estimation using packing numbers.
- Kegl
- 2002
Citation Context: ...analysis in particular. 1 Introduction. Recently, there has been significant interest in analyzing the intrinsic structure of high dimensional data; this is commonly known as manifold learning, e.g., [4, 6, 9, 20, 23, 27, 32]. Often, points that live in a high dimensional space can be parametrized by a number of parameters much smaller than the ambient dimension. A representation (embedding) of the data in a lower dimensi...
57 | Data fusion and multicue data matching by diffusion maps
- Lafon, Keller, et al.
- 2006
Citation Context: ...computational geometry perspective, a Voronoi-based technique to compute local dimensionality has been introduced in [11], and demonstrated for 3D point cloud data. The diffusion distance framework, [8, 22], can work with stratifications, though no explicit estimation of the clusters is performed and a single map into Euclidean space is computed for the whole data set. Recently, and following in part t...
52 | Using the fractal dimension to cluster datasets
- Barbará, Chen
- 2000
Citation Context: ...(noisy) point cloud data. This is the subject of this work. This problem, clustering-by-dimensionality and stratification learning, has recently been explored in a handful of works. Barbará and Chen, [3], proposed a hard clustering technique based on the fractal dimension (box-counting). Starting from an initial clustering, they incrementally add points into the cluster for which the change in the fr...
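For reference, a minimal box-counting estimate of the fractal dimension that drives their incremental criterion; the grid placement, the scale range, and the function name are our assumptions, not details of [3]:

```python
import numpy as np

def box_counting_dimension(X, scales):
    # N(s): number of occupied grid cells of side s
    counts = [len(np.unique(np.floor(X / s), axis=0)) for s in scales]
    # dimension = slope of log N(s) against log(1/s)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope
```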
48 | Segmenting motions of different types by unsupervised manifold clustering, in:
- Goh, Vidal
- 2007
Citation Context: ...es with an algebraic geometric method based on polynomial differentiation, called Generalized PCA (GPCA), which also finds the number of linear subspaces and their intrinsic dimensions. Goh and Vidal [14] extend [26] to cluster a union of J non-intersecting, k-connected nonlinear manifolds. This is done with the vectors spanning the null space of the LLE matrix [27], which are a linear combination of t...
44 | Biometric authentication: A machine learning approach. Upper Saddle River - Kung, Mak, et al. - 2005
40 | Grouping and dimensionality reduction by locally linear embedding
- Polito, Perona
- 2001
Citation Context: ...lgebraic geometric method based on polynomial differentiation, called Generalized PCA (GPCA), which also finds the number of linear subspaces and their intrinsic dimensions. Goh and Vidal [14] extend [26] to cluster a union of J non-intersecting, k-connected nonlinear manifolds. This is done with the vectors spanning the null space of the LLE matrix [27], which are a linear combination of the membershi...
34 | Multi-stage optimization for multi-body motion segmentation.
- Kanatani, Sugaya
- 2003
Citation Context: ...e. Finally, we tested the R-TPMM algorithm in a motion segmentation application. We use a sequence from the Kanatani Lab; see example frames in Figure 9. This sequence was originally used in [19] and then in [34]. The data consists of the 2D projection coordinates of the trajectories of some interest points along the sequence. The sequence that we analyze corresponds to a car moving in a park...
32 | Manifold clustering.
- Souvenir, Pless
- 2005
Citation Context: ...al correlation dimension and density for each point; then, standard clustering techniques are used to cluster the two-dimensional representation (dimension + density) of the data. Souvenir and Pless, [30], use an Expectation Maximization (EM) type of technique, combined with weighted geodesic multidimensional scaling (weighted ISOMAP [32]). The weights measure how well each point fits the underlying m...
29 | Robust statistical estimation and segmentation of multiple subspaces. - Yang, Rao, et al. - 2006
23 | Minimum effective dimension for mixtures of subspaces: A robust GPCA algorithm and its applications.
- Huang, Ma, et al.
- 2004
Citation Context: ...measure how well each point fits the underlying manifold defined by the current set of points in the cluster. After clustering, each cluster's dimensionality is estimated following [23]. Vidal et al., [18, 34], cluster linear subspaces with an algebraic geometric method based on polynomial differentiation, called Generalized PCA (GPCA), which also finds the number of linear subspaces and their intrinsic di...
20 | Clustering of spatial data by the EM algorithm
- Ambroise, Dang, et al.
- 1997
Citation Context: ...ularization further helps to improve the classification for noisy data and points lying close to manifold edges (see results in Figures 1 and 2). This regularization is inspired in part by the work in [1] on the neighborhood EM (NEM), where the authors extend the EM algorithm by adding spatial constraints. This neighborhood spatial information is introduced as a penalization term in the log-likelihood, ...
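For orientation, the NEM criterion of [1] has the following general shape; this is a reconstruction using the S(H) notation that appears in later contexts here, with weights w_{ts} encoding neighborhood adjacency and α trading off the spatial penalty (details may differ from [1]'s exact formulation):

```latex
U(H, \psi) \;=\; L(Y \mid \psi, H) \;+\; \alpha\, S(H),
\qquad
S(H) \;=\; \frac{1}{2} \sum_{t=1}^{T} \sum_{s=1}^{T} w_{ts} \sum_{j=1}^{J} h^{j}(y_t)\, h^{j}(y_s).
```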
20 | Stratification learning: Detecting mixed density and dimensionality in high dimensional point clouds. Neural Information Processing Systems,
- Haro, Randall, et al.
- 2006
Citation Context: ...This technique automatically gives a soft clustering according to dimensionality and density, with an estimation of both quantities for each class. A preliminary version of this work was presented in [15], and a regularized version together with asymptotic results in [16]. These techniques are particular cases of the more general Translated Poisson model introduced in this paper in order to handle nois...
17 | Convergence of an EM-Type Algorithm for Spatial Clustering
- Ambroise, Govaert
- 1998
Citation Context: ...ss, J = 1, we obtain the global dimension estimator proposed by MacKay and Ghahramani (http://www.inference.phy.cam.ac.uk/mackay/dimension/), a particular case of our proposed framework. As proved in [2], if α is small enough, (18) has a guaranteed global maximum for a fixed value of ψ, and the additional term S(H) does not affect the convergence of the EM-type algorithm. It can be shown (see Appendi...
17 | Inferring local homology from sampled stratified spaces
- Bendich, Cohen-Steiner, et al.
- 2007
Citation Context: ...ean space are performed for the whole data set. Recently, and following in part the theory of persistent topology [12], a framework for studying strata based on local homology has been introduced in [5]. These recent works have clearly shown the necessity to go beyond manifold learning, into "stratification learning." In our work, we do not assume linear subspaces, and we simultaneously estimate the...
16 | Combined central and subspace clustering on computer vision applications
- Lu, Vidal
- 2006
Citation Context: ...density criterion for classification was penalized more heavily by the extra term S(H) for larger values of α, thus yielding more of a k-means type of clustering (note that within the context of GPCA, [24] also proposed a combination of k-means and dimensionality clustering). 5.2 Real data. As a test of the performance on real data, we first work with the MNIST database of handwritten digits, which ...
15 | Dimension induced clustering
- Gionis, Hinneburg, et al.
- 2005
Citation Context: ...uster for which the change in the fractal dimension after adding the point is the lowest. They also find the number of clusters and the intrinsic dimension of the underlying manifolds. Gionis et al., [13], propose a two-step algorithm: First, they estimate the local correlation dimension and density for each point; then, standard clustering techniques are used to cluster the two-dimensional representa...
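Gionis et al. work with a local correlation dimension per point; the global Grassberger-Procaccia-style version below conveys the idea, and the local variant would restrict the pair counts to each point's neighborhood (the function name and the choice of radii are ours):

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(X, radii):
    # C(r): fraction of point pairs closer than r; the dimension is the
    # slope of log C(r) versus log r in the scaling regime
    d = pdist(X)
    C = np.array([(d < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope
```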
13 | Unsupervised dimensionality estimation and manifold learning in high-dimensional spaces by tensor voting
- Mordohai, Medioni
- 2005
Citation Context: ...ors spanning the null space of the LLE matrix [27], which are a linear combination of the membership vectors and the embedding vectors of the J connected components. The work of Mordohai and Medioni, [25], estimates the local dimension using tensor voting. Cao and Haralick, [7], propose a hard clustering by dimensionality: First, local dimensionality is computed via local PCA; and then, neighboring p...
12 | Nonlinear manifold clustering by dimensionality
- Cao, Haralick
- 2006
Citation Context: ...nation of the membership vectors and the embedding vectors of the J connected components. The work of Mordohai and Medioni, [25], estimates the local dimension using tensor voting. Cao and Haralick, [7], propose a hard clustering by dimensionality: First, local dimensionality is computed via local PCA; and then, neighboring points are clustered together if they have the same dimension and if the err...
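A rough sketch of that first step, local PCA per neighborhood; the explained-variance threshold here is our stand-in for the fitting-error test of [7], and the names are ours:

```python
import numpy as np
from scipy.spatial import cKDTree

def local_pca_dimension(X, k, var_threshold=0.95):
    _, idx = cKDTree(X).query(X, k=k + 1)
    dims = np.empty(len(X), dtype=int)
    for t, nbrs in enumerate(idx):
        # center the k-neighborhood of point t
        P = X[nbrs] - X[nbrs].mean(axis=0)
        # squared singular values = PCA variances of the neighborhood
        s = np.linalg.svd(P, compute_uv=False) ** 2
        # smallest number of components explaining var_threshold of variance
        dims[t] = np.searchsorted(np.cumsum(s) / s.sum(), var_threshold) + 1
    return dims
```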
5 | Shape dimension and approximation from samples. Discrete and Computational Geometry 29(3):419–434
- Dey, Giesen, et al.
- 2003
Citation Context: ...e classification is improved in the intersection of the linear subspaces. From the computational geometry perspective, a Voronoi-based technique to compute local dimensionality has been introduced in [11], and demonstrated for 3D point cloud data. The diffusion distance framework, [8, 22], can work with stratifications, though no explicit estimation of the clusters is performed and a single map into Eu...
5 | On the numerical determination of the dimension of an attractor. Lecture Notes in Mathematics 1125: Dynamical Systems and Bifurcations
- Takens
- 1985
Citation Context: ...na and Bickel, [23], proposed a geometric and probabilistic method which estimates the local dimension and density of point cloud data. This dimension estimator is equivalent to the one proposed in [31] in the context of dynamical systems. Their approach is based on the idea that if we sample an m-dimensional manifold with T points, the proportion of points that fall into a ball around a point x_t is...
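The sentence trails off, but the idea it refers to, as formulated by Levina and Bickel [23], is that the proportion of samples inside a small ball behaves like the local density times the ball volume:

```latex
\frac{k}{T} \;\approx\; \rho(x_t)\, V(m)\, R^{m},
\qquad
V(m) = \frac{\pi^{m/2}}{\Gamma\!\left(\frac{m}{2} + 1\right)},
```

where k is the number of samples within distance R of x_t; treating the neighbor counts as an inhomogeneous Poisson process in r leads to the likelihood quoted in the Snyder entry above.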
2 | Regularized mixed dimensionality and density learning in computer vision
- Haro, Randall, et al.
- 2007
Citation Context: ...imensionality and density, with an estimation of both quantities for each class. A preliminary version of this work was presented in [15], and a regularized version together with asymptotic results in [16]. These techniques are particular cases of the more general Translated Poisson model introduced in this paper in order to handle noise. The remainder of this paper is organized as follows: In Section ...