
## Extensions of Laplacian Eigenmaps for Manifold Learning (2011)

Citations: 2 (0 self)

### Citations

13218 | Statistical Learning Theory - Vapnik - 2003 |

10603 | Introduction to Algorithms - Cormen, Leiserson, et al. - 2009 |
Citation Context: ..., then the time complexity T of this algorithm satisfies the following recurrence relation: T(n) = 2T((1 + α)n/2) + f(n). It is straightforward to show that f(n) = O(dn). Using the Master Theorem [28] we then have the solution: T(n) = Θ(dn^t), t = 1/(1 − log₂(1 + α)). For example, in the experiments below we use α = 0.1, in which case t = 1.16. The following is pseudocode for the main functions imp... |
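The recurrence quoted above, T(n) = 2T((1 + α)n/2) + f(n) with f(n) = O(dn), has the Master Theorem solution T(n) = Θ(dn^t) with t = 1/(1 − log₂(1 + α)). A quick numeric check of the exponent (the function name is mine):

```python
import math

def complexity_exponent(alpha: float) -> float:
    """Exponent t in T(n) = Theta(d * n**t), obtained by applying the
    Master Theorem to T(n) = 2*T((1 + alpha)*n/2) + O(d*n):
    t = 1 / (1 - log2(1 + alpha))."""
    return 1.0 / (1.0 - math.log2(1.0 + alpha))

print(round(complexity_exponent(0.1), 2))  # alpha = 0.1, as in the text -> 1.16
```

This reproduces the value t = 1.16 quoted for α = 0.1.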

3935 | Dynamic Programming - Bellman - 1957 |
Citation Context: ...y, in an age when data analysis is increasingly automated, high-dimensional data is intractable whenever the restrictions of real-world computation must be accounted for. The “curse of dimensionality” [15], as it has been called, refers to common situations where the complexity of a problem increases exponentially with the number of dimensions. One example, common to many problems of machine learning a... |

3609 | Compressed sensing - Donoho - 2006 |

2621 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information - Candès, Romberg, et al. - 2006 |

2461 | A global geometric framework for nonlinear dimensionality reduction - Tenenbaum, Silva, et al. - 2000 |
Citation Context: ...ese methods clearly outperform linear methods when applied to common artificial examples such as the Swiss roll in Figure 2.4, where the performance of PCA is compared to that of ISOMAP, described in [59]. However, applications to real-world data are often not as convincing [53]. [Figure 2.4: Reducing the Swiss roll; panels: (a) original set, (b) PCA, (c) ISOMAP] Methods of dimensionality reduction seek to recover... |

1505 | Near optimal signal recovery from random projections: Universal encoding strategies?,” - Candès, Tao - 2006 |

1389 | Stable signal recovery from incomplete and inaccurate measurements - Candès, Romberg, et al. |

1226 | Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation - Belkin, Niyogi - 2003 |

984 | An optimal algorithm for approximate nearest neighbor searching - Arya, Mount, et al. - 1994 |
Citation Context: ...eferences therein. However, most of these algorithms perform poorly in high dimensions, require a significant amount of pre-processing, or fail to provide a guarantee of asymptotic time complexity [1,55]. The algorithm we have chosen to implement and use with LE requires no pre-processing, is very effective in high dimensions, and comes complete with a detailed analysis of time complexity. 5.3 The Al... |

625 | A simple proof of the Restricted Isometry Property for random matrices - Baraniuk, Davenport, et al. |
Citation Context: ...strauss) Given 0 < ε < 1, a set X of n points in R^N, and a number M ≥ O(ln n)/ε², there is a Lipschitz function f : R^N → R^M such that, for all u, v ∈ X, (1 − ε)‖u − v‖ ≤ ‖f(u) − f(v)‖ ≤ (1 + ε)‖u − v‖. In [7] a fundamental connection was identified between CS theory and the JL Lemma, despite the fact that the former allows for the embedding of an uncountable number of points. We note that computing random... |

607 | Multidimensional Scaling. - Cox, Cox - 1994 |

578 | Manifold regularization: A geometric framework for learning from examples. - Belkin, Niyogi, et al. - 2006 |

567 | For most large underdetermined systems of linear equations the minimal ℓ1 solution is also the sparsest solution - Donoho |

537 | Sparse MRI: The application of compressed sensing for rapid MR imaging - Lustig, Donoho, et al. - 2007 |
Citation Context: ...ly. Dimensionality reduction in CS is linear and nonadaptive, i.e., the mapping does not depend on the data. CS has many promising applications in signal acquisition, compression, and medical imaging [36,51,58]. CS theory states that, with high probability, every K-sparse signal x ∈ R^N can be recovered from just M = O(K log(N/K)) linear measurements y = Φx, where Φ is an M × N measurement matrix drawn random... |
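The recovery claim in this context can be illustrated end to end. The sketch below draws a random Gaussian Φ and recovers a K-sparse x from y = Φx using orthogonal matching pursuit, a standard CS recovery heuristic used here in place of the ℓ1 minimization the surrounding text discusses; the sizes N, K, M and the random seed are illustrative, with M chosen on the order of K log(N/K):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M = 256, 5, 80   # M is illustrative, on the order of K*log(N/K)

x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)              # a K-sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian measurement matrix
y = Phi @ x                                      # M linear measurements

# Orthogonal Matching Pursuit: greedily pick the column most correlated
# with the residual, then re-fit the coefficients on the selected support.
residual, idx = y.copy(), []
for _ in range(K):
    idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    residual = y - Phi[:, idx] @ coef

x_hat = np.zeros(N)
x_hat[idx] = coef
print(np.allclose(x_hat, x, atol=1e-6))
```

With M well above the information-theoretic threshold, recovery of the exact support (and hence of x) succeeds with high probability.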

296 | Single-pixel imaging via compressive sampling - Duarte, Davenport, et al. - 2008 |
Citation Context: ...ly. Dimensionality reduction in CS is linear and nonadaptive, i.e., the mapping does not depend on the data. CS has many promising applications in signal acquisition, compression, and medical imaging [36,51,58]. CS theory states that, with high probability, every K-sparse signal x ∈ R^N can be recovered from just M = O(K log(N/K)) linear measurements y = Φx, where Φ is an M × N measurement matrix drawn random... |

281 | Can one hear the shape of a drum? - Kac - 1966 |
Citation Context: ...ohn Milnor, who constructed two isospectral non-isometric manifolds. Nonetheless, Mark Kac continued to wonder if the answer might be positive for planar domains, and popularized the question in 1966 [47] when he asked: “Can one hear the shape of a drum?” But again the question was answered negatively in 1992 [41] by Gordon, Webb, and Wolpert, who constructed the counterexample shown in Figure 2.6: t... |

257 | Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps - Coifman, Lafon, et al. |

231 | A tutorial on spectral clustering - Luxburg - 2007 |
Citation Context: ...n probability, and the second one is deterministic. However, we observe that Vn does not depend on the parameter t. Thus it suffices to show the first convergence. To do so, we adapt the arguments in [52] to establish convergence of the eigenvalues and eigenvectors of the empirical Schrödinger operator, under certain conditions. 3.5.1 Overview of Method We are given n data points x1, x2, . . . , xn s... |

171 | Spectral Approximations of Linear Operators - Chatelin - 1983 |
Citation Context: ...e (xn)n in B, the sequence (S − Sn)xn is relatively compact (has compact closure) in (E, ‖ · ‖E). Compact convergence will ensure the convergence of spectral properties in the following sense; see [22]. Proposition 3.5.5 (Perturbation) Let (E, ‖ · ‖E) be a Banach space and (Tn)n and T bounded linear operators on E with Tn →c T (compact convergence). Let λ ∈ σ(T) be an isolated eigenvalue with finite multiplicity m, a... |

165 | The Laplacian on a Riemannian Manifold - Rosenberg - 1997 |
Citation Context: ...ns that minimize (2.1) are the eigenfunctions of the Laplace-Beltrami operator corresponding to the smallest eigenvalues. Furthermore, the spectrum of ∆M on a compact manifold M is known to be discrete [56]. We discard the constant eigenfunction corresponding to eigenvalue 0 and use the next k eigenfunctions to produce an optimally smooth embedding into R^k. 2.2.3 The Algorithm Given points x1, x2, . . .... |
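The algorithm outlined in this context (heat-kernel weights, graph Laplacian, bottom nontrivial eigenvectors, discarding the constant one) can be sketched in a few lines. This is a minimal dense version with a fully connected weight graph rather than the kNN graph usually used in practice; the function name and the toy circle data are mine:

```python
import numpy as np

def laplacian_eigenmaps(X, k_embed=2, sigma=1.0):
    """Minimal dense sketch of Laplacian Eigenmaps: heat-kernel weights W,
    graph Laplacian L = D - W, and the bottom nontrivial generalized
    eigenvectors of L v = lambda D v."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                              # no self-loops
    D = W.sum(axis=1)
    # Solve L v = lambda D v via the symmetric normalization D^{-1/2} L D^{-1/2}
    Dinv_sqrt = 1.0 / np.sqrt(D)
    L_sym = np.eye(n) - Dinv_sqrt[:, None] * W * Dinv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)                    # eigenvalues in ascending order
    # discard the constant eigenvector (eigenvalue 0), keep the next k_embed
    return Dinv_sqrt[:, None] * vecs[:, 1:1 + k_embed]

# toy example: points on a circle embedded in R^3
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
X = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
Y = laplacian_eigenmaps(X, k_embed=2)
print(Y.shape)  # (40, 2)
```

The generalized problem L v = λ D v is solved through its symmetric normalization so that a single `eigh` call suffices.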

158 | Nonlinear Dimensionality Reduction - Lee, Verleysen - 2007 |
Citation Context: ...e original point cloud. The sense in which the representation is faithful or optimal varies – different algorithms attempt to preserve different geometric or topological properties of the manifold [49]. In general, nothing is known about the geometry or topology of the manifold, not even its dimension, and this makes manifold learning a notoriously ill-posed problem. Indeed, through any given set o... |

156 | Towards a theoretical foundation for Laplacian-based manifold methods - Belkin, Niyogi - 2005 |
Citation Context: ...ontinuous settings, the formal and precise connections between the two were established only much later than the formulation of the algorithm, in a series of results we shall discuss in Section 2.2.6 [10,12]. Let M be a smooth, compact, k-dimensional manifold embedded in R^d. We are looking for a map f : M → R such that if x, y ∈ M are close, then f(x) and f(y) are also close, and assume that f is twice d... |

152 | An elementary proof of the Johnson-Lindenstrauss lemma - Dasgupta, Gupta - 1999 |
Citation Context: ...or measurement in Compressed Sensing. The notion of using a random projection for dimensionality reduction is not new. Long before the present wave of interest, the Johnson-Lindenstrauss Lemma (JL) [32] used a random projection for a stable embedding of a finite point cloud. Lemma 4.1.1 (Johnson-Lindenstrauss) Given 0 < ε < 1, a set X of n points in R^N, and a number M ≥ O(ln n)/ε², there is a Lipschi... |
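The JL guarantee is easy to observe empirically. The sketch below uses a scaled Gaussian matrix, one standard JL construction, and checks the two-sided distance bound over all pairs; the sizes n, N, M, the tolerance ε, and the seed are illustrative, with M taken comfortably larger than the lemma's O(ln n)/ε² bound requires:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, M, eps = 50, 1000, 400, 0.5   # illustrative sizes

X = rng.standard_normal((n, N))                  # n points in R^N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # scaled Gaussian projection
Y = X @ Phi.T                                    # images f(u) = Phi u in R^M

# check (1 - eps)||u - v|| <= ||f(u) - f(v)|| <= (1 + eps)||u - v|| on all pairs
ok = True
for i in range(n):
    for j in range(i + 1, n):
        d = np.linalg.norm(X[i] - X[j])
        dp = np.linalg.norm(Y[i] - Y[j])
        ok = ok and (1 - eps) * d <= dp <= (1 + eps) * d
print(ok)
```

The map here is linear, hence Lipschitz; the Dasgupta-Gupta proof the entry above cites shows a target dimension of order ε⁻² ln n already suffices with high probability.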

141 | Hessian eigenmaps: new locally linear embedding techniques for high-dimensional data - Donoho, Grimes - 2003 |
Citation Context: ...ons are typically derived from a graph representation of the data, used in most state-of-the-art learning algorithms, such as Diffusion Wavelets [26], Locally Linear Embedding (LLE) [57], Hessian LLE [33], and Laplacian Eigenmaps [9]. 2.2 Laplacian Eigenmaps Laplacian Eigenmaps (LE) shall be our point of departure in much of what follows, so we now take the time to describe key features of the algorit... |

125 | Graph approximations to geodesics on embedded manifolds. - Bernstein, Silva, et al. - 2001 |

123 | Nonlinear Dimensionality Reduction by Locally Linear Embedding - Roweis, Saul - 2000 |
Citation Context: .... Such approximations are typically derived from a graph representation of the data, used in most state-of-the-art learning algorithms, such as Diffusion Wavelets [26], Locally Linear Embedding (LLE) [57], Hessian LLE [33], and Laplacian Eigenmaps [9]. 2.2 Laplacian Eigenmaps Laplacian Eigenmaps (LE) shall be our point of departure in much of what follows, so we now take the time to describe key featu... |

108 | A New Compressive Imaging Camera Architecture Using Optical-Domain Compression - Takhar, Laska, et al. - 2006 |
Citation Context: ...ly. Dimensionality reduction in CS is linear and nonadaptive, i.e., the mapping does not depend on the data. CS has many promising applications in signal acquisition, compression, and medical imaging [36,51,58]. CS theory states that, with high probability, every K-sparse signal x ∈ R^N can be recovered from just M = O(K log(N/K)) linear measurements y = Φx, where Φ is an M × N measurement matrix drawn random... |

71 | One cannot hear the shape of a drum - Gordon, Webb, et al. - 1992 |
Citation Context: ...if the answer might be positive for planar domains, and popularized the question in 1966 [47] when he asked: “Can one hear the shape of a drum?” But again the question was answered negatively in 1992 [41] by Gordon, Webb, and Wolpert, who constructed the counterexample shown in Figure 2.6: these two regions have identical eigenvalues but different shapes. Figure 2.6: One cannot hear the shape of a dr... |

70 | Intrinsic dimension estimation using packing numbers - Kegl - 2002 |
Citation Context: ...old on the right, the green square is closer. Most algorithms therefore require the intrinsic dimension as an input parameter, and several algorithms exist for estimating it for any given point cloud [48]. Figure 2.5: One- and two-dimensional manifolds. A popular family of manifold learning algorithms is based on the Laplace-Beltrami operator of the underlying manifold. Eigenfunctions of this operator, w... |

49 | Improved Nyström low-rank approximation and error analysis - Zhang, Tsang, et al. - 2008 |
Citation Context: ...e data set to a low-dimensional space (R^20). Furthermore, we compare the result with the output produced by the state-of-the-art out-of-sample extension algorithm, Improved Nyström (IN), described in [63]. Figure 4.3 shows the results. We clearly see, as suggested by the colored marks, that even when the general shape of the image is somewhat altered with LERP, the relative positions of the mapped poi... |

43 | Empirical graph laplacian approximation of laplace–beltrami operators: Large sample results. - Giné, Koltchinskii - 2006 |

42 | Dimensionality Reduction: A Comparative Review - Maaten, Postma, et al. - 2009 |
Citation Context: ...on with Diffusion Wavelets. Many methods of dimensionality reduction (DR) have been developed and successfully applied. An important distinction is made between the linear and the nonlinear techniques [53]. Linear methods assume that the data lies on or near a linear subspace of the high-dimensional ambient space. Nonlinear methods make no assumption of linearity and are designed to identify complex no... |
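The linear case described in this context has a canonical representative, PCA: center the data and project onto the top principal directions. A minimal SVD-based sketch (the function name and the synthetic data, generated to lie in an 8-dimensional subspace of R^30, are mine):

```python
import numpy as np

def pca(X, k):
    """Project centered data onto its top-k principal directions,
    i.e., the leading right singular vectors of the centered matrix."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

rng = np.random.default_rng(0)
# synthetic data lying exactly in an 8-dimensional linear subspace of R^30
X = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 30))
Y = pca(X, 8)
print(Y.shape)  # (100, 8)
```

Because this data actually lies in a linear subspace, 8 components capture it exactly; the nonlinear methods discussed above target precisely the cases where no such subspace exists.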

41 | Data dimensionality estimation methods: a survey. - Camastra - 2003 |

39 | Exploiting manifold geometry in hyperspectral imagery - Bachmann, Ainsworth, et al. - 2005 |

39 | Constructing the Laplace Operator from Point Clouds - Belkin, Sun, et al. - 2009 |

28 | Fast approximate kNN graph construction for high dimensional data via recursive Lanczos bisection - Chen, Fang, et al. - 2009 |
Citation Context: ... the previous section targets the dimension. In an effort to deal with the number of points, we have implemented and tested an algorithm for the construction of approximate neighborhoods described in [23]. In this algorithm, the set of points is recursively divided into two smaller subsets, using spectral bisection, based on the inexpensive Lanczos algorithm. Once the size of the subset is small enoug... |
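The divide-and-conquer scheme this context describes, splitting the point set into two overlapping halves of size (1 + α)n/2 and recursing, can be sketched as follows. This is my simplification: it bisects along the top principal direction (a cheap stand-in for the paper's Lanczos-based spectral bisection), and all parameter values are illustrative:

```python
import numpy as np

def knn_brute(X, idx, k):
    """Exact kNN within the subset idx (the recursion base case)."""
    sub = X[idx]
    d = ((sub[:, None, :] - sub[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    order = np.argsort(d, axis=1)[:, :k]
    return {int(idx[i]): set(int(idx[j]) for j in order[i]) for i in range(len(idx))}

def approx_knn(X, idx, k, alpha=0.1, leaf=32):
    """Recursively split into two overlapping halves of size (1+alpha)n/2,
    solve small subsets exactly, and merge by keeping the k closest."""
    n = len(idx)
    if n <= leaf:
        return knn_brute(X, idx, k)
    sub = X[idx] - X[idx].mean(0)
    _, _, vt = np.linalg.svd(sub, full_matrices=False)  # top principal direction
    order = idx[np.argsort(sub @ vt[0])]
    half = int((1 + alpha) * n / 2)
    left = approx_knn(X, order[:half], k, alpha, leaf)
    right = approx_knn(X, order[-half:], k, alpha, leaf)
    merged = {}
    for part in (left, right):
        for i, nbrs in part.items():
            merged.setdefault(i, set()).update(nbrs)
    # keep only the k nearest among the merged candidates for each point
    return {i: sorted(c, key=lambda j: ((X[i] - X[j]) ** 2).sum())[:k]
            for i, c in merged.items()}

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
graph = approx_knn(X, np.arange(200), k=5)
print(len(graph), all(len(v) == 5 for v in graph.values()))
```

The overlap fraction α is exactly what produces the recurrence T(n) = 2T((1 + α)n/2) + O(dn) analyzed earlier on this page: larger α gives more accurate neighborhoods at the cost of a larger exponent t.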

17 | Improved manifold coordinate representations of large-scale hyperspectral scenes - Bachmann, Ainsworth, et al. - 2006 |

15 | Decoding via linear programming - Candes, Tao - 2005 |

13 | Iterative Non-linear Dimensionality Reduction with Manifold Sculpting - Gashler, Ventura, et al. - 2008 |
Citation Context: ...scaled and rotated. While the ambient space is R^1024, the intrinsic dimensionality of the set is two, since only two variables are needed to produce each image. Using Manifold Sculpting, described in [38], to map the dataset to the two-dimensional plane, we obtain the result on the right side of the figure. Figure 2.1: Dimensionality Reduction Using Manifold Sculpting. As a slightly more advanced exa... |

12 | Practical construction of k-nearest neighbor graphs in metric spaces - Paredes, Chavez, et al. - 2006 |
Citation Context: ...eferences therein. However, most of these algorithms perform poorly in high dimensions, require a significant amount of pre-processing, or fail to provide a guarantee of asymptotic time complexity [1,55]. The algorithm we have chosen to implement and use with LE requires no pre-processing, is very effective in high dimensions, and comes complete with a detailed analysis of time complexity. 5.3 The Al... |

9 | Manifold Learning: The Price of Normalization - Goldberg, Kushnir, et al. |
Citation Context: ...e next section, LE performs reasonably well on some real datasets. However, we note that it is easy to construct simple counterexamples, i.e., manifolds that cannot be recovered by LE. In fact, in [40] the authors consider a two-dimensional rectangle and show that if the ratio of its sides is greater than two, the output of LE will be a one-dimensional manifold. 2.2.5 Examples As a toy vision examp... |

5 | Improving the performance of classifiers in high-dimensional remote sensing applications: An adaptive resampling strategy for error-prone exemplars (aresepe - Bachmann |

2 | The Multiplicative Zak Transform, Dimension Reduction, and Wavelet Analysis of LIDAR Data - Flake - 2010 |

2 | Enumeration of Harmonic Frames and Frame Based Dimension Reduction - Hirn - 2009 |
Citation Context: ...r representing each class and its coordinates are the average of all the vectors in the training set that belong to that class. We now describe this procedure in detail, following the presentation in [45]. Figure 2.14: kNN classification. Let X = {x1, x2, . . . , xn} ⊂ R^D denote the set of input vectors in our training set. For each i, xi belongs to exactly one of q classes Ck, k = 1, 2, . . . , q. Ass... |
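The representative-per-class scheme this context describes, one vector per class whose coordinates are the average of that class's training vectors, amounts to a nearest-class-mean classifier. A minimal sketch (the function names and the two-class toy data are mine, for illustration only):

```python
import numpy as np

def class_means(X, labels):
    """One representative per class: the mean of the training vectors
    belonging to that class."""
    classes = sorted(set(labels))
    labels = np.array(labels)
    return classes, np.stack([X[labels == c].mean(axis=0) for c in classes])

def classify(x, classes, means):
    """Assign x to the class whose representative is nearest in Euclidean distance."""
    return classes[int(np.argmin(((means - x) ** 2).sum(axis=1)))]

# toy example: two well-separated classes in R^2
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = [0, 0, 1, 1]
classes, means = class_means(X, y)
print(classify(np.array([4.8, 5.2]), classes, means))  # -> 1
```

Compared with plain kNN on all training vectors, this reduces each classification to q distance computations, one per class, at the cost of ignoring within-class structure.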

2 | Hyperspectral Dataset, HYDICE sensor imagery. Available: http://www.agc.army.mil/Hypercube - Urban |
Citation Context: ...e containing 224 bands [2]. In chapter 6, we will use two specific data sets, known as “Urban” and “Smith”. Two features of these sets make them popular. First, these sets are publicly available [3–6] [60]. Second, ground truth for the purposes of material classification is available for some of the pixels, a fact that we shall later exploit in order to assess the performance of our algorithms. A color... |

2 | Dimensionality Reduction for Hyperspectral Data - Widemann - 2008 |

1 | Automatic classification of land-cover on smith island, va using hymap imagery - Bachmann, Donato, et al. |

1 | Discrete Laplace Operator on Meshed Surfaces - Belkin, Sun, et al. - 2008 |

1 | Schroedinger Eigenmaps for the Analysis and Classification - Czaja, Ehler |
Citation Context: ...bility of LE to identify underlying structure such as clusters in the data, and also allow for the introduction of prior knowledge. One such generalization, an algorithm called Schrödinger Eigenmaps [30], was recently designed and applied to biomedical images. In Chapter 3, we establish its consistency, i.e., we show that under certain mild conditions, the discrete approximations converge to well def... |

1 | Laplacian Eigenmaps with Random - Halevy |

1 | Accelerating Laplacian Eigenmaps Using Fast Approximate Neighborhood - Halevy |

1 | Multiscale analysis of data sets with diffusion wavelets - Maggioni, Coifman - 2007 |
Citation Context: ...en well preserved in the representation space, whose dimension is drastically lower. Figure 2.2: Face Recognition with LPP. As an example from another field, consider the following problem, taken from [54]. We are given 1047 articles from Science News, each belonging to one of eight fields. 2036 words are selected to capture the information content of this particular body of documents. We then repres... |