### Citations

3463 | The Elements of Statistical Learning
- Hastie, Tibshirani, et al.
- 2001
Citation Context: ...such as engineering, biology, and economics. However, the performance of many learning algorithms degrades rapidly as the dimensionality increases, which is referred to as the curse of dimensionality [14]. As a result, dimensionality reduction becomes an essential data preprocessing technique for finding meaningful low-dimensional structures hidden in the original high-dimensional space [10]. Two of th...

1254 | Kernel Methods for Pattern Analysis
- Shawe-Taylor, Cristianini
- 2004
Citation Context: ...t x_i and each projection l, LLP fits a kernel machine f_i^l(x) using {x_j, y_{lj}}, x_j ∈ N_i^-, as the training data. Denote the size of N_i^- by n_i^-. The function f_i^l(x_i) is fitted via kernel ridge regression [20], and we obtain f_i^l(x_i) = (k_i^-)^T (K_i^- + λI)^{-1} y_{li} (10), where k_i^- ∈ R^{n_i^-} is the vector [K(x_i, x_j)]^T for x_j ∈ N_i^-, K_i^- ∈ R^{n_i^- × n_i^-} is the local kernel matrix over N_i^-, and y_{li} ∈ R^{n_i^-} is the...
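The local kernel ridge regression fit quoted in this context (Eq. 10) can be sketched in NumPy. This is a hedged illustration, not the paper's implementation: the Gaussian kernel, the `gamma` bandwidth, and the function names are assumptions; Eq. (10) itself only requires some kernel K and ridge parameter λ.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_predict(x_i, X_nbrs, y_nbrs, lam=1.0, gamma=1.0):
    """Kernel ridge regression prediction at x_i from its neighbors,
    mirroring Eq. (10): f(x_i) = k^T (K + lam*I)^{-1} y."""
    K = rbf_kernel(X_nbrs, X_nbrs, gamma)           # local kernel matrix over N_i^-
    k = rbf_kernel(x_i[None, :], X_nbrs, gamma)[0]  # vector [K(x_i, x_j)]
    alpha = np.linalg.solve(K + lam * np.eye(len(X_nbrs)), y_nbrs)
    return k @ alpha
```

As λ → 0 the fit interpolates the neighbors' targets, which is a quick sanity check on the formula.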

665 | Laplacian eigenmaps and spectral techniques for embedding and clustering
- Belkin, Niyogi
- 2001
Citation Context: ...Revised 2013-07-14; Accepted 2013-08-31. 436 International Journal of Software and Informatics, Volume 7, Issue 3 (2013) approaches include Locally Linear Embedding (LLE) [18], Laplacian Eigenmap (LE) [4], Neighborhood Preserving Embedding (NPE) [11], and Locality Preserving Projections (LPP) [12]. These methods have succeeded in recovering the intrinsic geometric structure of a broad class of nonlinear...

414 | Locality preserving projections
- He, Niyogi
- 2003
Citation Context: ...approaches include Locally Linear Embedding (LLE) [18], Laplacian Eigenmap (LE) [4], Neighborhood Preserving Embedding (NPE) [11], and Locality Preserving Projections (LPP) [12]. These methods have succeeded in recovering the intrinsic geometric structure of a broad class of nonlinear data manifolds. Besides, it has been shown that all of those algorithms can be reformulated...

292 | Handbook of Matrices
- Lütkepohl
- 1996
Citation Context: ...an conclude that L1 = Σ_{i=1}^m S_i L_i S_i^T 1 = Σ_{i=1}^m S_i L_i 1 = 0 (23), where we use the fact that S_i^T 1 = 1 and Π1 = 0. So, 1 is an eigenvector of L with eigenvalue 0. □ Following the Rayleigh-Ritz theorem [17], we know that the optimal P* that minimizes Eq. (22) is given by the smallest eigenvectors of the following generalized eigenvalue problem: X L X^T α = γ X X^T α (24). Let {α_1, ..., α_p} ⊂ R^n be the smalles...
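Eq. (24) in this context is a symmetric-definite generalized eigenvalue problem, so its smallest eigenvectors can be obtained with a standard dense solver. A minimal sketch, assuming X is a d×n data matrix, L a symmetric PSD graph Laplacian, and XX^T positive definite (as the paper arranges via PCA); the function name is illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def smallest_generalized_eigvecs(X, L, p):
    """Solve X L X^T a = gamma X X^T a (Eq. 24) and return the p smallest
    eigenvalues with their eigenvectors as columns."""
    A = X @ L @ X.T            # d x d, symmetric PSD
    B = X @ X.T                # d x d, assumed positive definite
    gammas, vecs = eigh(A, B)  # eigenvalues returned in ascending order
    return gammas[:p], vecs[:, :p]
```

The columns of the returned matrix play the role of {α_1, ..., α_p} in the text.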

207 | Global versus local methods for nonlinear dimensionality reduction
- de Silva, Tenenbaum
- 2003
Citation Context: ...structure, and fail to respect the local geometric structure. Instead of focusing on global structure, local approaches for dimensionality reduction try to preserve the local geometry of the data space [9]. Typical local... (Corresponding author: Lijun Zhang, Email: zhanglij@msu.edu. Received 2012-11-04; Revised 2013-07-14; Accepted 2013-08-31.)

172 | Stochastic neighbor embedding
- Hinton, Roweis
- 2003
Citation Context: ...of LME is useful for visualizing and understanding the relation between the original variables that create local minima. Stochastic Neighbor Embedding (SNE) [13] is a well-known dimensionality reduction method which aims to optimally preserve neighborhood identity. In [7], a new method named Elastic Embedding (EE) is proposed, which reveals the relationship betwe...

80 | Document clustering using locality preserving indexing
- Cai, He, et al.
- 2005
Citation Context: ...ata point x̂_i into the PCA subspace by throwing away the smallest principal components. We denote the projection matrix of PCA by P_PCA. Through data centering, the trivial solution P^T X = 1_m is removed [5]. The role of PCA is to make the matrix XX^T positive definite, which is necessary for solving the generalized eigenvalue problem (24) [5]. We use X̂_PCA to denote the data matrix after this step. 3. Calculat...
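The PCA preprocessing step described in this context can be sketched with an SVD of the centered data. This is a generic PCA sketch rather than the paper's exact procedure; the names `pca_project` and `q` are illustrative.

```python
import numpy as np

def pca_project(X, q):
    """Center the d x m data matrix X (columns are points) and keep the
    top-q principal components, so the reduced Gram matrix is positive
    definite whenever q <= rank of the centered data."""
    Xc = X - X.mean(axis=1, keepdims=True)   # centering removes P^T X = 1_m
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P_pca = U[:, :q]                         # d x q projection matrix
    return P_pca, P_pca.T @ Xc               # projection matrix, q x m data
```

After this step the reduced data X̂_PCA satisfies the positive-definiteness needed by the generalized eigenvalue problem.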

32 | An efficient algorithm for large-scale discriminant analysis
- Cai, He, et al.
- 2008
Citation Context: ...between-class scatter matrix: S_W = Σ_{i=1}^c X^i Π_{r_i} (X^i)^T (36), S_W + S_B = X Π_m X^T (37). Thus, the solution of LDA is also given by the smallest eigenvectors of the following generalized eigenvalue problem [6,15]: Σ_{i=1}^c X^i Π_{r_i} (X^i)^T α = γ X Π_m X^T α (38). Following Eq. (4.35), we can conclude that as λ → ∞, the eigenproblem (24) of LRP converges to the eigenproblem (38) of LDA. In practice, λ is much smaller than ∞, s...
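The scatter identity S_W + S_B = XΠ_mX^T in Eq. (37) can be checked numerically. This sketch assumes Π_r denotes the r×r centering matrix I − (1/r)11^T and that S_B is the usual between-class scatter Σ_i r_i(μ_i − μ)(μ_i − μ)^T; both are standard conventions, not details quoted from this snippet.

```python
import numpy as np

def centering(r):
    """Centering matrix Pi_r = I - (1/r) 1 1^T (standard convention)."""
    return np.eye(r) - np.ones((r, r)) / r

def within_class_scatter(blocks):
    """S_W = sum_i X^i Pi_{r_i} (X^i)^T over per-class d x r_i blocks (Eq. 36)."""
    return sum(Xi @ centering(Xi.shape[1]) @ Xi.T for Xi in blocks)

def between_class_scatter(blocks):
    """S_B = sum_i r_i (mu_i - mu)(mu_i - mu)^T, the usual between-class scatter."""
    X = np.hstack(blocks)
    mu = X.mean(axis=1)
    return sum(Xi.shape[1] * np.outer(Xi.mean(axis=1) - mu, Xi.mean(axis=1) - mu)
               for Xi in blocks)
```

With X the horizontal stack of the class blocks and m the total number of points, S_W + S_B equals X @ centering(m) @ X.T, which is exactly Eq. (37).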

32 | Nonlinear dimensionality reduction by locally linear embedding. Science
- Roweis, Saul
- 2000
Citation Context: ...approaches include Locally Linear Embedding (LLE) [18], Laplacian Eigenmap (LE) [4], Neighborhood Preserving Embedding (NPE) [11], and Locality Preserving Projections (LPP) [12]. These methods have succeeded in recovering the intrinsic geometric structure o...

29 | Structure preserving embedding
- Shaw, Jebara
- 2009

16 | Introduction to Linear Algebra, 3rd edition
- Strang
- 2003
Citation Context: ...ative structure of N_i. The formulation of L_i in Eq. (18) involves the inverse of one n×n matrix, which is computationally expensive when the dimensionality is high. Using the Woodbury-Morrison formula [21], L_i can be reformulated as [1]: (1/n_i)(Π − Π X_i^T (X_i Π X_i^T + n_i λ I)^{-1} X_i Π) = (1/n_i) Π (I − Π X_i^T (X_i Π X_i^T + n_i λ I)^{-1} X_i Π) Π = (1/n_i) Π (I − IΠ X_i^T (X_i ΠIΠ X_i^T + n_i λ I)^{-1} X_i ΠI) Π = (1/n_i) Π (I + (1/(n_i λ)) Π X_i^T X_i Π)^{-1} Π = λΠ(n...
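The Woodbury rewriting quoted above trades one matrix inverse for another of different size; the two sides of the identity can be compared numerically. A small sketch, assuming X_i is a d×n_i block, Π the n_i×n_i centering matrix, and arbitrary illustrative sizes and λ:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 6, 10, 0.5
X = rng.normal(size=(d, n))               # local data block X_i (d x n_i)
Pi = np.eye(n) - np.ones((n, n)) / n      # centering matrix, Pi @ ones = 0
I_d, I_n = np.eye(d), np.eye(n)

# left-hand side: involves a d x d inverse
lhs = (Pi - Pi @ X.T @ np.linalg.inv(X @ Pi @ X.T + n * lam * I_d) @ X @ Pi) / n
# right-hand side after Woodbury: only an n x n inverse
rhs = Pi @ np.linalg.inv(I_n + (Pi @ X.T @ X @ Pi) / (n * lam)) @ Pi / n

assert np.allclose(lhs, rhs)
```

The equality holds because Π is idempotent and commutes with ΠX^TXΠ, which is what the intermediate steps in the quoted derivation exploit.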

12 | Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection
- Belhumeur, Hespanha, Kriegman
- 1997
Citation Context: ...al patches for NPE, LLP and LRP. k is empirically set to 5 for all the methods. The parameter λ in LLP and LRP is set to 1. 5.1 Face recognition: For face recognition, we compare our algorithm with PCA [2], LDA [2], NPE [11], and LLP [24]. Classification in the original 1024-dimensional space is referred to as Baseline. For each database, r images per class are randomly selected as training samples, and t...

12 | Face recognition using eigenfaces
- Turk, Pentland
- 1991

8 | Locally discriminative coclustering
- Zhang, Chen, et al.
- 2012
Citation Context: ...called Locally Regressive Projections (LRP), is proposed in this paper. LRP is fundamentally built upon the idea of local linear regression, which has recently been applied to ranking [25] and coclustering [27]. For the purpose of discovering the local discriminative structure, we define a local patch for each data point as the set containing this point and its neighbors. LRP assumes that for each local pat...

5 | Worst-case linear discriminant analysis
- Zhang, Yeung
Citation Context: ...aximizes the estimated Dirichlet precision on the projected data, thus reducing the compositional data to a lower dimensionality such that the components are decorrelated as much as possible. In Ref. [28], Worst-case Linear Discriminant Analysis (WLDA) is developed by defining new between-class and within-class scatter measures. WLDA adopts the worst-case view and is more suitable for applications suc...

4 | Face recognition using Laplacianfaces
- He, Yan, et al.
Citation Context: ...ata manifolds. Besides, it has been shown that all of those algorithms can be reformulated in a general graph embedding framework, and their differences lie in the way of describing the local geometry [15,26]. Among those approaches, Locally Linear Embedding (LLE) [18] is one typical local learning method which characterizes the local geometry of the data space by linear coefficients that reconstruct each...

3 | Neighborhood preserving embedding
- He, Cai, et al.
Citation Context: ...approaches include Locally Linear Embedding (LLE) [18], Laplacian Eigenmap (LE) [4], Neighborhood Preserving Embedding (NPE) [11], and Locality Preserving Projections (LPP) [12]. These methods have succeeded in recovering the intrinsic geometric structure of a broad class of nonlinear data manifolds. Besides, it has been shown t...

2 | DIFFRAC: a discriminative and flexible framework for clustering
- Bach, Harchaoui
Citation Context: ...lly Regressive Projections. 3.1 The objective: Since (local) linear regression has been widely studied in other problems, the mathematical formulations in this subsection are similar to some other work [1,25,27]. The key difference is that here local linear regression is applied to dimensionality reduction. For each data point x_i, we define the local patch N_i to be the set containing x_i and its neighboring poin...

2 | The elastic embedding algorithm for dimensionality reduction
- Carreira-Perpiñán
- 2010
Citation Context: ...etween the original variables that create local minima. Stochastic Neighbor Embedding (SNE) [13] is a well-known dimensionality reduction method which aims to optimally preserve neighborhood identity. In [7], a new method named Elastic Embedding (EE) is proposed, which reveals the relationship between Laplacian Eigenmap [4] and SNE. 3 Locally Regressive Projections (LRP): In local dimensionality reduction...

2 | Dirichlet component analysis: feature extraction for compositional data
- Wang, Yang, et al.
Citation Context: ...timation error of each point is also counted once in Eq. (11), so the importance of each point is the same in LLP. 2.4 More recent progress: As an extension of PCA, Dirichlet Component Analysis (DCA) [23] is proposed to handle compositional data (positive constant-sum real vectors). DCA attempts to find the optimal projection that maximizes the estimated Dirichlet precision on the projected data, ...

1 | A survey of dimension reduction techniques (Technical Report)
- Fodor
- 2002
Citation Context: ...nsionality [14]. As a result, dimensionality reduction becomes an essential data preprocessing technique for finding meaningful low-dimensional structures hidden in the original high-dimensional space [10]. Two of the most well-known algorithms are Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Principal Component Analysis (PCA) [3] reduces the dimensionality of the input dat...

1 | Local minima embedding
- Kim, De la Torre
- 2010
Citation Context: ...i-definite program constrained by a set of linear inequalities which captures the connectivity structure of the graph. Instead of focusing on the structure of the data space, Local Minima Embedding (LME) [16] tries to find a low-dimensional embedding that preserves the local minima structure of a given objective function. The embedding...

1 | Local learning projections
- Wu, Yu, Yu, Schölkopf
- 2007
Citation Context: ...ients that reconstruct each point from its neighbors. It then assumes that the embedding of each point can also be reconstructed from its neighbors' embeddings with the same coefficients. Recently, Wu et al. [24] proposed another local method named Local Learning Projections (LLP). Similar to LLE, LLP assumes that the projection value of each point can be estimated based on its neighbors and their proj...

1 | Ranking with local regression and global alignment for cross media retrieval
- Yang, Xu, Nie, Luo, Zhuang
- 2009

1 | Graph embedding: a general framework for dimensionality reduction
- Yan, Xu, Zhang, Zhang
- 2005
Citation Context: ...ata manifolds. Besides, it has been shown that all of those algorithms can be reformulated in a general graph embedding framework, and their differences lie in the way of describing the local geometry [15,26]. Among those approaches, Locally Linear Embedding (LLE) [18] is one typical local learning method which characterizes the local geometry of the data space by linear coefficients that reconstruct each...
Citation Context ...ata manifolds. Besides, it has been shown that all of those algorithms can be reformulated in a general graph embedding framework, and their differences lie in the way of describing the local geometry=-=[15,26]-=-. Among those approaches, Locally Linear Embedding (LLE)[18] is one typical local learning method which characterizes the local geometry of the data space by linear coefficients that reconstruct each ... |