Results 1 – 10 of 1,651
Greedy spectral embedding
"... Spectral dimensionality reduction methods and spectral clustering methods require computation of the principal eigenvectors of an n × n matrix, where n is the number of examples. Following up on previously proposed techniques to speed up kernel methods by focusing on a subset of m examples, we study a greedy selection procedure for this subset, based on the feature-space distance between a candidate example and the span of the previously chosen ones. In the case of kernel PCA or spectral clustering this reduces computation to O(m²n). For the same computational complexity, we can also compute ..."
Cited by 20 (2 self)
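The selection rule this abstract describes (repeatedly pick the candidate farthest, in feature space, from the span of the points chosen so far) can be illustrated with a plain linear kernel. A minimal sketch; `greedy_subset` and every detail below are illustrative, not the authors' code:

```python
import numpy as np

def greedy_subset(X, m):
    """Pick m rows of X greedily: at each step choose the point whose
    distance to the span of the already-chosen points is largest.
    Linear-kernel sketch; the paper works in a general feature space."""
    chosen = []
    R = X.astype(float).copy()   # residuals w.r.t. the current span
    for _ in range(m):
        dists = np.linalg.norm(R, axis=1)
        i = int(np.argmax(dists))
        if dists[i] < 1e-12:     # remaining points already lie in the span
            break
        chosen.append(i)
        v = R[i] / dists[i]
        R -= np.outer(R @ v, v)  # Gram-Schmidt: project the new direction out
    return chosen

X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.5, 0.5]])
print(greedy_subset(X, 2))  # picks the two spanning directions: [0, 2]
```

Each step costs one kernel column and one rank-one update, which is where the O(m²n) total for m selected points comes from.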
Laplacian eigenmaps and spectral techniques for embedding and clustering.
 Proceedings of Neural Information Processing Systems
, 2001
"... Drawing on the correspondence between the graph Laplacian, the Laplace–Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low-dimensional manifold embedded in ..."
Cited by 668 (7 self)
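In its unnormalized form the algorithm reduces to an eigendecomposition of the graph Laplacian. A minimal sketch assuming the affinity matrix W is already built (the paper also covers heat-kernel weights and a normalized variant):

```python
import numpy as np

def laplacian_eigenmap(W, dim):
    """Unnormalized Laplacian-eigenmaps sketch: embed a graph with
    symmetric affinity matrix W using the eigenvectors of L = D - W
    that belong to the smallest nonzero eigenvalues."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]     # skip the trivial constant eigenvector

# A 4-node path graph: the 1-D embedding orders the nodes along the path.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
Y = laplacian_eigenmap(W, 1)
```

For the path graph the second eigenvector (the Fiedler vector) is monotone along the path, so the two endpoints land at opposite ends of the embedding.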
For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution
 Comm. Pure Appl. Math
, 2004
"... We consider linear equations y = Φα, where y is a given vector in R^n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R^m. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ..."
Cited by 568 (10 self)
"... In contrast, heuristic attempts to sparsely solve such systems – greedy algorithms and thresholding – perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues ..."
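The minimal-ℓ1-norm solution the title refers to, min ‖α‖₁ subject to y = Φα, is computable as a linear program. A generic basis-pursuit sketch; the variable split and solver choice are assumptions of this illustration, not the paper's code:

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_solution(Phi, y):
    """min ||alpha||_1 s.t. Phi @ alpha = y, via the standard LP split
    alpha = a_pos - a_neg with a_pos, a_neg >= 0."""
    n, m = Phi.shape
    c = np.ones(2 * m)             # sum(a_pos) + sum(a_neg) = ||alpha||_1
    A_eq = np.hstack([Phi, -Phi])  # Phi @ (a_pos - a_neg) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * m))
    return res.x[:m] - res.x[m:]

# Underdetermined 2x3 system whose sparsest solution is [0, 0, 2]:
# the l1 minimizer recovers it, while [2, 2, 0] has larger l1 norm.
Phi = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
alpha = min_l1_solution(Phi, np.array([2.0, 2.0]))
```

The paper's point is that for most large random Φ this convex surrogate returns the sparsest solution exactly, where greedy methods and thresholding do not.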
Complete discrete 2D Gabor transforms by neural networks for image analysis and compression
, 1988
"... A three-layered neural network is described for transforming two-dimensional discrete signals into generalized non-orthogonal 2D “Gabor” representations for image analysis, segmentation, and compression. These transforms are conjoint spatial/spectral representations [10], [15], which provide a complete image description in terms of locally windowed 2D spectral coordinates embedded within global 2D spatial coordinates. Because intrinsic redundancies within images are extracted, the resulting image codes can be very compact. However, these conjoint transforms are inherently difficult to compute ..."
Cited by 478 (8 self)
Induction of Selective Bayesian Classifiers
 CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE
, 1994
"... In this paper, we examine previous work on the naive Bayesian classifier and review its limitations, which include a sensitivity to correlated features. We respond to this problem by embedding the naive Bayesian induction scheme within an algorithm that carries out a greedy search through the space ..."
Cited by 265 (7 self)
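One way to realize the greedy search this abstract describes is forward feature selection wrapped around a naive Bayes learner. The helper below is a sketch under assumed names (`select_features`, hold-out accuracy scoring); the paper's own search procedure may differ in its details:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def select_features(Xtr, ytr, Xval, yval):
    """Greedy forward search through feature subsets, scoring each
    candidate subset by a naive Bayes model's hold-out accuracy.
    Wrapper sketch only, not the paper's exact algorithm."""
    selected, best = [], 0.0
    improved = True
    while improved:
        improved = False
        for f in range(Xtr.shape[1]):
            if f in selected:
                continue
            cand = selected + [f]
            acc = GaussianNB().fit(Xtr[:, cand], ytr).score(Xval[:, cand], yval)
            if acc > best:
                best, best_f, improved = acc, f, True
        if improved:
            selected.append(best_f)
    return selected, best

# Feature 0 separates the classes; feature 1 is noise and is never added.
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = np.column_stack([[0.0, 0.1, -0.1, 0.05, 5.0, 5.1, 4.9, 5.05],
                     [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]])
sel, acc = select_features(X, y, X, y)
```

Because useless or redundant features never improve the score, they are simply left out, which is how the selective scheme sidesteps naive Bayes's sensitivity to correlated features.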
Sparse subspace clustering
 In CVPR
, 2009
"... We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all oth ..."
Cited by 241 (14 self)
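The core step, finding each point's sparse representation in terms of all the other points, can be sketched with an off-the-shelf ℓ1 solver. The regularization weight and helper name here are illustrative assumptions, and the subsequent spectral-clustering step on the coefficient matrix is omitted:

```python
import numpy as np
from sklearn.linear_model import Lasso

def ssc_coefficients(X, alpha=0.01):
    """For each column of X (one data point), find a sparse combination
    of the other columns that reconstructs it. Points tend to pick
    neighbors from their own subspace, yielding a block-sparse C."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i            # dictionary = all other points
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(X[:, mask], X[:, i])
        C[mask, i] = model.coef_
    return C

# Two 1-D subspaces (x-axis and y-axis): point 0 is reconstructed from
# point 1 (same subspace) with zero weight on points 2 and 3.
X = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 3.0]])
C = ssc_coefficients(X)
```

In the full method, an affinity built from |C| + |C|ᵀ is fed to spectral clustering so that the subspace structure becomes the cluster structure.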
Image compression via joint statistical characterization in the wavelet domain
, 1997
"... We develop a statistical characterization of natural images in the wavelet transform domain. This characterization describes the joint statistics between pairs of subband coefficients at adjacent spatial locations, orientations, and scales. We observe that the raw coefficients are nearly decorrelate ..."
Cited by 238 (24 self)
"... demonstrate the power of this model, we construct an image coder called EPWIC (Embedded Predictive Wavelet Image Coder), in which subband coefficients are encoded one bit-plane at a time using a non-adaptive arithmetic encoder that utilizes probabilities calculated from the model. Bit-planes are ordered using a ..."
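The bit-plane decomposition mentioned here can be illustrated on integer-quantized coefficient magnitudes; a toy sketch only, which does not reproduce EPWIC's conditional-probability model or its plane ordering:

```python
import numpy as np

def bitplanes(coeffs, n_planes):
    """Split nonnegative integer coefficients into bit-planes, most
    significant plane first -- the granularity at which an embedded
    coder transmits the subband data."""
    return [(coeffs >> p) & 1 for p in range(n_planes - 1, -1, -1)]

# Coefficients 5 (binary 101) and 2 (binary 010) over three planes.
coeffs = np.array([5, 2])
planes = bitplanes(coeffs, 3)
```

Truncating the stream after any plane still yields a coarse reconstruction, which is what makes the code "embedded".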
Spectral bounds for sparse PCA: Exact and greedy algorithms
 Advances in Neural Information Processing Systems 18
, 2006
"... Sparse PCA seeks approximate sparse “eigenvectors” whose projections capture the maximal variance of data. As a cardinality-constrained and non-convex optimization problem, it is NP-hard and yet it is encountered in a wide range of applied fields, from bioinformatics to finance. Recent progress has focused mainly on continuous approximation and convex relaxation of the hard cardinality constraint. In contrast, we consider an alternative discrete spectral formulation based on variational eigenvalue bounds and provide an effective greedy strategy as well as provably optimal solutions using ..."
Cited by 79 (4 self)
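In the discrete formulation, choosing a support set of size k amounts to maximizing the top eigenvalue of the corresponding principal submatrix of the covariance. A simple greedy forward-selection baseline in that spirit (not the paper's exact algorithm or its bounds):

```python
import numpy as np

def greedy_sparse_pca(S, k):
    """Grow a support set of size k by repeatedly adding the variable
    that most increases the largest eigenvalue of the principal
    submatrix S[support, support]. Greedy baseline sketch only."""
    d = S.shape[0]
    support = []
    for _ in range(k):
        best_i, best_val = None, -np.inf
        for i in range(d):
            if i in support:
                continue
            idx = support + [i]
            val = np.linalg.eigvalsh(S[np.ix_(idx, idx)])[-1]  # top eigenvalue
            if val > best_val:
                best_i, best_val = i, val
        support.append(best_i)
    return support, best_val

# Variable 2 alone carries the most variance, so it is chosen first.
S = np.array([[1.0, 0.9, 0.0], [0.9, 1.0, 0.0], [0.0, 0.0, 1.5]])
support, value = greedy_sparse_pca(S, 1)
```

Note that greedy choices can be sub-optimal for larger k (here the correlated pair {0, 1} has top eigenvalue 1.9 but the greedy path starts from variable 2), which is why the paper pairs the greedy strategy with variational eigenvalue bounds and exact methods.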
Probabilistic nonlinear principal component analysis with Gaussian process latent variable models
 Journal of Machine Learning Research
, 2005
"... Summarising a high-dimensional data set with a low-dimensional embedding is a standard approach for exploring its structure. In this paper we provide an overview of some existing techniques for discovering such embeddings. We then introduce a novel probabilistic interpretation of principal component ..."
Cited by 229 (24 self)
Reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations
, 2008
"... reduced basis approximation and a posteriori error estimation for linear functional outputs of affinely parametrized elliptic coercive partial differential equations. The essential ingredients are (primal-dual) Galerkin projection onto a low-dimensional space associated with a smooth “parametric manifold”—dimension reduction; efficient and effective greedy sampling methods for identification of optimal and numerically stable approximations—rapid convergence; a posteriori error estimation procedures—rigorous and sharp bounds for the linear-functional outputs of interest; and Offline ..."
Cited by 204 (37 self)