Results 11–20 of 321
Low-Dimensional Models for Dimensionality Reduction and Signal Recovery: A Geometric Perspective
, 2009
"... We compare and contrast from a geometric perspective a number of lowdimensional signal models that support stable informationpreserving dimensionality reduction. We consider sparse and compressible signal models for deterministic and random signals, structured sparse and compressible signal model ..."
Abstract

Cited by 47 (12 self)
 Add to MetaCart
(Show Context)
We compare and contrast from a geometric perspective a number of low-dimensional signal models that support stable information-preserving dimensionality reduction. We consider sparse and compressible signal models for deterministic and random signals, structured sparse and compressible signal models, point clouds, and manifold signal models. Each model has a particular geometrical structure that enables signal information to be stably preserved via a simple linear and non-adaptive projection to a much lower-dimensional space whose dimension either is independent of the ambient dimension at best or grows logarithmically with it at worst. As a bonus, we point out a common misconception related to probabilistic compressible signal models, namely, that the generalized Gaussian and Laplacian random models do not support stable linear dimensionality reduction.
A unified Bayesian framework for MEG/EEG source imaging
 Neuroimage
, 2009
"... The illposed nature of the MEG (or related EEG) source localization problem requires the incorporation of prior assumptions when choosing an appropriate solution out of an infinite set of candidates. Bayesian approaches are useful in this capacity because they allow these assumptions to be explicit ..."
Abstract

Cited by 45 (2 self)
 Add to MetaCart
(Show Context)
The ill-posed nature of the MEG (or related EEG) source localization problem requires the incorporation of prior assumptions when choosing an appropriate solution out of an infinite set of candidates. Bayesian approaches are useful in this capacity because they allow these assumptions to be explicitly quantified using postulated prior distributions. However, the means by which these priors are chosen, as well as the estimation and inference procedures that are subsequently adopted to effect localization, have led to a daunting array of algorithms with seemingly very different properties and assumptions. From the vantage point of a simple Gaussian scale mixture model with flexible covariance components, this paper analyzes and extends several broad categories of Bayesian inference directly applicable to source localization, including empirical Bayesian approaches, standard MAP estimation, and multiple variational Bayesian (VB) approximations. Theoretical properties related to convergence, global and local minima, and localization bias are analyzed, and fast algorithms are derived that improve upon existing methods. This perspective leads to explicit connections between many established algorithms and suggests natural extensions for handling unknown dipole orientations, extended source configurations, correlated sources, temporal smoothness, and computational expediency. Specific imaging methods elucidated under this paradigm include weighted minimum ℓ2-norm, FOCUSS, MCE, VESTAL, sLORETA, ReML and covariance component estimation, beamforming, variational Bayes, the Laplace approximation, and automatic relevance determination (ARD). Perhaps surprisingly, all of these methods can be formulated as particular cases of covariance component estimation using different concave regularization terms and optimization rules, making general theoretical analyses and algorithmic extensions/improvements particularly relevant.
Robust sparse coding for face recognition
 Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition
, 2011
"... Recently the sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the ..."
Abstract

Cited by 42 (9 self)
 Add to MetaCart
(Show Context)
Recently the sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the
Bayesian Robust Principal Component Analysis
, 2010
"... A hierarchical Bayesian model is considered for decomposing a matrix into lowrank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly nonstationary noise statistics. The Bayesian framework infers an approximate r ..."
Abstract

Cited by 40 (4 self)
 Add to MetaCart
A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.
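As an illustrative counterpart to the optimization-based robust PCA the abstract compares against, here is a minimal sketch (not the Bayesian model itself) that alternates singular-value thresholding for the low-rank part with soft-thresholding for the sparse part. The function names and the `lam`/`mu` heuristics are assumptions for illustration:

```python
import numpy as np

def soft(X, tau):
    """Elementwise soft-thresholding (shrinkage) operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_sketch(M, lam=None, mu=None, n_iter=200):
    """Split M into low-rank L plus sparse S by alternating minimization
    of (mu/2)||M - L - S||^2 + ||L||_* + lam ||S||_1."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # common PCP-style weight
    if mu is None:
        mu = 0.25 * m * n / np.sum(np.abs(M))   # rough scale heuristic
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # L-update: singular-value thresholding of M - S
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(soft(s, 1.0 / mu)) @ Vt
        # S-update: soft-threshold the residual elementwise
        S = soft(M - L, lam / mu)
    return L, S
```

Each sub-step solves its subproblem exactly, so the objective is monotonically non-increasing; the Bayesian model in the paper instead infers the decomposition and noise statistics jointly, without fixed thresholds.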
Sparsity preserving projections with applications to face recognition
 Pattern Recogn. 2010
"... Abstract: Dimensionality reduction methods (DRs) have commonly been used as a principled way to understand the highdimensional data such as face images. In this paper, we propose a new unsupervised DR method called Sparsity Preserving Projections (SPP). Unlike many existing techniques such as Local ..."
Abstract

Cited by 40 (3 self)
 Add to MetaCart
(Show Context)
Abstract: Dimensionality reduction (DR) methods have commonly been used as a principled way to understand high-dimensional data such as face images. In this paper, we propose a new unsupervised DR method called Sparsity Preserving Projections (SPP). Unlike many existing techniques such as Locality Preserving Projections (LPP) and Neighborhood Preserving Embedding (NPE), where local neighborhood information is preserved during the DR procedure, SPP aims to preserve the sparse reconstructive relationship of the data, which is achieved by minimizing an L1 regularization-related objective function. The obtained projections are invariant to rotations, rescalings and translations of the data, and more importantly, they contain natural discriminating information even if no class labels are provided. Moreover, SPP chooses its neighborhood automatically and hence can be more conveniently used in practice compared to LPP and NPE. The feasibility and effectiveness of the proposed method are verified on three popular face databases (Yale, AR and Extended Yale B) with promising results. Key words: dimensionality reduction; sparse representation; compressive sensing; face recognition.
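The sparse reconstructive weights at the heart of SPP can be sketched with a plain ISTA solver for the L1-regularized least-squares fit of one sample against all the others. The function name `sparse_weights` and the `lam`/iteration settings are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sparse_weights(X, i, lam=0.1, n_iter=500):
    """Sparsely reconstruct sample X[:, i] from the remaining columns
    via ISTA on  0.5*||A w - y||^2 + lam*||w||_1."""
    A = np.delete(X, i, axis=1)                # dictionary: other samples
    y = X[:, i]
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const.
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ w - y)                  # gradient of the LS term
        w = w - step * g
        # proximal step: soft-threshold toward sparsity
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w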
ExpectationMaximization GaussianMixture Approximate Message Passing
"... Abstract—When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s nonzero coefficients can have a profound affect on recovery meansquared error (MSE). If this distribution was apriori known, one could use efficient approximate message passing (AM ..."
Abstract

Cited by 40 (12 self)
 Add to MetaCart
(Show Context)
Abstract—When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s nonzero coefficients can have a profound affect on recovery meansquared error (MSE). If this distribution was apriori known, one could use efficient approximate message passing (AMP) techniques for nearly minimum MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like Lasso—which is nearly minimax optimal—at the cost of significantly larger MSE for nonleastfavorable distributions. As an alternative, we propose an empiricalBayesian technique that simultaneously learns the signal distribution while MMSErecovering the signal—according to the learned distribution—using AMP. In particular, we model the nonzero distribution as a Gaussian mixture, and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments confirm the stateoftheart performance of our approach on a range of 1 2 signal classes. I.
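The parameter-learning half of the approach can be illustrated with plain EM for a one-dimensional Gaussian mixture; the AMP message-passing step that would supply the pseudo-data is omitted, and `em_gmm_1d` with its initialization scheme is an assumption for illustration:

```python
import numpy as np

def em_gmm_1d(x, K=2, n_iter=200, seed=0):
    """Plain EM for a 1-D Gaussian mixture: the parameter-learning
    ingredient of EM-GM-AMP (the AMP step is omitted)."""
    rng = np.random.default_rng(seed)
    w = np.full(K, 1.0 / K)                    # mixture weights
    mu = rng.choice(x, K, replace=False)       # init means from the data
    var = np.full(K, np.var(x))                # broad initial variances
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        d = x[:, None] - mu[None, :]
        p = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update weights, means, then variances (with new means)
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / n + 1e-12
    return w, mu, var
```

In EM-GM-AMP the same M-step updates are driven by posterior quantities computed inside the AMP iterations rather than by raw samples.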
Compressive Estimation of Doubly Selective Channels: Exploiting Channel Sparsity to Improve Spectral Efficiency in Multicarrier Transmissions
"... We consider the estimation of doubly selective wireless channels within pulseshaping multicarrier systems (which include OFDM systems as a special case). A pilotassisted channel estimation technique using the methodology of compressed sensing (CS) is proposed. By exploiting a channel’s delayDopple ..."
Abstract

Cited by 37 (1 self)
 Add to MetaCart
(Show Context)
We consider the estimation of doubly selective wireless channels within pulse-shaping multicarrier systems (which include OFDM systems as a special case). A pilot-assisted channel estimation technique using the methodology of compressed sensing (CS) is proposed. By exploiting a channel’s delay-Doppler sparsity, CS-based channel estimation allows an increase in spectral efficiency through a reduction of the number of pilot symbols that have to be transmitted. We also present an extension of our basic channel estimator that employs a sparsity-improving basis expansion. We propose a framework for optimizing the basis and an iterative approximate basis optimization algorithm. Simulation results using three different CS recovery algorithms demonstrate significant performance gains (in terms of improved estimation accuracy or reduction of the number of pilots) relative to conventional least-squares estimation, as well as substantial advantages of using an optimized basis.
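Any standard CS recovery routine can play the role of the sparse channel estimator in such a pilot-assisted scheme. As one hedged example, a compact orthogonal matching pursuit in NumPy (the function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then re-fit on the chosen support."""
    r = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))    # most correlated atom
        if j not in support:
            support.append(j)
        # least-squares refit restricted to the current support
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s            # updated residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

Here `A` would collect the pilot observations of the delay-Doppler dictionary and `k` the assumed channel sparsity.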
Learning with dynamic group sparsity
 In International Conference on Computer Vision
, 2009
"... This paper investigates a new learning formulation called dynamic group sparsity. It is a natural extension of the standard sparsity concept in compressive sensing, and is motivated by the observation that in some practical sparse data the nonzero coefficients are often not random but tend to be clu ..."
Abstract

Cited by 33 (13 self)
 Add to MetaCart
(Show Context)
This paper investigates a new learning formulation called dynamic group sparsity. It is a natural extension of the standard sparsity concept in compressive sensing, and is motivated by the observation that in some practical sparse data the nonzero coefficients are often not random but tend to be clustered. Intuitively, better results can be achieved in these cases by reasonably utilizing both clustering and sparsity priors. Motivated by this idea, we have developed a new greedy sparse recovery algorithm, which prunes data residues in the iterative process according to both sparsity and group clustering priors rather than only sparsity as in previous methods. The proposed algorithm can stably recover sparse data with clustering trends using far fewer measurements and computations than current state-of-the-art algorithms, with provable guarantees. Moreover, our algorithm can adaptively learn the dynamic group structure and the sparsity number if they are not available in practical applications. We have applied the algorithm to sparse recovery and background subtraction in videos. Numerous experiments with improved performance over previous methods further validate our theoretical proofs and the effectiveness of the proposed algorithm.
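The clustering-aware pruning idea can be caricatured in a few lines: score each coefficient by its own magnitude plus a fraction of its immediate neighbors' magnitudes, then keep the top k. The function `dgs_prune` and the `neighbor_weight` parameter are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def dgs_prune(x, k, neighbor_weight=0.5):
    """Keep the k coefficients whose magnitude, reinforced by the
    magnitudes of their neighbors, is largest: a toy version of
    clustering-aware pruning."""
    m = np.abs(x)
    score = m.copy()
    score[1:] += neighbor_weight * m[:-1]      # add left-neighbor mass
    score[:-1] += neighbor_weight * m[1:]      # add right-neighbor mass
    keep = np.argsort(score)[-k:]              # indices of top-k scores
    pruned = np.zeros_like(x)
    pruned[keep] = x[keep]
    return pruned
```

With such a score, a clustered group of moderate coefficients outranks an isolated spike of comparable size, which is exactly the bias the group-clustering prior is meant to introduce.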
Compressive-projection principal component analysis and the first eigenvector
 in Proc. IEEE Data Compression Conf
, 2009
"... Abstract—Principal component analysis (PCA) is often central to dimensionality reduction and compression in many applications, yet its datadependent nature as a transform computed via expensive eigendecomposition often hinders its use in severely resourceconstrained settings such as satelliteborn ..."
Abstract

Cited by 29 (8 self)
 Add to MetaCart
(Show Context)
Abstract—Principal component analysis (PCA) is often central to dimensionality reduction and compression in many applications, yet its data-dependent nature as a transform computed via expensive eigendecomposition often hinders its use in severely resource-constrained settings such as satellite-borne sensors. A process is presented that effectively shifts the computational burden of PCA from the resource-constrained encoder to a presumably more capable base-station decoder. The proposed approach, compressive-projection PCA (CPPCA), is driven by projections at the sensor onto lower-dimensional subspaces chosen at random, while the CPPCA decoder, given only these random projections, recovers not only the coefficients associated with the PCA transform, but also an approximation to the PCA transform basis itself. An analysis is presented that extends existing Rayleigh–Ritz theory to the special case of highly eccentric distributions; this analysis in turn motivates a reconstruction process at the CPPCA decoder that consists of a novel eigenvector reconstruction based on a convex-set optimization driven by Ritz vectors within the projected subspaces. As such, CPPCA constitutes a fundamental departure from traditional PCA in that it permits its excellent dimensionality-reduction and compression performance to be realized in a light-encoder/heavy-decoder system architecture. In experimental results, CPPCA outperforms a multiple-vector variant of compressed sensing for the reconstruction of hyperspectral data. Index Terms—Hyperspectral data, principal component analysis (PCA), random projections, Rayleigh–Ritz theory.
Expectation-maximization Bernoulli-Gaussian approximate message passing
 in Proc. Asilomar Conf. Signals Syst. Comput
, 2011
"... Abstract—The approximate message passing (AMP) algorithm originally proposed by Donoho, Maleki, and Montanari yields a computationally attractive solution to the usual ℓ1regularized leastsquares problem faced in compressed sensing, whose solution is known to be robust to the signal distribution. W ..."
Abstract

Cited by 29 (2 self)
 Add to MetaCart
(Show Context)
Abstract—The approximate message passing (AMP) algorithm originally proposed by Donoho, Maleki, and Montanari yields a computationally attractive solution to the usual ℓ1-regularized least-squares problem faced in compressed sensing, whose solution is known to be robust to the signal distribution. When the signal is drawn i.i.d. from a marginal distribution that is not least-favorable, better performance can be attained using a Bayesian variation of AMP. The latter, however, assumes that the distribution is perfectly known. In this paper, we navigate the space between these two extremes by modeling the signal as i.i.d. Bernoulli-Gaussian (BG) with unknown prior sparsity, mean, and variance, and the noise as zero-mean Gaussian with unknown variance, and we simultaneously reconstruct the signal while learning the prior signal and noise parameters. To accomplish this task, we embed the BG-AMP algorithm within an expectation-maximization (EM) framework. Numerical experiments confirm the excellent performance of our proposed EM-BG-AMP on a range of signal types.
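The scalar denoiser that a Bernoulli-Gaussian AMP iteration applies at each step can be written down directly as the MMSE estimate of a BG coefficient observed in Gaussian noise. The function `bg_mmse` and its parameter names are illustrative; in EM-BG-AMP the prior parameters `p`, `mean`, and `var` are precisely the quantities learned by EM:

```python
import numpy as np

def bg_mmse(r, noise_var, p=0.1, mean=0.0, var=1.0):
    """MMSE estimate of a Bernoulli-Gaussian coefficient x from the
    pseudo-measurement r = x + N(0, noise_var): x is zero with
    probability 1-p, and N(mean, var) with probability p."""
    v1 = var + noise_var
    # evidence of r under the 'zero' and 'active' hypotheses
    l0 = (1 - p) * np.exp(-0.5 * r**2 / noise_var) \
        / np.sqrt(2 * np.pi * noise_var)
    l1 = p * np.exp(-0.5 * (r - mean)**2 / v1) / np.sqrt(2 * np.pi * v1)
    pi = l1 / (l0 + l1)                        # posterior activity prob.
    # posterior mean of x given the coefficient is active
    m1 = mean + (var / v1) * (r - mean)
    return pi * m1                             # overall posterior mean
```

For a large pseudo-measurement the estimate approaches the Wiener-shrunk value `var/(var+noise_var) * r`, while near zero it is strongly attenuated, reflecting the prior belief that most coefficients are inactive.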