Results 11-20 of 108
Low-Dimensional Models for Dimensionality Reduction and Signal Recovery: A Geometric Perspective
, 2009
Abstract

Cited by 47 (12 self)
We compare and contrast from a geometric perspective a number of low-dimensional signal models that support stable information-preserving dimensionality reduction. We consider sparse and compressible signal models for deterministic and random signals, structured sparse and compressible signal models, point clouds, and manifold signal models. Each model has a particular geometrical structure that enables signal information to be stably preserved via a simple linear and non-adaptive projection to a much lower dimensional space whose dimension is independent of the ambient dimension at best or grows logarithmically with it at worst. As a bonus, we point out a common misconception related to probabilistic compressible signal models; namely, the generalized Gaussian and Laplacian random models do not support stable linear dimensionality reduction.
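The stability property at the center of this survey can be illustrated with a small numerical sketch: a random, non-adaptive Gaussian projection approximately preserves the distance between sparse signals even though the projected dimension is far below the ambient one. All dimensions and the random seed below are made up for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative sketch: a random, non-adaptive linear projection approximately
# preserves the distance between two sparse signals.
rng = np.random.default_rng(0)
n, m, k = 1000, 120, 5          # ambient dim, projected dim, sparsity

x1 = np.zeros(n)
x1[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x2 = np.zeros(n)
x2[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Gaussian projection scaled so squared distances are preserved in expectation
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

d_orig = np.linalg.norm(x1 - x2)
d_proj = np.linalg.norm(Phi @ x1 - Phi @ x2)
ratio = d_proj / d_orig
print(ratio)
```

With high probability the ratio lands close to 1, which is the information-preservation property the survey formalizes for the various signal models.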
Sparse Recovery from Combined Fusion Frame Measurements
 IEEE Trans. Inform. Theory
Abstract

Cited by 43 (12 self)
Sparse representations have emerged as a powerful tool in signal and information processing, culminating in the success of new acquisition and processing techniques such as Compressed Sensing (CS). Fusion frames are rich new signal representation methods that use collections of subspaces instead of vectors to represent signals. This work combines these exciting fields to introduce a new sparsity model for fusion frames. Signals that are sparse under the new model can be compressively sampled and uniquely reconstructed in ways similar to sparse signals using standard CS. The combination provides a promising new set of mathematical tools and signal models useful in a variety of applications. Under the new model, a sparse signal has energy in very few of the subspaces of the fusion frame, although it need not be sparse within each of the subspaces it occupies. This sparsity model is captured using a mixed ℓ1/ℓ2 norm for fusion frames. A signal sparse in a fusion frame can be sampled using very few random projections and exactly reconstructed using a convex optimization that minimizes this mixed ℓ1/ℓ2 norm. The provided sampling conditions generalize the coherence and RIP conditions used in standard CS theory, and they are shown to be sufficient to guarantee sparse recovery of any signal sparse in our model. Moreover, an average-case analysis using a probability model on the sparse signal shows that under very mild conditions the probability of recovery failure decays exponentially with increasing dimension of the subspaces.
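The mixed ℓ1/ℓ2 norm at the heart of this model is easy to state concretely: the ℓ2 energy of the coefficients within each subspace, summed (ℓ1) across subspaces. The sketch below, with made-up subspace dimensions and coefficients, shows that the norm is smaller for a signal occupying few subspaces than for one of equal energy spread over all of them, which is what the convex recovery program exploits.

```python
import numpy as np

# Hypothetical illustration of the mixed l1/l2 norm for a fusion frame:
# l2 norm within each subspace, summed (l1) across subspaces.
def mixed_l1_l2(coeffs_per_subspace):
    """Sum of per-subspace l2 norms; small when few subspaces carry energy."""
    return sum(np.linalg.norm(c) for c in coeffs_per_subspace)

# Active in only 2 of 5 subspaces (sparse across, dense within)
sparse = [np.zeros(3), np.array([1.0, 2.0, 2.0]), np.zeros(3),
          np.array([0.0, 3.0, 4.0]), np.zeros(3)]
# The same total energy (34) spread evenly over all 5 subspaces
spread = [np.full(3, np.sqrt(34.0 / 15.0)) for _ in range(5)]

print(mixed_l1_l2(sparse), mixed_l1_l2(spread))
```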
Rank Awareness in Joint Sparse Recovery
, 2010
Abstract

Cited by 35 (7 self)
In this paper we revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of single channel sparse recovery, which lies at the heart of the emerging field of compressed sensing. However, MMV approximation has origins in the field of array signal processing, as we discuss in this paper. Inspired by these links, we introduce a new family of MMV algorithms based on the well-known MUSIC method in array processing. We particularly highlight the role of the rank of the unknown signal matrix X in determining the difficulty of the recovery problem. We begin by deriving necessary and sufficient conditions for the uniqueness of the sparse MMV solution, which indicate that the larger the rank of X, the less sparse X needs to be to ensure uniqueness. We also show that as the rank of X increases, the computational effort required to solve the MMV problem through a combinatorial search is reduced. In the second part of the paper we consider practical suboptimal algorithms for MMV recovery. We examine the rank awareness of popular methods such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques and show them to be rank blind in terms of worst case analysis. We then consider a family of greedy algorithms that are rank aware. The simplest such method is a discrete version of the MUSIC algorithm popular in array signal processing. This approach is guaranteed to recover the sparse vectors in the full rank MMV setting under mild conditions. We then extend this idea to develop a rank aware pursuit algorithm that naturally reduces to Order Recursive Matching Pursuit (ORMP) in the single measurement case. This approach also provides guaranteed recovery in the full rank setting.
Numerical simulations demonstrate that the rank aware techniques are significantly better than existing methods in dealing with multiple measurements.
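The full-rank case admits a compact illustration of the discrete-MUSIC idea: when rank(X) equals the sparsity k, the range of Y = AX coincides with the span of the support columns of A, so the support is identified by testing which dictionary columns lie in range(Y). The sizes, random dictionary, and seed below are made up for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative discrete-MUSIC sketch for the full-rank MMV setting.
rng = np.random.default_rng(1)
m, n, k, L = 20, 60, 4, 4        # measurements, atoms, sparsity, channels

A = rng.standard_normal((m, n))
support = np.array([3, 17, 25, 48])
X = np.zeros((n, L))
X[support] = rng.standard_normal((k, L))   # full rank with probability 1
Y = A @ X

# Orthonormal basis U of range(Y) via the SVD
U, _, _ = np.linalg.svd(Y, full_matrices=False)
U = U[:, :k]

# A column a_j is in the support iff it lies (numerically) in range(U):
# the ratio ||U^T a_j|| / ||a_j|| equals 1 on the support, < 1 elsewhere
scores = np.linalg.norm(U.T @ A, axis=0) / np.linalg.norm(A, axis=0)
est_support = np.sort(np.argsort(scores)[-k:])
print(est_support)
```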
Compressive-projection principal component analysis and the first eigenvector
 in Proc. IEEE Data Compression Conf
, 2009
Abstract

Cited by 30 (8 self)
Principal component analysis (PCA) is often central to dimensionality reduction and compression in many applications, yet its data-dependent nature as a transform computed via expensive eigendecomposition often hinders its use in severely resource-constrained settings such as satellite-borne sensors. A process is presented that effectively shifts the computational burden of PCA from the resource-constrained encoder to a presumably more capable base-station decoder. The proposed approach, compressive-projection PCA (CPPCA), is driven by projections at the sensor onto lower-dimensional subspaces chosen at random, while the CPPCA decoder, given only these random projections, recovers not only the coefficients associated with the PCA transform, but also an approximation to the PCA transform basis itself. An analysis is presented that extends existing Rayleigh–Ritz theory to the special case of highly eccentric distributions; this analysis in turn motivates a reconstruction process at the CPPCA decoder consisting of a novel eigenvector reconstruction based on a convex-set optimization driven by Ritz vectors within the projected subspaces. As such, CPPCA constitutes a fundamental departure from traditional PCA in that it permits its excellent dimensionality-reduction and compression performance to be realized in a light-encoder/heavy-decoder system architecture. In experimental results, CPPCA outperforms a multiple-vector variant of compressed sensing for the reconstruction of hyperspectral data. Index Terms: Hyperspectral data, principal component analysis (PCA), random projections, Rayleigh–Ritz theory.
Surveying and comparing simultaneous sparse approximation (or group-lasso) algorithms
Domain decomposition methods for linear inverse problems with sparsity constraints
, 2007
Abstract

Cited by 25 (6 self)
Quantities of interest appearing in concrete applications often possess sparse expansions with respect to a preassigned frame. Recently, sparsity measures have been introduced that are typically constructed from weighted ℓ1 norms of frame coefficients. The reconstruction of a sparse vector from noisy linear measurements can then be modeled as the minimization of a functional defined by the sum of the discrepancy with respect to the data and the weighted ℓ1-norm of suitable frame coefficients. Thresholded Landweber iterations were proposed for the solution of this variational problem. Despite its simplicity, which makes it very attractive to users, this algorithm converges slowly. In this paper we investigate methods to significantly accelerate the convergence. We introduce and analyze sequential and parallel iterative algorithms based on alternating subspace corrections for the solution of the linear inverse problem with sparsity constraints, and we prove their norm convergence to minimizers of the functional. We compare the computational cost and the behavior of these new algorithms with those of thresholded Landweber iterations.
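As a point of reference for the acceleration question, the baseline thresholded Landweber iteration (a gradient step on the discrepancy followed by componentwise soft-thresholding, i.e. ISTA) for the ℓ1-penalized problem can be sketched in a few lines. The problem sizes, weight, and iteration count below are illustrative, not from the paper.

```python
import numpy as np

# Baseline thresholded Landweber iteration for
# (1/2)||Ax - y||^2 + lam*||x||_1, the slow algorithm the paper accelerates.
def soft(z, t):
    """Componentwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(2)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)   # roughly unit-norm columns
x_true = np.zeros(n)
x_true[[5, 30, 77]] = [2.0, -3.0, 1.5]
y = A @ x_true

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2         # Landweber step size
x = np.zeros(n)
for _ in range(500):
    x = soft(x + step * A.T @ (y - A @ x), step * lam)
```

Each iteration costs two matrix-vector products; the slow part is the large number of iterations needed, which is what the subspace-correction algorithms of the paper address.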
Restoration of color images by vector valued BV functions and variational calculus
 SIAM J. Appl. Math
, 2006
Abstract

Cited by 19 (11 self)
We analyze a variational problem for the recovery of vector-valued functions and compute its numerical solution. The data of the problem are a small set of complete samples of the vector-valued function and significant but incomplete information where such samples are missing. The incomplete information is assumed to be the result of a distortion, with values in a lower dimensional manifold. For the recovery of the function we minimize a functional formed by the discrepancy with respect to the data and total variation regularization constraints. We show existence of minimizers in the space of vector-valued BV functions. For the computation of minimizers we provide a stable and efficient method. First we approximate the functional by coercive functionals on W^{1,2} in the sense of Γ-convergence. Then we compute approximate minimizers of the latter functionals by an iterative procedure that solves the PDE system of the corresponding Euler-Lagrange equations. The numerical implementation follows naturally by finite element discretization. We apply the algorithm to the restoration of color images from limited color information and gray levels where the colors are missing. The numerical experiments show that this scheme is fast and robust. The reconstruction capabilities of the model are demonstrated, also from very limited (randomly distributed) color data. Several examples are included from the real restoration problem of A. Mantegna's art frescoes in Italy.
Nonparametric Regression and Classification with Joint Sparsity Constraints
Abstract

Cited by 18 (3 self)
We propose new families of models and algorithms for high-dimensional nonparametric learning with joint sparsity constraints. Our approach is based on a regularization method that enforces common sparsity patterns across different function components in a nonparametric additive model. The algorithms employ a coordinate descent approach based on a functional soft-thresholding operator. The framework yields several new models, including multi-task sparse additive models, multi-response sparse additive models, and sparse additive multicategory logistic regression. The methods are illustrated with experiments on synthetic data and gene microarray data.
Elastic-Net Regularization in Learning Theory
, 2008
Abstract

Cited by 18 (7 self)
Within the framework of statistical learning theory we analyze in detail the so-called elastic-net regularization scheme proposed by Zou and Hastie [45] for the selection of groups of correlated variables. To investigate the statistical properties of this scheme, and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-design regression where we allow the response variable to be vector-valued and we consider prediction functions that are linear combinations of elements (features) in an infinite-dimensional dictionary. Under the assumption that the regression function admits a sparse representation on the dictionary, we prove that there exists a particular "elastic-net representation" of the regression function such that, as the number of data increases, the elastic-net estimator is consistent not only for prediction but also for variable/feature selection. Our results include finite-sample bounds and an adaptive scheme to select the regularization parameter. Moreover, using convex analysis tools, we derive an iterative thresholding algorithm for computing the elastic-net solution which differs from the optimization procedure originally proposed in [45].
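An iterative soft-thresholding scheme of the general kind described can be sketched for the elastic-net functional (1/2)||Ax - y||^2 + lam1*||x||_1 + (lam2/2)*||x||^2: the ℓ1 term yields a soft-threshold and the ℓ2 term a damping factor in the proximal step. This is a generic sketch under those assumptions, with made-up sizes and parameters, not the specific procedure derived in the paper.

```python
import numpy as np

# Generic iterative soft-thresholding sketch for the elastic-net functional
# (1/2)||Ax - y||^2 + lam1*||x||_1 + (lam2/2)*||x||_2^2. The l1 term gives a
# soft-threshold; the l2 term gives a damping factor 1/(1 + step*lam2).
def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(3)
m, n = 50, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[4, 40]] = [3.0, -2.0]
y = A @ x_true

lam1, lam2 = 0.05, 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                    # gradient of the smooth part
    x = soft(x - step * grad, step * lam1) / (1.0 + step * lam2)
```

The ℓ2 penalty stabilizes the selection among correlated variables at the cost of a slight extra shrinkage, which is the trade-off the elastic net is designed around.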
Performance Analysis for Sparse Support Recovery
, 2009
Abstract

Cited by 17 (1 self)
In this paper, the performance of estimating the common support of jointly sparse signals from their projections onto a lower-dimensional space is analyzed. Support recovery is formulated as a multiple-hypothesis testing problem, and both upper and lower bounds on the probability of error are derived for general measurement matrices, by using the Chernoff bound and Fano's inequality, respectively. The form of the upper bound shows that the performance is determined by a single quantity that measures the incoherence of the measurement matrix, while the lower bound reveals the importance of the total measurement gain. To demonstrate its immediate applicability, the lower bound is applied to derive the minimal number of samples needed for accurate direction-of-arrival (DOA) estimation for an algorithm based on sparse representation. When applied to Gaussian measurement ensembles, these bounds give necessary and sufficient conditions to guarantee a vanishing probability of error for the majority of realizations of the measurement matrix. Our results offer surprising insights into sparse signal reconstruction from projections. For example, as far as support recovery is concerned, the well-known bound in compressive sensing is generally not sufficient if the Gaussian ensemble is used. Our study provides an alternative performance measure, one that is natural and important in practice, for signal recovery in compressive sensing as well as other application areas that take advantage of signal sparsity.