Results 11–20 of 232
Robust recovery of signals from a union of subspaces
 IEEE Trans. Inform. Theory
, 2008
Abstract

Cited by 44 (14 self)
Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees a unique signal consistent with the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x is assumed to lie in a union of subspaces. An example is the case in which x is a finite length vector that is sparse in a given basis. In this paper we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a finite union of finite dimensional spaces and the samples are modelled as inner products with an arbitrary set of sampling functions. We first develop conditions under which unique and stable recovery of x is possible, albeit with algorithms that have combinatorial complexity. To derive an efficient and robust recovery algorithm, we then show that our problem can be formulated as that of recovering a block sparse vector, namely a vector whose nonzero elements appear in fixed blocks. To solve this problem, we suggest minimizing a mixed ℓ2/ℓ1 norm subject to the measurement equations. We then develop equivalence conditions under which the proposed convex algorithm is guaranteed to recover the original signal. These results rely on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. A special case of the proposed framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Specializing our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
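The mixed ℓ2/ℓ1 minimization described above can be illustrated with a minimal proximal-gradient (ISTA-style) sketch. This is a generic group-soft-thresholding scheme for intuition only, not the paper's algorithm or its equivalence analysis; all dimensions, block partitions, and parameter values are illustrative.

```python
import numpy as np

def block_soft_threshold(v, blocks, t):
    """Proximal operator of t * sum_b ||v[b]||_2 (the mixed l2/l1 penalty):
    each block is either shrunk toward zero or zeroed entirely."""
    out = np.zeros_like(v)
    for b in blocks:
        norm = np.linalg.norm(v[b])
        if norm > t:
            out[b] = (1.0 - t / norm) * v[b]
    return out

def block_sparse_recover(A, y, blocks, lam=0.01, n_iter=500):
    """Proximal gradient (ISTA) for 0.5*||y - A x||^2 + lam * sum_b ||x[b]||_2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # inverse Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = block_soft_threshold(x - step * (A.T @ (A @ x - y)), blocks, step * lam)
    return x

# Illustrative use: one active block of four entries out of three blocks.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 12))
blocks = [list(range(i, i + 4)) for i in range(0, 12, 4)]
x0 = np.zeros(12)
x0[4:8] = [1.0, -2.0, 0.5, 1.5]
x_hat = block_sparse_recover(A, A @ x0, blocks)
```

The proximal operator zeroes entire blocks whose ℓ2 norm falls below the threshold, which is exactly the mechanism that promotes block sparsity rather than entrywise sparsity.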
Sparse Recovery from Combined Fusion Frame Measurements
 IEEE Trans. Inform. Theory
Abstract

Cited by 43 (12 self)
Sparse representations have emerged as a powerful tool in signal and information processing, culminating in the success of new acquisition and processing techniques such as Compressed Sensing (CS). Fusion frames are rich signal representation methods that use collections of subspaces, instead of single vectors, to represent signals. This work combines these fields to introduce a new sparsity model for fusion frames. Signals that are sparse under the new model can be compressively sampled and uniquely reconstructed in ways similar to sparse signals using standard CS. The combination provides a promising new set of mathematical tools and signal models useful in a variety of applications. Under the new model, a sparse signal has energy in very few of the subspaces of the fusion frame, although it need not be sparse within each of the subspaces it occupies. This sparsity model is captured using a mixed ℓ1/ℓ2 norm for fusion frames. A signal sparse in a fusion frame can be sampled using very few random projections and exactly reconstructed using a convex optimization that minimizes this mixed ℓ1/ℓ2 norm. The provided sampling conditions generalize the coherence and RIP conditions used in standard CS theory, and are shown to be sufficient to guarantee sparse recovery of any signal sparse in our model. Moreover, an average-case analysis is provided using a probability model on the sparse signal, showing that under very mild conditions the probability of recovery failure decays exponentially with increasing dimension of the subspaces.
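The subspace-level sparsity model above can be checked numerically. The sketch below uses hypothetical, mutually orthogonal coordinate subspaces chosen purely for clarity: the signal's fusion-frame energy profile is sparse across subspaces while dense within the one subspace it occupies.

```python
import numpy as np

# Three mutually orthogonal 2-D subspaces of R^6, given by orthonormal bases.
I6 = np.eye(6)
bases = [I6[:, 0:2], I6[:, 2:4], I6[:, 4:6]]

# A signal confined to the second subspace: sparse at the subspace level,
# but with no sparsity inside the subspace it occupies.
x = bases[1] @ np.array([1.0, -2.0])

# Per-subspace projection norms; their sum is the mixed l1/l2 objective.
energies = [np.linalg.norm(U.T @ x) for U in bases]
mixed_norm = sum(energies)
```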
Linear Regression With a Sparse Parameter Vector
 in IEEE
, 2007
Abstract

Cited by 40 (6 self)
We consider linear regression under a model where the parameter vector is known to be sparse. Using a Bayesian framework, we derive a computationally efficient approximation to the minimum mean-square error (MMSE) estimate of the parameter vector. The performance of the so-obtained estimate is illustrated via numerical examples.
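The quantity the paper approximates can be written out exactly for very small problems. The sketch below computes the exact MMSE estimate by enumerating all 2^n supports under an assumed Bernoulli-Gaussian prior (active entries Gaussian, each entry active independently with probability p); this brute force is only for intuition, since its cost is exponential in n, whereas the paper's contribution is an efficient approximation. The prior parameters here are illustrative assumptions.

```python
import itertools
import numpy as np

def sparse_mmse(A, y, p=0.2, sig_x=1.0, sig_n=0.1):
    """Exact MMSE estimate of x in y = A x + noise under an assumed
    Bernoulli-Gaussian prior: each entry is active with probability p,
    active entries are N(0, sig_x^2), noise is N(0, sig_n^2).
    Enumerates all 2^n supports, so only usable for very small n."""
    m, n = A.shape
    log_w, means = [], []
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            S = list(S)
            C = sig_n ** 2 * np.eye(m)  # covariance of y given support S
            mean = np.zeros(n)
            if S:
                AS = A[:, S]
                C = C + sig_x ** 2 * (AS @ AS.T)
            Ci_y = np.linalg.solve(C, y)
            if S:
                mean[S] = sig_x ** 2 * (AS.T @ Ci_y)  # E[x | y, S]
            _, logdet = np.linalg.slogdet(C)
            log_w.append(-0.5 * (y @ Ci_y + logdet)
                         + len(S) * np.log(p) + (n - len(S)) * np.log(1.0 - p))
            means.append(mean)
    log_w = np.asarray(log_w)
    w = np.exp(log_w - log_w.max())  # numerically stable posterior weights
    return (w / w.sum()) @ np.asarray(means)
```

The MMSE estimate is the posterior-weighted average of the per-support conditional means, which is why exact computation requires visiting every support.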
Spectral Compressive Sensing
, 2010
Abstract

Cited by 39 (5 self)
Compressive sensing (CS) is a new approach to simultaneous sensing and compression of sparse and compressible signals. A great many applications feature smooth or modulated signals that can be modeled as a linear combination of a small number of sinusoids; such signals are sparse in the frequency domain. In practical applications, the standard frequency-domain signal representation is the discrete Fourier transform (DFT). Unfortunately, the DFT coefficients of a frequency-sparse signal are themselves sparse only in the contrived case where the sinusoid frequencies are integer multiples of the DFT's fundamental frequency. As a result, practical DFT-based CS acquisition and recovery of smooth signals does not perform nearly as well as one might expect. In this paper, we develop a new spectral compressive sensing (SCS) theory for general frequency-sparse signals. The key ingredients are an oversampled DFT frame, a signal model that inhibits closely spaced sinusoids, and classical sinusoid parameter estimation algorithms from the field of spectrum estimation. Using periodogram- and eigenanalysis-based spectrum estimates (e.g., MUSIC), our new SCS algorithms significantly outperform the current state-of-the-art CS algorithms while providing provable bounds on the number of measurements required for stable recovery.
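The off-grid problem and the oversampled-DFT remedy can be seen in a few lines. This is a minimal periodogram sketch, not the paper's SCS recovery algorithm: a zero-padded FFT evaluates the DFT on a finer grid, so the peak localizes a frequency that falls between the bins of the plain length-n DFT. The frequency and oversampling factor are illustrative.

```python
import numpy as np

def periodogram_peak(x, oversample=16):
    """Locate the dominant frequency (in cycles/sample) of x on an
    oversampled DFT grid, i.e. a zero-padded FFT periodogram."""
    n = len(x)
    X = np.fft.rfft(x, n=oversample * n)
    return int(np.argmax(np.abs(X))) / (oversample * n)

n = 64
f_true = 0.1234  # deliberately not a multiple of 1/64, so the plain DFT is not sparse
x = np.cos(2 * np.pi * f_true * np.arange(n))
f_hat = periodogram_peak(x)
```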
Sample eigenvalue based detection of high-dimensional signals in white noise using relatively few samples
, 2007
Kronecker Compressive Sensing
Abstract

Cited by 38 (2 self)
Compressive sensing (CS) is an emerging approach for acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1D signals and 2D images, many important applications involve signals that are multidimensional; in this case, CS works best with representations that encapsulate the structure of such signals in every dimension. We propose the use of Kronecker product matrices in CS for two purposes. First, we can use such matrices as sparsifying bases that jointly model the different types of structure present in the signal. Second, the measurement matrices used in distributed settings can be easily expressed as Kronecker product matrices. The Kronecker product formulation in these two settings enables the derivation of analytical bounds for sparse approximation of multidimensional signals and CS recovery performance as well as a means to evaluate novel distributed measurement schemes.
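The separable structure that makes Kronecker product matrices attractive rests on the identity (A ⊗ B) vec(X) = vec(B X Aᵀ): applying one operator per dimension is equivalent to a single large Kronecker operator on the vectorized signal. A minimal numerical check, with arbitrary shapes chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))  # operator acting along one dimension
B = rng.standard_normal((5, 6))  # operator acting along the other dimension
X = rng.standard_normal((6, 4))  # a 2-D signal, stored as a matrix

# Applying the operators separably, dimension by dimension ...
rhs = (B @ X @ A.T).flatten(order="F")
# ... equals applying the single Kronecker operator to vec(X)
# (column-major flattening matches the mathematical vec convention).
lhs = np.kron(A, B) @ X.flatten(order="F")
```

This is why distributed measurement matrices that act independently per dimension can be analyzed as one global Kronecker product matrix.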
Distributed Target Localization via Spatial Sparsity
 in European Signal Processing Conference (EUSIPCO)
, 2008
Abstract

Cited by 35 (2 self)
We propose an approximation framework for distributed target localization in sensor networks. We represent the unknown target positions on a location grid as a sparse vector, whose support encodes the multiple target locations. The location vector is linearly related to multiple sensor measurements through a sensing matrix, which can be locally estimated at each sensor. We show that we can successfully determine multiple target locations by using linear dimensionality-reducing projections of sensor measurements. The overall communication bandwidth requirement per sensor is logarithmic in the number of grid points and linear in the number of targets, ameliorating the communication requirements. Simulation results demonstrate the performance of the proposed framework.
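The grid-sparse formulation above can be sketched with a generic greedy solver. The code below uses standard orthogonal matching pursuit, not necessarily the recovery method used in the paper, and the sensing matrix, projection matrix, grid size, and target positions are all hypothetical placeholders.

```python
import numpy as np

def omp(Phi, y, k):
    """Generic orthogonal matching pursuit: greedily recover a k-sparse x
    from y = Phi x by repeatedly selecting the best-correlated column."""
    support, residual = [], y.astype(float)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Hypothetical setting: 100 grid points, 2 targets, a known 30 x 100
# sensor-to-grid matrix, and a 12 x 30 dimensionality-reducing projection.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))
P = rng.standard_normal((12, 30))
loc = np.zeros(100)
loc[[17, 62]] = 1.0                   # target indicator on the location grid
loc_hat = omp(P @ A, P @ A @ loc, 2)  # recovery from the projected measurements
```

The point of the projection P is that only its low-dimensional output needs to be communicated, which is where the logarithmic bandwidth scaling comes from.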
A MULTIDIMENSIONAL SHRINKAGE-THRESHOLDING OPERATOR
Abstract

Cited by 32 (1 self)
The scalar shrinkage-thresholding operator (SSTO) is a key ingredient of many modern statistical signal processing algorithms, including sparse inverse problem solutions, wavelet denoising, and JPEG2000 image compression. In these applications, it is customary to select the threshold of the operator by solving a scalar sparsity-penalized quadratic optimization. In this work, we present a natural multidimensional extension of the scalar shrinkage-thresholding operator. As in the scalar case, the threshold is determined by the minimization of a convex quadratic form plus a Euclidean penalty; here, however, the optimization is performed over a domain of dimension N ≥ 1. The solution to this convex optimization problem is called the multidimensional shrinkage-threshold operator (MSTO). The MSTO reduces to the standard SSTO in the special case N = 1. In the general case N > 1, the optimal MSTO threshold can be found by a simple convex line search. We present three illustrative applications of the MSTO in the context of nonlinear regression: ℓ2-penalized linear regression, Group LASSO linear regression, and Group LASSO logistic regression.
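For the special case in which the quadratic form is 0.5·‖x − v‖², the operator described above has a closed form and no line search is needed; the general quadratic case requires the convex line search mentioned in the abstract. A minimal sketch of the identity-quadratic case:

```python
import numpy as np

def msto(v, t):
    """Multidimensional shrinkage-thresholding for an identity quadratic:
    argmin_x 0.5 * ||x - v||^2 + t * ||x||_2.
    For N = 1 this is exactly the scalar soft-threshold (SSTO)."""
    v = np.asarray(v, dtype=float)
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)  # the whole vector is thresholded to zero
    return (1.0 - t / norm) * v  # otherwise it is shrunk radially
```

Thresholding the whole vector at once, rather than each coordinate, is what makes this the natural building block for Group LASSO penalties.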
Signal reconstruction in sensor arrays using sparse representations
, 2005
Abstract

Cited by 29 (0 self)
We propose a technique for multi-sensor signal reconstruction based on the assumption that the source signals are spatially sparse and also have a sparse representation in a chosen time-domain dictionary. This leads to a large-scale convex optimization problem involving combined ℓ1/ℓ2-norm minimization. The optimization is carried out by the truncated Newton method, using preconditioned conjugate gradients in the inner iterations. A by-product of the reconstruction is an estimate of the source locations.