Theoretical results on sparse representations of multiple-measurement vectors (2006)

by J. Chen, X. Huo
Venue: IEEE Trans. Signal Processing

Results 1 - 10 of 148

Robust Recovery of Signals From a Structured Union of Subspaces

by Yonina C. Eldar, Moshe Mishali, 2008
"... Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structu ..."
Abstract - Cited by 221 (47 self)
Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose non-zero elements appear in fixed blocks. We then propose a mixed ℓ2/ℓ1 program for block sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.

Citation Context

... of unknown vectors that share a joint sparsity pattern. MMV recovery algorithms were studied in [19], [27]–[30]. Equivalence results based on mutual coherence for a mixed ℓp/ℓ1 program were derived in [28]. These results turn out to be the same as those obtained from a single measurement problem. This is in contrast to the fact that in practice, MMV methods tend to outperform algorithms that treat each ...
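
The mixed ℓ2/ℓ1 program mentioned in this abstract has a compact convex formulation. Below is a minimal sketch, assuming equal-length blocks and using the cvxpy modeling package; the dimensions, block sizes, and variable names are illustrative assumptions, not taken from the paper.

```python
# Block-sparse recovery via a mixed l2/l1 program: minimize the sum of the
# l2 norms of the blocks of x subject to the measurements y = A x.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, d, nblocks = 40, 100, 20          # measurements, ambient dimension, blocks
blk = d // nblocks                   # equal block length (an assumption)

A = rng.standard_normal((m, d)) / np.sqrt(m)

# Ground truth: 2 active blocks out of 20, i.e., block 2-sparse.
x0 = np.zeros(d)
for b in rng.choice(nblocks, size=2, replace=False):
    x0[b * blk:(b + 1) * blk] = rng.standard_normal(blk)
y = A @ x0

x = cp.Variable(d)
block_norms = [cp.norm(x[b * blk:(b + 1) * blk], 2) for b in range(nblocks)]
prob = cp.Problem(cp.Minimize(sum(block_norms)), [A @ x == y])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x0))
```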

Block-sparse signals: Uncertainty relations and efficient recovery

by Yonina C. Eldar, Patrick Kuppinger, Helmut Bölcskei - IEEE Trans. Signal Process., 2010
"... We consider efficient methods for the recovery of block-sparse signals — i.e., sparse signals that have nonzero entries occurring in clusters—from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we ..."
Abstract - Cited by 161 (17 self)
We consider efficient methods for the recovery of block-sparse signals, i.e., sparse signals that have nonzero entries occurring in clusters, from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed ℓ2/ℓ1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
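
A sketch of the block version of orthogonal matching pursuit the abstract describes, under the assumption of consecutive equal-length blocks; the partition, dimensions, and function name are illustrative, not the paper's code.

```python
# Block orthogonal matching pursuit: at each step select the block whose
# columns correlate most strongly with the residual, then refit by least
# squares on all blocks chosen so far.
import numpy as np

def block_omp(A, y, blk, k):
    """Recover a block k-sparse x from y = A x, with columns of A grouped
    into consecutive blocks of length blk."""
    d = A.shape[1]
    nblocks = d // blk
    chosen, r = [], y.copy()
    for _ in range(k):
        scores = [np.linalg.norm(A[:, b * blk:(b + 1) * blk].T @ r)
                  for b in range(nblocks)]          # block-correlation scores
        chosen.append(int(np.argmax(scores)))
        cols = np.concatenate([np.arange(b * blk, (b + 1) * blk)
                               for b in chosen])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        r = y - A[:, cols] @ coef                   # new residual
    x = np.zeros(d)
    x[cols] = coef
    return x
```

When the block-coherence condition from the paper holds, each iteration selects a correct block, which is why k steps suffice.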

From theory to practice: Sub-Nyquist sampling of sparse wideband analog signals

by Moshe Mishali, Yonina C. Eldar - IEEE J. Sel. Topics Signal Process., 2010
"... Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. ..."
Abstract - Cited by 153 (55 self)
Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then low-pass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.
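
A toy discrete-time rendering of the signal path the abstract describes (multiply by periodic waveforms, low-pass filter, sample uniformly at a low rate). The waveforms, rates, and moving-average filter below are crude illustrative assumptions, not the converter's actual design.

```python
# Modulated-wideband-converter-style front end, simulated on a dense grid:
# each channel mixes the input with a periodic +/-1 sequence, low-pass
# filters, and decimates.
import numpy as np

rng = np.random.default_rng(1)
N, period, decim, channels = 4096, 64, 64, 8

t = np.arange(N)
# Stand-in sparse multiband input: two narrow pulses on distinct carriers.
x = (np.sinc((t - 1000) / 40) * np.cos(0.7 * np.pi * t)
     + np.sinc((t - 3000) / 40) * np.cos(0.2 * np.pi * t))

samples = []
for _ in range(channels):
    p = rng.choice([-1.0, 1.0], size=period)        # periodic mixing sequence
    mixed = x * np.tile(p, N // period)
    lp = np.convolve(mixed, np.ones(period) / period, mode="same")  # crude LPF
    samples.append(lp[::decim])                     # low-rate uniform samples
Y = np.array(samples)       # channels x samples: the data handed to recovery
```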

Sampling theorems for signals from the union of finite-dimensional linear subspaces

by Thomas Blumensath, Mike E. Davies - IEEE Trans. on Inform. Theory, 2009
"... Compressed sensing is an emerging signal acquisition technique that enables signals to be sampled well below the Nyquist rate, given that the signal has a sparse representation in an orthonormal basis. In fact, sparsity in an orthonormal basis is only one possible signal model that allows for sampli ..."
Abstract - Cited by 110 (14 self)
Compressed sensing is an emerging signal acquisition technique that enables signals to be sampled well below the Nyquist rate, given that the signal has a sparse representation in an orthonormal basis. In fact, sparsity in an orthonormal basis is only one possible signal model that allows for sampling strategies below the Nyquist rate. In this paper we consider a more general signal model and assume signals that live on or close to the union of linear subspaces of low dimension. We present sampling theorems for this model that are in the same spirit as the Nyquist-Shannon sampling theorem in that they connect the number of required samples to certain model parameters. Contrary to the Nyquist-Shannon sampling theorem, which gives a necessary and sufficient condition for the number of required samples as well as a simple linear algorithm for signal reconstruction, the model studied here is more complex. We therefore concentrate on two aspects of the signal model, the existence of one-to-one maps to lower dimensional observation spaces and the smoothness of the inverse map. We show that almost all linear maps are one-to-one when the observation space is at least of the same dimension as the largest dimension of the convex hull of the union of any two subspaces in the model. However, we also show that in order for the inverse map to have certain smoothness properties such as a given finite Lipschitz constant, the required observation dimension necessarily depends logarithmically on the number of subspaces in the model.

Citation Context

...k non-zero elements. • The set of k-sparse signals in which the non-zero elements form a tree, as considered in [18]. • The simultaneous sparse approximation problem [19], [20], [21], [22], [23], where a number of observations y_i is assumed to follow the model y_i = Φ_i x_i, where the x_i are constrained to have the same non-zero elements. • The set consisting of the union of statisti...
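
The "almost all linear maps are one-to-one" claim can be checked numerically for a pair of subspaces: a difference of two points from the union always lies in the sum of the two subspaces, so injectivity on the union holds exactly when the measurement matrix keeps full rank on that sum. A small sketch with illustrative dimensions:

```python
# Injectivity of a random linear map on the union of two subspaces: the map
# is one-to-one on the union iff A restricted to span(U1) + span(U2) has
# full column rank.
import numpy as np

rng = np.random.default_rng(2)
n, d1, d2 = 50, 3, 4
U1 = rng.standard_normal((n, d1))          # basis of the first subspace
U2 = rng.standard_normal((n, d2))          # basis of the second subspace
S = np.hstack([U1, U2])                    # spans the sum of the subspaces
dim_sum = np.linalg.matrix_rank(S)         # 7 almost surely

for m in (dim_sum - 1, dim_sum):
    A = rng.standard_normal((m, n))
    injective = np.linalg.matrix_rank(A @ S) == dim_sum
    print(f"m = {m}: injective on the union? {injective}")
# m = dim_sum succeeds for almost every A; m = dim_sum - 1 never can,
# since A @ S then has at most m < dim_sum independent columns.
```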

Blind Multiband Signal Reconstruction: Compressed Sensing for Analog Signals

by Moshe Mishali, Yonina C. Eldar
"... We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multi-coset sampling. Prior recovery methods for this sampling strategy either require knowledge of band locations or impose stric ..."
Abstract - Cited by 109 (60 self)
We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multi-coset sampling. Prior recovery methods for this sampling strategy either require knowledge of band locations or impose strict limitations on the possible spectral supports. In this paper, only the number of bands and their widths are assumed without any other limitations on the support. We describe how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples. To recover the signal, the continuous reconstruction is replaced by a single finite-dimensional problem without the need for discretization. The resulting problem is studied within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.

Citation Context

... block requires finding a sparsest solution matrix, which is an NP-hard problem [12]. Several sub-optimal efficient methods have been developed for this problem in the compressed sensing (CS) literature [15], [16]. In our algorithms, any of these techniques can be used. Numerical experiments on random constructions of multi-band signals show that both SBR4 and SBR2 maintain a satisfactory exact recovery r...
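
For reference, the multi-coset sampling the abstract assumes is easy to state in code: keep p of the L cosets of the Nyquist grid, for an average rate of p/L times Nyquist. The pattern below is an arbitrary illustrative choice, not a pattern from the paper.

```python
# Multi-coset sampling: from each length-L period of the Nyquist grid,
# retain only the samples at offsets in `cosets`, producing p low-rate
# sample streams.
import numpy as np

L, p = 16, 6
cosets = np.array([0, 3, 5, 7, 11, 13])   # assumed pattern, |cosets| = p

def multicoset_sample(x, L, cosets):
    """Return the streams x[n*L + c], one per coset c (shape p x n_periods)."""
    n_periods = len(x) // L
    x = x[:n_periods * L].reshape(n_periods, L)
    return x[:, cosets].T

x = np.random.default_rng(3).standard_normal(1600)  # stand-in Nyquist samples
streams = multicoset_sample(x, L, cosets)
print(streams.shape)                      # (6, 100): average rate 6/16 of Nyquist
```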

Structured compressed sensing: From theory to applications

by Marco F. Duarte, Yonina C. Eldar - IEEE Trans. Signal Process., 2011
"... Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard ..."
Abstract - Cited by 104 (16 self)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference for researchers, one that attempts to put some of the existing ideas in the perspective of practical applications.

Citation Context

... necessary and sufficient condition for the measurements to uniquely determine the jointly sparse matrix X is that |supp(X)| < (spark(A) + rank(Y) − 1)/2 (31). The sufficiency result was initially shown for a special case in [111]. As shown in [110], [112], we can replace rank(Y) by rank(X) in (31). The sufficient direction of this condition was shown in [113] to hold even in the case where there are infinitely many measurement vectors. A direct consequence of Theorem 18 is that ...
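
Condition (31) can be verified by brute force on a toy instance; the spark is only computable for tiny matrices, and all sizes here are illustrative.

```python
# Check the MMV uniqueness condition |supp(X)| < (spark(A) + rank(Y) - 1)/2
# on a small random instance.
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (brute force)."""
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k
    return n + 1

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 6))
X = np.zeros((6, 3))
X[[1, 4], :] = rng.standard_normal((2, 3))    # jointly 2-sparse, 3 channels
Y = A @ X

bound = (spark(A) + np.linalg.matrix_rank(Y) - 1) / 2   # (5 + 2 - 1)/2 = 3
print("uniqueness guaranteed:", 2 < bound)
```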

Average Case Analysis of Multichannel Sparse Recovery Using Convex Relaxation

by Yonina C. Eldar, Holger Rauhut
"... In this paper, we consider recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relax ..."
Abstract - Cited by 102 (22 self)
In this paper, we consider recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relaxation based on a mixed matrix norm. Typically, worst-case analysis is carried out in order to analyze conditions under which the algorithms are able to recover any jointly sparse set of vectors. However, such an approach is not able to provide insights into why joint sparse recovery is superior to applying standard sparse reconstruction methods to each channel individually. Previous work considered an average case analysis of thresholding and SOMP by imposing a probability model on the measured signals. In this paper, our main focus is on analysis of convex relaxation techniques. In particular, we focus on the mixed ℓ2,1 approach to multichannel recovery. We show that under a very mild condition on the sparsity and on the dictionary characteristics, measured for example by the coherence, the probability of recovery failure decays exponentially in the number of channels. This demonstrates that most of the time, multichannel sparse recovery is indeed superior to single channel methods. Our probability bounds are valid and meaningful even for a small number of signals. Using the tools we develop to analyze the convex relaxation method, we also tighten the previous bounds for thresholding and SOMP.

Citation Context

... behavior. The BP principle as well as greedy approaches have been extended to the multichannel setup where the signal consists of several channels with joint sparsity support [47], [45], [22], [13], [11], [31], [20], [21]. In [2] the buzzword distributed compressed sensing was coined for this setup. An alternative approach is to first reduce the problem to a single channel problem that preserves the ...
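
The mixed ℓ2,1 relaxation analyzed in this paper minimizes the sum of the ℓ2 norms of the rows of the unknown matrix, so entire rows are encouraged to vanish jointly across channels. A minimal sketch using cvxpy; the data and dimensions are illustrative assumptions.

```python
# Multichannel recovery via the mixed l_{2,1} norm: minimize the sum of
# row-wise l2 norms of X subject to A X = Y.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
m, n, L = 20, 60, 8                 # measurements, dictionary size, channels
A = rng.standard_normal((m, n)) / np.sqrt(m)
X0 = np.zeros((n, L))
X0[rng.choice(n, size=4, replace=False), :] = rng.standard_normal((4, L))
Y = A @ X0

X = cp.Variable((n, L))
prob = cp.Problem(cp.Minimize(cp.sum(cp.norm(X, 2, axis=1))),
                  [A @ X == Y])
prob.solve()
print("recovery error:", np.linalg.norm(X.value - X0))
```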

Reduce and Boost: Recovering Arbitrary Sets of Jointly Sparse Vectors

by Moshe Mishali, Yonina C. Eldar, 2008
"... The rapid developing area of compressed sensing suggests that a sparse vector lying in a high dimensional space can be accurately and efficiently recovered from only a small set of non-adaptive linear measurements, under appropriate conditions on the measurement matrix. The vector model has been ext ..."
Abstract - Cited by 100 (41 self)
The rapidly developing area of compressed sensing suggests that a sparse vector lying in a high-dimensional space can be accurately and efficiently recovered from only a small set of non-adaptive linear measurements, under appropriate conditions on the measurement matrix. The vector model has been extended both theoretically and practically to a finite set of sparse vectors sharing a common sparsity pattern. In this paper, we treat a broader framework in which the goal is to recover a possibly infinite set of jointly sparse vectors. Extending existing algorithms to this model is difficult due to the infinite structure of the sparse vector set. Instead, we prove that the entire infinite set of sparse vectors can be recovered by solving a single, reduced-size finite-dimensional problem, corresponding to recovery of a finite set of sparse vectors. We then show that the problem can be further reduced to the basic model of a single sparse vector by randomly combining the measurements. Our approach is exact for both countable and uncountable sets as it does not rely on discretization or heuristic techniques. To efficiently find the single sparse vector produced by the last reduction step, we suggest an empirical boosting strategy that improves the recovery ability of any given sub-optimal method for recovering a sparse vector. Numerical experiments on random data demonstrate that when applied to infinite sets our strategy outperforms discretization techniques in terms of both run time and empirical recovery rate. In the finite model, our boosting algorithm has fast run time and much higher recovery rate than known popular methods.
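
The final reduction step, collapsing the finite MMV system to a single sparse vector by randomly combining the measurements, is simple to sketch. Plain OMP stands in here for "any given sub-optimal method"; the names and dimensions are illustrative, not the paper's code.

```python
# Reduce A X = Y to a single-vector problem A x = Y a for a random a, then
# recover the shared support with a standard single-vector solver (OMP).
import numpy as np

def omp(A, y, k):
    """Basic orthogonal matching pursuit for a k-sparse vector."""
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    return sorted(support)

rng = np.random.default_rng(6)
m, n, L, k = 20, 60, 8, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
idx = rng.choice(n, size=k, replace=False)
X0 = np.zeros((n, L))
X0[idx, :] = rng.standard_normal((k, L))
Y = A @ X0

a = rng.standard_normal(L)           # random combination of the channels
support = omp(A, Y @ a, k)           # solve the reduced single-vector problem
print("support recovered:", support == sorted(int(i) for i in idx))
```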

Atoms of All Channels, Unite! Average Case Analysis of Multi-Channel Sparse Recovery Using Greedy Algorithms

by Rémi Gribonval, Holger Rauhut, Karin Schnass, Pierre Vandergheynst, 2007
"... ..."
Abstract - Cited by 83 (12 self)
Abstract not found

Compressed Sensing of Analog Signals in Shift-Invariant Spaces

by Yonina C. Eldar, 2009
"... A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that on ..."
Abstract - Cited by 74 (41 self)
A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces, generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active; however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows much of the recent literature on CS to be extended to the analog domain.

Citation Context

...ed as a union of subspaces, where each subspace is spanned by k columns of A. A sufficient condition for the uniqueness of a k-sparse solution to the equations is that A has a Kruskal-rank of at least 2k [32], [40]. The Kruskal-rank is the maximal number q such that every set of q columns of A is linearly independent [41]. This unique x can be recovered by solving the optimization problem min ‖x‖_0 subject to y = Ax [27] (20), where the pseudo-nor...
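
Problem (20) itself can be solved by exhaustive search on toy instances, which makes the 2k Kruskal-rank guarantee easy to observe; the sizes below are illustrative.

```python
# Exhaustive l0 minimization: among all supports of size <= kmax, return
# the first x with A x = y exactly.
import numpy as np
from itertools import combinations

def l0_solve(A, y, kmax, tol=1e-8):
    n = A.shape[1]
    for k in range(1, kmax + 1):
        for cols in combinations(range(n), k):
            cidx = list(cols)
            coef, *_ = np.linalg.lstsq(A[:, cidx], y, rcond=None)
            if np.linalg.norm(A[:, cidx] @ coef - y) < tol:
                x = np.zeros(n)
                x[cidx] = coef
                return x
    return None

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 8))     # generic A: Kruskal-rank 4 almost surely
x0 = np.zeros(8)
x0[[2, 6]] = [1.0, -2.0]            # 2-sparse; 2k = 4 <= Kruskal-rank: unique
print(np.allclose(l0_solve(A, A @ x0, 2), x0))
```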
