Results 1–10 of 97
Compressed sensing
, 2004
Cited by 3625 (22 self)
We study the notion of Compressed Sensing (CS) as put forward in [14] and related work [20, 3, 4]. The basic idea behind CS is that a signal or image, unknown but supposed to be compressible by a known transform (e.g., wavelet or Fourier), can be subjected to fewer measurements than the nominal number of pixels, and yet be accurately reconstructed. The samples are nonadaptive and measure ‘random’ linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with measured data and having the smallest possible ℓ1 norm. We perform a series of numerical experiments which validate in general terms the basic idea proposed in [14, 3, 5], in the favorable case where the transform coefficients are sparse in the strong sense that the vast majority are zero. We then consider a range of less favorable cases, in which the object has all coefficients nonzero, but the coefficients obey an ℓp bound, for some p ∈ (0, 1]. These experiments show that the basic inequalities behind the CS method seem to involve reasonable constants. We next consider synthetic examples modelling problems in spectroscopy and image processing.
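The core reconstruction idea above, picking the ℓ1-minimal solution among all solutions consistent with the measurements, can be seen in a tiny toy problem. The following pure-Python sketch is my illustration (the system x1 + 3·x2 = 3 is made up, not from the paper): minimizing the ℓ2 norm over the solution line gives a dense vector, while minimizing the ℓ1 norm gives the sparse one.

```python
# Toy illustration (not from the paper): among all solutions of the
# underdetermined system  x1 + 3*x2 = 3,  the minimum-l2 solution is
# dense, while the minimum-l1 solution is the sparse point (0, 1).

def solutions(step=0.001):
    """Parametrize the solution line: x1 = 3 - 3*t, x2 = t."""
    t = -1.0
    while t <= 2.0:
        yield (3 - 3 * t, t)
        t += step

l1_best = min(solutions(), key=lambda x: abs(x[0]) + abs(x[1]))
l2_best = min(solutions(), key=lambda x: x[0] ** 2 + x[1] ** 2)

print(l1_best)  # close to (0, 1): sparse
print(l2_best)  # close to (0.3, 0.9): dense
```

The grid search stands in for the linear program one would actually solve; the point is only that the ℓ1 objective prefers a solution lying on a coordinate axis.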
Sparse solutions to linear inverse problems with multiple measurement vectors
 IEEE Trans. Signal Processing
, 2005
Cited by 272 (22 self)
Abstract—We address the problem of finding sparse solutions to an underdetermined system of equations when there are multiple measurement vectors having the same, but unknown, sparsity structure. The single-measurement sparse solution problem has been extensively studied in the past. Although known to be NP-hard, many single-measurement suboptimal algorithms have been formulated that have found utility in many different applications. Here, we consider in depth the extension of two classes of algorithms, Matching Pursuit (MP) and FOCal Underdetermined System Solver (FOCUSS), to the multiple measurement case so that they may be used in applications such as neuromagnetic imaging, where multiple measurement vectors are available and solutions with a common sparsity structure must be computed. Cost functions appropriate to the multiple measurement problem are developed, and algorithms are derived based on their minimization. A simulation study is conducted on a test-case dictionary to show how the utilization of more than one measurement vector improves the performance of the MP and FOCUSS classes of algorithms, and their performances are compared.
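The key step the MMV extensions share is the atom-selection rule: score each dictionary atom by its aggregate correlation across all measurement vectors, not just one. The following pure-Python sketch is illustrative only (the dictionary and data are mine, not the paper's), showing that an atom explaining every measurement vector wins the joint score even when it is not the best match for any single vector.

```python
# Sketch (illustrative, not the paper's code) of the MMV selection idea:
# rank dictionary atoms by total squared correlation across ALL
# measurement vectors, so a common sparsity structure is exploited.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Dictionary with 4 unit-norm atoms in R^3 (columns listed as rows).
atoms = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.6, 0.8, 0.0],
]

# Three measurement vectors, all scalar multiples of atom 3.
measurements = [
    [0.6, 0.8, 0.0],
    [1.2, 1.6, 0.0],
    [-0.6, -0.8, 0.0],
]

# Single-vector MP scores atoms against one measurement; the joint rule
# sums squared correlations over every measurement vector.
scores = [sum(dot(a, y) ** 2 for y in measurements) for a in atoms]
best = max(range(len(atoms)), key=lambda j: scores[j])
print(best)  # atom 3: it explains all measurement vectors at once
```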
Sparsest solutions of underdetermined linear systems via ℓq-minimization
Cited by 192 (11 self)
We present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal ℓq-quasinorm is also the sparsest one. This generalizes, and slightly improves, a similar result for the ℓ1-norm. We then introduce a simple numerical scheme to compute solutions with minimal ℓq-quasinorm, and we study its convergence. Finally, we display the results of some experiments which indicate that the ℓq-method performs better than other available methods.
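Schemes of this kind are typically in the spirit of iteratively reweighted least squares. The sketch below is an assumed illustration, not the paper's algorithm: for a single-row system x1 + 2·x2 = 2 (whose sparsest solution is (0, 1)), the weighted least-squares step has a closed form, so no linear-algebra library is needed.

```python
# Hedged sketch of an iteratively-reweighted-least-squares style scheme
# for minimizing the lq quasinorm (0 < q < 1) subject to A x = y.
# A has one row, so each reweighted step is solvable in closed form:
# x = W^-1 A^T (A W^-1 A^T)^-1 y, with weights from the current iterate.

a = [1.0, 2.0]   # single-row matrix A
y = 2.0          # measurement; the sparsest consistent x is (0, 1)
q = 0.5
eps = 1.0        # smoothing parameter, tightened as iterations proceed

# Start from the minimum-l2 solution x = A^T y / ||A||^2.
norm2 = sum(ai * ai for ai in a)
x = [ai * y / norm2 for ai in a]

for _ in range(100):
    # d_i = 1 / w_i with weights w_i = (x_i^2 + eps)^(q/2 - 1).
    d = [(xi * xi + eps) ** (1 - q / 2) for xi in x]
    denom = sum(ai * ai * di for ai, di in zip(a, d))
    x = [di * ai * y / denom for di, ai in zip(d, a)]
    eps = max(eps * 0.7, 1e-12)

print(x)  # approaches the sparse solution [0, 1]
```

Every iterate satisfies the constraint exactly; the decreasing weights on small coordinates are what drive the iterate toward the sparse vertex.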
Computational methods for sparse solution of linear inverse problems
, 2009
Cited by 167 (0 self)
The goal of sparse approximation problems is to represent a target signal approximately as a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a wealth of applications.
Just relax: Convex programming methods for subset selection and sparse approximation
, 2004
Cited by 103 (5 self)
Subset selection and sparse approximation problems request a good approximation of an input signal using a linear combination of elementary signals, yet they stipulate that the approximation may only involve a few of the elementary signals. This class of problems arises throughout electrical engineering, applied mathematics, and statistics, but little theoretical progress has been made over the last fifty years. Subset selection and sparse approximation both admit natural convex relaxations, but the literature contains few results on the behavior of these relaxations for general input signals. This report demonstrates that the solution of the convex program frequently coincides with the solution of the original approximation problem. The proofs depend essentially on geometric properties of the ensemble of elementary signals. The results are powerful because sparse approximation problems are combinatorial, while convex programs can be solved in polynomial time with standard software. Comparable new results for a greedy algorithm, Orthogonal Matching Pursuit, are also stated. This report should have a major practical impact because the theory applies immediately to many real-world signal processing problems.
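The greedy algorithm named above, Orthogonal Matching Pursuit, can be sketched in a few lines. This pure-Python toy is my illustration, not the report's code: it picks the atom most correlated with the residual, then re-fits all chosen coefficients by least squares (here a hand-rolled 2×2 normal-equation solve), recovering a 2-sparse combination exactly.

```python
# Toy Orthogonal Matching Pursuit (my sketch, not the report's code):
# greedily select the atom most correlated with the residual, then
# re-solve a small least-squares problem over all chosen atoms.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lstsq2(a1, a2, y):
    """Solve min ||y - c1*a1 - c2*a2|| via the 2x2 normal equations."""
    g11, g12, g22 = dot(a1, a1), dot(a1, a2), dot(a2, a2)
    b1, b2 = dot(a1, y), dot(a2, y)
    det = g11 * g22 - g12 * g12
    return ((g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det)

atoms = [
    [1.0, 0.0, 0.0],
    [0.8, 0.6, 0.0],
    [0.0, 0.0, 1.0],
]
y = [2.0, 0.0, 1.0]   # = 2*atoms[0] + 1*atoms[2], a 2-sparse target

chosen = []
residual = y[:]
for _ in range(2):
    j = max(range(len(atoms)), key=lambda k: abs(dot(atoms[k], residual)))
    chosen.append(j)
    if len(chosen) == 1:
        coef = [dot(atoms[j], y) / dot(atoms[j], atoms[j])]
    else:
        coef = list(lstsq2(atoms[chosen[0]], atoms[chosen[1]], y))
    # Orthogonal step: residual against the best fit over ALL chosen atoms.
    approx = [sum(c * atoms[i][d] for c, i in zip(coef, chosen))
              for d in range(3)]
    residual = [yi - ai for yi, ai in zip(y, approx)]

print(sorted(chosen), coef)  # [0, 2] with coefficients [2.0, 1.0]
```

The re-fitting step is what distinguishes OMP from plain matching pursuit: the residual stays orthogonal to every atom already selected, so no atom is picked twice.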
Toeplitz compressed sensing matrices with applications to sparse channel estimation
, 2010
Cited by 93 (12 self)
Compressed sensing (CS) has recently emerged as a powerful signal acquisition paradigm. In essence, CS enables the recovery of high-dimensional sparse signals from relatively few linear observations in the form of projections onto a collection of test vectors. Existing results show that if the entries of the test vectors are independent realizations of certain zero-mean random variables, then with high probability the unknown signals can be recovered by solving a tractable convex optimization problem. This work extends CS theory to settings where the entries of the test vectors exhibit structured statistical dependencies. It follows that CS can be effectively utilized in linear, time-invariant system identification problems provided the impulse response of the system is (approximately or exactly) sparse. An immediate application is in wireless multipath channel estimation. It is shown here that time-domain probing of a multipath channel with a random binary sequence, along with utilization of CS reconstruction techniques, can provide significant improvements in estimation accuracy compared to traditional least-squares based linear channel estimation strategies. Abstract extensions of the main results are also discussed, where the theory of equitable graph coloring is employed to establish the utility of CS in settings where the test vectors exhibit more general statistical dependencies.
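The structured dependency in question is concrete: probing an FIR channel with a ±1 sequence makes the observation model a Toeplitz matrix built from the probe. The sketch below is my illustration of that equivalence (probe length, channel taps, and values are made up, not the paper's), checking that linear convolution and the explicit Toeplitz matrix give identical observations.

```python
# Illustration (mine, not the paper's code): probing an FIR channel with
# a random +/-1 sequence yields observations that are exactly a Toeplitz
# matrix, built from the probe, applied to the sparse impulse response.

import random

random.seed(0)
n, m = 8, 5                        # probe length, channel length
probe = [random.choice([-1, 1]) for _ in range(n)]
h = [0.0, 0.0, 3.0, 0.0, -2.0]     # sparse impulse response (2 taps)

# Linear convolution of probe and channel (what the receiver sees).
conv = [sum(probe[i] * h[k - i] for i in range(n) if 0 <= k - i < m)
        for k in range(n + m - 1)]

# The same observations via an explicit Toeplitz sensing matrix:
# row k, column j holds probe[k - j] (zero outside the probe's support).
T = [[probe[k - j] if 0 <= k - j < n else 0.0 for j in range(m)]
     for k in range(n + m - 1)]
via_matrix = [sum(T[k][j] * h[j] for j in range(m))
              for k in range(n + m - 1)]

assert conv == via_matrix
print("convolution == Toeplitz matrix times sparse channel")
```

Because the matrix rows are shifts of one probe sequence, its entries are highly dependent, which is exactly why the independent-entry CS results do not apply directly.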
The Cosparse Analysis Model and Algorithms
, 2011
Cited by 66 (14 self)
After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.
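To make the synthesis/analysis contrast concrete: in the synthesis model a signal is a combination of few dictionary atoms, while in the analysis model an operator Ω applied to the signal produces many zeros (the "cosparsity"). The toy below is my example, not the paper's, using a finite-difference operator as Ω on a piecewise-constant signal.

```python
# Tiny illustration of the analysis (cosparse) viewpoint, using a
# first-order finite-difference operator as Omega (my example, not
# the paper's). A piecewise-constant signal is not itself sparse, but
# Omega*x is: most rows of Omega are orthogonal to x.

x = [2.0, 2.0, 2.0, 5.0, 5.0, 1.0, 1.0, 1.0]   # piecewise-constant signal

# Analysis operator Omega applied to x: consecutive differences.
omega_x = [x[i + 1] - x[i] for i in range(len(x) - 1)]

# The analysis model cares about where Omega*x is ZERO: the cosupport.
cosupport = [i for i, v in enumerate(omega_x) if v == 0.0]
cosparsity = len(cosupport)

print(omega_x)     # [0.0, 0.0, 3.0, 0.0, -4.0, 0.0, 0.0]
print(cosparsity)  # 5 of the 7 rows of Omega annihilate x
```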
Fast solution of ℓ1-norm minimization problems when the solution may be sparse
, 2006
Cited by 54 (1 self)
The minimum ℓ1-norm solution to an underdetermined system of linear equations y = Ax is often, remarkably, also the sparsest solution to that system. This sparsity-seeking property is of interest in signal processing and information transmission. However, general-purpose optimizers are much too slow for ℓ1 minimization in many large-scale applications. The Homotopy method was originally proposed by Osborne et al. for solving noisy overdetermined ℓ1-penalized least squares problems. We here apply it to solve the noiseless underdetermined ℓ1-minimization problem min ‖x‖1 subject to y = Ax. We show that Homotopy runs much more rapidly than general-purpose LP solvers when sufficient sparsity is present. Indeed, the method often has the following k-step solution property: if the underlying solution has only k nonzeros, the Homotopy method reaches that solution in only k iterative steps. When this property holds and k is small compared to the problem size, ℓ1-minimization problems with k-sparse solutions can be solved in a fraction of the cost of solving one full-sized linear system. We demonstrate this k-step solution property for two kinds of problem suites.
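The k-step behavior is easiest to see in a special case the paper does not use, which I adopt here purely for illustration: when A is the identity, the ℓ1 solution path is coordinatewise soft-thresholding of y, and as the penalty λ decreases, exactly one coordinate enters the support at each breakpoint.

```python
# Sketch of the homotopy path in the simplest case, A = identity (my
# simplification, not the paper's setting): x(lambda) soft-thresholds y
# coordinatewise, breakpoints sit at the magnitudes |y_i|, and each
# breakpoint admits one new nonzero, so a k-sparse solution is reached
# after k steps, mirroring the "k-step solution property".

def soft(v, lam):
    """Soft-thresholding: the l1-penalized solution for a scalar."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

y = [3.0, 0.0, -1.5, 0.0, 0.4]          # underlying solution has k = 3 nonzeros
breakpoints = sorted((abs(v) for v in y if v != 0.0), reverse=True)

support_sizes = []
for lam in breakpoints:
    x = [soft(v, lam * 0.999) for v in y]   # evaluate just past the breakpoint
    support_sizes.append(sum(1 for v in x if v != 0.0))

print(breakpoints)    # [3.0, 1.5, 0.4]
print(support_sizes)  # [1, 2, 3]: one new nonzero per homotopy step
```

For a general A the breakpoints come from the LARS/Homotopy optimality conditions rather than from |y_i| directly, but the one-coordinate-per-step picture is the same when the k-step property holds.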