Results 1–7 of 7
Oracle-Order Recovery Performance of Greedy Pursuits With Replacement Against General Perturbations
IEEE Transactions on Signal Processing, 2013
Abstract

Cited by 1 (1 self)
Applying the theory of compressive sensing in practice always requires taking different kinds of perturbations into consideration. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Specifically, greedy pursuits with replacement include three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), in which the support estimate is evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that the recovery performance is stable against both perturbations. In addition, these bounds are compared with that of oracle recovery, the least squares solution with the locations of some of the largest entries in magnitude known a priori. The comparison shows that the error bounds of these algorithms differ only in their coefficients from the lower bound of oracle recovery for certain signals and perturbations, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions.
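As an illustration of the evaluate-and-update-support template the abstract describes, here is a minimal sketch of iterative hard thresholding (IHT) in Python. The function name, step-size rule, and test problem are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def iht(y, A, k, n_iter=200, step=None):
    """Iterative hard thresholding: x <- H_k(x + mu * A^T (y - A x)),
    where H_k keeps the k largest-magnitude entries (the support update)."""
    if step is None:
        # a conservative step size mu < 1 / ||A||_2^2
        step = 0.9 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)   # gradient step on ||y - Ax||_2^2 / 2
        idx = np.argsort(np.abs(x))[:-k]   # indices outside the k largest magnitudes
        x[idx] = 0.0                       # hard threshold: prune the support estimate
    return x
```

CoSaMP and SP follow the same pattern of re-estimating the support in each iteration, but additionally solve a least squares problem on the merged support rather than taking a plain gradient step.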
Sparse recovery with coherent tight frame via analysis Dantzig selector and analysis LASSO
, 2013
A null space analysis of the ℓ1-synthesis method in frame-based compressed sensing, in preparation
, 2013
GLOBAL GEOMETRIC CONDITIONS ON SENSING MATRICES FOR THE SUCCESS OF THE ℓ1 MINIMIZATION ALGORITHM
, 2013
Abstract
Compressed Sensing concerns a new class of linear data acquisition protocols that are more efficient than the classical Shannon sampling theorem when targeting signals with sparse structures. In this thesis, we study the stability of a Statistical Restricted Isometry Property and show how this property can be further relaxed while maintaining its sufficiency for the Basis Pursuit algorithm to recover sparse signals. We then look at the dictionary extension of Compressed Sensing, where signals are sparse under a redundant dictionary and reconstruction is achieved by the ℓ1 synthesis method. By establishing a necessary and sufficient condition for the stability of ℓ1 synthesis, we are able to predict this algorithm's performance under different dictionaries. Last, we construct a class of deterministic sensing matrices for the Dirac-Fourier joint dictionary.
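The ℓ1 synthesis method mentioned above solves min ‖z‖₁ subject to ADz = y and outputs x = Dz. A minimal sketch using the standard linear-programming reformulation (split z = u − v with u, v ≥ 0) via SciPy; the function name, dictionary, and test data are illustrative assumptions, not from the thesis:

```python
import numpy as np
from scipy.optimize import linprog

def l1_synthesis(y, A, D):
    """Solve min ||z||_1 s.t. A D z = y, and return x = D z.
    LP reformulation: z = u - v with u, v >= 0, minimize sum(u) + sum(v)."""
    B = A @ D
    d = B.shape[1]
    c = np.ones(2 * d)            # objective: sum(u) + sum(v) = ||z||_1
    A_eq = np.hstack([B, -B])     # equality constraint: B(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x[:d] - res.x[d:]
    return D @ z, z
```

With D equal to the identity this reduces to ordinary Basis Pursuit; the null space analysis cited above concerns exactly when this program recovers the sparse coefficient vector under a redundant D.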
Robustness of Sparse Recovery via F-minimization: A Topological Viewpoint
, 2012
Abstract
A recent trend in compressed sensing is to consider nonconvex optimization techniques for sparse recovery. A general class of such optimizations, called F-minimization, has become of particular interest, since its exact reconstruction condition (ERC) in the noiseless setting can be precisely characterized by the null space property (NSP). However, little work has been done concerning its robust reconstruction condition (RRC) in the noisy setting. In this paper we view the null space of the measurement matrix as a point on the Grassmann manifold, and then study the relation of the ERC and RRC sets on the Grassmannian. It is shown that the RRC set is exactly the topological interior of the ERC set. From this characterization, a previous result on the equivalence of ERC and RRC for ℓp-minimization follows easily as a special case. Moreover, when F is nondecreasing, it is shown that the ERC and RRC sets are equivalent up to a set of measure zero. As a consequence, the probabilities of ERC and RRC are the same if the measurement matrix is randomly generated according to a continuous distribution. Finally, we provide several rules for comparing the performances of different cost functions, as applications of the above results.
Robustness of Sparse Recovery via F-minimization: A Topological Viewpoint
Abstract
A recent trend in compressed sensing is to consider nonconvex optimization techniques for sparse recovery. A general class of such optimizations, called F-minimization, has become of particular interest, since its exact reconstruction condition (ERC) in the noiseless setting can be precisely characterized by the null space property (NSP). However, little work has been done concerning its robust reconstruction condition (RRC) in the noisy setting. In this paper we view the null space of the measurement matrix as a point on the Grassmann manifold, and then study the relation of the ERC and RRC sets, denoted Ω_J and Ω_J^r. It is shown that Ω_J^r is the interior of Ω_J. From this characterization, a previous result on the equivalence of ERC and RRC for ℓp-minimization follows easily as a special case. Moreover, when F is nondecreasing, it is shown that Ω_J \ int(Ω_J) is a set of measure zero and of the first category. As a consequence, the probabilities of ERC and RRC are the same if the measurement matrix A is randomly generated according to a continuous distribution. Quantitatively, if the null space N(A) lies in the "d-interior" of Ω_J, then RRC is satisfied with the robustness constant C = (2 + 2d) / (d σ_min(A^T)); and conversely, if RRC holds with C = (2 − 4d) / (d σ_max(A^T)), then N(A) must lie in the d-interior of Ω_J. Based on this result, Gordon's escape through the mesh theorem is applied to study the tradeoff between measurement rate and robustness in the asymptotic regime. Finally, we present several rules for comparing the performances of different cost functions, which potentially provide guiding principles for the design of the F function.