Results 1–10 of 46
Probing the Pareto frontier for basis pursuit solutions
, 2008
Cited by 157 (2 self)

Abstract:
The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
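The root-finding idea described in this abstract can be sketched numerically. The following is a minimal illustration, not the authors' SPGL1 implementation: the inner solver here is plain projected gradient rather than their spectral method, and all problem sizes, iteration counts, and tolerances are arbitrary choices for the demo.

```python
import numpy as np

def project_l1(v, tau):
    """Euclidean projection of v onto the l1-ball {x : ||x||_1 <= tau}."""
    if tau <= 0:
        return np.zeros_like(v)
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.arange(1, u.size + 1)
    rho = np.nonzero(u * k > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1)  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_subproblem(A, b, tau, x, iters=2000):
    """Projected gradient for min 0.5*||Ax - b||^2  s.t.  ||x||_1 <= tau."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        x = project_l1(x + step * (A.T @ (b - A @ x)), tau)
    return x

def bpdn_root_find(A, b, sigma, outer=30, tol=1e-4):
    """Newton iteration on the Pareto curve: find tau with phi(tau) = sigma,
    where phi(tau) = ||b - A x_tau||_2 for the constrained solution x_tau."""
    tau = 0.0
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        x = lasso_subproblem(A, b, tau, x)   # warm-started subproblem solve
        r = b - A @ x
        phi = np.linalg.norm(r)
        if phi <= sigma + tol:               # misfit target reached
            break
        # Derivative of the curve, obtained from the dual solution.
        dphi = -np.linalg.norm(A.T @ r, np.inf) / phi
        tau -= (phi - sigma) / dphi          # Newton update of tau
    return x

# Demo: recover a 3-sparse vector from 30 Gaussian measurements.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60))
x0 = np.zeros(60)
x0[rng.choice(60, size=3, replace=False)] = rng.standard_normal(3)
b = A @ x0
x_hat = bpdn_root_find(A, b, sigma=1e-3)
```

Note that, as the abstract states, the outer iteration touches `A` only through matrix-vector products, which is what makes the approach viable at large scale.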
A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
Compressive simultaneous full-waveform simulation
, 2008
Cited by 34 (19 self)

Abstract:
The fact that the computational complexity of wavefield simulation is proportional to the size of the discretized model and acquisition geometry, and not to the complexity of the simulated wavefield, is a major impediment within seismic imaging. By turning simulation into a compressive sensing problem, where simulated data is recovered from a relatively small number of independent simultaneous sources, we remove this impediment by showing that compressively sampling a simulation is equivalent to compressively sampling the sources, followed by solving a reduced system. As in compressive sensing, this allows for a reduction in sampling rate and hence in simulation costs. We demonstrate this principle for the time-harmonic Helmholtz solver. The solution is computed by inverting the reduced system, followed by a recovery of the full wavefield with a sparsity-promoting program. Depending on the wavefield's sparsity, this approach can lead to significant cost reductions, in particular when combined with the implicit preconditioned Helmholtz solver, which is known to converge even for decreasing mesh sizes and increasing angular frequencies. These properties make our scheme a viable alternative to explicit time-domain finite differences.
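The equivalence claimed above (compressively sampling the simulation equals compressively sampling the sources, then solving a reduced system) rests on the linearity of the solver. A toy sketch, with a generic well-conditioned matrix standing in for the discretized Helmholtz operator; all names and sizes here are illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_src, n_sim = 80, 20, 4  # grid size, individual sources, simultaneous shots

# Stand-in for a discretized time-harmonic modeling operator: any
# well-conditioned invertible matrix suffices for this illustration.
H = rng.standard_normal((n, n)) + n * np.eye(n)
S = rng.standard_normal((n, n_src))      # one column per individual source
W = rng.standard_normal((n_src, n_sim))  # random simultaneous-source weights

# Simulating the few random source superpositions directly (n_sim solves) ...
U_sim = np.linalg.solve(H, S @ W)
# ... gives the same data as superimposing per-source simulations (n_src solves).
U_full = np.linalg.solve(H, S)
# U_sim == U_full @ W, so the cost drops from n_src solves to n_sim solves.
```

The cost reduction is exactly the ratio of simultaneous experiments to individual sources (here 4 vs. 20); the sparsity-promoting recovery of the full wavefield is the separate, second stage the abstract describes.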
Shifting Inequality and Recovery of Sparse Signals
 IEEE Transactions on Signal Processing
Cited by 28 (5 self)

Abstract:
In this paper, we present a concise and coherent analysis of the constrained ℓ1 minimization method for stable recovery of high-dimensional sparse signals in both the noiseless and noisy cases. The analysis is surprisingly simple and elementary, yet leads to strong results. In particular, it is shown that the sparse recovery problem can be solved via ℓ1 minimization under weaker conditions than what is known in the literature. A key technical tool is an elementary inequality, called the Shifting Inequality, which, for a given nonnegative decreasing sequence, bounds the ℓ2 norm of a subsequence in terms of the ℓ1 norm of another subsequence by shifting the elements to the upper end. Index Terms: ℓ1 minimization, restricted isometry property, shifting inequality, sparse recovery.
Structured compressed sensing: From theory to applications
 IEEE Trans. Signal Process
, 2011
Cited by 17 (6 self)

Abstract:
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference for researchers, attempting to put some of the existing ideas in the perspective of practical applications. Index Terms: Approximation algorithms, compressed sensing, compression algorithms, data acquisition, data compression, sampling methods.
Manifold-based signal recovery and parameter estimation from compressive measurements
Cited by 11 (6 self)

Abstract:
A field known as Compressive Sensing (CS) has recently emerged to help address the growing challenges of capturing and processing high-dimensional signals and data sets. CS exploits the surprising fact that the information contained in a sparse signal can be preserved in a small number of compressive (or random) linear measurements of that signal. Strong theoretical guarantees have been established on the accuracy to which sparse or near-sparse signals can be recovered from noisy compressive measurements. In this paper, we address similar questions in the context of a different modeling framework. Instead of sparse models, we focus on the broad class of manifold models, which can arise in both parametric and nonparametric signal families. Building upon recent results concerning the stable embeddings of manifolds within the measurement space, we establish both deterministic and probabilistic instance-optimal bounds in ℓ2 for manifold-based signal recovery and parameter estimation from noisy compressive measurements. In line with analogous results for sparsity-based CS, we conclude that much stronger bounds are possible in the probabilistic setting. Our work supports the growing empirical evidence that manifold-based models can be used with high accuracy in compressive signal processing.
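The "stable embedding" premise this paper builds on (random measurements approximately preserving pairwise distances between points on a manifold) is easy to check numerically on a toy manifold. The sketch below is an illustration of the premise only, not the paper's construction; the circle, dimensions, and tolerances are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n_pts = 100, 50, 40  # ambient dimension, measurements, manifold samples

# A circle (a 1-D manifold) embedded in R^d via two random orthonormal axes.
Q, _ = np.linalg.qr(rng.standard_normal((d, 2)))
theta = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
X = np.cos(theta)[:, None] * Q[:, 0] + np.sin(theta)[:, None] * Q[:, 1]

# Compressive measurement operator: scaled Gaussian, so distances are
# preserved in expectation.
Phi = rng.standard_normal((m, d)) / np.sqrt(m)
Y = X @ Phi.T

# Ratios of pairwise distances after vs. before measurement.
ratios = np.array([
    np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
    for i in range(n_pts) for j in range(i + 1, n_pts)
])
# All ratios concentrate near 1: the circle is stably embedded in R^m.
```

Here every ratio lands close to 1 even though the ambient dimension is halved, which is the geometric fact underlying the recovery and parameter-estimation bounds in the abstract.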
Kronecker Compressive Sensing
Cited by 10 (1 self)

Abstract:
Compressive sensing (CS) is an emerging approach for acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve signals that are multidimensional; in this case, CS works best with representations that encapsulate the structure of such signals in every dimension. We propose the use of Kronecker product matrices in CS for two purposes. First, we can use such matrices as sparsifying bases that jointly model the different types of structure present in the signal. Second, the measurement matrices used in distributed settings can be easily expressed as Kronecker product matrices. The Kronecker product formulation in these two settings enables the derivation of analytical bounds for sparse approximation of multidimensional signals and CS recovery performance as well as a means to evaluate novel distributed measurement schemes.
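The computational appeal of Kronecker-structured measurement matrices comes from the standard identity (A1 ⊗ A2) vec(X) = vec(A2 X A1ᵀ), which lets one apply the large Kronecker matrix through two small matrix products without ever forming it. A generic numerical check of that identity (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
A1 = rng.standard_normal((3, 5))  # per-dimension measurement matrix 1
A2 = rng.standard_normal((4, 6))  # per-dimension measurement matrix 2
X = rng.standard_normal((6, 5))   # 2-D signal

# Explicit Kronecker product acting on the column-stacked signal ...
y_kron = np.kron(A1, A2) @ X.flatten(order="F")
# ... versus the equivalent pair of small products (A1 ⊗ A2 never formed).
y_fast = (A2 @ X @ A1.T).flatten(order="F")
# y_kron and y_fast agree; storage drops from (3*4)x(5*6) to 3x5 plus 4x6.
```

For a D-dimensional signal the same trick applies mode by mode, which is why Kronecker measurement matrices fit distributed, per-dimension acquisition so naturally.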
Compressed sensing over ℓp-balls: Minimax mean square error. arXiv preprint arXiv:1103.1943v2
, 2011
Cited by 4 (3 self)

Abstract:
We consider the compressed sensing problem, where the object x0 ∈ R^N is to be recovered from incomplete measurements y = Ax0 + z; here the sensing matrix A is an n × N random matrix with iid Gaussian entries and n < N. A popular method of sparsity-promoting reconstruction is ℓ1-penalized least-squares reconstruction (aka LASSO, Basis Pursuit). It is currently popular to consider the strict sparsity model, where the object x0 is nonzero in only a small fraction of entries. In this paper, we instead consider the much more broadly applicable ℓp-sparsity model, where x0 is sparse in the sense of having ℓp norm bounded by ξ · N^{1/p} for some fixed 0 < p ≤ 1 and ξ > 0. We study an asymptotic regime in which n and N both tend to infinity with limiting ratio n/N = δ ∈ (0, 1), both in the noisy (z ≠ 0) and noiseless (z = 0) cases. Under weak assumptions on x0, we are able to precisely evaluate the worst-case asymptotic minimax mean-squared reconstruction error (AMSE) for ℓ1-penalized least-squares: min over penalization parameters, max over ℓp-sparse objects x0. We exhibit the asymptotically
A restricted isometry property for structurally-subsampled unitary matrices
 in Proc. Annu. Allerton Conf. Communication, Control, and Computing
, 2009
Cited by 4 (1 self)

Abstract:
Subsampled (or partial) Fourier matrices were originally introduced in the compressive sensing literature by Candès et al. Later, in papers by Candès and Tao and Rudelson and Vershynin, it was shown that (random) subsampling of the rows of many other classes of unitary matrices also yields effective sensing matrices. The key requirement is that the rows of U, the unitary matrix, must be highly incoherent with the basis in which the signal is sparse. In this paper, we consider acquisition systems that, despite sensing sparse signals in an incoherent domain, cannot randomly subsample rows from U. We consider a general class of systems in which the sensing matrix corresponds to subsampling of the rows of matrices of the form Φ = RU (instead of U), where R is typically a low-rank matrix whose structure reflects the physical/technological constraints of the acquisition system. We use the term "structurally-subsampled unitary matrices" to describe such sensing matrices. We investigate the restricted isometry property of a particular class of structurally-subsampled unitary matrices that arise naturally in application areas such as multiple-antenna channel estimation and sub-Nyquist sampling. In addition, we discuss an immediate application of this work in the area of wireless channel estimation, where the main results of this paper can be applied to the estimation of multiple-antenna orthogonal frequency division multiplexing channels that have sparse impulse responses.
Compressed Sensing for Surface Characterization and Metrology
 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
, 2009
Cited by 3 (2 self)

Abstract:
Surface metrology is the science of measuring small-scale features on surfaces. In this paper, compressed sensing (CS) theory is introduced into surface metrology to reduce data acquisition. We first describe how CS naturally fits surface measurement and analysis. Then, a geometric wavelet-based recovery algorithm is proposed for scratched and textural surfaces, solving a convex optimization problem with sparsity constraints in the curvelet and wave atom transforms. In the framework of compressed measurement, one can stably recover compressible surfaces from incomplete and inaccurate random measurements using this recovery algorithm. The necessary number of measurements is far smaller than that required by traditional methods, which must obey the Shannon sampling theorem. Compressed metrology essentially shifts online measurement cost to the computational cost of offline nonlinear recovery. By combining the ideas of sampling, sparsity, and compression, the proposed method suggests a new acquisition protocol and leads to building new measurement instruments. This is especially significant for measurements that are limited by physical constraints or are extremely expensive. Experiments on engineering and bioengineering surfaces demonstrate the good performance of the proposed method.