Results 1–10 of 18
Robust Uncertainty Principles: Exact Signal Reconstruction From Highly Incomplete Frequency Information
2006
Abstract
Cited by 1304 (42 self)
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ)δ(t − τ), obeying |T| ≤ C_M · (log N)^{−1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem min_g Σ_t |g(t)| s.t. ĝ(ω) = f̂(ω) for all ω ∈ Ω.
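In the real, finite-dimensional case, the ℓ1 minimization described in this abstract is a linear program. A minimal sketch, with a Gaussian measurement matrix standing in for the partial Fourier ensemble so everything stays real-valued (all names and parameters here are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 subject to Ax = b, via the standard LP split
    x = u - v with u, v >= 0, minimizing sum(u) + sum(v)."""
    n, m = A.shape
    c = np.ones(2 * m)                           # objective value equals ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:m] - res.x[m:]

rng = np.random.default_rng(0)
m, n, k = 64, 32, 4                              # signal length, measurements, spikes
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, m)) / np.sqrt(n)     # stand-in for partial Fourier rows
x_hat = basis_pursuit(A, A @ x0)                 # recovery from n < m measurements
```

With the sparsity level well below the number of measurements, the recovered `x_hat` matches the unknown spike train up to solver tolerance, mirroring the exact-recovery statement above.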
Image Decomposition via the Combination of Sparse Representations and a Variational Approach
IEEE Transactions on Image Processing, 2004
Abstract
Cited by 127 (27 self)
The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, restoration, and more. In recent years several pioneering works suggested such a separation based on variational formulations, and others using independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms. The method combines the Basis Pursuit Denoising (BPDN) algorithm and the Total-Variation (TV) regularization scheme. The basic idea presented in this paper is the use of two appropriate dictionaries, one for the representation of textures, and the other for the natural scene parts, assumed to be piecewise smooth. Both dictionaries are chosen such that they lead to sparse representations over one type of image content (either texture or piecewise smooth). The use of BPDN with the two augmented dictionaries leads to the desired separation, along with noise removal as a byproduct. As the need to choose proper dictionaries is generally hard, a TV regularization is employed to better direct the separation process and reduce ringing artifacts. We present a highly efficient numerical scheme to solve the combined optimization problem posed in our model, and show several experimental results that validate the algorithm's performance.
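The BPDN step at the heart of this method solves min_α ½‖y − Dα‖² + λ‖α‖₁, where D would be the concatenation of the texture and cartoon dictionaries. A bare iterative soft-thresholding sketch of the generic problem (a single random dictionary stands in for the concatenated pair; purely illustrative, not the paper's numerical scheme):

```python
import numpy as np

def bpdn_ista(D, y, lam, n_iters=500):
    """Iterative soft-thresholding for Basis Pursuit Denoising:
    min_a 0.5*||y - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth term
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        g = a - (D.T @ (D @ a - y)) / L    # gradient step on the quadratic term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(1)
n, m, k = 64, 128, 5                       # observations, atoms, active atoms
D = rng.standard_normal((n, m)) / np.sqrt(n)
a0 = np.zeros(m)
a0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = D @ a0 + 0.01 * rng.standard_normal(n) # noisy observation
a_hat = bpdn_ista(D, y, lam=0.02)
```

Because the penalty keeps most coefficients at exactly zero, the sparse code `a_hat` concentrates on the atoms that actually generated `y`, which is what drives the texture/cartoon separation when D holds two content-specific dictionaries.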
Compressed Sensing MRI
Abstract
Cited by 48 (2 self)
Compressed sensing (CS) aims to reconstruct signals and images from significantly fewer measurements than were traditionally thought necessary. Magnetic Resonance Imaging (MRI) is an essential medical imaging tool with an inherently slow data acquisition process. Applying CS to MRI offers potentially significant scan time reductions, with benefits for patients and health care economics.
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
2010
Abstract
Cited by 31 (2 self)
This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, ‖Wx‖1 where W is arbitrary, or a combination thereof. In addition, the paper also introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied with a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms.
Keywords: optimal first-order methods, Nesterov's accelerated descent algorithms, proximal algorithms, conic duality, smoothing by conjugation, the Dantzig selector, the LASSO, nuclear-norm minimization.
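As a concrete taste of the "optimal first-order method" step, here is a bare-bones FISTA (accelerated proximal gradient) loop for the LASSO. This is only a toy instance, not the paper's conic framework or its released software; every name and parameter below is illustrative:

```python
import numpy as np

def fista_lasso(A, y, lam, n_iters=300):
    """Nesterov-accelerated proximal gradient for
    min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0                   # extrapolated point and momentum scalar
    for _ in range(n_iters):
        g = z - (A.T @ (A @ z - y)) / L    # gradient step at the extrapolated point
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # prox of lam*||.||_1
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(3)
n, m, k = 80, 160, 6
A = rng.standard_normal((n, m)) / np.sqrt(n)
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(n)
x_hat = fista_lasso(A, y, lam=0.03)
```

The momentum extrapolation is what distinguishes this from plain proximal gradient descent: it improves the worst-case convergence rate from O(1/k) to the optimal O(1/k²) for this problem class.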
Compressive Structured Light for Recovering Inhomogeneous Participating Media
Abstract
Cited by 23 (0 self)
We propose a new method named compressive structured light for recovering inhomogeneous participating media. Whereas conventional structured light methods emit coded light patterns onto the surface of an opaque object to establish correspondence for triangulation, compressive structured light projects patterns into a volume of participating medium to produce images which are integral measurements of the volume density along the line of sight. For a typical participating medium encountered in the real world, the integral nature of the acquired images enables the use of compressive sensing techniques that can recover the entire volume density from only a few measurements. This makes the acquisition process more efficient and enables reconstruction of dynamic volumetric phenomena. Moreover, our method requires the projection of multiplexed coded illumination, which has the added advantage of increasing the signal-to-noise ratio of the acquisition. Finally, we propose an iterative algorithm to correct for the attenuation of the participating medium during the reconstruction process. We show the effectiveness of our method with simulations as well as experiments on the volumetric recovery of multiple translucent layers, 3D point clouds etched in glass, and the dynamic process of milk drops dissolving in water.
Image Decomposition: Separation of Texture from Piecewise Smooth Content
2003
Abstract
Cited by 22 (4 self)
This paper presents a novel method for separating images into texture and piecewise smooth parts. The proposed approach is based on a combination of the Basis Pursuit Denoising (BPDN) algorithm and the Total-Variation (TV) regularization scheme. The basic idea promoted in this paper is the use of two appropriate dictionaries, one for the representation of textures, and the other for the natural scene parts. Each dictionary is designed for sparse representation of a particular type of image content (either texture or piecewise smooth). The use of BPDN with the two augmented dictionaries leads to the desired separation, along with noise removal as a byproduct. As choosing a proper dictionary for natural scene content is very hard, a TV regularization is employed to better direct the separation process. Experimental results validate the algorithm's performance.
Compressed Sensing for Surface Characterization and Metrology
IEEE Transactions on Instrumentation and Measurement, 2009
Abstract
Cited by 3 (2 self)
Surface metrology is the science of measuring small-scale features on surfaces. In this paper, a novel compressed sensing (CS) theory is introduced to surface metrology to reduce data acquisition. We first describe how CS naturally fits surface measurement and analysis. Then, a geometric-wavelet-based recovery algorithm is proposed for scratched and textural surfaces, solving a convex optimization problem with sparsity constraints in the curvelet and wave-atom transform domains. In the framework of compressed measurement, one can stably recover compressible surfaces from incomplete and inaccurate random measurements by using the recovery algorithm. The necessary number of measurements is far fewer than that required by traditional methods, which must obey the Shannon sampling theorem. Compressed metrology essentially shifts online measurement cost to the offline computational cost of nonlinear recovery. By combining the ideas of sampling, sparsity, and compression, the proposed method indicates a new acquisition protocol and leads to building new measurement instruments. This is particularly significant for measurements that are limited by physical constraints or are extremely expensive. Experiments on engineering and bioengineering surfaces demonstrate the good performance of the proposed method.
Nayak, “Accelerated three-dimensional upper airway MRI using compressed sensing”
Magn. Reson. Med., 2009
Abstract
Cited by 3 (1 self)
upper airway has provided insights into vocal tract shaping and data for its modeling. Small movements of articulators can lead to large changes in the produced sound; therefore, improving the resolution of these data sets, within the constraints of a sustained speech sound (6–12 s), is an important area for investigation. The purpose of the study is to provide a first application of compressed sensing (CS) to high-resolution 3D upper airway MRI using spatial finite differences as the sparsifying transform, and to experimentally determine the benefit of applying constraints on image phase. Estimates of image phase are incorporated into the CS reconstruction to improve the sparsity of the finite differences of the solution. In a retrospective subsampling experiment with no sound production, 5× and 4× were the highest acceleration factors that produced acceptable image quality when using a phase constraint and when not using one, respectively.
Fast Algorithms for Image Reconstruction with Application to Partially Parallel MR Imaging
Abstract
Cited by 1 (0 self)
This paper presents two fast algorithms for total-variation-based image reconstruction in partially parallel magnetic resonance imaging (PPI), where the inversion matrix is large and ill-conditioned. These algorithms utilize variable splitting techniques to decouple the original problem into more easily solved subproblems. The first method reduces the image reconstruction problem to an unconstrained minimization problem, which is solved by an alternating proximal minimization algorithm. One phase of the algorithm solves a total variation (TV) denoising problem, and the second phase solves an ill-conditioned linear system. Linear and sublinear convergence results are given, and an implementation based on a primal-dual hybrid gradient (PDHG) scheme for the TV problem and a Barzilai-Borwein scheme for the linear inversion is proposed. The second algorithm exploits the special structure of the PPI reconstruction problem by decomposing it into one subproblem involving Fourier transforms and another subproblem that can be treated by the PDHG scheme. Numerical results and comparisons with recently developed methods indicate the efficiency of the proposed algorithms.
Key words: image reconstruction, variable splitting, TV denoising, nonlinear optimization.
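The TV-denoising phase can be sketched in one dimension with projected gradient ascent on the dual problem, a simpler relative of the PDHG scheme the paper actually uses (names, step size, and parameters below are illustrative):

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iters=2000):
    """Projected gradient on the dual of 1D total-variation denoising:
    min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|.
    The dual variable z lives on the differences and is confined to
    the box [-lam, lam]; the primal solution is x = y - D^T z."""
    z = np.zeros(len(y) - 1)               # one dual variable per difference
    for _ in range(n_iters):
        x = y + np.diff(np.concatenate(([0.0], z, [0.0])))  # x = y - D^T z
        z = np.clip(z + 0.25 * np.diff(x), -lam, lam)       # ascent + projection
    return y + np.diff(np.concatenate(([0.0], z, [0.0])))

# Denoise a noisy step signal: TV regularization preserves the jump
# while flattening the noise on each constant piece.
truth = np.concatenate([np.zeros(50), np.ones(50)])
rng = np.random.default_rng(2)
noisy = truth + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy, lam=0.3)
```

The step size 0.25 is safe because the dual objective's gradient has Lipschitz constant ‖DDᵀ‖ ≤ 4 for the 1D difference operator D.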
Stable Signal Recovery from Incomplete and Inaccurate Measurements
2005
Abstract
Suppose we wish to recover a vector x0 ∈ R^m (e.g. a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To recover x0, we consider the solution x♯ to the ℓ1-regularization problem min ‖x‖ℓ1 subject to ‖Ax − y‖ℓ2 ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x0 is sufficiently sparse, then the solution is within the noise level: ‖x♯ − x0‖ℓ2 ≤ C · ε. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x0 is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of x0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/(log m)^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights on the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.