Results 1–10 of 23
Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
SIAM J. Imaging Sci., 2008
Cited by 59 (13 self)
We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖_1 : Au = f, u ∈ R^n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈R^n} μ‖u‖_1 + (1/2)‖Au − f^k‖_2^2 for a given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps, and we present numerical results demonstrating that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for the many compressed sensing applications where matrix-vector operations involving A and A^⊤ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver based solely on such operations for the unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
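The outer loop described in this abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the paper's fast fixed-point continuation solver is replaced by a plain ISTA inner loop, and all function names (`shrink`, `ista`, `bregman_basis_pursuit`) are invented for the sketch.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding, the proximal map of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, f, mu, u0, n_iter=500):
    """Inner solver for min_u mu*||u||_1 + 0.5*||A u - f||_2^2.
    Plain ISTA; the paper uses a fixed-point continuation solver here."""
    u = u0.copy()
    tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/||A||^2
    for _ in range(n_iter):
        u = shrink(u - tau * A.T @ (A @ u - f), tau * mu)
    return u

def bregman_basis_pursuit(A, f, mu=1.0, n_outer=10, tol=1e-8):
    """Outer Bregman loop: add the residual back into the data
    (f^{k+1} = f^k + (f - A u^k)) and re-solve the unconstrained
    subproblem, warm-starting from the previous iterate."""
    u = np.zeros(A.shape[1])
    fk = f.copy()
    for _ in range(n_outer):
        u = ista(A, fk, mu, u)
        r = f - A @ u
        if np.linalg.norm(r) <= tol * np.linalg.norm(f):
            break
        fk = fk + r
    return u
```

On small well-conditioned instances, a handful of outer iterations typically drives Au back to f, consistent with the two-to-six iterations the abstract reports.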
FIXED-POINT CONTINUATION FOR ℓ1-MINIMIZATION: METHODOLOGY AND CONVERGENCE
Cited by 45 (9 self)
We present a framework for solving the large-scale ℓ1-regularized convex minimization problem min ‖x‖_1 + µf(x). Our approach is based on two powerful algorithmic ideas: operator splitting and continuation. Operator splitting results in a fixed-point algorithm for any given scalar µ; continuation refers to approximately following the path traced by the optimal value of x as µ increases. In this paper, we study the structure of optimal solution sets, prove finite convergence for important quantities, and establish q-linear convergence rates for the fixed-point algorithm applied to problems with f(x) convex, but not necessarily strictly convex. The continuation framework, motivated by our convergence results, is demonstrated to facilitate the construction of practical algorithms.
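The two ideas in this abstract can be combined in a short sketch, taking f(x) = 0.5*||Ax - b||^2 as the smooth term. This is a hedged illustration, not the paper's FPC code; the stage counts, step rule, and the name `fpc` are assumptions of the sketch.

```python
import numpy as np

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fpc(A, b, mu_target, mu0=1.0, n_stages=5, n_inner=200):
    """Fixed-point continuation sketch for min_x ||x||_1 + mu*f(x) with
    f(x) = 0.5*||Ax - b||^2. Operator splitting yields the fixed-point
    iteration x <- shrink(x - tau*mu*A^T(Ax - b), tau); continuation
    sweeps mu from mu0 up to mu_target, warm-starting each stage."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2
    for mu in np.geomspace(mu0, mu_target, n_stages):
        tau = 1.0 / (mu * L)   # keeps the forward step nonexpansive
        for _ in range(n_inner):
            x = shrink(x - tau * mu * A.T @ (A @ x - b), tau)
    return x
```

Each stage's solution is a cheap warm start for the next, which is the point of following the solution path in µ.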
A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation
SIAM Journal on Scientific Computing, 2010
Cited by 21 (7 self)
We propose a fast algorithm for solving the ℓ1-regularized minimization problem min_{x∈R^n} µ‖x‖_1 + ‖Ax − b‖_2^2 for recovering sparse solutions to an underdetermined system of linear equations Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage, a first-order iterative method called "shrinkage" yields an estimate of the subset of components of x likely to be nonzero in an optimal solution. Restricting the decision variables x to this subset and fixing their signs at their current values reduces the ℓ1-norm ‖x‖_1 to a linear function of x. The resulting subspace problem, which involves the minimization of a smaller, smooth quadratic function, is solved in the second stage. Our code FPC_AS embeds this basic two-stage algorithm in a continuation (homotopy) approach by assigning a decreasing sequence of values to µ. The code exhibits state-of-the-art performance both in terms of its speed and its ability to recover sparse signals; it can even recover signals that are not as sparse as required by current compressive sensing theory.
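The two-stage structure described above can be sketched as follows. This is an illustrative toy version under stated assumptions, not FPC_AS itself: it omits the continuation loop over µ and the safeguards (e.g. against sign flips in the subspace solve) that an active-set code needs, and the name `two_stage` is invented.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def two_stage(A, b, mu, n_outer=3, n_shrink=100):
    """Illustrative two-stage loop for min_x mu*||x||_1 + 0.5*||Ax - b||^2.
    Stage 1 runs shrinkage iterations to estimate the support and signs;
    stage 2 minimizes the resulting smooth quadratic over that subspace."""
    x = np.zeros(A.shape[1])
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_outer):
        for _ in range(n_shrink):          # stage 1: shrinkage
            x = shrink(x - tau * A.T @ (A @ x - b), tau * mu)
        T = np.nonzero(x)[0]               # estimated support
        if T.size == 0:
            break
        s = np.sign(x[T])                  # signs fixed at current values
        AT = A[:, T]
        # With signs fixed, ||x||_1 = s^T x_T on the support, so the
        # subproblem is the small quadratic A_T^T A_T x_T = A_T^T b - mu*s.
        xT = np.linalg.solve(AT.T @ AT + 1e-12 * np.eye(T.size),
                             AT.T @ b - mu * s)
        x = np.zeros_like(x)
        x[T] = xT
    return x
```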
Optimally tuned iterative reconstruction algorithms for compressed sensing
 Selected Topics in Signal Processing
Cited by 17 (4 self)
We conducted an extensive computational experiment, lasting multiple CPU-years, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at sparselab.stanford.edu; they run 'out of the box' with no user tuning: it is not necessary to select thresholds or to know the likely degree of sparsity. Our class of algorithms includes iterative hard and soft thresholding with or without relaxation, as well as CoSaMP, subspace pursuit, and some natural extensions; as a result, our optimally tuned algorithms dominate such proposals. Our notion of optimality is defined in terms of phase transitions, i.e., we maximize the number of nonzeros at which the algorithm can successfully operate. We show that the phase transition is a well-defined quantity for our suite of random underdetermined linear systems, and our tuning gives the highest transition possible within each class of algorithms. We verify by extensive computation the robustness of our recommendations to the amplitude distribution of the nonzero coefficients as well as to the matrix ensemble defining the underdetermined system. Our findings include: (a) for all algorithms, the worst amplitude distribution for the nonzeros is generally the constant-amplitude random-sign distribution, where all nonzeros have the same amplitude; (b) various random matrix ensembles give the same phase transitions, while random partial isometries may give different transitions and require different tuning; (c) optimally tuned subspace pursuit dominates optimally tuned CoSaMP, particularly so when the system is almost square.
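One member of the tuned family, iterative hard thresholding with relaxation, can be sketched in a few lines; this is a generic textbook form with invented names, not the sparselab.stanford.edu implementation, and `relax` stands for the kind of parameter such tuning studies optimize.

```python
import numpy as np

def iht(A, b, k, n_iter=200, relax=1.0):
    """Iterative hard thresholding: a relaxed gradient step on
    ||Ax - b||^2 followed by keeping only the k largest-magnitude
    entries. relax=1 assumes A is suitably normalized; tuning the
    relaxation is exactly what the study above addresses."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + relax * A.T @ (b - A @ x)
        idx = np.argpartition(np.abs(g), -k)[-k:]   # k largest entries
        x = np.zeros_like(x)
        x[idx] = g[idx]
    return x
```

Note that, unlike soft thresholding, this step needs the sparsity level k as input; the phase-transition tuning above measures how large k can be before recovery fails.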
Curvelet-Wavelet Regularized Split Bregman Iteration for Compressed Sensing
Cited by 8 (3 self)
Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing allows one to recover this signal from far fewer samples than the Shannon-Nyquist theory requires. Many images can be sparsely approximated in expansions over suitable frames such as wavelets, curvelets, and wave atoms. Generally, wavelets represent point-like features well, while curvelets represent line-like features well. For a suitable recovery of images, we propose models that contain weighted sparsity constraints in two different frames. Given the incomplete measurements f = Φu + ε with the measurement matrix Φ ∈ R^{K×N}, K ≪ N, we consider the jointly sparsity-constrained optimization problem argmin_u {‖Λ_c Ψ_c u‖_1 + ‖Λ_w Ψ_w u‖_1 + (1/2)‖f − Φu‖_2^2}. Here Ψ_c and Ψ_w are the transform matrices corresponding to the two frames, and the diagonal matrices Λ_c and Λ_w contain the weights for the frame coefficients. We present efficient iterative methods for solving this optimization problem, based on alternating split Bregman algorithms. The convergence of the proposed iteration schemes is proved by showing that they can be understood as special cases of the Douglas-Rachford splitting algorithm. Numerical experiments for compressed sensing based on Fourier-domain random imaging show good performance of the proposed curvelet-wavelet regularized split Bregman (CWSpB) methods, where we particularly use a combination of wavelet and curvelet coefficients as sparsity constraints.
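The split Bregman mechanism behind such models can be sketched for a single-frame simplification, min_u ‖Ψu‖_1 + (1/2)‖f − Φu‖_2^2. This is an illustrative small-dense-matrix sketch, not the CWSpB code: the paper's model has two weighted frame terms (a second splitting pair would be added the same way), and in practice fast transforms replace the dense solve below.

```python
import numpy as np

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman(Phi, Psi, f, lam=1.0, n_iter=200):
    """Split Bregman for min_u ||Psi u||_1 + 0.5*||f - Phi u||_2^2,
    introducing the splitting d = Psi u and alternating over u, d,
    and the Bregman variable bb."""
    u = np.zeros(Phi.shape[1])
    d = np.zeros(Psi.shape[0])
    bb = np.zeros(Psi.shape[0])
    # Normal-equation matrix for the u-update (small dense case only).
    M = Phi.T @ Phi + lam * Psi.T @ Psi
    for _ in range(n_iter):
        u = np.linalg.solve(M, Phi.T @ f + lam * Psi.T @ (d - bb))
        d = shrink(Psi @ u + bb, 1.0 / lam)   # exact d-update
        bb = bb + Psi @ u - d                 # Bregman update
    return u
```

The alternation decouples the ℓ1 terms (closed-form shrinkage) from the quadratic data term (a linear solve), which is what makes the two-frame extension straightforward.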
On approximation of orientation distributions by means of spherical ridgelets
In Proc. IEEE International Symposium on Biomedical Imaging: From Nano to Macro, IEEE, 2008
Cited by 7 (0 self)
Visualization and analysis of the microarchitecture of brain parenchyma by means of magnetic resonance imaging is nowadays believed to be one of the most powerful tools for the assessment of various cerebral conditions as well as for understanding intracerebral connectivity. Unfortunately, conventional diffusion tensor imaging (DTI), used for estimating the local orientations of neural fibers, is incapable of performing reliably in situations where a voxel of interest accommodates multiple fiber tracts. In this case, a much more accurate analysis is possible using high angular resolution diffusion imaging (HARDI), which represents local diffusion by its apparent coefficients measured as a discrete function of spatial orientations. In this note, a novel approach to enhancing and modeling HARDI signals using multiresolution bases of spherical ridgelets is presented. In addition to its desirable properties of being adaptive, sparsifying, and efficiently computable, the proposed modeling leads to analytical computation of the orientation distribution functions associated with the measured diffusion, thereby providing a fast and robust analytical solution for q-ball imaging.
Compressed Sensing for Surface Characterization and Metrology
IEEE Transactions on Instrumentation and Measurement, 2009
Cited by 3 (2 self)
Surface metrology is the science of measuring small-scale features on surfaces. In this paper, compressed sensing (CS) theory is introduced into surface metrology to reduce data acquisition. We first argue that CS is naturally suited to surface measurement and analysis. Then, a geometric-wavelet-based recovery algorithm is proposed for scratched and textured surfaces, obtained by solving a convex optimization problem with sparsity constraints given by the curvelet transform and the wave atom transform. In this framework of compressed measurement, one can stably recover compressible surfaces from incomplete and inaccurate random measurements using the recovery algorithm; the necessary number of measurements is far smaller than that required by traditional methods, which must obey the Shannon sampling theorem. Compressed metrology essentially shifts online measurement cost to the computational cost of offline nonlinear recovery. By combining the ideas of sampling, sparsity, and compression, the proposed method suggests a new acquisition protocol and leads to the building of new measurement instruments. This is especially significant for measurements that are limited by physical constraints or are extremely expensive. Experiments on engineering and bioengineering surfaces demonstrate the good performance of the proposed method.
Parametric dictionary learning for modeling EAP and ODF in diffusion MRI
In: Lecture Notes in Computer Science series, MICCAI 2012
Cited by 2 (0 self)
In this work, we propose an original and efficient approach that exploits the ability of compressed sensing (CS) to recover diffusion MRI (dMRI) signals from a limited number of samples while efficiently recovering important diffusion features such as the Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF). Some attempts to sparsely represent the diffusion signal have already been made. However, and contrary to what has been presented in CS dMRI so far, in this work we propose and advocate the use of a well-adapted learned dictionary, and we show that it leads to a sparser signal estimation as well as to an efficient reconstruction of very important diffusion features. We first propose to learn and design a sparse and parametric dictionary from a set of training diffusion data. Then, we propose a framework to analytically estimate two important diffusion features, the EAP and the ODF, in closed form. Various experiments on synthetic, phantom, and human brain data have been carried out, and promising results with a reduced number of atoms have been obtained for diffusion signal reconstruction, thus illustrating the added value of our method over state-of-the-art SHORE- and SPF-based approaches.
Improved Iterative Curvelet Thresholding for Compressed Sensing
Cited by 1 (0 self)
A new theory named compressed sensing, for simultaneous sampling and compression of signals, has become popular in the signal processing, imaging, and applied mathematics communities. In this paper, we present improved and accelerated iterative curvelet thresholding methods for compressed sensing reconstruction in the field of remote sensing. Several recent strategies, including Bioucas-Dias and Figueiredo's two-step iteration, Beck and Teboulle's fast method, and Osher et al.'s linearized Bregman iteration, are applied to iterative curvelet thresholding in order to accelerate convergence. Advantages and disadvantages of the proposed methods are studied using the so-called pseudo-Pareto curve in numerical experiments on single-pixel remote sensing and Fourier-domain random imaging.
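Of the acceleration strategies named above, Beck and Teboulle's fast method (FISTA) is the easiest to sketch. This is a generic illustration, not the paper's curvelet code: the thresholding here acts directly on the coefficients of x, with the identity basis standing in for the curvelet transform, and the name `fista` is assumed.

```python
import numpy as np

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, mu, n_iter=100):
    """Beck-Teboulle acceleration of iterative soft thresholding for
    min_x mu*||x||_1 + 0.5*||Ax - b||^2: a plain thresholded gradient
    step applied at an extrapolated point y, with a momentum sequence
    t_k that yields an O(1/k^2) rate instead of ISTA's O(1/k)."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = shrink(y - A.T @ (A @ y - b) / L, mu / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x
```

Swapping the momentum rule for a two-step (TwIST-style) or linearized Bregman update changes only the lines around the shrinkage call, which is why these accelerations transfer directly to curvelet thresholding.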