Results 1–10 of 20
NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
, 2009
Abstract

Cited by 71 (1 self)
Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov’s smoothing technique, this paper introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov’s work, one of the key ideas of this algorithm is a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization, and …
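The scheme the abstract alludes to can be sketched in a few lines. The snippet below is an illustrative toy, not the NESTA code: it applies Nesterov's two-sequence method, with its characteristic averaging of iterates, to a penalized variant of the recovery problem (NESTA itself handles a constrained formulation). The function names and the parameters mu and lam are assumptions made for this demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def huber(x, mu):
    # Value of the mu-smoothed l1 norm (Huber function).
    return np.sum(np.where(np.abs(x) < mu, x**2 / (2 * mu), np.abs(x) - mu / 2))

def grad_huber(x, mu):
    # Gradient of the mu-smoothed l1 norm.
    return np.where(np.abs(x) < mu, x / mu, np.sign(x))

def nesta_like(A, b, mu=0.05, lam=10.0, n_iter=500):
    # Nesterov's two-sequence scheme on the smooth objective
    #   F(x) = huber_mu(x) + (lam/2) ||Ax - b||^2,
    # with the "subtle averaging" of iterates the abstract mentions.
    n = A.shape[1]
    L = 1.0 / mu + lam * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of grad F
    y = np.zeros(n)   # gradient-step sequence
    z = np.zeros(n)   # weighted-gradient (averaging) sequence
    for k in range(n_iter):
        x = 2.0 / (k + 3) * z + (k + 1) / (k + 3) * y   # averaging of the two sequences
        g = grad_huber(x, mu) + lam * A.T @ (A @ x - b)
        y = x - g / L                                   # gradient step
        z = z - (k + 1) / (2 * L) * g                   # accumulated weighted gradients
    return y

# Tiny demo: undersampled measurements of a sparse vector.
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 17, 42, 77]] = [1.0, -2.0, 1.5, -1.0]
b = A @ x_true
x_hat = nesta_like(A, b)
# Objective with the same (default) mu and lam as above.
obj = lambda x: huber(x, 0.05) + 5.0 * np.linalg.norm(A @ x - b) ** 2
```

Smaller mu tracks the true ℓ1 norm more closely at the price of a larger Lipschitz constant, hence smaller steps; NESTA's continuation over decreasing mu addresses exactly this trade-off.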
Compressive Sensing
, 2010
Abstract

Cited by 23 (8 self)
Compressive sensing is a new type of sampling theory, which predicts that sparse signals and images can be reconstructed from what was previously believed to be incomplete information. As a main feature, efficient algorithms such as ℓ1-minimization can be used for recovery. The theory has many potential applications in signal processing and imaging. This chapter gives an introduction to and overview of both theoretical and numerical aspects of compressive sensing.
Optimally tuned iterative reconstruction algorithms for compressed sensing
 Selected Topics in Signal Processing
Abstract

Cited by 17 (4 self)
Abstract — We conducted an extensive computational experiment, lasting multiple CPU-years, to optimally select parameters for two important classes of algorithms for finding sparse solutions of underdetermined systems of linear equations. We make the optimally tuned implementations available at sparselab.stanford.edu; they run ‘out of the box’ with no user tuning: it is not necessary to select thresholds or know the likely degree of sparsity. Our class of algorithms includes iterative hard and soft thresholding with or without relaxation, as well as CoSaMP, subspace pursuit, and some natural extensions. As a result, our optimally tuned algorithms dominate such proposals. Our notion of optimality is defined in terms of phase transitions, i.e., we maximize the number of nonzeros at which the algorithm can successfully operate. We show that the phase transition is a well-defined quantity with our suite of random underdetermined linear systems. Our tuning gives the highest transition possible within each class of algorithms. We verify by extensive computation the robustness of our recommendations to the amplitude distribution of the nonzero coefficients as well as the matrix ensemble defining the underdetermined system. Our findings include: (a) for all algorithms, the worst amplitude distribution for nonzeros is generally the constant-amplitude random-sign distribution, where all nonzeros are the same amplitude; (b) various random matrix ensembles give the same phase transitions, but random partial isometries may give different transitions and require different tuning; (c) optimally tuned subspace pursuit dominates optimally tuned CoSaMP, particularly so when the system is almost square.
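As a rough illustration of the simplest member of the class studied here, iterative hard thresholding alternates a gradient step on the residual with keeping only a prescribed number of the largest coefficients. This is a minimal, untuned sketch (fixed step 1/‖A‖², no relaxation), not the optimally tuned implementation distributed at sparselab.stanford.edu; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def iht(A, b, k, n_iter=300):
    # Iterative hard thresholding: a gradient step on ||Ax - b||^2 followed
    # by keeping only the k largest-magnitude entries. Tuned variants add a
    # relaxation parameter and adaptive step sizes.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative fixed step
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (b - A @ x)
        small = np.argsort(np.abs(x))[:-k]   # indices of all but the k largest
        x[small] = 0.0
    return x

# Demo: recover a 3-sparse vector from 60 random measurements.
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[5, 33, 71]] = [3.0, -2.0, 2.0]
b = A @ x_true
x_hat = iht(A, b, k=3)
```

Note that k (the assumed sparsity) is an input here; the point of the paper's tuning is precisely to remove the need for the user to supply such parameters.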
Curvelet-Wavelet Regularized Split Bregman Iteration for Compressed Sensing
Abstract

Cited by 8 (3 self)
Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing allows this signal to be recovered from far fewer samples than the Shannon-Nyquist theory requires. Many images can be sparsely approximated in expansions of suitable frames such as wavelets, curvelets, wave atoms, and others. Generally, wavelets represent point-like features well, while curvelets represent line-like features well. For a suitable recovery of images, we propose models that contain weighted sparsity constraints in two different frames. Given the incomplete measurements f = Φu + ɛ with the measurement matrix Φ ∈ R^{K×N}, K ≪ N, we consider a jointly sparsity-constrained optimization problem of the form argmin_u { ‖ΛcΨcu‖₁ + ‖ΛwΨwu‖₁ + (1/2)‖f − Φu‖₂² }. Here Ψc and Ψw are the transform matrices corresponding to the two frames, and the diagonal matrices Λc, Λw contain the weights for the frame coefficients. We present efficient iteration methods to solve the optimization problem, based on alternating split Bregman algorithms. The convergence of the proposed iteration schemes is proved by showing that they can be understood as special cases of the Douglas-Rachford splitting algorithm. Numerical experiments for compressed-sensing-based Fourier-domain random imaging show good performance of the proposed curvelet-wavelet regularized split Bregman (CWSpB) methods, where we particularly use a combination of wavelet and curvelet coefficients as sparsity constraints.
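To make the structure of such an iteration concrete, here is a sketch of the alternating split Bregman (equivalently, ADMM) updates for the single-frame special case Ψ = I with unit weights. The paper's curvelet-wavelet models add a second transform and the weight matrices Λc, Λw, but the same quadratic-solve / shrinkage / Bregman-update pattern appears; parameter names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(v, t):
    # Proximal map of t*||.||_1: the shrinkage step inside split Bregman.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_l1(Phi, f, mu=1.0, n_iter=100):
    # Alternating split Bregman for  argmin_u ||u||_1 + (1/2)||f - Phi u||_2^2,
    # i.e. the single-frame case with Psi = I and unit weights,
    # via the splitting d = u.
    n = Phi.shape[1]
    d = np.zeros(n)    # splitting variable, d = u at convergence
    bb = np.zeros(n)   # Bregman (dual) variable
    u = np.zeros(n)
    M = Phi.T @ Phi + mu * np.eye(n)   # system matrix of the u-subproblem
    Pf = Phi.T @ f
    for _ in range(n_iter):
        u = np.linalg.solve(M, Pf + mu * (d - bb))   # quadratic u-subproblem
        d = soft_threshold(u + bb, 1.0 / mu)         # l1 shrinkage subproblem
        bb = bb + u - d                              # Bregman update
    return u

# Demo: sparse recovery from random measurements.
Phi = rng.standard_normal((50, 128)) / np.sqrt(50)
u_true = np.zeros(128)
u_true[[10, 40, 90]] = [4.0, -3.0, 3.0]
f = Phi @ u_true
u_hat = split_bregman_l1(Phi, f)
obj = lambda u: np.sum(np.abs(u)) + 0.5 * np.linalg.norm(f - Phi @ u) ** 2
```

With two frames, the shrinkage step splits into one soft-thresholding per frame, each with its own diagonal weights, which is what makes the alternating scheme attractive.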
Relaxed Conditions for Sparse Signal Recovery with General Concave Priors
Abstract

Cited by 5 (0 self)
Compressed sensing challenges the convention of modern digital signal processing by establishing that exact signal reconstruction is possible for many problems where the sampling rate falls well below the Nyquist limit. Following the landmark works of Candès et al. and Donoho on the performance of ℓ1-minimization models for signal reconstruction, several authors demonstrated that certain nonconvex reconstruction models consistently outperform the convex ℓ1 model in practice at very low sampling rates, despite the fact that no global minimum can be theoretically guaranteed. Nevertheless, there has been little theoretical investigation into the performance of these nonconvex models. In this work, a notion of weak signal recoverability is introduced and the performance of nonconvex reconstruction models employing general concave metric priors is investigated under this model. The sufficient conditions for establishing weak signal recoverability are shown to relax substantially as the prior functional is parameterized to more closely resemble the targeted ℓ0 model, offering new insight into the empirical performance of this general class of reconstruction methods. Examples of relaxation trends are shown for several different prior models.
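One standard numerical route to such concave priors (not necessarily the one used in this work) is iteratively reweighted least squares: the concave ℓp objective with p < 1 is majorized at each step by a weighted quadratic whose minimizer subject to Ax = b has a closed form. A sketch under that assumption, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

def irls_lp(A, b, p=0.5, n_iter=60):
    # IRLS for  min sum_i |x_i|^p  subject to  Ax = b  (concave prior, p < 1).
    # Each step solves a weighted minimum-norm problem in closed form:
    #   x = W A^T (A W A^T)^{-1} b,  W = diag((x_i^2 + eps)^(1 - p/2)),
    # and the smoothing eps is annealed toward zero.
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from the min-l2 solution
    eps = 1.0
    for k in range(n_iter):
        w = (x ** 2 + eps) ** (1 - p / 2)      # weights from the current iterate
        AW = A * w                             # A @ diag(w), via broadcasting
        x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
        if k % 10 == 9:
            eps = max(eps / 10, 1e-9)          # anneal the smoothing
    return x

# Demo: 4-sparse vector, 40 measurements, p = 1/2 prior.
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[7, 21, 55, 90]] = [1.0, -1.0, 2.0, 0.5]
b = A @ x_true
x_hat = irls_lp(A, b)
```

As p decreases toward 0 the weights penalize small entries ever more aggressively, mirroring the abstract's point that the prior interpolates toward the ℓ0 model; the annealed eps is what keeps the nonconvex iteration from stalling at a poor stationary point.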
A numerical exploration of compressed sampling recovery
 LINEAR ALGEBRA AND ITS APPLICATIONS
, 2010
Rare-Allele Detection Using Compressed Se(que)nsing, arXiv
, 2009
Abstract

Cited by 3 (1 self)
Detection of rare variants by resequencing is important for the identification of individuals carrying disease variants. Rapid sequencing by new technologies enables low-cost resequencing of target regions, although it is still prohibitive to test more than a few individuals. In order to improve cost trade-offs, it has recently been suggested to apply pooling designs which enable the detection of carriers of rare alleles in groups of individuals. However, this was shown to hold only for a relatively low number of individuals in a pool, and requires the design of pooling schemes for particular cases. We propose a novel pooling design, based on a compressed sensing approach, which is general, simple, and efficient. We model the experimental procedure and show via computer simulations that it enables the recovery of rare allele carriers out of larger groups than were possible before, especially in situations where high coverage is obtained for each individual. Our approach can also be combined with barcoding techniques to enhance performance and provide a feasible solution based on current resequencing costs. For example, when targeting a small enough genomic region (∼100 base pairs) and using only ∼10 sequencing lanes and ∼10 distinct barcodes, one can recover the identity of 4 rare allele carriers out of a population of over 4000 individuals.
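A toy simulation conveys the shape of the idea, though the paper's actual design and decoder differ: pool membership is a random 0/1 matrix, the carriers form a sparse indicator vector, and an off-the-shelf ℓ1 solver (here a simple accelerated proximal gradient loop) recovers them from the pooled counts. All sizes and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

n_people, n_pools, n_carriers = 200, 40, 3
# Random 0/1 pooling design: each person enters each pool with probability 1/2.
M = rng.integers(0, 2, size=(n_pools, n_people)).astype(float)
x_true = np.zeros(n_people)
carriers_true = rng.choice(n_people, n_carriers, replace=False)
x_true[carriers_true] = 1.0
y = M @ x_true   # noise-free carrier count per pool

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=1000):
    # Accelerated proximal gradient for  min_x 0.5||Ax - b||^2 + lam ||x||_1.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Decode: the largest entries of the l1 estimate flag the carriers.
x_hat = fista_lasso(M, y, lam=0.5)
carriers_found = set(np.argsort(x_hat)[-n_carriers:].tolist())
```

In the realistic setting the measurements are noisy read counts rather than exact sums, which is why the paper models the sequencing procedure explicitly instead of assuming exact linear measurements as above.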
Astronomical image restoration using variational methods and model combination, Statistical Methodology 9
, 2012
Abstract

Cited by 3 (0 self)
In this work we develop a variational framework for the combination of several prior models in Bayesian image restoration and apply it to astronomical images. Since each combination of a given observation model and a prior model produces a different posterior distribution of the underlying image, the use of variational posterior distribution approximation on each posterior will produce as many posterior approximations as priors we want to combine. A unique approximation is obtained here by finding the distribution on the unknown image given the observations that minimizes a linear convex combination of the Kullback-Leibler divergences associated with each posterior …
PARNES: A RAPIDLY CONVERGENT ALGORITHM FOR ACCURATE RECOVERY OF SPARSE AND APPROXIMATELY SPARSE SIGNALS
Abstract

Cited by 2 (0 self)
In this article we propose an algorithm, parnes, for the basis pursuit denoise problem bp(σ), which approximately finds a minimum one-norm solution to an underdetermined least squares problem. parnes (1) combines what we think are the best features of the currently available methods spgl1 [35] and nesta [3], and (2) incorporates a new improvement that exhibits linear convergence under the assumption of the restricted isometry property (rip). As with spgl1, our approach ‘probes the Pareto frontier’ and determines a solution to the bpdn problem bp(σ) by exploiting the relation between the lasso problem ls(τ) and bp(σ) given by their Pareto curve. As with nesta, we rely on the accelerated proximal gradient method proposed by Yu. Nesterov [27, 26], which takes a remarkable O(√(L/ε)) iterations to come within ε > 0 of the optimal value, where L is the Lipschitz constant of the gradient of the objective function. Furthermore, we introduce an ‘outer loop’ that regularly updates the prox center; Nesterov’s accelerated proximal gradient method then becomes the ‘inner loop’. The restricted isometry property, together with the Lipschitz differentiability of our objective function, permits us to derive a condition for switching between the inner and outer loops in a provably optimal manner. A by-product of our approach is a new algorithm for the lasso problem that also exhibits linear convergence under rip.
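The outer/inner structure can be mimicked with standard pieces. The sketch below is not parnes: the outer loop is a plain bisection on the penalty parameter so that the residual of the penalized solution matches σ (parnes instead exploits the Pareto curve and updates the prox center), and the inner loop is a generic accelerated proximal gradient method. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso(A, b, lam, n_iter=400):
    # Inner loop: accelerated proximal gradient for the penalized problem
    #   min_x 0.5||Ax - b||^2 + lam ||x||_1.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def bpdn(A, b, sigma, n_outer=30):
    # Outer loop: bisect on lam until the residual of the penalized solution
    # matches sigma; the residual is monotone increasing in lam.
    lo, hi = 0.0, np.linalg.norm(A.T @ b, np.inf)   # at hi the solution is 0
    for _ in range(n_outer):
        lam = (lo + hi) / 2
        x = lasso(A, b, lam)
        if np.linalg.norm(A @ x - b) > sigma:
            hi = lam   # residual too large: shrink less
        else:
            lo = lam   # residual within budget: shrink more
    return lasso(A, b, lo)

# Demo: bp(sigma) on a sparse, noise-free instance with residual budget 0.1.
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 30, 72]] = [2.0, -1.0, 1.5]
b = A @ x_true
x_hat = bpdn(A, b, sigma=0.1)
```

Each bisection step wastes the work of the previous inner solve; the point of probing the Pareto curve, as spgl1 and parnes do, is to replace this with Newton-type root-finding that needs far fewer (warm-started) inner solves.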