Results 1–10 of 45
Robust Recovery of Subspace Structures by Low-Rank Representation
Cited by 128 (24 self)
Abstract:
In this work we address the subspace recovery problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to segment the samples into their respective subspaces and to correct the possible errors as well. To this end, we propose a novel method termed Low-Rank Representation (LRR), which seeks the lowest-rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that LRR solves the subspace recovery problem well: when the data is clean, we prove that LRR exactly captures the true subspace structures; for data contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outliers as well; for data corrupted by arbitrary errors, LRR can also approximately recover the row space with theoretical guarantees. Since subspace membership is provably determined by the row space, these results further imply that LRR can perform robust subspace segmentation and error correction in an efficient way.
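The "lowest-rank" criterion in methods of this kind is typically relaxed to the nuclear norm, whose proximal operator is singular value thresholding. A minimal sketch of that building block, not the paper's full LRR algorithm; the test matrix and threshold here are illustrative:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: soft-threshold the spectrum of X.

    This is the proximal operator of the nuclear norm, the standard
    convex surrogate for rank in low-rank recovery methods.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Illustrative data: a rank-2 matrix plus small noise. Thresholding
# zeroes out the tiny noise singular values and keeps the two large ones.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
X_hat = svt(L + 0.01 * rng.standard_normal((50, 50)), tau=1.0)
print(np.linalg.matrix_rank(X_hat, tol=1e-6))
```

Running svt once is only one step; LRR-style solvers apply it repeatedly inside an augmented-Lagrangian loop.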
Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data
Int. Symp. Biomedical Imaging, 2009
Cited by 51 (2 self)
Abstract:
Compressive sensing is the reconstruction of sparse images or signals from very few samples by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization further reduces the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
Index Terms — Magnetic resonance imaging, image reconstruction, compressive sensing, nonconvex optimization.
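A common nonconvex device in this line of work is a p-shrinkage operator generalizing soft thresholding. A minimal sketch with illustrative parameter values, not the paper's exact algorithm; p = 1 recovers the ordinary soft threshold used in convex solvers:

```python
import numpy as np

def p_shrink(v, tau, p):
    """p-shrinkage, a nonconvex generalization of soft thresholding.

    p = 1 gives the ordinary soft-threshold operator; p < 1 penalizes
    large coefficients less severely while still zeroing small ones.
    """
    mag = np.abs(v)
    safe = np.where(mag > 0, mag, 1.0)  # avoid 0 ** (negative exponent)
    shrunk = np.maximum(mag - tau ** (2.0 - p) * safe ** (p - 1.0), 0.0)
    return np.sign(v) * shrunk

# Illustrative values: large entries shrink less under p = 0.5 than
# under p = 1, and small entries are still thresholded to zero.
v = np.array([4.0, 0.5, -3.0, 0.0])
print(p_shrink(v, tau=1.0, p=0.5))
print(p_shrink(v, tau=1.0, p=1.0))
```

In iterative-shrinkage reconstruction, this operator simply replaces the soft threshold in each iteration.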
An efficient TV-L1 algorithm for deblurring multichannel images corrupted by impulsive noise
SIAM J. Sci. Comput., 2009
Cited by 50 (8 self)
Abstract:
We extend the alternating minimization algorithm recently proposed in [38, 39] to the case of recovering blurry multichannel (color) images corrupted by impulsive rather than Gaussian noise. The algorithm minimizes the sum of a multichannel extension of total variation (TV), either isotropic or anisotropic, and a data fidelity term measured in the L1-norm. We derive the algorithm by applying the well-known quadratic penalty function technique and prove attractive convergence properties, including finite convergence for some variables and global q-linear convergence. Under periodic boundary conditions, the main computational requirements of the algorithm are fast Fourier transforms and a low-complexity Gaussian elimination procedure. Numerical results on images with different blurs and impulsive noise are presented to demonstrate the efficiency of the algorithm. In addition, it is numerically compared to an algorithm recently proposed in [20] that uses a linear program and an interior point method for recovering grayscale images.
Analysis and generalizations of the linearized Bregman method
SIAM J. Imaging Sci., 2010
Cited by 36 (9 self)
Abstract:
This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smoothing parameter α is greater than a certain value. The analysis is based on showing that the linearized Bregman algorithm is equivalent to gradient descent applied to a certain dual formulation. This result motivates generalizations of the algorithm enabling the use of gradient-based optimization techniques such as line search, Barzilai–Borwein, limited-memory BFGS (L-BFGS), nonlinear conjugate gradient, and Nesterov's methods. In the numerical simulations, the two proposed implementations, one using Barzilai–Borwein steps with nonmonotone line search and the other using L-BFGS, gave more accurate solutions in much shorter times than the basic implementation of the linearized Bregman method with a so-called kicking technique.
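The basic linearized Bregman iteration being analyzed can be sketched in its common two-line form: accumulate the residual in a dual variable, then soft-threshold and scale it. The parameter names mu and delta below are illustrative, not the paper's notation, and for a general A the step size must satisfy the usual bound (roughly delta < 2 / ||AAᵀ||):

```python
import numpy as np

def linearized_bregman(A, b, mu, delta, iters):
    """Linearized Bregman iteration for basis pursuit: min ||x||_1 s.t. Ax = b."""
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (b - A @ x)                                  # dual (Bregman) update
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)   # soft shrinkage
    return x

# Sanity check: with A = I the iterates reach b exactly after a few steps.
b = np.array([1.0, 0.0, -2.0])
x = linearized_bregman(np.eye(3), b, mu=0.1, delta=1.0, iters=100)
print(x)
```

The generalizations discussed in the paper replace the plain dual update above with accelerated gradient steps on the dual objective.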
Parametric Maximum Flow Algorithms for Fast Total Variation Minimization
2007
Cited by 34 (4 self)
Abstract:
This report studies the global minimization of discretized total variation (TV) energies with an L¹ or L² fidelity term using parametric maximum flow algorithms. The TV-L² model [36], also known as the Rudin-Osher-Fatemi (ROF) model, is suitable for restoring images contaminated by Gaussian noise, while the TV-L¹ model [2, 29, 7, 42] is able to remove impulsive noise from greyscale images and to perform multiscale decompositions of them. For large-scale applications such as those in medical image (pre)processing, we propose here fast and memory-efficient algorithms, based on a parametric maximum flow algorithm [19] and the minimum s-t cut representation of TV-based energy functions [26, 17]. Preliminary numerical results on large-scale two-dimensional CT and three-dimensional brain MRI images that illustrate the effectiveness of our approaches are presented.
GROUP SPARSE OPTIMIZATION BY ALTERNATING DIRECTION METHOD
2011
Cited by 25 (3 self)
Abstract:
This paper proposes efficient algorithms for group sparse optimization with mixed ℓ2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and from the group Lasso problem in statistics and machine learning. It is known that encoding group information in addition to sparsity leads to better signal recovery/feature selection. The ℓ2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional ℓ1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the ℓ2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, strong stability, and robustness.
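The closed-form subproblem at the heart of such ADM/variable-splitting schemes is the proximal operator of the mixed ℓ2,1 norm, i.e. group-wise soft thresholding. A minimal sketch for non-overlapping groups; the group layout and threshold are illustrative:

```python
import numpy as np

def group_shrink(v, groups, tau):
    """Proximal operator of tau * sum_g ||v_g||_2 (the mixed l2,1 norm).

    Each group is scaled toward zero; groups whose l2 norm is below tau
    are zeroed entirely, which is what produces group sparsity.
    """
    out = np.zeros_like(v, dtype=float)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > tau:
            out[g] = (1.0 - tau / norm) * v[g]
    return out

# The first group (norm 5) survives and shrinks; the second (norm ~0.22)
# falls below the threshold and is removed as a whole.
v = np.array([3.0, 4.0, 0.1, 0.2])
print(group_shrink(v, groups=[[0, 1], [2, 3]], tau=1.0))
```

Inside an ADM loop this operator is applied once per iteration to the split variable, with the remaining subproblems handled by linear algebra.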
An Efficient Algorithm for Total Variation Regularization with Applications to the Single Pixel Camera and Compressive Sensing
2009
AUGMENTED LAGRANGIAN METHOD FOR TOTAL VARIATION RESTORATION WITH NONQUADRATIC FIDELITY
Cited by 21 (3 self)
Abstract:
Recently the augmented Lagrangian method has been successfully applied to image restoration with L2 fidelity. In this paper we extend the method to total variation (TV) restoration models with non-quadratic fidelities. We first introduce the method and present the iterative algorithm for TV restoration with a quite general fidelity. In each iteration, three subproblems need to be solved, two of which can be solved very efficiently via an FFT implementation or a closed-form solution. In general, the third subproblem needs iterative solvers. We then apply our method to TV restoration with L1 and Kullback-Leibler (KL) fidelities, two common and important data terms for deblurring images corrupted by impulsive noise and Poisson noise, respectively. For these typical fidelities, we show that the third subproblem also has a closed-form solution and thus can be solved efficiently. In addition, convergence analysis of these algorithms is given, which cannot be obtained by previous analysis techniques.
A NONCONVEX ADMM ALGORITHM FOR GROUP SPARSITY WITH SPARSE GROUPS
Cited by 12 (2 self)
Abstract:
We present an efficient algorithm for computing sparse representations whose nonzero coefficients can be divided into groups, few of which are nonzero. In addition to this group sparsity, we further impose that the nonzero groups themselves be sparse. We use a nonconvex optimization approach for this purpose, and an efficient ADMM algorithm to solve the nonconvex problem. The efficiency comes from using a novel shrinkage operator, one that minimizes nonconvex penalty functions for enforcing sparsity and group sparsity simultaneously. Our numerical experiments show that combining sparsity and group sparsity improves signal reconstruction accuracy compared with either property alone. We also find that using nonconvex optimization significantly improves results in comparison with convex optimization.
Index Terms — Sparse representations, group sparsity, shrinkage, nonconvex optimization, alternating direction method of multipliers.
A Fast TV-L1-L2 Minimization Algorithm for Signal Reconstruction from Partial Fourier Data
Cited by 12 (3 self)
Abstract:
Recent compressive sensing results show that it is possible to accurately reconstruct certain compressible signals from relatively few linear measurements by solving nonsmooth convex optimization problems. In this paper, we propose a simple and fast algorithm for signal reconstruction from partial Fourier data. The algorithm minimizes the sum of three terms corresponding to total variation, ℓ1-norm regularization, and least-squares data fitting. It uses an alternating minimization scheme in which the main computation involves shrinkage and fast Fourier transforms (FFTs), or alternatively discrete cosine transforms (DCTs) when the available data are in the DCT domain. We analyze the convergence properties of this algorithm and compare its numerical performance with two recently proposed algorithms. Our numerical simulations on recovering magnetic resonance images (MRI) indicate that the proposed algorithm is highly efficient, stable, and robust.
Index Terms — Compressive sensing, compressed sensing, MRI, MRI reconstruction, fast Fourier transform, discrete cosine transform.