Results 1-10 of 39
A Singular Value Thresholding Algorithm for Matrix Completion, 2008
Cited by 192 (12 self)
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and it arises in many important applications, such as the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices {X^k, Y^k}; at each step it mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. Two remarkable features make this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both facts allow the algorithm to use very minimal storage and to keep the computational cost of each iteration low.
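The singular value soft-thresholding step described in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration under the standard statement of the SVT iteration, not the authors' implementation; the function names (`svt_step`, `svt`) and the parameter defaults are made up for the example.

```python
import numpy as np

def svt_step(Y, tau):
    """Soft-threshold the singular values of Y at level tau:
    X = U * max(S - tau, 0) * V^T, the proximal map of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def svt(M, mask, tau=5.0, delta=1.0, n_iters=200):
    """Sketch of the SVT iteration for matrix completion.
    mask is 1 on observed entries of M and 0 elsewhere."""
    Y = np.zeros_like(M)
    for _ in range(n_iters):
        X = svt_step(Y, tau)            # X^k = shrink(Y^{k-1}, tau)
        Y = Y + delta * mask * (M - X)  # gradient step on observed entries only
    return X
```

Note that `Y` accumulates updates only on the observed entries, which is why the matrix being decomposed stays sparse in the partially observed setting the abstract describes.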
Proximal Splitting Methods in Signal Processing
Cited by 85 (20 self)
The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems and, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.
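As a concrete instance of the operators this abstract surveys: the proximity operator of the scaled ℓ1 norm is componentwise soft-thresholding, and plugging it into forward-backward splitting yields the classical iterative soft-thresholding scheme. A minimal NumPy sketch, with illustrative names and parameter choices (the step size must be small enough relative to ‖AᵀA‖ for convergence):

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of lam * ||.||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_gradient(A, b, lam, step, n_iters=200):
    """Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                   # forward step on the smooth term
        x = prox_l1(x - step * grad, step * lam)   # backward (proximal) step
    return x
```

The split is the point: the smooth term is handled by an explicit gradient step and the nonsmooth term only through its prox, which is exactly the pattern the proximal splitting framework generalizes.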
Fast Linearized Bregman Iteration for Compressed Sensing and Sparse Denoising, UCLA CAM Reports, 2008
Cited by 56 (16 self)
Finding a solution of a linear equation Au = f with various minimization properties arises in many applications. One such application is compressed sensing, where an efficient and robust-to-noise algorithm to find a minimal ℓ1-norm solution is needed. This means that the algorithm should be tailored for large-scale, completely dense matrices A for which Au and A^T u can be computed by fast transforms, and for which the solution sought is sparse. Recently, a simple and fast algorithm based on linearized Bregman iteration was proposed in [28, 32] for this purpose. This paper analyzes the convergence of linearized Bregman iterations and the minimization properties of their limit. Based on this analysis, we also derive a new algorithm that is proven to converge with a rate. Furthermore, the new algorithm is as simple and fast as the algorithm given in [28, 32] in approximating a minimal ℓ1-norm solution of Au = f, as shown by numerical simulations. Hence, it can be used as another efficient tool in compressed sensing.
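The linearized Bregman iteration referred to here is a two-line update; the sketch below follows its standard statement for basis pursuit, with illustrative parameters (convergence requires suitably small delta relative to ‖AAᵀ‖) and made-up function names:

```python
import numpy as np

def soft(x, mu):
    """Componentwise soft-thresholding at level mu."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, f, mu=1.0, delta=1.0, n_iters=500):
    """Linearized Bregman iteration for basis pursuit, min ||u||_1 s.t. Au = f:
        v^{k+1} = v^k + A^T (f - A u^k)
        u^{k+1} = delta * soft(v^{k+1}, mu)
    Each step costs one multiply by A and one by A^T, so fast transforms apply.
    """
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(n_iters):
        v = v + A.T @ (f - A @ u)
        u = delta * soft(v, mu)
    return u
```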
Convergence of the Linearized Bregman Iteration for ℓ1-norm Minimization, 2008
Cited by 21 (7 self)
One of the key steps in compressed sensing is to solve the basis pursuit problem min_{u∈R^n} {‖u‖₁ : Au = f}. Bregman iteration was used very successfully to solve this problem in [40]. A simple and fast iterative algorithm based on linearized Bregman iteration was also proposed in [40], and it is described in detail, with numerical simulations, in [35]. A convergence analysis of the smoothed version of this algorithm was given in [11]. The purpose of this paper is to prove that the linearized Bregman iteration proposed in [40] for the basis pursuit problem indeed converges.
Linearized Bregman Iterations for Frame-Based Image Deblurring, 2008
Cited by 16 (8 self)
Real images usually have sparse approximations under tight frame systems derived from framelets, the oversampled discrete (windowed) cosine transform, or the Fourier transform. In this paper, we propose a method for image deblurring in tight frame domains. The problem is reduced to finding a sparse solution of a system of linear equations whose coefficient matrix is rectangular. Then a modified version of the linearized Bregman iteration proposed and analyzed in [10, 11, 43, 50] can be applied. Numerical examples show that the method is very simple to implement, robust to noise, and effective for image deblurring.
Convergence analysis of tight framelet approach for missing data recovery, Adv. Comput. Math. xx
Cited by 15 (5 self)
How to recover missing data from incomplete samples is a fundamental problem in mathematics, with a wide range of applications in image analysis and processing. Although many existing methods, e.g. various data smoothing methods and PDE approaches, are available in the literature, there is always a need for new methods leading to the best solution according to various cost functionals. In this paper, we propose an iterative algorithm based on tight framelets for image recovery from incomplete observed data. The algorithm is motivated by our framelet algorithm used in high-resolution image reconstruction, and it exploits the redundancy in tight framelet systems. We prove the convergence of the algorithm and also give its convergence factor. Furthermore, we derive the minimization properties of the algorithm and explore the roles of the redundancy of tight framelet systems. As an illustration of the effectiveness of the algorithm, we give an application to impulse noise removal.
Dual wavelet frames and Riesz bases in Sobolev spaces, 2007
Cited by 9 (7 self)
This paper generalizes the mixed extension principle in L₂(R^d) of [50] to a pair of dual Sobolev spaces H^s(R^d) and H^{−s}(R^d). In terms of masks for φ, ψ^1, …, ψ^L ∈ H^s(R^d) and φ̃, ψ̃^1, …, ψ̃^L ∈ H^{−s}(R^d), simple sufficient conditions are given to ensure that (X^s(φ; ψ^1, …, ψ^L), X^{−s}(φ̃; ψ̃^1, …, ψ̃^L)) forms a pair of dual wavelet frames in (H^s(R^d), H^{−s}(R^d)), where X^s(φ; ψ^1, …, ψ^L) := {φ(· − k) : k ∈ Z^d} ∪ {2^{j(d/2−s)} ψ^ℓ(2^j · − k) : j ∈ N₀, k ∈ Z^d, ℓ = 1, …, L}. For s > 0, the key of this general mixed extension principle is the regularity of φ, ψ^1, …, ψ^L and the vanishing moments of ψ̃^1, …, ψ̃^L, while allowing φ̃, ψ̃^1, …, ψ̃^L to be tempered distributions not in L₂(R^d) and ψ^1, …, ψ^L to have no vanishing moments. Thus the systems X^s(φ; ψ^1, …, ψ^L) and X^{−s}(φ̃; ψ̃^1, …, ψ̃^L) may not be normalizable into frames of L₂(R^d). As an example, we show that {2^{j(1/2−s)} B_m(2^j · − k) : j ∈ N₀, k ∈ Z} is a wavelet frame in H^s(R) for any 0 < s < m − 1/2, where B_m is the B-spline of order m. This simple construction is also applied to multivariate box splines to obtain wavelet frames with short supports, noting that it is hard to construct nonseparable multivariate wavelet frames with small supports. Applying this general mixed extension principle, we obtain and characterize dual Riesz bases (X^s(φ; ψ^1, …, ψ^L), X^{−s}(φ̃; ψ̃^1, …, ψ̃^L)) in the Sobolev spaces (H^s(R^d), H^{−s}(R^d)). For example, all interpolatory wavelet systems in [25] generated by an interpolatory refinable function φ ∈ H^s(R) with s > 1/2 are Riesz bases of the Sobolev space H^s(R). This general mixed extension principle also naturally leads to a characterization of the Sobolev norm of a function in terms of the weighted norm of its wavelet coefficient (decomposition) sequence, without requiring that the dual wavelet frames be in L₂(R^d), which is quite different from other approaches in the literature.
Simultaneously Inpainting in Image and Transformed Domains
Cited by 9 (4 self)
In this paper, we focus on the restoration of images that have incomplete data in the image domain, in the transformed domain, or in both. The transform used can be any orthonormal or tight frame transform, such as orthonormal wavelets, tight framelets, the discrete Fourier transform, the Gabor transform, the discrete cosine transform, or the discrete local cosine transform. We propose an iterative algorithm that can restore the incomplete data in both domains simultaneously. We prove the convergence of the algorithm and derive the optimality properties of its limit. The algorithm generalizes, unifies, and simplifies the inpainting algorithm in image domains given in [8] and the inpainting algorithms in the transformed domains given in [7, 16, 19]. Finally, applications of the new algorithm to super-resolution image reconstruction with different zooms are presented.
Dualization of signal recovery problems, 2009
Cited by 8 (3 self)
In convex optimization, duality theory can sometimes lead to simpler solution methods than those resulting from direct primal analysis. In this paper, this principle is applied to a class of composite variational problems arising, in particular, in signal recovery. These problems are not easily amenable to solution by current methods, but they feature Fenchel-Moreau-Rockafellar dual problems that can be solved by forward-backward splitting. The proposed algorithm simultaneously produces a sequence converging weakly to a dual solution and a sequence converging strongly to the primal solution. Our framework is shown to capture and extend several existing duality-based signal recovery methods and to be applicable to a variety of new problems beyond their scope.
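A toy illustration of the general principle that solving a dual problem can be computationally simpler than the primal (this is not the paper's algorithm, just a standard duality fact): by Moreau's decomposition, prox_f = Id − prox_{f*}, so the prox of λ‖·‖₁ can be evaluated through its Fenchel conjugate, the indicator of the ℓ∞ ball, whose prox is a simple clip.

```python
import numpy as np

def prox_l1_primal(z, lam=1.0):
    """Direct (primal) evaluation: componentwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def prox_l1_via_dual(z, lam=1.0):
    """Moreau decomposition prox_f(z) = z - prox_{f*}(z): the conjugate of
    lam*||.||_1 is the indicator of the l_inf ball of radius lam, whose prox
    is projection, i.e. a clip -- the dual route trades thresholding for a
    projection."""
    return z - np.clip(z, -lam, lam)
```

Both routes give the same operator; in harder composite problems only the dual object may have a tractable prox, which is the situation the paper exploits.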