Results 1-10 of 123
Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems
 IEEE Transactions on Image Processing, 2009
"... This paper studies gradientbased schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for the constrained TVbased image deburring problem. To achieve this task we combine an acceleration of ..."
Abstract

Cited by 67 (1 self)
 Add to MetaCart
This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for the constrained TV-based image deblurring problem. To achieve this task we combine an acceleration of the well-known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA) we have recently introduced. The resulting gradient-based algorithm shares a remarkable simplicity together with a proven global rate of convergence which is significantly better than currently known gradient-projection-based methods. Our results are applicable to both the anisotropic and isotropic discretized TV functionals. Initial numerical results demonstrate the viability and efficiency of the proposed algorithms on image deblurring problems with box constraints.
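The FISTA scheme the abstract builds on can be illustrated in its generic form. The sketch below applies plain FISTA to a small l1-regularized least-squares problem; it is not the paper's monotone variant or its TV-dual machinery, and the function names (`fista`, `prox_g`) and the lasso test problem are assumptions for illustration only.

```python
import numpy as np

def fista(grad_f, prox_g, L, x0, iters=500):
    """Generic FISTA: minimize f(x) + g(x), with f smooth (L-Lipschitz
    gradient) and g handled through its proximal operator."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)       # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum extrapolation
        x, t = x_new, t_new
    return x

# Illustration on a tiny lasso problem: f(x) = 0.5*||Ax - b||^2, g = lam*||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)  # soft threshold
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of grad f (spectral norm squared)
x_star = fista(grad_f, prox_g, L, np.zeros(5))
```

The prox of the l1 term is a soft threshold; swapping in the (more involved) prox of a TV term would recover the setting of the paper.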
Solving monotone inclusions via compositions of nonexpansive averaged operators
 Optimization, 2004
"... A unified fixed point theoretic framework is proposed to investigate the asymptotic behavior of algorithms for finding solutions to monotone inclusion problems. The basic iterative scheme under consideration involves nonstationary compositions of perturbed averaged nonexpansive operators. The analys ..."
Abstract

Cited by 63 (21 self)
 Add to MetaCart
A unified fixed point theoretic framework is proposed to investigate the asymptotic behavior of algorithms for finding solutions to monotone inclusion problems. The basic iterative scheme under consideration involves nonstationary compositions of perturbed averaged nonexpansive operators. The analysis covers proximal methods for common zero problems as well as various splitting methods for finding a zero of the sum of monotone operators.
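The simplest stationary instance of the averaged-operator iterations analyzed here is the Krasnosel'skii-Mann scheme. The sketch below (a composition of two projections in R^2; the names `km_iteration` and `T` are chosen for illustration) is a minimal special case, not the paper's general nonstationary, perturbed setting.

```python
import numpy as np

def km_iteration(T, x0, lam=0.5, iters=200):
    """Krasnosel'skii-Mann scheme x_{n+1} = x_n + lam*(T(x_n) - x_n)
    for a nonexpansive operator T (fixed relaxation, no perturbations)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + lam * (T(x) - x)
    return x

# Illustration: T is the composition of projections onto two lines in R^2;
# each projection is averaged, hence so is their composition, and the
# fixed points of T here are the points of the lines' intersection.
P1 = lambda x: np.array([x[0], 0.0])            # project onto the x-axis
u = np.array([1.0, 1.0]) / np.sqrt(2.0)
P2 = lambda x: np.dot(x, u) * u                 # project onto the line y = x
T = lambda x: P1(P2(x))
x_fix = km_iteration(T, np.array([3.0, -2.0]))  # converges to the origin
```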
Practical Aspects of the Moreau-Yosida Regularization I: Theoretical Properties, 1994
"... When computing the infimal convolution of a convex function f with the squared norm, one obtains the socalled MoreauYosida regularization of f . Among other things, this function has a Lipschitzian gradient. We investigate some more of its properties, relevant for optimization. Our main result co ..."
Abstract

Cited by 49 (2 self)
 Add to MetaCart
When computing the infimal convolution of a convex function f with the squared norm, one obtains the so-called Moreau-Yosida regularization of f. Among other things, this function has a Lipschitzian gradient. We investigate some more of its properties relevant for optimization. Our main result concerns second-order differentiability and is as follows. Under assumptions that are quite reasonable in optimization, the Moreau-Yosida regularization is twice differentiable if and only if f is twice differentiable as well. In the course of our development, we give some results of general interest in convex analysis. In particular, we establish a primal-dual relationship between the remainder terms in the first-order developments of a convex function and its conjugate.
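The regularization in question is the Moreau envelope f_lam(x) = min_y { f(y) + ||x - y||^2 / (2*lam) }. A minimal 1-D sketch (function names assumed for illustration) evaluates it numerically and checks it against the known closed form for f = |.|, the Huber function, which is smooth even though f is not.

```python
import numpy as np

def moreau_envelope(f, x, lam=1.0, grid=None):
    """Numerically evaluate the Moreau-Yosida regularization
    f_lam(x) = min_y f(y) + (x - y)^2 / (2*lam) over a fine 1-D grid."""
    if grid is None:
        grid = np.linspace(-10.0, 10.0, 200001)
    return np.min(f(grid) + (x - grid) ** 2 / (2.0 * lam))

def huber(x, lam=1.0):
    """Closed form of the Moreau envelope of f = |.|: quadratic near 0,
    linear in the tails, with a Lipschitzian derivative throughout."""
    return x * x / (2.0 * lam) if abs(x) <= lam else abs(x) - lam / 2.0

env = moreau_envelope(np.abs, 2.5, lam=1.0)   # should match huber(2.5) = 2.0
```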
A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery
 IEEE Journal of Selected Topics in Signal Processing, 2007
"... Abstract — Under consideration is the large body of signal recovery problems that can be formulated as the problem of minimizing the sum of two (not necessarily smooth) lower semicontinuous convex functions in a real Hilbert space. This generic problem is analyzed and a decomposition method is propo ..."
Abstract

Cited by 47 (14 self)
 Add to MetaCart
Under consideration is the large body of signal recovery problems that can be formulated as the problem of minimizing the sum of two (not necessarily smooth) lower semicontinuous convex functions in a real Hilbert space. This generic problem is analyzed and a decomposition method is proposed to solve it. The convergence of the method, which is based on the Douglas-Rachford algorithm for monotone operator splitting, is obtained under general conditions. Applications to non-Gaussian image denoising in a tight frame are also demonstrated. Index Terms: convex optimization, denoising, Douglas-Rachford, frame, nondifferentiable optimization, Poisson noise.
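The Douglas-Rachford iteration at the heart of the method can be sketched on a 1-D toy problem (not the paper's frame-based imaging setup; the names and the test problem are illustrative assumptions). Note that only the proximal operators of the two functions are needed, and neither function need be smooth.

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, iters=300):
    """Douglas-Rachford splitting for min f(x) + g(x):
    x = prox_{gamma f}(z);  z <- z + prox_{gamma g}(2x - z) - x."""
    z = float(z0)
    for _ in range(iters):
        x = prox_f(z, gamma)
        z = z + prox_g(2.0 * x - z, gamma) - x   # reflect, prox, average
    return prox_f(z, gamma)

# Illustration: minimize |x| + 0.5*(x - b)^2; the solution is the
# soft thresholding of b at level 1, i.e. x* = 2 for b = 3.
b = 3.0
prox_f = lambda v, g: np.sign(v) * max(abs(v) - g, 0.0)   # prox of g*|.|
prox_g = lambda v, g: (v + g * b) / (1.0 + g)             # prox of g*0.5*(.-b)^2
x_star = douglas_rachford(prox_f, prox_g, 0.0)
```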
A framelet-based image inpainting algorithm
 Applied and Computational Harmonic Analysis
"... Abstract. Image inpainting is a fundamental problem in image processing and has many applications. Motivated by the recent tight frame based methods on image restoration in either the image or the transform domain, we propose an iterative tight frame algorithm for image inpainting. We consider the c ..."
Abstract

Cited by 44 (23 self)
 Add to MetaCart
Image inpainting is a fundamental problem in image processing and has many applications. Motivated by the recent tight-frame-based methods on image restoration in either the image or the transform domain, we propose an iterative tight frame algorithm for image inpainting. We consider the convergence of this framelet-based algorithm by interpreting it as an iteration for minimizing a special functional. The proof of the convergence is under the framework of convex analysis and optimization theory. We also discuss the relationship of our method with other wavelet-based methods. Numerical experiments are given to illustrate the performance of the proposed algorithm. Key words: tight frame, inpainting, convex analysis.
1. Introduction. The problem of inpainting [2] occurs when part of the pixel data in a picture is missing or overwritten by other means. This arises, for example, in restoring ancient drawings, where a portion of the picture is missing or damaged due to aging or scratches, or when an image is transmitted through a noisy channel. The task of inpainting is to recover the missing region from the incomplete data observed. Ideally, the restored image should possess shapes and patterns consistent
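The iterative algorithm described can be sketched in its generic form: alternate between shrinking coefficients in the transform domain and re-imposing the observed pixels. The sketch below substitutes a one-level orthonormal Haar transform for the framelet systems of the paper, so it is illustrative only; all helper names are assumptions.

```python
import numpy as np

# One-level orthonormal Haar transform, a simple stand-in for the
# framelet systems of the paper (an assumption for illustration).
def haar(x):
    return np.concatenate([(x[0::2] + x[1::2]) / np.sqrt(2.0),
                           (x[0::2] - x[1::2]) / np.sqrt(2.0)])

def ihaar(c):
    n = len(c) // 2
    x = np.empty(2 * n)
    x[0::2] = (c[:n] + c[n:]) / np.sqrt(2.0)
    x[1::2] = (c[:n] - c[n:]) / np.sqrt(2.0)
    return x

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def inpaint(obs, mask, thresh=0.05, iters=200):
    """Generic transform-domain inpainting iteration: threshold the
    transform of the current guess, then re-impose the observed pixels."""
    x = obs * mask
    n = len(obs) // 2
    for _ in range(iters):
        c = haar(x)
        c[n:] = soft(c[n:], thresh)               # shrink detail coefficients only
        x = mask * obs + (1.0 - mask) * ihaar(c)  # keep observed data exact
    return x

obs = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0])
mask = np.array([1.0, 1, 0, 1, 1, 0, 1, 1])       # 0 marks missing pixels
rec = inpaint(obs, mask)
```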
A variational formulation for frame-based inverse problems
 Inverse Problems, 2007
"... A convex variational framework is proposed for solving inverse problems in Hilbert spaces with a priori information on the representation of the target solution in a frame. The objective function to be minimized consists of a separable term penalizing each frame coefficient individually and of a smo ..."
Abstract

Cited by 41 (18 self)
 Add to MetaCart
A convex variational framework is proposed for solving inverse problems in Hilbert spaces with a priori information on the representation of the target solution in a frame. The objective function to be minimized consists of a separable term penalizing each frame coefficient individually and of a smooth term modeling the data formation model as well as other constraints. Sparsity-constrained and Bayesian formulations are examined as special cases. A splitting algorithm is presented to solve this problem and its convergence is established in infinite-dimensional spaces under mild conditions on the penalization functions, which need not be differentiable. Numerical simulations demonstrate applications to frame-based image restoration.
Variable Metric Bundle Methods: from Conceptual to Implementable Forms, 1996
"... To minimize a convex function, we combine MoreauYosida regularizations, quasiNewton matrices and bundling mechanisms. First we develop conceptual forms using "reversal " quasiNewton formulae and we state their global and local convergence. Then, to produce implementable versions, we inco ..."
Abstract

Cited by 40 (8 self)
 Add to MetaCart
To minimize a convex function, we combine Moreau-Yosida regularizations, quasi-Newton matrices and bundling mechanisms. First we develop conceptual forms using "reversal" quasi-Newton formulae and we state their global and local convergence. Then, to produce implementable versions, we incorporate a bundle strategy together with a "curve-search". No convergence results are given for the implementable versions; however, some numerical illustrations show their good behaviour even for large-scale problems.
Proximal thresholding algorithm for minimization over orthonormal bases
 SIAM Journal on Optimization, 2007
"... The notion of soft thresholding plays a central role in problems from various areas of applied mathematics, in which the ideal solution is known to possess a sparse decomposition in some orthonormal basis. Using convexanalytical tools, we extend this notion to that of proximal thresholding and inve ..."
Abstract

Cited by 40 (13 self)
 Add to MetaCart
The notion of soft thresholding plays a central role in problems from various areas of applied mathematics, in which the ideal solution is known to possess a sparse decomposition in some orthonormal basis. Using convex-analytical tools, we extend this notion to that of proximal thresholding and investigate its properties, providing in particular several characterizations of such thresholders. We then propose a versatile convex variational formulation for optimization over orthonormal bases that covers a wide range of problems, and establish the strong convergence of a proximal thresholding algorithm to solve it. Numerical applications to signal recovery are demonstrated.
1. Problem formulation. Throughout this paper, H is a separable infinite-dimensional real Hilbert space with scalar product ⟨· | ·⟩, norm ‖·‖, and distance d. Moreover, Γ0(H) denotes the class of proper lower semicontinuous convex functions from H to ]−∞, +∞], and (ek)k∈ℕ is an orthonormal basis of H. The standard denoising problem in signal theory consists of recovering the original form of a signal x ∈ H from an observation z = x + v, where v ∈ H is the realization of a noise process. In many instances, x is known to admit a sparse representation with respect to (ek)k∈ℕ, and an estimate x̂ of x can be constructed by removing the coefficients of smallest magnitude in the representation (⟨z | ek⟩)k∈ℕ of z with respect to (ek)k∈ℕ. A popular method consists of performing a so-called soft thresholding of each coefficient ⟨z | ek⟩ at some predetermined level ωk ∈ ]0, +∞[, namely (see Fig. 1)

(1.1)  x̂ = ∑_{k∈ℕ} soft_{[−ωk, ωk]}(⟨z | ek⟩) ek,  where soft_{[−ωk, ωk]}: ξ ↦ sign(ξ) max{|ξ| − ωk, 0}.

This approach has received considerable attention in various areas of applied mathematics ranging from nonlinear approximation theory to statistics, and from harmonic analysis to image processing;
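The soft-thresholding estimator (1.1) is straightforward to code. The finite-dimensional sketch below uses the standard coordinate basis in place of a general orthonormal basis, and the signal, noise level, and threshold are assumptions for illustration.

```python
import numpy as np

def soft_threshold(xi, omega):
    """Soft thresholding at level omega: sign(xi) * max(|xi| - omega, 0),
    applied coefficient-wise as in (1.1)."""
    return np.sign(xi) * np.maximum(np.abs(xi) - omega, 0.0)

# Denoise a signal sparse in the standard basis (the coordinate vectors
# play the role of the orthonormal basis (e_k) of the paper).
rng = np.random.default_rng(1)
x = np.zeros(50)
x[[3, 17, 42]] = [5.0, -4.0, 3.0]           # sparse original signal
z = x + 0.1 * rng.standard_normal(50)       # noisy observation z = x + v
x_hat = soft_threshold(z, 0.5)              # estimate via (1.1), omega_k = 0.5
```

Coefficients whose magnitude is dominated by noise are set exactly to zero, while large coefficients survive, shrunk by omega.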
A Hybrid Projection-Proximal Point Algorithm, 1998
"... We propose a modification of the classical proximal point algorithm for finding zeroes of a maximal monotone operator in a Hilbert space. In particular, an approximate proximal point iteration is used to construct a hyperplane which strictly separates the current iterate from the solution set of the ..."
Abstract

Cited by 35 (13 self)
 Add to MetaCart
We propose a modification of the classical proximal point algorithm for finding zeroes of a maximal monotone operator in a Hilbert space. In particular, an approximate proximal point iteration is used to construct a hyperplane which strictly separates the current iterate from the solution set of the problem. This step is then followed by a projection of the current iterate onto the separating hyperplane. All information required for this projection operation is readily available at the end of the approximate proximal step, and therefore this projection entails no additional computational cost. The new algorithm allows significant relaxation of the tolerance requirements imposed on the solution of proximal point subproblems, which yields a more practical framework. Weak global convergence and a local linear rate of convergence are established under suitable assumptions. Additionally, the presented analysis yields an alternative proof of convergence for the exact proximal point method, which allow...
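The classical scheme the paper modifies is the exact proximal point iteration x_{k+1} = (I + c T)^{-1}(x_k). A minimal 1-D sketch (T the subdifferential of |x - 1|; names and parameters are illustrative assumptions) shows this exact scheme, without the paper's separating-hyperplane projection step.

```python
import numpy as np

def proximal_point(resolvent, x0, iters=50):
    """Classical (exact) proximal point iteration x_{k+1} = (I + c*T)^{-1}(x_k)
    for a maximal monotone operator T; converges to a zero of T."""
    x = float(x0)
    for _ in range(iters):
        x = resolvent(x)
    return x

# Illustration: T = subdifferential of f(x) = |x - 1|, whose zeros are the
# minimizers of f. The resolvent with step c is soft thresholding around 1.
c = 0.3
resolvent = lambda x: 1.0 + np.sign(x - 1.0) * max(abs(x - 1.0) - c, 0.0)
x_star = proximal_point(resolvent, 5.0)   # converges to the zero x = 1
```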