Results 1–10 of 255
Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems
IEEE Transactions on Image Processing, 2009
Cited by 166 (2 self)
This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for the constrained TV-based image deblurring problem. To achieve this task we combine an acceleration of the well-known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA) we have recently introduced. The resulting gradient-based algorithm shares a remarkable simplicity together with a proven global rate of convergence which is significantly better than currently known gradient projection-based methods. Our results are applicable to both the anisotropic and isotropic discretized TV functionals. Initial numerical results demonstrate the viability and efficiency of the proposed algorithms on image deblurring problems with box constraints.
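As a hedged sketch of the ingredients the abstract names: plain FISTA with a projection step applied to a toy box-constrained least-squares problem. This is not the paper's monotone MFISTA or its TV dual machinery; the problem data and all names below are illustrative.

```python
import numpy as np

def fista(grad_f, prox_g, L, x0, n_iter=500):
    """Generic FISTA sketch: minimize f(x) + g(x), f smooth with an
    L-Lipschitz gradient, g handled through its proximal operator.
    (The paper's monotone variant additionally enforces a non-increasing
    objective value; that safeguard is omitted here.)"""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - grad_f(y) / L)                 # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)     # momentum extrapolation
        x, t = x_new, t_new
    return x

# Toy instance: min 0.5*||A x - b||^2  subject to  0 <= x <= 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([0.2, 0.9, 0.0, 0.5, 1.0])
b = A @ x_true
L = np.linalg.norm(A.T @ A, 2)                # Lipschitz constant of the gradient
x_hat = fista(lambda x: A.T @ (A @ x - b),    # gradient of 0.5*||Ax - b||^2
              lambda x: np.clip(x, 0.0, 1.0), # prox of the box indicator = projection
              L, np.zeros(5), n_iter=1000)
```

Because the prox of a box indicator is just clipping, the box-constrained deblurring setting fits this template directly once the data term's gradient and Lipschitz constant are available.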
Solving monotone inclusions via compositions of nonexpansive averaged operators
Optimization, 2004
Cited by 145 (31 self)
A unified fixed point theoretic framework is proposed to investigate the asymptotic behavior of algorithms for finding solutions to monotone inclusion problems. The basic iterative scheme under consideration involves nonstationary compositions of perturbed averaged nonexpansive operators. The analysis covers proximal methods for common zero problems as well as various splitting methods for finding a zero of the sum of monotone operators.
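The basic iterative scheme can be illustrated with a minimal Krasnosel'skii-Mann sketch (a stationary, unperturbed toy, not the paper's general nonstationary scheme): compose two ball projections, each firmly nonexpansive and hence averaged, and average the composition with the identity. The sets and names below are our own example.

```python
import numpy as np

def project_ball(x, c, r):
    """Projection onto the ball ||x - c|| <= r (firmly nonexpansive,
    hence an averaged operator)."""
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + r * d / n

def km_iteration(T, x0, alpha=0.5, n_iter=1000):
    """Krasnosel'skii-Mann scheme x_{k+1} = (1-a)*x_k + a*T(x_k); for a
    nonexpansive T with a fixed point, the iterates converge to one."""
    x = x0.copy()
    for _ in range(n_iter):
        x = (1.0 - alpha) * x + alpha * T(x)
    return x

# Composition of two projections: since the two balls overlap, the fixed
# points of the composition are exactly the points of the intersection.
T = lambda x: project_ball(project_ball(x, np.array([1.5, 0.0]), 1.0),
                           np.array([0.0, 0.0]), 1.0)
x_fix = km_iteration(T, np.array([5.0, 3.0]), n_iter=2000)
```

The returned point lies (up to tolerance) in both balls, i.e. it is a common fixed point of the two projections.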
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
2010
Cited by 124 (7 self)
This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, ‖Wx‖₁ where W is arbitrary, or a combination thereof. In addition, the paper also introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied with a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms. Keywords: optimal first-order methods, Nesterov's accelerated descent algorithms, proximal algorithms, conic duality, smoothing by conjugation, the Dantzig selector, the LASSO, nuclear-norm minimization.
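The last two steps of the recipe (smooth, then apply a first-order method) can be sketched on the ℓ1 term of a LASSO: smoothing by conjugation replaces |x| = max over |z| ≤ 1 of zx with a strongly concave inner problem, yielding the Huber function, which has a Lipschitz gradient; a first-order method then applies. We use plain gradient descent where the paper would use an optimal accelerated scheme; the toy problem and names are ours.

```python
import numpy as np

def huber(x, mu):
    """Smoothing by conjugation of |.|: max_{|z|<=1} (z*x - mu*z^2/2),
    which evaluates to the Huber function (elementwise)."""
    ax = np.abs(x)
    return np.where(ax <= mu, ax ** 2 / (2.0 * mu), ax - mu / 2.0)

def huber_grad(x, mu):
    """The maximizing z, by Danskin's theorem; (1/mu)-Lipschitz."""
    return np.clip(x / mu, -1.0, 1.0)

def smoothed_lasso(A, b, lam, mu=1e-3, n_iter=5000):
    """Approximately solve min 0.5*||Ax-b||^2 + lam*||x||_1 by smoothing
    the l1 term and running plain gradient descent."""
    L = np.linalg.norm(A.T @ A, 2) + lam / mu  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= (A.T @ (A @ x - b) + lam * huber_grad(x, mu)) / L
    return x
```

With `A = np.eye(2)`, `b = [3.0, 0.2]`, `lam = 0.5`, the smoothed solution is close to the exact soft-thresholded answer `[2.5, 0.0]`, illustrating the near-equivalence of smoothed and unsmoothed problems for small `mu`.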
A framelet-based image inpainting algorithm
 Applied and Computational Harmonic Analysis
Cited by 90 (40 self)
Abstract. Image inpainting is a fundamental problem in image processing and has many applications. Motivated by the recent tight-frame-based methods for image restoration in either the image or the transform domain, we propose an iterative tight frame algorithm for image inpainting. We consider the convergence of this framelet-based algorithm by interpreting it as an iteration for minimizing a special functional. The proof of the convergence is under the framework of convex analysis and optimization theory. We also discuss the relationship of our method with other wavelet-based methods. Numerical experiments are given to illustrate the performance of the proposed algorithm.
Key words. Tight frame, inpainting, convex analysis
1. Introduction. The problem of inpainting [2] occurs when part of the pixel data in a picture is missing or overwritten by other means. This arises, for example, in restoring ancient drawings, where a portion of the picture is missing or damaged due to aging or scratches, or when an image is transmitted through a noisy channel. The task of inpainting is to recover the missing region from the incomplete data observed. Ideally, the restored image should possess shapes and patterns consistent
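The iteration described above can be sketched in one dimension, with a single-level orthonormal Haar transform standing in for the framelet system (an illustrative simplification; the paper uses redundant tight frames, and all names here are ours): shrink in the transform domain, synthesize, then re-impose the observed samples.

```python
import numpy as np

def haar(x):
    """One level of an orthonormal Haar transform (stand-in for the
    tight-frame analysis operator)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def ihaar(a, d):
    """Inverse (synthesis) transform."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def inpaint(y, known, lam=0.5, n_iter=200):
    """Iterative tight-frame inpainting sketch: soft-threshold the detail
    coefficients (regularize), synthesize, then keep the observed samples.
    `y` holds the observed data; `known` is a boolean mask."""
    x = np.where(known, y, 0.0)
    for _ in range(n_iter):
        a, d = haar(x)
        x = ihaar(a, soft(d, lam))     # transform-domain shrinkage
        x = np.where(known, y, x)      # re-impose observed pixels
    return x
```

On a constant signal with one missing sample, the iteration pulls the missing value toward its neighbors while leaving the observed samples untouched.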
A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery
IEEE Journal of Selected Topics in Signal Processing, 2007
Cited by 89 (23 self)
Abstract — Under consideration is the large body of signal recovery problems that can be formulated as the problem of minimizing the sum of two (not necessarily smooth) lower semicontinuous convex functions in a real Hilbert space. This generic problem is analyzed and a decomposition method is proposed to solve it. The convergence of the method, which is based on the Douglas-Rachford algorithm for monotone operator splitting, is obtained under general conditions. Applications to non-Gaussian image denoising in a tight frame are also demonstrated.
Index Terms — Convex optimization, denoising, Douglas-Rachford, frame, nondifferentiable optimization, Poisson noise,
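A minimal Douglas-Rachford sketch for min f(x) + g(x) given only the two proximal operators, neither function assumed smooth. The toy instance below has a known closed-form answer; the step size `gamma` and all names are illustrative, and the paper works in general Hilbert spaces.

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, y0, n_iter=500):
    """Douglas-Rachford splitting sketch for min_x f(x) + g(x): iterate on
    the governing sequence y and read the solution off through prox_f."""
    y = y0.copy()
    for _ in range(n_iter):
        x = prox_f(y)
        y = y + prox_g(2.0 * x - y) - x
    return prox_f(y)

# Toy instance with a known answer: min_x lam*||x||_1 + 0.5*||x - z||^2,
# whose unique minimizer is the soft thresholding of z at level lam.
z = np.array([3.0, -0.2, 0.7])
lam, gamma = 0.5, 1.0
prox_f = lambda v: np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)  # prox of gamma*lam*||.||_1
prox_g = lambda v: (v + gamma * z) / (1.0 + gamma)                        # prox of gamma*0.5*||.-z||^2
x_star = douglas_rachford(prox_f, prox_g, np.zeros(3))
# x_star ≈ soft(z, 0.5) = [2.5, 0.0, 0.2]
```

Only proximal steps appear in the loop, which is the point of the method: it applies even when both terms are nondifferentiable.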
Practical Aspects of the Moreau-Yosida Regularization I: Theoretical Properties
1994
Cited by 65 (2 self)
When computing the infimal convolution of a convex function f with the squared norm, one obtains the so-called Moreau-Yosida regularization of f. Among other things, this function has a Lipschitzian gradient. We investigate some more of its properties relevant for optimization. Our main result concerns second-order differentiability and is as follows: under assumptions that are quite reasonable in optimization, the Moreau-Yosida regularization is twice differentiable if and only if f is twice differentiable as well. In the course of our development, we give some results of general interest in convex analysis. In particular, we establish a primal-dual relationship between the remainder terms in the first-order development of a convex function and its conjugate.
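For f = |·| the Moreau-Yosida regularization has a closed form through the proximal operator, which makes the Lipschitz-gradient claim concrete: the envelope is the Huber function, and its gradient is (x − prox(x))/λ even though |·| itself is not differentiable at 0. A one-dimensional sketch (function names are ours):

```python
import numpy as np

def prox_abs(x, lam):
    """Proximal operator of f = |.|: argmin_u |u| + (u - x)^2 / (2*lam)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_env(x, lam):
    """Moreau-Yosida regularization e_lam(x) = min_u |u| + (u - x)^2 / (2*lam),
    evaluated at the prox; for f = |.| this is the Huber function."""
    p = prox_abs(x, lam)
    return np.abs(p) + (p - x) ** 2 / (2.0 * lam)

def moreau_env_grad(x, lam):
    """Gradient identity: (x - prox(x)) / lam, which is (1/lam)-Lipschitz."""
    return (x - prox_abs(x, lam)) / lam
```

For lam = 1: `moreau_env(3.0, 1.0)` is 2.5 (the linear branch |x| − 1/2) and `moreau_env(0.5, 1.0)` is 0.125 (the quadratic branch x²/2), with gradients 1.0 and 0.5 respectively.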
Equilibrium programming in Hilbert spaces
2005, 117–136
Cited by 64 (4 self)
Several methods for solving systems of equilibrium problems in Hilbert spaces – and for finding best approximations thereof – are presented and their convergence properties are established. The proposed methods include proximal-like block-iterative algorithms for general systems, as well as regularization and splitting algorithms for single equilibrium problems. The problem of constructing approximate equilibria in the case of inconsistent systems is also considered.
Proximal thresholding algorithm for minimization over orthonormal bases
SIAM Journal on Optimization, 2007
Cited by 63 (17 self)
The notion of soft thresholding plays a central role in problems from various areas of applied mathematics, in which the ideal solution is known to possess a sparse decomposition in some orthonormal basis. Using convex-analytical tools, we extend this notion to that of proximal thresholding and investigate its properties, providing in particular several characterizations of such thresholders. We then propose a versatile convex variational formulation for optimization over orthonormal bases that covers a wide range of problems, and establish the strong convergence of a proximal thresholding algorithm to solve it. Numerical applications to signal recovery are demonstrated.
1 Problem formulation
Throughout this paper, H is a separable infinite-dimensional real Hilbert space with scalar product ⟨· | ·⟩, norm ‖·‖, and distance d. Moreover, Γ₀(H) denotes the class of proper lower semicontinuous convex functions from H to ]−∞, +∞], and (e_k)_{k∈N} is an orthonormal basis of H. The standard denoising problem in signal theory consists of recovering the original form of a signal x ∈ H from an observation z = x + v, where v ∈ H is the realization of a noise process. In many instances, x is known to admit a sparse representation with respect to (e_k)_{k∈N}, and an estimate x̄ of x can be constructed by removing the coefficients of smallest magnitude in the representation (⟨z | e_k⟩)_{k∈N} of z with respect to (e_k)_{k∈N}. A popular method consists of performing a so-called soft thresholding of each coefficient ⟨z | e_k⟩ at some predetermined level ω_k ∈ ]0, +∞[, namely (see Fig. 1)

(1.1)  x̄ = ∑_{k∈N} soft_{[−ω_k, ω_k]}(⟨z | e_k⟩) e_k,  where soft_{[−ω_k, ω_k]}: ξ ↦ sign(ξ) max{|ξ| − ω_k, 0}.

This approach has received considerable attention in various areas of applied mathematics ranging from nonlinear approximation theory to statistics, and from harmonic analysis to image processing;
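Formula (1.1) can be sketched directly in finite dimensions, with the rows of an orthogonal matrix standing in for the orthonormal basis (e_k) (a hedged illustration; the names are ours and the paper works in infinite dimensions):

```python
import numpy as np

def soft(xi, omega):
    """Soft thresholding at level omega: sign(xi) * max(|xi| - omega, 0),
    as in formula (1.1)."""
    return np.sign(xi) * np.maximum(np.abs(xi) - omega, 0.0)

def denoise(z, basis, omega):
    """Estimate x from z = x + noise by soft-thresholding the coefficients
    of z in an orthonormal basis (rows of `basis`), then resynthesizing."""
    coeffs = basis @ z                       # analysis: <z, e_k>
    return basis.T @ soft(coeffs, omega)     # synthesis: sum soft(<z, e_k>) e_k
```

With the identity basis, small coefficients are zeroed and large ones shrunk toward zero by `omega`, which is exactly the coefficient-wise shrinkage the abstract generalizes to proximal thresholding.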
A variational formulation for frame-based inverse problems
Inverse Problems, 2007
Cited by 62 (24 self)
A convex variational framework is proposed for solving inverse problems in Hilbert spaces with a priori information on the representation of the target solution in a frame. The objective function to be minimized consists of a separable term penalizing each frame coefficient individually and of a smooth term modeling the data formation model as well as other constraints. Sparsity-constrained and Bayesian formulations are examined as special cases. A splitting algorithm is presented to solve this problem and its convergence is established in infinite-dimensional spaces under mild conditions on the penalization functions, which need not be differentiable. Numerical simulations demonstrate applications to frame-based image restoration.