Results 1-10 of 57
Proximal Splitting Methods in Signal Processing
Cited by 85 (20 self)
The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems and, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.
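To make the opening claim concrete (a minimal sketch, not code from the paper; function names are illustrative): the proximity operator of the ℓ1 norm is soft-thresholding, and when the function is the indicator of a convex set the proximity operator reduces exactly to the projection onto that set.

```python
import numpy as np

def prox_l1(x, lam):
    # Proximity operator of lam * ||.||_1 (soft-thresholding):
    #   argmin_u  lam*||u||_1 + 0.5*||u - x||^2
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_box_indicator(x, lo, hi):
    # For the indicator function of the box [lo, hi]^n, the proximity
    # operator is exactly the projection onto that set.
    return np.clip(x, lo, hi)
```

The second function shows the sense in which prox "extends" projection: both minimize a squared distance to x, with the indicator replaced by a general convex penalty.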
A parallel inertial proximal optimization method
Pac. J. Optim., 2012
Cited by 11 (4 self)
The Douglas-Rachford algorithm is a popular iterative method for finding a zero of a sum of two maximally monotone operators defined on a Hilbert space. In this paper, we propose an extension of this algorithm including inertia parameters and develop parallel versions to deal with the case of a sum of an arbitrary number of maximally monotone operators. Based on this algorithm, parallel proximal algorithms are proposed to minimize, over a linear subspace of a Hilbert space, the sum of a finite number of proper, lower semicontinuous convex functions composed with linear operators. It is shown that particular cases of these methods are the simultaneous direction method of multipliers proposed by Setzer et al., the parallel proximal algorithm developed by Combettes and Pesquet, and a parallelized version of an algorithm proposed by Attouch and Soueycatt.
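For reference, the basic two-operator Douglas-Rachford iteration that the paper extends can be sketched as follows; the 1-D example problem and the step-size choice are illustrative assumptions, not taken from the paper (and this sketch includes neither the inertia parameters nor the parallel versions the paper proposes).

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, n_iter=100):
    # Basic Douglas-Rachford splitting for min_x f(x) + g(x):
    #   x_k     = prox_f(z_k)
    #   z_{k+1} = z_k + prox_g(2*x_k - z_k) - x_k
    # In the convex case, x_k converges to a minimizer.
    z = np.asarray(z0, dtype=float)
    for _ in range(n_iter):
        x = prox_f(z)
        z = z + prox_g(2.0 * x - z) - x
    return prox_f(z)

# Illustrative 1-D problem: min |x| + 0.5*(x - 3)^2, whose minimizer is x* = 2.
soft = lambda z: np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)  # prox of |.|
quad = lambda z: (z + 3.0) / 2.0                                # prox of 0.5*(x - 3)^2
x_star = douglas_rachford(soft, quad, np.array([0.0]))
```

Only the individual proximity operators of f and g are evaluated per iteration, which is what makes splitting methods of this kind attractive for sums of simple terms.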
A Monotone + Skew Splitting Model for Composite Monotone Inclusions in Duality
2011
Cited by 7 (0 self)
The principle underlying this paper is the basic observation that the problem of simultaneously solving a large class of composite monotone inclusions and their duals can be reduced to that of finding a zero of the sum of a maximally monotone operator and a linear skew-adjoint operator. An algorithmic framework is developed for solving this generic problem in a Hilbert space setting. New primal-dual splitting algorithms are derived from this framework for inclusions involving composite monotone operators, and convergence results are established. These algorithms draw their simplicity and efficacy from the fact that they operate in a fully decomposed fashion, in the sense that the monotone operators and the linear transformations involved are activated separately at each iteration. Comparisons with existing methods are made and applications to composite variational problems are demonstrated.
Recent Progress on Monotone Operator Theory (http://arxiv.org/abs/1210.3401v2)
2012
Cited by 5 (4 self)
In this paper, we survey recent progress on the theory of maximally monotone operators in general Banach spaces. We also extend several of the results and leave some open questions.
Proximity for Sums of Composite Functions
Cited by 5 (3 self)
We propose an algorithm for computing the proximity operator of a sum of composite convex functions in Hilbert spaces and investigate its asymptotic behavior. Applications to best approximation and image recovery are described.
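A classical baseline for the same task, restricted to a sum of two (non-composite) functions, is the Dykstra-like proximal algorithm; the sketch below is that standard scheme, not the composite-function algorithm proposed in the paper, and the interval example is a made-up illustration.

```python
import numpy as np

def dykstra_prox_sum(prox_f, prox_g, y, n_iter=200):
    # Dykstra-like proximal algorithm: computes prox_{f+g}(y)
    # using only the individual operators prox_f and prox_g.
    x = np.asarray(y, dtype=float)
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(n_iter):
        u = prox_f(x + p)
        p = x + p - u
        x = prox_g(u + q)
        q = u + q - x
    return x

# With indicator functions, prox_{f+g} is the projection onto the
# intersection: [0, 1] ∩ [0.5, 2] = [0.5, 1], so prox_{f+g}(3) = 1.
proj = dykstra_prox_sum(lambda v: np.clip(v, 0.0, 1.0),
                        lambda v: np.clip(v, 0.5, 2.0),
                        np.array([3.0]))
```

The indicator-function special case recovers Dykstra's original best-approximation algorithm, which connects to the applications mentioned in the abstract.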
Minimization and parameter estimation for seminorm regularization models with I-divergence constraints
2012
Cited by 5 (1 self)
In this paper we analyze the minimization of seminorms ‖L·‖ on R^n under the constraint of a bounded I-divergence D(b, H·) for rather general linear operators H and L. The I-divergence is also known as the Kullback-Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data. Often H represents, e.g., a linear blur operator, and L is some discrete derivative or frame analysis operator. We prove relations between the parameters of I-divergence constrained and penalized problems without assuming the uniqueness of their minimizers. To solve the I-divergence constrained problem we apply first-order primal-dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximal problems is an I-divergence constrained least squares problem which can be solved, based on Morozov's discrepancy principle, by a Newton method. Interestingly, the algorithm produces not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter such that the corresponding penalized problem has the same solution as the constrained one. We demonstrate the performance of various algorithms for different image restoration tasks, both for images corrupted by Poisson noise and by multiplicative Gamma noise.
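For concreteness, the I-divergence used as the constraint can be evaluated as follows (a minimal sketch based on the standard definition of the generalized Kullback-Leibler divergence, with the convention 0·log 0 = 0; the function name is illustrative).

```python
import numpy as np

def i_divergence(b, v):
    # Generalized Kullback-Leibler (I-)divergence:
    #   D(b, v) = sum_i b_i*log(b_i/v_i) - b_i + v_i,   b_i >= 0, v_i > 0,
    # with the convention 0*log(0) = 0. Nonnegative, and zero iff b == v.
    b = np.asarray(b, dtype=float)
    v = np.asarray(v, dtype=float)
    pos = b > 0
    return float(np.sum(b[pos] * np.log(b[pos] / v[pos])) - b.sum() + v.sum())
```

Unlike the plain KL divergence, the extra "- b + v" terms make D(b, v) well defined and nonnegative even when b and v are not normalized to sum to one, which is why this form appears in Poisson-data models.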
Duality and Convex Programming
2010
Cited by 4 (2 self)
We survey some key concepts in convex duality theory and their application to the analysis and numerical solution of problem archetypes in imaging.