Results 1–10 of 80
Proximal Splitting Methods in Signal Processing
"... The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems ..."
Abstract

Cited by 264 (32 self)
 Add to MetaCart
(Show Context)
The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems and, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.
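The extension the abstract describes can be made concrete in a few lines of NumPy (a sketch of our own, not from the paper): the proximity operator of the indicator function of a convex set is exactly the projection onto that set, while the proximity operator of λ‖·‖₁ is elementwise soft-thresholding.

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of lam * ||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_box_indicator(x, lo, hi):
    """Proximity operator of the indicator of the box [lo, hi]^n:
    this is just the metric projection onto the box."""
    return np.clip(x, lo, hi)

x = np.array([3.0, -0.5, 1.2])
shrunk = prox_l1(x, 1.0)                    # [2.0, 0.0, 0.2]
projected = prox_box_indicator(x, -1.0, 1.0)  # [1.0, -0.5, 1.0]
```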
Equilibrium programming in Hilbert spaces
, 2005
"... Several methods for solving systems of equilibrium problems in Hilbert spaces – and for finding best approximations thereof – are presented and their convergence properties are established. The proposed methods include proximallike blockiterative algorithms for general systems, as well as regular ..."
Abstract

Cited by 64 (4 self)
 Add to MetaCart
(Show Context)
Several methods for solving systems of equilibrium problems in Hilbert spaces – and for finding best approximations thereof – are presented and their convergence properties are established. The proposed methods include proximal-like block-iterative algorithms for general systems, as well as regularization and splitting algorithms for single equilibrium problems. The problem of constructing approximate equilibria in the case of inconsistent systems is also considered.
Quasi-Fejérian Analysis of Some Optimization Algorithms
"... A quasiFejér sequence is a sequence which satisfies the standard Fejér monotonicity property to within an additional error term. This notion is studied in detail in a Hilbert space setting and shown to provide a powerful framework to analyze the convergence of a wide range of optimization algorithm ..."
Abstract

Cited by 56 (14 self)
 Add to MetaCart
A quasi-Fejér sequence is a sequence which satisfies the standard Fejér monotonicity property to within an additional error term. This notion is studied in detail in a Hilbert space setting and shown to provide a powerful framework to analyze the convergence of a wide range of optimization algorithms in a systematic fashion. A number of convergence theorems covering and extending existing results are thus established. Special emphasis is placed on the design and the analysis of parallel algorithms.
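The defining inequality ‖x_{k+1} − z‖ ≤ ‖x_k − z‖ + ε_k (with summable ε_k) can be checked numerically. The toy setup below is our own: a projection iteration perturbed at step k by an error of norm at most ε_k = 2^{−k}, which makes the iterates quasi-Fejér monotone with respect to the unit ball.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x):
    """Metric projection onto the closed unit Euclidean ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

z = np.zeros(2)                # a point of the target set (the unit ball)
x = np.array([5.0, -3.0])
for k in range(20):
    eps_k = 0.5 ** k           # summable error bound
    e = rng.standard_normal(2)
    e *= eps_k / max(np.linalg.norm(e), eps_k)   # enforce ||e|| <= eps_k
    x_next = proj_ball(x) + e                    # inexact projection step
    # quasi-Fejer inequality: ||x_{k+1} - z|| <= ||x_k - z|| + eps_k
    assert np.linalg.norm(x_next - z) <= np.linalg.norm(x - z) + eps_k + 1e-12
    x = x_next
```

Since the errors are summable, the iterates still approach the target set despite the perturbations.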
Projection and proximal point methods: convergence results and counterexamples
, 2003
"... Recently, Hundal has constructed a hyperplane H, a cone K, and a starting point y0 in `2 such that the sequence of alternating projections (PKPH)ny0 n∈N converges weakly to some point in H ∩K, but not in norm. We show how this construction results in a counterexample to norm convergence for iterates ..."
Abstract

Cited by 49 (19 self)
 Add to MetaCart
(Show Context)
Recently, Hundal has constructed a hyperplane H, a cone K, and a starting point y0 in ℓ2 such that the sequence of alternating projections ((P_K P_H)^n y0)_{n ∈ N} converges weakly to some point in H ∩ K, but not in norm. We show how this construction results in a counterexample to norm convergence for iterates of averaged projections; hence, we give an affirmative answer to a question raised by Reich two decades ago. Furthermore, new counterexamples to norm convergence for iterates of firmly nonexpansive maps (à la Genel and Lindenstrauss) and for the proximal point algorithm (à la Güler) are provided. We also present a counterexample, along with some weak and norm convergence results, for the new framework of string-averaging projection methods introduced by Censor, Elfving, and Herman. Extensions to Banach spaces and the situation for the Hilbert ball are discussed as well.
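The failure of norm convergence is an inherently infinite-dimensional phenomenon; in finite dimensions the iterates (P_K P_H)^n y0 do converge in norm whenever H ∩ K is nonempty. A small sketch with a hyperplane and a cone of our own choosing (not Hundal's construction):

```python
import numpy as np

def proj_hyperplane(x, a, b):
    """Projection onto the hyperplane H = {y : <a, y> = b}."""
    return x - ((a @ x - b) / (a @ a)) * a

def proj_cone(x):
    """Projection onto the closed convex cone K = R^2_+."""
    return np.maximum(x, 0.0)

a, b = np.array([1.0, 1.0]), 1.0     # H: x1 + x2 = 1
x = np.array([4.0, -3.0])            # starting point y0
for _ in range(100):                 # alternating projections (P_K P_H)^n y0
    x = proj_cone(proj_hyperplane(x, a, b))
# x is (numerically) the point (1, 0), which lies in H ∩ K
```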
A Monotone + Skew Splitting Model for Composite Monotone Inclusions in Duality
, 2011
"... The principle underlying this paper is the basic observation that the problem of simultaneously solving a large class of composite monotone inclusions and their duals can be reduced to that of finding a zero of the sum of a maximally monotone operator and a linear skewadjoint operator. An algorith ..."
Abstract

Cited by 40 (0 self)
 Add to MetaCart
The principle underlying this paper is the basic observation that the problem of simultaneously solving a large class of composite monotone inclusions and their duals can be reduced to that of finding a zero of the sum of a maximally monotone operator and a linear skew-adjoint operator. An algorithmic framework is developed for solving this generic problem in a Hilbert space setting. New primal-dual splitting algorithms are derived from this framework for inclusions involving composite monotone operators, and convergence results are established. These algorithms draw their simplicity and efficacy from the fact that they operate in a fully decomposed fashion in the sense that the monotone operators and the linear transformations involved are activated separately at each iteration. Comparisons with existing methods are made and applications to composite variational problems are demonstrated.
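A finite-dimensional caricature of the generic problem — finding a zero of a maximally monotone A plus a linear skew-adjoint S — can be solved with Tseng's forward-backward-forward splitting, which activates S and the resolvent of A separately, in the decomposed spirit the abstract describes. All concrete choices below (A, S, the step size) are our own toy example, not the paper's algorithm:

```python
import numpy as np

t = np.array([1.0, 2.0])
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # linear skew-adjoint operator (S^T = -S)
# A(x) = x - t is maximally monotone; the inclusion 0 in A(x) + S x
# has the unique solution x* = (I + S)^{-1} t.
x_star = np.linalg.solve(np.eye(2) + S, t)

gamma = 0.5                          # step size < 1 / ||S|| = 1
x = np.zeros(2)
for _ in range(500):
    y = x - gamma * (S @ x)               # forward step on the skew part
    p = (y + gamma * t) / (1.0 + gamma)   # backward step: resolvent of gamma*A
    x = p + gamma * (S @ (x - p))         # correcting forward step (Tseng's FBF)
```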
On the Effectiveness of Projection Methods for Convex Feasibility Problems with Linear Inequality Constraints
"... The effectiveness of projection methods for solving systems of linear inequalities is investigated. It is shown that they often have a computational advantage over alternatives that have been proposed for solving the same problem and that this makes them successful in many realworld applications. ..."
Abstract

Cited by 33 (17 self)
 Add to MetaCart
The effectiveness of projection methods for solving systems of linear inequalities is investigated. It is shown that they often have a computational advantage over alternatives that have been proposed for solving the same problem and that this makes them successful in many real-world applications. This is supported by experimental evidence provided in this paper on problems of various sizes (up to tens of thousands of unknowns satisfying up to hundreds of thousands of constraints) and by a discussion of the demonstrated efficacy of projection methods in numerous scientific publications and commercial patents (dealing with problems that can have over a billion unknowns and a similar number of constraints).
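The basic scheme behind such methods is simple: sweep over the constraints and project onto each violated halfspace. A minimal sketch (toy data of our own) of cyclic halfspace projections for the feasibility problem A x ≤ b:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto the halfspace {y : <a, y> <= b}."""
    r = a @ x - b
    return x if r <= 0.0 else x - (r / (a @ a)) * a

# Find a point satisfying A x <= b (each row of A, b is one halfspace).
A = np.array([[ 1.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])

x = np.array([5.0, 5.0])             # infeasible starting point
for _ in range(100):                 # cyclic sweeps over the constraints
    for a_i, b_i in zip(A, b):
        x = proj_halfspace(x, a_i, b_i)
```

Each sweep touches one constraint at a time, which is what makes these methods attractive for huge, sparse systems.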
A Unified Framework for Some Inexact Proximal Point Algorithms
, 2001
"... We present a unified framework for the design and convergence analysis of a class of algorithms based on approximate solution of proximal point subproblems. Our development further enhances the constructive approximation approach of the recently proposed hybrid projectionproximal and extragradient ..."
Abstract

Cited by 32 (16 self)
 Add to MetaCart
We present a unified framework for the design and convergence analysis of a class of algorithms based on approximate solution of proximal point subproblems. Our development further enhances the constructive approximation approach of the recently proposed hybrid projection-proximal and extragradient-proximal methods. Specifically, we introduce an even more flexible error tolerance criterion, as well as provide a unified view of these two algorithms. Our general method possesses global convergence and local (super)linear rate of convergence under standard assumptions, while using a constructive approximation criterion suitable for a number of specific implementations. For example, we show that close to a regular solution of a monotone system of semismooth equations, two Newton iterations are sufficient to solve the proximal subproblem within the required error tolerance. Such systems of equations arise naturally when reformulating the nonlinear complementarity problem.
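For reference, the exact proximal point subproblem these frameworks approximate is the resolvent step x_{k+1} = (I + cT)^{-1}(x_k). A sketch with a toy maximally monotone operator of our own (the gradient of a quadratic), for which the resolvent is available in closed form:

```python
import numpy as np

t = np.array([2.0, -1.0])

def resolvent(x, c):
    """(I + c T)^{-1} x for the maximal monotone T(y) = y - t,
    i.e. the gradient of f(y) = 0.5 * ||y - t||^2."""
    return (x + c * t) / (1.0 + c)

x = np.zeros(2)
for _ in range(60):
    x = resolvent(x, c=1.0)   # exact proximal point step x_{k+1} = J_{cT}(x_k)
# x converges to t, the unique zero of T
```

An inexact method replaces this closed-form resolvent with an approximate subproblem solve (e.g. a few Newton steps) controlled by an error tolerance criterion.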
Forcing strong convergence of proximal point iterations in a Hilbert space
, 2000
"... This paper concerns with convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinitedimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, ..."
Abstract

Cited by 31 (6 self)
 Add to MetaCart
This paper concerns the convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two halfspaces containing the solution set. The additional cost of this extra projection step is essentially negligible since it amounts, at most, to solving a linear system of two equations in two unknowns.
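The claim that projecting onto the intersection of two halfspaces costs at most a two-by-two linear solve can be made explicit. The sketch below is our own enumeration of the KKT cases, assuming the two normals a1, a2 are linearly independent:

```python
import numpy as np

def project_two_halfspaces(x, a1, b1, a2, b2):
    """Projection onto {y : <a1,y> <= b1} ∩ {y : <a2,y> <= b2},
    assuming a1 and a2 are linearly independent."""
    v1, v2 = a1 @ x - b1, a2 @ x - b2
    if v1 <= 0.0 and v2 <= 0.0:
        return x                                  # already feasible
    if v1 > 0.0:
        p = x - (v1 / (a1 @ a1)) * a1             # only constraint 1 active
        if a2 @ p <= b2 + 1e-12:
            return p
    if v2 > 0.0:
        p = x - (v2 / (a2 @ a2)) * a2             # only constraint 2 active
        if a1 @ p <= b1 + 1e-12:
            return p
    # both constraints active: the "two equations in two unknowns"
    G = np.array([[a1 @ a1, a1 @ a2],
                  [a2 @ a1, a2 @ a2]])            # 2x2 Gram matrix
    lam = np.linalg.solve(G, np.array([v1, v2]))
    return x - lam[0] * a1 - lam[1] * a2

p = project_two_halfspaces(np.array([1.0, 1.0]),
                           np.array([1.0, 0.0]), 0.0,
                           np.array([0.0, 1.0]), 0.0)   # -> [0, 0]
```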
Convergence analysis of primal-dual algorithms for total variation image restoration
, 2010
"... Abstract. Recently, some attractive primaldual algorithms have been proposed for solving a saddlepoint problem, with particular applications in the area of total variation (TV) image restoration. This paper focuses on the convergence analysis of existing primaldual algorithms and shows that the i ..."
Abstract

Cited by 29 (2 self)
 Add to MetaCart
(Show Context)
Recently, some attractive primal-dual algorithms have been proposed for solving a saddle-point problem, with particular applications in the area of total variation (TV) image restoration. This paper focuses on the convergence analysis of existing primal-dual algorithms and shows that the involved parameters of those primal-dual algorithms (including the step sizes) can be significantly enlarged if some simple correction steps are supplemented. As a result, we present some primal-dual-based contraction methods for solving the saddle-point problem. These contraction methods are in the prediction-correction fashion in the sense that the predictor is generated by a primal-dual method and it is corrected by some simple correction step at each iteration. In addition, based on the context of contraction-type methods, we provide a novel theoretical framework for analyzing the convergence of primal-dual algorithms which simplifies existing convergence analysis substantially.
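As an illustration of the class of algorithms under analysis (not the paper's corrected method), here is a generic primal-dual (PDHG-type) iteration on a tiny one-dimensional TV denoising problem; the data, weight, and step sizes are our own toy choices, with the usual condition τσ‖D‖² < 1:

```python
import numpy as np

f = np.array([0.0, 0.1, 0.0, 1.0, 1.1, 0.9])   # noisy piecewise-constant signal
lam = 0.2                                      # TV regularization weight
n = len(f)
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # forward-difference operator

# Saddle-point form: min_x max_{|y_i| <= lam}  <D x, y> + 0.5 * ||x - f||^2
tau = sigma = 0.49            # tau * sigma * ||D||^2 < 1 since ||D||^2 <= 4
x = f.copy()
xbar = x.copy()
y = np.zeros(n - 1)
for _ in range(2000):
    y = np.clip(y + sigma * (D @ xbar), -lam, lam)         # dual: ascent + projection
    x_new = (x - tau * (D.T @ y) + tau * f) / (1.0 + tau)  # primal: prox of 0.5||.-f||^2
    xbar = 2.0 * x_new - x                                 # extrapolation
    x = x_new
# x is a smoothed, lower-total-variation version of f
```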