Results 1–10 of 20
A Hybrid Projection-Proximal Point Algorithm
, 1998
Abstract

Cited by 36 (14 self)
We propose a modification of the classical proximal point algorithm for finding zeroes of a maximal monotone operator in a Hilbert space. In particular, an approximate proximal point iteration is used to construct a hyperplane which strictly separates the current iterate from the solution set of the problem. This step is then followed by a projection of the current iterate onto the separating hyperplane. All information required for this projection operation is readily available at the end of the approximate proximal step, and therefore the projection entails no additional computational cost. The new algorithm allows significant relaxation of the tolerance requirements imposed on the solution of proximal point subproblems, which yields a more practical framework. Weak global convergence and a local linear rate of convergence are established under suitable assumptions. Additionally, the presented analysis yields an alternative proof of convergence for the exact proximal point method, which allow...
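The hyperplane-projection step described in this abstract can be sketched in a few lines. The toy example below is not from the paper: it uses the illustrative operator T(x) = x (whose unique zero is the origin) and solves the proximal subproblem exactly. An approximate proximal point y gives v ∈ T(y), and the next iterate is the projection of the current point onto the hyperplane {z : ⟨v, z − y⟩ = 0}:

```python
import numpy as np

def T(x):
    # Illustrative maximal monotone operator (not from the paper):
    # T(x) = x, whose unique zero is the origin.
    return x

def hybrid_step(x, c=1.0):
    """One projection-proximal iteration (sketch).

    Proximal subproblem y + c*T(y) = x, solved exactly here: y = x/(1+c).
    Then v = (x - y)/c lies in T(y), and the hyperplane
    {z : v @ (z - y) = 0} strictly separates x from the solution set.
    """
    y = x / (1.0 + c)
    v = (x - y) / c            # v is an element of T(y)
    if np.allclose(v, 0.0):
        return y               # y is already a zero of T
    # Project x onto the separating hyperplane; everything needed (y, v)
    # is already available from the proximal step, so this is free.
    return x - (v @ (x - y)) / (v @ v) * v

x = np.array([4.0, -2.0])
for _ in range(50):
    x = hybrid_step(x)
```

For this exactly solvable operator the projected point coincides with the proximal point itself; the projection differs from the raw proximal step only when the subproblem is solved approximately, which is the case the paper addresses.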
A Hybrid Approximate Extragradient-Proximal Point Algorithm Using The Enlargement Of A Maximal Monotone Operator
, 1999
Abstract

Cited by 24 (15 self)
We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter [2]) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of the tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and a local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng [35]...
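An extragradient-type step of the kind described above can be sketched on a toy affine operator (a hypothetical example, not the paper's general setting): solve the proximal subproblem for y, then update with x⁺ = x − c·T(y):

```python
import numpy as np

# Toy monotone operator T(x) = A @ x (hypothetical example): the symmetric
# part of A is the identity, so T is strongly monotone, but its rotational
# part makes plain gradient-style iterations on T unreliable.
A = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
T = lambda x: A @ x

def extragradient_prox_step(x, c=0.5):
    # Proximal subproblem y + c*T(y) = x (solved exactly via a linear
    # solve, since T is affine), followed by the extragradient-type
    # update x+ = x - c*T(y).
    y = np.linalg.solve(np.eye(2) + c * A, x)
    return x - c * T(y)

x = np.array([3.0, 1.0])
for _ in range(200):
    x = extragradient_prox_step(x)
```

In the paper's setting the subproblem is solved only approximately and T(y) is replaced by an element of the operator's enlargement; the exact-solve version above just illustrates the shape of the iteration.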
Forcing strong convergence of proximal point iterations in a Hilbert space
, 2000
Abstract

Cited by 13 (7 self)
This paper is concerned with the convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two halfspaces containing the solution set. The additional cost of this extra projection step is essentially negligible, since it amounts, at most, to solving a linear system of two equations in two unknowns.
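The "two equations in two unknowns" remark can be made concrete: projecting a point onto the intersection of two halfspaces costs, at most, one 2×2 linear solve for the Lagrange multipliers. A minimal sketch with hypothetical sets (not the paper's specific construction):

```python
import numpy as np

def project_two_halfspaces(p, a1, b1, a2, b2):
    """Project p onto {x : a1@x <= b1} ∩ {x : a2@x <= b2}.

    Sketch; assumes the intersection is nonempty and a1, a2 are nonzero.
    """
    def proj_half(p, a, b):
        # Closed-form projection onto a single halfspace.
        viol = a @ p - b
        return p if viol <= 0 else p - (viol / (a @ a)) * a

    # Case 1: projecting onto one halfspace may already satisfy the other
    # (this also covers the case where p is feasible to begin with).
    for (a, b, a_o, b_o) in ((a1, b1, a2, b2), (a2, b2, a1, b1)):
        q = proj_half(p, a, b)
        if a_o @ q <= b_o + 1e-12:
            return q
    # Case 2: both constraints are active at the projection. Solve the
    # 2x2 Gram system for the multipliers of the projection onto the
    # intersection of the two boundary hyperplanes.
    G = np.array([[a1 @ a1, a1 @ a2], [a2 @ a1, a2 @ a2]])
    r = np.array([a1 @ p - b1, a2 @ p - b2])
    lam = np.linalg.solve(G, r)
    return p - lam[0] * a1 - lam[1] * a2

q = project_two_halfspaces(np.array([2.0, 2.0]),
                           np.array([1.0, 0.0]), 0.0,   # halfspace x <= 0
                           np.array([0.0, 1.0]), 0.0)   # halfspace y <= 0
```

The case analysis mirrors the KKT conditions: the active set at the projection is empty, a single constraint, or both constraints, and only the last case requires the 2×2 solve.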
A Practical General Approximation Criterion for Methods of Multipliers Based on Bregman Distances
, 2000
Abstract

Cited by 6 (4 self)
This paper demonstrates that for generalized methods of multipliers for convex programming based on Bregman distance kernels, including the classical quadratic method of multipliers, the minimization of the augmented Lagrangian can be truncated using a simple, generally implementable stopping criterion based only on the norms of the primal iterate and the gradient (or a subgradient) of the augmented Lagrangian at that iterate. Previous results in this and related areas have required conditions that are much harder to verify, such as ε-optimality with respect to the augmented Lagrangian, or strong conditions on the convex program to be solved. Here, only the existence of a KKT pair is required, and the convergence properties of the exact form of the method are preserved. The key new element in the analysis is the use of a full conjugate duality framework, as opposed to mainly examining the action of the method on the standard dual function of the convex program. An existence result...
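A minimal sketch of a truncated quadratic method of multipliers in this spirit. The stopping rule below, which uses only ‖x‖ and the gradient norm, is an illustrative stand-in for the paper's exact criterion, and the equality-constrained toy problem is hypothetical:

```python
import numpy as np

# Illustrative problem: min 0.5*||x||^2  s.t.  a@x = 1, with a = (1, 1).
# The exact solution is x* = (0.5, 0.5) with multiplier lam* = -0.5.
a = np.array([1.0, 1.0])

def aug_lag_grad(x, lam, c):
    # Gradient of L_c(x, lam) = 0.5||x||^2 + lam*(a@x - 1) + (c/2)(a@x - 1)^2.
    return x + (lam + c * (a @ x - 1.0)) * a

def method_of_multipliers(c=1.0, outer_iters=30):
    x, lam = np.zeros(2), 0.0
    for k in range(outer_iters):
        # Truncated inner minimization: stop once the augmented-Lagrangian
        # gradient is small relative to the primal iterate (an illustrative
        # criterion built only from ||x|| and ||grad||; the paper's exact
        # rule differs in its constants).
        tol = 0.5 ** k * (1.0 + np.linalg.norm(x))
        g = aug_lag_grad(x, lam, c)
        while np.linalg.norm(g) > tol:
            x = x - 0.3 * g            # plain gradient step on the subproblem
            g = aug_lag_grad(x, lam, c)
        lam += c * (a @ x - 1.0)       # standard multiplier update
    return x, lam

x, lam = method_of_multipliers()
```

The point of such criteria is exactly what the sketch shows: the inner loop terminates after finitely many cheap gradient steps using quantities it already computes, rather than requiring a verified near-optimal subproblem solution.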
Rescaling and Stepsize Selection in Proximal Methods using Separable Generalized Distances
 SIAM JOURNAL ON OPTIMIZATION
, 2001
Abstract

Cited by 4 (3 self)
This paper presents a convergence proof technique for a broad class of proximal algorithms in which the perturbation term is separable and may contain barriers enforcing interval constraints. There are two key ingredients in the analysis: a mild regularity condition on the differential behavior of the barrier as one approaches an interval boundary, and a lower stepsize limit that takes into account the curvature of the proximal term. We give two applications of our approach. First, we prove subsequential convergence of a very broad class of proximal minimization algorithms for convex optimization, where different stepsizes can be used for each coordinate. Applying these methods to the dual of a convex program, we obtain a wide class of multiplier methods, with subsequential convergence of both primal and dual iterates, and independent adjustment of the penalty parameter for each constraint. The adjustment rules for the penalty parameters generalize a well-established scheme for the exponential method of multipliers. The results may also be viewed as a generalization of recent work by Ben-Tal/Zibulevsky and Auslender et al. on methods derived from φ-divergences. The second application establishes full convergence, under a novel stepsize condition, of Bregman-function-based proximal methods for general monotone operator problems over a box. Prior results in this area required strong restrictive assumptions on the monotone operator.
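The role of a barrier kernel can be illustrated with the entropy kernel φ(x) = x·log x − x on the half-line: the resulting multiplicative (mirror-descent-style) update keeps every iterate strictly positive, so the interval constraint never needs an explicit projection. This one-dimensional sketch is illustrative only and does not use the paper's stepsize rule:

```python
import numpy as np

# Entropy-kernel step for min f(x) over x > 0, with f(x) = (x - 2)^2.
# The kernel phi(x) = x*log(x) - x has phi'(x) = log(x), so the
# linearized proximal (mirror) update log(x+) = log(x) - step*f'(x)
# becomes the multiplicative rule x+ = x * exp(-step * f'(x)).
# The blow-up of phi' near 0 acts as the barrier: iterates starting
# in (0, inf) stay there without any projection.

fprime = lambda x: 2.0 * (x - 2.0)

x = 0.5          # strictly feasible starting point
step = 0.1       # fixed stepsize (illustrative; not the paper's rule)
for _ in range(300):
    x = x * np.exp(-step * fprime(x))
```

The unconstrained minimizer x = 2 lies in the interior here, so the iteration settles there; with a minimizer on the boundary, the same update would approach it from inside, which is exactly the regime the paper's boundary-regularity condition addresses.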
On The Relation Between Bundle Methods For Maximal Monotone Inclusions And Hybrid Proximal Point Algorithms
 Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, volume 8 of Studies in Computational Mathematics
, 2001
Abstract

Cited by 4 (3 self)
In this paper we consider bundle methods in the light of inexact proximal point algorithms, namely the hybrid variant of [36]; see also [35,37,38]. The insight given by this new interpretation is twofold. First, it provides an alternative convergence proof, which is technically simple, for serious steps of bundle methods, by invoking the corresponding results for hybrid proximal point methods. Second, relating the two methodologies supplies a computationally realistic implementation of hybrid proximal point methods for the most general case, i.e., when the operator may not have any special structure. Our paper is organized as follows. In Section 2 we outline the hybrid proximal point algorithm, together with its relevant convergence properties. Some useful theory from [9] and [8] on certain enlargements of maximal monotone operators is reviewed in Section 3. Finally, in Section 4 we establish the connection between bundle and hybrid proximal methods and give some new convergence results, including the linear rate of convergence for bundle methods.
Finding the projection of a point onto the intersection of convex sets via projections onto halfspaces, preprint
, 2002
Abstract

Cited by 3 (0 self)
We present a modification of Dykstra’s algorithm which allows us to avoid projections onto general convex sets. Instead, we calculate projections onto either a halfspace or onto the intersection of two halfspaces. Convergence of the algorithm is established and special choices of the halfspaces are proposed. The option to project onto halfspaces instead of general convex sets makes the algorithm more practical. The fact that the halfspaces are quite general enables us to apply the algorithm in a variety of cases and to generalize a number of known projection algorithms. The problem of projecting a point onto the intersection of closed convex sets receives considerable attention in many areas of mathematics and physics, as well as in other fields of science and engineering, such as image reconstruction from projections. In this work we propose a new class of algorithms which allow projection onto certain super halfspaces, i.e., halfspaces which contain the convex sets. Each one of the algorithms that we present gives the user freedom to choose the specific super halfspace from a family of such halfspaces. Since projecting a point onto a halfspace is an easy task to perform, the new algorithms may be more useful in practical situations in which the construction of the super halfspaces themselves is not too difficult.
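One standard way to build such a "super halfspace" for a set {x : g(x) ≤ 0} with g convex is to linearize g at the current point: the halfspace {z : g(x) + ⟨∇g(x), z − x⟩ ≤ 0} contains the set, and projecting onto it is a closed-form subgradient step. The two-disc feasibility problem below is a hypothetical example, not from the paper:

```python
import numpy as np

def halfspace_relaxation(x, g_val, grad):
    """Project x onto the super halfspace
    {z : g(x) + grad @ (z - x) <= 0}, which contains {z : g(z) <= 0}
    whenever g is convex and grad is a subgradient of g at x."""
    if g_val <= 0:
        return x                       # x already lies in the halfspace
    return x - (g_val / (grad @ grad)) * grad

# Two discs given by convex inequalities; we find a point in their
# intersection by cycling halfspace projections instead of exact
# (harder) projections onto the discs themselves.
c2 = np.array([1.5, 0.0])
g1 = lambda x: x @ x - 1.0                       # unit disc at the origin
g2 = lambda x: (x - c2) @ (x - c2) - 1.0         # unit disc at (1.5, 0)

x = np.array([3.0, 2.0])
for _ in range(50):
    x = halfspace_relaxation(x, g1(x), 2.0 * x)
    x = halfspace_relaxation(x, g2(x), 2.0 * (x - c2))
```

Each step touches only function and gradient values at the current point, which is what makes halfspace-based variants cheap compared with exact projections onto general convex sets.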
Iterating Bregman Retractions
, 2002
Abstract

Cited by 3 (2 self)
The notion of a Bregman retraction of a closed convex set in Euclidean space is introduced. Bregman retractions include backward Bregman projections, forward Bregman projections, as well as their convex combinations, and are thus quite flexible. The main result on iterating Bregman retractions unifies several convergence results on projection methods for solving convex feasibility problems. It is also used to construct new sequential and parallel algorithms.
Convex envelopes of complexity controlling penalties: the case against premature envelopment
A relative error tolerance for a family of Generalized Proximal Point Methods
Abstract

Cited by 2 (0 self)
We propose a new kind of inexact scheme for a family of generalized proximal point methods for the monotone complementarity problem. These methods, studied by Auslender, Teboulle and Ben-Tiba, converge under the sole assumption of the existence of solutions. We prove convergence of our new scheme, as well as discuss its implementability. Key words: maximal monotone operator, nonlinear complementarity problem, interior proximal point algorithm, extragradient method, enlargement of a maximal monotone operator.