Results 1–3 of 3
Practical Aspects of the Moreau-Yosida Regularization I: Theoretical Properties
, 1994
Abstract

Cited by 49 (2 self)
When computing the infimal convolution of a convex function f with the squared norm, one obtains the so-called Moreau-Yosida regularization of f. Among other things, this function has a Lipschitzian gradient. We investigate some more of its properties relevant for optimization. Our main result concerns second-order differentiability and is as follows. Under assumptions that are quite reasonable in optimization, the Moreau-Yosida regularization is twice differentiable if and only if f is twice differentiable as well. In the course of our development, we give some results of general interest in convex analysis. In particular, we establish a primal-dual relationship between the remainder terms in the first-order development of a convex function and its conjugate.
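As a concrete illustration (not taken from the paper), the Moreau-Yosida regularization described above is e_lam f(x) = min_u { f(u) + ||x - u||^2 / (2*lam) }, and for f(u) = |u| it reduces in closed form to the Huber function. A minimal numerical sketch, assuming NumPy; the function name `moreau_envelope` and the brute-force grid minimization are illustrative choices, not from the abstract:

```python
import numpy as np

def moreau_envelope(f, x, lam=1.0, grid=None):
    # e_lam f(x) = min_u f(u) + ||x - u||^2 / (2*lam),
    # approximated here by minimizing over a fine grid around x.
    if grid is None:
        grid = np.linspace(x - 10, x + 10, 100001)
    return np.min(f(grid) + (grid - x) ** 2 / (2 * lam))

# For f(u) = |u| the envelope is the Huber function:
#   e_lam f(x) = x^2 / (2*lam)   if |x| <= lam
#              = |x| - lam / 2   otherwise
lam = 1.0
for x in [-3.0, -0.5, 0.0, 0.7, 2.0]:
    huber = x**2 / (2 * lam) if abs(x) <= lam else abs(x) - lam / 2
    assert abs(moreau_envelope(np.abs, x, lam) - huber) < 1e-6
```

The smoothing effect the abstract refers to is visible here: |u| is nondifferentiable at 0, while the Huber envelope is differentiable everywhere with a gradient that is Lipschitz with constant 1/lam.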
A Hybrid Approximate Extragradient-Proximal Point Algorithm Using the Enlargement of a Maximal Monotone Operator
, 1999
Abstract

Cited by 23 (16 self)
We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter [2]) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of the tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng [35]...
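The extragradient step the abstract builds on (a predictor step followed by a corrector step that re-evaluates the operator at the predicted point) can be sketched on a toy monotone operator. This illustrates only the classical extragradient iteration, not the paper's hybrid method or its enlargement-based tolerance criteria; the rotation operator and step size below are illustrative assumptions:

```python
import numpy as np

# A maximal monotone operator with its unique zero at the origin:
# T(z) = A z with A skew-symmetric (a 90-degree rotation). Plain
# forward steps z - t*T(z) spiral outward for this operator, while
# the extragradient iteration converges.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = lambda z: A @ z

def extragradient(z0, t=0.5, iters=200):
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_bar = z - t * T(z)      # predictor (forward) step
        z = z - t * T(z_bar)      # corrector step, using T at z_bar
    return z

z = extragradient([1.0, 1.0])
assert np.linalg.norm(z) < 1e-3  # converged to the zero of T
```

For this operator the map z ↦ z - t*T(z - t*T(z)) is a contraction for small enough t, which is exactly why the corrector evaluation at the predicted point matters; a single forward step has spectral radius greater than 1.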
A Comparison of Rates of Convergence of Two Inexact Proximal Point Algorithms
, 2000
Abstract
We compare the linear rate of convergence estimates for two inexact proximal point methods. The first one is the classical inexact scheme introduced by Rockafellar, for which we obtain a slightly better estimate than the one given in [16]. The second one is the hybrid inexact proximal point approach introduced in [25, 22]. The advantage of the hybrid methods is that they use more constructive and less restrictive tolerance criteria in the inexact solution of subproblems, while preserving all the favorable properties of the classical method, including global convergence and local linear rate of convergence under standard assumptions. In this paper, we obtain a linear convergence estimate for the hybrid algorithm [22], which is better than the one for the classical method [16], even if our improved estimate is used for the latter.