Results 1–2 of 2
Practical Aspects of the Moreau-Yosida Regularization I: Theoretical Properties
, 1994
Abstract

Cited by 65 (2 self)
When computing the infimal convolution of a convex function f with the squared norm, one obtains the so-called Moreau-Yosida regularization of f. Among other things, this function has a Lipschitzian gradient. We investigate some more of its properties, relevant for optimization. Our main result concerns second-order differentiability and is as follows. Under assumptions that are quite reasonable in optimization, the Moreau-Yosida regularization is twice differentiable if and only if f is twice differentiable as well. In the course of our development, we give some results of general interest in convex analysis. In particular, we establish a primal-dual relationship between the remainder terms in the first-order development of a convex function and its conjugate.
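The construction described in this abstract can be made concrete for a simple case. The following sketch (not from the paper; it assumes f(x) = |x|, whose proximal map is soft thresholding and whose Moreau envelope is the Huber function) evaluates the regularization f_lam(x) = min_y { f(y) + (1/(2 lam)) (x - y)^2 } and its Lipschitzian gradient (x - prox(x)) / lam:

```python
import numpy as np

def prox_abs(x, lam):
    """Proximal operator of f(y) = |y|: soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_envelope_abs(x, lam):
    """Moreau-Yosida regularization of |.|, evaluated through its prox.
    For this choice of f it coincides with the Huber function."""
    p = prox_abs(x, lam)
    return np.abs(p) + (x - p) ** 2 / (2.0 * lam)

def envelope_grad(x, lam):
    """Gradient of the envelope: (x - prox(x)) / lam, Lipschitz with constant 1/lam."""
    return (x - prox_abs(x, lam)) / lam
```

For |x| > lam the envelope equals |x| - lam/2 (e.g. f_1(3) = 2.5), and for |x| <= lam it equals x^2 / (2 lam), smoothing out the kink of |x| at the origin while keeping the same minimizer.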
A Comparison of Rates of Convergence of Two Inexact Proximal Point Algorithms
, 2000
Abstract
We compare the linear rate of convergence estimates for two inexact proximal point methods. The first is the classical inexact scheme introduced by Rockafellar, for which we obtain a slightly better estimate than the one given in [16]. The second is the hybrid inexact proximal point approach introduced in [25, 22]. The advantage of the hybrid methods is that they use more constructive and less restrictive tolerance criteria in the inexact solution of subproblems, while preserving all the favorable properties of the classical method, including global convergence and a local linear rate of convergence under standard assumptions. In this paper, we obtain a linear convergence estimate for the hybrid algorithm of [22] that is better than the one for the classical method [16], even when our improved estimate is used for the latter.
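For orientation, the iteration both papers analyze is the proximal point update x_{k+1} = prox_{lam f}(x_k). The sketch below shows the exact (error-free) version only, applied to f(x) = |x| so that the prox is soft thresholding; the inexact and hybrid tolerance criteria discussed in the abstract are not modeled here:

```python
def soft_threshold(x, lam):
    """Proximal operator of f(y) = |y| in one dimension."""
    sign = 1.0 if x >= 0 else -1.0
    return sign * max(abs(x) - lam, 0.0)

def proximal_point(prox, x0, lam=1.0, iters=20):
    """Classical (exact) proximal point method: x_{k+1} = prox_{lam f}(x_k).
    Each step solves the regularized subproblem exactly."""
    x = x0
    for _ in range(iters):
        x = prox(x, lam)
    return x
```

Starting from x0 = 5 with lam = 1, the iterates shrink by lam per step until reaching the minimizer 0 of |x|; the inexact variants replace the exact prox evaluation with an approximate one satisfying a tolerance criterion, which is where the two rate estimates compared in the paper differ.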