## CONVERGENCE ANALYSIS OF DEFLECTED CONDITIONAL APPROXIMATE SUBGRADIENT METHODS (2009)

Citations: 7 (4 self)

### BibTeX

```bibtex
@misc{DAntonio09convergenceanalysis,
  author = {Giacomo D'Antonio and Antonio Frangioni},
  title  = {Convergence Analysis of Deflected Conditional Approximate Subgradient Methods},
  year   = {2009}
}
```

### Abstract

Subgradient methods for nondifferentiable optimization benefit from deflection, i.e., defining the search direction as a combination of the previous direction and the current subgradient. In the constrained case they also benefit from projection of the search direction onto the feasible set prior to computing the steplength, that is, from the use of conditional subgradient techniques. However, combining the two techniques is not straightforward, especially if an inexact oracle is available which can only compute approximate function values and subgradients. We present a convergence analysis of several different variants, both conceptual and implementable, of approximate conditional deflected subgradient methods. Our analysis extends the available results in the literature by using the main stepsize rules presented so far, while allowing deflection in a more flexible way. Furthermore, to allow for (diminishing/square summable) rules where the stepsize is tightly controlled a priori, we propose a new class of deflection-restricted approaches where it is the deflection parameter, rather than the stepsize, which is dynamically adjusted using the "target value" of the optimization sequence. For both Polyak-type and diminishing/square summable stepsizes, we propose a "correction" of the standard formula which shows that, in the inexact case, knowledge about the error computed by the oracle (which is available in several practical applications) can be exploited in order to strengthen the convergence properties of the method. The analysis allows for several variants of the algorithm; at least one of them is likely to show numerical performances similar to those of "heavy ball" subgradient methods, popular within backpropagation approaches to train neural networks, while possessing stronger convergence properties.
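To make the ingredients concrete, the following is a minimal sketch (not the paper's exact algorithm) of a deflected, projected subgradient iteration with a Polyak-type steplength driven by a known target value. The function names, the fixed deflection parameter `alpha`, and the toy 1-norm problem over a box are all illustrative assumptions; the paper's variants adjust the deflection and stepsize parameters dynamically and handle inexact oracles.

```python
import numpy as np

def deflected_projected_subgradient(f, subgrad, project, x0, f_target,
                                    alpha=0.7, max_iter=500):
    """Illustrative sketch: at each step the direction is a deflected
    combination of the current subgradient and the previous direction,
    the steplength is Polyak-type (using the target value f_target),
    and the iterate is projected back onto the feasible set."""
    x = project(np.asarray(x0, dtype=float))
    d = np.zeros_like(x)                      # previous search direction
    best_x, best_f = x.copy(), f(x)
    for _ in range(max_iter):
        g = subgrad(x)
        d = alpha * g + (1.0 - alpha) * d     # deflection
        nd = float(np.dot(d, d))
        if nd == 0.0:
            break
        step = max(f(x) - f_target, 0.0) / nd # Polyak-type stepsize
        x = project(x - step * d)             # projected step
        fx = f(x)
        if fx < best_f:                       # track the best iterate
            best_f, best_x = fx, x.copy()
    return best_x, best_f

# Toy problem: minimize f(x) = |x1| + |x2| over the box [-1, 2]^2,
# whose optimal value 0 serves as an exact target value.
f = lambda x: float(np.abs(x).sum())
subgrad = lambda x: np.sign(x)                # a subgradient of the 1-norm
project = lambda x: np.clip(x, -1.0, 2.0)
x_star, f_star = deflected_projected_subgradient(
    f, subgrad, project, x0=[2.0, 2.0], f_target=0.0)
```

On this toy instance the iterates zigzag across the origin but the best recorded value shrinks geometrically, which is the "heavy ball"-like behavior the abstract alludes to; with an inexact oracle, `f(x)` and `subgrad(x)` would only be approximate and the stepsize formula would need the correction the paper proposes.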