Results 1-10 of 72
On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
, 1992
A Modified Forward-Backward Splitting Method For Maximal Monotone Mappings
 SIAM J. Control Optim.
, 1998
Abstract

Cited by 51 (0 self)
We consider the forward-backward splitting method for finding a zero of the sum of two maximal monotone mappings. This method is known to converge when the inverse of the forward mapping is strongly monotone. We propose a modification to this method, in the spirit of the extragradient method for monotone variational inequalities, under which the method converges assuming only the forward mapping is monotone and (Lipschitz) continuous on some closed convex subset of its domain. The modification entails an additional forward step and a projection step at each iteration. Applications of the modified method to decomposition in convex programming and monotone variational inequalities are discussed.
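The modified iteration described in this abstract (a forward-backward step followed by an extra forward step and a projection) can be sketched for a monotone variational inequality. This is a minimal illustration, not the paper's implementation: the skew-symmetric map F (monotone and Lipschitz but not strongly monotone, so the plain forward-backward method is not guaranteed to converge), the feasible set, and the step size are all assumptions chosen for the example.

```python
import numpy as np

# Monotone, 1-Lipschitz, but NOT strongly monotone forward map:
# F(x) = M x + q with M skew-symmetric (illustrative choice).
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([-1.0, 1.0])
F = lambda x: M @ x + q

proj = lambda x: np.maximum(x, 0.0)    # projection onto C = nonnegative orthant
lam = 0.5                              # step size < 1/L, where L = ||M|| = 1

x = np.zeros(2)
for _ in range(1000):
    y = proj(x - lam * F(x))           # forward-backward step
    x = proj(y - lam * (F(y) - F(x)))  # additional forward step + projection

# x approaches the point where F vanishes inside C, here (1, 1)
```

The extra forward step re-evaluates F at the trial point y, which is what allows convergence under mere monotonicity rather than strong monotonicity of the inverse forward mapping.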
FIXED-POINT CONTINUATION FOR ℓ1-MINIMIZATION: METHODOLOGY AND CONVERGENCE
Abstract

Cited by 47 (9 self)
We present a framework for solving the large-scale ℓ1-regularized convex minimization problem: min ‖x‖1 + µf(x). Our approach is based on two powerful algorithmic ideas: operator-splitting and continuation. Operator-splitting results in a fixed-point algorithm for any given scalar µ; continuation refers to approximately following the path traced by the optimal value of x as µ increases. In this paper, we study the structure of optimal solution sets; prove finite convergence for important quantities; and establish q-linear convergence rates for the fixed-point algorithm applied to problems with f(x) convex, but not necessarily strictly convex. The continuation framework, motivated by our convergence results, is demonstrated to facilitate the construction of practical algorithms.
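The fixed-point algorithm with continuation can be sketched as follows. This is a minimal illustration under assumptions: f(x) = ½‖Ax − b‖² is one common smooth instance (not mandated by the abstract), and the matrix A, data b, and µ schedule below are made up for the example. The fixed-point operator is soft-shrinkage composed with a gradient step, and continuation sweeps µ upward toward its target value.

```python
import numpy as np

def shrink(y, t):
    """Soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# Illustrative data for f(x) = 0.5 * ||A x - b||^2.
A = np.array([[1.0, 0.5], [0.0, 1.0]])
b = np.array([3.0, -0.1])
L = np.linalg.norm(A.T @ A, 2)        # Lipschitz constant of grad f

x = np.zeros(2)
for mu in [1.0, 4.0, 16.0]:           # continuation: mu increases to its target
    tau = 1.0 / (mu * L)              # step size for the gradient part
    for _ in range(500):
        grad = A.T @ (A @ x - b)
        x = shrink(x - tau * mu * grad, tau)

# x is (approximately) a fixed point: x = shrink(x - tau*mu*grad f(x), tau)
```

Each intermediate µ produces a sparser, easier subproblem whose solution warm-starts the next, which is the practical benefit the abstract attributes to continuation.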
A Hybrid Approximate Extragradient-Proximal Point Algorithm Using The Enlargement Of A Maximal Monotone Operator
, 1999
Abstract

Cited by 29 (17 self)
We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter [2]) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng [35]...
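The hybrid idea — solve the proximal subproblem only approximately, then take an extragradient-type step from the inexact proximal point — can be sketched as follows. This is an illustration under assumptions, not the paper's algorithm (which works with a general enlargement and explicit tolerance criteria): the affine monotone operator T, the proximal parameter, and the crude inner loop are all made up for the example.

```python
import numpy as np

# Illustrative monotone operator T(x) = Q x - c; its zero is Q^{-1} c.
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([2.0, 1.0])
T = lambda x: Q @ x - c

lam = 0.4                              # proximal parameter
x = np.zeros(2)
for _ in range(100):
    # Approximately solve the proximal subproblem 0 = lam*T(y) + y - x,
    # i.e. (I + lam*Q) y = x + lam*c, by a crude fixed-point inner loop
    # (this is the relaxed-tolerance inexact solve).
    y = x.copy()
    for _ in range(30):
        y = x + lam * c - lam * (Q @ y)
    # Extragradient-type step using the inexact proximal point y.
    x = x - lam * T(y)

# x approaches the zero of T, here (1, 1)
```

The point of the hybrid scheme is that the inner loop need not be solved to high accuracy: its error shrinks along with the outer error, so the overall iteration still converges.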
Convergence rates in forward-backward splitting
, 1989
Abstract

Cited by 21 (3 self)
Forward-backward splitting methods provide a range of approaches to solving large-scale optimization problems and variational inequalities in which structure conducive to decomposition can be utilized. Apart from special cases where the forward step is absent and a version of the proximal point algorithm comes out, efforts at evaluating the convergence potential of such methods have so far relied on Lipschitz properties and strong monotonicity, or inverse strong monotonicity, of the mapping involved in the forward step, the perspective mainly being that of projection algorithms. Here convergence is analyzed by a technique that allows properties of the mapping in the backward step to be brought in as well. For the first time in such a general setting, global and local contraction rates are derived, moreover in a form making it possible to determine the optimal step size relative to certain constants associated with the given problem. Insights are thereby gained into the effects of shifting strong monotonicity between the forward and backward mappings when a splitting is selected.
A Variable-Penalty Alternating Directions Method for Convex Optimization
Abstract

Cited by 19 (0 self)
We study a generalized version of the method of alternating directions as applied to the minimization of the sum of two convex functions subject to linear constraints. The method consists of solving consecutively in each iteration two optimization problems which contain in the objective function both Lagrangian and proximal terms. The minimizers determine the new proximal terms and a simple update of the Lagrangian terms follows. We prove a convergence theorem which extends existing results by relaxing the assumption of uniqueness of minimizers. Another novelty is that we allow penalty matrices, and these may vary per iteration. This can be beneficial in applications, since it allows additional tuning of the method to the problem and can lead to faster convergence relative to fixed penalties. As an application, we derive a decomposition scheme for block angular optimization and present computational results on a class of dual block angular problems. Keywords: parallel computing, alter...
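Alternating directions with a per-iteration penalty can be sketched as follows. This is a minimal illustration under assumptions: the lasso-style instance, the data, and the residual-balancing constants are made up, and a scalar penalty stands in for the paper's more general (and varying) penalty matrices, of which it is the simplest case.

```python
import numpy as np

# Illustrative problem: min 0.5*||A x - b||^2 + gamma*||z||_1  s.t.  x = z.
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
b = np.array([1.0, 2.0, 0.5])
gamma = 0.1

shrink = lambda y, t: np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

rho = 1.0                                # penalty, allowed to vary per iteration
x = np.zeros(2)
z = np.zeros(2)
u = np.zeros(2)                          # scaled dual (Lagrangian) variable
for _ in range(200):
    # First subproblem: quadratic objective + Lagrangian and proximal terms.
    x = np.linalg.solve(A.T @ A + rho * np.eye(2), A.T @ b + rho * (z - u))
    z_old = z
    # Second subproblem: proximal map of the l1 term.
    z = shrink(x + u, gamma / rho)
    u = u + x - z                        # simple update of the Lagrangian term
    r = np.linalg.norm(x - z)            # primal residual
    s = rho * np.linalg.norm(z - z_old)  # dual residual
    if r > 10 * s:                       # vary the penalty to balance residuals
        rho, u = 2 * rho, u / 2
    elif s > 10 * r:
        rho, u = rho / 2, 2 * u

# x and z agree at convergence and jointly solve the problem
```

Rescaling u whenever rho changes keeps the scaled dual variable consistent; the adaptive penalty is one concrete form of the per-iteration tuning the abstract credits with faster convergence than a fixed penalty.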
Parallel Function Decomposition and Space Decomposition Methods with Applications to Optimization, Splitting and Domain Decomposition
 BEIJING MATHEMATICS
, 1992
Abstract

Cited by 17 (15 self)
Some methods which we call function decomposition methods and space decomposition methods are developed. These methods deal with a convex programming problem, i.e. a minimization problem of a convex function over a space or a convex set of a space. If the function can be decomposed into the sum of convex functions or the space can be decomposed into the sum of subspaces, then parallel methods can be used for the minimization. In practical problems, there are many different ways to decompose a function and to decompose a space. Many partial differential equations can be formulated as a minimization problem in some way. Therefore, we get some parallel methods for partial differential equations. The method is not restricted to linear problems; nonlinear problems are naturally included in the theory. In the paper, the applications to linear and quasilinear self-adjoint elliptic equations, to strongly nonlinear elliptic equations like the p-Laplace equation, to the Stokes equation a...
Scaling MPE Inference for Constrained Continuous Markov Random Fields with Consensus Optimization
Abstract

Cited by 12 (12 self)
Probabilistic graphical models are powerful tools for analyzing constrained, continuous domains. However, finding most-probable explanations (MPEs) in these models can be computationally expensive. In this paper, we improve the scalability of MPE inference in a class of graphical models with piecewise-linear and piecewise-quadratic dependencies and linear constraints over continuous domains. We derive algorithms based on a consensus-optimization framework and demonstrate their superior performance over the state of the art. We show empirically that in a large-scale voter-preference modeling problem our algorithms scale linearly in the number of dependencies and constraints.
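The consensus-optimization framework referred to above can be sketched in its simplest form. This is an illustration, not the paper's inference algorithm: the local objectives below are made-up scalar quadratics rather than piecewise potentials with linear constraints. Each dependency keeps a local copy of the variables, local updates run independently (hence the scalability), and a consensus variable ties the copies together.

```python
import numpy as np

# Illustrative local objectives f_i(x) = 0.5*(x - a_i)^2; the consensus
# minimizer of sum_i f_i is the mean of the a_i.
a = np.array([1.0, 4.0, 7.0])
rho = 1.0

x = np.zeros(3)          # local copies, one per term (updatable in parallel)
u = np.zeros(3)          # scaled dual variables
z = 0.0                  # consensus (global) variable
for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)   # independent local updates
    z = np.mean(x + u)                      # consensus averaging
    u = u + x - z                           # dual update

# z approaches mean(a) = 4.0
```

In the MPE setting each local subproblem would be a small piecewise-linear or piecewise-quadratic minimization instead of a closed-form quadratic, but the parallel-update-then-average structure is the same.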
General Projective Splitting Methods for Sums of Maximal Monotone Operators
, 2007
"... RUTCOR ..."