Results 1-10 of 38
On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
, 1992
A Hybrid Projection-Proximal Point Algorithm
, 1998
Abstract

Cited by 49 (19 self)
We propose a modification of the classical proximal point algorithm for finding zeroes of a maximal monotone operator in a Hilbert space. In particular, an approximate proximal point iteration is used to construct a hyperplane which strictly separates the current iterate from the solution set of the problem. This step is then followed by a projection of the current iterate onto the separating hyperplane. All information required for this projection operation is readily available at the end of the approximate proximal step, and therefore this projection entails no additional computational cost. The new algorithm allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems, which yields a more practical framework. Weak global convergence and a local linear rate of convergence are established under suitable assumptions. Additionally, the presented analysis yields an alternative proof of convergence for the exact proximal point method, which allow...
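The separate-then-project scheme described in this abstract can be sketched on a toy affine monotone operator; the operator, step size, and variable names below are our illustration, not the paper's, and the resolvent is solved exactly here rather than approximately:

```python
import numpy as np

# Illustrative affine monotone operator T(z) = z - a, whose unique zero is a.
a = np.array([2.0, -1.0])
def T(z):
    return z - a

def resolvent(x, c):
    # Solves y + c*T(y) = x in closed form for this T: y = (x + c*a)/(1 + c).
    return (x + c * a) / (1.0 + c)

def hybrid_step(x, c):
    """One projection-proximal step: proximal point to get (y, v) with
    v in T(y), then project x onto the hyperplane {z : <v, z - y> = 0},
    which separates x from the solution set by monotonicity of T."""
    y = resolvent(x, c)
    v = T(y)
    if np.allclose(v, 0.0):        # y already solves 0 in T(y)
        return y
    # projection uses only quantities from the proximal step: no extra cost
    return x - (v @ (x - y)) / (v @ v) * v

x = np.array([10.0, 5.0])
for _ in range(50):
    x = hybrid_step(x, c=1.0)
# x converges to the zero of T, i.e. to a
```

For this affine operator with c = 1 each step halves the distance to the solution, so the iterates approach a geometrically.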
An Inexact Hybrid Generalized Proximal Point Algorithm And Some New Results On The Theory Of Bregman Functions
 Mathematics of Operations Research
, 2000
Abstract

Cited by 44 (11 self)
We present a new Bregman-function-based algorithm which is a modification of the generalized proximal point method for solving the variational inequality problem with a maximal monotone operator. The principal advantage of the presented algorithm is that it allows a more constructive error tolerance criterion in solving the proximal point subproblems. Furthermore, we eliminate the assumption of pseudomonotonicity which was, until now, standard in proving convergence for paramonotone operators. Thus we obtain a convergence result which is new even for exact generalized proximal point methods. Finally, we present some new results on the theory of Bregman functions. For example, we show that the standard assumption of convergence consistency is a consequence of the other properties of Bregman functions, and is therefore superfluous.
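A much simpler special case conveys what a Bregman-function-based proximal step looks like: with the negative-entropy Bregman distance, the proximal subproblem for a linear objective over the probability simplex has a closed-form multiplicative update. The objective, step size, and iteration count below are illustrative only; the paper's variational-inequality setting is far more general:

```python
import numpy as np

# min <g, x> over the probability simplex, via entropic proximal steps:
# x+ = argmin <g, x> + (1/c) * D_h(x, x_k),  h = negative entropy,
# which reduces to the multiplicative update x+ ∝ x * exp(-c * g).
g = np.array([0.3, 0.1, 0.7])   # linear cost; minimum at index 1
c = 0.5                          # illustrative step size

x = np.ones(3) / 3               # start at the simplex barycenter
for _ in range(200):
    x = x * np.exp(-c * g)
    x /= x.sum()                 # renormalize onto the simplex
# mass concentrates on the coordinate with the smallest cost
```

Because the Bregman distance is built from the entropy, the iterates stay strictly inside the simplex without any explicit projection, which is the practical appeal of the generalized proximal framework.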
Forcing strong convergence of proximal point iterations in a Hilbert space
, 2000
Abstract

Cited by 41 (6 self)
This paper is concerned with the convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two half-spaces containing the solution set. The additional cost of this extra projection step is essentially negligible since it amounts, at most, to solving a linear system of two equations in two unknowns.
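The projection onto an intersection of two half-spaces mentioned in the abstract indeed reduces, in the worst case, to a 2x2 linear system for the two multipliers. A minimal sketch, assuming the intersection is nonempty and the normals are nonzero (all names are ours):

```python
import numpy as np

def project_two_halfspaces(p, a1, b1, a2, b2):
    """Project p onto {z : a1.z <= b1} ∩ {z : a2.z <= b2}."""
    if a1 @ p <= b1 and a2 @ p <= b2:
        return p                              # already feasible
    # try the projection onto each bounding hyperplane alone
    q1 = p - (a1 @ p - b1) / (a1 @ a1) * a1
    if a2 @ q1 <= b2 + 1e-12:
        return q1
    q2 = p - (a2 @ p - b2) / (a2 @ a2) * a2
    if a1 @ q2 <= b1 + 1e-12:
        return q2
    # both constraints active: solve a 2x2 system for the multipliers
    G = np.array([[a1 @ a1, a1 @ a2],
                  [a2 @ a1, a2 @ a2]])
    r = np.array([a1 @ p - b1, a2 @ p - b2])
    lam = np.linalg.solve(G, r)
    return p - lam[0] * a1 - lam[1] * a2

# example: project [2, 2] onto {x <= 0} ∩ {y <= 0}; both constraints
# are active, and the result is the origin
p = project_two_halfspaces(np.array([2.0, 2.0]),
                           np.array([1.0, 0.0]), 0.0,
                           np.array([0.0, 1.0]), 0.0)
```

This is why the extra step is essentially free: no matter the dimension of the Hilbert space, only inner products and a 2x2 solve are needed.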
Local behavior of an iterative framework for generalized equations with nonisolated solutions
 MATH. PROGRAM., SER. A
, 2002
A Globally Convergent Inexact Newton Method for Systems of Monotone Equations
, 1998
Abstract

Cited by 27 (15 self)
We propose an algorithm for solving systems of monotone equations which combines Newton, proximal point, and projection methodologies. An important property of the algorithm is that the whole sequence of iterates is always globally convergent to a solution of the system without any additional regularity assumptions. Moreover, under standard assumptions the local superlinear rate of convergence is achieved. As opposed to classical globalization strategies for Newton methods, for computing the stepsize we do not use a line search aimed at decreasing the value of some merit function. Instead, a line search in the approximate Newton direction is used to construct an appropriate hyperplane which separates the current iterate from the solution set. This step is followed by projecting the current iterate onto this hyperplane, which ensures global convergence of the algorithm. The computational cost of each iteration of our method is of the same order as that of the classical damped Newton method. The c...
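The framework this abstract describes (Newton direction, then a line search that builds a separating hyperplane, then a projection) can be sketched on a small separable monotone system. The test problem and parameter names are our illustration, not the paper's:

```python
import numpy as np

b = np.array([2.0, 0.0])
def F(x):
    # coordinate-wise monotone map: each component of x**3 + x is increasing
    return x**3 + x - b

def Fprime(x):
    # diagonal Jacobian, so the Newton system is a componentwise division
    return 3.0 * x**2 + 1.0

def solve(x, sigma=1e-4, beta=0.5, tol=1e-10, max_iter=200):
    """Sketch of the Newton / line-search / projection scheme; sigma and
    beta are illustrative line-search parameters."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx / Fprime(x)                      # Newton direction
        # backtrack until <F(x + t*d), -d> >= sigma * t * ||d||^2, so the
        # hyperplane through y = x + t*d separates x from the solution set
        t = 1.0
        while -(F(x + t * d) @ d) < sigma * t * (d @ d):
            t *= beta
        y = x + t * d
        Fy = F(y)
        # project x onto {z : <F(y), z - y> = 0}
        x = x - (Fy @ (x - y)) / (Fy @ Fy) * Fy
    return x

x = solve(np.array([5.0, 5.0]))   # zero of F is at (1, 0)
```

Note the contrast with merit-function globalization: no function value is ever required to decrease; the projection step alone drives the iterates toward the solution set.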
A Variable Metric Proximal Point Algorithm for Monotone Operators
, 1997
Abstract

Cited by 26 (3 self)
The Proximal Point Algorithm (PPA) is a method for solving inclusions of the form 0 ∈ T(z), where T is a monotone operator on a Hilbert space. The algorithm is one of the most powerful and versatile solution techniques for solving variational inequalities, convex programs, and convex-concave minimax problems. It possesses a robust convergence theory for very general problem classes and is the basis for a wide variety of decomposition methods called splitting methods. Yet, the classical PPA typically exhibits slow convergence in many applications. For this reason, acceleration methods for the PPA are of great practical importance. In this paper we propose a variable metric implementation of the proximal point algorithm. In essence, the method is a Newton-like scheme applied to the Moreau-Yosida resolvent of the operator T. In this article, we establish the global and linear convergence of the proposed method. In addition, we characterize the superlinear convergence of ...
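The classical iteration z_{k+1} = (I + c_k T)^{-1} z_k is easy to illustrate for a linear monotone operator, where the resolvent is a linear solve. The ill-conditioned example below (our choice, not the paper's) also exhibits the slow convergence that motivates variable-metric acceleration:

```python
import numpy as np

# PPA for T(z) = A*z with A symmetric positive semidefinite (monotone);
# the unique zero is z = 0.  The tiny second eigenvalue makes the
# corresponding component contract very slowly.
A = np.array([[1.0, 0.0],
              [0.0, 1e-3]])
I2 = np.eye(2)

def ppa(z, c, iters):
    """Classical proximal point iteration z <- (I + c*T)^{-1} z."""
    for _ in range(iters):
        z = np.linalg.solve(I2 + c * A, z)   # resolvent of the linear T
    return z

z = ppa(np.array([1.0, 1.0]), c=1.0, iters=100)
# the well-conditioned component is contracted by 1/2 per step, the
# ill-conditioned one only by 1/1.001, so it barely moves in 100 steps
```

A variable metric (here, rescaling by A itself, in the spirit of the Newton-like scheme the abstract describes) would equalize these contraction rates.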
The LP dual active set algorithm,
 in High-Performance Algorithms and Software in Nonlinear Optimization,
, 1998
Application of the dual active set algorithm to quadratic network optimization
 COMPUT. OPTIM. APPL
, 1993
Abstract

Cited by 10 (1 self)
A new algorithm, the dual active set algorithm, is presented for solving a minimization problem with equality constraints and bounds on the variables. The algorithm identifies the active bound constraints by maximizing an unconstrained dual function in a finite number of iterations. Convergence of the method is established, and it is applied to convex quadratic programming. In its implementable form, the algorithm is combined with the proximal point method. A computational study of large-scale quadratic network problems compares the algorithm to a coordinate ascent method and to conjugate gradient methods for the dual problem. This study shows that combining the new algorithm with the nonlinear conjugate gradient method is particularly effective on difficult network problems from the literature.
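The core idea, maximizing an unconstrained dual to identify which bound constraints are active, can be sketched on a toy separable problem with a single equality constraint. The problem data and the bisection scheme below are our illustration, not the paper's algorithm:

```python
import numpy as np

# Toy separable QP:  min 0.5*||x - c||^2  s.t.  sum(x) = b,  0 <= x <= u.
# Dualizing the equality constraint gives a one-dimensional, unconstrained
# concave dual in the multiplier lam; its maximizer reveals the active bounds.
c = np.array([3.0, -1.0, 0.5])
u = np.array([2.0, 2.0, 2.0])
b = 2.0

def x_of(lam):
    # minimizer of the Lagrangian: clip the unconstrained solution to the bounds
    return np.clip(c + lam, 0.0, u)

def dual_grad(lam):
    # gradient of the dual; monotone nonincreasing in lam
    return b - x_of(lam).sum()

# maximize the dual by bisection on its monotone gradient
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dual_grad(mid) > 0:    # sum(x) too small: increase lam
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
x = x_of(lam)
# at the optimum the first variable sits at its upper bound and the
# others at their lower bound: x = [2, 0, 0]
```

Once the optimal multiplier is found, the clipped coordinates are exactly the active bound constraints, which is the identification property the abstract highlights.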