Results 1 - 10 of 10
Message-passing for graph-structured linear programs: Proximal methods and rounding schemes
, 2008
Cited by 30 (1 self)
The problem of computing a maximum a posteriori (MAP) configuration is a central computational challenge associated with Markov random fields. A line of work has focused on “tree-based” linear programming (LP) relaxations for the MAP problem. This paper develops a family of superlinearly convergent algorithms for solving these LPs, based on proximal minimization schemes using Bregman divergences. As with standard message-passing on graphs, the algorithms are distributed and exploit the underlying graphical structure, and so scale well to large problems. Our algorithms have a double-loop character, with the outer loop corresponding to the proximal sequence, and an inner loop of cyclic Bregman divergences used to compute each proximal update. Different choices of the Bregman divergence lead to conceptually related but distinct LP-solving algorithms. We establish convergence guarantees for our algorithms, and illustrate their performance via some simulations. We also develop two classes of graph-structured rounding schemes, randomized and deterministic, for obtaining integral configurations from the LP solutions. Our deterministic rounding schemes use a “reparameterization” property of our algorithms, so that when the LP solution is integral, the MAP solution can be obtained even before the LP solver converges to the optimum. We also propose a graph-structured randomized rounding scheme that applies to iterative LP-solving algorithms in general. We analyze the performance of our rounding schemes, giving bounds on the number of iterations required, when the LP is integral, for the rounding schemes to obtain the MAP solution. These bounds are expressed in terms of the strength of the potential functions and the energy gap, which measures how well the integral MAP solution is separated from other integral configurations. We also report simulations comparing these rounding schemes.
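The proximal-Bregman idea in the abstract above can be illustrated in miniature. The sketch below is not the paper's message-passing algorithm; it is a generic Bregman proximal step, with the KL divergence as the proximal term, applied to a toy linear program over the probability simplex. The cost vector `c`, step size `eta`, and iteration count are illustrative assumptions.

```python
import numpy as np

def kl_prox_step(x, c, eta):
    """One Bregman proximal step for min <c, x> over the simplex,
    using the KL divergence as the proximal term:
        argmin_x  <c, x> + (1/eta) * KL(x || x_prev),
    which admits the closed-form multiplicative update below."""
    y = x * np.exp(-eta * c)
    return y / y.sum()

# toy LP over the simplex: the optimum puts all mass on the smallest cost
c = np.array([0.7, 0.2, 1.0])
x = np.full(3, 1.0 / 3.0)        # start at the barycenter
for _ in range(200):
    x = kl_prox_step(x, c, eta=1.0)
# the iterates concentrate on argmin_i c_i (here, index 1)
```

Because the KL proximal term keeps every iterate strictly positive and normalized, no explicit projection onto the simplex is ever needed, mirroring the way Bregman geometry adapts the update to the constraint set.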
Generalized bundle methods
 SIAM Journal on Optimization
, 1998
Cited by 26 (13 self)
Abstract. We study a class of generalized bundle methods for which the stabilizing term can be any closed convex function satisfying certain properties. This setting covers several algorithms from the literature that have so far been regarded as distinct. Under different hypotheses on the stabilizing term and/or the function to be minimized, we prove finite termination, asymptotic convergence, and finite convergence to an optimal point, with or without limits on the number of serious steps and/or requiring the proximal parameter to go to infinity. The convergence proofs leave a high degree of freedom in the crucial implementation features of the algorithm, i.e., the management of the bundle of subgradients (β-strategy) and of the proximal parameter (t-strategy). We extensively exploit a dual view of bundle methods, which are shown to be a dual ascent approach to one nonlinear problem in an appropriate dual space, where nonlinear subproblems are approximately solved at each step with an inner linearization approach. This allows us to precisely characterize the changes in the subproblems during the serious steps, since the dual problem is not tied to the local concept of ε-subdifferential. For some of the proofs, a generalization of inf-compactness, called ∗-compactness, is required; this concept is related to that of asymptotically well-behaved functions. Key words. nondifferentiable optimization, bundle methods. AMS subject classifications. 90C25
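A minimal sketch of the classical proximal-bundle scheme that this abstract generalizes (not the paper's generalized framework): cutting planes f(y_i) + g_i(x − y_i) are collected into a bundle, and the next trial point minimizes their pointwise max plus a quadratic stabilizing term, the classical choice of stabilizer. The 1-D test function, the grid-based master-problem solver, and the simple serious-step test are illustrative simplifications.

```python
import numpy as np

def f(x):  return abs(x)                     # nonsmooth convex test function
def df(x): return 1.0 if x >= 0 else -1.0    # a valid subgradient of |x|

def bundle_method(x0, t=1.0, iters=30):
    """Minimal proximal bundle sketch: maintain a bundle of cutting
    planes and minimize their max plus the quadratic stabilizer
    (1/(2t))*(x - center)^2 around the current stability center."""
    center = x0
    cuts = [(x0, f(x0), df(x0))]             # (point, value, subgradient)
    for _ in range(iters):
        # solve the stabilized master problem on a fine grid (1-D sketch)
        grid = np.linspace(center - 5, center + 5, 20001)
        model = np.max([fv + g * (grid - y) for (y, fv, g) in cuts], axis=0)
        x = grid[np.argmin(model + (grid - center) ** 2 / (2 * t))]
        cuts.append((x, f(x), df(x)))        # enrich the bundle
        if f(x) < f(center) - 1e-12:         # serious step: move the center
            center = x                       # otherwise: null step
    return center

x_star = bundle_method(x0=3.0)               # minimizer of |x| is 0
```

The serious/null-step dichotomy visible here is exactly what the abstract's β-strategy and t-strategy manage in full generality: which cuts to keep, and how to update the proximal parameter t.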
Approximate iterations in Bregman-function-based proximal algorithms
 Math. Program
, 1998
Cited by 23 (2 self)
This paper establishes convergence of generalized Bregman-function-based proximal point algorithms when the iterates are computed only approximately. The problem being solved is modeled as a general maximal monotone operator, and need not reduce to minimization of a function. The accuracy conditions on the iterates resemble those required for the classical "linear" proximal point algorithm, but are slightly stronger; they should be easier to verify or enforce in practice than conditions given in earlier analyses of approximate generalized proximal methods. Subject to these practically enforceable accuracy restrictions, convergence is obtained under the same conditions currently established for exact Bregman-function-based methods.
Convergence of Proximal-Like Algorithms
 SIAM Journal on Optimization
, 1997
Cited by 20 (1 self)
We analyze proximal methods based on entropy-like distances for the minimization of convex functions subject to nonnegativity constraints. We prove global convergence results for the methods with approximate minimization steps and an ergodic convergence result for the case of finding a zero of a maximal monotone operator. We also consider linearly constrained convex problems and establish a quadratic convergence rate result for linear programs. Our analysis allows us to simplify and extend the available convergence results for these methods.
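The role of an entropy-like distance under nonnegativity constraints can be sketched in one dimension. This is a toy illustration, not the paper's algorithm: the convex function f(x) = (x+1)^2 (whose constrained minimizer is x* = 0), the KL-type distance x·log(x/y) − x + y, the parameter `mu`, and the bisection solver are all assumptions made for the example.

```python
import numpy as np

def fprime(x):
    return 2.0 * (x + 1.0)   # f(x) = (x+1)^2; min over x >= 0 is at 0

def entropy_prox_step(y, mu):
    """One entropy-like proximal step for min f(x) s.t. x >= 0:
        argmin_{x>0}  f(x) + mu * (x*log(x/y) - x + y).
    The logarithmic term keeps every iterate strictly positive, so the
    constraint never has to be enforced explicitly.  Stationarity,
        f'(x) + mu*log(x/y) = 0,
    is solved here by bisection (the left-hand side is increasing)."""
    lo, hi = 1e-16 * y, y     # f' > 0 here, so the root lies in (0, y)
    while hi - lo > 1e-12 * hi:
        mid = 0.5 * (lo + hi)
        if fprime(mid) + mu * np.log(mid / y) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x = 1.0
for _ in range(100):
    x = entropy_prox_step(x, mu=1.0)
# iterates stay strictly positive and approach the constrained minimizer 0
```

The interior iterates approaching a boundary solution is the typical behavior these entropy-like proximal methods are designed to produce.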
Augmented Lagrangian methods and proximal point methods for convex optimization
 Investigación Operativa
, 1999
Cited by 8 (1 self)
We present a review of the classical proximal point method for finding zeroes of maximal monotone operators, and its application to augmented Lagrangian methods, including a rather complete convergence analysis. Next we discuss the generalized proximal point methods, either with Bregman distances or ϕ-divergences, which in turn give rise to a family of generalized augmented Lagrangians, as smooth in the primal variables as the data functions are. We give a sketch of the convergence analysis for the case of the proximal point method with Bregman distances for variational inequality problems. The difficulty with these generalized augmented Lagrangians lies in establishing optimality of the cluster points of the primal sequence, which is rather immediate in the classical case. In connection with this issue we present two results. First we prove optimality of such cluster points under a strict complementarity assumption (basically, that no tight constraint is redundant at any solution). In the absence of this assumption, we establish an ergodic convergence result, namely optimality of the cluster points of a sequence of weighted averages of the primal sequence given by the method, improving over weaker ergodic results previously known. Finally we discuss similar ergodic results for the augmented Lagrangian method with ϕ-divergences and give the explicit formulae of generalized augmented Lagrangian methods for different choices of the Bregman distances and the ϕ-divergences.
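The classical augmented Lagrangian (method of multipliers) that the abstract reviews can be shown on a toy equality-constrained quadratic, min (1/2)‖x‖² subject to aᵀx = b, where the inner minimization is a linear system. The problem data `a`, `b`, penalty `rho`, and iteration count are made up for illustration; the analytic solution is x* = b·a/‖a‖².

```python
import numpy as np

def augmented_lagrangian(a, b, rho=10.0, iters=50):
    """Classical method of multipliers for
        min (1/2)||x||^2  s.t.  a^T x = b.
    Each outer iteration minimizes
        L_rho(x, u) = (1/2)||x||^2 + u*(a^T x - b) + (rho/2)*(a^T x - b)^2
    exactly (a linear system), then takes the standard dual ascent step
    on the multiplier u."""
    n = len(a)
    u = 0.0
    for _ in range(iters):
        # stationarity of L_rho in x:  (I + rho * a a^T) x = (rho*b - u) * a
        x = np.linalg.solve(np.eye(n) + rho * np.outer(a, a), (rho * b - u) * a)
        u += rho * (a @ x - b)        # multiplier (dual) update
    return x, u

a = np.array([1.0, 1.0])
x, u = augmented_lagrangian(a, b=1.0)
# x approaches the analytic solution a/||a||^2 = [0.5, 0.5]
```

The dual update on `u` is exactly a proximal point step on the dual problem, which is the equivalence between augmented Lagrangians and proximal point methods that the review builds on.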
A relative error tolerance for a family of Generalized Proximal Point Methods
Cited by 2 (0 self)
We propose a new kind of inexact scheme for a family of generalized proximal point methods for the monotone complementarity problem. These methods, studied by Auslender, Teboulle and Ben-Tiba, converge under the sole assumption of existence of solutions. We prove convergence of our new scheme and discuss its implementability. Key Words. maximal monotone operator, nonlinear complementarity problem, interior proximal point algorithm, extragradient method, enlargement of a maximal monotone operator.
An Infeasible Interior Proximal Method for Convex Programming Problems with Linear Constraints
, 2000
Cited by 2 (0 self)
In this paper, we propose an infeasible interior proximal method for solving a convex programming problem with linear constraints. The interior proximal method proposed by Auslender and Haddou is a proximal method using a distance-like barrier function, and it has a global convergence property under mild assumptions. However, this method is applicable only to problems whose feasible region has an interior point, because an initial point for the method must be chosen from the interior of the feasible region. The algorithm proposed in this paper is based on the idea underlying the infeasible interior-point method for linear programming. This algorithm is applicable to problems whose feasible region may not have a nonempty interior, and it can be started from an arbitrary initial point. We establish global convergence of the proposed algorithm under appropriate assumptions.
A Proximal Point Algorithm with ϕ-Divergence to Quasiconvex Programming
, 2006
Cited by 2 (1 self)
We use the proximal point method with the ϕ-divergence given by ϕ(t) = t − log t − 1 for the minimization of quasiconvex functions subject to nonnegativity constraints. We establish that the sequence generated by our algorithm is well-defined, in the sense that it exists and is not cyclical. Without any level-boundedness assumption on the objective function, we obtain that the sequence converges to a stationary point. We also prove that when the regularization parameters go to zero, the sequence converges to an optimal solution.
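For the specific divergence named in the abstract, ϕ(t) = t − log t − 1 with D(x, y) = Σᵢ yᵢ ϕ(xᵢ/yᵢ), the proximal step has a closed form when the objective is linear. The toy problem below (min cᵀx over x ≥ 0 with c > 0, not the paper's quasiconvex setting) and the choices of `c` and `lam` are illustrative assumptions.

```python
import numpy as np

def phi_prox_step(y, c, lam):
    """One proximal step with the phi-divergence phi(t) = t - log(t) - 1,
        D(x, y) = sum_i y_i * phi(x_i / y_i),
    for the toy problem min c^T x s.t. x >= 0 (c > 0, minimizer x = 0).
    Since phi'(t) = 1 - 1/t, the stationarity condition per coordinate is
        c_i + lam * (1 - y_i / x_i) = 0,
    giving the closed-form, automatically positive update below."""
    return lam * y / (c + lam)

c = np.array([1.0, 2.0])
x = np.array([1.0, 1.0])
for _ in range(200):
    x = phi_prox_step(x, c, lam=1.0)
# every iterate is strictly positive, and x -> 0, the constrained minimum
```

Each coordinate contracts by the constant factor lam/(cᵢ + lam), so the iterates converge geometrically to the boundary solution while remaining strictly interior, which is the mechanism that lets ϕ-divergence methods avoid explicit constraint handling.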
An Extension of the Proximal Point Method for Quasiconvex Minimization
, 2008
In this paper we propose an extension of the proximal point method to solve minimization problems with quasiconvex objective functions on the Euclidean space and the nonnegative orthant. For the unconstrained minimization problem, assuming that the function is bounded from below and lower semicontinuous, we prove that the iterates {x^k} given by 0 ∈ ∂(f(·) + (λ_k/2)‖· − x^{k−1}‖²)(x^k) are well defined, and if, in addition, f is quasiconvex, then {f(x^k)} is decreasing and {x^k} converges to a point of U := {x ∈ ℝ^n : f(x) ≤ inf_{j≥0} f(x^j)}, assumed nonempty. Under the assumption that the sequence of parameters is bounded and f is continuous, it is proved that {x^k} converges to a generalized critical point of f. Furthermore, if {λ_k} converges to zero and the iterates {x^k} are global minimizers of the regularized subproblems f(·) + (λ_k/2)‖· − x^{k−1}‖², the sequence converges to an optimal solution. For the quasiconvex minimization on the nonnegative orthant, using the same premises as the unconstrained case and using a general proximal distance (which includes
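The iteration x^k = argmin f(x) + (λ_k/2)‖x − x^{k−1}‖² from the abstract can be sketched on a made-up quasiconvex (nonconvex, nonsmooth) 1-D function; a fine grid search stands in for the exact global minimizer of each regularized subproblem, matching the abstract's global-minimizer case.

```python
import numpy as np

def f(x):
    # sqrt(|x - 2|): quasiconvex and nonsmooth but not convex; minimizer x* = 2
    return np.sqrt(np.abs(x - 2.0))

def prox_point(x0, lam=1.0, iters=40):
    """Proximal point sketch for a quasiconvex objective: each iterate
    globally minimizes (here via a fine grid) the regularized subproblem
        x_k = argmin_x  f(x) + (lam/2) * (x - x_{k-1})^2 ."""
    x = x0
    grid = np.linspace(-10.0, 10.0, 200001)
    for _ in range(iters):
        x = grid[np.argmin(f(grid) + 0.5 * lam * (grid - x) ** 2)]
    return x

x_star = prox_point(x0=8.0)
# the iterates decrease f monotonically and approach the minimizer x* = 2
```

Because the quadratic term dominates the flat tails of the square-root objective, each subproblem has a well-defined global minimizer even though f is nonconvex, which is what makes the extension to quasiconvex functions workable.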