Results 1–10 of 50
Convergence of a block coordinate descent method for nondifferentiable minimization
 J. Optim. Theory Appl.
, 2001
Abstract

Cited by 113 (2 self)
We study the convergence properties of a (block) coordinate descent method applied to minimize a nondifferentiable (nonconvex) function f(x1,...,xN) with certain separability and regularity properties. Assuming that f is continuous on a compact level set, the subsequence convergence of the iterates to a stationary point is shown when either f is pseudoconvex in every pair of coordinate blocks from among N − 1 coordinate blocks or f has at most one minimum in each of N − 2 coordinate blocks. If f is quasiconvex and hemivariate in every coordinate block, then the assumptions of continuity of f and compactness of the level set may be relaxed further. These results are applied to derive new (and old) convergence results for the proximal minimization algorithm, an algorithm of Arimoto and Blahut, and an algorithm of Han. They are applied also to a problem of blind source separation. Key words: block coordinate descent, nondifferentiable minimization, stationary point, Gauss–Seidel method, convergence, quasiconvex functions,
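The cyclic (Gauss–Seidel) scheme analyzed in this paper can be sketched on a toy problem. The coupled quadratic f(x, y) = x² + xy + y² − 3x below is an illustrative assumption, not an example from the paper (which allows nonsmooth f); it is chosen so that each block minimization has a closed form.

```python
# Minimal sketch of cyclic block coordinate descent (Gauss-Seidel order)
# on the hypothetical smooth test function f(x, y) = x^2 + x*y + y^2 - 3*x.
# Each block subproblem is minimized exactly in closed form.

def block_coordinate_descent(x=0.0, y=0.0, iters=50):
    for _ in range(iters):
        x = (3.0 - y) / 2.0   # argmin over x with y fixed: 2x + y - 3 = 0
        y = -x / 2.0          # argmin over y with x fixed: x + 2y = 0
    return x, y

x_star, y_star = block_coordinate_descent()
# The iterates approach the stationary point (2, -1) linearly,
# with the error contracting by a factor of 4 per full cycle.
```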
Convex Nondifferentiable Optimization: A Survey Focussed On The Analytic Center Cutting Plane Method.
, 1999
Abstract

Cited by 51 (2 self)
We present a survey of nondifferentiable optimization problems and methods, with special focus on the analytic center cutting plane method. We propose a self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results we give direct proofs based on the properties of the logarithmic function. We also provide an in-depth analysis of two extensions that are very relevant to practical problems: the case of multiple cuts and the case of deep cuts. We further examine extensions to problems including feasible sets partially described by an explicit barrier function, and to the case of nonlinear cuts. Finally, we review several implementation issues and discuss some applications.
Efficiency of coordinate descent methods on huge-scale optimization problems
 SIAM Journal on Optimization
Abstract

Cited by 40 (0 self)
In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial updates of the decision variables. For these methods, we prove global estimates for the rate of convergence. Surprisingly enough, for certain classes of objective functions, our results are better than the standard worst-case bounds for deterministic algorithms. We present constrained and unconstrained versions of the method, and an accelerated variant. Our numerical tests confirm the high efficiency of this technique on problems of very large size.
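The random partial update idea can be sketched on a small quadratic. The 3×3 system below is a made-up example, and coordinates are sampled uniformly, a simplification of the paper's scheme (which weights probabilities by the coordinate Lipschitz constants); each step exactly minimizes the objective along one coordinate.

```python
import random

# Sketch of randomized coordinate descent on a hypothetical quadratic
# f(x) = 0.5 x^T A x - b^T x.  Each iteration updates one randomly
# chosen coordinate with step 1/L_i, where L_i = A[i][i] is the
# coordinate-wise Lipschitz constant of the gradient.

def random_coordinate_descent(A, b, iters=5000, seed=0):
    rng = random.Random(seed)
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(n)   # uniform sampling (a simplification)
        grad_i = sum(A[i][j] * x[j] for j in range(n)) - b[i]
        x[i] -= grad_i / A[i][i]   # exact minimization along coordinate i
    return x

A = [[2.0, 0.5, 0.0], [0.5, 2.0, 0.5], [0.0, 0.5, 2.0]]
b = [1.0, 1.0, 1.0]
x = random_coordinate_descent(A, b)   # approaches the solution of A x = b
```

Only the i-th row of A is touched per iteration, which is the point of the method: the per-step cost is a small fraction of a full gradient evaluation.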
Unconstrained Optimization Reformulations of Variational Inequality Problems
, 1995
Abstract

Cited by 25 (9 self)
Recently, Peng considered a merit function for the variational inequality problem (VIP) which constitutes an unconstrained differentiable optimization reformulation of the VIP. In this paper, we generalize the merit function proposed by Peng and study various properties of the generalized function. We call this function the D-gap function. We give conditions under which any stationary point of the D-gap function is a solution of the VIP and conditions under which it provides a global error bound for the VIP. We also present a descent method for solving the VIP based on the D-gap function. Key words: variational inequality problems, unconstrained optimization reformulation, global error bound, descent method.
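A D-gap function of the kind discussed here can be sketched as the difference g(x) = f_α(x) − f_β(x), 0 < α < β, of two regularized gap functions f_c(x) = max over y in X of F(x)ᵀ(x − y) − (c/2)‖x − y‖², whose inner maximizer is the projection P_X(x − F(x)/c). The box feasible set and affine map F(x) = x − a below are illustrative assumptions, not from the paper.

```python
# Sketch of a D-gap function for a VIP on the box X = [0, 2]^n with a
# hypothetical affine map F(x) = x - a.  The inner maximizer of the
# regularized gap function f_c has the closed form P_X(x - F(x)/c).

def project_box(z, lo=0.0, hi=2.0):
    return [min(max(v, lo), hi) for v in z]

def F(x, a=(3.0, -1.0)):
    return [xi - ai for xi, ai in zip(x, a)]

def regularized_gap(x, c):
    Fx = F(x)
    y = project_box([xi - fi / c for xi, fi in zip(x, Fx)])
    return sum(fi * (xi - yi) - 0.5 * c * (xi - yi) ** 2
               for fi, xi, yi in zip(Fx, x, y))

def d_gap(x, alpha=0.5, beta=2.0):   # requires 0 < alpha < beta
    return regularized_gap(x, alpha) - regularized_gap(x, beta)

# d_gap(x) >= 0 for all x, and it vanishes exactly at VIP solutions;
# here the solution is x* = (2, 0), the projection of a onto the box.
```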
Modified Projection-Type Methods for Monotone Variational Inequalities
 SIAM Journal on Control and Optimization
, 1996
Abstract

Cited by 25 (9 self)
We propose new methods for solving the variational inequality problem where the underlying function F is monotone. These methods may be viewed as projection-type methods in which the projection direction is modified by a strongly monotone mapping of the form I − αF or, if F is affine with underlying matrix M, of the form I + αM^T, with α ∈ (0, 1). We show that these methods are globally convergent and that, if in addition a certain error bound based on the natural residual holds locally, the convergence is linear. Computational experience with the new methods is also reported. Key words: monotone variational inequalities, projection-type methods, error bound, linear convergence. AMS subject classifications: 49M45, 90C25, 90C33. 1. Introduction. We consider the monotone variational inequality problem of finding an x* ∈ X satisfying F(x*)^T (x − x*) ≥ 0 for all x ∈ X, (1) where X is a closed convex set in ℝ^n and F is a monotone and continuous function from ℝ^n to ...
A new projection method for variational inequality problems
 SIAM J. Control Optim.
, 1999
Abstract

Cited by 20 (11 self)
We propose a new projection algorithm for solving the variational inequality problem, where the underlying function is continuous and satisfies a certain generalized monotonicity assumption (e.g., it can be pseudomonotone). The method is simple and admits a nice geometric interpretation. It consists of two steps. First, we construct an appropriate hyperplane which strictly separates the current iterate from the solutions of the problem. This procedure requires a single projection onto the feasible set and employs an Armijo-type line search along a feasible direction. Then the next iterate is obtained as the projection of the current iterate onto the intersection of the feasible set with the half-space containing the solution set. Thus, in contrast with most other projection-type methods, only two projection operations per iteration are needed. The method is shown to be globally convergent to a solution of the variational inequality problem under minimal assumptions. Preliminary computational experience is also reported. Key words: variational inequalities, projection methods, pseudomonotone maps
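The two-projection scheme described above can be sketched concretely. The box feasible set, the affine monotone map F(x) = x − a, and the parameters σ, γ below are illustrative assumptions, not values from the paper; the structure (one projection to get a direction, an Armijo-type search to build the separating hyperplane through z, one projection to get the next iterate) follows the abstract.

```python
# Sketch of a two-projection hyperplane method for a VIP on X = [0, 2]^n
# with a hypothetical monotone map F(x) = x - a.  Per iteration: one
# projection gives the residual direction r, an Armijo-type line search
# finds z whose hyperplane {u : F(z)^T (u - z) = 0} separates the iterate
# from the solution set, and one more projection returns to X.

def project(z, lo=0.0, hi=2.0):
    return [min(max(v, lo), hi) for v in z]

def F(x, a=(3.0, -1.0)):
    return [xi - ai for xi, ai in zip(x, a)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def solve_vip(x, iters=60, sigma=0.5, gamma=0.5):
    for _ in range(iters):
        p = project([xi - fi for xi, fi in zip(x, F(x))])
        r = [xi - pi for xi, pi in zip(x, p)]   # natural residual
        if dot(r, r) < 1e-20:                   # x is (numerically) a solution
            break
        t = 1.0                                 # Armijo-type backtracking along -r
        while dot(F([xi - t * ri for xi, ri in zip(x, r)]), r) < sigma * dot(r, r):
            t *= gamma
        z = [xi - t * ri for xi, ri in zip(x, r)]
        Fz = F(z)
        step = dot(Fz, [xi - zi for xi, zi in zip(x, z)]) / dot(Fz, Fz)
        x = project([xi - step * fi for xi, fi in zip(x, Fz)])  # second projection
    return x

x_sol = solve_vip([0.0, 0.0])   # approaches the solution (2, 0)
```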
A Note On A Globally Convergent Newton Method For Solving Monotone Variational Inequalities
 Operations Research Letters
, 1987
Abstract

Cited by 17 (1 self)
It is well known (see Pang and Chan [7]) that Newton's method, applied to strongly monotone variational inequalities, is locally and quadratically convergent. In this paper we show that Newton's method yields a descent direction for a nonconvex, nondifferentiable merit function, even in the absence of strong monotonicity. This result is then used to modify Newton's method into a globally convergent algorithm by introducing a line-search strategy. Furthermore, under strong monotonicity, (i) the optimal face is attained after a finite number of iterations and (ii) the step size is eventually fixed to the value one, resulting in the usual Newton step. Computational results are presented. Keywords: mathematical programming, variational inequalities, Newton's method. 1. Problem formulation and basic definitions. Let Φ be a nonempty, convex and compact subset of ℝ^n. Consider the variational inequality problem consisting in finding x ...
A New Merit Function and a Descent Method for Semidefinite Complementarity Problems
, 1997
Abstract

Cited by 15 (3 self)
Recently, Tseng extended several merit functions for the nonlinear complementarity problem to the semidefinite complementarity problem (SDCP) and investigated various properties of those functions. In this paper, we propose a new merit function for the SDCP based on the squared Fischer–Burmeister function and show that it has some favorable properties. In particular, we give conditions under which the function provides a global error bound for the SDCP and conditions under which it has bounded level sets. We also present a derivative-free method for solving the SDCP and prove its global convergence under suitable assumptions.
Complementarity And Related Problems: A Survey
, 1998
Abstract

Cited by 14 (0 self)
This survey gives an introduction to some of the recent developments in the field of complementarity and related problems. After presenting two typical examples and the basic existence and uniqueness results, we focus on some new trends for solving nonlinear complementarity problems. Extensions to mixed complementarity problems, variational inequalities and mathematical programs with equilibrium constraints are also discussed.
Alternating minimization and projection methods for nonconvex problems
 arXiv:0801.1780v2 [math.OC]
, 2008
Abstract

Cited by 14 (2 self)
We study the convergence properties of alternating proximal minimization algorithms for (nonconvex) functions of the following type: L(x, y) = f(x) + Q(x, y) + g(y), where f: ℝ^n → ℝ ∪ {+∞} and g: ℝ^m → ℝ ∪ {+∞} are proper lower semicontinuous functions and Q: ℝ^n × ℝ^m → ℝ is a smooth C^1 (finite-valued) function which couples the variables x and y. The algorithm is defined by: given (x_0, y_0) ∈ ℝ^n × ℝ^m, (x_k, y_k) → (x_{k+1}, y_k) → (x_{k+1}, y_{k+1})
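The alternating update (x_k, y_k) → (x_{k+1}, y_k) → (x_{k+1}, y_{k+1}) can be sketched with the illustrative smooth choices f(x) = x², g(y) = y², Q(x, y) = (x − y − 1)² and proximal parameters λ = μ = 1 (the paper's setting allows nonsmooth f and g; these quadratics are assumptions chosen so each proximal step is closed-form).

```python
# Sketch of alternating proximal minimization on the hypothetical
# L(x, y) = x^2 + (x - y - 1)^2 + y^2 with proximal parameters 1:
#   x_{k+1} = argmin_x f(x) + Q(x, y_k) + 0.5*(x - x_k)^2
#   y_{k+1} = argmin_y g(y) + Q(x_{k+1}, y) + 0.5*(y - y_k)^2

def alternating_proximal(x=0.0, y=0.0, iters=200):
    for _ in range(iters):
        x = (2.0 * (y + 1.0) + x) / 5.0   # solves 2x + 2(x - y - 1) + (x - x_k) = 0
        y = (2.0 * (x - 1.0) + y) / 5.0   # solves 2y - 2(x - y - 1) + (y - y_k) = 0
    return x, y

x_lim, y_lim = alternating_proximal()
# The iterates converge to (1/3, -1/3), the unique stationary point of L.
```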