Convergence of a block coordinate descent method for nondifferentiable minimization
 J. Optim. Theory Appl.
, 2001
Abstract

Cited by 296 (3 self)
We study the convergence properties of a (block) coordinate descent method applied to minimize a nondifferentiable (nonconvex) function f(x1,...,xN) with certain separability and regularity properties. Assuming that f is continuous on a compact level set, the subsequence convergence of the iterates to a stationary point is shown when either f is pseudoconvex in every pair of coordinate blocks from among N − 1 coordinate blocks or f has at most one minimum in each of N − 2 coordinate blocks. If f is quasiconvex and hemivariate in every coordinate block, then the assumptions of continuity of f and compactness of the level set may be relaxed further. These results are applied to derive new (and old) convergence results for the proximal minimization algorithm, an algorithm of Arimoto and Blahut, and an algorithm of Han. They are applied also to a problem of blind source separation.
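As a concrete illustration (not taken from the paper), a minimal sketch of the cyclic block coordinate descent scheme the paper analyzes, run on a smooth convex example where each block subproblem has a closed-form minimizer; the paper's setting is far more general, and all names here are illustrative:

```python
import numpy as np

def block_coordinate_descent(update_block, x0, n_blocks, n_cycles=100):
    """Cyclic (block) coordinate descent: repeatedly minimize over one
    coordinate block with the others held fixed.  `update_block(x, i)`
    must return the minimizing value of block i given the current x."""
    x = x0.copy()
    for _ in range(n_cycles):
        for i in range(n_blocks):
            x[i] = update_block(x, i)
    return x

# Illustrative example: f(x, y) = (x - 1)^2 + (y + 2)^2 + x*y.
# Each one-dimensional subproblem is quadratic, so the update is exact.
def update_block(x, i):
    if i == 0:
        return 1.0 - x[1] / 2.0   # argmin over x:  2(x - 1) + y = 0
    return -2.0 - x[0] / 2.0      # argmin over y:  2(y + 2) + x = 0

x = block_coordinate_descent(update_block, np.zeros(2), 2)
# converges to the global minimizer (8/3, -10/3)
```

For this strongly convex quadratic the cycle is a contraction, so the iterates converge linearly; the paper's contribution is precisely that convergence survives under much weaker (nondifferentiable, nonconvex) assumptions.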
Efficiency of coordinate descent methods on huge-scale optimization problems
, 2010
Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems
 Mathematical Programming
, 1992
Abstract

Cited by 121 (12 self)
Whether or not the general asymmetric variational inequality problem can be formulated as a differentiable optimization problem has been an open question. This paper gives an affirmative answer to this question. We provide a new optimization problem formulation of the variational inequality problem and show that its objective function is continuously differentiable whenever the mapping involved in the latter problem is continuously differentiable. We also show that under appropriate assumptions on the latter mapping, any stationary point of the optimization problem is a global optimal solution, and hence solves the variational inequality problem. We discuss descent methods for solving the equivalent optimization problem and comment on systems of nonlinear equations and nonlinear complementarity problems.
Convex Nondifferentiable Optimization: A Survey Focussed On The Analytic Center Cutting Plane Method.
, 1999
Abstract

Cited by 76 (2 self)
We present a survey of nondifferentiable optimization problems and methods with special focus on the analytic center cutting plane method. We propose a self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results we give direct proofs based on the properties of the logarithmic function. We also provide an in-depth analysis of two extensions that are very relevant to practical problems: the case of multiple cuts and the case of deep cuts. We further examine extensions to problems including feasible sets partially described by an explicit barrier function, and to the case of nonlinear cuts. Finally, we review several implementation issues and discuss some applications.
An overview of bilevel optimization
, 2007
Abstract

Cited by 65 (3 self)
This paper is devoted to bilevel optimization, a branch of mathematical programming of both practical and theoretical interest. Starting with a simple example, we proceed towards a general formulation. We then present fields of application, focus on solution approaches, and make the connection with MPECs (Mathematical Programs with Equilibrium Constraints).
Alternating minimization and projection methods for nonconvex problems
 arXiv:0801.1780v2 [math.OC]
, 2008
Abstract

Cited by 61 (2 self)
We study the convergence properties of alternating proximal minimization algorithms for (nonconvex) functions of the following type: L(x, y) = f(x) + Q(x, y) + g(y), where f: R^n → R ∪ {+∞} and g: R^m → R ∪ {+∞} are proper lower semicontinuous functions and Q: R^n × R^m → R is a smooth C^1 (finite-valued) function which couples the variables x and y. The algorithm is defined by: given (x_0, y_0) ∈ R^n × R^m, (x_k, y_k) → (x_{k+1}, y_k) → (x_{k+1}, y_{k+1})
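As an illustration (not from the paper), a minimal sketch of the alternating proximal scheme on the simplest instance of this model, f(x) = x^2, Q(x, y) = x*y, g(y) = y^2, where each proximal half-step is quadratic and has a closed form; the step size `lam` and all names are illustrative:

```python
def alternating_prox(x, y, lam=1.0, n_iter=200):
    """Alternating proximal minimization sketch for the model problem
    L(x, y) = x^2 + x*y + y^2  (f(x) = x^2, Q(x, y) = x*y, g(y) = y^2).
    Each half-step adds a proximal term (1/(2*lam))*(. - prev)^2 and,
    being quadratic, has a closed-form minimizer."""
    for _ in range(n_iter):
        # x-step: argmin_x  x^2 + x*y + (1/(2*lam)) * (x - x_prev)^2
        x = (x / lam - y) / (2.0 + 1.0 / lam)
        # y-step: argmin_y  y^2 + x*y + (1/(2*lam)) * (y - y_prev)^2
        y = (y / lam - x) / (2.0 + 1.0 / lam)
    return x, y

x, y = alternating_prox(1.0, -1.0)  # iterates approach the minimizer (0, 0)
```

The proximal terms are what distinguish this from plain alternating minimization; in the paper's nonconvex setting they are essential to the convergence analysis.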
Unconstrained Optimization Reformulations of Variational Inequality Problems
, 1995
Abstract

Cited by 43 (12 self)
Recently, Peng considered a merit function for the variational inequality problem (VIP), which constitutes an unconstrained differentiable optimization reformulation of VIP. In this paper, we generalize the merit function proposed by Peng and study various properties of the generalized function. We call this function the D-gap function. We give conditions under which any stationary point of the D-gap function is a solution of VIP and conditions under which it provides a global error bound for VIP. We also present a descent method for solving VIP based on the D-gap function. Key words: variational inequality problems, unconstrained optimization reformulation, global error bound, descent method.
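For intuition (not from the paper), a minimal sketch of evaluating a D-gap function, assuming the standard construction g_ab = f_a − f_b from two regularized gap functions with parameters 0 < a < b; the map F, the box feasible set, and all names here are illustrative:

```python
import numpy as np

def d_gap(F, project, x, a=0.5, b=2.0):
    """Evaluate the D-gap function g_ab(x) = f_a(x) - f_b(x), 0 < a < b,
    where f_c(x) = max_{y in X} { F(x)^T (x - y) - (c/2)||x - y||^2 }
    and the maximizer has the closed form y_c(x) = P_X(x - F(x)/c).
    g_ab(x) >= 0 for all x, with equality exactly at VI solutions."""
    Fx = F(x)
    def f(c):
        y = project(x - Fx / c)          # y_c(x), the inner maximizer
        d = x - y
        return Fx @ d - 0.5 * c * (d @ d)
    return f(a) - f(b)

# Illustrative VI: F(x) = x - t on the box X = [-1, 1]^2, solution x* = t.
t = np.array([0.3, -0.2])
F = lambda v: v - t
project = lambda v: np.clip(v, -1.0, 1.0)
gap_at_solution = d_gap(F, project, t)                    # 0.0
gap_elsewhere = d_gap(F, project, np.array([1.0, 1.0]))   # strictly positive
```

This evaluation requires no constraints on x itself, which is what makes the D-gap function an unconstrained reformulation of the VIP.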
Modified Projection-Type Methods for Monotone Variational Inequalities
 SIAM Journal on Control and Optimization
, 1996
Abstract

Cited by 40 (9 self)
We propose new methods for solving the variational inequality problem where the underlying function F is monotone. These methods may be viewed as projection-type methods in which the projection direction is modified by a strongly monotone mapping of the form I − αF or, if F is affine with underlying matrix M, of the form I + αM^T, with α ∈ (0, 1). We show that these methods are globally convergent and, if in addition a certain error bound based on the natural residual holds locally, the convergence is linear. Computational experience with the new methods is also reported. Key words. Monotone variational inequalities, projection-type methods, error bound, linear convergence. AMS subject classifications. 49M45, 90C25, 90C33. 1. Introduction. We consider the monotone variational inequality problem of finding an x* ∈ X satisfying F(x*)^T (x − x*) ≥ 0 for all x ∈ X, (1) where X is a closed convex set in R^n and F is a monotone and continuous function from R^n to ...
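For context (not the paper's method), a minimal sketch of the baseline projection iteration x ← P_X(x − γF(x)) that these modified schemes refine, on a strongly monotone affine VI over a box; the matrix, step size, and all names are illustrative:

```python
import numpy as np

def basic_projection_method(F, project, x0, gamma=0.2, n_iter=500):
    """Baseline projection iteration x <- P_X(x - gamma * F(x)).
    The paper modifies the direction via a strongly monotone mapping
    such as I - alpha*F; this sketch shows only the unmodified scheme,
    which already converges when F is strongly monotone and gamma is
    small enough."""
    x = x0.copy()
    for _ in range(n_iter):
        x = project(x - gamma * F(x))
    return x

# VI on X = [0, 1]^2 with the strongly monotone affine map F(x) = M x + q.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
F = lambda x: M @ x + q
project = lambda z: np.clip(z, 0.0, 1.0)
x_star = basic_projection_method(F, project, np.zeros(2))
# solution (0.5, 0): F(x*) = (0, 1.5), and x2 is active at its lower bound
```

Here the iteration map is a contraction (the eigenvalues of I − γM are 0.8 and 0.4), so the iterates converge linearly to the unique solution.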
On the Effectiveness of Projection Methods for Convex Feasibility Problems with Linear Inequality Constraints
Abstract

Cited by 33 (17 self)
The effectiveness of projection methods for solving systems of linear inequalities is investigated. It is shown that they often have a computational advantage over alternatives that have been proposed for solving the same problem and that this makes them successful in many real-world applications. This is supported by experimental evidence provided in this paper on problems of various sizes (up to tens of thousands of unknowns satisfying up to hundreds of thousands of constraints) and by a discussion of the demonstrated efficacy of projection methods in numerous scientific publications and commercial patents (dealing with problems that can have over a billion unknowns and a similar number of constraints).
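As a simple member of this family of methods (an illustrative sketch, not the paper's algorithm), cyclic projection onto the half-spaces of a system A x ≤ b: each violated inequality is enforced by an orthogonal projection, and the sweep repeats; the example system and all names are illustrative:

```python
import numpy as np

def cyclic_projections(A, b, x0, n_sweeps=200):
    """Find a point satisfying A x <= b by cyclically projecting onto
    the half-spaces {x : a_i^T x <= b_i}.  The projection of x onto a
    violated half-space moves x by (a_i^T x - b_i)/||a_i||^2 along -a_i."""
    x = x0.astype(float).copy()
    for _ in range(n_sweeps):
        for a, bi in zip(A, b):
            viol = a @ x - bi
            if viol > 0:                       # project only if violated
                x = x - (viol / (a @ a)) * a   # orthogonal projection
    return x

# Illustrative system: x + y <= 1, x >= 0, y >= 0 (a triangle).
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
x = cyclic_projections(A, b, np.array([2.0, 2.0]))
```

Each projection touches only one row of A, which is why such methods scale to the very large, sparse systems the paper discusses.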
A new projection method for variational inequality problems
 SIAM J. Control Optim
, 1999
Abstract

Cited by 32 (10 self)
We propose a new projection algorithm for solving the variational inequality problem, where the underlying function is continuous and satisfies a certain generalized monotonicity assumption (e.g., it can be pseudomonotone). The method is simple and admits a nice geometric interpretation. It consists of two steps. First, we construct an appropriate hyperplane which strictly separates the current iterate from the solutions of the problem. This procedure requires a single projection onto the feasible set and employs an Armijo-type line search along a feasible direction. Then the next iterate is obtained as the projection of the current iterate onto the intersection of the feasible set with the half-space containing the solution set. Thus, in contrast with most other projection-type methods, only two projection operations per iteration are needed. The method is shown to be globally convergent to a solution of the variational inequality problem under minimal assumptions. Preliminary computational experience is also reported. Key words. variational inequalities, projection methods, pseudomonotone maps
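A simplified sketch of this two-projection idea (the paper projects onto the intersection of the feasible set with the half-space; projecting sequentially, as below, is a simplification, and the stopping tolerance, line-search constant, and test problem are all illustrative):

```python
import numpy as np

def hyperplane_projection_method(F, project, x, sigma=0.5, n_iter=100):
    """Sketch of a two-projection scheme: one projection plus an
    Armijo-type backtracking search yields a point z whose hyperplane
    {u : <F(z), u - z> = 0} separates x from the solution set; the next
    iterate projects x over that hyperplane and then back onto X."""
    for _ in range(n_iter):
        y = project(x - F(x))              # first projection
        r = x - y
        if r @ r < 1e-24:                  # residual ~ 0: x solves the VI
            break
        eta, z = 1.0, y                    # start the search at z = y
        while F(z) @ r < sigma * (r @ r):  # Armijo-type backtracking
            eta *= 0.5
            z = x - eta * r
        Fz = F(z)                          # second projection, onto X,
        x = project(x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz)  # after H
    return x

# Illustrative strongly monotone VI: F(x) = x - t on X = [-1, 1]^2;
# its unique solution is x* = t (t lies inside the box).
t = np.array([0.3, -0.2])
x_star = hyperplane_projection_method(lambda v: v - t,
                                      lambda v: np.clip(v, -1.0, 1.0),
                                      np.array([1.0, 1.0]))
```

The key economy matches the abstract: each iteration uses exactly two projections onto the feasible set, plus the cheap hyperplane projection in closed form.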