Results 1–10 of 16
A truly globally convergent Newton-type method for the monotone nonlinear complementarity problem
 SIAM Journal on Optimization
"... Abstract. The Josephy–Newton method for solving a nonlinear complementarity problem consists of solving, possibly inexactly, a sequence of linear complementarity problems. Under appropriate regularity assumptions, this method is known to be locally (superlinearly) convergent. To enlarge the domain o ..."
Cited by 17 (15 self)
Abstract. The Josephy–Newton method for solving a nonlinear complementarity problem consists of solving, possibly inexactly, a sequence of linear complementarity problems. Under appropriate regularity assumptions, this method is known to be locally (superlinearly) convergent. To enlarge the domain of convergence of the Newton method, some globalization strategy based on a chosen merit function is typically used. However, to ensure global convergence to a solution, some additional restrictive assumptions are needed. These assumptions imply boundedness of level sets of the merit function and often even (global) uniqueness of the solution. We present a new globalization strategy for monotone problems which is not based on any merit function. Our linesearch procedure utilizes the regularized Newton direction and the monotonicity structure of the problem to force global convergence by means of a (computationally explicit) projection step which reduces the distance to the solution set of the problem. The resulting algorithm is truly globally convergent in the sense that the subproblems are always solvable, and the whole sequence of iterates converges to a solution of the problem without any regularity assumptions. In fact, the solution set can even be unbounded. Each iteration of the new method has the same order of computational cost as an iteration of the damped Newton method. Under natural assumptions, the local superlinear rate of convergence is also achieved.
Key words: nonlinear complementarity problem, Newton method, proximal point method, projection method, global convergence, superlinear convergence
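The "computationally explicit" projection step this abstract describes admits a short sketch: given the current iterate x and a point z produced by the linesearch, a monotone map F defines a hyperplane through z that separates x from the solution set, and x is projected onto it. All names and the toy data below are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def projection_step(x, z, F):
    """Project the iterate x onto the hyperplane {y : <F(z), y - z> = 0}.
    For a monotone F this hyperplane separates x from the solution set,
    so the step reduces the distance to every solution (sketch only)."""
    Fz = F(z)
    t = np.dot(Fz, x - z) / np.dot(Fz, Fz)  # signed multiple of the normal F(z)
    return x - t * Fz                        # projection of x onto the hyperplane

# Toy monotone affine map F(x) = A x + b with A positive semidefinite.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([-2.0, -1.0])
F = lambda x: A @ x + b

x = np.array([3.0, 3.0])   # current iterate
z = np.array([1.5, 2.0])   # hypothetical linesearch point
x_new = projection_step(x, z, F)
```

By construction, x_new satisfies ⟨F(z), x_new − z⟩ = 0, i.e., it lies exactly on the separating hyperplane.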
A Combined Smoothing and Regularization Method for Monotone Second-Order Cone Complementarity Problems
 SIAM Journal on Optimization
, 2003
"... The SecondOrder Cone Complementarity Problem (SOCCP) is a wide class of problems containing the Nonlinear Complementarity Problem (NCP) and the SecondOrder Cone Programming Problem (SOCP). Recently, Fukushima, Luo and Tseng extended some merit functions and their smoothing functions for NCP to SOC ..."
Cited by 16 (4 self)
The Second-Order Cone Complementarity Problem (SOCCP) is a wide class of problems containing the Nonlinear Complementarity Problem (NCP) and the Second-Order Cone Programming Problem (SOCP). Recently, Fukushima, Luo and Tseng extended some merit functions and their smoothing functions for the NCP to the SOCCP. Moreover, they derived computable formulas for the Jacobians of the smoothing functions and gave conditions for the Jacobians to be invertible. In this paper, we propose a globally and quadratically convergent algorithm, based on smoothing and regularization methods, for solving the monotone SOCCP. In particular, we study strong semismoothness and Jacobian consistency, which play an important role in establishing quadratic convergence of the algorithm. Furthermore, we examine the effectiveness of the algorithm by means of numerical experiments.
Key words: second-order cone, complementarity problem, smoothing method, regularization method
AMS subject classifications: 90C33, 65K05
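As background for this abstract, the second-order cone itself admits a closed-form projection. The sketch below shows that standard formula; it is generic background, not the paper's smoothing-regularization algorithm.

```python
import numpy as np

def proj_soc(z):
    """Euclidean projection onto the second-order cone
    K = {(z0, zbar) : ||zbar|| <= z0} (standard closed form)."""
    z0, zbar = z[0], z[1:]
    nz = np.linalg.norm(zbar)
    if nz <= z0:                 # already inside the cone
        return z.copy()
    if nz <= -z0:                # inside the polar cone: project to the origin
        return np.zeros_like(z)
    t = (z0 + nz) / 2.0          # otherwise land on the cone's boundary
    return np.concatenate(([t], (t / nz) * zbar))

print(proj_soc(np.array([0.0, 3.0, 4.0])))  # [2.5 1.5 2. ]
```

The projected point satisfies ||z̄|| = z0 exactly, i.e., it sits on the boundary of the cone.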
Some methods based on the D-gap function for solving monotone variational inequalities
 Computational Optimization and Applications
, 2000
"... Abstract. The Dgap function has been useful in developing unconstrained descent methods for solving strongly monotone variational inequality problems. We show that the Dgap function has certain properties that are useful also for monotone variational inequality problems with bounded feasible set. ..."
Cited by 5 (3 self)
Abstract. The D-gap function has been useful in developing unconstrained descent methods for solving strongly monotone variational inequality problems. We show that the D-gap function has certain properties that are useful also for monotone variational inequality problems with a bounded feasible set. Accordingly, we develop two unconstrained methods based on them that are similar in spirit to a feasible method of Zhu and Marcotte based on the regularized-gap function. We further discuss a third method based on applying the D-gap function to a regularized problem. Preliminary numerical experience is also reported.
Keywords: monotone variational inequalities, implicit Lagrangian, D-gap function, stationary point, descent methods
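Over a box-shaped feasible set the D-gap function mentioned above can be evaluated in closed form, since the maximizer in each regularized gap function is a projection. The following is a minimal sketch; the parameter values and the toy map are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def d_gap(x, F, lo, hi, alpha=0.5, beta=2.0):
    """D-gap function g(x) = f_alpha(x) - f_beta(x), 0 < alpha < beta, where
    f_c(x) = max_{y in X} <F(x), x - y> - (c/2)||x - y||^2 is the regularized
    gap function. Over a box X = [lo, hi] the maximizer is the projection
    y_c = Proj_X(x - F(x)/c), so f_c is computable in closed form."""
    Fx = F(x)
    def f(c):
        y = np.clip(x - Fx / c, lo, hi)          # closed-form maximizer on a box
        return Fx @ (x - y) - 0.5 * c * np.dot(x - y, x - y)
    return f(alpha) - f(beta)

# Toy strongly monotone example on X = [0, 2]^2: F(x) = x - (1, 1) has the
# unique VI solution x* = (1, 1), where the D-gap function vanishes.
F = lambda x: x - np.array([1.0, 1.0])
lo, hi = np.zeros(2), 2.0 * np.ones(2)
print(d_gap(np.array([1.0, 1.0]), F, lo, hi))        # 0.0 at the solution
print(d_gap(np.array([2.0, 0.0]), F, lo, hi) > 0.0)  # True away from it
```

This nonnegativity, with zero exactly at solutions, is what makes the D-gap function an unconstrained merit function.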
A Hybrid Josephy-Newton Method For Solving Box Constrained Variational Inequality Problems Via The D-Gap Function
"... . A box constrained variational inequality problem can be reformulated as an unconstrained minimization problem through the Dgap function. Some basic properties of the affine variational inequality subproblems in the classical JosephyNewton method are studied. A hybrid JosephyNewton method is the ..."
Cited by 4 (1 self)
A box constrained variational inequality problem can be reformulated as an unconstrained minimization problem through the D-gap function. Some basic properties of the affine variational inequality subproblems in the classical Josephy-Newton method are studied. A hybrid Josephy-Newton method is then proposed for minimizing the D-gap function. Under suitable conditions, the algorithm is shown to be globally convergent and locally quadratically convergent. Some numerical results are also presented.
Key words: Variational inequality problem, box constraints, D-gap function, Newton's method, unconstrained optimization, global convergence, quadratic convergence
Solving Box Constrained Variational Inequalities By Using The Natural Residual With D-Gap Function Globalization
, 1997
"... . We present a new method for the solution of the box constrained variational inequality problem, BVIP for short. Basically, this method is a nonsmooth Newton method applied to a reformulation of BVIP as a system of nonsmooth equations involving the natural residual. The method is globalized by usin ..."
Cited by 4 (2 self)
We present a new method for the solution of the box constrained variational inequality problem, BVIP for short. Basically, this method is a nonsmooth Newton method applied to a reformulation of the BVIP as a system of nonsmooth equations involving the natural residual. The method is globalized by using the D-gap function. We show that the proposed algorithm is globally and fast locally convergent. Moreover, if the problem is described by an affine function, the algorithm has a finite termination property. Numerical results for some large-scale variational inequality problems are reported.
Key words: Variational inequality problem, mixed complementarity problem, natural residual, D-gap function, Newton's method, global convergence, quadratic convergence, finite termination
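The natural residual this reformulation is built on has a simple closed form over a box. A minimal sketch follows; the toy map and box are illustrative assumptions, and this is not the paper's Newton implementation.

```python
import numpy as np

def natural_residual(x, F, lo, hi):
    """Natural residual of a box-constrained VI:
    r(x) = x - Proj_[lo, hi](x - F(x)).
    x solves the BVIP exactly when r(x) = 0; the nonsmooth Newton method
    described in the abstract is applied to the system r(x) = 0."""
    return x - np.clip(x - F(x), lo, hi)

# Toy box X = [0, 2]^2 with F(x) = x - (3, -1): the VI solution is the
# projection of the unconstrained root onto the box, x* = (2, 0).
F = lambda x: x - np.array([3.0, -1.0])
lo, hi = np.zeros(2), 2.0 * np.ones(2)
r = natural_residual(np.array([2.0, 0.0]), F, lo, hi)  # zero at the solution
```

Since r is nonsmooth only where the projection switches faces of the box, it is a natural target for semismooth Newton methods.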
Smoothing-Nonsmooth Reformulations of Variational Inequality Problems
 Preprint, School of Mathematics, the University of New South
, 1998
"... It has long been known that variational inequality problems can be reformulated as nonsmooth equations. Recently, locally highorder convergent nonsmooth Newton methods for nonsmooth equations have been well established via the concept of semismoothness. In this paper, we focus our discussions on a ..."
Cited by 2 (1 self)
It has long been known that variational inequality problems can be reformulated as nonsmooth equations. Recently, locally high-order convergent nonsmooth Newton methods for nonsmooth equations have been well established via the concept of semismoothness. In this paper, we focus our discussions on a way of globalizing nonsmooth Newton methods based on a smoothing-nonsmooth reformulation of nonsmooth equations. Various properties of the reformulated functions will be investigated and Newton-type methods will be applied to solve the reformulated systems.
1 Introduction
Constructing smoothing functions to approximate nonsmooth functions has a long history in the mathematical programming field. In this paper we will restrict our study to the smoothing functions of those nonsmooth functions arising from variational inequality problems. The variational inequality problem (VIP for abbreviation) is to find x* ∈ X such that
(x − x*)ᵀ F(x*) ≥ 0 for all x ∈ X, (1.1)
where X ...
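A generic example of such a smoothing function, not the paper's specific construction, is the classical smoothing of the nonsmooth NCP residual min(a, b): for μ > 0 the formula below is smooth, and it recovers min(a, b) exactly as μ → 0.

```python
import numpy as np

def smoothed_min(a, b, mu):
    """Smooth approximation of min(a, b):
    phi_mu(a, b) = ((a + b) - sqrt((a - b)^2 + 4 mu^2)) / 2.
    For mu = 0 this is exactly min(a, b); for mu > 0 it is differentiable
    everywhere (a standard smoothing used in VI/NCP reformulations)."""
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2))

a, b = 2.0, 0.5
for mu in (1.0, 0.1, 0.001):
    print(smoothed_min(a, b, mu))  # approaches min(a, b) = 0.5 as mu shrinks
```

Driving μ to zero along the Newton iterations is the usual way such smoothings are combined with nonsmooth reformulations.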
A New Proximal-Based Globalization Strategy For The Josephy-Newton Method For Variational Inequalities
"... this paper we assume that C is closed convex, F is continuously dierentiable and monotone (i.e., hF (x) F (y); x yi 0 for all x; y 2 < ), and the solution set of VIP(F; C) is nonempty ..."
Cited by 2 (2 self)
In this paper we assume that C is closed and convex, F is continuously differentiable and monotone (i.e., ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ ℝⁿ), and the solution set of VIP(F, C) is nonempty
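The monotonicity condition quoted in this excerpt can be checked numerically on sample points; a minimal sketch follows (the sample map, sample size, and tolerance are illustrative assumptions, and a finite sample can only refute, never prove, monotonicity).

```python
import numpy as np

def is_monotone_sample(F, points, tol=1e-12):
    """Check <F(x) - F(y), x - y> >= 0 on all sampled pairs (x, y).
    Passing the check is necessary, not sufficient, for monotonicity."""
    return all(
        np.dot(F(x) - F(y), x - y) >= -tol
        for x in points for y in points
    )

# An affine map F(x) = A x is monotone exactly when A + A^T is positive
# semidefinite; here the symmetric part of A is the identity, so the
# skew-symmetric part cancels in the inner product.
A = np.array([[1.0, -2.0], [2.0, 1.0]])
F = lambda x: A @ x
rng = np.random.default_rng(0)
pts = [rng.standard_normal(2) for _ in range(20)]
print(is_monotone_sample(F, pts))  # True
```

By contrast, F(x) = −x fails the check on any two distinct points, since ⟨−x + y, x − y⟩ = −‖x − y‖².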
Stochastic Approximation Approaches to the Stochastic Variational Inequality Problem
, 2007
"... H. ..."
Two Methods Based on the D-Gap Function for Solving Monotone Variational Inequalities
, 1999
"... The implicit Lagrangian and, more generally, the Dgap function have been useful in developing unconstrained descent methods for solving strongly monotone variational inequality problems. We show that these merit functions have certain properties that are useful also for monotone variational inequal ..."
Cited by 1 (0 self)
The implicit Lagrangian and, more generally, the D-gap function have been useful in developing unconstrained descent methods for solving strongly monotone variational inequality problems. We show that these merit functions have certain properties that are useful also for monotone variational inequality problems with a bounded feasible set. Accordingly, we develop two unconstrained methods based on them that are similar in spirit to a feasible method of Zhu and Marcotte based on the regularized-gap function. We further discuss a third method based on applying the D-gap function to a regularized problem. Preliminary numerical experience is also reported.
Key Words: Monotone variational inequalities, implicit Lagrangian, D-gap function, stationary point, descent methods