Results 1–10 of 21
Smoothing Methods for Convex Inequalities and Linear Complementarity Problems
Mathematical Programming, 1993
"... A smooth approximation p(x; ff) to the plus function: maxfx; 0g, is obtained by integrating the sigmoid function 1=(1 + e \Gammaffx ), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization probl ..."
Abstract

Cited by 74 (6 self)
A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^(−αx)), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 515 times for linear inequalities of size 1000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 400 variables, the proposed approach was as much as 85 times faster than Lemke's method. Key Words: Smo...
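Integrating the sigmoid gives the closed form p(x, α) = x + (1/α) log(1 + e^(−αx)), whose derivative in x is exactly the sigmoid of αx. A minimal sketch in plain Python (numerically stabilized via abs(x); illustrative, not the paper's code):

```python
import math

def smooth_plus(x, alpha):
    """Smooth approximation p(x, alpha) to the plus function max(x, 0).

    Closed form of integrating the sigmoid 1/(1 + exp(-alpha*x)):
        p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha*x)).
    Rewritten via abs(x) so the exponent is never positive, which avoids
    overflow for large alpha*|x|.
    """
    return max(x, 0.0) + math.log1p(math.exp(-alpha * abs(x))) / alpha

def sigmoid(t):
    """Logistic function; the x-derivative of smooth_plus is sigmoid(alpha*x)."""
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)
```

The approximation error is largest at x = 0, where p(0, α) = log(2)/α, so it vanishes as α grows, matching the abstract's "high degree of accuracy for α sufficiently large".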
Modified Projection-Type Methods for Monotone Variational Inequalities
SIAM Journal on Control and Optimization, 1996
"... . We propose new methods for solving the variational inequality problem where the underlying function F is monotone. These methods may be viewed as projectiontype methods in which the projection direction is modified by a strongly monotone mapping of the form I \Gamma ffF or, if F is affine with un ..."
Abstract

Cited by 40 (9 self)
We propose new methods for solving the variational inequality problem where the underlying function F is monotone. These methods may be viewed as projection-type methods in which the projection direction is modified by a strongly monotone mapping of the form I − αF or, if F is affine with underlying matrix M, of the form I + αM^T, with α ∈ (0, 1). We show that these methods are globally convergent and, if in addition a certain error bound based on the natural residual holds locally, the convergence is linear. Computational experience with the new methods is also reported. Key words. Monotone variational inequalities, projection-type methods, error bound, linear convergence. AMS subject classifications. 49M45, 90C25, 90C33 1. Introduction. We consider the monotone variational inequality problem of finding an x* ∈ X satisfying F(x*)^T (x − x*) ≥ 0 for all x ∈ X, (1) where X is a closed convex set in R^n and F is a monotone and continuous function from R^n to ...
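As an illustration of projection-type methods for monotone VIPs, here is the classical extragradient scheme of Korpelevich, a close relative of (but not identical to) the modified methods in the paper, on a box constraint set (test problem and step size τ are illustrative):

```python
def project_box(v, lo, hi):
    """Euclidean projection onto the box [lo, hi] (componentwise clip)."""
    return [min(max(t, l), h) for t, l, h in zip(v, lo, hi)]

def extragradient(F, x0, lo, hi, tau=0.1, iters=2000):
    """Extragradient method for the VIP F(x*)^T (x - x*) >= 0 for all x in X:
    a predictor projection at x, then a corrector projection using F at the
    predicted point y."""
    x = list(x0)
    for _ in range(iters):
        y = project_box([xi - tau * fi for xi, fi in zip(x, F(x))], lo, hi)
        x = project_box([xi - tau * fi for xi, fi in zip(x, F(y))], lo, hi)
    return x
```

For the monotone affine map F(x) = (x1 + x2 − 2, x2 − x1) on the box [0, 10]^2, the iterates approach the solution (1, 1), where F vanishes.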
New NCP-Functions and Their Properties
1997
"... . Recently, Luo and Tseng proposed a class of merit functions for the nonlinear complementarity problem (NCP) and showed that it enjoys several interesting properties under some assumptions. In this paper, adopting a similar idea to Luo and Tseng's, we present new merit functions for the NCP, w ..."
Abstract

Cited by 33 (13 self)
Recently, Luo and Tseng proposed a class of merit functions for the nonlinear complementarity problem (NCP) and showed that it enjoys several interesting properties under some assumptions. In this paper, adopting a similar idea to Luo and Tseng's, we present new merit functions for the NCP, which can be decomposed into component functions. We show that these merit functions not only share many properties with the one proposed by Luo and Tseng but also enjoy additional favorable properties owing to their decomposable structure. In particular, we present fairly mild conditions under which these merit functions have bounded level sets. Key words: Nonlinear complementarity problem, NCP-function, merit function, unconstrained optimization reformulation, error bound, bounded level sets. The work of the second and third authors was supported in part by the Scientific Research Grant-in-Aid from the Ministry of Education, Science and Culture, Japan. The work of the second author was also su...
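The paper's new merit functions are not reproduced here, but a standard decomposable example of the same flavor is the Fischer–Burmeister NCP-function φ(a, b) = sqrt(a² + b²) − a − b, which vanishes exactly when a ≥ 0, b ≥ 0 and ab = 0; summing the squared component terms gives a merit function that is zero precisely at NCP solutions (a sketch, not the functions proposed in the paper):

```python
import math

def fischer_burmeister(a, b):
    """NCP-function: phi(a, b) = 0  iff  a >= 0, b >= 0 and a*b = 0."""
    return math.sqrt(a * a + b * b) - a - b

def merit(x, F):
    """Decomposable merit function: a sum of per-component squared terms,
    zero exactly at solutions of the NCP defined by F."""
    Fx = F(x)
    return 0.5 * sum(fischer_burmeister(xi, fi) ** 2 for xi, fi in zip(x, Fx))
```

For example, with F(x) = x − 1 in one dimension, the NCP solution is x* = 1 (where F(x*) = 0), and the merit function vanishes there.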
A hybrid Newton method for solving the variational inequality problem via the D-gap function
1997
"... The variational inequality problem (VIP) can be reformulated as an unconstrained minimization problem through the Dgap function. It is proved that the Dgap function has bounded level sets for the strongly monotone VIP. A hybrid Newtontype method is proposed for minimizing the Dgap function. Unde ..."
Abstract

Cited by 20 (5 self)
The variational inequality problem (VIP) can be reformulated as an unconstrained minimization problem through the D-gap function. It is proved that the D-gap function has bounded level sets for the strongly monotone VIP. A hybrid Newton-type method is proposed for minimizing the D-gap function. Under some conditions, it is shown that the algorithm is globally convergent and locally quadratically convergent. Keywords: Variational inequality problem, D-gap function, Newton's method, unconstrained optimization, global convergence, quadratic convergence. 1 Introduction The variational inequality problem (VIP) is to find a vector x* ∈ X such that ⟨F(x*), y − x*⟩ ≥ 0 for all y ∈ X, (1) where X is a nonempty closed convex subset of R^n, F is a mapping from R^n into itself, and ⟨·, ·⟩ denotes the inner product in R^n. If the constraint set X is the nonnegative orthant in R^n, then the VIP reduces to the complementarity problem (CP). VIPs and CPs have been widely studied in var...
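For a box constraint set the D-gap function can be evaluated in closed form: with the regularized gap f_c(x) = F(x)^T (x − y_c(x)) − (c/2)||x − y_c(x)||² and y_c(x) = P_X(x − F(x)/c), the D-gap function is g_ab(x) = f_a(x) − f_b(x) for 0 < a < b; it is nonnegative everywhere and zero exactly at VIP solutions. A sketch (the parameters a, b and the test problem are illustrative, and the hybrid Newton algorithm itself is not reproduced):

```python
def proj(v, lo, hi):
    """Projection onto the box [lo, hi]."""
    return [min(max(t, l), h) for t, l, h in zip(v, lo, hi)]

def regularized_gap(x, F, lo, hi, c):
    """f_c(x) = F(x)^T (x - y_c(x)) - (c/2) ||x - y_c(x)||^2,
    where y_c(x) = P_X(x - F(x)/c) for the box X = [lo, hi]."""
    Fx = F(x)
    y = proj([xi - fi / c for xi, fi in zip(x, Fx)], lo, hi)
    d = [xi - yi for xi, yi in zip(x, y)]
    return sum(fi * di for fi, di in zip(Fx, d)) - 0.5 * c * sum(di * di for di in d)

def d_gap(x, F, lo, hi, a=0.5, b=2.0):
    """D-gap function g_ab = f_a - f_b with 0 < a < b: nonnegative on R^n,
    and zero exactly at solutions of the VIP."""
    return regularized_gap(x, F, lo, hi, a) - regularized_gap(x, F, lo, hi, b)
```

Minimizing g_ab over all of R^n is therefore an unconstrained reformulation of the box-constrained VIP, which is the starting point of the paper.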
A new unconstrained differentiable merit function for box constrained variational inequality problems and a damped Gauss-Newton method
Applied Mathematics Report AMR 96/37, School of Mathematics, the University of New South, 1996
"... Abstract. In this paper we propose a new unconstrained differentiable merit function f for box constrained variational inequality problems VIP(l, u, F). We study various desirable properties of this new merit function f and propose a Gauss–Newton method in which each step requires only the solution ..."
Abstract

Cited by 16 (8 self)
Abstract. In this paper we propose a new unconstrained differentiable merit function f for box constrained variational inequality problems VIP(l, u, F). We study various desirable properties of this new merit function f and propose a Gauss–Newton method in which each step requires only the solution of a system of linear equations. Global and superlinear convergence results for VIP(l, u, F) are obtained. Key results are the boundedness of the level sets of the merit function for any uniform P-function and the superlinear convergence of the algorithm without a nondegeneracy assumption. Numerical experiments confirm the good theoretical properties of the method.
A linearly convergent derivative-free descent method for strongly monotone complementarity problems
 Computational Optimization and Applications
"... Abstract. We establish the first rate of convergence result for the class of derivativefree descent methods for solving complementarity problems. The algorithm considered here is based on the implicit Lagrangian reformulation [26, 35] of the nonlinear complementarity problem, and makes use of the d ..."
Abstract

Cited by 10 (6 self)
Abstract. We establish the first rate of convergence result for the class of derivative-free descent methods for solving complementarity problems. The algorithm considered here is based on the implicit Lagrangian reformulation [26, 35] of the nonlinear complementarity problem, and makes use of the descent direction proposed in [42], but employs a different Armijo-type linesearch rule. We show that in the strongly monotone case, the iterates generated by the method converge globally at a linear rate to the solution of the problem. Keywords: complementarity problems, implicit Lagrangian, descent algorithms, derivative-free methods, linear convergence.
Convergence analysis of perturbed feasible descent methods
Journal of Optimization Theory and Applications, 1997
"... We develop a general approach to convergence analysis of feasible descent methods in the presence of perturbations. The important novel feature of our analysis is that perturbations need not tend to zero in the limit. In that case, standard convergence analysis techniques are not applicable. There ..."
Abstract

Cited by 10 (2 self)
We develop a general approach to convergence analysis of feasible descent methods in the presence of perturbations. The important novel feature of our analysis is that perturbations need not tend to zero in the limit. In that case, standard convergence analysis techniques are not applicable. Therefore, a new approach is needed. We show that, in the presence of perturbations, a certain ε-approximate solution can be obtained, where ε depends linearly on the level of perturbations. Applications to the gradient projection, proximal minimization, extragradient and incremental gradient algorithms are described.
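A toy illustration of the non-vanishing-perturbation setting: gradient projection on f(x) = (1/2)||x||² with a fixed perturbation added to every gradient evaluation settles at a point whose distance to the true solution is proportional to the perturbation level (all names and parameters are illustrative, not the paper's notation):

```python
def perturbed_grad_proj(grad, x0, lo, hi, step, iters, perturb):
    """Gradient projection where every gradient evaluation is corrupted
    by a fixed (non-vanishing) perturbation vector."""
    x = list(x0)
    for _ in range(iters):
        g = [gi + pi for gi, pi in zip(grad(x), perturb)]
        # projected gradient step onto the box [lo, hi]
        x = [min(max(xi - step * gi, l), h)
             for xi, gi, l, h in zip(x, g, lo, hi)]
    return x
```

For grad(x) = x on the box [-1, 1]² with perturbation (0.05, 0.05) and step 0.5, the iteration contracts to the fixed point (-0.05, -0.05): an ε-approximate solution whose error equals the perturbation level, in line with the linear dependence stated in the abstract.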
New Inexact Parallel Variable Distribution Algorithms
Computational Optimization and Applications, 1997
"... Abstract. We consider the recently proposed parallel variable distribution (PVD) algorithm of Ferris and Mangasarian [4] for solving optimization problems in which the variables are distributed among p processors. Each processor has the primary responsibility for updating its block of variables whil ..."
Abstract

Cited by 10 (4 self)
Abstract. We consider the recently proposed parallel variable distribution (PVD) algorithm of Ferris and Mangasarian [4] for solving optimization problems in which the variables are distributed among p processors. Each processor has the primary responsibility for updating its block of variables while allowing the remaining “secondary” variables to change in a restricted fashion along some easily computable directions. We propose useful generalizations that consist, for the general unconstrained case, of replacing exact global solution of the subproblems by a certain natural sufficient descent condition, and, for the convex case, of inexact subproblem solution in the PVD algorithm. These modifications are the key features of the algorithm that have not been analyzed before. The proposed modified algorithms are more practical and make it easier to achieve good load balancing among the parallel processors. We present a general framework for the analysis of this class of algorithms and derive some new and improved linear convergence results for problems with weak sharp minima of order 2 and strongly convex problems. We also show that nonmonotone synchronization schemes are admissible, which further improves the flexibility of the PVD approach. Keywords: parallel optimization, asynchronous algorithms, load balancing, unconstrained minimization, linear convergence.
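A much-simplified sketch of the PVD idea (blocks of size one, exact block minimization, and a best-candidate synchronization step; the secondary-variable directions are omitted, so this is a caricature of the scheme, not the paper's algorithm):

```python
def pvd_sync(f, block_solve, x0, iters=200):
    """Simplified parallel variable distribution: each 'processor' updates
    its own block (here a single coordinate) with the others held fixed;
    the synchronization step keeps the candidate with the lowest f."""
    x = list(x0)
    for _ in range(iters):
        candidates = []
        for j in range(len(x)):          # these solves would run in parallel
            y = list(x)
            y[j] = block_solve(j, x)     # exact minimization over block j
            candidates.append(y)
        x = min(candidates, key=f)       # synchronization step
    return x
```

For the strongly convex quadratic f(x) = x1² + x2² + 0.5 x1 x2 − 3 x1 − 3 x2, the exact single-coordinate solve is x_j = (3 − 0.5 x_other)/2, and the iterates approach the minimizer (1.2, 1.2); because the synchronization step never increases f, each iteration is a descent step.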
On the convergence of constrained parallel variable distribution algorithms
SIAM Journal on Optimization, 1998
"... Abstract. We consider the parallel variable distribution (PVD) approach proposed by Ferris and Mangasarian [SIAM J. Optim., 4 (1994), pp. 815–832] for solving optimization problems. The problem variables are distributed among p processors with each processor having the primary responsibility for upd ..."
Abstract

Cited by 8 (2 self)
Abstract. We consider the parallel variable distribution (PVD) approach proposed by Ferris and Mangasarian [SIAM J. Optim., 4 (1994), pp. 815–832] for solving optimization problems. The problem variables are distributed among p processors with each processor having the primary responsibility for updating its block of variables while allowing the remaining “secondary” variables to change in a restricted fashion along some easily computable directions. For constrained nonlinear programs, convergence in [M. C. Ferris and O. L. Mangasarian, SIAM J. Optim., 4 (1994), pp. 815–832] was established in the special case of convex block-separable constraints. For general (inseparable) constraints, it was suggested that a dual differentiable exact penalty function reformulation of the problem be used. We propose to apply the PVD approach to problems with general convex constraints directly and show that the algorithm converges, provided certain conditions are imposed on the change of secondary variables. These conditions are both natural and practically implementable. We also show that the original requirement of exact global solution of the parallel subproblems can be replaced by a less stringent sufficient descent condition. The first rate of convergence result for the class of constrained PVD algorithms is also given. Key words. parallel optimization, nonlinear programming, linear convergence