Results 1–10 of 11
Smoothing Methods for Convex Inequalities and Linear Complementarity Problems
 Mathematical Programming
, 1993
"... A smooth approximation p(x; ff) to the plus function: maxfx; 0g, is obtained by integrating the sigmoid function 1=(1 + e \Gammaffx ), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization probl ..."
Abstract

Cited by 72 (6 self)
 Add to MetaCart
(Show Context)
A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^{−αx}), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 515 times for linear inequalities of size 1000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 400 variables, the proposed approach was as much as 85 times faster than Lemke's method. Key Words: Smo...
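The smoothing described in this abstract can be sketched numerically: integrating the sigmoid 1/(1 + e^{−αx}) gives p(x, α) = x + (1/α)·ln(1 + e^{−αx}), which converges uniformly to max{x, 0} as α → ∞, with the maximum gap ln 2 / α attained at x = 0. A minimal sketch (the function name is our own):

```python
import math

def smooth_plus(x: float, alpha: float) -> float:
    """Smooth approximation p(x, alpha) to max{x, 0}, obtained by
    integrating the sigmoid 1/(1 + e^{-alpha*x}).

    Equals log(1 + e^{alpha*x}) / alpha; the two branches below are the
    same formula rearranged to avoid overflow in exp().
    """
    t = alpha * x
    if t > 0:
        # log(1 + e^t) = t + log(1 + e^{-t}) for large positive t
        return x + math.log1p(math.exp(-t)) / alpha
    return math.log1p(math.exp(t)) / alpha

# The worst-case error max_x |p(x, a) - max(x, 0)| = ln(2)/a occurs at x = 0.
for a in (1.0, 10.0, 100.0):
    assert abs(smooth_plus(0.0, a) - math.log(2) / a) < 1e-12
```

For large α the approximation is already very tight away from the kink: p(5, 100) agrees with max{5, 0} = 5 to machine precision.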
Error Bounds for Convex Inequality Systems
 Generalized Convexity
, 1996
"... Using convex analysis, this paper gives a systematic and unified treatment for the existence of a global error bound for a convex inequality system. We establish a necessary and sufficient condition for a closed convex set defined by a closed proper convex function to possess a global error bound in ..."
Abstract

Cited by 30 (0 self)
 Add to MetaCart
(Show Context)
Using convex analysis, this paper gives a systematic and unified treatment for the existence of a global error bound for a convex inequality system. We establish a necessary and sufficient condition for a closed convex set defined by a closed proper convex function to possess a global error bound in terms of a natural residual. We derive many special cases of the main characterization, including the case where a Slater assumption is in place. Our results show clearly the essential conditions needed for convex inequality systems to satisfy global error bounds; they unify and extend a large number of existing results on global error bounds for such systems. The research of this author was based on work supported by the Natural Sciences and Engineering Research Council of Canada. The research of this author was based on work supported by the National Science Foundation under grant CCR9213739 and the Office of Naval Research under grant N000149310228. 1 Introduction Let f : ...
SUFFICIENT CONDITIONS FOR ERROR BOUNDS
, 2001
"... For a lower semicontinuous (l.s.c.) inequality system on a Banach space, it is shown that error bounds hold, provided every element in an abstract subdifferential of the constraint function at each point outside the solution set is norm bounded away from zero. A sufficient condition for a global e ..."
Abstract

Cited by 18 (6 self)
 Add to MetaCart
For a lower semicontinuous (l.s.c.) inequality system on a Banach space, it is shown that error bounds hold, provided every element in an abstract subdifferential of the constraint function at each point outside the solution set is norm bounded away from zero. A sufficient condition for a global error bound to exist is also given for an l.s.c. inequality system on a real normed linear space. It turns out that a global error bound closely relates to metric regularity, which is useful for presenting sufficient conditions for an l.s.c. system to be regular at sets. Under the generalized Slater condition, a continuous convex system on R^n is proved to be metrically regular at bounded sets.
Weak Sharp Minima Revisited Part I: Basic Theory
, 2002
"... The notion of sharp minima, or strongly unique local minima, emerged in the late 1970’s as an important tool in the analysis of the perturbation behavior of certain classes of optimization problems as well as in the convergence analysis of algorithms designed to solve these problems. The work of Cro ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
The notion of sharp minima, or strongly unique local minima, emerged in the late 1970s as an important tool in the analysis of the perturbation behavior of certain classes of optimization problems as well as in the convergence analysis of algorithms designed to solve these problems. The work of Cromme and Polyak is of particular importance in this development. In the late 1980s Ferris coined the term weak sharp minima to describe the extension of the notion of sharp minima to include the possibility of a nonunique solution set. This notion was later extensively studied by many authors. Of particular note in this regard is the paper by Burke and Ferris which gives an extensive exposition of the notion and its impact on convex programming and convergence analysis in finite dimensions. In this paper we build on the work of Burke and Ferris. Specifically, we generalize their work to the normed linear space setting, further dissect the normal cone inclusion characterization for weak sharp minima, study the asymptotic properties of weak sharp minima in terms of associated recession functions, and give new characterizations for local weak sharp minima and boundedly weak sharp minima. This paper is the first of a two-part work on this subject. In Part II, we study the links between the notions of weak sharp minima, bounded linear regularity, linear regularity, metric regularity, and error bounds in convex programming. Along the way, we obtain both new results and reproduce many existing results from a fresh perspective.
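The defining inequality of a weak sharp minimum is f(x) ≥ min f + α·dist(x, S) for all x, where S is the solution set and α > 0. A small numerical illustration (function names and the sampled interval are our own choices, and checking finitely many points is only suggestive, not a proof): f(x) = |x| is sharp with S = {0} and α = 1, whereas f(x) = x² fails the inequality near 0 for any fixed α > 0.

```python
def is_weak_sharp(f, f_min, dist_to_S, points, alpha):
    """Check the weak-sharp-minimum inequality
        f(x) >= f_min + alpha * dist(x, S)
    on a finite sample of points (a numerical illustration only)."""
    return all(f(x) >= f_min + alpha * dist_to_S(x) - 1e-12 for x in points)

pts = [i / 100.0 for i in range(-100, 101)]  # sample of [-1, 1]
dist = abs                                   # S = {0}, so dist(x, S) = |x|

assert is_weak_sharp(abs, 0.0, dist, pts, alpha=1.0)             # |x| is sharp
assert not is_weak_sharp(lambda x: x * x, 0.0, dist, pts, 0.5)   # x^2 is not
```

The failure for x² (e.g. x = 0.1 gives 0.01 < 0.5·0.1) reflects that smooth functions with isolated minimizers grow only quadratically, not linearly, off the solution set.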
Error Bounds for Nondifferentiable Convex Inequalities under a Strong Slater Constraint Qualification
, 1996
"... A global error bound is given on the distance between an arbitrary point in the ndimensional real space R n and its projection on a nonempty convex set determined by m convex, possibly nondifferentiable, inequalities. The bound is in terms of a natural residual that measures the violations of th ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
(Show Context)
A global error bound is given on the distance between an arbitrary point in the n-dimensional real space R^n and its projection on a nonempty convex set determined by m convex, possibly nondifferentiable, inequalities. The bound is in terms of a natural residual that measures the violations of the inequalities multiplied by a new simple condition constant that embodies a single strong Slater constraint qualification (CQ) which implies the ordinary Slater CQ. A very simple bound on the distance to the projection relative to the distance to a point satisfying the ordinary Slater CQ is given first and then used to derive the principal global error bound. Key Words. Convex inequalities, error bounds, strong Slater constraint qualification. Abbreviated Title. Error bounds under a strong Slater CQ. 1 Introduction We consider the nonempty feasible region S := {x | g(x) ≤ 0} (1) where g : R^n → R^m is a convex function on the n-dimensional real space R^n. We are intereste...
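The simplest instance of such a global error bound, useful for intuition, is a single linear inequality a·x ≤ b: the projection onto the half-space is explicit, so dist(x, S) = (a·x − b)_+ / ‖a‖ exactly, i.e. the natural residual bounds the distance with condition constant 1/‖a‖. A minimal sketch (function names are our own):

```python
import math

def residual(a, b, x):
    """Natural residual of the inequality a.x <= b at x: (a.x - b)_+."""
    return max(sum(ai * xi for ai, xi in zip(a, x)) - b, 0.0)

def dist_to_halfspace(a, b, x):
    """Exact distance from x to S = {y : a.y <= b}, via the closed-form
    projection onto a half-space: dist = (a.x - b)_+ / ||a||."""
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    return residual(a, b, x) / norm_a

# Here ||a|| = 5, residual = 3*2 + 4*2 - 1 = 13, so dist = 13/5 = 2.6:
a, b, x = [3.0, 4.0], 1.0, [2.0, 2.0]
assert math.isclose(dist_to_halfspace(a, b, x), residual(a, b, x) / 5.0)
```

For systems of several convex inequalities no such closed form exists, which is why the condition constants studied in these papers are needed to relate residual to distance.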
Global Error Bounds for Convex Conic Problems
, 1998
"... In this paper Lipschitzian type error bounds are derived for general convex conic problems under various regularity conditions. Specifically, it is shown that if the recession directions satisfy Slater's condition then a global Lipschitzian type error bound holds. Alternatively, if the feasible ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
In this paper Lipschitzian-type error bounds are derived for general convex conic problems under various regularity conditions. Specifically, it is shown that if the recession directions satisfy Slater's condition then a global Lipschitzian-type error bound holds. Alternatively, if the feasible region is bounded, then the ordinary Slater condition guarantees a global Lipschitzian-type error bound. These can be considered as generalizations of previously known results for inequality systems. Moreover, some of the results are also generalized to the intersection of multiple cones. Under Slater's condition alone, a global Lipschitzian-type error bound may not hold. However, it is shown that such an error bound holds for a specific region. For linear systems we show that the constant involved in Hoffman's error bound can be estimated by the so-called condition number for linear programming. Key words: Error bound, convex conic problems, LMIs, condition number. AMS subject classification: 5...
Smoothing Methods in Mathematical Programming
, 1995
"... ... function. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for sufficiently small positive value of the smooth ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
... function. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for sufficiently small positive value of the smoothing parameter γ. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite γ. Speedup over the linear/nonlinear programming package MINOS 5.4 was as high as 1142 times for linear inequalities of size 2000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems (LCPs) were treated by converting them into a system of smooth nonlinear equations, which are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 10,000 variables, the proposed approach was as much as 63 times faster than Lemke's method. Our smooth approach can also be used to solve nonlinear and mixed complementarity problems (NCPs and MCPs) by converting them to classes of smooth parametric nonlinear equations. For any solvable NCP or MCP, existence of an arbitrarily accurate solution to the smooth nonlinear equation, as well as to the NCP or MCP, is established for sufficiently large value of a smoothing parameter c. An efficient smooth algorithm, based on the Newton-Armijo approach with an adjusted smoothing parameter, is also given and its global and local quadratic convergence is established. For NCPs, exact solutions of our smooth nonlinear equation for various values of the parameter c generate an interior path, which is different from the central path of the interior point method. Computational results for 52 test problems compare favorably with those for another Ne...
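The LCP-to-smooth-equations conversion described in this abstract can be sketched as follows. Using min(x, Mx + q) = x − (x − (Mx + q))_+ componentwise, the LCP (x ≥ 0, Mx + q ≥ 0, xᵀ(Mx + q) = 0) is equivalent to x − (x − (Mx + q))_+ = 0; replacing (·)_+ by a smooth plus function gives a differentiable system solvable by Newton's method. A minimal numpy sketch, not the paper's own implementation; the 2×2 test matrix, starting point, and parameter values are our own choices:

```python
import numpy as np

def solve_lcp_smooth(M, q, alpha=1e3, tol=1e-10, max_iter=50):
    """Newton's method on F(x) = x - p(x - (Mx + q), alpha) = 0, where
    p(t, a) = log(1 + e^{a t}) / a smooths max{t, 0} componentwise
    and p'(t, a) = sigmoid(a t)."""
    n = len(q)
    x = np.ones(n)
    for _ in range(max_iter):
        t = x - (M @ x + q)
        p = np.logaddexp(0.0, alpha * t) / alpha       # smooth plus, overflow-safe
        s = 0.5 * (1.0 + np.tanh(0.5 * alpha * t))     # stable sigmoid(alpha * t)
        F = x - p
        if np.linalg.norm(F) < tol:
            break
        # Jacobian of F: I - diag(s) @ (I - M)
        J = np.eye(n) - s[:, None] * (np.eye(n) - M)
        x = x - np.linalg.solve(J, F)
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])  # positive definite => monotone LCP
q = np.array([-1.0, -1.0])
x = solve_lcp_smooth(M, q)
# For this data the exact solution is x* = (1/3, 1/3), with Mx* + q = 0.
```

With α = 10³ the smoothing error is at most ln 2 / α ≈ 7·10⁻⁴ per component, and Newton's method converges in a handful of iterations on this small monotone example.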
Conditioning of linear-quadratic two-stage stochastic optimization problems
, 2013
"... In this paper a condition number for linearquadratic twostage stochastic optimization problems is introduced as the Lipschitz modulus of the multifunction assigning to a (discrete) probability distribution the solution set of the problem. Being the outer norm of the Mordukhovich coderivative of th ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
(Show Context)
In this paper a condition number for linear-quadratic two-stage stochastic optimization problems is introduced as the Lipschitz modulus of the multifunction assigning to a (discrete) probability distribution the solution set of the problem. Being the outer norm of the Mordukhovich coderivative of this multifunction, the condition number can be estimated from above explicitly in terms of the problem data by applying appropriate calculus rules. Here, a chain rule for the extended partial second-order subdifferential recently proved by Mordukhovich and Rockafellar plays a crucial role. The obtained results are illustrated for the example of two-stage stochastic optimization problems with simple recourse. Keywords: Stochastic optimization, two-stage linear-quadratic problems, conditioning, coderivative calculus, simple
ŁOJASIEWICZ-TYPE INEQUALITIES FOR NONSMOOTH DEFINABLE FUNCTIONS IN O-MINIMAL STRUCTURES AND GLOBAL ERROR
"... Abstract. In this paper, we give some Lojasiewicztype inequalities and a nonsmooth slope inequality on noncompact domains for continuous definable functions in an ominimal structure. We also give a necessary and sufficicent condition for which global error bound exists. Moreover, we point out the ..."
Abstract
 Add to MetaCart
(Show Context)
Abstract. In this paper, we give some Łojasiewicz-type inequalities and a nonsmooth slope inequality on noncompact domains for continuous definable functions in an o-minimal structure. We also give a necessary and sufficient condition for which a global error bound exists. Moreover, we point out the relationship between the Palais-Smale condition and this global error bound.
Digital Object Identifier (DOI) 10.1007/s101079900086
, 1997
"... Strong conical hull intersection property, bounded linear regularity, Jameson’s property (G), and error bounds in convex optimization ..."
Abstract
 Add to MetaCart
(Show Context)
Strong conical hull intersection property, bounded linear regularity, Jameson’s property (G), and error bounds in convex optimization