## Smoothing Methods for Convex Inequalities and Linear Complementarity Problems (1993)

### Download Links

- [ftp.cs.wisc.edu]
- DBLP

### Other Repositories/Bibliography

Venue: Mathematical Programming

Citations: 62 (6 self)

### BibTeX

```bibtex
@ARTICLE{Chen93smoothingmethods,
  author  = {Chunhui Chen and O. L. Mangasarian},
  title   = {Smoothing Methods for Convex Inequalities and Linear Complementarity Problems},
  journal = {Mathematical Programming},
  year    = {1993},
  volume  = {71},
  pages   = {51--69}
}
```

### Abstract

A smooth approximation $p(x, \alpha)$ to the plus function $\max\{x, 0\}$ is obtained by integrating the sigmoid function $1/(1 + e^{-\alpha x})$, commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, whose solutions approximate the solution of the original problem to a high degree of accuracy for $\alpha$ sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite $\alpha$. Speedup over MINOS 5.4 was as high as 515 times for linear inequalities of size $1000 \times 1000$, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 400 variables, the proposed approach was as much as 85 times faster than Lemke's method. Key Words: Smo...

### Citations

3267 | Convex Analysis
- Rockafellar
- 1970
Citation Context: ...solution to the convex inequalities (12). Let $\mathrm{rc}(g)$ denote the recession cone of a proper convex function $g$, that is, $\mathrm{rc}(g) = \{y \mid \sup_{x \in \mathrm{dom}\, g} (g(x + y) - g(x)) \le 0\}$, where $\mathrm{dom}\, g$ is the domain of $g$ [14]. Now we will state a condition under which (15) has a solution. Theorem 3.1 Let $g : R^n \to R^m$ be continuous and convex and let $f(x)$ be defined as in (13) or (14). The following are equivalent: 1. For...

942 | Numerical Methods for Unconstrained Optimization and Nonlinear Equations - Dennis, Schnabel - 1983

530 | The Linear Complementarity Problem
- Cottle, Pang, et al.
- 1992
Citation Context: ...mate solution to LCP$(M, q)$. Let $f(x) = \frac{1}{2}\left\|e^{-\alpha x} + e^{-\alpha(Mx+q)} - 1\right\|_2^2$ (21). We will show that under the assumption that $M$ is a $P_0$ matrix, that is, a matrix with nonnegative minors [1], all the stationary points of (21) are solutions of (20). First we will state a simple lemma for $P_0$ matrices. Lemma 4.2 Suppose $M \in R^{n \times n}$ is a $P_0$ matrix. For any positive diagonal matrix ...
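
The merit function $f(x)$ from the excerpt above is easy to check numerically on a toy instance (the instance below is hypothetical; the identity matrix is trivially $P_0$):

```python
import math

def lcp_merit(M, q, x, alpha):
    """f(x) = (1/2) * || exp(-alpha*x) + exp(-alpha*(M x + q)) - 1 ||_2^2,
    the smooth system whose stationary points solve the LCP when M is P0
    (exponentials are applied componentwise)."""
    n = len(q)
    w = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    r = [math.exp(-alpha * x[i]) + math.exp(-alpha * w[i]) - 1.0 for i in range(n)]
    return 0.5 * sum(ri * ri for ri in r)

# Toy LCP with M = I and q = (-1, 1): the solution is x = (1, 0),
# w = Mx + q = (0, 2), which satisfies x >= 0, w >= 0, x.w = 0.
M = [[1.0, 0.0], [0.0, 1.0]]
q = [-1.0, 1.0]
print(lcp_merit(M, q, [1.0, 0.0], alpha=10.0))  # tiny at the exact solution
print(lcp_merit(M, q, [0.0, 0.0], alpha=10.0))  # large at a non-solution
```

At the exact solution the residual is $e^{-\alpha x_i w'}$-sized rather than zero, which is why the paper drives $\alpha$ upward.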

148 | The path solver: A non-monotone stabilization scheme for mixed complementarity problems
- Dirkse, Ferris
- 1995
Citation Context: ...gorithm with Lemke's method, which was implemented in FORTRAN. For sparse problems with density between 0.012 and 0.15 percent, we compared the smooth algorithm with a sparse version of Lemke's method [3], which employs sparse basis-updating techniques. The SOR method of DeLeone and Tork-Roth [6] does not apply to this class of nonsymmetric LCP, nor do the other splitting methods described in [1]. In fact, ...

93 | MINOS 5.0 user's guide
- Murtagh, Saunders
- 1983
Citation Context: ...hing algorithms were implemented in C. Lemke's method was written in FORTRAN. The CPU times for the smoothing algorithms and Lemke's method do not include the time to input data. The time of MINOS 5.4 [9] is the execution time for subroutine M5SOLV and also does not include the input time. For linear and convex inequalities, we use the BFGS algorithm to solve the unconstrained minimization problem for...

92 | On approximate solutions of systems of linear inequalities
- Hoffman
- 1952
Citation Context: ...olution is unique. In the following, we will prove that a solution of (5) gives an approximate solution of (2). First we will state an error bound lemma for linear inequalities. Lemma 2.1 (Error bound [2], [5]) Suppose that the linear inequalities $Ax \le b$ have a nonempty solution set $X$. For any $x$, there exists an $\bar{x} \in X$ such that $\|x - \bar{x}\|_\beta \le \mu_\beta(A)\,\|(Ax - b)_+\|_\beta$, (7) for some positive constant...

83 | Theory of Algorithms for Unconstrained Optimization
- Nocedal
- 1992
Citation Context: ...the BFGS algorithm to solve the unconstrained minimization problem for variables up to 400 for linear inequalities and 150 for convex inequalities. For larger problems, the limited-memory BFGS algorithm [10] was used. Starting with $\alpha = 5$, we increased $\alpha$ by a factor of 1.05 to 1.2. The algorithm terminates when infeasibilities are less than 1.0e-7. Figure 5 depicts the ratio of CPU time taken by MINOS 5....
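
The continuation scheme in the excerpt (minimize for a fixed $\alpha$, then grow $\alpha$ until the infeasibility is small) can be sketched as follows. This is a sketch under assumptions: plain gradient descent stands in for the BFGS / limited-memory BFGS solvers of the text, and the function names, toy system, and step size are mine:

```python
import math

def p(r, alpha):
    """Smooth plus function p(r, alpha) = r + log(1 + exp(-alpha*r))/alpha,
    written in an overflow-safe form."""
    return max(r, 0.0) + math.log1p(math.exp(-alpha * abs(r))) / alpha

def sigmoid(t):
    # p's derivative; guard exp() against overflow for very negative t.
    return 1.0 / (1.0 + math.exp(-t)) if t > -30.0 else 0.0

def solve_ineq(A, b, x, alpha=5.0, grow=1.1, tol=1e-7, step=0.1):
    """Continuation sketch for Ax <= b: approximately minimize
    f(x) = sum_i p(a_i.x - b_i, alpha)^2, then increase alpha and repeat
    until the infeasibility max_i (a_i.x - b_i)_+ drops below tol."""
    n = len(x)
    while True:
        for _ in range(200):  # inner (approximate) minimization of f
            g = [0.0] * n
            for ai, bi in zip(A, b):
                r = sum(ai[j] * x[j] for j in range(n)) - bi
                c = 2.0 * p(r, alpha) * sigmoid(alpha * r)  # d/dr of p(r, alpha)^2
                for j in range(n):
                    g[j] += c * ai[j]
            x = [x[j] - step * g[j] for j in range(n)]
        infeas = max(max(sum(ai[j] * x[j] for j in range(n)) - bi, 0.0)
                     for ai, bi in zip(A, b))
        if infeas < tol:
            return x
        alpha *= grow  # tighten the smoothing and re-minimize

# Toy system 1 <= x <= 2, written as -x <= -1, x <= 2.
sol = solve_ineq([[-1.0], [1.0]], [-1.0, 2.0], [0.0])
print(sol)  # a feasible point
```

The return condition guarantees a point whose violation of every inequality is below `tol`, mirroring the 1.0e-7 stopping test quoted above.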

76 | A polynomial-time algorithm for a class of linear complementarity problems
- Kojima, Mizuno, et al.
- 1989
Citation Context: ...f (20) can be purified to a solution of LCP$(M, q)$. In the following theorem, we assume that all the elements of matrix $M$ and vector $q$ are integers and $n \ge 2$. Let $L$ be the size of LCP$(M, q)$ defined by [3]
$$L = \left\lfloor \sum_{i=1}^{n} \sum_{j=1}^{n} \log(|m_{ij}|) + \sum_{i=1}^{n} \log(|q_i|) + \log(n^2) \right\rfloor + 1.$$
Theorem 4.4 Suppose that LCP$(M, q)$ is solvable. Let $x(\alpha)$ be a solution of (20) with $\alpha \ge \bar{\alpha} = \sqrt{n}\, 2^L$. Then $x(\alpha)$ can ...

62 | Newton-type minimization via the Lanczos method
- Nash
- 1984
Citation Context: ...method of Motzkin and Schoenberg [13]. The relaxation method was implemented in C. All the algorithms for linear inequalities were run on a Sun SPARCstation 10. We used the truncated Newton algorithm [15] to solve the smooth unconstrained minimization problem. We started with $\alpha = 1000.0$ and increased it by a factor of 2 at each major iteration. The algorithms terminate when the infeasibilities are le...

42 | The relaxation method for linear inequalities
- Motzkin, Schoenberg
- 1954
Citation Context: ...solving linear equations by a sparse LU decomposition from MINOS. For linear inequalities, we compared the smooth algorithm with MINOS as well as with the relaxation method of Motzkin and Schoenberg [13]. The relaxation method was implemented in C. All the algorithms for linear inequalities were run on a Sun SPARCstation 10. We used the truncated Newton algorithm [15] to solve the smooth unconstraine...
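
The relaxation method used as a baseline above is itself only a few lines: repeatedly project onto the most violated half-space. A hedged sketch (the toy system, names, and iteration cap are illustrative; the paper's C implementation and parameter choices may differ):

```python
def relaxation(A, b, x, lam=1.0, tol=1e-7, max_iter=10000):
    """Sketch of the Motzkin-Schoenberg relaxation method for Ax <= b:
    at each step, find the most violated inequality a_i . x <= b_i and
    move toward its boundary (lam = 1 is exact projection; 0 < lam <= 2)."""
    n = len(x)
    for _ in range(max_iter):
        viol, idx = tol, -1
        for i, (ai, bi) in enumerate(zip(A, b)):
            r = sum(ai[j] * x[j] for j in range(n)) - bi
            if r > viol:
                viol, idx = r, i
        if idx < 0:  # no inequality violated by more than tol
            return x
        ai = A[idx]
        nrm2 = sum(c * c for c in ai)
        x = [x[j] - lam * viol * ai[j] / nrm2 for j in range(n)]
    raise RuntimeError("no feasible point found within max_iter steps")

# Toy system x1 >= 0, x2 >= 0, x1 + x2 <= 4, started at an infeasible point.
sol = relaxation([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]], [0.0, 0.0, 4.0], [5.0, 5.0])
print(sol)
```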

40 | Mathematical programming in neural networks
- Mangasarian
- 1993
Citation Context: ...ion Grant CCR-9101801. where $\alpha$ is a positive number. Note that $(x)_+ = \int_{-\infty}^{x} \sigma(y)\,dy$, where $\sigma(x)$ is the step function: $\sigma(x) = 1$ if $x > 0$ and $\sigma(x) = 0$ if $x \le 0$. In the extensive neural network literature [7], the step function is very effectively approximated by the sigmoid function $s(x, \alpha) := \frac{1}{1 + e^{-\alpha x}}$, $\alpha > 0$. See Figures 1 and 3. In this work we utilize the integral of the sigmoid function as...

17 | New error bounds for the linear complementarity problem
- Luo, Mangasarian, et al.
- 1994
Citation Context: ...es of $x$ and the remaining 50 percent of $w$. The vector $q$ was then defined by $q = w - Mx$. We chose the parameter $\alpha$ inversely proportional to the 2-norm of the natural residual $\|\min\{x,\, Mx + q\}\|_2$ [12]. The algorithm terminates when the infinity-norm of the natural residual is less than 1.0e-6. For dense problems, we compared the smooth algorithm with Lemke's method, which was implemented in FORTRA...
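
The termination quantity quoted above, the natural residual $\min\{x, Mx+q\}$, vanishes exactly at an LCP solution. A small helper on a hypothetical toy instance:

```python
def natural_residual_inf(M, q, x):
    """Infinity-norm of the natural residual min{x, Mx + q} (componentwise
    minimum). It is zero exactly at a solution of the LCP, so the
    termination test above checks || . ||_inf < 1e-6."""
    n = len(q)
    w = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    return max(abs(min(x[i], w[i])) for i in range(n))

# For M = I, q = (-1, 1), the point x = (1, 0) solves the LCP
# (w = (0, 2), x >= 0, w >= 0, x.w = 0), so the residual is exactly zero.
print(natural_residual_inf([[1.0, 0.0], [0.0, 1.0]], [-1.0, 1.0], [1.0, 0.0]))  # 0.0
print(natural_residual_inf([[1.0, 0.0], [0.0, 1.0]], [-1.0, 1.0], [0.0, 0.0]))  # 1.0
```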

13 | An application of error bounds for convex programming in a linear space
- Robinson
- 1975
Citation Context: ...)) and $\bar{x}_2(x_2(\alpha))$, both in $X$, such that $\|x_1(\alpha) - \bar{x}_1(x_1(\alpha))\|_1 \le \frac{\log 2}{\alpha}\, m C_1$ and $\|x_2(\alpha) - \bar{x}_2(x_2(\alpha))\|_2 \le \frac{\log 2}{\alpha}\, \sqrt{m}\, C_2$, where $C_1$ and $C_2$ are constants dependent on $g(x)$ [13, 6]. (ii) If the Slater constraint qualification is satisfied by $g(x) \le 0$, then there exists an $\bar{\alpha} > 0$ such that for any $\alpha \ge \bar{\alpha}$, $x_1(\alpha)$ and $x_2(\alpha)$ solve the convex inequalities (12) exactly. Note that f...

11 | Massively parallel solution of quadratic programs via successive overrelaxation, Computer Sciences
- DeLeone, Tork-Roth
- 1991
Citation Context: ...ty between 0.012 and 0.15 percent, we compared the smooth algorithm with a sparse version of Lemke's method [3], which employs sparse basis-updating techniques. The SOR method of DeLeone and Tork-Roth [6] does not apply to this class of nonsymmetric LCP, nor do the other splitting methods described in [1]. In fact, the SOR method of [6] failed on all test problems. Figures 9 and 10 show the CPU times for t...

10 | A smoothing algorithm for linear $\ell_1$ estimation
- Madsen, Nielsen
- 1993
Citation Context: ...1) for all $x \in R$, $\alpha > 0$. The inverse function $p^{-1}$ is well defined for $x \in (0, 1)$. 8. $p(x, \alpha) > p(x, \beta)$ for $\alpha < \beta$, $x \in R$. Smoothing techniques have been used for $\ell_1$-minimization problems [4] and in multi-commodity flow problems [11] using a linear-quadratic smoothing function, with encouraging numerical results. We now summarize our results. In Section 2 we treat linear inequalities by co...

9 | A condition number for differentiable convex inequalities
- Mangasarian
- 1985
Citation Context: ...and $f = f_2$ respectively. (i) Let $X$ be bounded and let $g$ satisfy the Slater constraint qualification $g(x) < 0$, or let $g(x)$ be differentiable and satisfy the Slater and asymptotic constraint qualifications [6]. Then there exist $\bar{x}_1(x_1(\alpha))$ and $\bar{x}_2(x_2(\alpha))$, both in $X$, such that $\|x_1(\alpha) - \bar{x}_1(x_1(\alpha))\|_1 \le \frac{\log 2}{\alpha}\, m C_1$ and $\|x_2(\alpha) - \bar{x}_2(x_2(\alpha))\|_2 \le \frac{\log 2}{\alpha}\, \sqrt{m}\, C_2$, where $C_1$ and $C_2$ a...

7 | A Condition Number for Linear Inequalities and Linear Programs
- Mangasarian
- 1981
Citation Context: ...ion is unique. In the following, we will prove that a solution of (5) gives an approximate solution of (2). First we will state an error bound lemma for linear inequalities. Lemma 2.1 (Error bound [2], [5]) Suppose that the linear inequalities $Ax \le b$ have a nonempty solution set $X$. For any $x$, there exists an $\bar{x} \in X$ such that $\|x - \bar{x}\|_\beta \le \mu_\beta(A)\,\|(Ax - b)_+\|_\beta$, (7) for some positive constant $\mu$...

6 | On smoothing exact penalty functions for convex constrained optimization
- Pinar, Zenios
- 1994
Citation Context: ...ction $p^{-1}$ is well defined for $x \in (0, 1)$. 8. $p(x, \alpha) > p(x, \beta)$ for $\alpha < \beta$, $x \in R$. Smoothing techniques have been used for $\ell_1$-minimization problems [4] and in multi-commodity flow problems [11] using a linear-quadratic smoothing function, with encouraging numerical results. We now summarize our results. In Section 2 we treat linear inequalities by converting them to unconstrained differentia...

4 | Computable error bounds in mathematical programming
- Ren
- 1993
Citation Context: ...near programming: $\min_{x,z}\ 1^T z$ subject to $Ax - b \le z$, $z \ge 0$ (10). Let $u$ be a dual solution of the above LP; then $(x_1(\alpha), (Ax_1(\alpha) - b)_+, u)$ is an approximate dual pair. By Lemma 5.2.1 of [12], there exists an $\bar{x}_1(x_1(\alpha)) \in X_1$ such that $\|x_1(\alpha) - \bar{x}_1(x_1(\alpha))\|_1 \le \sigma_1(A, b)\left(\|(Ax_1(\alpha) - b)_+\|_1 - \|(A\bar{x}_1(x_1(\alpha)) - b)_+\|_1\right) \le \sigma_1(A, b)\, m\, \frac{\log 2}{\alpha}$. Similar...

2 | Error bounds for inconsistent linear inequalities and programs
- Mangasarian
- 1994
Citation Context: ...$\min_{x \in R^n} f(x)$ that minimizes the infeasibility approximately. In fact, a multiple of the value of $f(x)$ bounds the distance of $x$ to the set of minimizers of $\|(Ax - b)_+\|_1$ for the case when $f = f_1$; see [8]. If we let $x_1$ and $x_2$ denote solutions of the inconsistent system $Ax \le b$ in the sense of least $\ell_1$-norm and $\ell_2$-norm respectively, and if we let $x_1(\alpha)$ and $x_2(\alpha)$ be minimizers of $f$ as defined i...