Results 1–7 of 7
The Extended Linear Complementarity Problem
, 1993
Abstract

Cited by 530 (23 self)
We consider an extension of the horizontal linear complementarity problem, which we call the extended linear complementarity problem (XLCP). With the aid of a natural bilinear program, we establish various properties of this extended complementarity problem; these include the convexity of the bilinear objective function under a monotonicity assumption, the polyhedrality of the solution set of a monotone XLCP, and an error bound result for a nondegenerate XLCP. We also present a finite, sequential linear programming algorithm for solving the nonmonotone XLCP.
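For orientation, the standard LCP that the XLCP generalizes asks for z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0. The sketch below is not the paper's sequential linear programming algorithm; it is an illustrative brute-force solver on made-up data M, q, which finds a solution of a tiny monotone LCP by enumerating complementary index sets:

```python
import itertools

import numpy as np

def solve_lcp_enum(M, q):
    """Find z >= 0 with w = M z + q >= 0 and z.w = 0 by trying every
    complementary index set S (z_i free for i in S, zero otherwise).
    Exponential in n -- for illustration only."""
    n = len(q)
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            S = list(S)
            z = np.zeros(n)
            if S:
                try:
                    # w_i = 0 for i in S means M[S, S] z_S = -q_S
                    z[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
                except np.linalg.LinAlgError:
                    continue
            w = M @ z + q
            if (z >= -1e-9).all() and (w >= -1e-9).all():
                return z, w
    return None

# Made-up data: M is positive definite, so this LCP is monotone.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
z, w = solve_lcp_enum(M, q)   # z = (4/3, 7/3), w = (0, 0)
```

The enumeration only serves to make the complementarity conditions concrete; the paper's finite algorithm instead solves a sequence of linear programs.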
Some Perturbation Theory for Linear Programming
 Mathematical Programming
, 1992
Abstract

Cited by 72 (2 self)
This paper examines a few relations between solution characteristics of an LP and the amount by which the LP must be perturbed to obtain either a primal infeasible LP or a dual infeasible LP. We consider such solution characteristics as the size of the optimal solution and the sensitivity of the optimal value to data perturbations. We show, for example, that an LP has a large optimal solution, or has a sensitive optimal value, only if the instance is nearly primal infeasible or dual infeasible. The results are not particularly surprising, but they do formalize an interesting viewpoint which apparently has not been made explicit in the linear programming literature. The results are rather general; several of them are valid for linear programs defined in arbitrary real normed spaces. A Hahn-Banach theorem is the main tool employed in the analysis: given a closed convex set in a normed vector space and a point in the space but not in the set, there exists a continuous linear functional strictly separating the set from the point. We introduce notation, then the results. Let X, Y denote real vector spaces, each with a norm. We use the same notation (i.e. ‖·‖) for all norms, it being clear from context which norm is referred to. Let X ...
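In finite dimensions, the separation argument the abstract invokes can be made concrete: project the outside point onto the convex set and use the difference as the separating functional. A minimal numpy sketch with the closed unit ball as the convex set (the choice of set and point is illustrative, not from the paper):

```python
import numpy as np

# Strictly separate the closed unit ball C from a point p outside it.
# The projection of p onto C is q = p / ||p||; the linear functional
# f(x) = <p - q, x> then satisfies sup over C of f = f(q) < f(p).
p = np.array([3.0, 4.0])              # ||p|| = 5 > 1, so p lies outside C
q = p / np.linalg.norm(p)             # projection of p onto C
a = p - q                             # normal of the separating hyperplane

# Over the unit ball, <a, x> is maximized at a/||a|| = q, with value ||a||.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2))
pts /= np.maximum(np.linalg.norm(pts, axis=1, keepdims=True), 1.0)  # map into C
assert (pts @ a <= a @ q + 1e-9).all()   # f <= f(q) everywhere on C
assert a @ p > a @ q                     # strict separation of p
```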
Smoothing Methods for Convex Inequalities and Linear Complementarity Problems
 Mathematical Programming
, 1993
Abstract

Cited by 62 (6 self)
A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^(−αx)), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 515 times for linear inequalities of size 1000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 400 variables, the proposed approach was as much as 85 times faster than Lemke's method. Key Words: Smo...
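Integrating the sigmoid 1/(1 + e^(−αt)) up to x gives the closed form p(x, α) = x + ln(1 + e^(−αx))/α, whose worst-case gap to max{x, 0} is ln 2 / α, attained at x = 0. A short numerical check (a sketch, not the paper's code):

```python
import numpy as np

def p(x, alpha):
    """Smooth plus-function: the integral of the sigmoid 1/(1 + exp(-alpha*t))
    up to x, i.e. x + log(1 + exp(-alpha*x)) / alpha.  Written with
    logaddexp so large |alpha * x| does not overflow."""
    return np.logaddexp(0.0, alpha * np.asarray(x, dtype=float)) / alpha

x = np.linspace(-5.0, 5.0, 1001)
for alpha in (1.0, 10.0, 100.0):
    gap = np.abs(p(x, alpha) - np.maximum(x, 0.0)).max()
    # The approximation error peaks at x = 0, where it equals log(2)/alpha.
    assert gap <= np.log(2.0) / alpha + 1e-12
```

This makes the abstract's claim quantitative: the approximation error shrinks like 1/α, so tightening the smoothing only requires growing α.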
Incorporating Condition Measures Into The Complexity Theory Of Linear Programming
 SIAM Journal on Optimization
, 1995
Abstract

Cited by 51 (1 self)
this paper, we take the approach of traditional complexity theory: Requiring the input ...
Convergence of a Class of Inexact InteriorPoint Algorithms for Linear Programs
 Mathematics of Operations Research
, 1996
Abstract

Cited by 20 (1 self)
We present a convergence analysis for a class of inexact infeasible-interior-point methods for solving linear programs. The main feature of inexact methods is that the linear systems defining the search direction at each interior-point iteration need not be solved to high accuracy. More precisely, we allow these linear systems to be solved only to a moderate relative accuracy in the residual, but no assumptions are made on the accuracy of the search direction in the search space. In particular, our analysis does not require that feasibility be maintained, even if the initial iterate happens to be a feasible solution of the linear program. AMS 1991 subject classification. Primary: 90C05; Secondary: 65K05, 90C06. Key words. Linear program, infeasible-interior-point method, inexact search direction, linear system, residual, convergence. 1. Introduction. Since the publication [6] of Karmarkar's original interior-point algorithm for linear programs, numerous variants of the method ...
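The notion of "moderate relative accuracy in the residual" can be illustrated with any iterative solver stopped early. The sketch below is illustrative only and not tied to the paper's interior-point linear systems: it runs conjugate gradients on a made-up SPD system until ‖b − Ax‖ ≤ η‖b‖ for a deliberately loose η.

```python
import numpy as np

def cg(A, b, rel_tol, max_iter=1000):
    """Conjugate gradients on an SPD system, stopped as soon as the
    relative residual ||b - A x|| / ||b|| falls below rel_tol."""
    x = np.zeros_like(b)
    r = b - A @ x
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) <= rel_tol * np.linalg.norm(b):
            break
        Ad = A @ d
        step = rs / (d @ Ad)
        x += step * d
        r -= step * Ad
        rs_new = r @ r
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x

# Made-up SPD test system; eta = 1e-2 asks for only two digits of residual.
rng = np.random.default_rng(1)
G = rng.normal(size=(50, 50))
A = G @ G.T + 50.0 * np.eye(50)
b = rng.normal(size=50)
x = cg(A, b, rel_tol=1e-2)
```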
Sensitivity theorems in integer linear programming
 Mathematical Programming
, 1986
Abstract

Cited by 9 (0 self)
We consider integer linear programming problems with a fixed coefficient matrix and varying objective function and right-hand-side vector. Among our results, we show that, for any optimal solution to a linear program max{wx : Ax ≤ b}, the distance to the nearest optimal solution to the corresponding integer program is at most the dimension of the problem multiplied by the largest subdeterminant of the integral matrix A. Using this, we strengthen several integer programming 'proximity' results of Blair and Jeroslow; Graver; and Wolsey. We also show that the Chvátal rank of a polyhedron {x : Ax ≤ b} can be bounded above by a function of the matrix A, independent of the vector b, a result which, as Blair observed, is equivalent to Blair and Jeroslow's theorem that 'each integer programming value function is a Gomory function.'
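On a toy instance the proximity bound can be checked directly: compute the largest subdeterminant Δ of A, the LP optimum by vertex enumeration, and the IP optimum by brute force, then verify that some optimal integer point lies within n·Δ of the LP solution in the ∞-norm. The data and helper functions below are illustrative, not from the paper:

```python
import itertools

import numpy as np

def max_subdet(A):
    """Largest |det| over all square submatrices of an integer matrix."""
    m, n = A.shape
    best = 0
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                best = max(best, abs(round(np.linalg.det(
                    A[np.ix_(rows, cols)].astype(float)))))
    return best

def lp_max(A, b, w):
    """Maximize w.x over {x : A x <= b} by checking every intersection of
    two constraints -- adequate for this 2-D toy instance."""
    best, arg = -np.inf, None
    for i, j in itertools.combinations(range(len(b)), 2):
        sub = A[[i, j]].astype(float)
        if abs(np.linalg.det(sub)) < 1e-12:
            continue
        x = np.linalg.solve(sub, b[[i, j]].astype(float))
        if (A @ x <= b + 1e-9).all() and w @ x > best:
            best, arg = w @ x, x
    return arg, best

# Toy instance: max x1 + x2  s.t.  2x1 + x2 <= 7, x1 + 3x2 <= 9, x >= 0.
A = np.array([[2, 1], [1, 3], [-1, 0], [0, -1]])
b = np.array([7, 9, 0, 0])
w = np.array([1, 1])

delta = max_subdet(A)                     # largest subdeterminant of A
x_lp, _ = lp_max(A, b, w)                 # LP optimum (2.4, 2.2)
feas = [np.array(v) for v in itertools.product(range(5), repeat=2)
        if (A @ np.array(v) <= b).all()]  # all feasible integer points
best_ip = max(w @ v for v in feas)
# inf-norm distance from the LP optimum to the nearest optimal IP point
nearest = min(np.abs(v - x_lp).max() for v in feas if w @ v == best_ip)
assert nearest <= A.shape[1] * delta      # the n * Delta proximity bound
```

Here the bound n·Δ = 10 is very loose: the LP optimum (2.4, 2.2) is within 0.4 of the optimal integer point (2, 2).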
Error Bounds for Inconsistent Linear Inequalities and Programs
 Operations Research Letters
, 1994
Abstract

Cited by 2 (1 self)
For any system of linear inequalities, consistent or not, the norm of the violations of the inequalities by a given point, multiplied by a condition constant that is independent of the point, bounds the distance between the point and the nonempty set of points that minimize these violations. Similarly, for a dual pair of possibly infeasible linear programs, the norm of the violations of primal-dual feasibility and primal-dual objective equality, when multiplied by a condition constant, bounds the distance between a given point and the nonempty set of minimizers of these violations. These results extend error bounds for consistent linear inequalities and linear programs to inconsistent systems. Keywords: error bounds; linear inequalities; linear programs. Error bounds are playing an increasingly important role in mathematical programming. Beginning with Hoffman's classical error bound for linear inequalities [3], many papers have examined error bounds for linear and convex inequalities, line...