Results 1 - 5 of 5
Computational Experience and the Explanatory Value of Condition Measures for Linear Optimization, 2003
"... The modern theory of condition measures for convex optimization problems was initially developed for convex problems in the following conic format: (CP d ) : z := min x {c }, and several aspects of the theory have now been extended to handle nonconic formats as well. In this theo ..."
Abstract

Cited by 20 (5 self)
The modern theory of condition measures for convex optimization problems was initially developed for convex problems in the following conic format (CP_d): z* := min{c^T x : Ax − b ∈ C_Y, x ∈ C_X}, and several aspects of the theory have now been extended to handle nonconic formats as well. In this theory, the (Renegar) condition measure C(d) for (CP_d) has been shown to be connected to bounds on a wide variety of behavioral and computational characteristics of (CP_d), from sizes of optimal solutions to the complexity of algorithms for solving (CP_d). Herein we test the practical relevance of the condition measure theory, as applied to linear optimization problems that one might typically encounter in practice. Using the NETLIB suite of linear optimization problems as a test bed, we found that 71% of the NETLIB suite problem instances have infinite condition measure. In order to examine condition measures of the problems that are the actual input to a modern interior-point method (IPM) solver, we also computed condition measures for the NETLIB suite problems after preprocessing by CPLEX 7.1. Here we found that 19% of the postprocessed problem instances in the NETLIB suite have infinite condition measure, and that log C(d) of the postprocessed problems is fairly nicely distributed. Furthermore, among those problem instances with finite condition measure after preprocessing, there is a positive linear relationship between IPM iterations and log C(d) of the postprocessed problem instances (significant at the 95% confidence level), and 42% of the variation in IPM iterations among these NETLIB suite problem instances is accounted for by log C(d) of the postprocessed problem instances.
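The reported link between IPM iterations and log C(d) is an ordinary least-squares relationship with R² ≈ 0.42. As a minimal sketch of how such a fit and its R² are computed, using made-up stand-in numbers rather than the paper's actual NETLIB measurements:

```python
import numpy as np

# Synthetic stand-in data for a handful of hypothetical problem
# instances (NOT the NETLIB measurements reported in the paper):
# log condition measures and observed IPM iteration counts.
log_cond = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
iters = np.array([12.0, 15.0, 14.0, 19.0, 22.0, 21.0])

# Ordinary least squares: iters ≈ b0 + b1 * log C(d).
# polyfit returns coefficients highest degree first: [slope, intercept].
b1, b0 = np.polyfit(log_cond, iters, 1)

# R^2: fraction of the variation in iterations explained by log C(d).
pred = b0 + b1 * log_cond
r2 = 1 - np.sum((iters - pred) ** 2) / np.sum((iters - iters.mean()) ** 2)
```

A positive slope `b1` corresponds to the paper's positive linear relationship; the paper's 42% figure is this R² statistic computed on the finite-condition-measure NETLIB instances.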
Polynomiality of Primal-Dual Affine Scaling Algorithms for Nonlinear Complementarity Problems, 1995
"... This paper provides an analysis of the polynomiality of primaldual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to ..."
Abstract

Cited by 11 (4 self)
This paper provides an analysis of the polynomiality of primal-dual interior point algorithms for nonlinear complementarity problems using a wide neighborhood. A condition for the smoothness of the mapping is used, which is related to Zhu's scaled Lipschitz condition, but is also applicable to mappings that are not monotone. We show that a family of primal-dual affine scaling algorithms generates an approximate solution (given a precision ε) of the nonlinear complementarity problem in a finite number of iterations whose order is a polynomial of n, ln(1/ε) and a condition number. If the mapping is linear then the results in this paper coincide with the ones in [13].
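To illustrate the primal-dual affine scaling idea in its simplest (monotone, linear) setting, here is a hedged sketch for a small linear complementarity problem. The damping factor 0.9 and the test data M, q are my own illustrative choices, not taken from the paper:

```python
import numpy as np

def affine_scaling_lcp(M, q, x0, tol=1e-10, max_iter=100):
    """Damped primal-dual affine scaling sketch for the monotone LCP:
    find x >= 0, s = M x + q >= 0 with x^T s = 0."""
    x = x0.astype(float)
    s = M @ x + q
    assert np.all(x > 0) and np.all(s > 0), "need a strictly feasible start"
    n = len(x)
    for _ in range(max_iter):
        mu = x @ s / n                      # duality gap measure
        if mu < tol:
            break
        # Newton (affine scaling) direction toward x_i * s_i = 0:
        #   S dx + X ds = -X S e,  with ds = M dx
        dx = np.linalg.solve(np.diag(s) + np.diag(x) @ M, -x * s)
        ds = M @ dx
        # Damped step: stay strictly inside the positive orthant.
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
        x = x + alpha * dx
        s = s + alpha * ds
    return x, s

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite => monotone map
q = np.array([-1.0, -1.0])
x, s = affine_scaling_lcp(M, q, x0=np.ones(2))
```

For this M and q the solution has M x + q = 0 with x strictly positive, so the iterates drive s toward zero while keeping both x and s in the positive orthant.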
Two Interior-Point Algorithms for a Class of Convex Programming Problems, 1994
"... This paper describes two algorithms for the problem of minimizing a linear function over the intersection of an affine set and a convex set which is required to be the closure of the domain of a strongly selfconcordant barrier function. One algorithm is a pathfollowing method, while the other is a ..."
Abstract

Cited by 1 (0 self)
This paper describes two algorithms for the problem of minimizing a linear function over the intersection of an affine set and a convex set which is required to be the closure of the domain of a strongly self-concordant barrier function. One algorithm is a path-following method, while the other is a primal potential-reduction method. We give bounds on the number of iterations necessary to attain a given accuracy.
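For background, the standard Nesterov–Nemirovskii definition that the phrase "self-concordant barrier" refers to can be written as follows; the notation and the iteration bound quoted in the comment are standard material, not text from this paper:

```latex
% F is a \nu-self-concordant barrier for the closed convex set K if F is
% convex on int(K), tends to +infinity at the boundary of K, and for
% every x in int(K) and every direction h:
\[
  \bigl| D^3 F(x)[h,h,h] \bigr| \;\le\; 2 \bigl( D^2 F(x)[h,h] \bigr)^{3/2},
  \qquad
  \bigl( DF(x)[h] \bigr)^2 \;\le\; \nu \, D^2 F(x)[h,h].
\]
% Short-step path-following schemes built on such a barrier attain
% accuracy \epsilon in O(\sqrt{\nu}\,\log(1/\epsilon)) Newton steps,
% which is the flavor of iteration bound the abstract refers to.
```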
Summary Conclusions on Computational Experience and the Explanatory Value of Condition Measures for Linear Optimization
"... The modern theory of condition measures for convex optimization problems was initially developed for convex problems in the following conic format £¥¤§¦©¨����§���� � ���� � ���������¥��������� ¤��� � , and several aspects of the theory have now been extended to handle nonconic formats as well. In t ..."
Abstract
The modern theory of condition measures for convex optimization problems was initially developed for convex problems in the following conic format (CP_d): z* := min{c^T x : Ax − b ∈ C_Y, x ∈ C_X}, and several aspects of the theory have now been extended to handle nonconic formats as well. In this theory, the (Renegar) condition measure C(d) for (CP_d) has been shown to be connected to bounds on a wide variety of behavioral and computational characteristics of (CP_d), from sizes of optimal solutions to the complexity of algorithms for solving (CP_d). Herein we test the practical relevance of the condition measure theory, as applied to linear optimization problems that one might typically encounter in practice. Using the NETLIB suite of linear optimization problems as a test bed, we found that 71% of the NETLIB suite problem instances have infinite condition measure. In order to examine condition measures of the problems that are the actual input to a modern IPM solver, we also computed condition measures for the NETLIB suite problems after preprocessing by CPLEX 7.1. Here we found that 19% of the postprocessed problem instances in the NETLIB suite have infinite condition measure, and that log C(d) of the postprocessed problems is fairly nicely distributed. Furthermore, there is a positive linear relationship between IPM iterations and log C(d) of the postprocessed problem instances (significant at the 95% confidence level), and 42% of the variation in IPM iterations among the NETLIB suite problem instances is accounted for by log C(d) of the postprocessed problem instances.
COMPUTATIONAL EXPERIENCE AND THE EXPLANATORY VALUE OF CONDITION MEASURES FOR LINEAR OPTIMIZATION
"... Abstract. The modern theory of condition measures for convexoptimization problems was initially developed for convexproblems in the conic format (CPd) z ∗: = min{c x t x  Ax − b ∈ CY,x ∈ CX}, and several aspects of the theory have now been extended to handle nonconic formats as well. In this theory ..."
Abstract
Abstract. The modern theory of condition measures for convex optimization problems was initially developed for convex problems in the conic format (CP_d): z* := min{c^T x : Ax − b ∈ C_Y, x ∈ C_X}, and several aspects of the theory have now been extended to handle nonconic formats as well. In this theory, the (Renegar) condition measure C(d) for (CP_d) has been shown to be connected to bounds on a wide variety of behavioral and computational characteristics of (CP_d), from sizes of optimal solutions to the complexity of algorithms for solving (CP_d). Herein we test the practical relevance of the condition measure theory, as applied to linear optimization problems that one might typically encounter in practice. Using the NETLIB suite of linear optimization problems as a test bed, we found that 71% of the NETLIB suite problem instances have infinite condition measure. In order to examine condition measures of the problems that are the actual input to a modern interior-point method (IPM) solver, we also computed condition measures for the NETLIB suite problems after preprocessing by CPLEX 7.1. Here we found that 19% of the postprocessed problem instances in the NETLIB suite have infinite condition measure, and that log C(d) of the postprocessed problems is fairly nicely distributed. Furthermore, among those problem instances with finite condition measure after preprocessing, there is a positive linear relationship between IPM iterations and log C(d) of the postprocessed problem instances (significant at the 95% confidence level), and 42% of the variation in IPM iterations among these NETLIB suite problem instances is accounted for by log C(d) of the postprocessed problem instances.
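For reference, Renegar's condition measure used throughout these abstracts is the scale-invariant ratio of the size of the problem data to its distance to ill-posedness; this is standard background on C(d), written in my own notation rather than quoted from the paper:

```latex
% For data d = (A, b, c) of (CP_d), let rho(d) be the distance from d
% to the set of ill-posed instances (those whose primal or dual
% feasibility status can change under arbitrarily small perturbations):
\[
  \rho(d) \;=\; \inf \bigl\{ \, \|\Delta d\| \;:\; (CP_{d + \Delta d})
    \text{ is primal infeasible or dual infeasible} \, \bigr\},
  \qquad
  C(d) \;=\; \frac{\|d\|}{\rho(d)}.
\]
% C(d) = +infinity exactly when rho(d) = 0, which is the "infinite
% condition measure" case counted in the NETLIB statistics above.
```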