Results 1–8 of 8
Some Perturbation Theory for Linear Programming
Mathematical Programming, 1992
Abstract

Cited by 71 (2 self)
This paper examines a few relations between solution characteristics of an LP and the amount by which the LP must be perturbed to obtain either a primal infeasible LP or a dual infeasible LP. We consider such solution characteristics as the size of the optimal solution and the sensitivity of the optimal value to data perturbations. We show, for example, that an LP has a large optimal solution, or has a sensitive optimal value, only if the instance is nearly primal infeasible or dual infeasible. The results are not particularly surprising, but they do formalize an interesting viewpoint which apparently has not been made explicit in the linear programming literature. The results are rather general: several of them are valid for linear programs defined in arbitrary real normed spaces. A Hahn–Banach theorem is the main tool employed in the analysis: given a closed convex set in a normed vector space and a point in the space but not in the set, there exists a continuous linear functional strictly separating the set from the point. We introduce notation, then the results. Let X, Y denote real vector spaces, each with a norm. We use the same notation (i.e. ‖·‖) for all norms, it being clear from context which norm is referred to. Let X ...
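In finite dimensions, the separating functional guaranteed by the Hahn–Banach theorem can be constructed explicitly by projecting the point onto the closed convex set. The following sketch (an illustration of the separation statement, not taken from the paper) separates a point from the Euclidean unit ball:

```python
import numpy as np

def separate_from_unit_ball(p):
    """Given a point p strictly outside the closed Euclidean unit ball,
    return a vector a defining a functional f(x) = a @ x that strictly
    separates the ball from p: sup over the ball of f is ||a|| < f(p)."""
    q = p / np.linalg.norm(p)      # projection of p onto the unit ball
    a = p - q                      # normal of the separating hyperplane
    return a

p = np.array([2.0, 0.0])
a = separate_from_unit_ball(p)
sup_over_ball = np.linalg.norm(a)  # sup of a @ x over the unit ball
assert a @ p > sup_over_ball       # strict separation holds
```

The same construction (point minus its projection) works for any closed convex set in R^n; the infinite-dimensional statement used in the paper is what requires the Hahn–Banach theorem.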
Condition Measures and Properties of the Central Trajectory of a Linear Program
Mathematical Programming, 1997
Abstract

Cited by 36 (15 self)
Given a data instance d = (A, b, c) of a linear program, we show that certain properties of solutions along the central trajectory of the linear program are inherently related to the condition number C(d) of the data instance, where C(d) is a scale-invariant reciprocal of a closely related measure ρ(d) called the "distance to ill-posedness." (The distance to ill-posedness essentially measures how close the data instance d = (A, b, c) is to being primal or dual infeasible.) We present lower and upper bounds on sizes of optimal solutions along the central trajectory, and on rates of change of solutions along the central trajectory, as either the barrier parameter μ or the data d = (A, b, c) of the linear program is changed. These bounds are all linear or polynomial functions of certain natural parameters associated with the linear program, namely the condition number C(d), the distance to ill-posedness ρ(d), the norm of the data ‖d‖, and the dimensions m and n.
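As a concrete illustration of a central trajectory (a toy example of mine, not from the paper), consider min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0. Eliminating x2 reduces the barrier problem min c^T x - μ(ln x1 + ln x2) to one variable, and setting the derivative to zero gives the quadratic x1^2 + (2μ - 1)x1 - μ = 0, whose root in (0, 1) is the central-path point:

```python
import math

def central_path_point(mu):
    """Point on the central trajectory of
        min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0,
    obtained by solving the barrier optimality condition
    x1**2 + (2*mu - 1)*x1 - mu = 0 for its root in (0, 1)."""
    x1 = ((1 - 2 * mu) + math.sqrt((2 * mu - 1) ** 2 + 4 * mu)) / 2
    return x1, 1 - x1

# For every mu > 0 the point is strictly interior; as the barrier
# parameter mu -> 0 the trajectory converges to the optimal vertex (1, 0).
for mu in [1.0, 0.1, 1e-3, 1e-8]:
    x1, x2 = central_path_point(mu)
    assert 0 < x1 < 1 and 0 < x2 < 1
```

On this well-conditioned instance the trajectory behaves tamely; the paper's point is that how tamely, in general, is governed by C(d) and ρ(d).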
A New Condition Measure, Pre-Conditioners, and Relations between Different Measures of Conditioning for Conic Linear Systems
2001
Abstract

Cited by 19 (6 self)
In recent years, a body of research into "condition numbers" for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be relevant in studying the efficiency of algorithms (including interior-point algorithms) for convex optimization as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FP_d): Ax = b, x ∈ C_X, whose data is d = (A, b). We present a new measure of conditioning, and we show its implications for problem geometry and algorithm complexity, demonstrating that its value is independent of the specific data representation of (FP_d). We then prove certain relations among a variety of condition measures for (FP_d), including the new measure and C(d). We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we introduce the notion of a "pre-conditioner" for (FP_d) which results in an equivalent formulation (FP_~d) of (FP_d) with a better condition number C(~d). We characterize the best such pre-conditioner and provide an algorithm and complexity analysis for constructing an equivalent data instance ~d whose condition number C(~d) is within a known factor of the best possible.
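The idea of a pre-conditioner is familiar from numerical linear algebra, where a simple diagonal (Jacobi) scaling can already shrink the classical matrix condition number κ(A) dramatically. The sketch below uses κ(A) as a stand-in for C(d) (an analogy only; the paper's pre-conditioners act on the data d of the conic system, not on a matrix alone):

```python
import numpy as np

# A badly scaled symmetric positive definite matrix.
A = np.array([[1.0e4, 1.0],
              [1.0,   1.0]])

# Jacobi pre-conditioner: rescale so the diagonal becomes all ones.
D = np.diag(1.0 / np.sqrt(np.diag(A)))
A_scaled = D @ A @ D               # equivalent system, better conditioning

print(np.linalg.cond(A))           # ~1e4
print(np.linalg.cond(A_scaled))    # ~1.02
assert np.linalg.cond(A_scaled) < np.linalg.cond(A)
```

As in the paper, the pre-conditioned system is equivalent to the original one (its solutions are in one-to-one correspondence via D), but its condition number is far smaller.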
Condition Number Complexity of an Elementary Algorithm for Resolving a Conic Linear System
1997
Abstract

Cited by 16 (4 self)
We develop an algorithm for resolving a conic linear system (FP_d), which is a system of the form (FP_d): b − Ax ∈ C_Y, x ∈ C_X, where C_X and C_Y are closed convex cones, and the data for the system is d = (A, b).
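For the special case C_X = R^n_+ and C_Y = R^m_+ (i.e. find x >= 0 with Ax <= b), feasibility can be sought by a simple projected-gradient scheme on the squared constraint violation. This sketch is only an illustrative stand-in; the paper's elementary algorithm and its condition-number complexity analysis are different:

```python
import numpy as np

def find_feasible(A, b, x0, steps=5000, eta=0.05):
    """Seek x in C_X = R^n_+ with b - A x in C_Y = R^m_+ by projected
    gradient descent on f(x) = 0.5 * ||max(A x - b, 0)||**2."""
    x = np.maximum(x0, 0.0)
    for _ in range(steps):
        violation = np.maximum(A @ x - b, 0.0)
        if not violation.any():
            return x                                    # feasible point found
        x = np.maximum(x - eta * A.T @ violation, 0.0)  # step, then project onto R^n_+
    return x

A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
x = find_feasible(A, b, x0=np.array([10.0, 10.0]))  # start infeasible
assert (x >= 0).all() and (A @ x <= b + 1e-6).all()
```

The step size eta must stay below 2 divided by the largest eigenvalue of A^T A for this scheme to converge; 0.05 is safe for this toy instance.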
Computational Experience and the Explanatory Value of Condition Measures for Linear Optimization
2003
Abstract

Cited by 16 (5 self)
The modern theory of condition measures for convex optimization problems was initially developed for convex problems in the following conic format: (CP_d): z* := min_x {c^T x : Ax − b ∈ C_Y, x ∈ C_X}, and several aspects of the theory have now been extended to handle nonconic formats as well. In this theory, the (Renegar) condition measure C(d) for (CP_d) has been shown to be connected to bounds on a wide variety of behavioral and computational characteristics of (CP_d), from sizes of optimal solutions to the complexity of algorithms for solving (CP_d). Herein we test the practical relevance of the condition measure theory, as applied to linear optimization problems that one might typically encounter in practice. Using the NETLIB suite of linear optimization problems as a test bed, we found that 71% of the NETLIB suite problem instances have infinite condition measure. In order to examine condition measures of the problems that are the actual input to a modern IPM solver, we also computed condition measures for the NETLIB suite problems after preprocessing by CPLEX 7.1. Here we found that 19% of the preprocessed problem instances in the NETLIB suite have infinite condition measure, and that log C(d) of the preprocessed problems is fairly nicely distributed. Furthermore, among those problem instances with finite condition measure after preprocessing, there is a positive linear relationship between IPM iterations and log C(d) of the preprocessed problem instances (significant at the 95% confidence level), and 42% of the variation in IPM iterations among these NETLIB suite problem instances is accounted for by log C(d) of the preprocessed problem instances.
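The claim that 42% of the variation in iterations is accounted for by log C(d) is the R-squared of a simple linear regression. With synthetic data (hypothetical stand-ins, not the NETLIB measurements) the computation looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: log condition measures and IPM iteration counts
# with a positive linear relationship plus noise (illustration only).
log_cond = rng.uniform(2.0, 12.0, size=40)
iterations = 10 + 1.5 * log_cond + rng.normal(0.0, 3.0, size=40)

slope, intercept = np.polyfit(log_cond, iterations, 1)
r = np.corrcoef(log_cond, iterations)[0, 1]
r_squared = r ** 2    # fraction of variation in iterations explained

assert slope > 0 and 0.0 < r_squared <= 1.0
```

A positive slope significant at the 95% level, together with the R-squared, is exactly the kind of summary the paper reports for the preprocessed NETLIB instances.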
Condition-Measure Bounds on the Behavior of the Central Trajectory of a Semi-Definite Program
2000
Abstract

Cited by 8 (5 self)
We present bounds on various quantities of interest regarding the central trajectory of a semidefinite program (SDP), where the bounds are functions of Renegar's condition number C(d) and other naturally occurring quantities such as the dimensions n and m. The condition number C(d) is defined in terms of the data instance d = (A, b, C) for SDP; it is the inverse of a relative measure of the distance of the data instance to the set of ill-posed data instances, that is, data instances for which arbitrarily small perturbations can make the corresponding SDP either feasible or infeasible. We provide upper and lower bounds on the solutions along the central trajectory, and upper bounds on changes in solutions and objective function values along the central trajectory when the data instance is perturbed and/or when the path parameter defining the central trajectory is changed. Based on these bounds, we prove that the solutions along the central trajectory grow at most linearly and at a rate prop...
On an extension of condition number theory to nonconic convex optimization
Math. Oper. Res., 2005
Abstract

Cited by 2 (0 self)
The purpose of this paper is to extend, as much as possible, the modern theory of condition numbers for conic convex optimization to the more general nonconic format: z* := min_x c^T x s.t. Ax − b ∈ C_Y, x ∈ C_X, ...