Results 1 - 10 of 27
Some Perturbation Theory for Linear Programming
Mathematical Programming, 1992
Cited by 76 (2 self)
Abstract: This paper examines a few relations between solution characteristics of an LP and the amount by which the LP must be perturbed to obtain either a primal infeasible LP or a dual infeasible LP. We consider such solution characteristics as the size of the optimal solution and the sensitivity of the optimal value to data perturbations. We show, for example, that an LP has a large optimal solution, or has a sensitive optimal value, only if the instance is nearly primal infeasible or dual infeasible. The results are not particularly surprising, but they formalize an interesting viewpoint which apparently has not been made explicit in the linear programming literature. The results are rather general; several of them are valid for linear programs defined in arbitrary real normed spaces. A Hahn-Banach theorem is the main tool employed in the analysis: given a closed convex set in a normed vector space and a point in the space but not in the set, there exists a continuous linear functional strictly separating the set from the point. We introduce notation, then the results. Let X, Y denote real vector spaces, each with a norm. We use the same notation (i.e. ‖·‖) for all norms, it being clear from context which norm is referred to. Let X ...
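The qualitative claim above admits a simple numerical illustration. The following sketch (a toy example of my own construction, not taken from the paper) uses SciPy's `linprog` on a one-variable LP whose constraint eps·x ≥ 1 becomes infeasible at eps = 0; as eps shrinks toward infeasibility, the optimal solution blows up like 1/eps:

```python
# Toy illustration: as an LP approaches primal infeasibility,
# its optimal solution grows without bound.
from scipy.optimize import linprog

# LP(eps): minimize x  subject to  eps * x >= 1,  x >= 0.
# At eps = 0 the constraint reads 0 >= 1: primal infeasible.
for eps in [1.0, 0.1, 0.01, 0.001]:
    # linprog uses A_ub @ x <= b_ub, so rewrite eps*x >= 1 as -eps*x <= -1.
    res = linprog(c=[1.0], A_ub=[[-eps]], b_ub=[-1.0], bounds=[(0, None)])
    print(f"eps={eps:7.3f}  optimal x = {res.x[0]:10.1f}")
```

The optimal solution here is exactly 1/eps, so the blow-up rate is the reciprocal of the distance to infeasibility, matching the viewpoint formalized in the abstract.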
Some Characterizations And Properties Of The "Distance To Ill-Posedness" And The Condition Measure Of A Conic Linear System
1998
Cited by 47 (21 self)
Abstract: A conic linear system is a system of the form P(d): find x that solves b − Ax ∈ C_Y, x ∈ C_X, where C_X and C_Y are closed convex cones, and the data for the system is d = (A, b). This system is "well-posed" to the extent that (small) changes in the data (A, b) do not alter the status of the system (the system remains solvable or not). Renegar defined the "distance to ill-posedness," ρ(d), to be the smallest change in the data Δd = (ΔA, Δb) for which the system P(d + Δd) is "ill-posed," i.e., d + Δd is in the intersection of the closure of feasible and infeasible instances d' = (A', b') of P(·). Renegar also defined the "condition measure" of the data instance d as C(d) := ‖d‖/ρ(d), and showed that this measure is a natural extension of the familiar condition measure associated with systems of linear equations. This study presents two categories of results related to ρ(d), the distance to ill-posedness, and C(d), the condition measure of d.
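In symbols (reconstructed from the description above, with ‖·‖ denoting the norm on the data space), the two quantities are:

```latex
\[
\rho(d) \;=\; \inf\bigl\{\, \|\Delta d\| \;:\; \Delta d = (\Delta A, \Delta b),\;
d + \Delta d \text{ is ill-posed} \,\bigr\},
\qquad
C(d) \;=\; \frac{\|d\|}{\rho(d)} .
\]
```

Note the analogy with the classical condition number of a square matrix, κ(A) = ‖A‖·‖A⁻¹‖: by the Eckart-Young theorem, 1/‖A⁻¹‖ is the distance from A to the nearest singular (ill-posed) matrix, so κ(A) is likewise a norm divided by a distance to ill-posedness.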
Condition-Based Complexity Of Convex Optimization In Conic Linear Form Via The Ellipsoid Algorithm
1998
Cited by 39 (17 self)
Abstract: A convex optimization problem in conic linear form is an optimization problem of the form CP(d): maximize c^T x ...
Condition Measures and Properties of the Central Trajectory of a Linear Program
Mathematical Programming, 1997
Cited by 36 (15 self)
Abstract: Given a data instance d = (A, b, c) of a linear program, we show that certain properties of solutions along the central trajectory of the linear program are inherently related to the condition number C(d) of the data instance d = (A, b, c), where C(d) is a scale-invariant reciprocal of a closely related measure ρ(d) called the "distance to ill-posedness." (The distance to ill-posedness essentially measures how close the data instance d = (A, b, c) is to being primal or dual infeasible.) We present lower and upper bounds on sizes of optimal solutions along the central trajectory, and on rates of change of solutions along the central trajectory, as either the barrier parameter μ or the data d = (A, b, c) of the linear program is changed. These bounds are all linear or polynomial functions of certain natural parameters associated with the linear program, namely the condition number C(d), the distance to ill-posedness ρ(d), the norm of the data ‖d‖, and the dimensions m and n.
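For reference, the central trajectory of a standard-form LP with data d = (A, b, c) is the curve of minimizers of the logarithmic-barrier problem, parameterized by the barrier parameter μ (this is the standard definition, consistent with the abstract's notation, not a formula quoted from the paper):

```latex
\[
x(\mu) \;=\; \arg\min_{x} \Bigl\{\, c^{T}x \;-\; \mu \sum_{j=1}^{n} \ln x_j
\;:\; Ax = b,\; x > 0 \,\Bigr\}, \qquad \mu > 0 .
\]
```

As μ → 0⁺ the trajectory converges to an optimal solution of the LP; the bounds described above control both ‖x(μ)‖ and the rate of change of x(μ) in μ and in d.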
Computational Experience and the Explanatory Value of Condition Measures for Linear Optimization
2003
Cited by 20 (5 self)
Abstract: The modern theory of condition measures for convex optimization problems was initially developed for convex problems in the following conic format: (CP_d): z := min_x {c^T x : ...}, and several aspects of the theory have now been extended to handle non-conic formats as well. In this theory, the (Renegar) condition measure C(d) for (CP_d) has been shown to be connected to bounds on a wide variety of behavioral and computational characteristics of (CP_d), from sizes of optimal solutions to the complexity of algorithms for solving (CP_d). Herein we test the practical relevance of the condition measure theory, as applied to linear optimization problems that one might typically encounter in practice. Using the NETLIB suite of linear optimization problems as a test bed, we found that 71% of the NETLIB suite problem instances have infinite condition measure. In order to examine condition measures of the problems that are the actual input to a modern IPM solver, we also computed condition measures for the NETLIB suite problems after preprocessing by CPLEX 7.1. Here we found that 19% of the preprocessed problem instances in the NETLIB suite have infinite condition measure, and that log C(d) of the preprocessed problems is fairly nicely distributed. Furthermore, among those problem instances with finite condition measure after preprocessing, there is a positive linear relationship between IPM iterations and log C(d) of the preprocessed problem instances (significant at the 95% confidence level), and 42% of the variation in IPM iterations among these NETLIB suite problem instances is accounted for by log C(d) of the preprocessed problem instances.
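The reported analysis is an ordinary least-squares fit of IPM iteration counts against log C(d), with the quoted 42% being the resulting R². A minimal sketch of that computation on synthetic stand-in data (the actual NETLIB measurements are not reproduced here):

```python
# Hypothetical sketch: regress IPM iteration counts against log C(d)
# and report R^2, mirroring the kind of analysis described above.
# The data below is synthetic, not the paper's NETLIB measurements.
import numpy as np

rng = np.random.default_rng(0)
log_cond = rng.uniform(2, 10, size=60)                    # stand-in for log C(d)
iters = 10 + 2.0 * log_cond + rng.normal(0, 3, size=60)   # stand-in iteration counts

slope, intercept = np.polyfit(log_cond, iters, 1)         # least-squares line
r_squared = np.corrcoef(log_cond, iters)[0, 1] ** 2       # fraction of variance explained
print(f"fit: iterations ~ {intercept:.1f} + {slope:.2f} * log C(d),  R^2 = {r_squared:.2f}")
```

An R² of 0.42, as in the paper, would mean that log C(d) alone accounts for a substantial but far from dominant share of the iteration-count variation.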
A New Condition Measure, Pre-Conditioners, and Relations between Different Measures of Conditioning for Conic Linear Systems
2001
Cited by 18 (6 self)
Abstract: In recent years, a body of research into "condition numbers" for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be relevant in studying the efficiency of algorithms (including interior-point algorithms) for convex optimization as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FP_d): Ax = b, x ∈ C_X, whose data is d = (A, b). We present a new measure of conditioning, denoted μ_d, and we show implications of μ_d for problem geometry and algorithm complexity, and demonstrate that the value of μ_d is independent of the specific data representation of (FP_d). We then prove certain relations among a variety of condition measures for (FP_d), including μ_d, C(d), and others. We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we introduce the notion of a "pre-conditioner" for (FP_d) which results in an equivalent formulation (FP_d̃) of (FP_d) with a better condition number C(d̃). We characterize the best such pre-conditioner and provide an algorithm and complexity analysis for constructing an equivalent data instance d̃ whose condition number C(d̃) is within a known factor of the best possible.
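As a loose analogy to the pre-conditioner idea (using the classical matrix condition number κ(A) rather than Renegar's C(d)), simple row scaling already shows how re-representing the same system can drastically improve conditioning:

```python
# Analogy sketch: row equilibration as a simple "pre-conditioner".
# This illustrates the classical kappa(A), not the paper's C(d).
import numpy as np

A = np.array([[1.0,  1.0],
              [1e-6, 2e-6]])   # badly scaled second row
print("cond(A)     =", np.linalg.cond(A))

# Equivalent system after dividing each row by its Euclidean norm;
# D @ A x = D @ b has the same solution set as A x = b.
D = np.diag(1.0 / np.linalg.norm(A, axis=1))
print("cond(D @ A) =", np.linalg.cond(D @ A))
```

The condition number drops from roughly 2e6 to single digits, even though the two systems have identical solution sets; the paper pursues the analogous question of the best achievable C(d̃) over equivalent conic reformulations.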
Condition Number Complexity of an Elementary Algorithm for Resolving a Conic Linear System
1997
Cited by 16 (4 self)
Abstract: We develop an algorithm for resolving a conic linear system (FP_d), which is a system of the form (FP_d): b − Ax ∈ C_Y, x ∈ C_X, where C_X and C_Y are closed convex cones, and the data for the system is d = (A, b). ...
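In the polyhedral special case C_X = R^n_+ and C_Y = R^m_+, the system (FP_d) reduces to LP feasibility, which the following stand-in sketch resolves with SciPy (an illustration of the problem format only, not the paper's elementary algorithm):

```python
# Polyhedral instance of (FP_d) with C_X = R^n_+, C_Y = R^m_+:
# find x >= 0 with b - A x >= 0, i.e. A x <= b (pure feasibility).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0],
              [2.0,  1.0]])
b = np.array([1.0, 4.0])

# Zero objective: any feasible point certifies solvability of (FP_d).
res = linprog(c=np.zeros(2), A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print("feasible:", res.status == 0, " x =", res.x)
```

An infeasible status from the solver would instead correspond, via LP duality, to a certificate of infeasibility for (FP_d).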
Some characterizations and properties of the "distance to ill-posedness" and the condition measure of a conic linear system
Mathematical Programming, 1999
Cited by 16 (10 self)
Abstract: A conic linear system is a system of the form P: find x that solves b − Ax ∈ C_Y, x ∈ C_X, where C_X and C_Y are closed convex cones, and the data for the system is d = (A, b). This system is "well-posed" to the extent that (small) changes in the data (A, b) do not alter the status of the system (the system remains solvable or not). Intuitively, the more well-posed the system is, the easier it should be to solve the system or to demonstrate its infeasibility via a theorem of the alternative. Renegar defined the "distance to ill-posedness," ρ(d), to be the smallest distance of the data d = (A, b) to other data d̄ = (Ā, b̄) for which the system P is "ill-posed," i.e., d̄ = (Ā, b̄) is in the intersection of the closure of feasible and infeasible instances d' = (A', b') of P. Renegar also defined the "condition measure" of the data instance d as C(d) := ‖d‖/ρ(d), and showed that this measure is a natural extension of the familiar condition measure associated with systems of linear equations. This study presents two categories of results related to ρ(d), the distance to ill-posedness, and C(d), the condition measure of d. The first category of results involves the approximation of ρ(d) as the optimal value of certain mathematical programs. We present ten ...
On the Complexity of Computing Estimates of Condition Measures of a Conic Linear System
2001
Cited by 14 (9 self)
Abstract: Condition numbers based on the "distance to ill-posedness" ρ(d) have been shown to play a crucial role in the theoretical complexity of solving convex optimization models. In this paper we present two algorithms and corresponding complexity analysis for computing estimates of ρ(d) for a finite-dimensional convex feasibility problem P(d) in standard primal form: find x that satisfies Ax = b, x ∈ C_X, where d = (A, b) is the data for the problem P(d). Under one choice of norms for the m- and n-dimensional spaces, the problem of estimating ρ(d) is hard (co-NP complete even when C_X = R^n_+). However, when the norms are suitably chosen, the problem becomes much easier: we can estimate ρ(d) to within a constant factor of its true value with complexity bounds that are linear in ln(C(d)) (where C(d) is the condition number of the data d for P(d)), plus other quantities that arise naturally in consideration of the problem P(d). The first algorithm is an interior-point algorithm, and the second algorithm is a variant of the ellipsoid algorithm. The main conclusion of this work is that when the norms are suitably ...