Results 1–10 of 12
Smoothed analysis of Renegar’s condition number for linear programming
, 2003
"... We perform a smoothed analysis of Renegar’s condition number for linear programming. In particular, we show that for every nbyd matrix Ā, nvector ¯ b and dvector ¯c satisfying ∥ Ā, ¯ b, ¯c ∥ ∥ F ≤ 1 and every σ ≤ 1 / √ dn, the expectation of the logarithm of C(A,b,c) is O(log(nd/σ)), where A, ..."
Abstract

Cited by 22 (6 self)
We perform a smoothed analysis of Renegar’s condition number for linear programming. In particular, we show that for every n-by-d matrix Ā, n-vector b̄ and d-vector c̄ satisfying ‖(Ā, b̄, c̄)‖_F ≤ 1 and every σ ≤ 1/√(dn), the expectation of the logarithm of C(A, b, c) is O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, b̄ and c̄ of variance σ². From this bound, we obtain a smoothed analysis of Renegar’s interior point algorithm. By combining this with the smoothed analysis of finite termination of Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed complexity of linear programming is O(n³ log(nd/σ)).
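Renegar’s condition number is expensive to compute exactly, but the perturbation model in this abstract is easy to simulate. The sketch below is illustrative only: it substitutes the classical matrix condition number cond(A) as a rough proxy for C(A, b, c), and the sizes and sample count are arbitrary. It draws Gaussian perturbations of variance σ² around normalized base data and compares the empirical mean of the log-condition number with the log(nd/σ) scale appearing in the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 20, 10, 0.1

# Fixed base data normalized so that ||(A_bar, b_bar, c_bar)||_F <= 1,
# matching the theorem's hypothesis.
A_bar = rng.standard_normal((n, d))
b_bar = rng.standard_normal(n)
c_bar = rng.standard_normal(d)
scale = np.sqrt(np.sum(A_bar**2) + np.sum(b_bar**2) + np.sum(c_bar**2))
A_bar, b_bar, c_bar = A_bar / scale, b_bar / scale, c_bar / scale

# Renegar's C(A, b, c) is costly to compute; as a crude, purely
# illustrative proxy we track the classical condition number of the
# perturbed constraint matrix under the same Gaussian model.
logs = []
for _ in range(200):
    A = A_bar + sigma * rng.standard_normal((n, d))
    logs.append(np.log(np.linalg.cond(A)))

print(f"empirical E[log cond(A)] ~= {np.mean(logs):.3f}")
print(f"log(nd/sigma) scale:        {np.log(n * d / sigma):.3f}")
```

The proxy only conveys the shape of the experiment; the theorem concerns Renegar’s condition number of the full LP data (A, b, c), not cond(A).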
Computational Experience and the Explanatory Value of Condition Measures for Linear Optimization
, 2003
"... The modern theory of condition measures for convex optimization problems was initially developed for convex problems in the following conic format: (CP d ) : z := min x {c }, and several aspects of the theory have now been extended to handle nonconic formats as well. In this theo ..."
Abstract

Cited by 15 (5 self)
The modern theory of condition measures for convex optimization problems was initially developed for convex problems in the following conic format: (CP_d): z* := min_x { c^T x : … }, and several aspects of the theory have now been extended to handle non-conic formats as well. In this theory, the (Renegar) condition measure C(d) for (CP_d) has been shown to be connected to bounds on a wide variety of behavioral and computational characteristics of (CP_d), from sizes of optimal solutions to the complexity of algorithms for solving (CP_d). Herein we test the practical relevance of the condition measure theory, as applied to linear optimization problems that one might typically encounter in practice. Using the NETLIB suite of linear optimization problems as a test bed, we found that 71% of the NETLIB suite problem instances have infinite condition measure. In order to examine condition measures of the problems that are the actual input to a modern IPM solver, we also computed condition measures for the NETLIB suite problems after preprocessing by CPLEX 7.1. Here we found that 19% of the postprocessed problem instances in the NETLIB suite have infinite condition measure, and that log C(d) of the postprocessed problems is fairly nicely distributed. Furthermore, among those problem instances with finite condition measure after preprocessing, there is a positive linear relationship between IPM iterations and log C(d) of the postprocessed problem instances (significant at the 95% confidence level), and 42% of the variation in IPM iterations among these NETLIB suite problem instances is accounted for by log C(d) of the postprocessed problem instances.
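The reported 42% figure is an R² from a linear fit of IPM iterations against log C(d). The sketch below shows how such a fit and its R² are computed; the data are synthetic stand-ins (the slope, noise level, and sample size are invented for illustration), not the NETLIB measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical/synthetic data standing in for the NETLIB measurements:
# a noisy positive linear relationship between log C(d) and iterations.
log_C = rng.uniform(2.0, 12.0, size=50)          # log condition measures
iters = 10 + 1.5 * log_C + rng.normal(0, 2, 50)  # noisy linear response

# Least-squares fit iters ~ a + b * log_C, plus the R^2 of the fit.
b, a = np.polyfit(log_C, iters, 1)
pred = a + b * log_C
ss_res = np.sum((iters - pred) ** 2)
ss_tot = np.sum((iters - iters.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope = {b:.3f}, intercept = {a:.3f}, R^2 = {r_squared:.3f}")
```

With the paper’s real data, an R² of 0.42 would mean 42% of the iteration-count variation is explained by log C(d), exactly the statistic quoted above.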
Smoothed Analysis of Condition Numbers and Complexity Implications for Linear Programming
, 2009
"... We perform a smoothed analysis of Renegar’s condition number for linear programming by analyzing the distribution of the distance to illposedness of a linear program subject to a slight Gaussian perturbation. In particular, we show that for every nbyd matrix Ā, nvector ¯ b, and dvector ¯c satis ..."
Abstract

Cited by 7 (0 self)
We perform a smoothed analysis of Renegar’s condition number for linear programming by analyzing the distribution of the distance to ill-posedness of a linear program subject to a slight Gaussian perturbation. In particular, we show that for every n-by-d matrix Ā, n-vector b̄, and d-vector c̄ satisfying ‖(Ā, b̄, c̄)‖_F ≤ 1 and every σ ≤ 1, E_{A,b,c}[log C(A, b, c)] = O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, b̄ and c̄ of variance σ² and C(A, b, c) is the condition number of the linear program defined by (A, b, c). From this bound, we obtain a smoothed analysis of interior point algorithms. By combining this with the smoothed analysis of finite termination of Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed complexity of interior point algorithms for linear programming is O(n³ log(nd/σ)).
Behavioral measures and their correlation with IPM iteration counts on semidefinite programming problems
, 2005
"... We study four measures of problem instance behavior that might account for the observed differences in interiorpoint method (IPM) iterations when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasib ..."
Abstract

Cited by 7 (0 self)
We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iterations when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and norms of the optimal solutions, (ii) the (Renegar) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the SDPLIB suite problem instances and measure the correlation between these measures and IPM iteration counts (solved using the software SDPT3) when the measures have finite values. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896), and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.
A geometric analysis of Renegar’s condition number, and its interplay with conic curvature
 Math. Programming
, 2008
"... For a conic linear system of the form Ax ∈ K, K a convex cone, several condition measures have been extensively studied in the last dozen years. Among these, Renegar’s condition number C(A) is arguably the most prominent for its relation to data perturbation, error bounds, problem geometry, and comp ..."
Abstract

Cited by 6 (1 self)
For a conic linear system of the form Ax ∈ K, K a convex cone, several condition measures have been extensively studied in the last dozen years. Among these, Renegar’s condition number C(A) is arguably the most prominent for its relation to data perturbation, error bounds, problem geometry, and computational complexity of algorithms. Nonetheless, C(A) is a representation-dependent measure which is usually difficult to interpret and may lead to overly conservative bounds on computational complexity and/or geometric quantities associated with the set of feasible solutions. Herein we show that Renegar’s condition number is bounded from above and below by certain purely geometric quantities associated with A and K, and we highlight the role of the singular values of A and their relationship with the condition number. Moreover, by using the notion of conic curvature, we show how Renegar’s condition number can be used to provide both lower and upper bounds on the width of the set of feasible solutions. This complements the literature, where only lower bounds have heretofore been developed.
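The connection between singular values and distance-type condition numbers can be made concrete in the classical square-matrix case: by the Eckart–Young theorem, σ_min(A) is exactly the spectral-norm distance from A to the nearest singular (ill-posed) matrix, so κ(A) = σ_max/σ_min is a norm divided by a distance to ill-posedness, the same flavor as Renegar’s C(A). Below is a minimal numerical check of this standard fact (it is not the paper’s conic bounds):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))

# Singular values in decreasing order.
s = np.linalg.svd(A, compute_uv=False)
sigma_min, sigma_max = s[-1], s[0]

# Eckart-Young: the nearest singular matrix in spectral norm is
# A - sigma_min * u_min v_min^T, at distance exactly sigma_min.
U, S, Vt = np.linalg.svd(A)
A_sing = A - sigma_min * np.outer(U[:, -1], Vt[-1, :])

dist = np.linalg.norm(A - A_sing, ord=2)
print(f"sigma_min                 = {sigma_min:.6f}")
print(f"distance to A_sing        = {dist:.6f}")
print(f"rank of A_sing            = {np.linalg.matrix_rank(A_sing)}")
print(f"kappa = sigma_max / dist  = {sigma_max / dist:.3f}")
```

The same "norm over distance to the ill-posed set" structure is what Renegar’s condition number generalizes to conic feasibility systems Ax ∈ K.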
Condition and Complexity Measures for Infeasibility Certificates of Systems of Linear Inequalities and Their Sensitivity Analysis
, 2002
"... We begin with a study of the infeasibility measures for linear programming problems. For this purpose, we consider feasibility problems in Karmarkar's standard form. Our main focus is on the complexity measures which can be used to bound the amount of computational effort required to solve systems o ..."
Abstract

Cited by 1 (1 self)
We begin with a study of the infeasibility measures for linear programming problems. For this purpose, we consider feasibility problems in Karmarkar's standard form. Our main focus is on the complexity measures which can be used to bound the amount of computational effort required to solve systems of linear inequalities and related problems in certain ways.
Numerical Stability in Linear Programming and Semidefinite Programming
, 2006
"... We study numerical stability for interiorpoint methods applied to Linear Programming, LP, and Semidefinite Programming, SDP. We analyze the di#culties inherent in current methods and present robust algorithms. ..."
Abstract

Cited by 1 (1 self)
We study numerical stability for interior-point methods applied to Linear Programming, LP, and Semidefinite Programming, SDP. We analyze the difficulties inherent in current methods and present robust algorithms.
Equivalence of Convex Problem Geometry and Computational Complexity in the Separation Oracle Model
"... Consider the following supposedlysimple problem: compute x satisfying x ∈ S, where S is a convex set conveyed by a separation oracle, with no further information (e.g., no bounding ball containing or intersecting S, etc.). Our interest in this problem stems from fundamental issues involving the int ..."
Abstract
Consider the following supposedly simple problem: compute x satisfying x ∈ S, where S is a convex set conveyed by a separation oracle, with no further information (e.g., no bounding ball containing or intersecting S, etc.). Our interest in this problem stems from fundamental issues involving the interplay of (i) the computational complexity of computing a point x ∈ S, (ii) the geometry of S, and (iii) the stability or conditioning of S under perturbation. Under suitable definitions of these terms, we show herein that problem instances with favorable geometry have favorable computational complexity, validating conventional wisdom. We also show a converse of this implication, by showing that there exist problem instances in certain families characterized by unfavorable geometry that require more computational effort to solve. This in turn leads, under certain assumptions, to a form of equivalence among computational complexity, the geometry of S, and the conditioning of S. Our measures of the geometry of S, relative to a given (reference) point x̄, are the aspect ratio A = R/r, as well as R and 1/r, where B(x̄, R) ∩ S contains a ball of radius r. The aspect ratio arises in the analyses of many algorithms for convex problems, and its importance in convex algorithm analysis has been well known for several decades. However, the terms R and 1/r in our complexity results are a bit counterintuitive; nevertheless, we show that the computational complexity must involve these terms in addition to the aspect ratio.
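A one-dimensional toy version makes the role of R and r concrete: a hidden feasible interval of radius r, known only to lie within [−R, R] and accessible only through a separation oracle, can be found by bisection in roughly log₂(R/r) oracle queries. This is a hypothetical illustration of the oracle model, not the paper’s algorithm, and all numbers are made up.

```python
# Find a point in a hidden interval S = [lo, hi] of radius r, known only
# to lie inside [-R, R], using a separation oracle.  The query count
# grows like log2(R / r), illustrating how complexity in the oracle
# model is governed by R and 1/r (hence by the aspect ratio R/r).

def separation_oracle(x, lo, hi):
    """Return 0 if x is in S; otherwise the direction in which S lies."""
    if x < lo:
        return +1   # S lies to the right of x
    if x > hi:
        return -1   # S lies to the left of x
    return 0

def find_feasible(R, lo, hi):
    left, right, queries = -R, R, 0
    while True:
        x = 0.5 * (left + right)
        queries += 1
        ans = separation_oracle(x, lo, hi)
        if ans == 0:
            return x, queries       # feasible point found
        if ans > 0:
            left = x                # cut away the left half
        else:
            right = x               # cut away the right half

# Hypothetical instance: r = 1e-3, R = 1000, so R/r = 1e6.
x, q = find_feasible(1000.0, 0.2495, 0.2515)
print(f"found x = {x} after {q} oracle queries (log2(R/r) ~= 20)")
```

The invariant is that S always stays inside [left, right]; once that interval is narrower than twice the width of S, the midpoint must land inside S, which bounds the query count by about log₂(R/r).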
Preprocessing and . . .
, 2013
"... This paper presentsa backward stable preprocessing technique for (nearly) illposed semidefinite programming, SDP, problems, i.e., programs for which the Slater constraint qualification, existence of strictly feasible points, (nearly) fails. Current popular algorithms for semidefinite programming r ..."
Abstract
This paper presents a backward stable preprocessing technique for (nearly) ill-posed semidefinite programming, SDP, problems, i.e., programs for which the Slater constraint qualification, the existence of strictly feasible points, (nearly) fails. Current popular algorithms for semidefinite programming rely on primal-dual interior-point (p-d i-p) methods. These algorithms require the Slater constraint qualification for both the primal and dual problems. This assumption guarantees the existence of Lagrange multipliers, well-posedness of the problem, and stability of algorithms. However, there are many instances of SDPs where the Slater constraint qualification fails or nearly fails. Our backward stable preprocessing technique is based on applying the Borwein–Wolkowicz facial reduction process to find a finite number, k, of rank-revealing orthogonal rotations of the problem. After an appropriate truncation, this results in a smaller, well-posed, nearby problem that satisfies the Robinson constraint qualification, and one that can be solved by standard SDP solvers. The …