Results 1–10 of 69
Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time
, 2003
Cited by 146 (14 self)
We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of ...
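The smoothed measure described in this abstract (the maximum over inputs of the expected cost under small Gaussian perturbations) can be estimated empirically. Below is a minimal sketch in Python, using the inversion count of a sequence as a toy stand-in for algorithmic cost; the function names and the choice of cost measure are illustrative assumptions, not taken from the paper:

```python
import itertools
import random
import statistics

def inversions(a):
    """Toy cost proxy: number of inverted pairs (what insertion sort pays for)."""
    return sum(1 for i, j in itertools.combinations(range(len(a)), 2)
               if a[i] > a[j])

def smoothed_cost(instance, sigma, trials=200, rng=None):
    """Monte Carlo estimate of E[cost(instance + Gaussian noise of std sigma)]."""
    rng = rng or random.Random(0)
    return statistics.mean(
        inversions([x + rng.gauss(0.0, sigma) for x in instance])
        for _ in range(trials)
    )

n = 10
worst = list(range(n, 0, -1))            # adversarial (descending) instance
print(inversions(worst))                 # worst-case cost: n*(n-1)/2 = 45
print(smoothed_cost(worst, sigma=2.0))   # perturbation washes out some of the order
```

Taking the maximum of `smoothed_cost` over a family of instances would give the empirical analogue of the smoothed measure; as the abstract notes, the interesting quantity is how this grows with the input size and with `sigma`.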
Some Characterizations And Properties Of The "Distance To Ill-Posedness" And The Condition Measure Of A Conic Linear System
, 1998
Cited by 45 (21 self)
A conic linear system is a system of the form P(d): find x that solves b − Ax ∈ C_Y, x ∈ C_X, where C_X and C_Y are closed convex cones, and the data for the system is d = (A, b). This system is "well-posed" to the extent that (small) changes in the data (A, b) do not alter the status of the system (the system remains solvable or not). Renegar defined the "distance to ill-posedness," ρ(d), to be the smallest change in the data Δd = (ΔA, Δb) for which the system P(d + Δd) is "ill-posed," i.e., d + Δd is in the intersection of the closure of feasible and infeasible instances d′ = (A′, b′) of P(·). Renegar also defined the "condition measure" of the data instance d as C(d) := ‖d‖/ρ(d), and showed that this measure is a natural extension of the familiar condition measure associated with systems of linear equations. This study presents two categories of results related to ρ(d), the distance to ill-posedness, and C(d), the condition measure ...
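For a square system of linear equations the quantities in this abstract become concrete: by the Eckart–Young theorem, the distance from a nonsingular A to the nearest singular matrix (in spectral norm) is its smallest singular value, and the ratio ‖A‖/ρ(A) is exactly the classical spectral condition number. A small NumPy check of that identity, with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending
rho = sigma[-1]        # distance to the nearest singular matrix (Eckart-Young)
C = sigma[0] / rho     # condition measure ||A|| / rho(A) in the spectral norm

# Coincides with the classical spectral condition number kappa_2(A).
print(np.isclose(C, np.linalg.cond(A, 2)))
```

Renegar's C(d) generalizes this picture from "distance to singularity" to "distance to ill-posedness" of the conic system's data d = (A, b).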
Condition-Based Complexity Of Convex Optimization In Conic Linear Form Via The Ellipsoid Algorithm
, 1998
Cited by 40 (17 self)
A convex optimization problem in conic linear form is an optimization problem of the form CP(d): maximize cᵀ ...
Warm-Start Strategies In Interior-Point Methods For Linear Programming
 SIAM Journal on Optimization
, 2000
Cited by 37 (1 self)
We study the situation in which, having solved a linear program with an interior-point method, we are presented with a new problem instance whose data is slightly perturbed from the original. We describe strategies for recovering a "warm-start" point for the perturbed problem instance from the iterates of the original problem instance. We obtain worst-case estimates of the number of iterations required to converge to a solution of the perturbed instance from the warm-start points, showing that these estimates depend on the size of the perturbation and on the conditioning and other properties of the problem instances. 1. Introduction. This paper describes and analyzes warm-start strategies for interior-point methods applied to linear programming (LP) problems. We consider the situation in which one linear program, the "original instance," has been solved by an interior-point method, and we are then presented with a new problem of the same dimensions, the "perturbed instance," in which ...
Condition Measures and Properties of the Central Trajectory of a Linear Program
 Mathematical Programming
, 1997
Cited by 34 (15 self)
Given a data instance d = (A, b, c) of a linear program, we show that certain properties of solutions along the central trajectory of the linear program are inherently related to the condition number C(d) of the data instance d = (A, b, c), where C(d) is a scale-invariant reciprocal of a closely related measure ρ(d) called the "distance to ill-posedness." (The distance to ill-posedness essentially measures how close the data instance d = (A, b, c) is to being primal or dual infeasible.) We present lower and upper bounds on sizes of optimal solutions along the central trajectory, and on rates of change of solutions along the central trajectory, as either the barrier parameter μ or the data d = (A, b, c) of the linear program is changed. These bounds are all linear or polynomial functions of certain natural parameters associated with the linear program, namely the condition number C(d), the distance to ill-posedness ρ(d), the norm of the data ‖d‖, and the dimensions m and n. ...
Infeasible-Start Primal-Dual Methods And Infeasibility Detectors For Nonlinear Programming Problems
 Mathematical Programming
, 1996
Cited by 31 (5 self)
In this paper we present several "infeasible-start" path-following and potential-reduction primal-dual interior-point methods for nonlinear conic problems. These methods try to find a recession direction of the feasible set of a self-dual homogeneous primal-dual problem. The methods under consideration generate an ε-solution for an ε-perturbation of an initial strictly (primal and dual) feasible problem in O(√ν ln(1/(ε ρ_f))) iterations, where ν is the parameter of a self-concordant barrier for the cone, ε is a relative accuracy, and ρ_f is a feasibility measure. We also discuss the behavior of path-following methods as applied to infeasible problems. We prove that strict infeasibility (primal or dual) can be detected in O(√ν ln(1/ρ_Δ)) iterations, where ρ_Δ is a primal or dual infeasibility measure. 1 Introduction. Nesterov and Nemirovskii [9] first developed and investigated extensions of several classes of interior-point algorithms for linear programming t...
The Radius of Metric Regularity
, 2007
Cited by 29 (6 self)
Metric regularity is a central concept in variational analysis for the study of solution mappings associated with “generalized equations,” including variational inequalities and parameterized constraint systems. Here it is employed to characterize the distance to irregularity or infeasibility with respect to perturbations of the system structure. Generalizations of the Eckart–Young theorem in numerical analysis are obtained in particular.
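The Eckart–Young theorem mentioned in this abstract is the prototype for such "radius" results: a nonsingular matrix A lies at spectral-norm distance σ_min(A) from the set of singular matrices, and the SVD exhibits the minimal perturbation explicitly. A quick NumPy illustration, with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[4.0, 0.0],
              [3.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
# Eckart-Young: subtracting the smallest rank-one SVD component reaches the
# nearest singular matrix, at spectral-norm distance sigma_min = s[-1].
delta = -s[-1] * np.outer(U[:, -1], Vt[-1, :])
A_sing = A + delta

print(np.linalg.matrix_rank(A_sing))                # 1: perturbed matrix is singular
print(np.isclose(np.linalg.norm(delta, 2), s[-1]))  # perturbation size = sigma_min
```

The paper's results generalize this "distance to the nearest degenerate instance" from matrices to the solution mappings of generalized equations.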
Understanding the Geometry of Infeasible Perturbations of a Conic Linear System
, 1998
Cited by 27 (4 self)
We discuss some properties of the distance to infeasibility of a conic linear system Ax = b, x ∈ C, where C is a closed convex cone. Some interesting connections between the distance to infeasibility and the solution of certain optimization problems are established. Such connections provide insight into the estimation of the distance to infeasibility and the explicit computation of infeasible perturbations of a given system. We also investigate the properties of the distance to infeasibility assuming that the perturbations are restricted to have a particular structure. Finally, we extend most of our results to more general conic systems Ax − b ∈ C_Y, x ∈ C_X, where C_X and C_Y are closed, convex cones. 1 Introduction. The distance to infeasibility of a conic linear system, as introduced by Renegar, plays an interesting role in the study of interior-point methods (cf. [2, 5, 6]). Given finite-dimensional Hilbert spaces X, Y, a linear operator A : X → Y, a vector b ∈ Y, and a c...
Smoothed Analysis of Termination of Linear Programming Algorithms
Cited by 23 (4 self)
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and Teng ...
Typical Properties of Winners and Losers in Discrete Optimization
, 2004
Cited by 20 (3 self)
We present a probabilistic analysis for a large class of combinatorial optimization problems containing, e.g., all binary optimization problems defined by linear constraints and a linear objective function over {0,1}^n. By parameterizing which constraints are of stochastic and which are of adversarial nature, we obtain a semi-random input model that enables us to do a general average-case analysis for a large class of optimization problems while at the same time taking care of the combinatorial structure of individual problems. Our analysis covers various probability distributions for the choice of the stochastic numbers and includes smoothed analysis with Gaussian and other kinds of perturbation models as a special case. In fact, we can exactly characterize the smoothed complexity of optimization problems in terms of their random worst-case complexity. A binary optimization problem has a polynomial smoothed ...