Results 1–10 of 42
Interior-point Methods
, 2000
Abstract
Cited by 463 (16 self)
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time
, 2003
Abstract
Cited by 146 (14 self)
We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of
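The measure defined in this abstract can be sketched concretely. The following is a minimal Monte Carlo illustration of the smoothed-analysis recipe itself, not of the simplex bound: `toy_cost` is a hypothetical stand-in for an algorithm's step count, and we take the maximum over a few inputs of the average cost under Gaussian perturbations of standard deviation sigma.

```python
import random

def smoothed_cost(cost, inputs, sigma, trials=500):
    """Max over inputs of the expected cost under Gaussian
    perturbations of standard deviation sigma (Monte Carlo estimate)."""
    worst = 0.0
    for x in inputs:
        total = 0.0
        for _ in range(trials):
            perturbed = [xi + random.gauss(0.0, sigma) for xi in x]
            total += cost(perturbed)
        worst = max(worst, total / trials)
    return worst

# Hypothetical stand-in for a pivot count: number of sign changes in the vector.
def toy_cost(x):
    return sum(1 for a, b in zip(x, x[1:]) if (a < 0) != (b < 0))

print(smoothed_cost(toy_cost, [[1.0, -1.0, 1.0, -1.0]], 0.05))
```

As sigma grows, the perturbation washes out worst-case structure in the input; smoothed analysis reports how the resulting expected cost scales with both the input size and sigma.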
AVERAGE-CASE STABILITY OF GAUSSIAN ELIMINATION
, 1990
Abstract
Cited by 40 (2 self)
Gaussian elimination with partial pivoting is unstable in the worst case: the "growth factor" can be as large as 2^(n-1), where n is the matrix dimension, resulting in a loss of n bits of precision. It is proposed that an average-case analysis can help explain why it is nevertheless stable in practice. The results presented begin with the observation that for many distributions of matrices, the matrix elements after the first few steps of elimination are approximately normally distributed. From here, with the aid of estimates from extreme value statistics, reasonably accurate predictions of the average magnitudes of elements, pivots, multipliers, and growth factors are derived. For various distributions of matrices with dimensions n ≤ 1024, the average growth factor (normalized by the standard deviation of the initial matrix elements) is within a few percent of n^(2/3) for partial pivoting and approximately n^(1/2) for complete pivoting. The average maximum element of the residual with both kinds of pivoting appears to be of magnitude O(n), as compared with O(n^(1/2)) for QR factorization. The experiments and analysis presented show that small multipliers alone are not enough to explain the average-case stability of Gaussian elimination; it is also important that the correction introduced in the remaining matrix at each elimination step is of rank 1. Because of this low-rank property, the signs of the elements and multipliers in Gaussian elimination are not independent, but are interrelated in such a way as to retard growth. By contrast, alternative pivoting strategies involving high-rank corrections are sometimes unstable even though the multipliers are small.
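The growth factor discussed in this abstract is easy to measure experimentally. The sketch below (plain Python; function names are ours, and we assume standard-normal matrix entries) runs Gaussian elimination with partial pivoting and reports the ratio of the largest intermediate element to the largest initial element. In the worst case this ratio can reach 2^(n-1), but for random matrices it stays small, consistent with the average-case picture above.

```python
import random

def growth_factor(A):
    """Largest |entry| seen during elimination with partial pivoting,
    divided by the largest |entry| of the initial matrix."""
    n = len(A)
    A = [row[:] for row in A]                      # work on a copy
    initial = max(abs(a) for row in A for a in row)
    peak = initial
    for k in range(n - 1):
        # Partial pivoting: bring the largest column entry to the diagonal.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]                  # multiplier, |m| <= 1
            for j in range(k, n):
                A[i][j] -= m * A[k][j]             # the rank-1 correction
        peak = max(peak, max(abs(a) for row in A for a in row))
    return peak / initial

n = 64
A = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
print(growth_factor(A))   # modest in practice, far below the 2^(n-1) bound
```

Each outer step applies exactly the rank-1 correction the abstract emphasizes, so this experiment also makes it easy to compare against alternative pivoting strategies.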
The Many Facets of Linear Programming
, 2000
Abstract
Cited by 25 (1 self)
We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interior-point, and other methods. Key words: linear programming, history, simplex method, ellipsoid method, interior-point methods. 1. Introduction. At the last Mathematical Programming Symposium in Lausanne, we celebrated the 50th anniversary of the simplex method. Here, we are at or close to several other anniversaries relating to linear programming: the sixtieth of Kantorovich's 1939 paper on "Mathematical Methods in the Organization and Planning of Production" (and the fortieth of its appearance in the Western literature) [55]; the fiftieth of the historic 0th Mathematical Programming Symposium that took place in Chicago in 1949 on Activity Analysis of Production and Allocation [64]; the forty-fifth of Frisch's suggestion of the logarithmic barrier function for linear programming [37]; the twenty-fifth of the awarding of the 1975 Nobe...
Linear Programming: Randomization and Abstract Frameworks
 In Proc. 13th Annual Symposium on Theoretical Aspects of Computer Science (STACS)
, 1996
Abstract
Cited by 24 (9 self)
Recent years have brought some progress in the knowledge of the complexity of linear programming in the unit cost model, and the best result known at this point is a randomized 'combinatorial' algorithm which solves a linear program over d variables and n constraints with expected O(d^2 n + e^(O(√(d log d)))) arithmetic operations. The bound relies on two algorithms by Clarkson, and the subexponential algorithms due to Kalai, and to Matoušek, Sharir & Welzl. Abstract frameworks like LP-type problems and abstract optimization problems (due to Gärtner) allow the application of these algorithms to a number of nonlinear optimization problems (like polytope distance and smallest enclosing ball of points).
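The smallest-enclosing-ball problem mentioned above is the textbook example of an LP-type problem, and Welzl's randomized recursion solves it in expected linear time. A minimal 2-D sketch follows, assuming points in general position (no three boundary points collinear); all helper names are ours.

```python
import math
import random

def _circumcircle(a, b, c):
    """Circle through three non-collinear points, as (cx, cy, r)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def _trivial(R):
    """Smallest circle determined by at most three boundary points."""
    if not R:
        return 0.0, 0.0, -1.0                      # empty circle
    if len(R) == 1:
        return R[0][0], R[0][1], 0.0
    if len(R) == 2:
        (ax, ay), (bx, by) = R
        return (ax + bx) / 2, (ay + by) / 2, math.hypot(ax - bx, ay - by) / 2
    return _circumcircle(*R)

def _inside(circle, p, eps=1e-9):
    cx, cy, r = circle
    return math.hypot(p[0] - cx, p[1] - cy) <= r + eps

def welzl(points, boundary=()):
    """Welzl's recursion: discard a point, solve, and pin the point to
    the boundary only if the smaller solution fails to cover it."""
    if not points or len(boundary) == 3:
        return _trivial(list(boundary))
    p, rest = points[0], points[1:]
    circle = welzl(rest, boundary)
    if _inside(circle, p):
        return circle
    return welzl(rest, boundary + (p,))

def smallest_enclosing_circle(points):
    pts = list(points)
    random.shuffle(pts)        # randomization gives expected linear time
    return welzl(pts)
```

The LP-type structure is visible in the recursion: a "basis" of at most three boundary points determines the solution, just as at most d+1 constraints determine the optimum of a linear program in dimension d.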
Smoothed Analysis of Termination of Linear Programming Algorithms
Abstract
Cited by 23 (4 self)
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and Teng
Linear Programming, the Simplex Algorithm and Simple Polytopes
 Math. Programming
, 1997
Abstract
Cited by 22 (1 self)
In the first part of the paper we survey some far-reaching applications of the basic facts of linear programming to the combinatorial theory of simple polytopes. In the second part we discuss some recent developments concerning the simplex algorithm. We describe subexponential randomized pivot rules and upper bounds on the diameter of graphs of polytopes. 1. Introduction. A convex polyhedron is the intersection P of a finite number of closed halfspaces in R^d. P is a d-dimensional polyhedron (briefly, a d-polyhedron) if the points in P affinely span R^d. A convex d-dimensional polytope (briefly, a d-polytope) is a bounded convex d-polyhedron. Alternatively, a convex d-polytope is the convex hull of a finite set of points which affinely span R^d. A (nontrivial) face F of a d-polyhedron P is the intersection of P with a supporting hyperplane. F itself is a polyhedron of some lower dimension. If the dimension of F is k we call F a k-face of P. The empty set and P itself are...
A randomized polynomial-time simplex algorithm for linear programming
 In STOC
, 2006
Abstract
Cited by 20 (4 self)
We present the first randomized polynomial-time simplex algorithm for linear programming. Like the other known polynomial-time algorithms for linear programming, its running time depends polynomially on the number of bits used to represent its input. We begin by reducing the input linear program to a special form in which we merely need to certify boundedness. As boundedness does not depend upon the right-hand-side vector, we run the shadow-vertex simplex method with a random right-hand-side vector. Thus, we do not need to bound the diameter of the original polytope. Our analysis rests on a geometric statement of independent interest: given a polytope Ax ≤ b in isotropic position, if one makes a polynomially small perturbation to b then the number of edges of the projection of the perturbed polytope onto a random 2-dimensional subspace is expected to be polynomial.
Randomized Simplex Algorithms on Klee-Minty Cubes
 COMBINATORICA
, 1994
Abstract
Cited by 19 (6 self)
We investigate the behavior of randomized simplex algorithms on special linear programs. For this, we use combinatorial models for the Klee-Minty cubes [22] and similar linear programs with exponentially long decreasing paths. The analysis of the two most natural randomized pivot rules on the Klee-Minty cubes leads to (nearly) quadratic lower bounds for the complexity of linear programming with random pivots. Thus we disprove two bounds (for the expected running time of the random-edge simplex algorithm on Klee-Minty cubes) conjectured in the literature. At the same time, we establish quadratic upper bounds for the expected length of a path for a simplex algorithm with random pivots on the classes of linear programs under investigation. In contrast to this, we find that the average length of an increasing path in a Klee-Minty cube is exponential when all paths are taken with equal probability.
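For readers who want to experiment with these instances, the sketch below writes down one common formulation of the n-dimensional Klee-Minty cube: 0 <= x_1 <= 1 and eps*x_{j-1} <= x_j <= 1 - eps*x_{j-1} for j >= 2, with objective "maximize x_n". The exact coefficients vary across the literature, so treat this as an illustrative variant rather than the paper's combinatorial model.

```python
def klee_minty_lp(n, eps=0.1):
    """One common form of the n-dimensional Klee-Minty cube, returned
    as (A, b, c) with constraints A x <= b and objective: maximize c.x.
    The feasible region is a deformed n-cube with 2^n vertices, on which
    bad pivot sequences can visit every vertex."""
    A, b = [], []
    for j in range(n):
        lower = [0.0] * n          # -x_j + eps*x_{j-1} <= 0
        upper = [0.0] * n          #  x_j + eps*x_{j-1} <= 1
        lower[j], upper[j] = -1.0, 1.0
        if j > 0:
            lower[j - 1] = eps
            upper[j - 1] = eps
        A.append(lower); b.append(0.0)
        A.append(upper); b.append(1.0)
    c = [0.0] * (n - 1) + [1.0]    # maximize the last coordinate
    return A, b, c

A, b, c = klee_minty_lp(3)
# 2n = 6 constraints in 3 variables; the cube has 2^3 = 8 vertices.
```

Feeding such instances to a simplex code with different pivot rules is the standard way to reproduce the exponential worst-case paths, and to compare them against the quadratic random-pivot bounds discussed above.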
Criss-Cross Methods: A Fresh View on Pivot Algorithms
 Mathematical Programming
, 1997
"... this paper is to present mathematical ideas and ..."