Results 1–10 of 17
The NP-completeness column: an ongoing guide
 Journal of Algorithms
, 1985
"... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NPcompleteness. The presentation is modeled on that used by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NPCompleteness,’ ’ W. H. Freeman & Co ..."
Abstract

Cited by 188 (0 self)
 Add to MetaCart
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized, should
Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time
, 2003
"... We introduce the smoothed analysis of algorithms, which continuously interpolates between the worstcase and averagecase analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We me ..."
Abstract

Cited by 146 (14 self)
 Add to MetaCart
We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of
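The max-over-inputs, expectation-over-perturbations structure described in this abstract can be written as a formula; the following is an illustrative sketch only (the symbols $T$, $G$, and the unit-norm normalization are our notational assumptions, not taken from the abstract):

```latex
% Smoothed complexity of an algorithm with running time T on inputs of
% size n, under Gaussian perturbations of relative magnitude sigma.
% G is a matrix of i.i.d. N(0,1) entries; \bar{A} ranges over unit-norm
% inputs so that sigma measures the relative size of the perturbation.
C_{\sigma}(n) \;=\; \max_{\|\bar{A}\| \le 1}\;
  \mathbb{E}_{G}\!\left[\, T\!\left(\bar{A} + \sigma G\right) \right]
```

Worst-case analysis is recovered as σ → 0 and average-case analysis as the perturbation dominates, which is the sense in which smoothed analysis "continuously interpolates" between the two.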
Continuation and Path Following
, 1992
"... CONTENTS 1 Introduction 1 2 The Basics of PredictorCorrector Path Following 3 3 Aspects of Implementations 7 4 Applications 15 5 PiecewiseLinear Methods 34 6 Complexity 41 7 Available Software 44 References 48 1. Introduction Continuation, embedding or homotopy methods have long served as useful ..."
Abstract

Cited by 70 (6 self)
 Add to MetaCart
Contents: 1. Introduction; 2. The Basics of Predictor-Corrector Path Following; 3. Aspects of Implementations; 4. Applications; 5. Piecewise-Linear Methods; 6. Complexity; 7. Available Software; References. 1. Introduction. Continuation, embedding, or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated works as those of Poincaré (1881–1886), Klein (1882–1883), and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy invariance of degree. The use of deformations to solve nonlinear systems of equations may be traced back at least to Lahaye (1934). (Partially supported by the National Science Foundation via grant DMS-9104058. Preprint, Colorado State University.) The classical embedding methods were the
The Many Facets of Linear Programming
, 2000
"... . We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interiorpoint, and other methods. Key words. linear programming  history  simplex method  ellipsoid method  interiorpoint methods 1. Introduction A ..."
Abstract

Cited by 25 (1 self)
 Add to MetaCart
We examine the history of linear programming from computational, geometric, and complexity points of view, looking at simplex, ellipsoid, interior-point, and other methods. Key words: linear programming; history; simplex method; ellipsoid method; interior-point methods. 1. Introduction. At the last Mathematical Programming Symposium in Lausanne, we celebrated the 50th anniversary of the simplex method. Here, we are at or close to several other anniversaries relating to linear programming: the sixtieth of Kantorovich's 1939 paper on "Mathematical Methods in the Organization and Planning of Production" (and the fortieth of its appearance in the Western literature) [55]; the fiftieth of the historic 0th Mathematical Programming Symposium that took place in Chicago in 1949 on Activity Analysis of Production and Allocation [64]; the forty-fifth of Frisch's suggestion of the logarithmic barrier function for linear programming [37]; the twenty-fifth of the awarding of the 1975 Nobe...
Smoothed Analysis of Termination of Linear Programming Algorithms
"... We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and Teng ..."
Abstract

Cited by 23 (4 self)
 Add to MetaCart
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and Teng
Beyond Hirsch conjecture: Walks on random polytopes and smoothed complexity of the simplex method
 In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science
, 2006
"... Abstract. The smoothed analysis of algorithms is concerned with the expected running time of an algorithm under slight random perturbations of arbitrary inputs. Spielman and Teng proved that the shadowvertex simplex method has polynomial smoothed complexity. On a slight random perturbation of an ar ..."
Abstract

Cited by 19 (4 self)
 Add to MetaCart
The smoothed analysis of algorithms is concerned with the expected running time of an algorithm under slight random perturbations of arbitrary inputs. Spielman and Teng proved that the shadow-vertex simplex method has polynomial smoothed complexity. On a slight random perturbation of an arbitrary linear program, the simplex method finds the solution after a walk on polytope(s) with expected length polynomial in the number of constraints n, the number of variables d, and the inverse standard deviation of the perturbation 1/σ. We show that the length of the walk in the simplex method is actually polylogarithmic in the number of constraints n. Spielman–Teng's bound on the walk was O*(n^86 d^55 σ^-30), up to logarithmic factors. We improve this to O(log^7 n (d^9 + d^3 σ^-4)). This shows that the tight Hirsch conjecture bound n − d on the length of walks on polytopes is not a limitation for smoothed linear programming. Random perturbations create short paths between vertices. We propose a randomized phase-I for solving arbitrary linear programs, which is of independent interest. Instead of finding a vertex of a feasible set, we add a vertex at
A Survey on Pivot Rules for Linear Programming
 Annals of Operations Research (submitted)
, 1991
"... The purpose of this paper is to survey the various pivot rules of the simplex method or its variants that have been developed in the last two decades, starting from the appearance of the minimal index rule of Bland. We are mainly concerned with the finiteness property of simplex type pivot rules. Th ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
The purpose of this paper is to survey the various pivot rules of the simplex method or its variants that have been developed in the last two decades, starting from the appearance of the minimal index rule of Bland. We are mainly concerned with the finiteness property of simplex-type pivot rules. There are some other important topics in linear programming, e.g., complexity theory or implementations, that are not included in the scope of this paper. We do not discuss ellipsoid methods or interior-point methods. Well-known classical results concerning the simplex method are also not particularly discussed in this survey, but the connection between the new methods and the classical ones is discussed if there is any. In this paper we discuss three classes of recently developed pivot rules for linear programming. The first class (the largest one) of the pivot rules we discuss is the class of essentially combinatorial pivot rules. Namely, these rules only use labeling and signs of the variab...
On Tail Decay and Moment Estimates of a Condition Number for Random Linear Conic Systems
, 2003
"... In this paper we study the distribution tails and the moments of C (A) and log C (A), where C (A) is a condition number for the linear conic system Ax 0, x 6= 0, with A 2 IR . We consider the case where A is a Gaussian random matrix. For this input model we characterise the exact decay rates of ..."
Abstract

Cited by 8 (5 self)
 Add to MetaCart
In this paper we study the distribution tails and the moments of C(A) and log C(A), where C(A) is a condition number for the linear conic system Ax ≤ 0, x ≠ 0, with A ∈ ℝ^{m×n}. We consider the case where A is a Gaussian random matrix. For this input model we characterise the exact decay rates of the distribution tails, we improve the existing moment estimates, and we prove various limit theorems for the cases where either n, or m and n, tend to infinity. Our results are of complexity-theoretic interest, because interior-point methods and relaxation methods for the solution of Ax ≤ 0, x ≠ 0 have running times that are bounded in terms of log C(A) and C(A), respectively. AMS Classification: primary 90C31, 15A52; secondary 90C05, 90C60, 62H10. Key Words: condition number, random matrices, linear programming, probabilistic analysis, complexity theory.
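For orientation, one widely used condition number of this kind is Renegar's, the same quantity referred to in the termination-analysis entry above. The following sketch states its usual definition; the precise C(A) studied in this paper may differ in normalization:

```latex
% Renegar-style condition number: the norm of the data divided by its
% distance to the set of ill-posed instances, i.e. those matrices for
% which arbitrarily small perturbations change the feasibility status
% of the conic system.
\mathcal{C}(A) \;=\; \frac{\|A\|}{\rho(A)}, \qquad
\rho(A) \;=\; \inf\bigl\{ \|\Delta A\| \;:\; A + \Delta A
  \text{ is ill-posed} \bigr\}
```

Running times bounded in terms of log C(A), as the abstract notes for interior-point methods, thus degrade only logarithmically as an instance approaches ill-posedness.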
A Monotonic BuildUp Simplex Algorithm for Linear Programming
, 1991
"... We devise a new simplex pivot rule which has interesting theoretical properties. Beginning with a basic feasible solution, and any nonbasic variable having a negative reduced cost, the pivot rule produces a sequence of pivots such that ultimately the originally chosen nonbasic variable enters the ba ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
We devise a new simplex pivot rule which has interesting theoretical properties. Beginning with a basic feasible solution, and any nonbasic variable having a negative reduced cost, the pivot rule produces a sequence of pivots such that ultimately the originally chosen nonbasic variable enters the basis, and all reduced costs which were originally nonnegative remain nonnegative. The pivot rule thus monotonically builds up to a dual feasible, and hence optimal, basis. A surprising property of the pivot rule is that the pivot sequence results in intermediate bases which are neither primal nor dual feasible. We prove correctness of the procedure, give a geometric interpretation, and relate it to other pivoting rules for linear programming.
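The reduced-cost and basis-change vocabulary in this abstract can be made concrete with a standard tableau pivot. The sketch below implements a plain textbook pivot step (Dantzig entering rule plus ratio test), not the monotonic build-up rule itself; all names in it are our own illustrative choices:

```python
import numpy as np

def pivot_once(T):
    """Perform one simplex pivot on tableau T.

    T holds the constraint rows first, the objective row (reduced
    costs) last, and the right-hand side in the final column.
    Returns the pivoted tableau, or None if every reduced cost is
    already nonnegative (the basis is dual feasible, hence optimal).
    """
    costs = T[-1, :-1]
    if np.all(costs >= 0):                 # optimality test
        return None
    j = int(np.argmin(costs))              # entering column (Dantzig rule)
    col, rhs = T[:-1, j], T[:-1, -1]
    with np.errstate(divide="ignore", invalid="ignore"):
        ratios = np.where(col > 1e-12, rhs / col, np.inf)
    i = int(np.argmin(ratios))             # leaving row by ratio test
    if not np.isfinite(ratios[i]):
        raise ValueError("LP is unbounded")
    T = T.astype(float).copy()
    T[i] /= T[i, j]                        # scale pivot row to 1
    for r in range(T.shape[0]):            # eliminate column j elsewhere
        if r != i:
            T[r] -= T[r, j] * T[i]
    return T

# Illustrative LP: maximize x1 + x2 subject to
#   x1 + 2*x2 <= 4,  3*x1 + x2 <= 6,  x1, x2 >= 0  (slacks added).
T = np.array([[1.0, 2, 1, 0, 4],
              [3.0, 1, 0, 1, 6],
              [-1.0, -1, 0, 0, 0]])
T = pivot_once(T)
T = pivot_once(T)
```

Iterating `pivot_once` until it returns `None` drives all reduced costs nonnegative, i.e. it reaches the dual-feasible (optimal) basis that the build-up rule also targets, though by a different pivot sequence and, unlike the rule above, through bases that stay primal feasible throughout.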