Results 1–10 of 15
A randomized polynomial-time simplex algorithm for linear programming
 In STOC, 2006
"... We present the first randomized polynomialtime simplex algorithm for linear programming. Like the other known polynomialtime algorithms for linear programming, its running time depends polynomially on the number of bits used to represent its input. We begin by reducing the input linear program to ..."
Abstract

Cited by 20 (4 self)
We present the first randomized polynomial-time simplex algorithm for linear programming. Like the other known polynomial-time algorithms for linear programming, its running time depends polynomially on the number of bits used to represent its input. We begin by reducing the input linear program to a special form in which we merely need to certify boundedness. As boundedness does not depend upon the right-hand-side vector, we run the shadow-vertex simplex method with a random right-hand-side vector. Thus, we do not need to bound the diameter of the original polytope. Our analysis rests on a geometric statement of independent interest: given a polytope Ax ≤ b in isotropic position, if one makes a polynomially small perturbation to b, then the number of edges of the projection of the perturbed polytope onto a random 2-dimensional subspace is expected to be polynomial.
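The "shadow" in this analysis is the projection of a polytope onto a random 2-dimensional subspace, and the quantity of interest is its number of edges. As a toy illustration only (the theorem concerns perturbed polytopes in isotropic position, not arbitrary point clouds), the following sketch projects d-dimensional points onto a random 2-plane and counts the hull edges of the shadow; the helper names are ours, not the paper's.

```python
import math
import random

def convex_hull(pts):
    """Andrew's monotone chain: hull vertices of 2D points, CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]

    return half(pts) + half(pts[::-1])

def shadow_edge_count(points, seed=0):
    """Project d-dimensional points onto a random 2-plane and count the
    edges of the resulting 2D convex hull (edges == vertices for a polygon)."""
    rng = random.Random(seed)
    d = len(points[0])
    u = [rng.gauss(0, 1) for _ in range(d)]
    v = [rng.gauss(0, 1) for _ in range(d)]
    # Gram-Schmidt: make (u, v) an orthonormal basis of the random plane.
    nu = math.sqrt(sum(x * x for x in u))
    u = [x / nu for x in u]
    duv = sum(a * b for a, b in zip(u, v))
    v = [b - duv * a for a, b in zip(u, v)]
    nv = math.sqrt(sum(x * x for x in v))
    v = [x / nv for x in v]
    shadow = [(sum(a * p for a, p in zip(u, pt)),
               sum(b * p for b, p in zip(v, pt))) for pt in points]
    return len(convex_hull(shadow))
```

The shadow-vertex method walks along exactly these shadow edges, which is why bounding their expected number bounds the number of pivots.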
Beyond Hirsch conjecture: Walks on random polytopes and smoothed complexity of the simplex method
 In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, 2006
"... Abstract. The smoothed analysis of algorithms is concerned with the expected running time of an algorithm under slight random perturbations of arbitrary inputs. Spielman and Teng proved that the shadowvertex simplex method has polynomial smoothed complexity. On a slight random perturbation of an ar ..."
Abstract

Cited by 19 (4 self)
Abstract. The smoothed analysis of algorithms is concerned with the expected running time of an algorithm under slight random perturbations of arbitrary inputs. Spielman and Teng proved that the shadow-vertex simplex method has polynomial smoothed complexity. On a slight random perturbation of an arbitrary linear program, the simplex method finds the solution after a walk on polytope(s) with expected length polynomial in the number of constraints n, the number of variables d, and the inverse standard deviation of the perturbation 1/σ. We show that the length of the walk in the simplex method is actually polylogarithmic in the number of constraints n. Spielman and Teng's bound on the walk was O*(n^86 d^55 σ^-30), up to logarithmic factors. We improve this to O(log^7 n (d^9 + d^3 σ^-4)). This shows that the tight Hirsch conjecture bound of n − d on the length of a walk on a polytope is not a limitation for smoothed linear programming: random perturbations create short paths between vertices. We propose a randomized phase-I for solving arbitrary linear programs, which is of independent interest. Instead of finding a vertex of a feasible set, we add a vertex at
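Dropping the constants and logarithmic factors hidden by the O-notation, the gap between the two walk bounds can be checked numerically; the function names below are ours, not the paper's.

```python
import math

def spielman_teng_walk_bound(n, d, inv_sigma):
    """n^86 * d^55 * (1/sigma)^30: the Spielman-Teng bound, with the
    O*() constants and logarithmic factors dropped."""
    return n**86 * d**55 * inv_sigma**30

def improved_walk_bound(n, d, inv_sigma):
    """log^7 n * (d^9 + d^3 * (1/sigma)^4): the improved bound,
    constants dropped."""
    return math.log(n) ** 7 * (d**9 + d**3 * inv_sigma**4)
```

For n = 10^6, d = 20 and 1/σ = 10, the improved expression is astronomically smaller, reflecting the polylogarithmic rather than polynomial dependence on n.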
Smoothed analysis: an attempt to explain the behavior of algorithms in practice
 Commun. ACM, 2009
"... Many algorithms and heuristics work well on real data, despite having poor complexity under the standard worstcase measure. Smoothed analysis [36] is a step towards a theory that explains the behavior of algorithms in practice. It is based on the assumption that inputs to algorithms are subject to ..."
Abstract

Cited by 12 (0 self)
Many algorithms and heuristics work well on real data, despite having poor complexity under the standard worst-case measure. Smoothed analysis [36] is a step towards a theory that explains the behavior of algorithms in practice. It is based on the assumption that inputs to algorithms are subject to random perturbation and modification in their formation. A concrete example of such a smoothed analysis is a proof that the simplex algorithm for linear programming usually runs in polynomial time when its input is subject to modeling or measurement noise.

1. MODELING REAL DATA

“My experiences also strongly confirmed my previous opinion that the best theory is inspired by practice and the best practice is inspired by theory.” [Donald E. Knuth: “Theory and Practice”, Theoretical Computer Science, 90 (1), 1–15, 1991.]

Algorithms are high-level descriptions of how computational tasks are performed. Engineers and experimentalists design and implement algorithms, and generally consider them a success if they work in practice. However, an algorithm that works well in one practical domain might perform poorly in another. Theorists also design and analyze algorithms, with the goal of providing provable guarantees about their performance. The traditional goal of theoretical computer science is to prove that an algorithm performs well ...
A Survey on Pivot Rules for Linear Programming
 Annals of Operations Research (submitted), 1991
"... The purpose of this paper is to survey the various pivot rules of the simplex method or its variants that have been developed in the last two decades, starting from the appearance of the minimal index rule of Bland. We are mainly concerned with the finiteness property of simplex type pivot rules. Th ..."
Abstract

Cited by 9 (1 self)
The purpose of this paper is to survey the various pivot rules of the simplex method or its variants that have been developed in the last two decades, starting from the appearance of the minimal index rule of Bland. We are mainly concerned with the finiteness property of simplex-type pivot rules. There are some other important topics in linear programming, e.g. complexity theory or implementations, that are not included in the scope of this paper. We do not discuss ellipsoid methods or interior point methods. Well-known classical results concerning the simplex method are also not particularly discussed in this survey, but the connection between the new methods and the classical ones is discussed if there is any. In this paper we discuss three classes of recently developed pivot rules for linear programming. The first class (the largest one) of the pivot rules we discuss is the class of essentially combinatorial pivot rules. Namely, these rules only use labeling and signs of the variab...
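Bland's minimal index rule, the starting point of this survey, is easy to state in code: among eligible entering columns pick the smallest index, and break ratio-test ties by the smallest basic-variable index, which provably prevents cycling. A minimal tableau sketch, assuming the form max c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0 (so the slack basis is feasible); `simplex_bland` is our name, not the survey's.

```python
def simplex_bland(c, A, b):
    """Maximize c.x s.t. Ax <= b, x >= 0, assuming b >= 0.
    Bland's rule: smallest-index entering column; ratio-test ties broken
    by the smallest basic-variable index. Returns (x, objective value)."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; objective row z: [-c | 0 | 0].
    T = [[A[i][j] for j in range(n)]
         + [1.0 if k == i else 0.0 for k in range(m)] + [b[i]]
         for i in range(m)]
    z = [-cj for cj in c] + [0.0] * (m + 1)
    basis = [n + i for i in range(m)]
    while True:
        # Entering variable: smallest index with a negative reduced cost.
        enter = next((j for j in range(n + m) if z[j] < -1e-9), None)
        if enter is None:
            break
        # Leaving variable: minimum ratio, Bland tie-break on basis index.
        best, leave = None, None
        for i in range(m):
            if T[i][enter] > 1e-9:
                r = T[i][-1] / T[i][enter]
                if (best is None or r < best - 1e-12
                        or (abs(r - best) <= 1e-12 and basis[i] < basis[leave])):
                    best, leave = r, i
        if leave is None:
            raise ValueError("linear program is unbounded")
        # Pivot on (leave, enter).
        piv = T[leave][enter]
        T[leave] = [v / piv for v in T[leave]]
        for i in range(m):
            if i != leave and abs(T[i][enter]) > 1e-12:
                f = T[i][enter]
                T[i] = [a - f * p for a, p in zip(T[i], T[leave])]
        f = z[enter]
        z = [a - f * p for a, p in zip(z, T[leave])]
        basis[leave] = enter
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, z[-1]
```

On the textbook instance max 3x + 5y with x ≤ 4, 2y ≤ 12, 3x + 2y ≤ 18, the routine reaches the optimum (2, 6) with value 36.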
Multiple-source shortest paths in embedded graphs
2012
"... Let G be a directed graph with n vertices and nonnegative weights in its directed edges, embedded on a surface of genus g, and let f be an arbitrary face of G. We describe an algorithm to preprocess the graph in O(gn log n) time, so that the shortestpath distance from any vertex on the boundary of ..."
Abstract

Cited by 7 (5 self)
Let G be a directed graph with n vertices and nonnegative edge weights, embedded on a surface of genus g, and let f be an arbitrary face of G. We describe an algorithm to preprocess the graph in O(gn log n) time, so that the shortest-path distance from any vertex on the boundary of f to any other vertex in G can be retrieved in O(log n) time. Our result directly generalizes the O(n log n)-time algorithm of Klein [Multiple-source shortest paths in planar graphs. In Proc. 16th Ann. ACM-SIAM Symp. Discrete Algorithms, 2005] for multiple-source shortest paths in planar graphs. Intuitively, our preprocessing algorithm maintains a shortest-path tree as its source point moves continuously around the boundary of f. As an application of our algorithm, we describe algorithms to compute a shortest non-contractible or non-separating cycle in embedded, undirected graphs in O(g² n log n) time.
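For scale, the naive alternative to this preprocessing is one Dijkstra run per boundary source, i.e. roughly O(k (m + n) log n) total work for k sources instead of a single O(gn log n) pass. A baseline sketch (the graph encoding and names are ours, not the paper's):

```python
import heapq

def dijkstra(adj, src):
    """Standard Dijkstra on an adjacency dict {u: [(v, w), ...]} with
    nonnegative weights; returns a dict of shortest-path distances."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def multi_source_table(adj, sources):
    """Naive multiple-source baseline: one full Dijkstra per source."""
    return {s: dijkstra(adj, s) for s in sources}
```

The point of the paper is that when the sources all lie on one face of an embedded graph, the k searches share so much structure that the whole table can be represented implicitly far more cheaply.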
A Monotonic Build-Up Simplex Algorithm for Linear Programming
1991
"... We devise a new simplex pivot rule which has interesting theoretical properties. Beginning with a basic feasible solution, and any nonbasic variable having a negative reduced cost, the pivot rule produces a sequence of pivots such that ultimately the originally chosen nonbasic variable enters the ba ..."
Abstract

Cited by 4 (1 self)
We devise a new simplex pivot rule which has interesting theoretical properties. Beginning with a basic feasible solution, and any nonbasic variable having a negative reduced cost, the pivot rule produces a sequence of pivots such that ultimately the originally chosen nonbasic variable enters the basis, and all reduced costs which were originally nonnegative remain nonnegative. The pivot rule thus monotonically builds up to a dual feasible, and hence optimal, basis. A surprising property of the pivot rule is that the pivot sequence results in intermediate bases which are neither primal nor dual feasible. We prove correctness of the procedure, give a geometric interpretation, and relate it to other pivoting rules for linear programming.
Some problems in asymptotic convex geometry and random matrices motivated by numerical algorithms
 Proceedings of the conference on Banach Spaces and their applications in analysis (in honor of N. Kalton’s 60th birthday)
"... Abstract. The simplex method in Linear Programming motivates several problems of asymptotic convex geometry. We discuss some conjectures and known results in two related directions – computing the size of projections of high dimensional polytopes and estimating the norms of random matrices and their ..."
Abstract

Cited by 2 (2 self)
Abstract. The simplex method in Linear Programming motivates several problems of asymptotic convex geometry. We discuss some conjectures and known results in two related directions: computing the size of projections of high-dimensional polytopes and estimating the norms of random matrices and their inverses.

1. Asymptotic convex geometry and Linear Programming

Linear Programming studies the problem of maximizing a linear functional subject to linear constraints. Given an objective vector z ∈ R^d and constraint vectors a_1, ..., a_n ∈ R^d, we consider the linear program

(LP)  maximize ⟨z, x⟩ subject to ⟨a_i, x⟩ ≤ 1, i = 1, ..., n.

This linear program has d unknowns, represented by x, and n constraints. Every linear program can be reduced to this form by a simple interpolation argument [36]. The feasible set of the linear program is the polytope P := {x ∈ R^d : ⟨a_i, x⟩ ≤ 1, i = 1, ..., n}. The solution of (LP) is then a vertex of P. We can thus look at (LP) from a geometric viewpoint: for a polytope P in R^d given by n faces, and for a vector z, find the vertex that maximizes the linear functional ⟨z, x⟩. The oldest and still the most popular method to solve this problem is the simplex method. It starts at some vertex of P and generates a walk on the edges of P toward the solution vertex. At each step, a pivot rule determines the choice of the next vertex; so there are many variants of the simplex method with different pivot rules. (We are not concerned here with how to find the initial vertex, which is a nontrivial problem in itself.)
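In R^2 this geometric viewpoint can be verified by brute force: every vertex of a bounded feasible polytope P is the intersection of two constraint lines ⟨a_i, x⟩ = 1, so enumerating all pairs and keeping the feasible intersections finds the optimal vertex. A toy sketch of that check (not the simplex method itself; `solve_lp_2d` is our name):

```python
def solve_lp_2d(a_list, z):
    """Brute-force max <z,x> s.t. <a_i,x> <= 1 in R^2: enumerate pairwise
    line intersections (candidate vertices), keep the feasible ones,
    return (best value, best point). Assumes P is bounded with a vertex
    optimum; O(n^3) and for illustration only."""
    eps = 1e-9
    best = None
    n = len(a_list)
    for i in range(n):
        for j in range(i + 1, n):
            (a, b), (c, d) = a_list[i], a_list[j]
            det = a * d - b * c
            if abs(det) < eps:
                continue  # parallel constraints, no vertex
            # Solve a*x + b*y = 1, c*x + d*y = 1 by Cramer's rule.
            x = (d - b) / det
            y = (a - c) / det
            if all(ax * x + ay * y <= 1 + eps for ax, ay in a_list):
                val = z[0] * x + z[1] * y
                if best is None or val > best[0]:
                    best = (val, (x, y))
    return best
```

The simplex method avoids this cubic enumeration by walking only along edges between adjacent vertices, which is exactly why the length of such walks is the central question.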
Characterization and computation of restless bandit marginal productivity indices
 In SMCtools ’07: Proc. 2007 Workshop on Tools for Solving Structured Markov Chains
"... Appl. Probab. 25A, 287298] yields a practical scheduling rule for the versatile yet intractable multiarmed restless bandit problem, involving the optimal dynamic priority allocation to multiple stochastic projects, modeled as restless bandits, i.e., binaryaction (active/passive) (semi) Markov de ..."
Abstract

Cited by 2 (1 self)
Appl. Probab. 25A, 287–298] yields a practical scheduling rule for the versatile yet intractable multi-armed restless bandit problem, involving the optimal dynamic priority allocation to multiple stochastic projects, modeled as restless bandits, i.e., binary-action (active/passive) (semi-)Markov decision processes. A growing body of evidence shows that such a rule is nearly optimal in a wide variety of applications, which raises the need to efficiently compute the Whittle index and more general marginal productivity index (MPI) extensions in large-scale models. For such a purpose, this paper extends to restless bandits the parametric linear programming (LP) approach deployed in [J. Niño-Mora. A (2/3)n³ fast-pivoting algorithm for the Gittins index and optimal stopping of a Markov chain, INFORMS J. Comp., in press], which yielded a fast Gittins-index algorithm. Yet the extension is not straightforward, as the MPI is only defined for the limited range of so-called indexable bandits, which motivates the quest for methods to establish indexability. This paper furnishes algorithmic and analytical tools to realize the potential of MPI policies in large-scale applications, presenting the following contributions: (i) a complete algorithmic
S (2010) Robust and stochastically weighted multi-objective optimization models and reformulations
"... We introduce and study a family of models for multiexpert multiobjective/criteria decision making. These models use a concept of weight robustness to generate a risk averse decision. In particular, the multiexpert multicriteria robust weighted sum approach (McRow) introduced in this paper identi ..."
Abstract

Cited by 2 (0 self)
We introduce and study a family of models for multi-expert multi-objective/criteria decision making. These models use a concept of weight robustness to generate a risk-averse decision. In particular, the multi-expert multi-criteria robust weighted sum approach (McRow) introduced in this paper identifies a (robust) Pareto optimum decision that minimizes the worst-case weighted sum of objectives over a given weight region. The corresponding objective value, called the robust value of a decision, is shown to be increasing and concave in the weight set. Compact reformulations of the models are given for polyhedral and conic descriptions of the weight regions. The McRow model is developed further for stochastic multi-expert multi-criteria decision making by allowing ambiguity or randomness in the weight region as well as in the objective functions. The properties of the proposed approach are illustrated with a few examples. The usefulness of the stochastic McRow model is demonstrated using a disaster planning example and an agriculture revenue management example.
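With a finite weight set standing in for the paper's polyhedral weight region (a simplifying assumption of ours, obtained e.g. by sampling the region's extreme points), the robust weighted-sum choice reduces to a plain min-max over decisions and weights:

```python
def robust_value(objs, weights):
    """Worst-case weighted sum of one decision's objective (cost) vector
    over a finite weight set: max_w <w, f(d)>."""
    return max(sum(w * f for w, f in zip(wvec, objs)) for wvec in weights)

def mcrow(decisions, weights):
    """Toy finite McRow: pick the decision minimizing the worst-case
    weighted sum (a robust choice in this discretized setting).
    `decisions` maps a decision label to its objective vector."""
    return min(decisions, key=lambda d: robust_value(decisions[d], weights))
```

For example, with costs A = (1, 3) and B = (2, 2) and weights drawn from the unit simplex, A is better under the first criterion alone but B has the smaller worst case, so the robust choice is B; this is exactly the risk-averse behavior the abstract describes.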