Results 1–10 of 24
A simplex-based algorithm to solve separated continuous linear programs
Mathematical Programming, 2008
Cited by 30 (5 self)
Abstract: We consider the separated continuous linear programming problem with linear data. We characterize the form of its optimal solution and present an algorithm which solves it in a finite number of steps, using simplex pivot iterations.
A randomized polynomial-time simplex algorithm for linear programming
In STOC, 2006
Cited by 28 (5 self)
Abstract: We present the first randomized polynomial-time simplex algorithm for linear programming. Like the other known polynomial-time algorithms for linear programming, its running time depends polynomially on the number of bits used to represent its input. We begin by reducing the input linear program to a special form in which we merely need to certify boundedness. As boundedness does not depend upon the right-hand-side vector, we run the shadow-vertex simplex method with a random right-hand-side vector. Thus, we do not need to bound the diameter of the original polytope. Our analysis rests on a geometric statement of independent interest: given a polytope Ax ≤ b in isotropic position, if one makes a polynomially small perturbation to b then the number of edges of the projection of the perturbed polytope onto a random 2-dimensional subspace is expected to be polynomial.
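The geometric statement at the end of this abstract can be illustrated numerically: project the vertices of a polytope onto a random plane and count the edges of the shadow. The sketch below is only an illustration under my own assumptions (a 6-dimensional hypercube and Gaussian projection directions, not the paper's isotropic-position setup), using a standard monotone-chain convex hull:

```python
import random

def convex_hull(pts):
    """Andrew's monotone chain: 2-D convex hull, collinear points dropped."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for seq in (pts, pts[::-1]):          # lower hull, then upper hull
        part = []
        for p in seq:
            while len(part) >= 2 and cross(part[-2], part[-1], p) <= 0:
                part.pop()
            part.append(p)
        hull += part[:-1]
    return hull

random.seed(0)
d = 6
# Vertices of the d-cube [0,1]^d.
verts = [tuple(float(bit) for bit in format(i, "06b")) for i in range(2 ** d)]
# A random 2-D subspace, spanned by two Gaussian direction vectors.
U = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(2)]
# Project every vertex onto the plane and count the edges of the shadow.
shadow = [tuple(sum(U[r][k] * v[k] for k in range(d)) for r in range(2)) for v in verts]
edges = len(convex_hull(shadow))
print(edges)
```

The shadow of a d-cube is a zonogon with at most 2d vertices, so the printed edge count never exceeds 12 here; the paper's theorem is the far stronger claim that for a perturbed polytope in isotropic position the expected shadow size is polynomial.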
The Central Curve in Linear Programming, 2010
Cited by 6 (2 self)
Abstract: The central curve of a linear program is an algebraic curve specified by linear and quadratic constraints arising from complementary slackness. It is the union of the various central paths for minimizing or maximizing the cost function over any region in the associated hyperplane arrangement. We determine the degree, arithmetic genus and defining prime ideal of the central curve, thereby answering a question of Bayer and Lagarias. These invariants, along with the degree of the Gauss image of the curve, are expressed in terms of the matroid of the input matrix. Extending work of Dedieu, Malajovich and Shub, this yields an instance-specific bound on the total curvature of the central path, a quantity relevant for interior point methods. The global geometry of central curves is studied in detail.
On a dual network exterior point simplex type algorithm and its computational behavior
RAIRO - Operations Research
Cited by 4 (4 self)
Abstract: The minimum cost network flow problem (MCNFP) constitutes a wide category of network flow problems. Recently a new dual network exterior point simplex algorithm (DNEPSA) for the MCNFP has been developed. This algorithm belongs to a special "exterior point simplex type" category. Similar to the classical dual network simplex algorithm (DNSA), this algorithm starts with a dual feasible tree-solution and, after a number of iterations, produces a solution that is both primal and dual feasible, i.e. optimal. However, contrary to DNSA, the new algorithm does not always maintain a dual feasible solution. Instead, it produces tree-solutions that can be infeasible for the dual problem and at the same time infeasible for the primal problem. In this paper, we present, for the first time, the mathematical proof of correctness of DNEPSA, a detailed comparative computational study of DNEPSA and DNSA on sparse and dense random problem instances, a statistical analysis of the experimental results, and finally some new results on the empirical complexity of DNEPSA. The analysis proves the superiority of DNEPSA over DNSA in terms of CPU time and iterations.
On the Existence of a Short Admissible Pivot Sequence for Feasibility and Linear Optimization Problems, 1999
Cited by 4 (1 self)
Abstract: In this paper, for the feasibility problem, we prove the existence of a short admissible pivot sequence from an arbitrary basis to a feasible basis. Regarding the general LP problem, the existence of a short admissible pivot sequence from an arbitrary basis to an optimal basis is proved without any nondegeneracy assumptions. Our constructive proofs are based on techniques that are used in strongly polynomial basis identification schemes of interior point methods. The result can be regarded as an admissible pivot version of the d-step …
New Variants of Finite Criss-Cross Pivot Algorithms for Linear Programming, 1997
Cited by 3 (0 self)
Abstract: In this paper we generalize the so-called first-in-last-out pivot rule and the most-often-selected-variable pivot rule for the simplex method, as proposed in Zhang [13], to the criss-cross pivot setting where neither primal nor dual feasibility is preserved. The finiteness of the new criss-cross pivot variants is proven.
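The criss-cross setting this abstract describes, where pivots proceed even when neither primal nor dual feasibility holds, can be sketched with the classical least-index rule (Terlaky's rule, not the new variants proposed in the paper; the tableau layout and sign conventions below are my own):

```python
def criss_cross(A, b, c, tol=1e-9):
    """Least-index criss-cross method for max c.x s.t. A x <= b, x >= 0.
    Starts from the slack basis, which may be neither primal nor dual feasible."""
    m, n = len(A), len(c)
    # Full tableau: columns 0..n-1 original vars, n..n+m-1 slacks, last column rhs.
    T = [[float(v) for v in A[i]] + [float(i == j) for j in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-float(cj) for cj in c] + [0.0] * m + [0.0])   # reduced-cost row
    basis = [n + i for i in range(m)]

    def pivot(r, j):
        piv = T[r][j]                      # may be negative: that is allowed here
        T[r] = [v / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and abs(T[i][j]) > tol:
                f = T[i][j]
                T[i] = [v - f * w for v, w in zip(T[i], T[r])]
        basis[r] = j

    while True:
        # Smallest-index infeasible variable: basic with negative value,
        # or nonbasic with negative reduced cost.
        cand = [basis[i] for i in range(m) if T[i][-1] < -tol]
        cand += [j for j in range(n + m) if j not in basis and T[m][j] < -tol]
        if not cand:
            break                          # optimal basis reached
        t = min(cand)
        if t in basis:
            r = basis.index(t)
            # Entering variable: smallest index with a negative entry in row r.
            j = next((j for j in range(n + m)
                      if j not in basis and T[r][j] < -tol), None)
            if j is None:
                raise ValueError("primal infeasible")
            pivot(r, j)
        else:
            # Leaving row: smallest basic index among rows with a positive entry.
            rows = [i for i in range(m) if T[i][t] > tol]
            if not rows:
                raise ValueError("unbounded")
            pivot(min(rows, key=lambda i: basis[i]), t)

    x = [0.0] * n
    for i, v in enumerate(basis):
        if v < n:
            x[v] = T[i][-1]
    return x, T[m][-1]
```

For example, `criss_cross([[-1, -1], [1, 2]], [-1, 4], [1, 1])` starts from a primal-infeasible slack basis (the first constraint is x1 + x2 ≥ 1) and reaches the optimum x = (4, 0) with value 4; finiteness of such least-index criss-cross pivoting is exactly what the papers in this list establish.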
The Finite Criss-Cross Method for Hyperbolic Programming
Informatica, Technische Universiteit Delft, The Netherlands, 1996
Cited by 3 (0 self)
Abstract: In this paper the finite criss-cross method is generalized to solve hyperbolic programming problems. Just as in the case of linear or quadratic programming, the criss-cross method can be initialized with any, not necessarily feasible, basic solution. Finiteness of the procedure is proved under the usual mild assumptions. Some small numerical examples illustrate the main features of the algorithm.
Linear Concurrent Constraint Programming over Reals, 1998
Cited by 2 (0 self)
Abstract: We introduce a constraint system LC that handles arithmetic constraints over reals within the linear concurrent constraint programming (lcc) framework. This approach provides us with a general, extensible foundation for linear programming algorithm design that comes with a (linear) logical semantics. In particular, it allows us to build a 'glass-box' version of the (constraint solver) simplex algorithm by defining (monotone) cc ask and tell agents over a higher-level constraint system as lcc(LC) programs. We illustrate at the same time the use of the lcc framework as a non-trivial concurrent algorithm specification tool.
Finite Pivot Algorithms and Feasibility, 2001
Cited by 2 (0 self)
Abstract: This thesis studies the classical finite pivot methods for solving linear programs and their efficiency in attaining primal feasibility. We review Dantzig's largest-coefficient simplex method, Bland's smallest-index rule, and the least-index criss-cross method. We present the b'-rule: a simple algorithm based on Bland's smallest-index rule for solving systems of linear inequalities (feasibility of linear programs). We prove that the b'-rule is finite, from which we then prove Farkas' Lemma, the Duality Theorem for Linear Programming, and the Fundamental Theorem of Linear Inequalities. We present experimental results that compare the speed of the b'-rule to the classical methods.
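Bland's smallest-index rule reviewed in this thesis is simple to state in tableau form: enter the lowest-index column with negative reduced cost, and break ratio-test ties by the lowest basic index, which guarantees finiteness even under degeneracy. A minimal sketch for max c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0 (my own illustration of Bland's rule, not the thesis's b'-rule):

```python
def simplex_bland(A, b, c, tol=1e-9):
    """Tableau simplex with Bland's smallest-index rule.
    Maximizes c.x subject to A x <= b, x >= 0, assuming b >= 0
    so the slack basis is primal feasible."""
    m, n = len(A), len(c)
    # Tableau rows: constraints with slack columns appended, rhs last.
    T = [[float(v) for v in A[i]] + [float(i == j) for j in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-float(cj) for cj in c] + [0.0] * m + [0.0])   # objective row
    basis = [n + i for i in range(m)]
    while True:
        # Bland's rule: entering = smallest index with negative reduced cost.
        enter = next((j for j in range(n + m) if T[m][j] < -tol), None)
        if enter is None:
            break                          # optimal
        # Ratio test; ties broken by smallest basic index (Bland's rule).
        best, leave = None, None
        for i in range(m):
            if T[i][enter] > tol:
                ratio = T[i][-1] / T[i][enter]
                if (best is None or ratio < best - tol
                        or (abs(ratio - best) <= tol and basis[i] < basis[leave])):
                    best, leave = ratio, i
        if leave is None:
            raise ValueError("unbounded")
        piv = T[leave][enter]
        T[leave] = [v / piv for v in T[leave]]
        for r in range(m + 1):
            if r != leave and abs(T[r][enter]) > tol:
                f = T[r][enter]
                T[r] = [v - f * w for v, w in zip(T[r], T[leave])]
        basis[leave] = enter
    x = [0.0] * n
    for i, v in enumerate(basis):
        if v < n:
            x[v] = T[i][-1]
    return x, T[m][-1]
```

For example, `simplex_bland([[1, 1], [1, 3]], [4, 6], [3, 2])` returns the optimum x = (4, 0) with value 12 after a single pivot.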