Results 1–10 of 18
A simplex algorithm whose average number of steps is bounded between two quadratic functions of the smaller dimension
 JOURNAL OF THE ACM
, 1985
Abstract

Cited by 31 (2 self)
It has been a challenge for mathematicians to confirm theoretically the extremely good performance of simplex-type algorithms for linear programming. In this paper the average number of steps performed by a simplex algorithm, the so-called self-dual method, is analyzed. The algorithm is not started at the traditional point (1, ..., 1)^T; instead, points of the form (1, ε, ε^2, ...)^T, with ε sufficiently small, are used. The result is better, in two respects, than those of the previous analyses. First, it is shown that the expected number of steps is bounded between two quadratic functions c1(min(m, n))^2 and c2(min(m, n))^2 of the smaller dimension of the problem. This should be compared with the previous two major results in the field. Borgwardt proves an upper bound of O(n^4 m^{1/(n-1)}) under a model that implies that the zero vector satisfies all the constraints, and also the algorithm under his consideration solves only problems from that particular subclass. Smale analyzes the self-dual algorithm starting at (1, ..., 1)^T. He shows that for any fixed m there is a constant c(m) such that the expected number of steps is less than c(m)(ln n)^{m(m+1)}; Megiddo has shown that, under Smale's model, an upper bound C(m) exists. Thus, for the first time, a polynomial upper bound with no restrictions (except for nondegeneracy) on the problem is proved, and, for the first time, a nontrivial lower bound of precisely the same order of magnitude is established. Both Borgwardt and Smale require the input vectors to be drawn from ...
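The geometric starting point this abstract describes can be sketched in a few lines; the dimension and the value of ε below are arbitrary illustrative choices, not values from the paper:

```python
def perturbed_start(n, eps):
    """Geometric starting point (1, eps, eps^2, ..., eps^(n-1))^T that the
    paper uses in place of the traditional all-ones point (1, ..., 1)^T.
    The analysis assumes eps is sufficiently small."""
    return [eps ** k for k in range(n)]

# Illustrative call with n = 4 and eps = 0.1 (hypothetical values):
x0 = perturbed_start(4, eps=0.1)
```

Each coordinate is an order of magnitude smaller than the previous one, which is what breaks the degeneracies that the all-ones start can run into.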
Smoothed Analysis of Termination of Linear Programming Algorithms
Abstract

Cited by 21 (3 self)
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar's condition number by Dunagan, Spielman and Teng ...
Lower bounds in a parallel model without bit operations
 TO APPEAR IN THE SIAM JOURNAL ON COMPUTING
, 1997
Optimal Discrete Rate Adaptation for Distributed Real-Time Systems
Abstract

Cited by 11 (0 self)
Many distributed real-time systems face the challenge of dynamically maximizing system utility and meeting stringent resource constraints in response to fluctuations in system workload. Thus, online adaptation must be adopted in the face of workload changes in such systems. We present the Multi-Parametric Rate Adaptation (MPRA) algorithm for discrete rate adaptation in distributed real-time systems with end-to-end tasks. The key novelty and advantage of MPRA is that it can efficiently produce optimal solutions in response to workload variations such as dynamic task arrivals. Through offline preprocessing, MPRA transforms an NP-hard utility optimization problem into the evaluation of a piecewise linear function of the CPU utilization. At run time, MPRA produces optimal solutions by evaluating the function based on the CPU utilization. Analysis and simulation results show that MPRA maximizes system utility in the presence of varying workloads, while reducing the online computation complexity to polynomial time.
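The online phase the abstract describes (evaluating a precomputed piecewise linear function of CPU utilization instead of re-solving the optimization) can be sketched as follows; the breakpoints and line segments are invented for illustration and are not taken from the paper:

```python
import bisect

# Hypothetical output of an MPRA-style offline preprocessing step:
# breakpoints of CPU utilization, and for each interval between them
# the (slope, intercept) of the optimal-utility line on that interval.
breakpoints = [0.0, 0.4, 0.7, 1.0]
pieces = [(2.0, 0.0), (1.0, 0.4), (0.5, 0.75)]  # utility = slope*u + intercept

def optimal_utility(u):
    """Online phase: locate the interval containing utilization u and
    evaluate that interval's affine piece. This is O(log k) in the number
    of pieces, versus solving an NP-hard problem from scratch."""
    i = bisect.bisect_right(breakpoints, u) - 1
    i = min(i, len(pieces) - 1)  # clamp u == rightmost breakpoint
    slope, intercept = pieces[i]
    return slope * u + intercept
```

The illustrative pieces are chosen to be continuous and concave, the shape one would expect from a utility-maximization value function, but the actual function MPRA produces depends on the task set.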
The Optimal Set and Optimal Partition Approach to Linear and Quadratic Programming
 in Advances in Sensitivity Analysis and Parametric Programming
, 1996
Abstract

Cited by 11 (4 self)
In this chapter we describe the optimal set approach for sensitivity analysis for LP. We show that optimal partitions and optimal sets remain constant between two consecutive transition points of the optimal value function. The advantage of using this approach instead of the classical approach (using optimal bases) is shown. Moreover, we present an algorithm to compute the partitions, optimal sets and the optimal value function. This new algorithm uses primal and dual optimal solutions. We also extend some of the results to parametric quadratic programming, and discuss differences and resemblances with the linear programming case.
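The optimal partition the abstract refers to can be read off a strictly complementary primal-dual optimal pair. A minimal sketch, with invented numbers and assuming an LP in standard equality form where x is a primal optimal solution and s the corresponding dual slack vector:

```python
def optimal_partition(x, s, tol=1e-9):
    """Classify variable indices from a strictly complementary
    primal-dual optimal pair: B holds indices where the primal variable
    is positive, N holds indices where the dual slack is positive.
    Under strict complementarity, B and N partition the index set,
    which is the partition the optimal-set approach tracks between
    transition points."""
    B = [i for i, xi in enumerate(x) if xi > tol]
    N = [i for i, si in enumerate(s) if si > tol]
    return B, N

# Illustrative primal/dual-slack values (not from the chapter):
B, N = optimal_partition([0.5, 0.0, 2.0], [0.0, 3.0, 0.0])
```

Unlike an optimal basis, this partition is unique for a given LP, which is what makes it a stable object for sensitivity analysis.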
Parametric Problems on Graphs of Bounded Treewidth
, 1992
Abstract

Cited by 7 (2 self)
We consider optimization problems on weighted graphs where vertex and edge weights are polynomial functions of a parameter. We show that, if a problem satisfies certain regularity properties and the underlying graph has bounded treewidth, the number of changes in the optimum solution is polynomially bounded. We also show that the description of the sequence of optimum solutions can be constructed in polynomial time and that certain parametric search problems can be solved in O(n log n) time, where n is the number of vertices in the graph.
A Combinatorial Active Set Algorithm for Linear and Quadratic Programming, Under revision
, 2008
Abstract

Cited by 2 (0 self)
We propose an algorithm for linear programming, which we call the Sequential Projection algorithm. This new approach is a primal improvement algorithm that keeps both a feasible point and an active set, which uniquely define an improving direction. Like the simplex method, the complexity of this algorithm need not depend explicitly on the size of the numbers of the problem instance. Unlike the simplex method, however, our approach is not an edge-following algorithm, and the active set need not form a row basis of the constraint matrix. Moreover, the algorithm has a number of desirable properties that ensure that it is not susceptible to the simple pathological examples (e.g., the Klee-Minty problems) that are known to cause the simplex method to perform an exponential number of iterations. We also show how to randomize the algorithm so that it runs in an expected time that is on the order of mn^2 log n for most LP instances. This bound is strongly subexponential in the size of the problem instance (i.e., it does not depend on the size of the data, and it can be bounded by a function that grows more slowly than 2^m, where m is the number of constraints in the problem). Moreover, to the best of our knowledge, this is the fastest known randomized algorithm for linear programming whose running time does not depend on the size of the numbers defining the problem instance. Many of our results generalize in a straightforward manner to mathematical programs that maximize a concave quadratic objective function over linear constraints (i.e., quadratic programs), and we discuss these extensions as well.
Computational complexities of inclusion queries over polyhedral sets
Abstract

Cited by 2 (2 self)
In this paper we discuss the computational complexities of procedures for inclusion queries over polyhedral sets. The polyhedral sets that we consider occur in a wide range of applications, ranging from logistics to program verification. The goal of our study is to establish boundaries between hard and easy problems in this context.